Sunday, September 07, 2014

Nerd Food: Dogen: Old Demo

As part of my attempt to make the work in Dogen a bit more visible, I thought I'd repost an old demo here. The interface has changed very little since those days so it's still a useful introduction.

Date: 2014-09-07 22:23:58 BST


Nerd Food: Dogen: Lessons in Incremental Coding

A lot of interesting lessons have been learned during the development of Dogen and I'm rather afraid many more are still in store. As is typical with agile, I'm constantly reviewing processes in search of improvements. One such idea was that putting pen to paper could help improve the retrospective process itself. The result is this rather long blog post, which hopefully is of use to developers in similar circumstances. Unlike the typical bullet-point retrospective, this post is a rambling narrative, as it aims to provide context to the reader. Subsequent retrospectives will be a lot smaller and more to the point.

Talking about context: I haven't spoken very much about Dogen in this blog, so a small introduction is in order. Dogen is an attempt to create a domain model generator. The manual goes into quite a bit more detail, but for the purposes of this exercise, it suffices to think of it as a C++ code generator. Dogen has been developed continuously since 2012 - with a few dry spells - and reached its fiftieth sprint recently. Having said that, our road to a finished product is still a long one.

The remainder of this article looks at what has worked and what has not worked so well thus far in Dogen's development history.

Understanding Time

Dogen was conceived when we were trying to do our first start-up. Once that ended - around the back end of 2012 - I kept working on the tool in my spare time, a setup that has continued ever since. There are no other contributors; development just keeps chugging along, slowly but steadily, with no pressures other than to enjoy the sights.

Working on my own and in my spare time meant that I had two conflicting requirements: very little development resources and very ambitious ideas that required lots of work. With family commitments and a full-time job, I quickly found out that there weren't a lot of spare cycles left. In fact, after some analysis, I realised I was in a conundrum. Whilst there was a lot of "dead time" in the average week, it was mostly "low-grade" time: lots of discontinuous segments of varying and unpredictable lengths. Summed together in a naive way it seemed like a lot, but - as every programmer knows - six blocks of ten minutes do not one solid hour make.

Nevertheless, one has to play the game with the cards that were dealt. I soon realised that the correct question to ask was: "what kind of development style makes one productive under these conditions?". The answer turned out to be opportunistic coding. This is rooted in having a better understanding of the different "qualities" of time and how best to exploit them. For example, when you have, say, five to fifteen minutes available, it makes sense to do small updates to the manual or fix trivial problems - a typo in the documentation, renaming variables in a function, mopping up the backlog and other activities of that ilk. A solid block of forty minutes to an hour affords you more: for instance, implementing part or the whole of a story for which the analysis has been completed, or doing some analysis for existing stories. In those rare cases where half a day or longer is available, one must make the most of it and take on a complex piece of work that requires sustained concentration. These sessions proved to be most valuable when the output is a set of well-defined stories that are ready for implementation.

One needs very good processes in order to be able to manage the usage of time in this fashion. Luckily, agile provides them.

Slow Motion Agile

Looking back on ~2.4k commits, one of the major wins in terms of development process was to think incrementally. Of course, agile already gives you a mental framework for that, and we had a functioning scrum process during our start-up days: daily stand-ups, bi-weekly sprints, pre-sprint planning, post-sprint reviews, demos and all of that good stuff. It worked really well, and kept us honest and clean. We used a very simple org-mode file to keep track of all the open stories, and at one point we even built a simple burn-down chart generator to allow us to measure velocity.

Granted, when you are working alone in your spare time, a chunk of agile may not make sense; for instance, providing status updates to yourself may not be the most productive use of scarce time. Surprisingly, I found quite a bit of process to be vital. I've kept the bi-weekly sprint cycle, the sprint logs, the product backlog and the time-tracking we had originally set up, and found them extremely useful - quite possibly the thing that has kept me going for such an extended period of time, to be brutally honest. When you are working on an open source project it is very easy to get lost in its open-endedness and find yourself giving up, particularly if you are not getting (or expecting) any user feedback. Even Linus himself has said many times he would have given up the kernel if it wasn't for other people bringing him problems to keep him interested.

Lacking Linus' ability to attract crowds of interested developers, I went for the next best thing: I made them up. Well, at least in a metaphorical way, I guess, as this is what user stories are when you have no external users to drive them. As I am using the product in anger, I find it very easy to put myself in the head of a user and come up with requirements that push development forward. These stories really help, because they transform the cloud of possibilities into concrete, simple, measurable deliverables that one can choose to deliver or not. Once you have a set of stories, you have no excuse to be lazy because you can visualise in your head just how much effort it would require to implement a story - and hey, since nerds are terrible at estimating, it's never that much effort at all. As everyone knows, it's not quite that easy in the end; but once you've started, you get the feeling you have to at least finish the task at hand, and so on, one story at a time, one sprint at a time, until a body of work starts building up. It's slow, excruciatingly slow, but it's steady like water working in geological time; when you look back five sprints, you cannot help but be amazed at how much can be achieved in such an incremental way - and how much is still left.

And then you get hooked on measurements. I now love measuring everything, from how long it takes me to complete a story, to where time goes in a sprint, to how many commits I do a day, to, well, everything that can easily be measured without adding any overhead. There is no incentive for you to game the system - hell, you could create a script that commits 20 times a day, if the commit count is all you care about. But it's not, so why bother. Due to this, statistics actually start to tell you valuable information about the world and to impel you forward. For instance, GitHub streaks mean that I always try to make at least one commit per day. Because of this, even on days when I'm tired, I always force myself to do something, and sometimes that quick commit morphs into an hour or two of work that wouldn't have happened otherwise.

As I mentioned before, it was revealing to find out that there are different types of time. In order to take advantage of this heterogeneity, one must make scrupulous use of the product backlog. This has proven invaluable, as you can attest by its current size. Whether we are part-way through a story or just idly daydreaming, each and every idea must be added to the product backlog, with sufficient detail to allow one to reconstruct one's train of thought at that point in time. Once in the backlog, items can be continuously refined until eventually we find a suitable sprint to tackle them or they get deprecated altogether. But without a healthy backlog it is not possible to make the most of these elusive time slots. Conversely, it is important to try to make each story as small and as focused as possible, and to minimise spikes unless they really are on the critical path of the story. This is mainly for psychological reasons: one needs to mark stories as complete, to feel like work has been done. Never-ending stories are just bad for morale.

In general, this extreme incrementalism has served us well. Not all is positive though. The worst problem has been a great difficulty in tackling complex problems - those that require several hours just to load them into your head. These are unavoidable in any sufficiently large code base. Having lots of discontinuous segments of unpredictable duration has reduced efficiency considerably. In particular, I notice I have spent a lot more time lost in conceptual circles, and I've taken a lot longer to explore alternatives when compared to working full time.

DVCS to the Core

We had already started to use git during the start-up days, and it had proved to be a major win at the time. After all, one never quite knows where one will be coding from, and whether internet access is available or not, so it's important to have a self-contained environment. In the end we found out it brought many, many more advantages such as great collaborative flows, good managed web interfaces/hosting providers (GitHub and, to some extent, BitBucket), amazing raw speed even on low-powered machines, and a number of other wins - all covered by lots and lots of posts around the web, so I won't bore you with that.

On the surface it may seem that DVCS is most useful in a multi-developer team. This is not the case. The more discontinuous your time is, the more you start appreciating its distributed nature. This is because each "kind" of time has a more suitable device - perhaps a netbook for the train, a desktop at someone's house or even a phone while waiting somewhere. With DVCS you can easily switch devices and continue exactly where you left off. With GitHub you can even author using the web interface, so a mobile phone suddenly becomes useful for reading and writing.

Another decision that turned out to be a major win is still not the done thing. Ever the trailblazers, we decided to put everything related to the project in version control. And by "everything" I do mean everything: documentation, bug reports, agile process, blog posts, the whole lot. It did seem a bit silly not to use GitHub's Wiki and Issues at the time but, in hindsight, having everything in one version-controlled place proved to be a major win:

  • searching is never further than a couple of greps away, and it's not sensitive to connectivity;
  • all you need is a tiny sliver of connectivity to push or pull, and work can be batched to wait for that moment;
  • updates by other people come in as commits and can be easily reviewed as part of the normal push/pull process - not that we got any of late, to be fair;
  • changes can easily be diffed;
  • history can be checked using the familiar version control interface, which is available wherever you go.

When you have little time, these advantages are life-savers.

The last but very important lesson learned was to commit early and commit often. It's rather obvious in hindsight, really. After all, if you have very small blocks of time to do work, you want to make sure you don't break anything; the last thing you need is to spend a week debugging a tricky problem, with no idea of where you're going or how far you still have to travel. So it's important to make your commits very small and very focused, such that a bisection would almost immediately reveal a problem - or at least provide you with an obvious rollback strategy. This has proved itself to be invaluable far too many times to count. The gist of this approach is to split changes in an almost OCD sort of way, to the point that anyone can look at the commit comment and the commit diff and make a judgement as to whether the change was correct or not. To be fair, it's not quite always that straightforward, but that has been the overall aim.

Struggling to stay Continuously Integrated

After the commit comes the build, and the proof is in the pudding, as they say. When it comes to code, that largely means CI; granted, it may not be a very reliable proof, but nevertheless it is the best proof we've got. One of the major wins from the start-up days was to set up CI, and to give it as wide a coverage as we could muster. We set up multiple build agents across compilers and platforms, added dynamic analysis, code coverage, packaging and basic sanity tests on those packages.

All of these have proven to be major steps in keeping the show on the road, and once set up, they were normally fairly trivial to maintain. We did have a couple of minor issues with CDash whilst we were running our own server. Eventually we moved over to the hosted CDash server, but it has limitations on the number of builds, which meant I had to switch some build agents off. In addition to this, the other main stumbling block is finding the time to do large infrastructural updates to the build agents, such as setting up new versions of Boost, new compilers and so on. These are horrendously time-consuming across platforms because you never know what issues you are going to hit, and each platform has its own way of doing things.

The biggest lesson we learned here is that CI is vital, but projects with very little development time should not waste it managing their own CI infrastructure. There are just not enough hours in the day. I have been looking into Travis to make this process easier in the future. Also, whilst being cross-platform is a very worthy objective, one has to weigh the costs against the benefits. If you have a tiny user base, it may make sense to stick to one platform and continue to do portable coding without "proof"; once users start asking for multiple platforms, it is then worth considering doing the work required to support them.

The packaging story was also a very good one to start off with - after all, most users will probably rely on those - but it turned out to be much harder than first thought. We spent quite a bit of time integrating with the GitHub API, uploading packages into their downloads section, downloading them from there, testing, and then renaming them for user consumption. Whilst it lasted, this setup was very useful. Unfortunately it didn't last very long, as GitHub decided to decommission their downloads section. Since most of the upload and download code was GitHub specific, we could not readily move over to a different location. The lesson here was that this sort of functionality is extremely useful, and it is worth dedicating time to it, but one should always have a plan B and even a plan C. To make a long story short, the end result is that we don't have any downloads available at all - not even stale ones - nor do we have any sanity checks on the packages we produce; they basically go to /dev/null.

In summary, all of our pains led us to conclude that one should externalise early, externalise often and externalise everything. If there is a free (or cheap) provider in the cloud that can take some or all of your infrastructure work off your hands, you should always consider using them first rather than hosting your own infrastructure. And remember: your time is worth some money, and it is better spent coding. Of course, it is important to ensure that the provider is reliable, has been around for a while and is used by a critical mass. There is nothing worse than spending a lot of effort migrating to a platform, only to find out that it is about to dramatically change its APIs, prices, terms and conditions - or, even worse, to be shut down altogether.

Loosely Coupled

Another very useful lesson I learned was to keep the off-distro dependencies to a minimum. This is rather related to the previous points on CI and cross-platform-ness, really. During the start-up days we started off by requiring a C++ compiler with good C++ 11 support, and a Boost with a few off-tree libraries - mainly Boost.Log. This meant we had to have our own little "chroot" with all of these, and we had to build them by hand, sprinkled with plenty of helper scripts. In those dark days, almost nothing was supplied by the distro and life was painful. It was just about workable when we had time on our hands, but this is really not the sort of thing you want to spend time maintaining if you are working on a project in your spare time.

To be fair, I had always intended to move to distro-supplied packages as soon as they caught up, and when that happened the transition was smooth enough. As things stand, we have a very small off-distro footprint - mainly ODB and EOS. The additional advantage of not having off-distro dependencies is that you can start to consider yourself for inclusion in a distro. Even in these days of Docker, being shipped by a distro is still a good milestone for any open source project, so it's important to aim for it. Once more, it's the old psychological factors.

All in all, it seems to me we took the right decisions, as both C++ 11 and Boost.Log have proven quite useful; but in the future I will certainly think very carefully about adding dependencies on off-distro libraries.

Conclusions

In general, the first fifty iterations of Dogen have been very positive. It has been a rather interesting journey, and dealing with pure uncertainty is not always easy - after all, one always wants to reach a destination. At the same time, much has been learned in the process, and a setup has been created that is sustainable given the available resources. In the near future I intend to improve the visibility of the project as I believe that, for all its faults, it is still useful in its current form.

Date: 2014-09-07 22:02:42 BST


Friday, August 08, 2014

Nerd Food: Using Mono In Anger - Part IV

In which we discuss the advances in MonoDevelop 5

This is the fourth and final part of a series of posts on my experiences using Mono for a fairly demanding project. For more context please read part 1, part 2 and part 3.

In this instalment we shall have a look at the latest incarnation of MonoDevelop.

Getting Latest and Greatest

As I was part-way through this series of blog posts, Xamarin announced Xamarin Studio 5 - the commercial product based on MonoDevelop. Clearly I had to get my hands on it. However, in this particular instance Debian unstable proved to be rather… stable. The latest versions of Mono and MonoDevelop in Debian are rather quaint, and the packaging mailing list is not the most active, as my request for news on packaging revealed.

Building is not an entirely trivial experience, as Brendan's comment on a previous post demonstrated, so I was keen on going for binary packages. Surprisingly, there are not many private repos that publish up-to-date Debian packages for Mono. After much searching, I found an Ubuntu PPA that did:

add-apt-repository 'deb http://ppa.launchpad.net/ermshiperete/monodevelop/ubuntu quantal main'
apt-get install monodevelop-current

Running it was as easy as using the launcher script:

/opt/monodevelop/bin/monodevelop-launcher.sh

And just as I was about to moan from the sidelines and beg Xamarin to try and help out Debian and Linux packagers in general, Miguel sent the following tweet:

Miguel de Icaza @migueldeicaza: Mono snapshots: @directhex just published our daily Linux packages http://mono-project.com/DistroPackages/Jenkins

It's like Xamarin just reads my mind!

I haven't had the chance to play with these packages yet, and I didn't see any references to MonoDevelop in Jenkins (admittedly, it wasn't the deepest search I've done), but it seems like a great step forward.

Playing with Latest and Greatest

So what has changed? The UI may look identical to the previous version, but lord has the polish level gone up. Basically, almost all the problems I had bumped into have gone away.

NuGet support

Update: See this post by Matt Ward for more details on NuGet support.

As I mentioned before, whilst the NuGet plugin was great for basic usage, it did have a lot of corner cases, including the certificate issues, full restore not working properly and so on. This has all been sorted out in MonoDevelop 5. It sports an internal implementation, as explained in the release notes, and it has been flawless up till now.

I did bump into an annoying problem, but I think it's more Visual Studio's fault than anything else. Basically, Microsoft decided to add some NuGet.targets to the solution by copying them to .nuget. Now, to their credit, they appear to have thought about Mono:

        <!-- We need to launch nuget.exe with the mono command if we're not on windows -->
       <NuGetToolsPath>$(SolutionDir).nuget</NuGetToolsPath>

However, this fails miserably. The DownloadNuGet target does not appear to exist in Mono, and copying NuGet.exe manually into .nuget also failed - apparently it's not just a single binary these days. The lazy man's solution was to find the NuGet binaries in MonoDevelop and copy them across to the .nuget directory (I had them at monodevelop-5.0/external/nuget-binary). Once this was done, building worked just fine.

Note also that I didn't have time to test the .nuget directory properly, by overriding the default directory with something slightly more sensible. However, I don't particularly like having my packages in the middle of the source tree so I'll be trying that very soon.

Overall, the NuGet experience is great, and package restoring Just Works (TM).

Intellisense and Friends

I was already quite pleased with Intellisense in MonoDevelop 4, but I did find it was easy to confuse it when files got into a bit of a state - say when pasting random chunks of code into a file. All of these problems are now gone with MonoDevelop 5. In more challenging situations, I have noticed the syntax highlighting disappearing for a little while, but as soon as the code is vaguely sensible, it returns straight away.

It is also a pleasure to use Ctrl-Shift-T to go to definitions; in some ways it seems even more powerful than ReSharper. It is certainly more responsive, even on my lowly NetBook with 1GB of RAM.

One slight snag is that extract interface seems to have gone missing - I was pretty sure I had used it on MonoDevelop 4, but for the life of me I can't find it on 5.

NUnit

I was a very happy user of the NUnit add-on for weeks on end and it performed flawlessly. However, today it got stuck loading tests and I ended up having to restart MonoDevelop to fix it. Bearing in mind I normally leave it running for weeks at a time, this annoyed me slightly. Of course, to be fair, I do restart Visual Studio every couple of days or so, so the odd MonoDevelop restart is not exactly the end of the world.

But in general, one complaint I have against both Visual Studio and MonoDevelop is the opaqueness of unit testing. For me, it all started with shadow copying in the NUnit UI and went downhill from there, really. If only one could see exactly what it is that the IDE is trying to do, it would be fairly trivial to debug; as it is, all I know is that my tests are "loading" but fail to load a few minutes later.

Anyway, that's just me ranting. Other than that, unit testing has worked really well, and I even started making use of the "Results Pad" and all - shiny charts!

Git FTW, UI Quirks and Resources

I had mentioned before that there were some minor UI quirks. For instance I recall seeing a search box that was not properly drawn, and having problems with the default layout of the screen. I'm happy to report that all of my UI quirks have gone away with 5. It is quite polished in that regard.

I've also started making use of the version control support - to some extent, of course, as I still think Magit is the best thing since sliced bread. Having said that, it's very useful to see a diff against what was committed, or to go up and down the history of a file, without having to go to emacs. Version control is extremely quick. Even though Visual Studio now has git support integrated, it is a lot slower than MonoDevelop. I basically never wait at all for git in MonoDevelop.

Finally, a word on resources. I can still use MonoDevelop on my NetBook with its 1GB of RAM, much of it taken by Gnome 3 and Chrome. However, I did see it using over 250 MB of RAM on my desktop PC. I wonder if MonoDevelop is more aggressive in its usage of memory when it sees there is a lot available.

Conclusions

Whilst I'll still be using MonoDevelop for a few weeks longer, I think we have done enough for this four-part series. My main objective was really to pit Mono and MonoDevelop against Visual Studio 2013 on a fairly serious project, requiring all the usual suspects: .Net, Castle, Log4Net, MongoDB and so on. To my surprise, I found I had very few interoperability problems - on the whole, the exact same source code, configuration, etc. just worked for both Windows and Linux. It says a lot about how far Mono has progressed.

Regrettably, I didn't get as far as playing around with vNext - the coding is taking a lot longer than expected - but if I do get as far as that I shall post an update.

It's great news that Xamarin is improving their Linux support; I can imagine that there must be a number of companies out there considering Docker for their .Net environments. Xamarin is going to be in a great position to win over these Windows-only shops with the great products they have.

Date: 2014-08-09 00:51:03 BST


Tuesday, May 27, 2014

Nerd Food: Using Mono In Anger - Part III

In which we discuss the various libraries and tools used.

This is the third part of a series of posts on my experiences using Mono for a fairly demanding project. For more context please read part 1 and part 2.

In this instalment we shall focus more on the libraries, tools and technologies that I ended up using.

Castle

I've mentioned Castle a few times already. It appears to be the de facto IoC container for .Net, so it's very important to have a good story around it. As I explained in the previous post, I added NuGet references to Castle Core and Castle Windsor and after that it was pretty much smooth sailing. I set up Windsor Installers as described by Mark Seemann in his post IWindsorInstaller, and that worked as described. My main program does exactly the same as Mark's:

// Compose the application: each module contributes a Windsor installer.
var container = new WindsorContainer();
container.Install(new MyModule.WindsorInstaller(), new OtherModule.WindsorInstaller());
// Resolve the root object; the container wires up everything else.
return container.Resolve<IEntryPoint>();

Basically, I have a number of IWindsorInstallers (e.g. MyModule.WindsorInstaller() etc.) that get installed, and then all that needs to be done is to resolve the "entry point" for the app - e.g. whatever your main workflow is.

All of this worked out of the box without any tweaking from my part.
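For illustration, a minimal installer might look something like the sketch below; the concrete EntryPoint class and the singleton lifestyle are assumptions made for the example, not code from the actual project:

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

namespace MyModule
{
    // Each module registers its own components with the container.
    public class WindsorInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            // Hypothetical wiring: bind the application's entry point to a concrete class.
            container.Register(
                Component.For<IEntryPoint>()
                         .ImplementedBy<EntryPoint>()
                         .LifestyleSingleton());
        }
    }
}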

MongoDB

I've used MongoDB as the store for my test tool; I'll give a bit of context before I get into the Mono aspects. Mentally, I picture MongoDB somewhere in between PostgreSQL and Coherence / Memcached. That is, it's obviously not a relational database but one of those NoSQL specials: a schemaless, persistent document database. You can do a lot of this stuff using hstore, of course, and Postgres now even sports something similar-but-not-quite-the-same-as BSON - JSONB - in its usual humorous way. MongoDB's setup is somewhat easier than Postgres's, in both replicated and non-replicated scenarios. It also offers Javascript-based querying which, to be fair, Postgres also does. I'd say that, if you have to choose between the two, go for MongoDB if you need a quick setup (replication included), if you don't care too much about security and if you do not need any RDBMS support. Otherwise, use the latest Postgres. And RTM. A lot.

MongoDB is obviously also much easier to set up than Coherence. Of course, if you go for the trivial setup, Coherence is easy; but once you get into proper distributed setups I found it to be an absolute nightmare, requiring a lot of expertise just to understand why your data has been evicted. That's excluding the more complex scenarios such as invocation services, backing maps and so on. Sure, you can get the performance and the scalability, but you really need to know what you are doing. And let's not mention the licence costs. Basically, for the plain in-memory cache job with an easy setup, just use Memcached.

But let's progress with MongoDB. Regrettably, there are no packages in Testing for it, but the wiki has a rather straightforward set of instructions under Install MongoDB on Debian. It boils down to:

# apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
# echo 'deb http://downloads-distro.mongodb.org/repo/debian-sysvinit dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
# apt-get update
# apt-get install mongodb-org

Since I'm using systemd I was a bit apprehensive about their control scripts. As it turns out, it worked out of the box without any problems. I did find the installation to vary depending on the machine: on some I got journaling by default, but on my really low-end NetBook it was disabled. The other thing to bear in mind is that if you have a small root or var partition - e.g. the one storing /var/lib/mongodb - you may run into trouble. I ended up symlinking this directory to a drive that had more space just to avoid problems.

Once MongoDB was up and running, it was time to find a management UI. Unfortunately, MongoVue - the UI that all the Windows cool kids use - is not available on Linux. This is a bit disappointing because it seems rather full-featured and well funded and - just to rub salt in the wounds - it's a .Net application. The old lack of cross-platform mentality surfaces yet again. Undeterred once more, I settled on RoboMongo instead. Not quite as mature, but it seemed good enough for my needs. Simple to set up too:

$ wget -O robomongo-0.8.4-x86_64.deb http://robomongo.org/files/linux/robomongo-0.8.4-x86_64.deb
$ gdebi-gtk robomongo-0.8.4-x86_64.deb

If you don't have gdebi-gtk, any other Debian installer would do, including dpkg -i robomongo-0.8.4-x86_64.deb.

If you are an emacs user, be sure to install the inferior mode for Mongo. Works well on Linux but has the usual strange input-consumption problems one always gets on Windows.

Going back to Mono, all one needs to do is use NuGet to install the CSharp Mongo Driver. Once that was done, reading, writing, updating etc. all worked out of the box.
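To give a flavour of the driver, a minimal round trip looks roughly like the sketch below. The connection string, database and collection names are made up for the example - this is not code from the actual test tool - and it uses the 1.x API of the driver:

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

class MongoSmokeTest
{
    static void Main()
    {
        // Hypothetical names; adjust connection string, database and collection to taste.
        var client = new MongoClient("mongodb://localhost:27017");
        var database = client.GetServer().GetDatabase("test_tool");
        var runs = database.GetCollection<BsonDocument>("runs");

        // Write a document and read it back.
        runs.Insert(new BsonDocument { { "name", "smoke test" }, { "passed", true } });
        var document = runs.FindOne(Query.EQ("name", "smoke test"));
        System.Console.WriteLine(document);
    }
}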

Log4Net

Paradoxically, the dependency where I thought I was going to have the least amount of trouble ended up being the most troublesome of all. Getting log4net to work was initially really easy - the usual NuGet install. But then, not happy with such easy success, I decided I needed a single log4net.config file for all my projects. This is understandable since all that was different amongst them was the log file name; it seemed a bit silly to have lots of copy-and-paste XML lying around. So I decided to use Dynamic Properties, as explained in this blog post: Log4net Dynamic Properties in XML Configuration. This failed miserably.

As everyone knows, log4net is a pain in the backside to debug. For the longest time I didn't have the right configuration; eventually I figured out what I was doing wrong. It turns out the magic incantation is this (I had missed the type bit):

        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
            <file type="log4net.Util.PatternString" value="APrefix.%property{ApplicationId}.log" />
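For completeness, the ApplicationId property itself has to be set in code before log4net reads the configuration. Below is a minimal sketch of one way of doing this via GlobalContext - not necessarily the exact mechanism from the linked post - and the MyTestTool and Program names are made up for the example:

using System.IO;
using log4net;
using log4net.Config;

class Program
{
    static void Main()
    {
        // Make %property{ApplicationId} resolve before the configuration is parsed.
        GlobalContext.Properties["ApplicationId"] = "MyTestTool";
        XmlConfigurator.Configure(new FileInfo("log4net.config"));

        var log = LogManager.GetLogger(typeof(Program));
        log.Info("This ends up in APrefix.MyTestTool.log");
    }
}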

Just when I thought I was out of the woods, I hit a Mono limitation: CallContext.LogicalGetData is not yet implemented in Mono 3.0. It is available in later versions of Mono, but these are not yet in Debian Testing. Undeterred, I decided to try to compile Mono from scratch. It turned out to be rather straightforward:

$ git clone https://github.com/mono/mono
$ cd mono
$ ./autogen.sh --prefix=${YOUR_INSTALL_LOCATION}
$ make -j${NUMBER_OF_CORES}
$ make install

Replace (or set) ${YOUR_INSTALL_LOCATION} and ${NUMBER_OF_CORES} as required. Once you have it installed, you need to tell MonoDevelop about the new runtime. Go to Edit, Preferences, then choose .Net Runtimes and click on Add. Point to the top-level directory containing your installation (e.g. ${YOUR_INSTALL_LOCATION}) and it should find the newly built Mono. I then set that as my default. Incredibly enough, from then on it all just worked.

[Image: Runtimes in MonoDevelop - http://4.bp.blogspot.com/-KXopT6xb-Vc/U4UBNVmitwI/AAAAAAAAAnQ/5TrpusbNA7o/s1600/monodevelop_add_runtime.png]

NUnit

As mentioned in the previous post, you should replace the NUnit references you get from MonoDevelop with NuGet ones. This is because you may be using some of the newer features of NUnit - which are not available in the version that ships with Mono. At any rate, it just gives you more confidence in the dependency, rather than depending on the environment.
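As an example of the kind of newer feature I mean, parameterised tests via TestCase are the sort of thing an older, environment-supplied NUnit may not offer. A trivial, purely illustrative fixture:

using NUnit.Framework;

[TestFixture]
public class SanityTests
{
    // TestCase-driven parameterised tests need a reasonably recent NUnit.
    [TestCase(1, 2, 3)]
    [TestCase(2, 2, 4)]
    public void addition_behaves_as_expected(int a, int b, int expected)
    {
        Assert.That(a + b, Is.EqualTo(expected));
    }
}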

Another problem I found was disabling shadow copying. This does not seem to be an option in the MonoDevelop UI or the solution. It is rather annoying if you need to have some log4net config files in the test directory - as I did, due to the Dynamic Properties mentioned above.

Other than that, NUnit worked very well.

Libraries Overview

Compiling Mono from source is obviously not ideal, but perhaps the main thing to worry about is how to get the latest Mono packages. As with MongoDB, it would perhaps be better to have a repository supported by the Mono community that offers more up-to-date packages, at least for the more intrepid users. Although some of these existed in the past (particularly Ubuntu PPAs) they all seem to have gone stale.

Having said that, there are still no showstoppers - the code is working on both Visual Studio 2013 and Mono.

Date: 2014-05-27 22:29:38 BST


Nerd Food: Interesting...

Some interesting stuff I bumped into these last couple of weeks.

Arty

  • Banksy - Webby Awards Video: Banksy at his usual irreverent best. Hat-tip Emanuel Ferreira.
  • Listen to “Brian Eno Day”: a 12-Hour Radio Show Spent With Eno & His Music (Recorded in 1988). If you like ambient it's a must listen. Hat-tip Joao Santos.
  • Snowpiercer: To be fair, I haven't had a chance to watch it yet, but the comments from Bruno Antunes were so glowing it went up on my list of must-watch movies.
  • Beware of Mr. Baker: Frigging amazing. I just can't believe I had never heard of this guy until I watched the movie. I mean, he played with Fela, Clapton, all the Jazz greats that were still alive. No words to describe it - just watch it. Crazy, crazy guy. Blind Faith are one of the many bands he was in.
  • Surfwise: Surf movie, but with a twist - or should I say nine. Crazy family with a crazy dad who decided one day that he'd spend the rest of his life surfing, and the world be damned. Amazing movie. It's also interesting because it outlines the consequences of such "heroic" decisions on the family.
  • Makthaverskan: Started listening to this rocky Swedish band.
  • El Empleo / The Employment: Great animation. Perfect description of the modern world.
  • The Expert: Like El Empleo, a dark take on the modern world, but this time as a dark comedy. Hilarious, but yet again, very painful.

C++

Other Programming Topics

Start-ups and Business

Portuguese

  • Com os Holandeses: I came across a Portuguese author I had never heard of, but who seems to be quite well known in the Netherlands. It's now on my list of books to read.

Date: 2014-05-27 12:14:54 BST


Monday, May 26, 2014

Nerd Food: Using Mono In Anger - Part II

In which we set up our working environment.

In Part I we explored the reasons why I decided to use Mono in anger. In this part we shall get ourselves a working environment comparable to a Visual Studio 2013 setup.

First things first:

  • whilst this post presumes that you may not know everything about Linux, it does expect a minimum of familiarity with the terminal, desktop, etc.
  • having said that, we try to keep things simple. For instance, note that commands that start with # need to be executed as root, and commands that start with $ should be executed as an unprivileged user. Also, a trailing \ means the command should really be one big line.
  • my distribution of choice is up-to-date Debian Testing, which is where all the commands have been tested; hopefully most things will work out of the box for you - especially if you are using Debian or Ubuntu - but it's better if you understand what the command is trying to achieve, just in case it doesn't quite work.
  • finally, I presume you know about .Net and its libraries - although I do try to give some minimal context.

Without further ado, let's set up our Mono environment.

Installing Mono

We start off by following the instructions on installing Mono for Debian, available in the Mono project wiki:

# apt-get install mono-complete

It's worth mentioning that whilst the Mono page above mentions v2.10.8.1 for Debian Testing, v3.0.6 has actually already been migrated from Unstable so that is what you will be getting. A note on versions here: with Mono, latest is always greatest. The project moves at such a fast pace that getting an old version is almost always a bad idea.

If everything has gone as expected, you should now see something along the lines of:

$ mono --version
Mono JIT compiler version 3.0.6 (Debian 3.0.6+dfsg2-12)
Copyright (C) 2002-2012 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
    TLS:           __thread
    SIGSEGV:       altstack
    Notifications: epoll
    Architecture:  amd64
    Disabled:      none
    Misc:          softdebug
    LLVM:          supported, not enabled.
    GC:            Included Boehm (with typed GC and Parallel Mark)

To make sure we have a fully working setup, let's compile a simple hello world:

$ echo 'class Program { static void Main() { System.Console.WriteLine("HelloWorld!"); }}' > hello.cs
$ mcs hello.cs
$ ./hello.exe
HelloWorld!

Installing MonoDevelop

Of course, no self-respecting Windows .Net developer will code from the command line; they will ask for an IDE. The IDE of choice for Mono is MonoDevelop - sometimes called Xamarin Studio, as Xamarin are the main company behind it and have a commercial product based on it.

It's pretty straightforward to install it on Debian:

# apt-get install monodevelop monodevelop-nunit monodevelop-versioncontrol \
     monodevelop-database monodevelop-debugger-gdb \
     libicsharpcode-nrefactory-cecil5.0-cil \
     libicsharpcode-nrefactory-cil-dev \
     libicsharpcode-nrefactory-csharp5.0-cil \
     libicsharpcode-nrefactory-ikvm5.0-cil \
     libicsharpcode-nrefactory-xml5.0-cil \
     libicsharpcode-nrefactory5.0-cil

You will most likely get MonoDevelop v4.0. Alas, v4.2 has already been released but has not yet hit Debian. As with Mono, latest is always greatest. Note that I went a bit overboard here and installed a lot of stuff - you may just want to install monodevelop. I added the NUnit integration and the Version Control integration as well as the refactoring tools.

Once it's installed you can start it from the main menu. It should look vaguely like this:

[Image: MonoDevelop's main window - http://4.bp.blogspot.com/-uY4NRi3HB6c/U4OLCAKZWdI/AAAAAAAAAmY/OLQ_BXbJ-hc/s1600/monodevelop_main_window.png]

The very next thing we need to do is to install the NuGet add-in for MonoDevelop. To do this:

  • Go to the Tools menu, Add-in Manager;
  • Click on the Gallery tab;
  • Click on the Repository combo-box and select Manage Repositories…;
  • Click on the Add button and select Register an on-line Repository;
  • Paste the repository URL from the page above, e.g. http://mrward.github.com/monodevelop-nuget-addin-repository/4.0/main.mrep, and click Ok.

[Image: MonoDevelop Add-in Repository - http://4.bp.blogspot.com/-ncH0FnFu56Q/U4OK_6PzWOI/AAAAAAAAAlw/Pjv0P21Ry9o/s1600/monodevelop_add_in_nuget_repository.png]

You should now be able to find the NuGet add-in by searching for it in the search box at the top of the dialog:

[Image: Installing NuGet Add-in - http://2.bp.blogspot.com/-8uI8ssffHtc/U4OLDpOamaI/AAAAAAAAAmw/wY4jbcWBljg/s1600/monodevelop_search_nuget_addin.png]

In my case it's already installed - in your case you should get an Install option. Unfortunately we're not out of the woods just yet. We need to set up the certificates to allow us to download packages:

# mozroots --import --sync
# certmgr -ssl -m https://go.microsoft.com
# certmgr -ssl -m https://nugetgallery.blob.core.windows.net
# certmgr -ssl -m https://nuget.org

Now, to be perfectly honest there are still some setup issues for NuGet that I don't quite understand, but we'll leave that for later.

Setting up a Solution

To create a solution go to the File menu, New and choose Solution…:

[Image: Creating a new solution - http://2.bp.blogspot.com/-c8iQn55DS1Q/U4OLAWRG3-I/AAAAAAAAAl0/dueOGd_ejUk/s1600/monodevelop_creating_solution.png]

This will create a solution with a project. Now, on the main screen create a second project, say MyProject.Tests, by right-clicking the solution, click on Add then Add New Project… and then fill in the project details: NUnit project and the name.

[Image: Adding a new NUnit project - http://1.bp.blogspot.com/-XbCC-Pusk8c/U4OLAFHqr9I/AAAAAAAAAl8/-bq2uA0mEEQ/s1600/monodevelop_add_nunit_project.png]

To be perfectly frank, as with Visual Studio, I tend to create projects and solutions from the UI and then edit the raw .sln and .csproj files to get them in my preferred directory layout. At any rate, for this really simple solution we just get the following:

[Image: The HelloWorld solution - http://2.bp.blogspot.com/-BA7lMDUUAGI/U4OLBfaWp8I/AAAAAAAAAmI/FFUFaeWOrsg/s1600/monodevelop_hello_world_solution.png]

Now go to both projects' options by right-clicking on each project, choosing Options and finding General. There, update the target framework to Mono .Net 4.5.

[Image: Using .Net 4.5 - http://3.bp.blogspot.com/-ue0lcwCmDMA/U4OLEWtUV6I/AAAAAAAAAnA/Fi30BoI2xQ4/s1600/monodevelop_update_runtime.png]

You will get some blurb about project file changes; just accept it. The other thing to do is to use xbuild, Mono's equivalent of msbuild, to do the building. To do so go to Edit, Preferences, then Build, and tick the xbuild check box:

[Image: Using xbuild - http://2.bp.blogspot.com/-uBqX6GeR6TU/U4OLEc4z_YI/AAAAAAAAAm4/zgmuCYKSyi4/s1600/monodevelop_use_xbuild.png]

Now all we need is to set up all of the required NuGet packages. This is where things become a bit complicated. As I said previously, we set up all the required certificates so things should just work. In practice we still get a few issues. To see what I mean, right-click on a project's references, then Manage NuGet Packages…. The following message comes up:

[Image: NuGet certificate problem - http://1.bp.blogspot.com/-wz998TGjf9E/U4OLCuImCiI/AAAAAAAAAmc/WHkAvtKCL1E/s1600/monodevelop_nuget_certificate_problem.png]

If you click yes, you should then get the full-blown list of NuGet packages. But it's a bit annoying to have to do that since apparently we have installed all of the required certificates. Also, as we shall see, NuGet restore fails due to certificate problems, but we'll leave that one for later. Once you have waited for a bit, the packages screen will come up; you can use the search box to search for packages and, once happy, click on Add:

[Image: Adding Log4Net in NuGet - http://4.bp.blogspot.com/-zuhYafiJuPM/U4OLCxAqp_I/AAAAAAAAAmg/pbOY7k0WEFc/s1600/monodevelop_nuget_add_log4net.png]

Using this workflow, add all the required packages. For example, in my case I added:

  • Main project: Log4net, Castle Core, Castle Windsor, Mongo CSharpDriver, Newtonsoft.Json
  • Test project: NUnit. I removed the reference that Mono added and forced it to go via NuGet. This saves you from a lot of problems related to incorrect NUnit versions.

A few points to note here. NuGet works but it's a little rough around the edges. When you are using it in anger, the following things will become annoying:

  • the whole certificate thing, which seems to make the initial NuGet window slower. Fortunately this appears to happen only once for a running session.
  • the inability to use NuGet restore is a pain; it means every time you swap machines you need to faff around to re-download packages. I use a dummy project for this - e.g. add packages to a project that does not have them and then remove them.
  • the NuGet add-in ignores the .nuget configuration; instead it uses a "hard-coded" packages directory at the same level as the solution. This is a bit painful because your Windows developers will see the packages in one location (i.e. where the .nuget config states they should be) but they will be elsewhere in Mono. Best not to use these until they are supported in Mono.
  • the NuGet add-in doesn't seem to work when there is no network connection. This is a bit painful because it means that you can't add to one project a dependency that has already been added to another project without being online. In this scenario the easiest thing to do is to edit the .csproj and packages.config files manually.

A Note on F#

As I spent a considerable time describing the F# setup in the past (Adventures in F# Land Part 1, Part 2 and Part 3), it's only fair we cover how things are done these days. First install the required packages:

# apt-get install fsharp libfsharp-core4.3-cil libfsharp-data-typeproviders4.3-cil

Again, YMMV - I always go a bit overboard and install everything. Then install the add-in in MonoDevelop. To do so, simply go to Tools, Add-in Manager, then click on Gallery and expand Language Bindings. Click on F# and then Install.

[Image: Adding F# Support - http://4.bp.blogspot.com/-5K4tAqO2lDU/U4OLAxLEPxI/AAAAAAAAAmA/raQCxKW08gI/s1600/monodevelop_fsharp_addin.png]

It is that easy these days. We may have to do an F# in anger series later on, to see how well it stacks up.

Setup Review

First I'd like to say that there are a lot of positives in the setup experience. For example, I just spent the best part of two days getting Visual Studio to work due to some licensing issues - it just wouldn't accept my key for some reason. Also, Visual Studio 2013 is rather demanding hardware-wise, whereas MonoDevelop seems to hover in the 200 MB range (my actual solution has around 12 projects).

And in the main, the polish on MonoDevelop is quite good, with many things just working out of the box as they do in Visual Studio. And, once you get past some of its minor quirks, Matt Ward's NuGet Add-In does work; I have been using it in anger for 3 weeks and I can attest to that. But it could be argued that NuGet is such a central component in the .Net experience that it should warrant thorough QA - perhaps by Xamarin - to try to bullet-proof it, at least for the hot use cases.

In the main, we're still very happy campers.

Date: 2014-05-26 20:12:12 BST


Nerd Food: Using Mono In Anger - Part I

In which we convince ourselves that it is time to use Mono in anger.

I've always been a Mono fan from the early days. Sadly, my career as a Mono developer never amounted to much - not even a committed patch - but I remained a loyal lurker and supporter. It's not so much because I thought it was going to change the face of Free Software, but because I wondered if it would allow Linux to gain a foothold in Windows-only companies. Having been working for this lot for such a long time, I desperately yearned for a bit of serious Linux interaction in my professional life, so I kept on playing around with Mono technology to evaluate progress. For instance, a few years back I wrote Adventures in F# Land Part 1, Part 2 and Part 3, based on my experiences in setting up F# in Mono [1]. I continually did this kind of mini-experiment, mucking around with tiny programs to query the state of the world. But what I really wanted to do was to test the whole infrastructure in anger [2].

As you can imagine, I was extremely excited when I read Miguel's tweet:

Microsoft has open sourced the new generation of ASP.NET: http://github.com/aspnet

Another blog post had more details: ASP.NET vNext. Microsoft was to start testing code against Mono, at least for ASP.Net. To me this was nothing short of revolutionary. Not to the average Linux developer, of course - barely a yawn from that camp. But to the average .Net developer this is - or should be - earth-shattering news. When you couple this with the .Net Foundation announcements, it signals that Microsoft now gives its official blessing for developers to start thinking of .Net as cross-platform.

It may surprise you, but the average .Net developer does not spend days and nights awake, worrying about how to carefully design their architecture to maximise cross-platform code. They do not have Mono on their CIs. They tend not to care about - or even be aware of - all the latest and greatest features that the Linux kernel sports, or how cool Docker, Chef and Puppet are. Those that are aware of this are in a minority, and most likely have been exposed to the other side via the Javascript infiltration that is now taking place. A fairly recent comment in a Castle issue should give you a flavour of this thinking:

I don't think anyone is actively pursuing Mono support at the moment. Reality with Mono historically has been that no one cared enough to properly bring it to parity with .NET and keep it that way.

Having said that, we're more than happy to improve our Mono support (…)

For those not in the know, Castle is an Open Source project that provides an IoC (Inversion of Control) container. It is really popular in .Net circles. As we shall see later, Castle actually works really well with recent versions of Mono - but the fact that the Castle team didn't have a CI with Mono is indicative of the state of the world in Windows .Net.

This is not to say that Linux is not on Windows developers' radars - quite the opposite, and the progression has been nothing short of amazing. During my working life I've spent around 70% of my time working for Windows shops; to be honest, I still find it surprising when I look at the past and compare it to the present. Things went from "Linux, what's that?" to "Linux is a nerd's toy, it will never make it" to "That's for Java guys" and finally, to "We run things on Linux where it is good at, but not everything". Things being Java and Coherence, Grid Engines, Javascript stacks and so on - no mention of .Net.

In fact, the .Net revolution hampered the Linux strategy a lot. Java was a good Trojan horse to force developers to think cross-platform, and many a Windows shop turned to Linux because of Java. However, once you had a competitive (read "enterprise-ready") .Net framework, Java receded in Windows people's minds and it was back to business as usual. Even a strong and capable Mono was not enough; a lot of the .Net code was so incredibly intermingled with Windows-only code that portability was impossible without great effort. Many a hero tried and failed to compile massive code bases on Linux.

Things started to change over the last five years. Three separate trends converged. The first was the new focus on SOA. Windows developers finally found out what the UNIX geeks knew all along: that services are the way forward. And because their view of SOA was closely linked to REST and Web development, suddenly the code bases became a lot less Windows-dependent and a lot more self-contained; no more WPF, no more third-party closed-source controls. The second important development was that Windows developers found Free and Open Source Software. A lot of the libraries they rely on are open source, and thus much more Mono-friendly. The final trend was the rise and rise of Javascript and Linux-only tools. Every other Windows developer is now familiar with Node.js, Mongo, Redis et al., and has at least a rudimentary knowledge of Linux because of it.

However, the Linux developer in Windows-land is still stuck, and all because of a tiny little detail: IIS and ASP.Net are still a pain in the backside. Mono has provided ASP.Net support for a while now, but to be honest, the applications I've seen in the wild always seem to use some features that are either not yet supported or not supported very well. There is just so much coupling that happens when you pull in IIS that it is hard to describe, and a lot of it happens without people realising - it's that lack of cross-platform mentality all over again. So for me the one last little piece of the puzzle was IIS and ASP.Net. This was anchoring the entire edifice to Windows.

Now you can see why having a fully open source ASP.Net stack is important; especially one that does not depend on IIS directly, but uses OWIN for separation of concerns and is tested on Mono by Microsoft. Code bases designed on this stack may be able to compile on Mono without any changes - or, more likely, with a few small changes - opening the floodgates for Linux and OSX .Net development. Of course, don't let me get too carried away here. There is still much, much to be done for a totally cross-platform environment: Powershell, KPM, etc. But this is a very large step in the right direction, and it covers a large number of use cases.
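To make the OWIN point a little more concrete, here is a minimal self-hosted sketch using the Katana packages (Microsoft.Owin.Hosting plus an HTTP listener host). It is purely illustrative - it is not taken from the vNext stack itself - but it shows the key property: an HTTP pipeline with no IIS anywhere in sight.

using System;
using Microsoft.Owin.Hosting;
using Owin;

class Program
{
    static void Main()
    {
        // Self-hosted OWIN pipeline: no IIS, no System.Web.
        using (WebApp.Start("http://localhost:9000/", app =>
            app.Run(context => context.Response.WriteAsync("Hello from OWIN"))))
        {
            Console.WriteLine("Listening on http://localhost:9000/ - press Enter to stop.");
            Console.ReadLine();
        }
    }
}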

A final point is worth making about the corporate ecosystem that evolved around Mono. This is not really a marketing ploy, although I do wish them all the best. Companies such as Xamarin, Unity and projects such as MonoGame are instrumental in the progress of Mono. By creating funded start-ups around Mono, with products that are used by a large number of developers, they engineered a dramatic increase of quality in the entire ecosystem. And in turn this raised its visibility, bringing more developers on board, in the usual virtuous circle of open source.

So for all these reasons and more, it seemed like Mono deserved another good look. As it happened, I was trying to develop a fairly large test tool for work in my copious free time. Since I needed to code from home, I thought this would be the right project to give it a fair old whack. And you, dear reader, will get to know all about it in this series of posts.

Footnotes:

[1] Mind you, all of it is to be ignored as it's now trivial to set up F#.

[2] For the non-British speakers out there, using something in anger just means "to give it a good go" or "to really push it" rather than writing a hello world.

Date: 2014-05-26 17:24:40 BST
