Thursday, September 25, 2014

Nerd Food: Start-ups at the Gate: Trends in the Technology Industry


It is very difficult to convey the vast scale at which the largest Internet companies operate. To make matters worse, we are fast becoming immune to statistics such as one billion users and five trillion searches per day, surrounded as we are by a sea of large numbers on a daily basis. Having said that, any Information Technology (IT) professional worth his or her salt cannot help but feel in awe at what has been achieved. It is not just that these platforms are big; they work at a scale that is qualitatively different from anything that has come before. The sort of things that are possible at this scale are mind-boggling, and we have only begun to scratch the surface[1].

Perhaps even more revolutionary is the fact that these companies have made it possible for anyone to start thinking about data in the same way as they do, and to start handling it using the very same tools they use. There is now a never-ending archive of the very best large-scalability tools, all available for free, with code that anyone can inspect, modify and optimise to meet their specific requirements. The tools come with a wealth of practical documentation on how to put solutions together - either freely available or at low-cost - and with a number of passionate user communities that provide expert advice and are eager to accept modifications.

The ecosystem they have created is truly staggering. As an example, Facebook has open sourced almost 10M lines of code to date. Twitter, Google and LinkedIn are not far behind[2]. It is also important to note that non-Internet companies such as Microsoft and IBM are making extremely large contributions too. All told, the overall pool of open source code is growing exponentially, as demonstrated by a 2008 study. In most cases, these are full-fledged products, tested in the most challenging production conditions imaginable. Of course, one must also not forget the contributions made to projects that are not under company control, such as the Linux kernel, the Apache web server and the GNU Compiler Collection (GCC).

In order to understand why modern start-ups provide such a compelling financial case, one must first understand how we got to the amazing technology landscape we have today. To do so, we shall divide recent technology history into eras, and explain each era's contribution. We will then focus on modern start-ups, and explain how this model can be deployed across a broad range of industries, and in particular to the financial sector.

First Era: Dot-com Bubble

Silicon Valley was and still is the world's start-up factory so, unsurprisingly, it was ground zero for the start-up revolution that took place at the end of the nineties - what would eventually be known as the Dot-com Bubble. Most people remember those days as a heady time, when each and every idea was packaged as a website and sold for millions or, in some cases, billions of dollars. Of course, we all had a steep price to pay when the bubble burst - an extinction event that decimated the young Internet sector and IT companies in general.

There is however another way to look at this bubble: it was a gigantic experiment to determine whether there were successful business models to be found in the large scale of the Internet. Whilst much mal-investment occurred, the bubble still produced or pushed forward several of the giants of today such as Google, Amazon and Yahoo.

Most of these companies share a similar technology story. Originally faced with a dearth of investment but with bright young engineers, they found themselves relying on Free and Open Source Software (FOSS) and cheap, off-the-shelf hardware. Once they became big enough, it just didn't make sense to replace all of that infrastructure with software and hardware supplied by commercial vendors.

This turn of events was crucial. If these companies had had larger budgets and less skilled engineers, they would have relied on the cutting-edge commercial technology of the time. The short-term gain would have revealed itself as long-term pain, for their ability to scale would have been inevitably restricted. In addition, many of the business models wouldn't have worked under this cost structure[3]. As it was, since they couldn't even afford the relatively cheap licences of commercial software, they had to make do with what was available for free.

The engineers in these companies - and many others that didn't make it through the dot-com filter - spent countless hours improving FOSS tools and gave back much of these improvements to communities such as Linux, MySQL, Apache, GCC and so on. However, they kept private the plumbing work done to manage the large cluster of cheap machines, as well as the domain related technology - in industry-speak, the Secret Sauce.

By the time the dot-com bubble had run its course and the dust settled, the landscape looked as follows:

  • A model had been created whereby a small number of engineers could bootstrap an Internet-based company at very low cost, serving a small number of users initially.
  • The model had been stretched to very large numbers of users and had been found to scale extremely well; as the business proved itself and investment came in, it was possible to increase the size of the computing infrastructure to cope with demand.
  • Because of the open nature of the technologies involved, the ideas became widespread over the internet.

The basic high-scalability FOSS stack - ready for start-ups - was born; the Data Centre, where large amounts of computing are available at low cost, soon followed. It would eventually morph into the Cloud.

Second Era: Social Media

The bursting of the dot-com bubble did not dampen the entrepreneurial spirits, but it did dry up all the easily available capital, pushing the aspiring start-ups to be ever more frugal. In addition, VCs started to look for better ways to evaluate prospects. The problem they faced was no different from what they had faced during the dot-com days: how to figure out the potential of a company with no defined business model and nothing else to compare it against.

Google had proved comprehensively that the traditional valuation methods did not make sense in the world of start-ups. After all, here was a company whose founders couldn't sell it for 1M USD, and yet a few years later it was generating billions of dollars in revenues. Very few saw this coming. VCs were keen not to make the same mistake with the next Google[4].

So it was that a system to determine potential by proxy emerged over the years, using indicators such as the size of the user base, time spent by users on the platform and so on - effectively, any attribute that was deemed to have given a competitive advantage to Google and other successful dot-com companies.

In this environment, social media start-ups took centre stage. Following on from the examples of their predecessors, these companies took for granted that they were to operate on very large data sets. They inherited a very good set of scalable tools, but found that much still had to be built on top. Unlike their predecessors, many chose to do some or all of the infrastructure work out in the open, joining or creating new communities around the tools. This was in no small part due to the scarcity of funds, which encouraged collaboration.

The social media start-ups soon found themselves locked in an arms race for size, where the biggest would be the winner and all others would be doomed to irrelevance[5]. The size of the user base of the successful companies exploded[6], and the tooling required to manage such incredibly large volumes of data had to improve at the same pace or faster. Interestingly, these start-ups continued to view in-house code largely as a cost, not an asset, even after they started to bring in large revenue. The size of the secret sauce was to be kept at a minimum and the pace of open sourcing accelerated over time[7].

A final factor was the rise of the next iteration of the data centre, popularised by Amazon with AWS and EC2. It allowed any company to scale out without ever having to concern itself with physical hardware. This was revolutionary because it made the cost of scalability razor-thin:

  • Pay only for what you use: the elastic nature of EC2 meant that one could grow or shrink one's cluster based on real-time traffic demands and availability of capital.
  • Zero-cost software: FOSS was available in Amazon from the very beginning and was extremely popular with start-ups.
  • Fully automated environments via APIs: resource constrained start-ups could now start to automate all aspects of the product life-cycle. This meant they could release faster, which in turn allowed them to fight more effectively for their user base. This would in time become the DevOps movement.
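The "pay only for what you use" point above amounts to a simple control loop. A minimal sketch of such an elastic sizing rule (the function name, thresholds and capacity figures are invented for illustration; this is not any real AWS API):

```python
import math

def desired_cluster_size(requests_per_sec, capacity_per_node,
                         min_nodes=1, headroom=1.25):
    """Toy elastic-sizing rule: provision enough nodes for current
    traffic plus some headroom, and never drop below a floor."""
    needed = math.ceil(requests_per_sec * headroom / capacity_per_node)
    return max(min_nodes, needed)

# Quiet night: traffic shrinks, and so does the bill.
print(desired_cluster_size(50, 100))      # 1
# Traffic spike: the cluster grows with demand.
print(desired_cluster_size(10_000, 100))  # 125
```

Run periodically against live traffic metrics, a rule of this shape is all that is needed to keep cost roughly proportional to demand.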

By the end of the decade, the scalability tooling was largely complete. It was now possible for a small start-up to create a small website and see it scale from hundreds of users to millions, restricted only by its ability to bring in capital.

Third Era: Mobile

Mobile phones had been growing at close to an exponential rate for over two decades. However, the rise of the smartphone was a game changer, and the line in the sand was drawn with the release of the iPhone. What makes mobile so important to our story is its penetration. Until smartphones became ubiquitous, there was a large segment of the population that was either totally inaccessible or accessible only for limited periods of time. With increasingly large numbers of people carrying smartphones as they go about their day, many use cases that were never before thought possible came to the table. So whilst we call this "the Mobile era", the true heroes are smartphones and, to a lesser extent, tablets.

The mobile era started with simple apps. Smartphones were still new and applications for each platform were a novelty. There was a need to reinvent all that existed before in the world of PCs and adapt it to the new form factor. It was during this phase that the economies of scale of mobile phones became obvious. Whereas consumer PC software had prices in the range of tens to hundreds of dollars, mobile phones bootstrapped a completely different pricing model, with many apps selling for less than one dollar. Volume made up for the loss in revenue per unit. The model was so incredibly successful that a vibrant environment of apps sprung up around each of the successful platforms, carefully nurtured by the companies running the show via their app stores.

Soon enough the more complex apps came about. Companies like Foursquare and WhatsApp were trailblazers in the mobile space, merging it with ideas from social media. Many others, like Spotify, took their wares from the stagnant PC environment and moved them to the ever-growing mobile space. Complex apps differed from the simple apps in that they required large backends to manage operations. Since these companies were cash-strapped - a perennial condition of all start-ups - they found themselves reusing all of the technology developed by the social media companies and became part of the exact same landscape. Of course, the social media companies were eventually forced to jump on the mobile bandwagon - lest they got crushed by it.

So it was that the circle was closed between the three eras.

Evolutionary Pressures and Auto-Catalytic Processes

The changes just described are so revolutionary that one cannot help but look for models that approximate an explanation of what took place. Two stand out. The first is to imagine the population of start-up companies as a small segment of the overall company population that was subjected to an unbelievably harsh fitness function: to grow data volumes exponentially while growing costs less than linearly. This filter generated new kinds of companies, new kinds of technologies and new ways of managing technology.

Secondly, there is the auto-catalytic nature of the processes that shaped the current technology landscape. Exponential growth tends to have at its root this kind of self-reinforcing cycle, whereby improvements in an area A trigger improvements in another area B, which in turn forces A to improve. The process keeps on repeating itself whilst it manages to retain stability.

This is the relationship we currently have between start-ups and FOSS: the better the software gets, the cheaper it is to create new start-ups and the faster these can grow with the same amount of capital. By the same token, the more start-ups rely on FOSS, the more they find themselves contributing back - or else risk falling behind, both technologically and cost-wise. This feedback loop is an emergent property of the entire system and it has become extremely pronounced over time.
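The feedback loop can be made concrete with a toy model (the constants are entirely made up for illustration; only the shape of the curve matters): each period, a larger pool of shared code makes start-ups cheaper to found, and the resulting start-ups grow the pool further.

```python
def simulate(periods, code_pool=1.0, startups=10.0):
    """Toy auto-catalytic loop: a bigger pool of shared code lowers the
    cost of founding a start-up; more start-ups grow the pool in turn."""
    history = []
    for _ in range(periods):
        startups *= 1.0 + 0.1 * code_pool  # cheaper founding -> more entrants
        code_pool += 0.01 * startups       # each start-up contributes a little
        history.append((round(code_pool, 2), round(startups, 1)))
    return history

for code_pool, startups in simulate(5):
    print(code_pool, startups)
```

Both quantities grow faster each period - the self-reinforcing cycle described above, in miniature.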

Finance and the Age of Disruption

The concept of disruption was developed in the nineties by Clayton Christensen in The Innovator's Dilemma. The book has seen a resurgence in popularity as well as in criticism[8]. For good or ill, its ideas became the intellectual underpinnings of a new generation of start-ups.

These start-ups seek to combine all of the advances of their predecessors to create solutions to problems far outside the traditional IT realm. Examples are the hotel industry (AirBnB), the taxi industry (Uber, Lyft) and even the banking industry (Simple). Whilst it is still early days, and whilst there have been many teething problems such as issues with regulation, the direction of travel is already clear: more and more start-ups will follow the disruptive route.

What makes these companies a compelling proposition to VCs is that they are willing to take on established concerns with cost structures that are orders of magnitude larger than those of the start-ups. Their thinking is two-fold: the established companies are leaving a lot of money on the table, consumed by their inefficiency; and they are not exploiting opportunities to their full potential because they do not understand how to operate at a vast scale.

It is in this context that the finance scene comes into the picture, as part of the disruption movement's expansion. VCs have long eyed the financial industry enviously, believing that the problems being solved in trading are not that dissimilar to those faced by many large-scale start-ups. And yet the rewards in finance are disproportionately large when compared with, say, social media.

Fintech soon emerged. As applied to start-ups, Fintech is the umbrella name given to the ecosystem of start-ups and VCs that focus specifically on financial technology. This ecosystem has grown from 930M USD in 2008 to around 3Bn USD in 2013, according to Accenture. Centred mainly in London, but with smaller offshoots in other financial centres, the Fintech scene is starting to attract established players in the world of finance. For instance, Barclays has joined the fray by creating an incubator. They farmed out the work to a third party (Techstars) but allowed all the start-ups in the programme unprecedented access to their mobile APIs. Their target is to own the next generation of financial applications on mobile devices.

Whilst Barclays is disrupting from the outside, it is obvious that the investment banking legacy platforms are fertile ground for start-ups. This is where the scalability stack is a near-perfect fit. A typical example is OpenGamma. This start-up designed an open source risk platform, initially focused on back-office use. They had received over 20M USD in funding as of 2014 and have already won several industry awards. There are now several open source trading platforms to choose from, including TradeLink and OpenGamma, as well as the popular quantitative analytics library QuantLib.

As we have seen in the previous sections, there is an auto-catalytic process at play here too. Once source code becomes widely available, the cost of creating the next financial start-up goes down dramatically, because it can reuse the existing tools. This in turn means many more start-ups will emerge, further improving the general quality of the publicly available source code.


Conclusions

The objective of this article was to provide a quick survey of the impact of start-up companies on the technology landscape, and of how these relate to finance. We now turn our attention to the logical conclusions of these developments.

  • Finance will increasingly be the target of VCs and start-ups: The Fintech expansion is set to continue over the coming years and it will affect everyone involved in the industry, particularly the established participants. More companies will take Barclays' route, trying to be part of the revolution rather than be dethroned by it.
  • Banks and other established companies will begin to acquire start-ups: Related to the previous item, but with a twist. As part of the Deloitte TMT predictions event, Greg Rogers - the manager of Barclays Accelerator - stated that the acquisition of non-financial start-ups by banks was on the cards. He was speaking about Facebook's acquisition of WhatsApp for 19Bn USD, one of the largest of the year. As Google and Facebook begin integrating payments into their social platforms, banking firms will find their traditional business models under attack and will have no option but to retaliate.
  • Finance will turn increasingly to FOSS: The cost structure that finance firms had up to 2008 is not suitable for the post-2008 world. At present, the volume of regulatory work is allowing these cost structures to persist (and in some cases increase). Eventually, however, banks will have to face reality and dramatically reduce their costs, in line with the kind of revenues they are expected to make in a highly regulated financial world. There will be a dramatic shift away from the proprietary technologies of traditional vendors, unless these become much more competitive against their fierce FOSS rivals.
  • A FOSS financial stack will emerge over the next five years: Directly related to the previous point, but taking it further. Just as it was with the social media companies, so it seems likely that financial firms will eventually realise they cannot afford to maintain all of their infrastructure code. Once an investment bank takes the leap and starts relying on FOSS for trading or back-office work, the change will ripple through the industry. The FOSS code is production-ready, and a number of hedge funds are already using it in anger. All that is required is for the cost structure of the investment banking sector to be squeezed even further.


[1] As one of many examples, see Google Flu Trends. It is a predictor of outbreaks of the flu virus, with a prediction rate of about 97%. For a more comprehensive - if somewhat popular - take on the possibilities of large data sets, see Big Data: A Revolution That Will Transform How We Live, Work and Think. For a very different take - highlighting the dangers of Big Data - see Taleb's views on the ever decreasing signal to noise ratio: The Noise Bottleneck or How Noise Explodes Faster than Data.

[2] In fact, by some measures, Google has contributed several times that amount. For one such take, see Lauren Orsini's article.

[3] As an example, it was common practice for vendors to charge according to the number of processors, users and so on. Many of the better funded start-ups made use of technology from Cisco, Sun, Oracle and other large commercial vendors, but companies that did so are not very well represented in the population that survived the dot-com bubble, and they are not represented at all in the 2014 Fortune 500 list. Google, Amazon and eBay are the only Fortune 500 companies from that crop, and they all relied to a very large extent on in-house technology. Note though that we are making an empirical argument here rather than a statistical one, both due to the lack of available data and out of concern for survivorship bias.

[4] For one of many takes on the attempt to sell Google, see When Google Wanted To Sell To Excite For Under 1 Million - And They Passed. To get a flavour of how poorly understood Google's future was as late as 2000, see Google Senses That It's Time to Grow Up. Finally, the success story is best told by the growth of revenues between 2001 and 2003 - see Google's 2003 Financial Tables.

[5] Twitter, Facebook, YouTube, LinkedIn and the like were the victors, but for every victor, a worthwhile foe was defeated; MySpace, Hi5, Orkut and many others were all very popular at one time but lost the war and faded into obscurity.

[6] As an example, the number of Facebook users grew at an exponential rate between 2004 and 2013 - see Facebook: 10 years of social networking, in numbers.

[7] A possible explanation for this decision is the need for continuous scalability. Even companies as large as Facebook or Google cannot dedicate the resources required to adequately maintain every single tool they own; their code bases are just too large. At the same time, they cannot afford for code to become stale, because it must continually withstand brutal scalability challenges. The solution to this conundrum was to open source aggressively and to create vibrant communities around tooling. By converting themselves into stewards of the tools, they could place quasi-skeleton crews to give direction to development, and then rely on the swarms of new start-ups to contribute patches. Once there are enough improvements, the latest version of these tools can be incorporated into the internal infrastructure. This proved to be a very cost-effective strategy, even for large companies, and allowed continued investment across the technology stack.

[8] There are quite a few to choose from but Lepore's is one of the best because it robustly attacks both the ideology and the quality of the data.

Date: 2014-09-25 21:37:47 BST

Org version 7.8.02 with Emacs version 23


Sunday, September 07, 2014

Nerd Food: Dogen: Old Demo


As part of my attempt to make the work in Dogen a bit more visible, I thought I'd repost an old demo here. The interface has changed very little since those days so it's still a useful introduction.

Date: 2014-09-07 22:23:58 BST



Nerd Food: Dogen: Lessons in Incremental Coding


A lot of interesting lessons have been learned during the development of Dogen, and I'm rather afraid many more are still in store. As is typical with agile, I'm constantly reviewing processes in search of improvements. One such idea was that putting pen to paper could help improve the retrospective process itself. The result is this rather long blog post, which hopefully is of use to developers in similar circumstances. Unlike the typical bullet-point-based retrospective, this post is a rambling narrative, as it aims to provide context to the reader. Subsequent retrospectives will be a lot smaller and more to the point.

Talking about context: I haven't spoken very much about Dogen in this blog, so a small introduction is in order. Dogen is an attempt to create a domain model generator. The manual goes into quite a bit more detail, but for the purposes of this exercise, it suffices to think of it as a C++ code generator. Dogen has been developed continuously since 2012 - with a few dry spells - and reached its fiftieth sprint recently. Having said that, our road to a finished product is still a long one.

The remainder of this article looks at what has worked and what has not worked so well thus far in Dogen's development history.

Understanding Time

Dogen was conceived when we were trying to do our first start-up. Once that ended - around the back end of 2012 - I kept working on the tool in my spare time, a setup that has continued ever since. There are no other contributors; development just keeps chugging along, slowly but steadily, with no pressures other than to enjoy the sights.

Working on my own and in my spare time meant that I had two conflicting requirements: very little development resource and very ambitious ideas that required lots of work. With family commitments and a full-time job, I quickly found out that there weren't a lot of spare cycles left. In fact, after some analysis, I realised I was in a conundrum. Whilst there was a lot of "dead time" in the average week, it was mostly low-quality time: lots of discontinuous segments of varying and unpredictable lengths. Summed together in a naive way it seemed like a lot but, as every programmer knows, six blocks of ten minutes do not one solid hour make.

Nevertheless, one has to play the game with the cards that were dealt. I soon realised that the correct question to ask was: "what kind of development style makes one productive under these conditions?". The answer turned out to be opportunistic coding. This is rooted in having a better understanding of the different "qualities" of time and how best to exploit them. For example, when you have, say, five to fifteen minutes available, it makes sense to do small updates to the manual or fix trivial problems - a typo in the documentation, renaming variables in a function, mopping up the backlog and other activities of that ilk. A solid block of forty minutes to an hour affords you more: for instance, implementing part or the whole of a story for which the analysis has been completed, or doing some analysis for existing stories. On those rare occasions when half a day or longer is available, one must make the most of it and take on a complex piece of work that requires sustained concentration. These sessions proved most valuable when the output is a set of well-defined stories that are ready for implementation.

One needs very good processes in order to be able to manage the usage of time in this fashion. Luckily, agile provides it.

Slow Motion Agile

Looking back on ~2.4k commits, one of the major wins in terms of development process was to think incrementally. Of course, agile already gives you a mental framework for that, and we had a functioning scrum process during our start-up days: daily stand-ups, bi-weekly sprints, pre-sprint planning, post-sprint reviews, demos and all of that good stuff. It worked really well, and it kept us honest and clean. We used a very simple org-mode file to keep track of all the open stories, and at one point we even built a simple burn-down chart generator to allow us to measure velocity.
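A burn-down generator of that kind can be surprisingly small. A sketch in Python (the org-mode headline convention shown here - estimates as `:3h:` tags - is invented for illustration, not Dogen's actual format):

```python
import re

def remaining_hours(org_text):
    """Sum the estimates of stories still marked TODO. Expects headlines
    such as '** TODO Add C++ formatter :3h:' - a made-up convention."""
    total = 0.0
    for line in org_text.splitlines():
        m = re.match(r"\*+\s+(TODO|DONE)\s+.*:(\d+(?:\.\d+)?)h:", line)
        if m and m.group(1) == "TODO":
            total += float(m.group(2))
    return total

sprint = """\
** DONE Wire up the parser :2h:
** TODO Add C++ formatter :3h:
** TODO Update the manual :1.5h:
"""
print(remaining_hours(sprint))  # 4.5
```

Run once a day against the sprint log and plotted over time, the falling total is the burn-down chart.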

Granted, when you are working alone in your spare time, a chunk of agile may not make sense; for instance, providing status updates to yourself may not be the most productive use of scarce time. Surprisingly, I found quite a bit of process to be vital. I've kept the bi-weekly sprint cycle, the sprint logs, the product backlog and the time-tracking we had originally setup and found them extremely useful - quite possibly the thing that has kept me going for such an extended period of time, to be brutally honest. When you are working on an open source project it is very easy to get lost in its open-ended-ness and find yourself giving up, particularly if you are not getting (or expecting) any user feedback. Even Linus himself has said many times he would have given up the kernel if it wasn't for other people bringing him problems to keep him interested.

Lacking Linus' ability to attract crowds of interested developers, I went for the next best thing: I made them up. Well, in a metaphorical way at least, as this is what user stories are when you have no external users to drive them. As I am using the product in anger, I find it very easy to put myself in the head of a user and come up with requirements that push development forward. These stories really help, because they transform the cloud of possibilities into concrete, simple, measurable deliverables that one can choose to deliver or not. Once you have a set of stories, you have no excuse to be lazy, because you can visualise in your head just how much effort it would take to implement a story - and hey, since nerds are terrible at estimating, it's never that much effort at all. As everyone knows, it's not quite that easy in the end; but once you've started, you get the feeling you have to at least finish the task at hand, and so on, one story at a time, one sprint at a time, until a body of work starts building up. It's slow, excruciatingly slow, but it's steady like water working in geological time; when you look back five sprints, you cannot help but be amazed at how much can be achieved in such an incremental way - and how much is still left.

And then you get hooked on measurements. I now love measuring everything, from how long it takes me to complete a story, to where time goes in a sprint, to how many commits I make a day - everything that can easily be measured without adding overhead. There is no incentive to game the system - hell, you could create a script that commits 20 times a day, if the commit count were all you cared about. But it's not, so why bother. Because of this, statistics actually start to tell you valuable information about the world and to impel you forward. For instance, GitHub streaks mean that I always try to make at least one commit per day. So even on days when I'm tired, I always force myself to do something, and sometimes that quick commit morphs into an hour or two of work that wouldn't have happened otherwise.
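Commit-count statistics of this sort are a one-liner to compute. A sketch that tallies commits per calendar day from ISO-8601 timestamps, such as those produced by `git log --format=%aI` (the sample log below is made up):

```python
from collections import Counter

def commits_per_day(timestamps):
    """Count commits per calendar day, given ISO-8601 timestamps such as
    the output of `git log --format=%aI`."""
    return Counter(ts[:10] for ts in timestamps)

log = [
    "2014-09-06T09:12:00+01:00",
    "2014-09-06T22:40:10+01:00",
    "2014-09-07T08:03:55+01:00",
]
print(commits_per_day(log))  # Counter({'2014-09-06': 2, '2014-09-07': 1})
```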

As I mentioned before, it was revealing to find out that there are different types of time. In order to take advantage of this heterogeneity, one must make scrupulous use of the product backlog. This has proven invaluable, as you can attest by its current size. Whether we are part way through a story or just idly daydreaming, each and every idea must be added to the product backlog, with sufficient detail to allow one to reconstruct one's train of thought at that point in time. Once in the backlog, items can be continuously refined until eventually we find a suitable sprint to tackle them or they get deprecated altogether. Without a healthy backlog it is not possible to make the most of these elusive time slots. Conversely, it is important to make each story as small and as focused as possible, and to minimise spikes unless they really are on the critical path of the story. This is mainly for psychological reasons: one needs to mark stories as complete, to feel like work has been done. Never-ending stories are just bad for morale.

In general, this extreme incrementalism has served us well. Not all is positive though. The worst problem has been a great difficulty in tackling complex problems - those that require several hours just to load them into your head. These are unavoidable in any sufficiently large code base. Having lots of discontinuous segments of unpredictable duration has reduced efficiency considerably. In particular, I notice I have spent a lot more time lost in conceptual circles, and I have taken a lot longer to explore alternatives than when working full time.

DVCS to the Core

We had already started to use git during the start-up days, and it had proved to be a major win at the time. After all, one never quite knows where one will be coding from, and whether internet access is available or not, so it's important to have a self-contained environment. In the end we found out it brought many, many more advantages such as great collaborative flows, good managed web interfaces/hosting providers (GitHub and, to some extent, BitBucket), amazing raw speed even on low-powered machines, and a number of other wins - all covered by lots and lots of posts around the web, so I won't bore you with that.

On the surface it may seem that DVCS is most useful for a multi-developer team. This is not the case. The more discontinuous your time is, the more you appreciate its distributed nature. This is because each "kind" of time has a more suitable device - perhaps a netbook for the train, a desktop at someone's house or even a phone while waiting somewhere. With DVCS you can easily switch devices and continue exactly where you left off. With GitHub you can even author using the web interface, so a mobile phone suddenly becomes useful for reading and writing.

Another decision that turned out to be a major win is still not the done thing. Ever the trailblazers, we decided to put everything related to the project in version control. And by "everything" I do mean everything: documentation, bug reports, agile process, blog posts, the whole lot. It did seem a bit silly not to use GitHub's Wiki and Issues at the time but, in hindsight, having everything in one version-controlled place proved to be a major win:

  • searching is never further than a couple of greps away, and it's not sensitive to connectivity;
  • all you need is a tiny sliver of connectivity to push or pull, and work can be batched to wait for that moment;
  • updates by other people come in as commits and can be easily reviewed as part of the normal push/pull process - not that we got any of late, to be fair;
  • changes can easily be diffed;
  • history can be checked using the familiar version control interface, which is available wherever you go.

When you have little time, these advantages are life-savers.
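To make the searching point concrete, here is a toy sketch of such a one-directory "project" where docs live alongside code, so search and history are plain git operations (the paths and file names are invented):

```shell
# Invented mini-project: docs and code in one version-controlled tree.
git init -q project
mkdir -p project/doc
echo "story: refactor the variability subsystem" > project/doc/backlog.org
git -C project add doc/backlog.org
git -C project -c user.email=me@example.com -c user.name=me \
    commit -qm "Add backlog item"

# Searching the whole project is a couple of greps away, offline:
grep -rn "variability" project/doc
git -C project grep -n "variability"

# And history is just the familiar version control interface:
git -C project log --oneline -- doc/backlog.org
```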

The last but very important lesson learned was to commit early and commit often. It's rather obvious in hindsight, really. After all, if you have very small blocks of time to do work, you want to make sure you don't break anything; the last thing you need is to spend a week debugging a tricky problem, with no idea of where you're going or how far you still have to travel. So it's important to make your commits very small and very focused, such that a bisection would almost immediately reveal a problem - or at least provide you with an obvious rollback strategy. This has proved invaluable far too many times to count. The gist of this approach is to split changes in an almost OCD sort of way, to the point that anyone can look at the commit comment and the commit diff and make a judgement as to whether the change was correct or not. To be fair, it's not quite always that straightforward, but that has been the overall aim.
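To see why small commits and bisection go hand in hand, here is a self-contained sketch - the repository and the "regression" are fabricated: five tiny commits, one of which introduces a bad line, and git bisect run pinpoints it automatically.

```shell
# Fabricated example: five small commits, the fourth introduces a "bug".
git init -q demo
for i in 1 2 3 4 5; do
    echo "change $i" >> demo/file.txt
    git -C demo add file.txt
    git -C demo -c user.email=me@example.com -c user.name=me \
        commit -qm "Small, focused change $i"
done

# The "test": the build is broken whenever the bad line is present.
git -C demo bisect start HEAD HEAD~4
git -C demo bisect run sh -c '! grep -q "change 4" file.txt'

# Because every commit is tiny, the culprit's diff is obvious at a glance.
git -C demo log -1 --format=%s refs/bisect/bad
git -C demo bisect reset
```

With commits this small, the commit that bisect fingers is also the rollback strategy: revert it and move on.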

Struggling to stay Continuously Integrated

After the commit comes the build and, as they say, the proof is in the pudding. When it comes to code, that largely means CI; granted, it may not be a very reliable proof, but it is nevertheless the best proof we've got. One of the major wins from the start-up days was to set up CI, and to give it as wide a coverage as we could muster. We set up multiple build agents across compilers and platforms, and added dynamic analysis, code coverage, packaging and basic sanity tests on those packages.

All of these have proven to be major steps in keeping the show on the road, and once set up, they were normally fairly trivial to maintain. We did have a couple of minor issues with CDash whilst we were running our own server. Eventually we moved over to the hosted CDash server, but it has limitations on the number of builds, which meant I had to switch some build agents off. The other main stumbling block is finding the time to do large infrastructural updates to the build agents, such as setting up new versions of Boost, new compilers and so on. These are horrendously time-consuming across platforms because you never know what issues you are going to hit, and each platform has its own way of doing things.

The biggest lesson we learned here is that CI is vital, but projects with no time at all should not waste it managing their own CI infrastructure. There are just not enough hours in the day. I have been looking into Travis to make this process easier in the future. Also, whilst being cross-platform is a very worthy objective, one has to weigh the costs against the benefits. If you have a tiny user base, it may make sense to stick to one platform and continue to write portable code without "proof"; once users start asking for multiple platforms, it is then worth considering doing the work required to support them.

The packaging story was also a very good one to start off with - after all, most users will probably rely on those - but it turned out to be much harder than first thought. We spent quite a bit of time integrating with the GitHub API, uploading packages into their downloads section, downloading them from there, testing, and then renaming them for user consumption. Whilst it lasted, this setup was very useful. Unfortunately it didn't last very long, as GitHub decided to decommission their downloads section. Since most of the upload and download code was GitHub-specific, we could not readily move over to a different location. The lesson here was that this sort of functionality is extremely useful and worth dedicating time to, but one should always have a plan B and even a plan C. To make a long story short, the end result is that we don't have any downloads available at all - not even stale ones - nor do we have any sanity checks on the packages we produce; they basically go to /dev/null.
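A hypothetical sketch of what a "plan B"-friendly design might look like: keep the provider-specific logic behind a single seam, so a decommissioned service is a one-case change rather than a rewrite. Everything here - the function, the PACKAGE_HOST variable, the package name - is invented for illustration:

```shell
#!/bin/sh
# Hypothetical: all hosting-specific knowledge lives in this one function.
upload_package() {
    package="$1"
    case "${PACKAGE_HOST:-none}" in
        github) echo "uploading $package via the GitHub releases API" ;;
        s3)     echo "uploading $package to an S3 bucket" ;;
        none)   echo "no host configured; keeping $package locally" ;;
    esac
}

# Swapping providers is now a configuration change, not a code change.
PACKAGE_HOST=s3
upload_package dogen_0.50.0_amd64.deb
```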

In summary, all of our pains led us to conclude that one should externalise early, externalise often and externalise everything. If there is a free (or cheap) provider in the cloud that can take some or all of your infrastructure work off your hands, always consider using them first rather than hosting your own infrastructure. And remember: your time is worth money, and it is better spent coding. Of course, it is important to ensure that the provider is reliable, has been around for a while and is used by a critical mass. There is nothing worse than spending a lot of effort migrating to a platform, only to find out that it is about to dramatically change its APIs, prices or terms and conditions - or, even worse, to be shut down altogether.

Loosely Coupled

Another very useful lesson I learned was to keep off-distro dependencies to a minimum. This is rather related to the previous points on CI and cross-platform-ness, really. During the start-up days we started off by requiring a C++ compiler with good C++ 11 support, and a Boost library with a few off-tree libraries - mainly Boost.Log. This meant we had to have our own little "chroot" with all of these, and we had to build them by hand, sprinkled with plenty of helper scripts. In those dark days almost nothing was supplied by the distro, and life was painful. It was just about workable when we had time on our hands, but this is really not the sort of thing you want to spend time maintaining when working on a project in your spare time.

To be fair, I had always intended to move to distro-supplied packages as soon as they caught up, and when that happened the transition was smooth enough. As things stand, we have a very small off-distro footprint - mainly ODB and EOS. The additional advantage of not having off-distro dependencies is that you can start to consider yourself for inclusion in a distro. Even in these days of Docker, being shipped by a distro is still a good milestone for any open source project, so it's important to aim for it. Once more, it's the old psychological factors.

All in all, it seems to me we took the right decisions, as both C++ 11 and Boost.Log have proven quite useful; but in the future I will certainly think very carefully before adding dependencies on off-distro libraries.

Conclusions

In general, the first fifty iterations of Dogen have been very positive. It has been a rather interesting journey, and dealing with pure uncertainty is not always easy - after all, one always wants to reach a destination. At the same time, much has been learned in the process, and a setup has been created that is sustainable given the available resources. In the near future I intend to improve the visibility of the project as I believe that, for all its faults, it is still useful in its current form.

Date: 2014-09-07 22:02:42 BST

Org version 7.8.02 with Emacs version 24

Validate XHTML 1.0

Friday, August 08, 2014

Nerd Food: Using Mono In Anger - Part IV

In which we discuss the advances in MonoDevelop 5

This is the fourth and final part of a series of posts on my experiences using Mono for a fairly demanding project. For more context please read part 1, part 2 and part 3.

In this instalment we shall have a look at latest incarnation of MonoDevelop.

Getting Latest and Greatest

As I was part-way through this series of blog posts, Xamarin announced Xamarin Studio 5 - the commercial product based off of MonoDevelop. Clearly I had to get my hands on it. However, in this particular instance Debian unstable proved to be rather… stable. The latest packaged versions of Mono and MonoDevelop are rather dated, and the packaging mailing list is not the most active, as my request for news on packaging revealed.

Building is not an entirely trivial experience, as Brendan's comment on a previous post demonstrated, so I was keen on going for binary packages. Surprisingly, there are not many private repos that publish up-to-date Debian packages for Mono. After much searching, I found an Ubuntu PPA that did:

add-apt-repository 'deb quantal main'
apt-get install monodevelop-current

Running it was as easy as using the launcher script:


And just as I was about to moan from the sidelines and beg Xamarin to try and help out Debian and Linux packagers in general, Miguel sent the following tweet:

Miguel de Icaza‏@migueldeicaza 4h Mono snapshots: @directhex just published our daily Linux packages

It's like Xamarin just reads my mind!

I haven't had the chance to play with these packages yet, and I didn't see any references to MonoDevelop in Jenkins (admittedly, it wasn't the deepest search I've done), but it seems like a great step forward.

Playing with Latest and Greatest

So what has changed? The UI may look identical to the previous version, but lord has the polish level gone up. Basically, almost all the problems I had bumped into have gone away.

NuGet support

Update: See this post by Matt Ward for more details on NuGet support.

As I mentioned before, whilst the NuGet plugin was great for basic usage, it did have a lot of corner cases, including the certificate issues, full restore not working properly and so on. This has all been sorted out in MonoDevelop 5. It sports an internal implementation, as explained in the release notes, and it has been flawless up till now.

I did bump into an annoying problem, but I think it's more Visual Studio's fault than anything else. Basically, Microsoft decided to add some NuGet.targets to the solution by copying them into .nuget. Now, to their credit, they appear to have thought about Mono:

        <!-- We need to launch nuget.exe with the mono command if we're not on windows -->

However, this fails miserably. The DownloadNuGet target does not appear to exist in Mono, and copying NuGet.exe manually into .nuget also failed - apparently it's not just a single binary these days. The lazy man's solution was to find the NuGet binaries in MonoDevelop and copy them across to the .nuget directory (I found them under monodevelop-5.0/external/nuget-binary). Once this was done, building worked just fine.

Note also that I didn't have time to test the .nuget directory properly, by overriding the default directory with something slightly more sensible. However, I don't particularly like having my packages in the middle of the source tree, so I'll be trying that very soon.

Overall, the NuGet experience is great, and package restoring Just Works (TM).

Intellisense and Friends

I was already quite pleased with Intellisense in MonoDevelop 4, but I did find it was easy to confuse it when files got into a bit of a state - say, when pasting random chunks of code into a file. All of these problems are now gone in MonoDevelop 5. In more challenging situations I have noticed the syntax highlighting disappearing for a little while, but as soon as the code is vaguely sensible it returns straight away.

It is also a pleasure to use Ctrl-Shift-T to go to definitions; in some ways it seems even more powerful than ReSharper. It is certainly more responsive, even on my lowly NetBook with 1GB of RAM.

One slight snag is that extract interface seems to have gone missing - I was pretty sure I had used it on MonoDevelop 4, but for the life of me can't find it on 5.

NUnit

I was a very happy user of the NUnit add-on for weeks on end and it performed flawlessly. However, today it got stuck loading tests and I ended up having to restart MonoDevelop to fix it. Bearing in mind I normally leave it running for weeks at a time, this annoyed me slightly. Of course, to be fair, I do restart Visual Studio every couple of days or so, so the odd MonoDevelop restart is not exactly the end of the world.

But in general, one complaint I have against both Visual Studio and MonoDevelop is the opaqueness of unit testing. For me it all started with shadow copying in the NUnit UI and went downhill from there, really. If only one could see exactly what it is that the IDE is trying to do, it would be fairly trivial to debug; as it is, all I know is that my tests are "loading", but fail to load a few minutes later.

Anyway, that's just me ranting. Other than that, unit testing has worked really well, and I even started making use of the "Results Pad" and all - shiny charts!

Git FTW, UI Quirks and Resources

I had mentioned before that there were some minor UI quirks. For instance I recall seeing a search box that was not properly drawn, and having problems with the default layout of the screen. I'm happy to report that all of my UI quirks have gone away with 5. It is quite polished in that regard.

I've also started making use of the version control support - to some extent, of course, as I still think Magit is the best thing since sliced bread. Having said that, it's very useful to see a diff against what was committed, or to go up and down the history of a file, without having to go to Emacs. Version control is extremely quick. Even though Visual Studio now has git support integrated, it is a lot slower than MonoDevelop. I basically never wait at all for git in MonoDevelop.

Finally, a word on resources. I can still use MonoDevelop on my NetBook with its 1GB of RAM, much of it taken by Gnome 3 and Chrome. However, I did see it using over 250 MB of RAM on my desktop PC. I wonder if MonoDevelop is more aggressive in its usage of memory when it sees there is a lot available.

Conclusions

Whilst I'll still be using MonoDevelop for a few weeks longer, I think we have done enough for this four-part series. My main objective was really to pit Mono and MonoDevelop against Visual Studio 2013 on a fairly serious project, requiring all the usual suspects: .Net, Castle, Log4Net, MongoDB and so on. To my surprise, I found I had very few interoperability problems - on the whole, the exact same source code, configuration, etc. just worked on both Windows and Linux. It says a lot about how far Mono has progressed.

Regrettably, I didn't get as far as playing around with vNext - the coding is taking a lot longer than expected - but if I do get as far as that I shall post an update.

It's great news that Xamarin is improving their Linux support; I can imagine there must be a number of companies out there considering Docker for their .Net environments. Xamarin is going to be in a great position to win over these tight Windows shops with the great products they have.

Date: 2014-08-09 00:51:03 BST


Tuesday, May 27, 2014

Nerd Food: Using Mono In Anger - Part III

In which we discuss the various libraries and tools used.

This is the third part of a series of posts on my experiences using Mono for a fairly demanding project. For more context please read part 1 and part 2.

In this instalment we shall focus more on the libraries, tools and technologies that I ended up using.

Castle

I've mentioned Castle a few times already. It appears to be the de facto IoC container for .Net, so it's very important to have a good story around it. As I explained in the previous post, I added NuGet references to Castle Core and Castle Windsor, and after that it was pretty much smooth sailing. I set up Windsor installers as described by Mark Seemann in his post on IWindsorInstaller, and that worked as described. My main program does exactly as Mark's:

var container = new WindsorContainer();
container.Install(new MyModule.WindsorInstaller(), new OtherModule.WindsorInstaller());
return container.Resolve<IEntryPoint>();

Basically, I have a number of IWindsorInstallers (e.g. MyModule.WindsorInstaller() etc.) that get installed, and then all that needs to be done is to resolve the "entry point" for the app - e.g. whatever your main workflow is.

All of this worked out of the box without any tweaking from my part.

MongoDB

I've used MongoDB as the store for my test tool; I'll give a bit of context before I get into the Mono aspects. Mentally, I picture MongoDB somewhere in between PostgreSQL and Coherence / Memcached. That is, it's obviously not a relational database but one of those NoSQL specials: a schemaless, persistent, document database. You can do a lot of this stuff using hstore, of course, and Postgres now even sports something similar-but-not-quite-the-same-as BSON - JSONB, in the usual humorous Postgres way. MongoDB's setup is somewhat easier than Postgres', in both replicated and non-replicated scenarios. It also offers Javascript-based querying which, to be fair, Postgres also does. I'd say that, if you have to choose between the two, go for MongoDB if you need a quick setup (replication included), if you don't care too much about security and if you do not need any RDBMS support. Otherwise, use the latest Postgres. And RTM. A lot.

MongoDB is obviously also much easier to set up than Coherence. Of course, if you go for the trivial setup, Coherence is easy; but once you get into proper distributed setups I found it to be an absolute nightmare, requiring a lot of expertise just to understand why your data has been evicted. That's excluding the more complex scenarios such as invocation services, backing maps and so on. Sure, you can get the performance and the scalability, but you really need to know what you are doing. And let's not mention the licence costs. Basically, for a plain in-memory cache with an easy setup, just use Memcached.

But let's progress with MongoDB. Regrettably, there are no packages for it in Testing, but the wiki has a rather straightforward set of instructions under Install MongoDB on Debian. It boils down to:

# apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
# echo 'deb http://downloads-distro.mongodb.org/repo/debian-sysvinit dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
# apt-get update
# apt-get install mongodb-org

Since I'm using systemd I was a bit apprehensive about their control scripts. As it turns out, they worked out of the box without any problems. I did find the installation to vary across machines: on some I got journaling by default, but on my really low-end NetBook it was disabled. The other thing to bear in mind is that if you have a small root or var partition - i.e. whichever one stores /var/lib/mongodb - you may run into trouble. I ended up symlinking this directory to a drive that had more space, just to avoid problems.
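The symlink trick itself is nothing MongoDB-specific; the shape of it is sketched below, with throwaway directories standing in for /var/lib/mongodb and the roomier drive. For real use, stop mongod first and mind ownership and permissions:

```shell
# Throwaway stand-ins; in real life SRC is /var/lib/mongodb and DST sits
# on a partition with plenty of space. Stop mongod before doing this.
SANDBOX=$(mktemp -d)
SRC="$SANDBOX/var/lib/mongodb"
DST="$SANDBOX/big-drive/mongodb"
mkdir -p "$SRC" "$(dirname "$DST")"
touch "$SRC/local.0"      # pretend data file

mv "$SRC" "$DST"          # relocate the data to the roomy drive
ln -s "$DST" "$SRC"       # leave a symlink where mongod expects its files

ls -l "$SRC"              # SRC is now a symlink; mongod is none the wiser
```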

Once MongoDB was up and running, it was time to find a management UI. Unfortunately MongoVue - the UI that all the Windows cool kids use - is not available on Linux. This is a bit disappointing because it seems rather full-featured and well funded and - just to rub salt in the wound - it's a .Net application. The old lack of cross-platform mentality surfaces yet again. Undeterred once more, I settled on RoboMongo instead. Not quite as mature, but it seemed good enough for my needs. Simple to set up too:

$ wget -O robomongo-0.8.4-x86_64.deb
$ gdebi-gtk robomongo-0.8.4-x86_64.deb

If you don't have gdebi-gtk, any other Debian installer would do, including dpkg -i robomongo-0.8.4-x86_64.deb.

If you are an Emacs user, be sure to install the inferior mode for Mongo. It works well on Linux but has the usual strange input-consumption problems one always gets on Windows.

Going back to Mono, all one needs to do is use NuGet to install the C# Mongo driver. Once that was done, reading, writing, updating etc. all worked out of the box.

Log4net

Paradoxically, the dependency I thought would give me the least amount of trouble ended up being the most troublesome of all. Getting log4net to work was initially really easy - the usual NuGet install. But then, not happy with such easy success, I decided I needed a single log4net.config file for all my projects. This is understandable, since all that differed amongst them was the log file name; it seemed a bit silly to have lots of copy-and-paste XML lying around. So I decided to use Dynamic Properties, as explained in this blog post: Log4net Dynamic Properties in XML Configuration. This failed miserably.

As everyone knows, log4net is a pain in the backside to debug. For the longest time I didn't have the right configuration; eventually I figured out what I was doing wrong. It turns out the magic incantation is this (I had missed the type bit):

        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
            <file type="log4net.Util.PatternString" value="APrefix.%property{ApplicationId}.log" />

Just when I thought I was out of the woods, I hit a Mono limitation: CallContext.LogicalGetData is not yet implemented in Mono 3.0. It is available in later versions of Mono, but these are not yet in Debian Testing. Undeterred, I decided to compile Mono from scratch. It turned out to be rather straightforward:

$ git clone https://github.com/mono/mono.git
$ cd mono
$ ./autogen.sh --prefix=${YOUR_INSTALL_LOCATION}
$ make -j${NUMBER_OF_CORES}
$ make install

Replace (or set) ${YOUR_INSTALL_LOCATION} and ${NUMBER_OF_CORES} as required. Once you've got it installed, you need to tell MonoDevelop about the new runtime. Go to Edit, Preferences, then choose .Net Runtimes and click on Add. Point it to the top-level directory containing your installation (e.g. ${YOUR_INSTALL_LOCATION}) and it should find the newly built Mono. I then set that as my default. Incredibly enough, from then on it all just worked.

Runtimes in MonoDevelop

NUnit

As mentioned in the previous post, you should replace the NUnit references you get from MonoDevelop with NuGet ones. This is because you may be using some of the newer features of NUnit, which are not available in the version that ships with Mono. At any rate, it gives you more confidence in the dependency, rather than depending on the environment.

Another problem I found was disabling shadow copying. This does not seem to be an option in the MonoDevelop UI or the solution. It is rather annoying if you need to have some log4net config files in the test directory - as I did, due to the Dynamic Properties mentioned above.

Other than that, NUnit worked very well.

Libraries Overview

Compiling Mono from source is obviously not ideal, and perhaps the main thing to worry about is how to get the latest Mono packages. As with MongoDB, it would perhaps be better to have a repository supported by the Mono community that offers more up-to-date packages, at least for the more intrepid users. Although some of these existed in the past (particularly Ubuntu PPAs), they all seem to have gone stale.

Having said that, there are still no showstoppers - the code is working on both Visual Studio 2013 and Mono.

Date: 2014-05-27 22:29:38 BST


Nerd Food: Interesting…

Some interesting stuff I bumped into these last couple of weeks.


  • Banksy - Webby Awards Video: Banksy at his usual irreverent best. Hat-tip Emanuel Ferreira.
  • Listen to “Brian Eno Day”: a 12-Hour Radio Show Spent With Eno & His Music (Recorded in 1988). If you like ambient it's a must listen. Hat-tip Joao Santos.
  • Snowpiercer: To be fair, I didn't get a chance to watch it yet, but the comments from Bruno Antunes were so glowing it went up on my list of must-watch movies.
  • Beware of Mr. Baker: Frigging amazing. I just can't believe I had never heard of this guy until I watched the movie. I mean, he played with Fela, Clapton, all the Jazz greats that were still alive. No words to describe it - just watch it. Crazy, crazy guy. Blind Faith are one of the many bands he was in.
  • Surfwise: Surf movie, but with a twist - or should I say nine. Crazy family with a crazy dad that decided one day that he'd spend the rest of his life surfing, and the world be damned. Amazing movie. It's also interesting because it outlines the consequences of such "heroic" decisions on the family.
  • Makthaverskan: Started listening to this rocky Swedish band.
  • El Empleo / The Employment: Great animation. Perfect description of the modern world.
  • The Expert: Like El Empleo, a dark take on the modern world, but this time as a dark comedy. Hilarious, but yet again, very painful.


Other Programming Topics

Start-ups and Business


  • Com os Holandeses: I came across a Portuguese author I had never heard of, but who appears to be quite well known in the Netherlands. Added to the list of books to read.

Date: 2014-05-27 12:14:54 BST


Monday, May 26, 2014

Nerd Food: Using Mono In Anger - Part II

In which we setup our working environment.

In Part I we explored the reasons why I decided to use Mono in anger. In this part we shall get ourselves a working environment comparable to a Visual Studio 2013 setup.

First things first:

  • whilst this post presumes that you may not know everything about Linux, it does expect a minimum familiarity with the terminal, desktop, etc.;
  • having said that, we try to keep things simple. For instance, note that commands that start with # need to be executed as root and commands that start with $ should be executed as an unprivileged user. Also, \ means the command should really be one big line;
  • my distribution of choice is up-to-date Debian Testing, which is where all the commands have been tested; hopefully most things will work out of the box for you - especially if you are using Debian or Ubuntu - but it's better if you understand what each command is trying to achieve, just in case it doesn't quite work;
  • finally, I presume you know about .Net and its libraries - although I do try to give some minimal context.

Without further ado, let's set up our Mono environment.

Installing Mono

We start off by following the instructions on installing mono for Debian available in the Mono project wiki:

# apt-get install mono-complete

It's worth mentioning that whilst the Mono page above mentions v2.10.8.1 for Debian Testing, v3.0.6 has actually already migrated from Unstable, so that is what you will be getting. A note on versions here: with Mono, latest is always greatest. The project moves at such a fast pace that getting an old version is almost always a bad idea.

If everything has gone as expected, you should now see something along the lines of:

$ mono --version
Mono JIT compiler version 3.0.6 (Debian 3.0.6+dfsg2-12)
Copyright (C) 2002-2012 Novell, Inc, Xamarin Inc and Contributors.
    TLS:           __thread
    SIGSEGV:       altstack
    Notifications: epoll
    Architecture:  amd64
    Disabled:      none
    Misc:          softdebug
    LLVM:          supported, not enabled.
    GC:            Included Boehm (with typed GC and Parallel Mark)

To make sure we have a fully working setup, let's compile a simple hello world:

$ echo 'class Program { static void Main() { System.Console.WriteLine("HelloWorld!"); }}' > hello.cs
$ mcs hello.cs
$ mono hello.exe

Installing MonoDevelop

Of course, no self-respecting Windows .Net developer will code from the command line; they will ask for an IDE. The IDE of choice for Mono is MonoDevelop - sometimes called Xamarin Studio, because Xamarin is the main company behind it and has a commercial product based on it.

It's pretty straightforward to install it on Debian:

# apt-get install monodevelop monodevelop-nunit monodevelop-versioncontrol \
     monodevelop-database monodevelop-debugger-gdb \
     libicsharpcode-nrefactory-cecil5.0-cil \
     libicsharpcode-nrefactory-cil-dev \
     libicsharpcode-nrefactory-csharp5.0-cil \
     libicsharpcode-nrefactory-ikvm5.0-cil \
     libicsharpcode-nrefactory-xml5.0-cil

You will most likely get MonoDevelop v4.0. Alas, v4.2 has already been released but has not yet hit Debian. As with Mono, latest is always greatest. Note that I went a bit overboard here and installed a lot of stuff - you may just want to install monodevelop. I added the NUnit integration and the version control integration, as well as the refactoring tools.

Once it's installed you can start it from the main menu. It should look vaguely like this:

MonoDevelop's main window

The very next thing we need to do is to install the NuGet add-in for MonoDevelop. To do this:

  • Go the Tools menu, Add-in Manager;
  • Click on the Gallery tab;
  • Click on the Repository combo-box and select Manage Repositories…;
  • Click on the Add button and select Register an on-line Repository
  • Paste the repository from the page above, e.g.: and click Ok.

MonoDevelop Add-in Repository

You should now be able to find the NuGet add-in by searching for it on the search box at the top of the dialog:

Installing NuGet Add-in

In my case it's already installed - in your case you should get an Install option. Unfortunately we're not out of the woods just yet. We need to set up the certificates to allow us to download packages:

# mozroots --import --sync
# certmgr -ssl -m https://go.microsoft.com
# certmgr -ssl -m https://nugetgallery.blob.core.windows.net
# certmgr -ssl -m https://nuget.org

Now, to be perfectly honest there are still some setup issues for NuGet that I don't quite understand, but we'll leave that for later.

Setting up a Solution

To create a solution go to the File menu, New and choose Solution…:

Creating a new solution

This will create a solution with a project. Now, on the main screen, create a second project, say MyProject.Tests, by right-clicking the solution, clicking on Add, then Add New Project…, and then filling in the project details: NUnit project and the name.

Adding a new NUnit project

To be perfectly frank, as with Visual Studio, I tend to create projects and solutions from the UI and then edit the raw .sln and .csproj files to get them in my preferred directory layout. At any rate, for this really simple solution we just get the following:

The HelloWorld solution

Now go to each project's options by right-clicking on the project and choosing Options, then find General. There, update the target framework to Mono .Net 4.5.

Using .Net 4.5

You will get some blurb about project file changes; just accept it. The other thing to do is to use xbuild, Mono's equivalent of msbuild, to do the building. To do so, go to Edit, Preferences and Build. Then tick the xbuild check box:

Using xbuild

Now all we need is to set up all of the required NuGet packages. This is where things become a bit complicated. As I said previously, we set up all the required certificates, so things should just work. In practice we still get a few issues. To see what I mean, right-click on a project's references, then Manage NuGet Packages…. The following message comes up:

NuGet certificate problem

If you click yes, you should then get the full-blown list of NuGet packages. But it's a bit annoying to have to do that, since apparently we have installed all of the required certificates. Also, as we shall see, the NuGet restore fails due to certificate problems, but we'll leave that one for later. Once you have waited for a bit, the packages screen will come up; you can use the search box to search for packages and, once happy, click on Add:

Adding Log4Net in NuGet

Using this workflow, add all the required packages. For example, in my case I added:

  • Main project: Log4net, Castle Core, Castle Windsor, Mongo CSharpDriver, Newtonsoft.Json
  • Test project: NUnit. I removed the reference that Mono added and forced it to go via NuGet. This saves you from a lot of problems related to incorrect NUnit versions.

A few points to note here. NuGet works, but it's a little rough around the edges. When you are using it in anger, the following things will become annoying:

  • the whole certificate thing, which seems to make the initial NuGet window slower. Fortunately this appears to happen only once per running session.
  • the inability to use NuGet restore is a pain; it means that every time you swap machines you need to faff around re-downloading packages. I use a dummy project for this - e.g. I add the packages to a project that does not have them and then remove them.
  • the NuGet add-in ignores the .nuget configuration and instead uses a "hard-coded" packages directory at the same level as the solution. This is a bit painful because your Windows developers will see the packages in one location (i.e. where the .nuget config states they should be) but they will be elsewhere in Mono. Best not to use .nuget configurations until they are supported in Mono.
  • the NuGet add-in doesn't seem to work when there is no network connection. This is a bit painful because it means you can't add a dependency that has already been added to another project without being online. In this scenario the easiest thing to do is to edit the .csproj and packages.config files manually.
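For instance, to add log4net to a project by hand while offline - assuming the package has already been downloaded into the solution-level packages directory by another project - the edits look roughly like this (version number and HintPath are illustrative):

```xml
<!-- In packages.config (version is a placeholder): -->
<package id="log4net" version="2.0.3" targetFramework="net45" />

<!-- In the .csproj, inside an ItemGroup (HintPath is a placeholder): -->
<Reference Include="log4net">
  <HintPath>..\packages\log4net.2.0.3\lib\net40-full\log4net.dll</HintPath>
</Reference>
```

The HintPath just needs to point at the assembly inside the package folder; after a reload, MonoDevelop picks the reference up as if the add-in had created it.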

A Note on F#

As I spent a considerable time describing the F# setup in the past (Adventures in F# Land Part 1, Part 2 and Part 3), it's only fair we cover how things are done these days. First install the required packages:

# apt-get install fsharp libfsharp-core4.3-cil libfsharp-data-typeproviders4.3-cil

Again, YMMV - I always go a bit overboard and install everything. Then install the F# add-in in MonoDevelop. To do so, simply go to Tools, Add-in Manager, then click on Gallery and expand Language Bindings. Click on F# and then Install.

Adding F# Support

It is that easy these days. We may have to do an F# in anger series later on, to see how well it stacks up.

Setup Review

First, I'd like to say that there are a lot of positives in the setup experience. By way of contrast, I just spent the best part of two days getting Visual Studio to work due to some licensing issues - it just wouldn't accept my key for some reason. Visual Studio 2013 is also rather demanding hardware-wise, whereas MonoDevelop seems to hover in the 200 MB range (my actual solution has around 12 projects).

In the main, the polish on MonoDevelop is quite good, with many things working out of the box just as they do in Visual Studio. And, once you get past some of its minor quirks, Matt Ward's NuGet add-in does work; I have been using it in anger for three weeks and can attest to that. But it could be argued that NuGet is such a central component of the .Net experience that it warrants thorough QA - perhaps by Xamarin - to bullet-proof it, at least for the hot use cases.

All in all, we're still very happy campers.

Date: 2014-05-26 20:12:12 BST

Org version 7.8.02 with Emacs version 23
