Wednesday, June 11, 2008

Nerd Food: On Evolutionary Methodology

Unix's durability and adaptability have been nothing short of astonishing. Other technologies have come and gone like mayflies. Machines have increased a thousand-fold in power, languages have mutated, industry practice has gone through multiple revolutions - and Unix hangs in there, still producing, still paying the bills, and still commanding loyalty from many of the best and brightest software technologists on the planet. -- ESR

Unix...is not so much a product as it is a painstakingly compiled oral history of the hacker subculture. -- Neal Stephenson

The Impossibly Scalable System

If development in general is an art or a craft, its finest hour is perhaps the maintenance of existing systems which have high availability requirements but are still experiencing high rates of change. As we covered previously, maintenance in general is a task much neglected in the majority of commercial shops, and many products suffer from entropic development; that is, the piling on of changes which continuously raise the complexity bar, up to a point where it is no longer cost-effective to continue running the existing system. The word "legacy" is in itself filled with predestination, implying old systems cannot avoid time-decay and will eventually rot into oblivion.

The story is rather different when one looks at a few successful Free and Open Source Software (FOSS) systems out there. For starters, "legacy" is not something one often hears on that side of the fence; projects are either maintained or not maintained, and can freely flip from one state to the other. Age is not only _not_ a bad thing, but, in many cases, it is a remarkable advantage. Many projects that survived their first decade are now stronger than ever: the Linux kernel, x.org, Samba, Postgresql, Apache, gcc, gdb, subversion, GTK, and many, many others. Some, like Wine, took a decade to mature and are now showing great promise.

Each of these old timers has its fair share of lessons to teach, all of them incredibly valuable; but the project I'm particularly interested in is the Linux kernel. I'll abbreviate it to Linux or "the kernel" from now on.

As published recently in a study by Kroah-Hartman, Corbet and McPherson, the kernel suffers a daily onslaught of unimaginable proportions. Recent kernels are a joint effort of thousands of kernel hackers in dozens of countries, a fair portion of whom work for well over 100 companies. On average, these developers added or modified around 5K lines per day during the 2.6.24 release cycle and, crucially, removed some 1.5K lines per day - and "day" here includes weekends too. Kernel development is carried out in hundreds of different kernel trees, and the merge paths between these trees obey no strictly enforced rules - they do follow convention, but rules get bent when the situation requires it.

It is incredibly difficult to convey in words just how much of a technical and social achievement the kernel is, but one is still compelled to try. The absolute master of scalability, it ranges from the tiniest embedded processor with no MMU to the largest of the large systems - some spanning as many as 4096 processors - and covering pretty much everything else in between: mobile phones, Set-Top Boxes (STBs), game consoles, PCs, large servers, supercomputers. It supports more hardware architectures than any other kernel ever engineered, a number which seemingly keeps on growing at the same rate new hardware is being invented. Linux is increasingly the kernel of choice for new architectures, mainly because it is extremely easy to port. Even real time - long considered the unassailable domain of special-purpose systems - is beginning to cave in, unable to resist the relentless march of the penguin. And the same is happening in many other niches.

The most amazing thing about Linux may not even be its current state, but its pace, as clearly demonstrated by Kroah-Hartman, Corbet and McPherson's analysis of kernel source size: it has displayed a near constant growth rate between 2.6.11 and 2.6.24, hovering at around 10% a year. Figures on this scale can only be supported by a catalytic development process. And in effect, that is what Linux provides: by getting better it implicitly lowers the entry barrier to new adopters, who find it closer and closer to their needs; thus more and more people join in and fix what they perceive to be the limitations of the kernel, making it even more accessible to the next batch of adopters.

Although some won't admit it now, the truth is none of the practitioners or academicians believed that such a system could ever be delivered. After all, Linux commits every single schoolboy error: started by an "inexperienced" undergrad, it did not have much of an upfront design, architecture and purpose; it originally had the firm objective of supporting only a single processor on x86; it follows the age-old monolithic approach rather than the "established" micro-kernel; its processes appear to be haphazard, including a clear disregard for Brooks's law; it is written in C instead of a modern, object-oriented language; it lacks a rigorous QA process and until very recently even a basic kernel debugger; version control was first introduced over a decade after the project was started; there is no clear commercial (or even centralised) ownership; there is no "vision" and no centralised decision making (Linus may be the final arbiter, but he relies on the opinions of a lot of people). The list continues ad infinitum.

And yet, against all expert advice, against all odds, Linux is the little kernel that could. If one were to write a spec covering the capabilities of vanilla 2.6.25, it would run thousands of pages long; its cost would be monstrous; and no company or government department would dare to take on such an immense undertaking. Whichever way you look at it, Linux is a software engineering singularity.

But how on earth can Linux work at all, and how did it make it thus far?

Linus' Way

I'm basically a very lazy person who likes to get credit for things other people actually do. -- Linus Torvalds

The engine of Linux's growth is deeply rooted in the kernel's methodology of software development, but it manifests itself as a set of core values - a culture. As with any other school of thought, not all kernel hackers share all values, but the group as a whole displays some obvious homogeneous characteristics. These we shall call Linus' Way; they are loosely summarised below (apologies for some redundancy, but some aspects are very interrelated).

Small is beautiful
  • Design is only useful on the small scale; there is no need to worry about the big picture - if anything, worrying about the big picture is considered harmful. Focus on the little decisions and ensure they are done correctly. From these, a system will emerge that _appears_ to have had a grand design and purpose.
  • At a small scale, do not spend too long designing and do not be overambitious. Rapid prototyping is the key. Think simple and do not over design. If you spend too much time thinking about all the possible permutations and solutions, you will create messy and unmaintainable code which is very likely to be wrong. It is best to implement a small subset of functionality that works well, is easy to understand and can be evolved over time to cover any additional requirements.

Show me the Code
  • Experimentation is much more important than theory by several orders of magnitude. You may know everything there is to know about coding practice and theory, but your opinion will only be heard if you have solid code in the wild to back it up.
  • Specifications and class diagrams are frowned upon; you can do them for your own benefit, but they won't sell any ideas by themselves.
  • Coding is a messy business and is full of compromises. Accept that and get on with it. Do not search for perfection before showing code to a wider audience. Better to have a crap system (sub-system, module, algorithm, etc.) that works somewhat today than a perfect one in a year or two. Crap systems can be made slightly less crappy; vapourware has no redeeming features.
  • Merit is important, and merit is measured by code. Your ability to do boring tasks well can also earn a lot of brownie points (testing, documentation, bug hunting, etc.) and will have a large positive impact on your status. The more you are known and trusted in the community, the easier it will be for you to merge new code in and the more responsibilities you will end up having. Nothing is more important than merit as gauged by the previous indicators; it matters not what position you hold in your company, how important your company is or how many billions of dollars are at stake - nor does it matter how many academic titles you hold. However, past actions do not last forever: you must continue to talk sense to have the support of the community.
  • Testing is crucial, but not just in the conventional sense. The key is to release things into a wider population ("Release early, release often"). The more exposure code has, the more likely bugs will be found and fixed. As ESR put it, "Given enough eyeballs, all bugs are shallow" (dubbed Linus' law). Conventional testing is also welcome (the more the merrier), but it's no substitute for releasing into the wild.
  • Read the source, Luke. The latest code is the only authoritative and unambiguous source of understanding. This attitude does not in any way devalue additional documentation; it just means that the kernel's source code overrides any such document. Thus there is a great impetus in making code readable, easy to understand and conformant to standards. It is also very much in line with Jack Reeves's view that source code is the only real specification a software system has.
  • Make it work first, then make it better. When taking on existing code, one should always first make it work as intended by the original coders; then a set of cleanup patches can be written to make it better. Never start by rewriting existing code.
No sacred cows
  • _Anything_ related to the kernel can change, including processes, code, tools, fundamental algorithms, interfaces, people. Nothing is done "just because". Everything can be improved, and no change is deemed too risky. It may have to be scheduled, and it may take a long time to be merged in; but if a change is of "good taste" and, where required, its originator displays the traits of a good maintainer, it will eventually be accepted. Nothing can stand in the way of progress.
  • As a kernel hacker, you have no doubts that you are right - but you actively encourage others to prove you wrong and accept their findings once they have been a) implemented (a prototype would do, as long as it is complete enough for the purpose) b) peer reviewed and validated. In the majority of cases you gracefully accept defeat. This may imply a 180-degree turnaround; Linus has done this on many occasions.
  • Processes are made to serve development. When a process is found wanting - regardless of how ingrained it is or how useful it has been in the past - it can and will be changed. This is often done very aggressively. Processes only exist while they provide visible benefits to developers or, in very few cases, due to external requirements (ownership attribution comes to mind). Processes are continuously fine-tuned so that they add the smallest possible amount of overhead to real work. A process that improves things dramatically but adds a large overhead is not accepted until the overhead is shaved off to the bare bone.
Tools
  • Must fit the development model - the development model should not have to change to fit tools;
  • Must not dumb down developers (i.e. debuggers); a tool must be an aid and never a replacement for hard-thinking;
  • Must be incredibly flexible; ease of use can never come at the expense of raw, unadulterated power;
  • Must not force everyone else to use that tool; some exceptions can be made, but on the whole a tool should not add dependencies. Developers should be free to develop with whatever tools they know best.
The Lieutenants

One may come up with clever ways of doing things, and even provide conclusive experimental evidence on how a change would improve matters; however, if one's change will disrupt existing code and requires specialised knowledge, then it is important to display the characteristics of a good maintainer in order to get the changes merged in. Some of these traits are:
  • Good understanding of kernel's processes;
  • Good social interaction: an ability to listen to other kernel hackers, and be ready to change your code;
  • An ability to do boring tasks well, such as patch reviews and integration work;
  • An understanding of how to implement disruptive changes, striving to contain disruption to the absolute minimum and a deep understanding of fault isolation.
Patches

Patches have been used for eons. However, the kernel fine-tuned the notion to the extreme, putting it at the very core of software development. Thus all changes to be merged in are split into patches and each patch has a fairly concise objective, against which a review can be performed. This has forced all kernel hackers to _think_ in terms of patches, making changes smaller and more concise, splitting out scaffolding and clean-up work, and decoupling features from each other. The end result is a ridiculously large amount of positive externalities - unanticipated side-effects - such as technologies that get developed for one purpose but end up having uses that were never dreamt of by their creators. The benefits of this approach are far too great to discuss here but hopefully we'll have a dedicated article on the subject.
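To make this concrete, here is a sketch of what such a patch might look like on the kernel mailing lists: a one-line subject stating its single objective, a short justification, a diffstat and a small, self-contained diff (C code in unified-diff form) that a reviewer can digest in minutes. The driver, file, functions and author below are all invented for illustration; only the general shape follows kernel convention.

  From: A. Hacker <a.hacker@example.com>
  Subject: [PATCH] foo: avoid redundant lookup in foo_get_widget()

  foo_get_widget() performs the same lookup twice when the cache is
  cold, which shows up on profiles of the (hypothetical) foo driver.
  Look the widget up once and reuse the result. No functional change
  intended.

  Signed-off-by: A. Hacker <a.hacker@example.com>
  ---
   drivers/foo/foo.c | 6 +++---
   1 file changed, 3 insertions(+), 3 deletions(-)

  --- a/drivers/foo/foo.c
  +++ b/drivers/foo/foo.c
  @@ -42,5 +42,5 @@ struct widget *foo_get_widget(struct foo_dev *dev, int id)
   {
  -        if (!foo_lookup(dev, id))
  -                return NULL;
  -        return foo_lookup(dev, id);
  +        struct widget *w = foo_lookup(dev, id);
  +
  +        return w;
   }

Because the patch does exactly one thing, a maintainer can accept or reject it on its own merits, and a sequence of such patches can build up a large change without ever leaving the tree in a broken state.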

Other
  • Keep politics out. The vast majority of decisions are taken on technical merits alone, and very rarely for political reasons. Sometimes the two coincide (such as the dislike for binary modules in the kernel), but one must not forget that the key driver is always the technical reasoning. For instance, the kernel uses the GNU GPL v2 purely because it's the best way to ensure its openness, a key building block of the development process.
  • Experience trumps fashion. Whenever choosing an approach or a technology, kernel hackers tend to go for the beaten track rather than new and exciting ones. This is not to say there is no innovation in the kernel; but innovators have the onus of proving that their approach is better. After all, there is a solid body of over 30 years of experience in developing UNIX kernels; it's best to stand on the shoulders of giants whenever possible.
  • An aggressive attitude towards bad code, or code that does not follow the standards. People attempting to add bad code are told so in no uncertain terms, in full public view. This discourages many a developer, but also ensures that the entry bar is raised to avoid lowering the signal-to-noise (S/N) ratio.

If there ever was a single word that could describe a kernel hacker, that word would have to be "pragmatic". A kernel hacker sees development as a hard activity that should remain hard. Any other view of the world would result in lower quality code.

Navigating Complexity

Linus has stated on many occasions that he is a big believer in development by evolution rather than the more traditional methodologies. In a way, he is the father of the evolutionary approach when applied to software design and maintenance. I'll just call this the evolutionary methodology (EM) for want of a better name. EM's properties make it strikingly different from everything that has preceded it. In particular, it appears to remove most forms of centralised control (a toy sketch of the underlying search loop follows the list below). For instance:

  • It does not allow you to know where you're heading in the long run; all it can tell you is that if you're currently in a favourable state, a small, gradual increment is _likely_ to take you to another, slightly more favourable state. When measured on a large timescale it will appear as if you have designed the system as a whole with a clear direction; in reality, this "clearness" is an emergent property (a side-effect) of thousands of small decisions.
  • It exploits parallelism by trying lots of different gradual increments in lots of members of its population and selecting the ones which appear to be the most promising.
  • It favours promiscuity (or diversity): code coming from anywhere can intermix with any other code.
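For the programmers in the audience, a deliberately crude sketch of that search loop - written in C, the kernel's own language - might look like the toy below. The fitness function, the size of the increments and the population are all invented stand-ins; real evolution (and real kernel development) has no such function written down anywhere, which is rather the point.

  /* Toy sketch of the evolutionary search described above: many
   * candidates in parallel, small random increments, keep only what
   * does not regress, and let candidates borrow from one another.
   * fitness(), the mutation size and POP are invented stand-ins. */
  #include <stdio.h>
  #include <stdlib.h>

  #define POP   8        /* parallel "trees" being tried at once */
  #define STEPS 10000    /* small, gradual increments            */

  /* A stand-in for "how favourable is this state?" - in reality no
   * such global measure exists, let alone one you can write down.  */
  static double fitness(double x)
  {
          return -(x - 3.0) * (x - 3.0);
  }

  static double small_increment(double x)
  {
          return x + ((double)rand() / RAND_MAX - 0.5) * 0.1;
  }

  int main(void)
  {
          double pop[POP];
          int i, s;

          for (i = 0; i < POP; i++)
                  pop[i] = (double)rand() / RAND_MAX * 10.0;

          for (s = 0; s < STEPS; s++) {
                  i = rand() % POP;

                  /* Try a small change; keep it only if it leaves this
                   * candidate in an equally good or better state.      */
                  double candidate = small_increment(pop[i]);
                  if (fitness(candidate) >= fitness(pop[i]))
                          pop[i] = candidate;

                  /* Promiscuity: occasionally adopt a better candidate. */
                  int j = rand() % POP;
                  if (fitness(pop[j]) > fitness(pop[i]))
                          pop[i] = pop[j];
          }

          for (i = 0; i < POP; i++)
                  printf("candidate %d: x = %.3f\n", i, pop[i]);
          return 0;
  }

No individual step knows where the optimum lies; the apparent direction only emerges after the fact, from thousands of tiny, locally justified moves.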

But how exactly does EM work? And why does it seem to be better than the traditional approaches? The search for these answers takes us right back to the fundamentals. And by "fundamentals", I really mean the absolute fundamentals - you'll have to grin and bear it, I'm afraid. I'll attempt to borrow some ideas from Popper, Taleb, and Dawkins to make the argument less nonsensical.

That which we call reality can be imagined as a space with a really, really large number of variables. Just how large one cannot know, as the number of variables is unknowable - it could even be infinite - and it is subject to change (new variables can be created; existing ones can be destroyed, and so on). As for the variables themselves, they change value every so often but this frequency varies; some change so slowly they could be better described as constants, others so rapidly they cannot be measured. And the frequency itself can be subject to change.

When seen over time, these variables are curves, and reality is the space where all these curves live. To make matters more interesting, changes on one variable can cause changes to other variables, which in turn can also change other variables and so on. The changes can take many forms and display subtle correlations.

As you can see, reality is the stuff of pure, unadulterated complexity and thus, by definition, any attempt to describe it in its entirety cannot be accurate. However, this simple view suffices for the purposes of our exercise.

Now imagine, if you will, a model. A model is effectively a) the grabbing of a small subset of variables detected in reality; b) the analysis of the behaviour of these variables over time; c) the issuing of statements regarding their behaviour - statements which have not been proven to be false during the analysis period; d) the validation of the model's predictions against past events (calibration). Where the model is found wanting, it needs to be changed to accommodate the new data. This may mean adding new variables, removing existing ones that were not found useful, tweaking variables, and so on. Rinse, repeat. These are very much the basics of the scientific method.
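A caricature of that loop in code - again in C, with completely invented data and tolerances - might look like this: a one-variable model makes statements about reality, checks them against past observations, and is revised whenever a statement is falsified.

  /* Caricature of the modelling loop above: take one variable from
   * "reality", make a statement about its behaviour, validate against
   * past data, and revise the model whenever it is found wanting.
   * The observations, the model and the tolerance are all invented. */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
          /* Past observations of a single variable. */
          const double observed[] = { 1.0, 2.1, 2.9, 4.2, 5.0 };
          const int n = sizeof(observed) / sizeof(observed[0]);

          double slope = 0.5;      /* the model: value(t) = slope * t */
          const double tol = 0.3;  /* how wrong we tolerate being     */
          int t, revisions = 0;

          for (t = 0; t < n; t++) {
                  double predicted = slope * (t + 1);
                  double error = observed[t] - predicted;

                  if (fabs(error) > tol) {
                          /* Statement falsified: revise the model just
                           * enough to accommodate the new data.        */
                          slope += error / (t + 1);
                          revisions++;
                  }
          }

          printf("final slope %.2f after %d revisions\n", slope, revisions);
          return 0;
  }

The model is never "proven"; all we can say is that, so far, its statements have not been falsified by the data it has seen - and that the next observation may force yet another revision.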

Models are rather fragile things, and it's easy to demonstrate empirically why. First and foremost, they will always be incomplete; exactly how incomplete one cannot know. You never know when you are going to end up outside the model until you are there, so it must be treated with distrust. Second, the longer it takes you to create a model - a period during which validation is severely impaired - the higher the likelihood of it being wrong when it's "finished". For very much the same reasons, the larger the changes you make in one go, the higher the likelihood of breaking the model. Thirdly, the longer a model has been producing correct results, the higher the probability that the next result will be correct. But the exact probability cannot be known. Finally, a model must endure constant change to remain useful - it may have to change as frequently as the behaviour of the variables it models.

In such an environment, one has no option but to leave certainty and absolutes behind. It is just not possible to "prove" anything, because there is a large component of randomness and unknowability that cannot be removed. Reality is a messy affair. The only certainty one can hold on to is that of fallibility: a statement is held to be possibly true until proven false. Nothing else can be said. In addition, empiricism is highly favoured here; that is, the ability to look at the data, formulate a hypothesis without too much theoretical background and put it to the test in the wild.

So how does this relate to code? Well, every software system ever designed is a model. Source code is nothing but a set of statements regarding variables and the rules and relationships that bind them. It may model conceptual things or physical things - but they all inhabit a reality similar to the one described above. Software systems have become increasingly complex over time - in other words, taking on more and more variables. An operating system such as Multics, deemed phenomenally complex for its time, would be considered normal by today's standards - even taking into account the difficult environment at the time, with non-standard hardware, lack of experience in that problem domain, and so on.

In effect, it is this increase in complexity that breaks down older software development methodologies. For example, the waterfall method is not "wrong" per se; it can work extremely well in a problem domain that covers a small number of variables which are not expected to change very often. You can still use it today to create perfectly valid systems, just as long as these caveats apply. The same can be said for the iterative model, with its focus on rapid cycles of design, implementation and testing. It certainly copes with much larger (and faster moving) problem domains than the waterfall model, but it too breaks down as we start cranking up the complexity dial. There is a point where your development cycles cannot be made any smaller, testers cannot augment their coverage, etc. EM, however, is at its best in absurdly complex problem domains - places where no other methodology could aim to go.

In short, EM's greatest advantages in taming complexity are as follows:
  • Move from one known good point to another known good point. Patches are the key here, since they provide us with small units of reviewable code that can be checked by any experienced developer with a bit of time. By forcing all changes to be split into manageable patches, developers are forced to think in terms of small, incremental changes. This is precisely the sort of behaviour one would want in a complex environment.
  • Validate, validate and then validate some more. In other words, Release Early, Release Often. Whilst Linus has allowed testing and QA infrastructure to be put in place by interested parties, the main emphasis has always been placed on putting code out there in the wild as quickly as possible. The incredibly diverse environments on which the kernel runs provide a very harsh and unforgiving validation that brings out a great number of bugs that could not have possibly been found otherwise.
  • No one knows what the right thing is, so try as many avenues as possible simultaneously. Diversity is the key, not only in terms of hardware (number of architectures, endless permutations within the same architecture, etc.), but also in terms of agendas. Everyone involved in Linux development has their own agenda and is working towards their own goal. These individual requirements, often conflicting, go through the kernel development process and end up being converted into a number of fundamental architectural changes (in the design sense, not the hardware sense) that effectively are the superset of all requirements, and provide the building blocks needed to implement them. The process of integrating a large change to the kernel can take a very long time, and be broken into a sequence of never-ending patches; but many a time it has been found that one patch that adds infrastructure for a given feature also provides a much better way of doing things in parts of the kernel that are entirely unrelated.

Not only does EM manage complexity really well but it actually thrives on it. The pulling of the code base in multiple directions makes it stronger because it forces it to be really plastic and maintainable. It should also be quite clear by now that EM can only be deployed successfully under somewhat limited (but well defined) circumstances, and it requires a very strong commitment to openness. It is important to build a community to generate the diversity that propels development, otherwise it's nothing but the iterative method in disguise done out in the open. And building a community entails relinquishing the traditional notions of ownership; people have to feel empowered if one is to maximise their contributions. Furthermore, it is almost impossible to direct this engine to attain specific goals - conventional software companies would struggle to understand this way of thinking.

Just to be clear, I would like to stress the point: it is not right to say that the methodologies that put emphasis on design and centralised control are wrong, just like a hammer is not a bad tool. Moreover, it's futile to promote one programming paradigm over another, such as Object-Orientation over Procedural programming; one may be superior to the other in the small, but in the large - the real world - they cannot by themselves make any significant difference (class libraries, however, are an entirely different beast).

I'm not sure if there was ever any doubt; but to me, the kernel proves conclusively that the human factor dwarfs any other in the production of large scale software.

Monday, January 28, 2008

Super Angola!!!

Incredible. Amazing. We actually did it. We managed to beat Senegal. Our stars Flavio and especially the new Manchester United player Manucho did the job and the end result was an amazing 3-1. Now we're only one draw away from going past the group stages for the first time ever. So all fingers crossed for Thursday 17:00 UK time, when we face the very difficult obstacle of Tunisia.


(C) 2008 Associated Press

Sunday, January 27, 2008

Ghana 2008 - Forca Palancas!!

Emotions are running high at the African Cup! Angola started well against our regional rivals South Africa, but gave in at the end. To be fair, South Africa was dominant for periods of the game, and did deserve the draw. Today we have a rather difficult game against Senegal (UK 17:00). The coverage in the UK has been superb, with all the games available on BBC interactive (on BBC1 just press the Red Button).

A positive note for Ghana too: the stadiums are superb, and things have been rather well organised, if we ignore minor glitches (like the electricity going, or playing two games in the same stadium without allowing the grass to recover, or the disorganisation with regards to granting press credentials). The camera work has been top notch, at European level. The sound could perhaps be a bit better. All in all, the best CAN ever, methinks. One lesson Angola should learn for 2010 is to ensure all tickets get sold. It's much more important to have all stadiums full than to profit from the event.

The main site for the event is http://www.ghanacan2008.com/. Not the best (can't find any pictures or live results, and the content is rather limited), but not the worst either, showing how far things have come and how much the quality bar has been raised.

(C) MTNFootball.com

Saturday, October 20, 2007

.signature

One man's constant is another man's variable. -- Alan Perlis


Alan Perlis was one of the finest specimens of the Real Programmer breed. Back in the days when Computer Scientists didn't exist, he and his kind were responsible for making many of the decisions that shape our view of computers today. I'm particularly fond of Perlis because of his views on Computer Science:

I think that it's extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free perfect use of these machines. I don't think we are. I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house. I hope the field of computer science never loses its sense of fun. Above all, I hope we don't become missionaries. Don't feel as if you're Bible salesmen. The world has too many of those already. What you know about computing other people will learn. Don't feel as if the key to successful computing is only in your hands. What's in your hands, I think and hope, is intelligence: the ability to see the machine as more than when you were first led up to it, that you can make it more.
The Structure and Interpretation of Computer Programs by Abelson, Sussman, and Sussman

Unfortunately, things haven't quite turned out like Perlis would have wanted.

Besides his many contributions to Computer Science, such as his work on ALGOL, Perlis is very well known for his Epigrams on Programming, of which our quote is the first one. I like this quote because it reminds me that there can never be an ultimate truth in programming due to our human condition.

Wednesday, October 03, 2007

Nerd Food: Interview with Federico Mena-Quintero

Pretty much anyone who is involved with Free Software - even just as a lowly user like myself - has heard of Federico. His blog is a source of insightful ideas on Gnome, and lately, on performance - combined with a healthy dose of interest in politics and, more importantly, good food. I decided to send a few questions to Federico, mainly on the topics I was most curious about, and he kindly replied to my questions - and did so in record time! Many thanks to Federico for taking time off his busy hacking schedule for this interview.
(C) Gnome Foundation

1. You are one of the founders of the Gnome project, which is currently celebrating ten years of existence. In a recent interview you gave to Fosdem, you considered the platform to be maturing. However, as we all know, the last 10% normally take 90% of the time, and it's considered to be boring work. What do you think the Gnome project needs to do to get people to focus on those remaining 10%?


Basically, to provide an incentive to get that last 10% of the work done :) Instead of smacking people with a stick for not writing documentation, you could have a web page with a bar chart of "percentage of documentation coverage". Then it becomes a competition: use a carrot instead of a stick.

I'd also like companies to get more involved in this. If they want to ship GNOME as a development platform they support, then they could very well employ people to do those missing bits.

2. You have been one of the champions of performance in Gnome for a while now. As functionality increased, Gnome started suffering more and more from performance problems, particularly when looked at from a low end perspective. You have been trying to explain to the masses that performance work is interesting. What do you think can be done to increase developer focus on this neglected area?

The thing about fixing performance problems is that nobody teaches you how to do it. There is very little documentation out there on how to generically approach an optimization problem (I intend to do something about this, but oh, time, time, time!) :)

Also, sometimes you fix a performance problem, but it reappears in the future. This happens when you don't leave an infrastructure in place to let you run a benchmark periodically. You need to be able to see if there are performance regressions.

Our tools are slowly getting better, but there are really very few people working on optimization and profiling tools. It takes a *ton* of time and skill to write a good tool; maybe that's why there are so few of them.

Finally, profiling and optimizing is really about following the scientific method ("make a hypothesis, change one thing at a time, measure, confirm your hypothesis, etc."). This requires discipline and a lot of patience.

Basically, it's a problem of education :)

3. Earlier on this year, Gnome users and developers met for GUADEC. Did you find the conference as productive as in previous years? How important is GUADEC for the Gnome user and developer community?

Yes, this GUADEC was tremendously productive! I think the venue helped a lot; the Birmingham Conservatoire is rather compact and has nice practice rooms that anyone can use. So, you could grab a couple of hackers and go to a room to hack peacefully.

GUADEC has always been important, even more so now that our community is large and widespread. It is about the only time in the year when most of the GNOME contributors get together in a single place and are able to talk in person. Do not underestimate the productivity of talking over a beer :)

4. From the outside world, it appears Novell is a company that has regained its soul and direction with Linux. How was the transition from Ximian into Novell?

Like all acquisitions, it was a bit rough at first. It's what you get when you switch from being in a small company where you know all of the employees, to one with several thousands of people. You have to adjust to bigger processes, more layers of management, new locations, new paperwork...

It has been very interesting to see the mindset of the old-time Novell people change over time. At first they seemed reluctant to touch Linux and free software, since they were of course Windows users. Then we had a period with lots of questions, lots of bugs that needed to be fixed, lots of re-training... and now we are in a very nice period, when people have accepted that we must all use our own free software. People seem to be productive with it and happy.

I miss the monkeys, though.

5. You are currently telecommuting from Mexico, a position envied by most developers out there. Do you find that telecommuting helps improve your productivity? Are there any downsides to it?

It has good things and bad things. Good things: working in your pajamas if you feel like it, not having to commute, taking a pause when you are stuck in a hard problem to do a bit of gardening. Bad things: you can't talk to people in person. You must fix all your networking problems yourself. Sometimes, when you are uninspired, it's nice to be able to look over someone else's shoulder or talk to them.

6. Can you describe your typical day at work?

Well, since I work from home... :)

I wake up. If my wife and I are hungry, we make breakfast while my email gets downloaded. If we are not hungry, I'll just check for super-urgent email and then start programming (fixing bugs, doing new development, reviewing patches, etc.).

I usually try to get some programming done in the morning, while my brain is fresh. Processing your email in the morning is a really bad idea; it will take you up to the afternoon and by then you'll be too tired to really write code.

We have lunch at really irregular hours. Sometimes it's more like an early dinner. I have the bad habit of not stopping working until I'm exhausted or my wife is angry that we haven't gone out to the supermarket yet, but I'm trying to fix that :)

In the afternoon I tend to do "light" work... maintaining wikis, answering email, coordinating people. I don't really have a fixed work schedule.

7. Many developing countries are increasingly looking at Free Software as a way to bring down the digital divide. Do you find that Mexico is taking advantage of Free Software - particularly since it has two lead Free Software developers? Are there any lessons to be learned from Mexico's experience?

Mexico is blessed and cursed to be so close to the USA. There is plenty of basic usage of free software by individuals (often enthusiastic students), but relatively little usage in the public and private sectors.

People in Mexico get very impressed by rich people; most Mexicans want to be like the rich people from the USA they see on TV. It's very easy to woo us into accepting their ways.

So, every time there has been some noise about using free software in the public sector, Bill Gates has flown down, organized a big business lunch with government officials, and made sure that they keep using Microsoft products. If you are an ignorant politician, you will love to gloat that you had lunch (imagine, lunch!) with Bill Gates, the richest man in the world --- and whatever he says must be correct, of course. The problem we have is that most of our politicians don't have the faintest idea of the economic and cultural implications of free software, unlike those in the European Union (see the recent report on the economic impact of free software there!).

Thanks for the interview!

Saturday, September 29, 2007

.signature

"We must know, we shall know." -- David Hilbert


David Hilbert was a great German mathematician. What I appreciate the most about him is his quixotic personality and single-mindedness, going along with Bertrand Russell on their impossible quest to clean mathematics of all doubt and uncertainty, always searching for strict solutions through pure thought. In 1900, Hilbert came up with a list of 23 fundamental problems, many of which are still being investigated to this day. In 1930, Hilbert finished a famous speech in Königsberg with the words "We must know, we shall know", a phrase that fits perfectly the life-long devotion he had for mathematics.

Friday, September 28, 2007

Mighty Monty is Down

We knew it had to happen one day, but never this soon. The day had started badly, a drizzly sort of day, greyness and cold everywhere. To make matters worse, London transport was yet again against me; trains were cancelled, trains were overflowing with people, the human drones bent on one thing only: to get to their destination at any cost. I was one of them. In the madness of rush hour, a distress call reached me: Shahin and Monty were in big trouble.

Monty, our faithful Rover Metro, has been with us for just under six months, and in this period, it has been the definition of reliability itself. Its name comes from the licence plate - who needs vanity plates when sheer randomness is trying to tell you something? - and its character is as English as the brand: not particularly pretty but very functional and reliable. Never once did it break down, never once did it chug - a real trooper, always ready for the next long haul trip. When we came back from Africa, Monty took us from London to Southampton and back several times a week. It took us from Hertfordshire to London almost weekly. And he took Shahin to work and back every day. Ah, but not Friday.

Shahin was driving Monty along on the motorway as usual, seventy miles per hour, maybe more, when Monty started to lose speed and make noises of all sorts; suddenly from the fast lane she had to move to the middle lane; soon after, from the middle lane down to the slow lane; and from the slow lane, having nowhere else to go, she had to get off the motorway. She remembered the wise words of Jay to our friend Stacey, also involved in an unfortunate breakdown: "Whatever you do, get the hell out of the motorway!!!". The lights were flashing, smoke was coming out of the engine, Stacey was scared, but she managed to impose her will on the unruly metal. And so did Shahin, inspired by Stacey's brave behaviour in combat, and by the heavy cost of towing cars off the motorway.

Since, unwisely, we didn't have any coverage of any kind - we were going to do it, I swear! just never had the time! - we had no option but to tow the car ourselves. Shahin first tried it with her sister and the brother-in-law, but their car didn't have the required apparatus. Then she rang Stacey for help, and her boyfriend Jay agreed to come to the rescue later on at night.

Night came and we all met down at Stacey's house for the operation. In our innocence, we were entirely unconcerned - how difficult can it be, right? Then Shahin had a warning call from her brother, telling her how hard towing would be, had we done it before and so on. Even then I still remained unconcerned. It was only when we got to Monty and Jay started giving us instructions, in that mellow but grave voice of his, that it finally sank in: "whatever you do, make sure you keep the rope taut or you'll end up running into the back of the van. And remember, I won't brake, so you have to brake for me. If I brake you won't have enough time to react and you'll crash into me." OK then, I thought, other than the fact that we were going to die, it's a dead easy job.

Taut was a word I learned then, but which will undoubtedly stay with me forever. The cars got hooked up just outside of Welwyn, our target being Arlesey - twenty minutes of straight driving at a good speed. Miles away. And that's when it dawned on me how hard this was going to be. Shahin was driving - I was nowhere near brave enough.

We drove in the dark, cold English countryside lanes, barely able to see anything but the white van one meter in front of us and its flashing lights. I thought ten miles or so per hour was going to be our top speed, but the speedometer just wouldn't obey and kept on going higher and higher until it settled at thirty or so. It felt like the fastest ride we had ever had. Trees were rushing by us, darkness was rushing fast. Like good soldiers, we focused on the rope and kept it as tight as possible, as tight as it had ever been before. But to keep it tight, we had to brake often; and knowing the precise amount of braking required is nigh impossible. Every time Shahin pressed the brakes, time froze for a split second; then the van would yank us, making us bounce like a ball. We would then do the same to the van, pulling it backwards, until the whole process would settle and we'd be on a straight line again. Perfectly within the laws of physics, but extremely scary nonetheless.

We stared intensely at the rope, to the exclusion of everything else. Not much we could see anyway. But then, braking took its toll and a brake pad died with an awful grinding noise - hell itself and its horsemen coming after us. We panicked with the noise, but kept on going straight on. The worst was still to come. As we passed one strangely named locality after another, we suddenly noticed we weren't going the right way. It could be that Jay knew a shortcut, or even a long cut, anything but just get us there. But no, we were really, truly lost. All cars stopped, maps were taken out. We had crossed the county border, and were now in the strange land of Bedfordshire - effectively, off the map. On the good side, it appeared we were not that far away.

Eventually we settled on a plan of attack; but then, as we started the cars and went past a hump, the rope snapped. Jay kept on going, but we got left behind. I thought it was the end of our adventure, somewhere in the barren lands of Bedfordshire; all was lost and we'd have to call some towing company. But resourceful Jay got rid of the metal bits, tied a simple knot and we were on our way again. All the excitement was a bit too much for Shahin; she was getting really scared by this point, but kept on going. There was nothing we could do but keep on going till the end.

It's a strange feeling, being behind a car, two meters or less, at thirty miles per hour; your brain is fully aware that any braking, any braking at all, and you will crash. It's a simple equation really.

Sometime later we found ourselves driving in the town centre of Arlesey, past all the pubs, past all the shops, excitedly looking for the garage. Shahin spotted it, screaming. We had made it alive. But we learned our lesson. Next time, we'll pay the hundred pounds for towing gladly - and probably even add a tenner for the chap.