Where do you start building a large, scalable service? Chances are you will start with work done by Brad Fitzpatrick, Doug Cutting, and Simon Peyton-Jones. Brad is the author of several core tools of Web 2.0: memcached, MogileFS, and Perlbal. Doug gave us Lucene, Nutch, and Hadoop, an open source implementation of Google's MapReduce. Simon researches the functional language Haskell as one of the core developers of the Glasgow Haskell Compiler. Today, all three joined Tim O'Reilly on stage at the Open Source Radar Executive Briefing to discuss how Web 2.0 requirements are changing the open source developer's toolkit.
Having previously discussed how Web 2.0 can be viewed as an operating system, Tim started with the question: what services need to be developed to complete the platform? Doug wishes for the proliferation and popularization of large-scale tools like Google's BigTable and MapReduce.
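For readers who haven't met MapReduce: it runs a user-supplied map function over input records, groups the intermediate key/value pairs by key, and folds each group with a reduce function. A toy, single-machine sketch of the idea (the names here are mine, not Hadoop's API):

```haskell
import qualified Data.Map.Strict as Map

-- Toy sequential model of MapReduce: the mapper emits key/value
-- pairs, the framework groups them by key, and the reducer folds
-- each group down to a single value.
mapReduce :: Ord k => (a -> [(k, v)]) -> (v -> v -> v) -> [a] -> Map.Map k v
mapReduce mapper reducer =
  Map.fromListWith reducer . concatMap mapper

-- The classic example: count word occurrences across documents.
countWords :: [String] -> Map.Map String Int
countWords = mapReduce (\doc -> [(w, 1) | w <- words doc]) (+)

main :: IO ()
main = print (Map.toList (countWords ["hello world", "hello hadoop"]))
-- prints [("hadoop",1),("hello",2),("world",1)]
```

The real systems distribute the map and reduce phases across many machines; the programming model stays this small.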
Brad hopes new or extended frameworks will guide people toward scalable solutions. He argues that too many of the current frameworks and tools, like object-relational mappers, make easy things easy and hard things really hard. These shortcuts let you design a system quickly, but then make scaling it up even more difficult.
Approaching the problem from a different angle, Simon is researching functional programming languages. Designed to minimize side effects and shared state, these languages enable programmers to write applications that scale cleanly as demand rises. Haskell makes shared-nothing the default mode of development.
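A minimal sketch of what shared-nothing buys you (my example, not Simon's): a pure Haskell function's type guarantees it touches no mutable state, so independent calls can be evaluated in any order, or on different cores, without locks.

```haskell
-- The type 'Int -> Int' promises no side effects: given the same
-- input, square always returns the same output and shares nothing.
square :: Int -> Int
square x = x * x

-- Summing the squares of a list touches no shared mutable state.
-- A parallel strategy (e.g. parMap from the 'parallel' package)
-- could evaluate the map concurrently without changing the result.
sumSquares :: [Int] -> Int
sumSquares = sum . map square

main :: IO ()
main = print (sumSquares [1..10])
-- prints 385
```

This is the property that makes parallelizing pure code a runtime concern rather than a redesign.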
Simon ended with an interesting question: what if we could compile to Hadoop? I think that letting developers write parallel code effortlessly is the way forward in a multi-core future.