Professor Edward A. Lee from the EECS department of UC Berkeley wrote The Problem With Threads (PDF) last year. In it, he observes that threads discard determinism and open the door to subtle yet deadly bugs, and that while these problems were to some extent manageable on single-core systems, threads on multicore systems will magnify them out of control. He suggests the only solution is to stop bolting parallelism onto existing languages and components, and instead to design new deterministically composable components and languages.
This paper reflects two trends we see here at Radar: the first is towards multicore systems and the growing importance of distributed execution; the second is the increasing relevance of languages like Erlang, Haskell, and E. The growth of multicore is significant: if you want your program to run faster, the days of buying faster hardware are coming to an end. Instead, we’re looking at a time when you must make your program run faster on more (slow) hardware. Enter parallel programming, clusters, and their hyped big brother “grid computing”.
Google have obviously faced this problem and solved it with MapReduce. Lee argues that this kind of coordination system is how we solve the problem of threads' nondeterminism. It nicely parallels (heh) the way that database sharding has become the standard answer to scalability (see the Flickr war story for example). For this reason we're watching Hadoop, the open source MapReduce implementation, with interest. (There are also MapReduce implementations in Perl, Ruby, and other languages.)
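To make the coordination idea concrete, here is a minimal single-process sketch of the MapReduce pattern in Python (the helper names `map_phase`, `reduce_phase`, and `map_reduce` are illustrative, not Hadoop's actual API). The point Lee makes shows up in the structure: each map runs on its own input and each reduce on its own key, with no shared mutable state, so a framework can farm the work out across cores or machines without the races threads invite.

```python
from collections import defaultdict

def map_phase(document):
    """Emit (key, value) pairs -- here, (word, 1) for each word."""
    for word in document.split():
        yield (word, 1)

def reduce_phase(key, values):
    """Combine all the values emitted for one key."""
    return (key, sum(values))

def map_reduce(documents):
    # Map: each document is processed independently, so in a real
    # framework these calls could run in parallel on separate machines.
    grouped = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):
            grouped[key].append(value)
    # Reduce: each key's values are combined independently too.
    return dict(reduce_phase(k, vs) for k, vs in grouped.items())

print(map_reduce(["the cat sat", "the dog sat"]))
# {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```

Word counting is the canonical MapReduce example; the distributed versions add partitioning, shuffling, and fault tolerance around exactly this shape.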
MapReduce is built on map and reduce, higher-order functions borrowed from the Lisp programming language. As the need for speed forces us out of our single-core procedural comfort zone, we're looking more and more at "niche" programming languages for inspiration. Haskell has quite the following among the alpha geeks we know (e.g., the Pugs project), and OCaml has a small but growing group of devotees. Then there was the huge interest in Smalltalk at Avi Bryant's OSCON talk last year (SitePoint blogged about it here).
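For readers who haven't met the Lisp heritage mentioned above, the two building blocks survive in most modern languages; a quick sketch using Python's built-ins (the numbers are just an arbitrary example):

```python
from functools import reduce
from operator import add

# map applies a function to every element independently;
# reduce then folds the results down into a single value.
squares = map(lambda x: x * x, [1, 2, 3, 4])
total = reduce(add, squares)   # 1 + 4 + 9 + 16
print(total)                   # 30
```

Because the map step has no side effects, its applications can happen in any order, or all at once, which is precisely the property MapReduce exploits at cluster scale.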