I enjoyed reading Stephen Walli’s essay recasting the “open source stack” into a network of components. As soon as I saw it, I wondered whether the transactional costs of many interface points are responsible for a lot of the difficulty in getting a particular level “right”. For example, the troika of applications, libraries, and languages is a painful one: witness Microsoft’s DLL hell, or anyone who’s been caught in the endless GNOME upgrade cycle.
Obviously the circular dependencies are a problem, but I wonder whether the bidirectional nature of each dependency is a factor: libraries are written for applications, and applications are written to use libraries, so changing one probably means a change in the other. This dovetails with what David Heinemeier Hansson (plug: EuroOSCON keynoter) said in an O’Reilly Network interview:
That’s why we hold the notion that “frameworks are extractions” so very dear in the Rails community. Frameworks are not designed before the fact. They’re extracted when you’ve proved to yourself that an approach works. Whenever we get ahead of ourselves and try to leap over the extraction process, we come back sorely disappointed.
In other words, they break one direction of the arrows: libraries are extracted from applications, so applications shouldn’t have to be rewritten when the library changes. I’m having a lot of trouble picturing how this would scale, though: it’s easy for DHH when he extracts Rails from Basecamp, but how does that work in the GNOME world? There a shared library has to support a hundred applications, and so you must merge differing needs. When the API requires a change because of one or two projects’ needs, all applications have to be changed. It’ll be interesting to see whether wider adoption of Rails breaks, or fails to break, existing Rails applications.
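The coupling problem above can be sketched in a few lines. This is a hypothetical toy example (the function names and signatures are mine, not from GNOME or Rails): a shared library grows a required parameter to satisfy one project’s needs, and every call site in every dependent application has to change with it.

```python
# Hypothetical sketch: a shared library's API change ripples out to callers.

# Version 1 of a shared library function, extracted from one application:
def render_v1(template):
    return template.upper()

# One project needs locale support, so the API grows a required parameter.
# Every existing caller must now be edited to pass a locale:
def render_v2(template, locale):
    return f"[{locale}] {template.upper()}"

# An application written against v1 breaks against v2 until its call
# sites are updated:
def app_page():
    return render_v2("welcome", locale="en")  # updated call site

print(app_page())
```

The extraction approach avoids this in one direction only: the library reflects what applications already do, but once many applications depend on it, changes still fan out across all of them.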