Four short links: 20 May 2009

Cognitive Surplus, Data Centers=Mainframes, Django Microframework, and a Visit To The Future

  1. Distributed Proofreaders Celebrates 15000th Title Posted To Project Gutenberg — a great use of our collective intelligence and cognitive surplus. If I say one more Clay Shirkyism, someone’s gonna call BINGO. (via timoreilly on Twitter)
  2. Datacenter is the New Mainframe (Greg Linden) — wrapup of a Google paper that looks at datacenters in the terms of mainframes: time-sharing, scheduling, renting compute cycles, etc. I love the subtitle, “An Introduction to the Design of Warehouse-Scale Machines”.
  3. djng, a Django powered microframework — update from Simon Willison about the new take on Django he’s building. Microframeworks let you build an entire web application in a single file, usually with only one import statement. They are becoming increasingly popular for building small, self-contained applications that perform only one task—Service Oriented Architecture reborn as a combination of the Unix development philosophy and RESTful API design. I first saw this idea expressed in code by Anders Pearson and Ian Bicking back in 2005.
  4. Cute! (Dan Meyer) — photo from Dan Meyer’s classroom showing ordinary high-school students doing something that I assumed only geeks at conferences did. I love living in the future for all the little surprises like this.
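The single-file microframework idea in item 3 is easy to see in miniature. Here’s a rough sketch using only Python’s standard-library wsgiref (not djng itself, whose API isn’t shown here) — one file, one import, one task:

```python
# A minimal single-file web app: an illustration of the microframework
# idea, using only the standard library rather than djng's actual API.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # One route, one task: answer every request with a plain greeting.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello, world\n"]

if __name__ == "__main__":
    # Serve on localhost:8000 until interrupted.
    make_server("", 8000, app).serve_forever()
```

The whole application — routing, logic, and server — fits in one file, which is the self-contained, one-job style the item describes.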

Datacenter Power Allocation Chart
Approximate distribution of peak power usage by hardware subsystem in one of Google’s datacenters (circa 2007)

  • bowerbird

    all of the volunteers at distributed proofreaders deserve a huge
    “thank you” for their contributions to making the public domain
    a viable institution as we step into the 21st century. thank you!

    their dedication becomes even more impressive if you realize
    — as i have become acutely aware — that the _workflow_ at d.p. is painfully bad. the system is full of problems which
    _waste_ the precious time and energy contributed by volunteers.

    for instance, o.c.r. often exhibits a consistent recognition error
    in a book — e.g., a mistaken character name — which _could_
    be fixed with one global change to correct hundreds of errors…
    instead, d.p. requires its proofers to correct each one of those
    errors _individually_, as they come across each one on a page.
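    a sketch of the kind of one-pass global fix being described here (the misread name is invented for illustration):

    ```python
    # Hypothetical illustration: one global replacement corrects a
    # consistent OCR error everywhere in a book's text, instead of
    # a proofer fixing each occurrence individually, page by page.
    def fix_consistent_ocr_error(text: str, wrong: str, right: str) -> str:
        return text.replace(wrong, right)

    # "Damay" stands in for a name the OCR misreads the same way each time.
    book = "Mr. Darnay spoke. Mr. Damay listened. Mr. Damay left."
    fixed = fix_consistent_ocr_error(book, "Damay", "Darnay")
    ```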

    another example is “spacey quotes” — quote-marks that are
    misrecognized by the o.c.r. as having a space on both sides…
    it’s possible to resolve these errors with a computer routine
    — one that is simple to write and astonishingly accurate —
    and i have demonstrated this to the “powers that be” at d.p.,
    but they simply refuse to institute this time-saving measure.
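    the routine in question really is simple to write. one possible sketch (a heuristic, not d.p.'s or bowerbird's actual code): treat spacey quotes in alternation, attaching odd occurrences to the following word and even ones to the preceding word:

    ```python
    import re

    def fix_spacey_quotes(text: str) -> str:
        # OCR sometimes emits quote marks with a space on both sides.
        # Heuristic: quotes come in pairs, so alternate between treating
        # each spacey quote as an opening quote (attach to the next word)
        # and a closing quote (attach to the previous word).
        count = 0
        def repl(match):
            nonlocal count
            count += 1
            return ' "' if count % 2 else '" '
        return re.sub(r' " ', repl, text)
    ```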

    and this is where the problem lies. those “powers that be”
    — a title they give themselves, humorously, but still true —
    do not seem to be interested in using the time and energy
    of their volunteers in a way that is respectful of the donation.

    indeed, they’re so determined to silence constructive criticism
    they actually _banned_ me from participating in their forums
    so they wouldn’t have to hear comments about their workflow.

    so at the same time that we congratulate the d.p. volunteers
    on their digitization of 15,000 books, let us also recognize
    that — had they been working in a better system —
    they might have digitized 30,000 books by now, or 50,000.