Four short links: 25 November 2009
Sexy HTTP Parser, 9/11 Pager Leaks, Open Source Science, GLAM and Newspapers
- http-parser — This is a parser for HTTP messages written in C. It parses both requests and responses. "The parser is designed to be used in high-performance HTTP applications. It does not make any allocations, it does not buffer data, and it can be interrupted at any time. It only requires about 128 bytes of data per message stream (in a web server that is per connection)." Extremely sexy piece of coding. (via sungo on Twitter)
- Wikileaks to Release 9/11 Pager Intercepts — they're trickling the half-million messages out in simulated real time. "The archive is a completely objective record of the defining moment of our time. We hope that its revelation will lead to a more nuanced understanding of the event and its tragic consequences." (via cshirky on Twitter)
- Promoting Open Source Science — interesting interview with an open science practitioner, but also notable for how it came to be public: the interviewee released the full text of the interview himself because his responses had been abridged in the printed version. (via suze on Twitter)
- Copyright, Findability, and Other Ideas from NDF (Julie Starr) — a newspaper industry guru attended the National Digital Forum, where Galleries, Libraries, Archives, and Museums discuss their digital issues, and discovered that newspapers and GLAMs have a lot in common. "We can build beautiful, rich websites till the cows come home but they're no good to anyone if people can't easily find all that lovely content lurking beneath the homepage. That's as true for news websites as it is for cultural archives and exhibitions, and it's a topic that arose often in conversation at the NDF conference. I've been cooling on destination websites for a while. You need to have a destination website, of course, but you need even more to have your content out where your audience is so they can trip over it often and usefully."