Four short links: 26 August 2010
Economic Growth Without Copyright, Ebook Numbers, Hypothesis Analysis Tool, Who Pays for Open Data?
- Germany’s Industrial Expansion Fueled by Absence of Copyright Law? (Der Spiegel) — fascinating article about the extraordinary publishing output in 1800s Germany compared to other nations, all without effective, enforceable copyright law. Sigismund Hermbstädt, for example, a chemistry and pharmacy professor in Berlin who has long since faded into obscurity, earned more in royalties for his “Principles of Leather Tanning,” published in 1806, than British author Mary Shelley did for her horror novel “Frankenstein,” which is still famous today. Books were released in a high-quality, high-price format and a low-quality, low-price format, and Germans bought them in record numbers. When copyright law became established, publishers did away with the low-quality, low-price version and authors complained about the drop in revenue.
- Cheap Ebooks Give Second Life to Backlist — it can’t be said enough that dead material in print can have a second life online. Here are numbers to make the story plain. (via Hacker News)
- Competing Hypotheses — a free, open source tool for complex research problems. A software companion to a 30+ year-old CIA research methodology, Open Source Analysis of Competing Hypotheses (ACH) will help you think objectively and logically about overwhelming amounts of data and hypotheses. It can also guide research teams toward more productive discussions by identifying the exact points of contention. (via johnmscott on Twitter)
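The core of ACH is a matrix of evidence scored against each hypothesis, with hypotheses ranked by how much evidence contradicts them rather than how much supports them. A minimal sketch of that scoring convention (the hypothesis names, evidence IDs, and scores below are illustrative, not taken from the tool):

```python
def rank_hypotheses(matrix):
    """Rank hypotheses by inconsistency count (lower is better).

    matrix maps hypothesis -> {evidence_id: score}, where score is
    -1 = evidence inconsistent with the hypothesis,
     0 = neutral or ambiguous,
    +1 = consistent.
    """
    # ACH's key move: count strikes against each hypothesis, since
    # consistent evidence often fits many hypotheses at once.
    inconsistency = {
        h: sum(1 for s in scores.values() if s < 0)
        for h, scores in matrix.items()
    }
    return sorted(inconsistency.items(), key=lambda kv: kv[1])

# Toy example: two competing explanations, three pieces of evidence.
matrix = {
    "H1: server was compromised": {"e1": +1, "e2": -1, "e3": -1},
    "H2: insider leaked the data": {"e1": +1, "e2": +1, "e3": 0},
}

print(rank_hypotheses(matrix))
# H2 survives with fewer inconsistencies than H1.
```

The evidence items where hypotheses score differently (e2 and e3 here) are the "exact points of contention" the blurb mentions — the places where a research team's disagreement is actually located.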
- Economics of Scholarly Production: Supplemental Materials — scholarly publications include data and documentation that’s not in the official peer-reviewed article. Storing and distributing this has been the publisher’s responsibility, but they’re spitting the dummy. Now the researcher’s organisation will have to house these supplemental materials. If data is as critical to science as the article it generates, yet a small article can come from terabytes of data, what’s the Right Thing To Do that scales across all academia? (via Cameron Neylon)