- CA Assembly Bill No. 292 — This bill would provide that the full text of the California Code of Regulations shall bear an open access creative commons attribution license, allowing any individual, at no cost, to use, distribute, and create derivative works based on the material for either commercial or noncommercial purposes. (via BoingBoing)
- The Inside Story of PACER (Ars Technica) — PACER has become a cash cow for the judicial branch, generating $100 million in profits that the courts have plowed into non-PACER IT projects. (via BoingBoing)
- Manipulating Memory for Fun and Profit (PDF) — It is commonly believed that RAM loses its content as soon as the power is cut. This is wrong: RAM is not immediately erased, and its contents can persist for up to several minutes in a standard environment, even if the RAM is removed from the computer. Cooling the DRAM chips extends that window considerably. A simple blast of duster spray at -50°C can keep RAM data alive for more than 10 minutes, and chips cooled to -196°C with liquid nitrogen hold their data for several hours without any power.
- Pirating Buildings (Spiegel) — putting the “property” back in Intellectual Property.
Open Regulations, Inside PACER, Hacking Memory, and Pirating Buildings
Open Data Institute CEO Gavin Starks on how open data's current state is similar to the World Wide Web's early days.
If you had 10 million pounds to spend on open data research, development and startups, what would you do with it? That’s precisely the opportunity that Gavin Starks (@AgentGav) has been given as the first CEO of the Open Data Institute (ODI) in the United Kingdom.
The ODI, which officially opened last September, was founded by Sir Tim Berners-Lee and Professor Nigel Shadbolt. The independent, non-partisan, “limited by guarantee” nonprofit is a hybrid institution focused on unlocking the value in open data by incubating startups, advising governments, and educating students and media.
Previously, Starks was the founder and chairman of AMEE, a social enterprise that scored environmental costs and risks for businesses. (O’Reilly’s AlphaTech Ventures was one of its funders.) He’s also worked in the arts, science and technology. I spoke to Starks about the work of the ODI and open data earlier this winter as part of our continuing series investigating the open data economy.
What have you accomplished to date?
Gavin Starks: We opened our offices on the first of October last year. Over the first 12 weeks of operation, we’ve had a phenomenal run. The ODI is looking to create value to help everyone address some of the greatest challenges of our time, whether that’s in education, health, in our economy or to benefit our environment.
Since October, we’ve had literally hundreds of people through the door. We’ve secured $750,000 in matched funding from the Omidyar Network, on top of a 10-million-pound investment from the UK Government’s Technology Strategy Board. We’ve helped identify 200 million pounds a year in savings for the health service in the UK.
Two open data items of note from readers.
When I asked whether the push to free up government data was resulting in economic activity and startup creation, I started to receive emails from people around the United States and Europe. I’ll be publishing more of what I learned in our ongoing series of open data interviews and profiles over the next month, but two responses are worth sharing now.
Open questions about open growth
Harvey Lewis, one of the principal investigators for the research project, recently wrote about some of Deloitte’s preliminary findings at the Open Government Partnership’s blog in a post on “open growth.” To date, Deloitte has not found the quantitative evidence the team needs to definitively demonstrate the economic value of open data. That said, the team found much of interest in the space.
A #PDFtribute to Aaron Swartz.
I’ve read many eloquent eulogies from people who knew Aaron Swartz better than I did, but he was also a Foo and contributor to Open Government. So, we’re doing our part at O’Reilly Media to honor Aaron by posting the Open Government book files for free for anyone to download, read and share.
The files are posted on the O’Reilly Media GitHub account as PDF, Mobi, and EPUB files for now. There is a movement on the Internet (#PDFtribute) to memorialize Aaron by posting research and other material for the world to access, and we’re glad to be able to do this.
You can find the book here: github.com/oreillymedia/open_government
Daniel Lathrop, my co-editor on Open Government, says “I think this is an important way to remember Aaron and everything he has done for the world.” We at O’Reilly echo Daniel’s sentiment.
Opening data in Congress is a marathon, not a sprint. The 113th Congress is making notable, incremental progress on open government.
It was a good week for open government data in the United States Congress. On Tuesday, the Clerk of the House made House floor summaries available in bulk XML format. Yesterday, the House of Representatives announced that it will make all of its legislation available for bulk download in a machine-readable format, XML, in cooperation with the U.S. Government Printing Office. As Nick Judd observes at TechPresident, such data is catnip for developers. While full bulk data from THOMAS.gov is still not available, this incremental progress deserves mention.
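Bulk XML matters to developers because it can be parsed and repurposed with standard tools. As a minimal sketch of what "machine-readable" buys you, the snippet below parses a toy fragment of legislative XML with Python's standard library. The element and attribute names here are illustrative assumptions, not the actual House or GPO schema.

```python
import xml.etree.ElementTree as ET

# A toy fragment in the spirit of the House's bulk legislative XML.
# These element and attribute names are assumptions for illustration,
# not the real schema published by the GPO.
xml_doc = """
<bills>
  <bill number="HR 1" congress="113">
    <title>Example Appropriations Act</title>
    <sponsor>Rep. Example</sponsor>
  </bill>
  <bill number="HR 2" congress="113">
    <title>Example Transparency Act</title>
    <sponsor>Rep. Sample</sponsor>
  </bill>
</bills>
"""

root = ET.fromstring(xml_doc)
# With structured data, extracting every bill number and title
# is a two-line loop rather than a screen-scraping project.
for bill in root.findall("bill"):
    print(bill.get("number"), "-", bill.findtext("title"))
```

That ease of extraction, multiplied across every bill in a Congress, is why developers treat bulk structured data as "catnip."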
Open government coders collaborate to liberate legislative data from Congress.
When Congress launched Congress.gov in beta, they didn’t open the data. This fall, a trio of open government developers took it upon themselves to do what custodians of the U.S. Code and laws in the Library of Congress could have done years ago: published data and scrapers for legislation in Congress from THOMAS.gov in the public domain. The data at github.com/unitedstates is published using an “unlicense” and updated nightly. Credit for releasing this data to the public goes to Sunlight Foundation developer Eric Mill, GovTrack.us founder Josh Tauberer and New York Times developer Derek Willis.
“It would be fantastic if the relevant bodies published this data themselves and made these datasets and scrapers unnecessary,” said Mill, in an email interview. “It would increase the information’s accuracy and timeliness, and probably its breadth. It would certainly save us a lot of work! Until that time, I hope that our approach to this data, based on the joint experience of developers who have each worked with it for years, can model to government what developers who aim to serve the public are actually looking for online.”
If the People’s House is going to become a platform for the people, it will need to release its data to the people. If Congressional leaders want THOMAS.gov to be a platform for members of Congress, legislative staff, civic developers and media, the Library of Congress will need to release structured legislative data. THOMAS is also not updated in real-time, which means that there will continue to be a lag between a bill’s introduction and the nation’s ability to read the bill before a vote.
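The point of publishing scraped legislative data in the public domain is that anyone can build on it. As a minimal sketch, the snippet below parses one bill record of the general kind the github.com/unitedstates scrapers produce; the field names here are assumptions for illustration, so check the project's own documentation for the authoritative schema.

```python
import json

# A hypothetical bill record, roughly in the shape produced by the
# github.com/unitedstates scrapers. Field names are assumptions,
# not the project's authoritative schema.
sample = json.loads("""
{
  "bill_id": "hr3590-111",
  "bill_type": "hr",
  "number": "3590",
  "congress": "111",
  "introduced_at": "2009-09-17",
  "official_title": "Patient Protection and Affordable Care Act"
}
""")

def summarize(bill):
    """Return a short human-readable summary of one bill record."""
    return "%s (introduced %s): %s" % (
        bill["bill_id"], bill["introduced_at"], bill["official_title"])

print(summarize(sample))
```

Because the data carries an "unlicense" and is refreshed nightly, a civic developer can run a loop like this over the whole corpus with no legal or technical gatekeeping.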
Scraping together the best tools, techniques and tactics of the data journalism trade.
Great journalism has always been based on adding context, clarity and compelling storytelling to facts. While the tools have improved, the art is the same: explaining the who, what, where, when and why behind the story. The explosion of data, however, provides new opportunities to think about reporting, analysis and publishing stories.
As you may know, there’s already a Data Journalism Handbook to help journalists get started. (I contributed some commentary to it). Over the next month, I’m going to be investigating the best data journalism tools currently in use and the data-driven business models that are working for news startups. We’ll then publish a report that shares those insights and combines them with our profiles of data journalists.
Why dig deeper? Getting to the heart of what’s hype and what’s actually new and noteworthy is worth doing. I’d like to know, for instance, whether tutorials specifically designed for journalists can be useful, as Joe Brockmeier suggested at ReadWrite. On a broader scale, how many data journalists are working today? How many will be needed? What are the primary tools they rely upon now? What will they need in 2013? Who are the leaders or primary drivers in the area? What are the most notable projects? What organizations are embracing data journalism, and why?
This isn’t a new interest for me, but it’s one I’d like to ground in more research. When I was offered an opportunity to give a talk at the second International Open Government Data Conference at the World Bank this July, I chose to talk about open data journalism and invited practitioners on stage to share what they do. If you watch the talk and the ensuing discussion in the video below, you’ll pick up great insight from the work of the Sunlight Foundation, the experience of Homicide Watch and why the World Bank is focused on open data journalism in developing countries.
In the wake of a devastating storm, here's how you can volunteer to help those affected.
Even though the direct danger from Hurricane Sandy has passed, lower Manhattan and many parts of Connecticut and New Jersey remain a disaster zone, with millions of people still without power, reduced access to food and gas, and widespread damage from flooding. As of yesterday, according to reports from the Wall Street Journal, thousands of residents remained in high-rise buildings with no water, power or heat.
E-government services are in heavy demand, from registering for disaster aid to finding resources, like those offered by the Office of the New York City Public Advocate. People who need to find shelter can use the Red Cross shelter app. FEMA has set up a dedicated landing page for Hurricane Sandy and a direct means to apply for disaster assistance online.
Public officials have embraced social media during the disaster as never before, sharing information about where to find help.
No power and diminished wireless capacity, however, mean that the Internet is not accessible in many homes. In the post below, learn more about what you can do on the ground to help and how you can contribute online.
The 2012 Presidential debates show how far convergence has come and how far we have yet to go.
What a difference a season makes. A few months after widespread online frustration with a tape-delayed Summer Olympics, the 2012 Presidential debates will feature the most online livestreams and wired, up-to-the-second digital coverage in history.
Given the pace of technological change, it’s inevitable that each election season will bring with it new “firsts,” as candidates and campaigns set precedents by trying new approaches and platforms. This election has been no different: the Romney and Obama campaigns have been experimenting with mobile applications, social media, live online video and big data all year.
Tonight, one of the biggest moments in the presidential campaign to date is upon us and there are several new digital precedents to acknowledge.
The biggest tech news is that YouTube, in a partnership with ABC, will stream the debates online for the first time. The stream will be on YouTube’s politics channel, and it will be embeddable.
With more and more livestreamed sports events, concerts and now debates available online, tuning in to what’s happening no longer means passively “watching TV.” The number of other ways people can tune in online in 2012 has skyrocketed, as you can see in GigaOm’s post listing debate livestreams or Mashable’s ways to watch the debates online.
This year, in fact, the biggest challenge people will have will not be finding an online alternative to broadcast or cable news but deciding which one to watch.