With revised legislation and a chief data officer, San Francisco is iterating on its platform goals.
As interest in open data continues to grow around the world, cities have become laboratories for participatory democracy. They’re also ground zero for new experiments in spawning civic startups that deliver city services or enable new relationships between the people and city government. San Francisco was one of the first municipalities in the United States to embrace the city as a platform paradigm in 2009, with the launch of an open data platform.
Years later, the city government is pushing to use its open data to accelerate economic development. On Monday, San Francisco announced revised open data legislation to enable that change and highlighted civic entrepreneurs who are putting the city’s data to work in new mobile apps.
City staff have already published the revised open data legislation on GitHub. (If other cities want to “fork” it, clone away.) David Chiu, the chairman of the San Francisco Board of Supervisors, the city’s legislative body, introduced the new version on Monday and submitted it on Tuesday. A vote is expected before the end of the year.
Speaking at the offices of the Hatchery in San Francisco, Chiu observed that, by and large, the data that San Francisco has put out showed the city in a positive light. In the future, he suggested, that should change. Chiu challenged the city and the smartest citizens of San Francisco to release more data, figure out where the city could take risks, be more entrepreneurial and use data to hold the city accountable. In his remarks, he said that San Francisco is working on open budgeting but is still months away from getting the data that they need.
Data-driven decision engines will need patient experience to complete the feedback loop.
When I was looking for a place in Maine to go for care this summer, I went online to look at my options. I consulted hospital data from the government at HospitalCompare.HHS.gov and patient feedback data on Yelp, and then made a decision based upon proximity and those ratings. If I had been closer to where I live in Washington, D.C., I would also have consulted friends, peers or neighbors for their recommendations of local medical establishments.
My brush with needing to find health care when I was far from home reminded me of the prism that collective intelligence can now provide for the treatment choices we make, if we have access to the Internet.
Patients today are sharing more of their health data and experiences online voluntarily, which in turn means that the Internet is shaping health care. There’s a growing phenomenon of “e-patients” and caregivers going online to find communities and information about illness and disability.
Aided by search engines and social media, newly empowered patients are discussing health conditions with others suffering from disease and sickness — and they’re taking that peer-to-peer health care knowledge into their doctors’ offices with them, frequently on mobile devices. E-patients who share their health data do so of their own volition: they have a serious health condition, they want to get healthy, and they are willing to share what they learn in pursuit of that goal.
From the perspective of practicing physicians and hospitals, the trend of patients contributing to and consulting on online forums adds the potential for errors, fraud, or misunderstanding. And yet, I don’t think there’s any going back from a networked future of peer-to-peer health care, any more than we can turn back the dial on networked politics or disaster response.
With a new mobile app and API, Captricity wants to build a better bridge between analog and digital.
Unlocking data from paper forms is the problem that optical character recognition (OCR) software is supposed to solve. Two issues persist, however. First, the hardware and software involved are expensive, creating challenges for cash-strapped nonprofits and governments. Second, all of the information on a given document is scanned into a system, including sensitive details like Social Security numbers and other personally identifiable information. This is a particularly difficult issue with respect to health care or bringing open government to courts: privacy by obscurity will no longer apply.
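Any pipeline that digitizes whole documents needs a redaction step between extraction and publication. The sketch below shows the shape of such a step, assuming text has already been extracted by OCR; the patterns and placeholder strings are illustrative, not a complete PII taxonomy or any vendor's actual implementation.

```python
import re

# Hypothetical post-OCR redaction pass: mask sensitive tokens before the
# scanned text is stored or published. Patterns here cover only two common
# PII shapes and are illustrative, not exhaustive.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def redact(text):
    """Replace SSN- and phone-shaped tokens with fixed placeholders."""
    text = SSN_RE.sub("[SSN REDACTED]", text)
    return PHONE_RE.sub("[PHONE REDACTED]", text)

sample = "Name: J. Doe  SSN: 123-45-6789  Phone: 415-555-0100"
print(redact(sample))
# → Name: J. Doe  SSN: [SSN REDACTED]  Phone: [PHONE REDACTED]
```

A production system would use document structure (which form field a token came from) rather than regexes alone, but the ordering matters either way: redact first, publish second.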
The process of converting paper forms into structured data still hasn’t been significantly disrupted by rapid growth of the Internet, distributed computing and mobile devices. Fields that range from research science to medicine to law to education to consumer finance to government all need better, cheaper bridges from the analog to the digital sphere.
“I was looking at the information systems that were available to these low-resource organizations,” Captricity founder Kuang Chen said in a recent phone interview. “I saw that they’re very much bound in paper. There are actually a lot of efforts to modernize the infrastructure and put in mobile phones. Now that there’s mobile connectivity, you can run a health clinic on solar panels and long distance Wi-Fi. At the end of the day, however, business processes are still on paper because they had to be essentially fail-proof. Technology fails all the time. From that perspective, paper is going to stick around for a very long time. If we’re really going to tackle the challenge of the availability of data, we shouldn’t necessarily be trying to change the technology infrastructure first — bringing mobile phones and iPads to where there’s paper — but really to start with solving the paper problem.”
When Chen saw that data entry was a chokepoint for digitizing health indicators, he started developing a better, cheaper way to ingest data on forms.
The 2012 Presidential debates show how far convergence has come and how far we have yet to go.
What a difference a season makes. A few months after widespread online frustration with a tape-delayed Summer Olympics, the 2012 Presidential debates will feature the most online livestreams and wired, up-to-the-second digital coverage in history.
Given the pace of technological change, it’s inevitable that each election season will bring with it new “firsts,” as candidates and campaigns set precedents by trying new approaches and platforms. This election has been no different: the Romney and Obama campaigns have been experimenting with mobile applications, social media, live online video and big data all year.
Tonight, one of the biggest moments in the presidential campaign to date is upon us and there are several new digital precedents to acknowledge.
The biggest tech news is that YouTube, in a partnership with ABC, will stream the debates online for the first time. The stream will be on YouTube’s politics channel, and it will be embeddable.
With more and more livestreamed sports events, concerts and now debates available online, tuning in to what’s happening no longer means passively “watching TV.” The number of other ways people can tune in online in 2012 has skyrocketed, as you can see in GigaOm’s post listing debate livestreams or Mashable’s ways to watch the debates online.
This year, in fact, the biggest challenge people will have will not be finding an online alternative to broadcast or cable news but deciding which one to watch.
Want to build a business on open data? Add value by solving a problem for your users.
Hjalmar Gislason commented earlier this year that open data has been all about apps. In the future, it should be about much more than consumer-facing tools. “Think also about the less sexy cases that can help a few people save us millions of dollars in aggregate, generate new insights and improve decision making on various levels,” he suggested.
Today, the founder and CEO of DataMarket told the audience of the first White House Energy Datapalooza that his company would make energy data more discoverable and usable. In doing so, DataMarket will be tapping into an emerging data economy of businesses using open government data.
“We are honored to have been invited to take part in this fantastic initiative,” said Gislason in a prepared statement. “At DataMarket we focus on doing one thing well: aggregating vast amounts of heterogeneous data to help business users with their planning and decision-making. Our new energy portal applies this know-how to the US government’s energy data, for the first time enabling these valuable resources to be searched, visualized and shared through one gateway and in combination with other domestic and worldwide open data sources.”
Energy.datamarket.com, which won’t go live officially until mid-October, will offer search across 10,000 datasets, two million time series and 50 million energy facts. DataMarket.com draws upon data from thirteen different providers, including the U.S. Department of Energy’s Energy Information Administration (EIA), Oak Ridge National Laboratory, the Energy Efficiency and Renewable Energy program, the National Renewable Energy Laboratory, the Environmental Protection Agency (EPA), the Bureau of Transportation Statistics, the World Bank and the United Nations.
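Aggregating thousands of series from a dozen providers mostly comes down to mapping each provider's field names and types onto one common schema before anything can be searched or charted together. The sketch below shows that normalization step; the provider record shapes and field names are hypothetical, not DataMarket's actual formats.

```python
# A minimal sketch of the normalization an aggregator performs before
# heterogeneous series can be searched or visualized in one gateway.
# Both input schemas here are invented for illustration.

def normalize_eia(rows):
    """Map EIA-style rows ({'series_id', 'period', 'value'}) to a common schema."""
    return [{"series": r["series_id"], "year": int(r["period"]), "value": float(r["value"])}
            for r in rows]

def normalize_worldbank(rows):
    """Map World Bank-style rows ({'indicator', 'date', 'val'}) to the same schema."""
    return [{"series": r["indicator"], "year": int(r["date"]), "value": float(r["val"])}
            for r in rows]

# Once normalized, series from different providers combine cleanly.
combined = (normalize_eia([{"series_id": "ELEC.GEN.US", "period": "2011", "value": "4100"}])
            + normalize_worldbank([{"indicator": "EG.USE.PCAP", "date": "2011", "val": "97.5"}]))
print(len(combined))
# → 2
```

The hard part in practice is not this mapping but reconciling units, frequencies and geographic codes across providers, which is where an aggregator adds most of its value.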
Last week, I interviewed Gislason about his company and why they’re focusing on energy data.
Commonalities between the Knight Foundation's News Challenge winners hint at journalism's networked future.
It’s not just a focus on data that connects the most recent class of Knight News Challenge winners. They all are part of a distributed civic media community that works on open source code, collects and improves data, and collaborates across media organizations.
These projects are “part of an infrastructure that helps journalists better understand and serve their communities through data,” commented Chris Sopher, Knight Foundation Journalism Program Associate, in an interview last week. To apply a coding metaphor, the Knight Foundation is funding the creation of patches for the source code of society. This isn’t a new focus: in 2011, Knight chose to help build the newsroom stack, from editorial search engines to data cleaning tools.
Following are four themes that jumped out when I looked across the winners of the latest Knight News Challenge round.
An intercontinental project that bridged citizen science, open data, open source hardware, civic hacking and the Internet of things to monitor, share and map radiation data? Safecast is in its own category. Adapting the system to focus on air quality in Los Angeles — a city that’s known for its smog — will be an excellent stress test for seeing if this distributed approach to networked accountability can scale.
If it does — and hacked Chumbys, LED signs, Twitter bots, smartphone apps and local media reports start featuring the results — open data is going to be baked into how residents of Los Angeles understand their own atmosphere. If this project delivers on some of its promise, the value of this approach will be clearer.
If this project delivers on all of its potential, the air itself might improve. For that to happen, the people who are looking at the realities of air pollution will need to advocate for policy makers to improve it. In the future, the success or failure of this project will inform similar efforts that seek to enlist communities in data collection, including whether governments embrace “citizensourcing” beyond natural disasters and crises. The idea of citizens as sensors continues to have legs.
The common thread among the Knight Foundation's latest grants: practical application of open data.
Data, on its own, locked up or muddled with errors, does little good. Cleaned up, structured, analyzed and layered into stories, data can enhance our understanding of the most basic questions about our world, helping journalists to explain who, what, where, how and why changes are happening.
Last week, the Knight Foundation announced the winners of its first news challenge on data. These projects are each excellent examples of working on stuff that matters: they’re collective investments in our digital civic infrastructure. In the 20th century, civil society and media published the first websites. In the 21st century, civil society is creating, cleaning and publishing open data.
The grants not only support open data but validate its place in the media ecosystem of 2012. The Knight Foundation is funding data science, accelerating innovation in the journalism and media space to help inform and engage communities, a project that they consider “vital to democracy.”
Why? Consider the projects. Safecast creates networked accountability using sensors, citizen science and open source hardware. LocalData is a mobile method for communities to collect information about themselves and make sense of it. Open Elections will create a free, standardized database of election results. Development Seed will develop better tools to contribute to and use OpenStreetMap, the “Wikipedia of maps.” Pop Up Archive will develop an easier way to publish and archive multimedia data to the Internet. And Census.IRE.org will improve the ability of a connected nation and its data editors to access and use the work of the U.S. Census Bureau.
The projects hint at a future of digital open government, journalism and society founded upon the principles that built the Internet and World Wide Web and strengthened by peer networks between data journalists and civil society. A river of open data flows through them all. The elements and code in them — small pieces, loosely joined by APIs, feeds and the social web — will extend the plumbing of digital democracy in the 21st century.
The United States National Institutes of Health (NIH) wants to tie development of mobile health apps to evidence-based research, and it hopes to do that with a new grant program. The imperative to align developers with research is urgent, given the strong interest in health IT, mobile health and health data. There are significant challenges for the space, from consumer concerns over privacy and mobile applications to the broader question of balancing health data innovation with patient rights.
To learn more about what’s happening with mobile health apps, health data, behavioral change and cancer research, I recently interviewed Dr. Abdul Sheikh. Our interview, lightly edited for content and clarity, follows.
What led you to your current work at NIH?
Dr. Abdul Sheikh: I’ve always had a strong grounding in public health and population health, but I also have a real passion for technology and informatics. What’s beautiful is, in my current position here as a program director at the National Cancer Institute (NCI), I have a chance to meld these worlds of public health, behavior and communication science with my passion for technology and informatics. Some of the work I did before coming to the NIH was related to the early telemedicine and web-based health promotion efforts that the government of Canada was involved in.
At NCI, I direct a portfolio of research on technology-mediated communication. I’ve also had the chance to get involved and provide leadership on two very cool efforts. One of them is leadership for our division’s Small Business Innovation Research Program (SBIR). I’ve led the first NIH developer challenge competitions as well.
The Library of Congress launched a new website for a more mobile public to access legislative information.
The Library of Congress is now more responsive — at least when it comes to web design. Today, the nation’s repository for its laws launched a new beta website at Congress.gov and announced that it would eventually replace Thomas.gov, the 17-year-old website that represented one of the first significant forays online for Congress. The new website will educate a public increasingly looking for information on mobile devices about the lawmaking process, but it falls short of fully embracing the power of the Internet. (More on that later.)
Tapping into a growing trend in government new media, the new Congress.gov features responsive design, adapting to desktop, tablet or smartphone screens. It’s also search-centric, with Boolean search and, in an acknowledgement that most of its visitors show up looking for information, puts a search field front and center in the interface. The site includes member profiles for U.S. Senators and Representatives, with associated legislative work. In a nod to a mainstay of social media and media websites, the new Congress.gov also has a “most viewed bills” list that lets visitors see at a glance what laws or proposals are gathering interest online. (You can download a fact sheet on all the changes as a PDF).
On the one hand, the new Congress.gov is a dramatic update to a site that desperately needed one, particularly in a historic moment where citizens are increasingly connecting to the Internet (and one another) through their mobile devices.
On the other hand, the new Congress.gov beta has yet to realize the potential of Congress publishing bulk open legislative data. There is no application programming interface (API) for open government developers to build upon. In many ways, the new Congress.gov replicates what was already available to the public at sites like Govtrack.us and OpenCongress.org.
Dr. Stephen Friend on open science and the need for a "GitHub for scientists."
To unlock the potential of health data for the public good, balancing health privacy with innovation will rely on improving informed consent. If the power of big data is to be applied to scientific inquiry in health care — unlocking genetic secrets, finding a cure for breast cancer or enabling “preemptive health care” — both scientific culture and technology will need to change.
One element of that change could include a health data commons. Another is open access in the research community. Dr. Stephen Friend, the founder of Sage Bionetworks, is one of the foremost advocates of what I think of as “open science.” Earlier in his career, Dr. Friend was a senior vice president at Merck & Co., Inc., where he led the pharmaceutical company’s basic cancer research program.
In a recent interview, Dr. Friend explained what open science means to him and what he’s working on today. For more on the synthesis of open source with genetics, watch Andy Oram’s interview with Dr. Friend and read his series on recombinant research and Sage Congress.