ENTRIES TAGGED "health care data"
Predixion service could signal a trend for smaller health facilities.
Analytics are expensive and labor intensive; we need them to be routine and ubiquitous. I complained earlier this year that analytics are hard for health care providers to muster because there’s a shortage of analysts and because every data-driven decision takes huge expertise.
Currently, only major health care institutions such as Geisinger, the Mayo Clinic, and Kaiser Permanente incorporate analytics into day-to-day decisions. Research facilities employ analytics teams for clinical research, but perhaps not so much for day-to-day operations. Large health care providers can afford departments of analysts, but most facilities — including those forming accountable care organizations — cannot.
Imagine that you are running a large hospital and are awake nights worrying about the Medicare penalty for readmitting patients within 30 days of their discharge. Now imagine you have access to analytics that can identify roughly 40 measures that combine to predict a readmission, along with a convenient interface that tells clinicians in a simple way which patients are most at risk of readmission. Better still, the interface suggests specific interventions to reduce readmission risk: giving the patient a 30-day supply of medication, arranging transportation to rehab appointments, etc. Read more…
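To make the idea concrete, here is a minimal sketch of the kind of risk score such a system might compute. The feature names, weights, and bias below are entirely hypothetical, invented for illustration; a real model would be fit on historical discharge data across the ~40 measures mentioned, for example with logistic regression.

```python
import math

def readmission_risk(patient, weights, bias):
    """Return a 0-1 risk estimate from a weighted sum of patient features."""
    z = bias + sum(w * patient.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

# Hypothetical weights -- not from any real model.
WEIGHTS = {"prior_admissions": 0.8, "num_medications": 0.05, "lives_alone": 0.6}
BIAS = -3.0

high_risk = readmission_risk(
    {"prior_admissions": 3, "num_medications": 12, "lives_alone": 1}, WEIGHTS, BIAS)
low_risk = readmission_risk(
    {"prior_admissions": 0, "num_medications": 2, "lives_alone": 0}, WEIGHTS, BIAS)
```

A clinician-facing interface would then sort patients by this score and surface the highest-risk ones, each with the interventions most likely to help.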
The 30,000-foot view and the nitty-gritty details of working with electronic health data
Ever wonder what the heck “meaningful use” really means? By now, you’ve probably heard it come up in discussions of healthcare data. You might even know that it specifically pertains to electronic health records (EHRs). But what is it really about, and why should you care?
If you’ve ever had to carry a large folder of paper between specialists, or fill out the same medical history form in different offices over and over—with whatever details you happen to remember off the top of your head that day—then you already have some idea of why EHRs are a desirable thing. The idea is that EHRs will lead to better care—and better research data—through more complete and accurate record-keeping, and will eventually become part of health information exchanges (HIEs) with features like trend analysis and push-notifications. However, the mere installation of EHR software isn’t enough; we need not just cursory use but meaningful use of EHRs, and we need to ensure that the software being used meets certain standards of efficiency and security.
A network graph approach to modeling the health-care system.
To achieve the triple aim in healthcare (better, cheaper, and safer), we are going to need intensive monitoring and measurement of specific doctors, hospitals, labs and countless other clinical professionals and clinical organizations. We need specific data about specific doctors.
In 1979, a Federal judge in Florida sided with the AMA, ruling that releasing these kinds of provider-specific data sets violated doctor privacy. Last Friday, a different Florida judge reversed the 1979 injunction, allowing provider-identified data to be released from CMS under FOIA requests. Even without this tremendous victory for the Wall Street Journal, there was already a shift away from aggregation studies in healthcare toward using Big Data methods on specific doctors to improve healthcare. This critical shift will allow us to determine which doctors are doing the best job, and which are doing the worst. We can target struggling doctors to help improve care, and we can also target the best doctors, so that we can learn new best practices in healthcare.
Evidence-based medicine must be targeted to handle specific clinical contexts. The only really open questions are “how much data should we release?” and “should this be done in secret or in the open?” I submit that the targeting should be done at the individual and team level, and that this must be an open process. We need to start tracking the performance and clinical decisions of specific doctors working with other specific doctors, in a way that allows for public scrutiny. We need to release uncomfortably personal data about specific physicians and evaluate that data in a fair manner, without sparking a witch-hunt. And whether you agree with this approach or not, it’s already underway. The overturning of this court case will only open the flood gates further. Read more…
The White House added a community for the "smart disclosure" of consumer data to Data.gov.
On Monday morning, the Obama administration launched a new community focused on consumer data at Data.gov. While there was no new data to be found among the 507 datasets listed there, it was the first time that smart disclosure had an official home in the federal government.
“Smart disclosure means transparent, plain language, comprehensive, synthesis and analysis of data that helps consumers make better-informed decisions,” said Christopher Meyer, the vice president for external affairs and information services at Consumers Union, the nonprofit that publishes “Consumer Reports,” in an interview. “The Obama administration deserves credit for championing agency disclosure of data sets and pulling it together into one web site. The best outcome will be widespread consumer use of the tools — and that remains to be seen.”
You can find the new community at Consumer.Data.gov or data.gov/consumer. Both URLs forward visitors to the same landing page, where they can explore the data, past challenges, and external resources on the topic, as well as a page about smart disclosure, blog posts, forums and feedback.
“Analyzing data and giving plain language understanding of that data to consumers is a critical part of what Consumer Reports does,” said Meyer. “Having hundreds of data sets available on one (hopefully) easy-to-use platform will enable us to provide even more useful information to consumers at a time when family budgets are tight and health care and financial ‘choices’ have never been more plentiful.”
What's bigger than a yottabyte, the role big data will play in health care, and the potential impact of vehicle data.
Here are a few stories from the data space that caught my attention this week.
Bigger and bigger … and bigger … big data
MIT Technology Review’s business editor Jessica Leber reports this week on a conference presentation by MIT’s Andrew McAfee, wherein McAfee predicts data volumes will soon surpass the current upper bounds of metric measurement — the yottabyte. McAfee discussed in his presentation (and on his blog) how we’ve moved through the data measurement eras — terabyte, petabyte, exabyte, and soon the zettabyte — leaving us only with the yottabyte for the future. The yottabyte, Leber notes, was the largest scale of measurement scientists could imagine at the 1991 General Conference on Weights and Measures where it was established.
Leber reports that as we head into the zettabyte era, a threshold that Cisco predicts we’ll surpass by the end of 2016, McAfee predicts the General Conference on Weights and Measures will convene before the end of the decade to contemplate the yottabyte’s successor. McAfee’s favorite contender prefix, Leber notes, is the “hella.”
Stacey Higginbotham at GigaOm recently covered this issue as well (I reported on it here). She reports that during a recent presentation, Intel’s Shantanu Gupta predicted the next prefixes: brontobytes and gegobytes. Higginbotham notes that the brontobyte is “apparently recognized by some people in the measurement community.”
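For a sense of scale, the officially established SI prefixes for bytes can be laid out in a few lines of Python. Proposed names like “hella,” “brontobyte,” and “gegobyte” are deliberately absent, since they have no official standing:

```python
# SI prefixes for bytes; "yotta" (10^24), established at the 1991 General
# Conference on Weights and Measures, is currently the largest official one.
SI_BYTE_PREFIXES = {
    "kilo": 10**3, "mega": 10**6, "giga": 10**9, "tera": 10**12,
    "peta": 10**15, "exa": 10**18, "zetta": 10**21, "yotta": 10**24,
}

# Each named step is a factor of 1,000: a yottabyte is a thousand zettabytes,
# and beyond it no SI prefix yet exists.
zettabytes_per_yottabyte = SI_BYTE_PREFIXES["yotta"] // SI_BYTE_PREFIXES["zetta"]
```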