
Report from first health care privacy conference

Strange that a conference on health privacy has never been held before–or so I’m told. Privacy in health care is the first topic raised whenever someone talks about electronic health records–and dominates the discussion from then on–or, on the other hand, is dismissed as an overblown concern not worthy of criticism. But today a conference was held on the subject, organized by Patient Privacy Rights and the University of Texas’s Lyndon B. Johnson School of Public Affairs, and held just a few blocks from the Capitol building at the Georgetown Law Center as a preconference to the august Computers, Freedom & Privacy conference.

The Goldilocks dilemma in health privacy

Policy experts seem to fall into three camps regarding health privacy. The privacy maximalists include Patient Privacy Rights, as well as the well-known Electronic Privacy Information Center and a number of world-renowned experts, including Alan Westin, Ross Anderson from Cambridge University, Canadian luminary Stephanie Perrin, and Carnegie Mellon’s indefatigable Latanya Sweeney (who couldn’t attend today but submitted a presentation via video). These people talk of the risks of re-identifying data that was supposed to be de-identified, and highlight all the points in both current and proposed health systems where intrusions can occur.

On the other side stand a lot of my closest associates in the health care area, who intensely dislike Patient Privacy Rights and accuse it of exaggerations and mistruths. The privacy minimalists assert that current systems provide pretty good protection, that attacks on the average person are unlikely (except from other people in his or her life, and those are hard to fight systematically), and that an over-concern for privacy throws sand in the machinery of useful data exchange systems that can fix many of the problems in health care. (See, for instance, my blog on last week’s Health Data Initiative Forum.)

In between the maximalists and minimalists lie the many people trying to adapt current systems to the complex needs of modern health care with an eye toward privacy–those who want to get it “just right.” The Direct Project (discussed today by Joy Pritts, Chief Privacy Officer of the Office of the National Coordinator) is an example of these pragmatic approaches.

It so happens that the American public can also be divided into these three camps, as Westin explained in his keynote. Some will go to great lengths to conceal their data and want no secondary uses without their express permission. Others have nothing to hide, and most of us lie in between. It is sobering, though, to hear that Americans in surveys declare that they don’t trust what insurers, employers, and marketers will do with their health data. What’s more disturbing is that Americans don’t trust researchers either. Those who take on the mantle of the brave biological explorer acting in the highest public interest must ask why ordinary people question their devotion to the public’s needs.

The dilemma of simplicity: technical solutions may not be implementable

As technologist Wes Rishel pointed out, technical solutions can often be created that solve complex social problems in theory, but prove unfeasible to deploy in practice. This dilemma turns up in two of the solutions often proposed for health privacy: patient consent and data segmentation.

It’s easy to say that no data should be used for any purpose without express consent. For instance, Jessica Rich from the FTC laid out an iron-clad program that a panel came up with for protecting data: systems must have security protections built in, should not collect or store any more data than necessary, and should ensure accuracy. It is understood that sharing may be necessary during treatment, but the data should be discarded when no longer needed. Staff who don’t need to know the data (such as receptionists and billing staff) should not have access. Indeed, Rich challenged the notion of consent, saying it is a good criterion for non-treatment sharing (such as web sites that offer data to patients) but that in treatment settings, certain things should be taken as a given.

But piercing the ground with the stake of consent reveals the quicksand below. We don’t even trace all the ways in which data is shared: reports for public health campaigns, billing, research, and so on. Privacy researchers have trouble figuring out where data goes. How can doctors do it, then, and explain it to patients? We are left with the notorious 16-page privacy policies that no one reads.

Most patients don’t want to be bothered every time their data needs to be shared, and sometimes (such as where public health is involved), we don’t want to give them the right to say no. In one break-out session about analytics, some people said that public health officials are too intrusive and that few people would opt out if they were given a choice about whether to share data. But perhaps the people likely to opt out are precisely the ones with the conditions we need to track.

Helen Nissenbaum of NYU suggested replacing the notion of “consent” with one of “appropriateness.” But another speaker said that everyone in the room has a different notion of what is appropriate to share, and when.

The general principle here–found in any security system–is that any technology that’s hard to use will not be used. The same applies to the other widely pushed innovation, segmented data.

The notion behind segmentation is that you may choose to release only a particular type of data–such as to show a school your vaccination record–or to suppress a particular type, such as HIV status or mental health records. Segmentation was a major feature of an influential report by the President’s Council of Advisors on Science and Technology.

Like consent, segmentation turns out to be complex. Who will go through a checklist of 60 items to decide what to release each time he is referred to a specialist? Furthermore, although it may be unnecessary for a doctor treating you for a broken leg to know you have a sexually transmitted disease, there may be surprising times when seemingly unrelated data is important. So patients can’t use segmentation well without a lot of education about risks.

And their attempts at segmentation may be undermined in any case. Even if you suppress a diagnosis, some other information–such as a drug you’re taking–may be used to infer that you have the condition.
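To make that pitfall concrete, here is a minimal sketch of segmentation in Python. The record layout, the category names, and the drug-to-condition link are my own invented assumptions for illustration, not drawn from any real health record system.

```python
# Toy illustration only: record layout, category names, and the drug-to-condition
# link are invented, not taken from any real health record system.

RECORD = [
    {"category": "immunization",  "item": "MMR vaccine"},
    {"category": "mental_health", "item": "depression diagnosis"},
    {"category": "medication",    "item": "fluoxetine"},  # an antidepressant
]

def segment(record, suppressed_categories):
    """Release only the entries whose category the patient has not suppressed."""
    return [entry for entry in record if entry["category"] not in suppressed_categories]

released = segment(RECORD, {"mental_health"})
print(released)
# The diagnosis itself is withheld, but the medication entry survives the filter,
# and a drug prescribed overwhelmingly for depression and related conditions lets
# the recipient infer the very category the patient tried to suppress.
```

Even this toy filter shows why segmentation demands education: the patient has to know which innocuous-looking entries can give a suppressed category away.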

A certain fatalism sometimes hung over the conference. One speaker went so far as to suggest a “moratorium” on implementing new health record systems until we have figured out the essential outlines of solutions, but even she offered it only as a desperate speculation, knowing that the country needs new systems. And good models for handling data certainly exist.

Here is the strenuous procedure that the Centers for Medicare & Medicaid Services (CMS) engage in when they release data sets. Each set of data (a Public Use File) represents a particular use of CMS payments: inpatient, outpatient, prescription drugs, etc. The procedure, which I heard described at two conferences last week, is as follows:

  1. They choose a random 5% sample of the people who use particular payments. These samples are disjoint, meaning that no person is used in more than one sample. Because they cover tens of millions of individuals, a small sample can still be a huge data set.

  2. They perform standard clean-up, such as fixing obvious errors.

  3. They generalize the data somewhat. A familiar way to release aggregated results in a way that makes it harder to identify people is to provide only the first three digits of a five-digit ZIP code. Other such fudge factors employed by CMS include offering only age ranges instead of exact ages, and rounding payment amounts.

  4. They check certain combinations of fields to make sure these appear in numerous records. If fewer than 11 people share a certain combination of values, they drop these people.

  5. If they had to drop more than 10% of the people in step 4, they go back to step 3 and try increasing the fudge factors. They iterate through steps 3 and 4 until the data is of a satisfactory size.

Clearly, this procedure works only with data sets on a large scale, not with the limited samples provided by many hospitals, particularly for relatively rare diseases.
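For readers who think in code, here is a minimal sketch of the iteration in steps 3 through 5. The field names, bucket sizes, and data layout are assumptions of mine for illustration; CMS’s actual implementation is certainly more elaborate.

```python
from collections import Counter

MIN_GROUP_SIZE = 11       # step 4: a combination of values must appear in at least 11 records
MAX_DROP_FRACTION = 0.10  # step 5: if more than 10% of people were dropped, generalize further

def generalize(record, age_bucket, payment_round):
    """Step 3: coarsen the quasi-identifying fields of one record."""
    return {
        "zip3": record["zip"][:3],                          # keep only the first 3 digits of the ZIP
        "age": (record["age"] // age_bucket) * age_bucket,  # an age range instead of the exact age
        "payment": round(record["payment"] / payment_round) * payment_round,  # rounded payment
    }

def release(records):
    """Iterate steps 3-5 until no more than 10% of records have to be suppressed."""
    age_bucket, payment_round = 5, 10
    while True:
        coarse = [generalize(r, age_bucket, payment_round) for r in records]
        counts = Counter(tuple(sorted(c.items())) for c in coarse)
        kept = [c for c in coarse if counts[tuple(sorted(c.items()))] >= MIN_GROUP_SIZE]
        if len(records) - len(kept) <= MAX_DROP_FRACTION * len(records):
            return kept
        # Too many records suppressed: increase the fudge factors and try again.
        age_bucket *= 2
        payment_round *= 10
```

Each extra pass through the loop protects more people but also blurs the data further–the privacy-versus-utility trade-off that ran through the whole day.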

Avoidable risks and achievable rewards

As Anderson said, large systems with lots of people have leaks. “Some people will be careless and others will be crooked.” As if to illustrate the problem, one of the attendees today told me that Health Information Exchanges could well be on the hook for breaches they can’t prevent. They rely on health providers to release the right data to the right health provider. The HIE doesn’t contact the patient independently. Any mistake is likely to be the doctor’s fault, but the law holds the HIE equally liable. And given a small, rural doctor with few funds, well liked by the public, versus a large corporation, whom do you suppose the patient will sue?

I can’t summarize all the questions raised at today’s conference–which offered one of the most impressive rosters of experts I’ve seen at any one-day affair–but I’ll list some of the challenges identified by a panel on technology.

  • Use cases to give us concrete material for discussing solutions

  • Mapping the flows of data, also to inform policy discussions

  • Data stewardship–is the data in the hands of the patient or the doctor, and who is most trustworthy for each item?

  • Determining how long data needs to be stored, especially given that ways to crack de-identified data will improve over time

  • Reducing the fatigue mentioned earlier for consent and segmentation

  • Identifying different legal jurisdictions and harmonizing their privacy regulations

  • Identifying secondary levels of information, such as the medication that indirectly reveals the patient’s condition

Rehab

Some of the next steps urged by attendees and speakers at the conference include:

  • Generating educational materials for the public, for doctors, and for politicians

  • Making health privacy a topic for the Presidential campaign and other political debates

  • Offering clinicians guidelines to build privacy into procedures

  • Seeking some immediate, achievable goals, while also defining a long-term agenda under the recognition that change is hard

  • Defining a research agenda

  • Educating state legislatures, which are getting more involved in policy around health care

  • Wally

    You think this is the first conference ever held on health privacy? Are you kidding? There have been health privacy conferences for years.

  • http://www.patientprivayrights.org Deborah C. Peel, MD

    Andy: Thanks for a really great summary of the first ever PUBLIC forum on the Future of Health Privacy—and no—this has NOT been done before. The serious concerns the public has about control over the most sensitive information about them–bar none–have not been the subject of any conference.

    Heavyweight experts from industry were on panels with top federal officials, academic experts on technology, policy experts, academics, and LAST but MOST crucially, health privacy advocates, who are rarely ever at the table at industry conferences.

    KEY facts about privacy must be taken into account as we build HIT systems:
    1) The public fully expects to be able to decide who can see and use which parts of their health information. Large majorities have consistently told pollsters this ever since polling on this subject began.
    2) The federal government’s own figures show that over 4 million people/year avoid early diagnosis and medical treatment for cancer, mental illness, and substance abuse because they know the information will not stay private. People put their lives at risk to keep the information from getting out to EMPLOYERS—they do not want to lose their jobs, because many employers do discriminate. I learned this from my patients.
    3) Americans have strong legal protections in every state REQUIRING that they are able to prevent information about genetics, mental health, and certain STDs from being disclosed without consent. That is a fact and health IT must comply with those laws.
    4) New rights in the stimulus bill, which modified HIPAA, require your consent before your health data is sold, require that if you pay for care yourself you should be able to prevent your data from flowing to insurers, and require the ability to segment data (prevent selected data from being disclosed). We need systems that segment data not only to allow us to protect that data from going to those who do not need it or will use it to discriminate against us BUT because we need to be able to segment data that is WRONG—so we can STOP ERRORS in our medical records from being disclosed, with potentially lethal consequences. What if your record has the WRONG blood type or medication listed? You need to be able to stop that data from flowing everywhere.
    5) We need data maps because health data is THE MOST VALUABLE commodity on the Internet and there is a vast, unseen industry of data thieves, data aggregators, data re-identifiers, and data sellers who use YOUR data and sell it to people who will make decisions about your jobs, credit, and futures. In fact, many, many health IT vendors SELL health data and discount the costs of their technology to doctors and hospitals who agree that they can sell YOUR data.

    The summit’s industry/technology experts agreed with the privacy experts that the technologies REALLY DO exist to build systems that do what the public expects–it will not be easy, but it is essential for trust.

    The problem is that the unseen health data theft, data mining, and data sales industries are exploding because we have health IT systems where thousands of strangers can access and use our health information. SEE: WSJ Series “What They Know” about these industries.

    It is foolish and naive to ignore the very real problems caused by the lack of health privacy–and it is wrong to present patients with the false choice of giving up privacy forever to benefit from HIT, when technology solutions exist and can be used. Thoughtful, serious people want to address real problems, not ignore them.

    Finally, Patient Privacy Rights has ALWAYS promoted health IT–but we refuse to accept bad systems and technologies that harm people because they will make millions more people REFUSE treatment and refuse to participate in electronic health systems.

  • http://www.ross-anderson.com Ross Anderson

    Andy

    It may have been the first health IT privacy conference in the USA, but America’s about fifteen years behind Europe on this! The UK government has been trying to centralise medical records since 1992; we organised a workshop on “Personal Medical Information – Security, Engineering and Ethics” in Cambridge in 1996; there was an official report (from the Caldicott committee) in 1997; (bad) legislation in 2000; a big IT project starting in 2002; a report on how it was breaking human-rights law in 2009; and finally the project’s failure, recently documented by our National Audit Office. There’s more on my web page and from the NAO.

    Please, guys, don’t repeat all our mistakes!

  • http://www.TomorrowsEnterprises.com Brian Mulconrey

    Andy, congratulations, I thought that you did a great job with the recap. I was at the meeting and you certainly captured many of the key points.

    I also thought Helen Nissenbaum’s point around considering a bigger focus on norms and standards of appropriateness made a lot of sense. It reminded me of a similar point that Tim O’Reilly made about privacy in general at http://radar.oreilly.com/2011/06/facebook-face-recognition.html when he noted: “Overall, I think our privacy regimes need to move to a model similar to that applied to insider trading. It’s not possession of secret information that is criminalized; it is misuse of that information to take advantage of the ignorance of others.” He and Helen may be onto something that could help to cut the Gordian knot that has grown up around this issue.

    The fact is that, as with insider trading abuses in the securities industry, health information represents a real and substantial asset which should be protected with similar measures when abuses are detected. On the other hand, providers, pharma companies, researchers, and yes even the “evil insurance companies” need to access and analyze that information for purposes of measuring the effectiveness of treatments, identifying patterns that might suggest fraud, and gaining insights into the patterns that will drive future innovations. We can’t compromise this access either.

    We often hear stats thrown around from a US perspective about how our $2.5 trillion in health care expenditures are moving from 17.5% to 20% of US GDP over the next few years. Another way to look at the extent to which we can’t continue along this existing health care paradigm is to consider that the US has only 4.5% of the world’s population. Spending a similar amount per person on the health care of everyone on earth would cost about $55.2 trillion – over 89% of 2010 world GDP.

    We have the potential to radically accelerate the pace of health care innovation over the next few decades but that acceleration will be fueled by our ability to measure health care actions at a very detailed level. In order to realize the dream of quality healthcare for everyone on earth we’ll need to address these issues in a manner that “like insider trading laws” allows this information to flow in ways that are secure, appropriate, and fair.

  • david warner

    One startling development from the break-out group I attended was the near-unanimous belief that HIPAA protections need to be revised, both because of their limited and incomprehensible protections for patients and because of the substantial regulatory burden they place on providers. Out of the 15-20 people in the breakout–representing a variety of perspectives–only one did not think they needed to be substantially revised.

  • http://latanyasweeney.org Latanya Sweeney

    Correction to your post: There are three camps, and the third camp, in which my (Latanya Sweeney’s) efforts have historically been concentrated, is in the middle of the other two camps you describe. In historical context, the two sides reflect the long-time conflict between privacy and utility. Framing the discussion as being two-sided merely perpetuates the false belief that society must choose between privacy OR utility.

    In reality, the third camp has historically used scientific knowledge and facts to ground discussions and to offer thought leadership and real-world solutions to show how society can enjoy both privacy AND utility. For other examples across a broad spectrum of issues, see my website (latanyasweeney.org) or that of the Data Privacy Lab.

    A problem with the one example you gave of the middle camp is that it gives up utility in favor of privacy. There exist many superior possible solutions for the same problem that provide privacy AND utility.

    In terms of the conference you mention, my presentation is available online at dataprivacylab.org/projects/sorrell/Sweeney.mp4 and it describes the tension between the two extreme sides in a recent case before the U.S. Supreme Court and reports on re-identification experiments underway to add clarity and facts.
