Preferred structures for cleaned-up doctor data

Which data formats should the DocGraph project support?

The DocGraph project has an interesting issue that I think will become a common one as the open data movement continues. For those who have not been keeping up, DocGraph was announced at Strata RX, described carefully on this blog, and will be featured again at Strata 2013. For those who do not care to click links, DocGraph is a crowdfunded open data set, which merges open data sources on doctors and hospitals.

As I recently described on the DocGraph mailing list, work is underway to acquire the data sets that we set out to merge. The issue concerns file formats.

The core identifier for doctors, hospitals and other healthcare entities is the National Provider Identifier (NPI). This is something like a Social Security number for doctors and hospitals. In fact, it was created in part so that doctors would not need to use their Social Security numbers or other identifiers in order to participate in healthcare financial transactions (i.e., getting paid by insurance companies for their services). The NPI is the “one number to rule them all” in healthcare, and we want to map data from other sources accurately to that ID.

Each state releases zero, one, or several data files, sometimes available only by purchase, that contain doctor data. But these file downloads are in “random file format X.” Of course we are not yet done with our full survey of the files and their formats, but I can assure you that they are mostly CSV files, with a troubling number of PDF files. It is our job to take these files and merge them against the NPI, in order to provide a cohesive picture for data scientists.
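To make that concrete, here is a minimal sketch of what keying one of those state files against the NPI could look like. The file name and the “npi” column are illustrative assumptions; the real state downloads vary widely in layout.

```python
import csv

# Minimal sketch: index one hypothetical state file by NPI.
# The file name and the "npi" column are assumptions for illustration;
# real state downloads vary widely in layout.
def index_state_file_by_npi(path):
    records = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            npi = (row.get("npi") or "").strip()
            if npi:  # rows without an NPI need separate matching logic
                records[npi] = row
    return records

# Example (assuming such a file exists):
# state_records = index_state_file_by_npi("state_medical_board.csv")
```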

But the data available from each state varies greatly. Sometimes they will have addresses, sometimes not. Sometimes they will have fax numbers, sometimes not. Sometimes they will include medical school information, sometimes not. Sometimes they will simply include the name of the medical school, sometimes they will use a code. Sometimes when they use codes they will make up their own …

I am not complaining here. We knew what we were getting ourselves into when we took on the DocGraph project. The community at large has paid us well to do this work! But now we have a question: What data formats should we support?

The simple answer is that everyone can handle JSON, XML, or CSV, and we will probably end up supporting all of those formats to some degree. XML is famous for its capacity to express “constraints” that solve some or all of the problems that we are considering addressing. Of course, there is some work on similar constraint systems for JSON. This guy thinks more is needed; this guy thinks less is needed. Both of them seem to have thought about it more than I have.
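For instance, a JSON-side constraint might look something like the sketch below, written against the third-party jsonschema package. The field names and rules are illustrative guesses on my part, not a proposed DocGraph schema.

```python
# Rough sketch of a JSON-side constraint using the third-party "jsonschema"
# package. The fields and rules here are illustrative, not the DocGraph schema.
from jsonschema import validate

provider_schema = {
    "type": "object",
    "properties": {
        "npi": {"type": "string", "pattern": "^[0-9]{10}$"},  # NPIs are 10 digits
        "name": {"type": "string"},
        "medical_school": {"type": ["string", "null"]},  # often missing in state files
    },
    "required": ["npi", "name"],
}

# Raises a ValidationError if the record violates the schema.
validate(
    instance={"npi": "1234567893", "name": "Jane Doe, MD", "medical_school": None},
    schema=provider_schema,
)
```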

What I am concerned with is how to architect the schema(s) for these merges. As we merge the doctor data, we will be making guesses about what a particular field in a file download means. To what degree should we expose those guesses?

How do we merge data sources that will have overlapping and potentially redundant data (like addresses for doctors that come from five different files)? How do we handle metadata, when so much of the metadata involves which file/state a data point originated from? Some people will really want to know that, but most people want a slim data structure that is easy to work with.
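To illustrate the tension, here is one possible shape (with hypothetical field names and sources): keep every candidate value along with its provenance and whether it was a guess, and derive a slim view that collapses each field to a single preferred value.

```python
# One possible shape for keeping overlapping values and their provenance,
# while still offering a slim view. Field names and sources are hypothetical.
merged = {
    "npi": "1234567893",
    "address": [
        {"value": "100 Main St, Springfield", "source": "state:IL", "guessed": False},
        {"value": "100 Main Street",          "source": "state:MO", "guessed": True},
    ],
}

def slim_view(record):
    """Collapse each multi-valued field to its first non-guessed value."""
    out = {"npi": record["npi"]}
    for field, candidates in record.items():
        if isinstance(candidates, list) and candidates:
            preferred = next((c for c in candidates if not c.get("guessed")), candidates[0])
            out[field] = preferred["value"]
    return out

print(slim_view(merged))  # {'npi': '1234567893', 'address': '100 Main St, Springfield'}
```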

I feel somewhat uneasy just asking the community of data scientists about these issues, but it ends up being a pretty big deal in the long run. We hope to maintain this as an open dataset, which means that we want people to be able to rely on these file formatting decisions for as long as possible (we will be versioning the format). Eventually people will start to merge this dataset with DNA and phenomic data, including geo-aware population health data. That is going to be a pretty complicated process, and I really want to save headaches for the people involved in this kind of work.

Also important is the size of this dataset. The core data file is an almost 5 GB CSV file. I expect the state files to be around 1 GB each. Add all of that up, add JSON/XML formatting, and you are talking about a lot of space. How can we choose a format that will make working with this dataset easier? So far, we have seen lots of cool stuff because we kept things simple for data scientists.
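One way to keep the size manageable, whatever format we pick, is to make sure everything can be processed as a stream. As a rough sketch (the file names are just examples), converting a multi-gigabyte CSV to newline-delimited JSON row by row never requires holding either file in memory:

```python
import csv
import json

# Sketch: convert a large CSV to newline-delimited JSON one row at a time,
# so neither file ever has to fit in memory. File names are examples only.
def csv_to_ndjson(csv_path, ndjson_path):
    with open(csv_path, newline="") as src, open(ndjson_path, "w") as dst:
        for row in csv.DictReader(src):
            dst.write(json.dumps(row) + "\n")

# csv_to_ndjson("npi_core.csv", "npi_core.ndjson")
```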

I would like to open the floor to comments (on this blog post), which the DocGraph project will be taking very seriously. When you comment, if you could mention other open projects that have solved similar problems, and how they did it, that would be great. It would also be great to have links to any relevant SQL or document-based layout standards that might help us. If you are going to provide an academic reference, please also let us know if you have any experience working with those standards. While we really want to solve this problem in a parsimonious way, we do have practical usability as our primary aim here. Thanks for your help in advance.
