- Cello CAD — a Verilog-like compiler that emits DNA sequences. The GitHub repo has more, and a Science paper is forthcoming.
- Privacy-Preserving Read Mapping Using Locality Sensitive Hashing and Secure Kmer Voting — cryptographically preserves privacy when using cloud servers for read alignment as part of genome sequencing.
- How to Network in Five Easy Steps (Courtney Johnston) — aimed at an arts audience, but just as relevant to early-career tech folks.
- Quantified Baby — The idea of self-tracking for children raises thorny questions of control and consent, Nafus said. Among hard-core practitioners, the idea has not really taken off, even as related products have started hitting the market.
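To make the read-mapping idea above concrete, here is a minimal, illustrative sketch — not the paper's actual protocol — of MinHash-style locality-sensitive hashing over k-mers, the building block that lets similar reads map to similar signatures so a server can compare them without seeing raw sequence content. All function names and parameters here are hypothetical.

```python
# Hedged sketch: MinHash LSH over k-mers using only the standard library.
import hashlib

def kmers(seq, k=8):
    """All overlapping k-mers of a sequence, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(seq, k=8, num_hashes=16):
    """For each of num_hashes seeded hash functions, keep the minimum
    hash value over the sequence's k-mers."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha256(f"{seed}:{km}".encode()).digest()[:8], "big")
            for km in kmers(seq, k)))
    return sig

def estimated_similarity(sig_a, sig_b):
    """Fraction of matching signature slots approximates Jaccard similarity
    of the underlying k-mer sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

read = "ACGTACGTTTGACCAGT"
near = "ACGTACGTTTGACCAGA"   # one trailing mismatch
far  = "TTTTGGGGCCCCAAAATT"
print(estimated_similarity(minhash_signature(read), minhash_signature(near)))
print(estimated_similarity(minhash_signature(read), minhash_signature(far)))
```

Two nearly identical reads score far higher than two unrelated ones, which is exactly the property a privacy-preserving aligner can exploit: it compares signatures, not sequences.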
Bio-IT World shows what is possible and what is being accomplished
If your data consists of one million samples, but only 100 have the characteristics you’re looking for, and if each of the million samples contains 250,000 attributes, each of which is built of thousands of basic elements, you have a big data problem. This is the kind of challenge faced by the 2,700 Bio-IT World attendees, who discover genetic interactions and create drugs for the rest of us.
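The scale in the numbers above is worth spelling out. A quick back-of-envelope calculation (the per-element sizes are assumptions — "thousands" is taken as 2,000 and one byte per basic element — since the article gives only the counts):

```python
# Back-of-envelope arithmetic for the scale described above.
samples = 1_000_000
matching = 100
attributes_per_sample = 250_000
elements_per_attribute = 2_000   # assumption: "thousands" ~ 2,000
bytes_per_element = 1            # assumption: one byte per basic element

total_elements = samples * attributes_per_sample * elements_per_attribute
total_bytes = total_elements * bytes_per_element
print(f"needle-to-haystack ratio: 1 in {samples // matching:,}")
print(f"raw data volume: ~{total_bytes / 1e12:.0f} TB")
```

Even with conservative assumptions, that is a 1-in-10,000 search through roughly half a petabyte of raw data — well past what a single server handles comfortably.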
Often they are looking for rare (orphan) diseases, or for cohorts who share a rare combination of genetic factors that require a unique treatment. The data sets get huge, particularly when the researchers start studying proteomics (the proteins active in the patients’ bodies).
So last week I took the subway downtown and crossed the two wind- and rain-whipped bridges that the city of Boston built to connect to the World Trade Center. I mingled for a day with attendees and exhibitors to find what data-related challenges they’re facing and what the latest solutions are. Here are some of the major themes I turned up.
Open source, distributed computing tools speed up an important processing pipeline for genomics data
As open source, big data tools enter the early stages of maturation, data engineers and data scientists will have many opportunities to use them to “work on stuff that matters”. Along those lines, computational biology and medicine are areas where skilled data professionals are already beginning to make an impact. I recently came across a compelling open source project from UC Berkeley’s AMPLab: ADAM is a processing engine and set of formats for genomics data.
Second-generation sequencing machines produce more detailed and thus much larger files for analysis (250+ GB file for each person). Existing data formats and tools are optimized for single-server processing and do not easily scale out. ADAM uses distributed computing tools and techniques to speed up key stages of the variant processing pipeline (including sorting and deduping):
Very early on, the designers of ADAM realized that a well-designed data schema (that specifies the representation of data when it is accessed) was key to having a system that could leverage existing big data tools. The ADAM format uses the Apache Avro data serialization system and comes with a human-readable schema that can be accessed using many programming languages (including C/C++/C#, Java/Scala, PHP, Python, Ruby). ADAM also includes a data format/access API implemented on top of Apache Avro and Parquet, and a data transformation API implemented on top of Apache Spark. Because it’s built with widely adopted tools, ADAM users can leverage components of the Hadoop (Impala, Hive, MapReduce) and BDAS (Shark, Spark, GraphX, MLbase) stacks for interactive and advanced analytics.
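To give a feel for the transform stages mentioned above, here is a toy, single-machine sketch of sorting reads by genomic position and removing duplicates — plain Python standing in for the real Spark API, with a hypothetical minimal record (real ADAM records carry far more fields, and real duplicate marking also weighs mapping quality):

```python
# Hedged sketch of two ADAM-style pipeline stages: sort and dedupe.
from collections import namedtuple

# Hypothetical minimal read record for illustration only.
Read = namedtuple("Read", ["name", "contig", "start", "sequence"])

def sort_reads(reads):
    """Sort by (contig, start), as a distributed shuffle-and-sort would."""
    return sorted(reads, key=lambda r: (r.contig, r.start))

def dedupe_reads(reads):
    """Keep one read per (contig, start, sequence) -- a crude stand-in
    for duplicate marking."""
    seen, unique = set(), []
    for r in reads:
        key = (r.contig, r.start, r.sequence)
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

reads = [
    Read("r3", "chr1", 500, "ACGT"),
    Read("r1", "chr1", 100, "TTGA"),
    Read("r2", "chr1", 100, "TTGA"),   # duplicate of r1's position/sequence
]
pipeline = dedupe_reads(sort_reads(reads))
print([r.name for r in pipeline])      # → ['r1', 'r3']
```

The point of ADAM's design is that, by expressing these stages against Spark and storing records in Avro/Parquet, the same logic scales from a laptop to a cluster without rewriting the pipeline.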
What is needed for successful reform of the health care system?
Here’s what we all know: a data-rich health care future is coming our way, and what it will look like, in broad outline. Health care reformers have learned that no single practice will improve the system. All of the following, which were discussed at O’Reilly’s recent Strata Rx conference, must fall into place.
A video interview with Colin Hill
Last month, Strata Rx Program Chair Colin Hill, of GNS Healthcare, sat down with Dr. Dennis Ausiello, Jackson Professor of Clinical Medicine at the Harvard Medical School, Co-Director at CATCH, Pfizer Board of Directors Member, and Former Chief of Medicine at the Massachusetts General Hospital (MGH), for a fireside chat at a private reception hosted by GNS. Their insightful conversation covered a range of topics that all touched on or intersected with the need to create smaller and more precise cohorts, as well as the need to focus on phenotypic data as much as we do on genotypic data.
The full video appears below.
Design's role in genomics and synthetic biology, robots taking our jobs, and scientists growing burgers in labs.
On a recent trip to our company offices in Cambridge, MA, I was fortunate enough to sit down with Jonathan Follett, a principal at Involution Studios and an O’Reilly author, and Mary Treseler, editorial strategist at O’Reilly. Follett is currently working with experts around the country to produce a book on designing for emerging technology. In this podcast, Follett, Treseler, and I discuss the magnitude of the coming disruption in the design space. Some tidbits covered in our discussion include:
- Design’s increasing role in genomics and synthetic biology. (For more on the genomic/synthetic biology space, here’s a recent Wired video interview with Craig Venter.)
- Robots taking our jobs, and what we humans will do for work.
- Embedded sensor networks and connected environments — soon, we’ll never get lost in a building again.
- Cross-pollination of industries to inform and evolve our emerging connected environments, such as the cross-disciplinary nature of the Wyss Institute for Biologically Inspired Engineering at Harvard.
- Approaching political policy as a design problem — politicians could benefit from design theory and rapid prototyping techniques found in design and manufacturing fields.
- Scientists growing burgers in labs.
And speaking of that lab burger, here’s Sergey Brin explaining why he bankrolled it: