Beyond lab folklore and mythology

What the future of science will look like if we’re bold enough to look beyond centuries-old models.


Editor’s note: this post is part of our ongoing investigation into synthetic biology and bioengineering. For more on these areas, download the latest free edition of BioCoder.

Over the last six months, I’ve had a number of conversations about lab practice. In one, Tim Gardner of Riffyn told me about a gene transformation experiment he did in grad school. As he was new to the lab, he asked two more experienced scientists for their protocol: one said it must be done at exactly 42 °C for 45 seconds; the other said exactly 37 °C for 90 seconds. When he ran the experiment, Tim discovered that the temperature actually didn’t matter much: a broad range of temperatures and times would work.

In an unrelated conversation, DJ Kleinbaum of Emerald Cloud Lab told me about students who would only use their “lucky machine” in their work. Why, given a choice of lab equipment, did one of two apparently identical machines give “good” results for some experiment, while the other one didn’t? Nobody knew. Perhaps it was the tubing that connected the machine to the rest of the experiment; perhaps it was some valve somewhere; perhaps it was some quirk of the machine’s calibration.

The more people I talked to, the more stories I heard: labs where the experimental protocols weren’t written down, but were handed down from mentor to student. Labs where there was a shared common knowledge of how to do things, but where that shared culture never made it outside, not even to the lab down the hall. There’s no need to write down or publish stuff that’s “obvious” or that “everyone knows.” To someone more familiar with literature than with biology labs, this behavior was immediately recognizable: we’re in the land of mythology, not science. Each lab has its own ritualized behavior that “works.” Whether it’s protocols, lucky machines, or common knowledge that’s picked up by every student in the lab (but which might not be the same from lab to lab), the process of doing science is an odd mixture of rigor and folklore. Everybody knows that you use 42 °C for 45 seconds, but nobody really knows why. It’s just what you do.

Despite all of this, we’ve gotten fairly good at doing science. But to get even better, we have to go beyond mythology and folklore. And getting beyond folklore requires change: changes in how we record data, changes in how we describe experiments, and perhaps most importantly, changes in how we publish results.

There are many variables in a science experiment, many more than can be collected in a lab notebook. That’s the problem, but it’s also the solution. In the 1950s, scientists were limited in the data they could record. They had that notebook, and little more, and their analytical tools were limited to what could be done on a slide rule or a mechanical desktop calculator. (Indeed, the historical origin of statistics has nothing to do with “big data.” Statistics developed as a way of understanding variations in results when data was scarce, where a few dozen data points was a lot.) In the 70s, computers started to make their way into laboratories, and we started to build interfaces to collect data from experiments. We still couldn’t collect much data, by modern standards: a computer with 256K of RAM and 2 MB of disk space was still a big deal. But it was still more than you could scribble down in a notebook.

30-odd years later, we have experiments that throw off terabytes and even petabytes of data per day, and we have the tools to analyze that data. But we have problems with reproducibility, arguably more problems than we had years ago. I don’t believe the problem is that we don’t sacrifice enough chickens. Rather, we haven’t been radical enough about what data to collect. We’ve automated some of the data collection, but we still don’t collect all (or even most) of the data that’s potentially available: intermediate results from each step, calibration data from each piece of equipment, detailed descriptions of the process, and the configuration of every piece of equipment. Kleinbaum told me that some experiments are sensitive to whether you use glass or plastic test tubes. That makes sense: it’s easy to scratch plastics, and microbes can hide in scratches that are invisible to the human eye. Plastics can also release trace amounts of gases long after they’re manufactured, or absorb some of the compounds you want to measure; for some experiments that matters, for others it doesn’t. Few scientists would consider the test tubes used, the pipettes, and so on, as part of the experimental data. That must change if we’re going to solve our problems with reproducibility.
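What would it look like to treat consumables, equipment identity, and intermediate results as first-class experimental data? Here is a minimal sketch; the class names, field names, and values are all hypothetical illustrations, not any real lab-information system:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Equipment:
    """Enough identity to explain a "lucky machine" after the fact."""
    name: str
    serial_number: str
    last_calibrated: str  # ISO date of the last calibration

@dataclass
class ExperimentRecord:
    """Record everything that might matter, not just the headline result."""
    protocol_id: str
    equipment: list        # Equipment instances used in this run
    consumables: dict      # e.g. glass vs. plastic tubes, pipette-tip lots
    intermediate_results: dict = field(default_factory=dict)

record = ExperimentRecord(
    protocol_id="transformation-v3",  # hypothetical identifier
    equipment=[Equipment("water bath A", "WB-0042", "2014-06-01")],
    consumables={"tubes": "glass", "pipette_tip_lot": "LOT-17"},
)
record.intermediate_results["post_shock_od600"] = 0.42  # illustrative value
print(json.dumps(asdict(record), indent=2))
```

Stored this way, the choice of glass versus plastic tubes survives into the published record instead of living only in someone’s memory.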

In addition to the data, we also have to record exactly how experiments are performed, in detail. Everybody I talked to had stories about protocols that were part of their labs’ oral culture: you did things a certain way because that’s how you did it. It worked; no need to belabor the point. A recent article asks, in frustration, “Never mind the data, where are the protocols?” Having the data means little if you don’t have the methods by which the data was generated. The best way to record the protocols isn’t by scribbling in lab notebooks (or their virtual equivalents), but by implementing the experiment in a high-level programming language. The program is then a complete description of how the experiment was performed, including setup. The importance of this step isn’t that the experiment can be run on lab robots, though that is important in itself; programming forces you to describe the process precisely and completely, in a standardized language that is meaningful in different contexts and different labs. Thinking of an experiment as a program also allows the use of design tools that make it easier to think through the entire process, and to incorporate standard protocols (you might think of them as subassemblies or subroutines) from other experiments, both in your lab and in others.
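To make “experiment as program” concrete, here is a sketch in ordinary Python; the function names and parameters are invented for illustration and are not Antha or any real lab language:

```python
def heat_shock(sample, temperature_c=42.0, duration_s=45.0):
    """One protocol step with its parameters made explicit and logged."""
    log = {"step": "heat_shock",
           "temperature_c": temperature_c,
           "duration_s": duration_s}
    # In a real lab language this would drive a water bath or thermocycler;
    # here we only record exactly what was asked for.
    return sample, log

def transformation(sample, temperature_c=42.0, duration_s=45.0):
    """A whole protocol composed from steps, the way programs use subroutines."""
    steps = []
    sample, log = heat_shock(sample, temperature_c, duration_s)
    steps.append(log)
    # Recovery, plating, and so on would append their own logs here.
    return sample, steps

# The "42 C for 45 seconds" folklore becomes a parameter sweep:
runs = [transformation("competent cells", temperature_c=t)[1]
        for t in (37.0, 42.0)]
```

The point isn’t the code itself, but that every run now carries a complete, machine-readable account of what was done, and a “does the temperature actually matter?” question becomes a loop rather than an argument between mentors.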


We’re still missing an important component. Science has always been about sharing, about the flow of ideas. For the first few centuries of scientific research, publishing meant paper journals, and those are (by nature) scarce commodities. You can’t publish a terabyte of data in a journal, nor can you publish a long, detailed, and extremely precise description of an experiment. You can’t publish the software you used to analyze the data. When you’re limited to paper, about all that makes sense is to publish a rough description of what you did, some graphs of the data, and the result. As our experiments and analyses get more complex, that’s no longer enough. In addition to collecting much more data and describing the experiments as detailed programs, we need ways to share the data, the experimental processes, and the tools to analyze that data. That sharing goes well beyond what traditional scientific journals provide, though some publications (notably F1000Research and GigaScience) are taking steps in this direction.

To understand what we need to share, we need to look at why we’re sharing. We’re not sharing just because sharing is a good in itself, and it’s what our nursery school teachers encouraged us to do. Sharing is central to the scientific enterprise. How can anyone reproduce a result if they don’t know what you’ve done? How can they check your data analysis without your data or your software? Even more importantly, how can they look at your experimental procedures and improve them? And without sharing, how can you incorporate protocols developed by other researchers? All science builds on other science. For 400 or so years, that building process was based on sharing results: the limitations of print journals and face-to-face meetings made it difficult to do more.

Fortunately, we’re no longer bound by paper-based publishing. Amazon Web Services makes it possible to build huge, publicly accessible data archives at relatively low cost. Many large datasets are becoming available: for example, the Protein Data Bank, and the raw data from the Large Hadron Collider. Figshare is a cloud-based service for managing and publishing scientific datasets. Sharing protocols and the software used for data analysis is a different problem, but it’s also been solved. GitHub is widely used by software engineers, and provides an excellent model for sharing software in ways that allow others to modify it and use it for their own work. When we have languages for describing experiments precisely (Antha is a very promising start), using tools like GitHub to share protocols will become natural.


Once the data and protocols are available, they have to be searchable. We tend to view scientific results as wholes, but any experiment is made up of many small steps. It’s possible that the most useful part of any experiment isn’t the “result” itself, but some intermediate step along the way. How did someone run a particular reaction, and how does that relate to my experiment? I might have no interest in the overall result, but information about some particular part of the experiment and its intermediate results might be all I need to solve an entirely different problem. That’s how we’re going to get beyond lab folklore: by looking into other labs and seeing how they’ve solved their problems. With many scientists running the same reaction in different contexts, collecting and publishing the data from all their intermediate steps, it should be possible to determine why a specific step failed or succeeded. It should be possible to investigate what’s different about my experiment: what changes (intentional or not) have I made to the protocol?
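With protocols stored as structured data, “what’s different about my experiment?” becomes a mechanical question. A minimal sketch, with parameter names invented for illustration:

```python
def protocol_diff(mine, theirs):
    """Return every parameter whose value differs between two protocols."""
    keys = set(mine) | set(theirs)
    return {k: (mine.get(k), theirs.get(k))
            for k in sorted(keys)
            if mine.get(k) != theirs.get(k)}

mine      = {"temperature_c": 42, "duration_s": 45, "tubes": "plastic"}
published = {"temperature_c": 42, "duration_s": 45, "tubes": "glass"}
print(protocol_diff(mine, published))
# → {'tubes': ('plastic', 'glass')}
```

Run against a shared repository of protocols and their intermediate results, a comparison like this is how you discover that the step you changed without thinking, the tubes, is the one that mattered.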

Titus Brown’s work on data-driven discovery proposes to do a lot of what I’m talking about, and more; it’s a glimpse at the future of science. Titus is building a distributed database for scientific data, where data can be stored, tagged, and searched, even prior to publication. Brown notes: “We need to incentivise pre-publication sharing by making it useful to share your data. We can do individual analyses now, but we’re missing the part that links these analyses to other data sets more broadly.” Helping scientists to analyze their own data is important, but the goals are much bigger: analysis across data sets, large-scale data mining, “permissionless innovation.” This is what the future of science will look like if we’re bold enough to look beyond centuries-old models.

The way we do science is changing. Experiments are getting much more complex; the data they produce is growing by orders of magnitude; and the methods that worked 50 or 60 years ago, when we barely knew what DNA was, are far from adequate now that we’re learning how to write the genetic code. Fortunately, we know how to get beyond mythology and folklore. Many of the tools we need exist, and the ones that don’t yet exist are being built. We’re creating a new generation of scientists who can collect all the data, share everything, and build the tools needed to facilitate that sharing. And when we’ve done that, we will have achieved a new scientific revolution — or, more precisely, fulfilled the promise of the first one.

Cropped image on article and category pages by Alejandro Hernandez on Flickr, used under a Creative Commons license.

  • Lenny Teytelman

    This is a terrific article! As scientists, we routinely curse as we try to reproduce work from the pervasively atrocious “Materials & Methods” sections. And still, most of us sin and perpetuate the “as reported elsewhere” in the descriptions. I think a major cause of this is that good methods, just like good code, take a lot of time. Another is that most journals still have space constraints, forcing non-scientific reporting.
    We have been building a central protocol repository for the past two years. Our hope is that if the scientist’s protocol is already in our web/mobile tool, and we make it easy to generate a detailed methods section with the click of a button, this may improve the M&M sections. For journals with space limits, a simple link to the published method, as used in that particular research group, will facilitate better reporting and reproducibility. But we will only succeed if we make it easier, rather than harder, for the researcher to write the methods section.

    • LabArchives

      I agree this is a great article. Our LabArchives Electronic Lab Notebook also includes a feature to make any data public (either via a URL or a DOI as a “persistent” identifier). We should talk about collaborating to enable scientists to import your protocols into our ELN. Among other things, LabArchives enables users to link experiments to a specific version of a protocol, so that the protocol can be modified over time while it remains clear exactly what was used to conduct the research.

  • Peter Apps

    The points that you make are good ones, but I would add one other. It has become possible to generate and record more and more data using automated instrumentation to do what was once done by human effort. Because it is easy to do more measurements, it has become correspondingly tempting to measure more and more things, tip the results into the black bucket of multivariate statistics, stir with a general model, and hope that something sensible floats to the top. When making the measurements involved a lot of work, it made sense to think carefully about what needed to be measured to answer an important question: to actually test a hypothesis, rather than substitute curve fitting for critical thought, as has become increasingly fashionable.

  • True story about oral transmission of protocols: we used to use a method of isolating and concentrating parasite eggs from tissue and feces that had been developed in-house back in the early ’60s. As people moved on, they took it to other labs, and over time it was given a passing mention here and there in various journals under the “materials and methods” section. Most of us thought that it was “standard,” and must have been published at some point. Imagine our shock in the mid-’80s when we opened up the latest journal and found a paper on a “novel method of isolating and concentrating parasite eggs.” It was exactly the same method we’d been using for decades; no one had thought it was “novel enough” to publish.

  • TiberiusB

    Very interesting article. We have been thinking about all this INTENSIVELY for more than four years. We are an open network of scientists, engineers, and others who design open source scientific instruments. We noticed early on that we needed to join the open science movement, and realized that open science was not just about sharing scientific papers. It is even more than sharing protocols and raw data: it is also about hardware that allows scientists to capture more data and to socialize their work. Think about your smartphone. I don’t even know why we still call them “phones,” because they are in fact universal machines. Among many other things, that device allows you to socialize your life: snap a picture or shoot a short video and SHARE it to Facebook or YouTube, where your social life now extends. Scientific instruments need to allow scientists to share their experiences with others, live.
    Here’s a presentation I did not too long ago about the open science movement.

    feel free to contact me or SENSORICA if you want to explore this even further!

  • James

    I really like this article. It discusses many points that I have thought about for a few years. Personally, I am interested to see how the internet of things and blockchain technologies will impact lab work. I imagine a day when I go into the lab, press the “record” button, and generate a file that tracks my every movement, like recording a macro. These files could be used to train new students and to compare different protocols with a detailed level of precision.

    I think the world of biological sciences could learn a lot from fields in the earth sciences. In earth sciences, the phenomenon under study is typically only a minor component of the total variability, and there is no such thing as a replicate or control. For example, if you are investigating soil pH as a function of its composition, you must contend with annual cycles such as seasonal climate; spatial influences such as position relative to roads and streams; and daily variations such as weather, sun angle, vegetation, and many others. There are no two plots of land that are identical, and there will never be another moment in time when all of the conditions are similar to when your sample was taken. To cope with this, we tend to measure EVERYTHING, no matter how unrelated it may be. Much of this data is considered metadata and is not really investigated, but it is often included as part of the site and sampling description.

    Another major difference is in the culture. The term “scoop” was not in my vocabulary before working in a biochemistry lab. In environmental sciences, it is almost impossible for a single lab to unravel a causal mechanism or make any strong conclusions. If one group makes an interesting finding on their site, they will need several other sites to conduct similar measurements and verify the findings. Only after this process occurs does the community accept the results as meaningful. As this isn’t an instantaneous point in time or a single group’s effort, credit is much more evenly distributed. This process encourages open collaboration and disclosure of more details. In biological sciences, the first to publish gets the bulk of the credit which has led to a major wealth disparity between research labs. As a less powerful lab, fuzzy protocols are your best protection from getting scooped. It is a way of staying one step ahead. I truly believe that the problems of reproducibility would change if the culture were different and I commend those with this goal in mind.