The dummy's guide to engineering genes

Note: Yesterday we began Quinn Norton’s five-part series on Drew Endy and synthetic biology with “Everything you needed to know about human-created life forms but were afraid to ask.”

Photo courtesy of Mike & Amanda Knowles, via flickr.

Dr. Drew Endy’s approach to the next generation of biotechnology depends on engineers, programmers, hackers, social theorists, lawyers, and others to inform biology. He believes we can make genetic engineering, like computing, part of every facet of our lives, changing the way humans do their business.

He seeks to put synthetic biology into the hands of the interested, not merely the professional. The potential is to widen the range of goals, to extend this emerging tool to many disciplines.

The key, says Endy, is what computer scientists call abstraction.

Fundamental to the creation of modern software was the idea that no one should have to type in the same monotonous stuff twice. Once something existed, it should be reused, not re-created. More important, once it was done, the next programmer didn’t have to know how it worked in order to use it. The common wisdom became that no one should have to know how a computer works to make it do entirely new things.
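The principle Endy borrows can be shown in a few lines of code (a hypothetical toy example, not from the article): the tedious details are written once, given a name, and reused by callers who never look inside.

```python
# Toy illustration of software abstraction: the monotonous details live
# in one place, behind a name, and get reused rather than retyped.

def checksum(data: bytes) -> int:
    """Sum the bytes modulo 256 -- the tedious part, written exactly once."""
    return sum(data) % 256

# Callers reuse the name without knowing (or caring) how it works inside.
print(checksum(b"hello"))
print(checksum(b"world"))
```

The caller deals only with the interface (`checksum`), never the arithmetic, which is exactly the relationship Endy wants between a biological designer and raw DNA.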

Also: Dr. Endy explains Abstraction (mp3, 4.9 MB) and Standards (mp3, 3.1 MB) for synthetic biology.

The language of genetic engineering is out of reach for most people, but the idea of making something do what they want is not. Along with famed MIT computer engineer Tom Knight, Endy is trying to bury DNA and its nucleotides down the same deep hole that swallowed the 1s and 0s we users never have to think about. To do this, they are creating standards and a vocabulary that define a DNA language for programming organisms. Assembling basic genetic sequences into patterns that can be strung together and reused lets people with far less sophistication than today’s genetic engineers create cells that those engineers would never have dreamed up. This puts the power of the Ph.D.s into the hands of the rest of us, and what was a multi-million-dollar research task can become a high school science fair project. This path echoes the trajectory of computers from their enormous and expensive infancy to their penny-cheap microscopic adulthood.

Tom Knight is one of the greybeards who watched much of that process and came to understand how the history of computers and the internet unfolded. Knight recognized early that to do the same in biology, he’d need a reliable way to categorize bits of DNA by what they did rather than by their sequence, creating the syntax of the programming language. In 2001, Knight invented the first standard for creating a genetic programming language and called it BioBricks. To build with BioBrick parts, one had to learn only the basics of how they hooked together and what they did. From there it was a process of building up DNA like Lego.
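A minimal sketch of what such an abstraction layer might look like (all part names, sequences, and the junction string below are invented placeholders, not real BioBrick data or chemistry): parts are catalogued by what they do, and a standard composition rule hides the raw nucleotides.

```python
# Toy model of BioBrick-style assembly. Every name and sequence here is
# an invented placeholder for illustration; real parts are catalogued in
# the Registry of Standard Biological Parts.

PARTS = {
    "promoter":   "TTGACA",  # placeholder: switches transcription on
    "rbs":        "AGGAGG",  # placeholder: ribosome binding site
    "reporter":   "ATGCGT",  # placeholder: start of a fluorescent protein
    "terminator": "GCGCGC",  # placeholder: stops transcription
}

JUNCTION = "TACTAG"  # stand-in for the short scar a standard assembly leaves

def assemble(*part_names: str) -> str:
    """Compose parts in order, joined by the standard junction."""
    return JUNCTION.join(PARTS[name] for name in part_names)

# "Make a cell glow," described by function rather than by raw DNA:
device = assemble("promoter", "rbs", "reporter", "terminator")
print(device)
```

The point of the sketch is the division of labor: the user composes by part name, while the sequence-level details stay behind the `assemble` interface, just as 1s and 0s stay behind a compiler.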

No one has ever done anything like this. “It might not work,” Endy freely admits. But Endy sees value even in the failure of abstraction. “The ‘worst case scenario’ for synthetic biology is that we won’t be able to build anything useful, but our failures will highlight the most relevant unknowns in biology, [what] we should figure out next.” He adds that we are already beyond that scenario, but the further out we go, the more issues of society intrude on issues of science. The history of GMOs suggests biologists should try to keep the public informed about where people might encounter their work.

Tomorrow: The legal status of genes may ultimately be even more contentious.

  • How do we reconcile “might not work” (e.g., we might not know what we’re talking about) with the prolific and lax handling of these procedures and materials? I mean, seeing as how we’re energetically tweaking the matrix of all life on earth. Not to be alarmist, or anything.


  • @Thomas, what makes you talk about “lax handling of these procedures and materials”? Please back up with a concrete example.

  • “Lax handling” example:

I got to tour labs where, with a sleight of hand, I’d have had no trouble causing problems. The granting of a tour was correct procedure. What I saw was not.

The same lab has freezers full of random GM life forms under poor control: the inventory exists mostly in theory, to pick a small point. Much more seriously, there is no realistic protection against predictable natural disasters.

Anecdotal reports of protocol handling from two labs are disappointing, to put it mildly. Everyone seems to rely on anecdotal hunches from the 1970s about the fragility of the cells being modified. Everyone seems resigned to the impossibility of imposing stronger handling standards. There is no active monitoring of environmental impact, even though the same facility requires things like radiation badges.

Let’s put it this way: I passed through environments where it was very, very clear I was *likely* to come into contact with things like GM E. coli. If someone wants to stick a cotton swab up the anterior end of my alimentary tract and sequence what they find, looking for evidence of careless handling, and I am convinced they are qualified and intending to do such a study, I’m game.

    The students, over beers, treat safety as either an impolitic subject or else one to make sarcastic jokes about. They feel powerless and regard the matter as purely the concern of their elders.

    What are the odds on setting the atmosphere on fire?


I agree that there are certainly things many labs could do better. However, many of the genes added are designed to do things that are easily observed, like fluorescing when certain conditions are met. They don’t have genes added that do something dangerous. And the labs that are smart engineer their organisms so they can only survive on specific media. In the case of Drew Endy’s early work, much of it was on how well bacteriophages survived when their genes were separated out, so he certainly wasn’t relying on stuff from the 1970s in doing experiments that test exactly that.

I think these present much less danger than, say, the chemicals we have on our shelves and use daily that act as hormones in animals, resulting in fish kills where entire rivers and lakes have all their fish become unisex and unable to reproduce. Somehow I feel the safety measures put around GM bacteria are slightly higher than those put around chemicals we know are dangerous.

(Also, I have presumed here that you mean GM microorganisms, which I refer to as bacteria, even though much of Drew Endy’s work, the last time I looked, was on bacteriophages. Since you talk about labs, I presume you are not discussing the GM organisms we eat daily, which now make up the majority of crops in the US.)

  • And the labs that are smart make it so they can only survive on specific media.

    A popular hypothesis. Nobody has even looked at the environment around long-running labs.

I don’t mean to shirk uptake on the rest and may return to it. As I said to Drew in the part 1 thread, it’s late and I’m tired.


  • Drew Endy writes:
    However, many of the genes added are designed to do things that are easily viewed, like fluoresce when certain conditions are met. They don’t have genes added that do something dangerous.

    Normally only a small fraction of the bacteria being modified will take up the genes you’re trying to add, so you include something that lets you kill the other unmodified bacteria — typically this is an antibiotic-resistance gene, so that only your modified bugs will survive when you poison the culture with an antibiotic. In itself this might be considered dangerous, and it’s one of the reasons GM bacteria get bleached before dumping them down the drain.

    (My experience with these labs is probably even less than Tom’s, though.)

  • They don’t have genes added that do something dangerous

Sorry, that’s not known, and it’s unlikely to be true. They add genes that, whatever other effects they might have, sometimes do something recognizable. It’s not well known at all what the addition of these genes does in general.