Alternative Life Styles
Natural History, May 2001
Do universal laws of evolution exist? There’s no bigger question in biology, and none harder to answer. To discover a universal rule, you need more than a single case, and when it comes to life, we’re stuck with a data set of one. All life on earth descends from a common ancestor, with every species storing its genetic information in DNA (or, in the case of some viruses, RNA). If scientists someday discover another form of life, perhaps lurking on a moon of Jupiter or in some distant solar system, they may be able to compare its evolution to our own and see if the two histories have followed the same playbook. But such an opportunity may be a long way off.
In 1992 the eminent biologist John Maynard Smith declared that the only way out of this quandary was to build a new form of life ourselves. “We badly need a comparative biology,” he wrote. “So far, we have been able to study only one evolving system, and we cannot wait for interstellar flight to provide us with a second. If we want to discover generalizations about evolving systems, we will have to look at artificial ones.”
In the nine years since Maynard Smith’s call, computer scientists have done their best to answer it. They’ve tried to create a menagerie of artificial life-forms, from self-replicating software to “intelligent” robots, and they’ve set off plenty of breathless hype in the process. Some claim, for example, that computer processing speeds are climbing so quickly that within fifty years, robots with superhuman intelligence will be walking among us. Neuroscientists have countered that brains are much more than just masses of neurons: they consist of complex networks that communicate with one another using dozens of chemical signals. Even the simplest of these networks can take decades to decipher. Just figuring out the system of thirty neurons that lobsters use to push food through their stomachs has taken more than thirty years and the collective labor of fifteen research teams. At that rate, millions of years could pass before scientists fully comprehend the workings of the 100 billion neurons that make up the human brain.
But not all is hype and skepticism. Suitably humble experts on artificial life and suitably open-minded biologists are starting to work together. One promising collaboration is being led by Chris Adami, a physicist at the California Institute of Technology; Charles Ofria, a former student of Adami's who is now at Michigan State University; and Richard Lenski, a microbiologist at Michigan State. Building on pioneering work by Tom Ray (now at the University of Oklahoma), Adami and Ofria have created computer programs (digital creatures) that behave in remarkably lifelike ways. And working with Lenski, they've shown that these creatures, which they call digitalia, evolve much the way biological life-forms do.
Each digitalian consists of a short program that can be run by a computer. The computer moves line by line through the program, methodically executing each command until it reaches the end, whereupon it loops back to the beginning and starts over. A program can reproduce by instructing the computer to make a copy of the program, and this duplicate then starts running on its own.
Adami and his colleagues conceive of the digitalia as organisms living on a two-dimensional plane divided up into thousands of cells. Each digitalian occupies a single cell, and when it reproduces, its offspring take up residence in the adjoining cells. Once a digitalian starts reproducing, its progeny can race across the plane like mold spreading over a slice of bread. (The researchers can watch their progress by means of a graphic display on a computer screen, although the screen itself isn't actually the habitat; there's no one-to-one correspondence between the pixels and the cells.)
Digitalia don’t simply replicate; they also evolve. Every time a digitalian replicates, there’s a small chance the copy will contain a mutation. Mutations in nature are random changes in a sequence of DNA; in the case of digitalia, mutations consist of certain kinds of random changes in a program. For example, the computer may copy part of a program twice instead of once or may switch one command for another.
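This copy-with-occasional-error process can be sketched in a few lines of Python. The instruction set below is made up for illustration; the real system uses its own low-level commands, and only substitution and duplication errors (the two kinds named above) are modeled here.

```python
import random

# Hypothetical toy instruction set; the real system's commands differ.
COMMANDS = ["nop", "inc", "dec", "copy", "jump"]

def replicate(program, mut_rate=0.01, rng=random):
    """Copy a program command by command. Each copied command has a small
    chance of being swapped for a random command (substitution) or of
    being written twice instead of once (duplication)."""
    child = []
    for cmd in program:
        r = rng.random()
        if r < mut_rate:                 # substitution error
            child.append(rng.choice(COMMANDS))
        elif r < 2 * mut_rate:           # duplication error
            child.extend([cmd, cmd])
        else:                            # faithful copy
            child.append(cmd)
    return child

# With the mutation rate set to zero, every copy is exact.
parent = ["copy"] * 11
child = replicate(parent, mut_rate=0.0)
```

Run over many generations, the rare imperfect copies are the raw material that selection then sorts through.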
As in the real world, most mutations are harmful to digitalia, inserting fatal bugs that prevent them from replicating. Other mutations have little or no effect, building up like junk through the generations. And some help digitalia replicate faster. Those so blessed come to dominate their artificial world, just as natural selection favors well-adapted biological life.
Adami and his associates have found that their digitalia consistently evolve in certain ways, ways that are similar to what biologists see in real life. In one experiment, they created several different strains of digitalia and let them evolve. They found that these programs consistently shrank down to sleek, short sequences of commands (as few as eleven command lines in some cases) that carried the minimum amount of information necessary for replicating. It takes less time to copy a short program than a long one, so the shorter the program, the more quickly it can multiply.
In the 1960s Sol Spiegelman and his colleagues at the University of Illinois got a similar result when they studied the evolution of RNA viruses. They put viruses into a beaker and supplied them with all the enzymes they needed to replicate. Twenty minutes later, the researchers transferred some of the newly replicated viruses to a new beaker and let them replicate again. After a few rounds, the scientists waited only fifteen minutes each time, and then only ten minutes, and finally only five. By the end of the experiment, the viral RNA had shrunk to 17 percent of its initial size. The viruses evolved into such small versions of themselves because they could shed genes they had used to invade and commandeer host cells. These had become unnecessary and now only slowed down the RNA's replication. As with digitalia, the most successful viruses under these conditions were the simplest.
Normally, however, life doesn’t exist in a test tube, with all its needs taken care of by a technician. Organisms have to eat, or photosynthesize, or somehow consume the energy and matter around them. To make digitalia more lifelike, Adami and his colleagues require them to eat to survive. Numbers are their food.
Each organism is supplied with a random sequence of 1's and 0's. Just as some bacteria eat sugar and transform it into useful proteins, digitalia are required to read these numbers and transform them into meaningful outputs. With the right combinations of commands, for instance, they can determine whether three numbers in a row are identical, or they can turn a string of numbers into its opposite ("10101" becoming "01010"). The scientists reward the digitalia for evolving the ability to do these tasks. The programs of these lucky organisms start running faster, and as a result, they multiply faster. Soon their numbers overwhelm the less capable digitalia.
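The two tasks mentioned are simple enough to write out directly. The sketch below shows what the evolved behavior computes, not how the digitalia compute it; the function names are illustrative, and in the real system such behavior has to be assembled from low-level commands by mutation and selection rather than written by hand.

```python
def invert(bits):
    # Turn a string of binary digits into its opposite: "10101" -> "01010".
    return "".join("1" if b == "0" else "0" for b in bits)

def three_identical(a, b, c):
    # Report whether three numbers in a row are identical.
    return a == b == c
```

The point of the experiment is precisely that no programmer supplies these solutions: organisms that stumble onto command sequences with the same input-output behavior are rewarded with faster execution, and so outbreed the rest.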
Under these complex conditions, the digitalia don't turn into stripped-down creatures. Instead, they evolve from simple replicators into sophisticated data processors that can crunch numbers in complicated ways. Human programmers, of course, can also write programs to carry out these tasks, but sometimes the digitalia evolve versions that are unlike anything ever conceived by a human designer.
Someday this sort of evolution may produce new kinds of efficient, crash-proof software. But Adami and Ofria are not interested in the commercial possibilities of digitalia; they're too busy working with Richard Lenski, comparing their artificial life to biological life.
The partnership began after Lenski heard Adami give a talk about his digitalia. Adami showed the audience a graph charting the creatures' replication rate. The line on the graph rose for a while before reaching a plateau and then rose to still higher plateaus in a series of sudden jerks. Lenski was astonished. He and his colleagues had observed how Escherichia coli bacteria evolved over thousands of generations, acquiring mutations that helped them consume sugar more efficiently and reproduce faster. When Lenski had charted their evolution, he had found the same punctuated pattern that Adami identified. Digitalia and E. coli apparently had some profound things in common. The two teams of scientists joined forces in 1998.
Since then, they have found more similarities between digitalia and biological organisms. In 1995 Lenski and a student, Mike Travisano, ran an experiment to gauge the importance of chance, history, and adaptation to the evolution of bacteria. From a single E. coli they cloned twelve populations, which they regularly supplied with the simple sugar glucose. Over the course of 2,000 generations, all the colonies evolved, becoming better and better adapted to the glucose diet. Then Lenski’s team switched the bacteria to a diet of a different sugar, maltose. Over the course of another 1,000 generations, the colonies adapted until they could grow almost as well on their new food.
But the evolution of the colonies was not just a simple story of adapting to food. Lenski and his colleagues also kept track of the size of the microbes as they adapted. Originally the colonies were identical, but by the time the scientists switched them from glucose to maltose, they had diverged into a range of different sizes. Then, as the bacteria adapted to their new diet, their size changed again. Some colonies changed from big to small, others from small to big. Overall, the researchers found, the bacteria’s adaptation to their diet had nothing to do with their size change. Chance mutations could alter cell size with little effect on their fitness, that is, their success in survival and replication.
Adami and a student, Daniel Wagenaar, recently converted Lenski's experiment into one they could run with their computer creatures. They created eight copies of a single program, which they used to seed eight separate digitalia colonies. The organisms were rewarded for mastering a set of logical operations, but once they had become well adapted, the researchers changed the reward system so that an entirely different set of operations was favored: a digital version of switching from glucose to maltose.
Adami and Wagenaar observed that digitalia could evolve quickly and thrive in new conditions, just as E. coli had. They also monitored the length of the programs, a trait that proved relatively unimportant to their fitness, just as cell size was for the bacteria. Adami and Wagenaar found that the evolution of a program's length was determined mainly by its history and by chance mutations, rather than by the pressure to adapt.
These parallel experiments suggest, once again, that artificial and biological life evolve according to at least some of the same rules. When it comes to traits that experience intense natural selection, such as the mechanisms for finding food or crunching numbers, the end results may erase much of a trait's previous history. But in the case of traits that experience only weak selection, such as the size of a bacterium or the length of a computer program, chance mutations can send evolution off in unpredictable directions, and their effects can linger for a long time as historical vestiges.
One of the great attractions of digitalia is that they're so much easier to work with than biological life. You can create billions of different digitalia strains and watch them evolve for thousands of generations in a matter of hours. And every step of that complicated journey is preserved on a computer, instantly available for study, making it possible to ask questions about digitalia that can't be addressed in ordinary experiments.
For example, biological evolution has produced structures and organisms of awesome complexity, from termite colonies to the human brain. But does this mean that evolution has been dominated by a steady trend of rising complexity over time? A long line of thinkers has claimed that it does; one recent example is Robert Wright, in his book Nonzero: The Logic of Human Destiny. Stephen Jay Gould, on the other hand, has argued that what some people may interpret as an overall trend toward complexity is really the random rise and fall of complexity in different branches of the tree of life.
Though fascinating, this debate has stalled because scientists have yet to settle on a definition of complexity in biology or on a way to measure its change. Complexity is not unique to biology, however. Mathematicians have found precise ways to measure the complexity of information, whether it's a picture of Jupiter transmitted by the Galileo probe or the sound of a friend's voice on the telephone. Since digitalia genomes are strings of commands (in other words, information), Adami and his associates have been able to adapt mathematical methods to measure digitalia complexity as well.
To gauge how much of the information in a digitalia program is vital to the organism’s survival, the researchers mutate each command in the program in every possible way and then see whether the organism can still function. A program may be stuffed with useless commands and turn out to be quite simple; even if you tamper with a lot of its code, it will still function. But another program of the same length may turn out to be complex, using most of its commands in precise ways that don’t tolerate much tinkering.
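The mutate-everything-and-count procedure described above can be sketched as follows. The `still_functions` test is a placeholder assumption, standing in for actually running the mutant program and checking whether it can still replicate, and the command list is again illustrative.

```python
def fraction_essential(program, commands, still_functions):
    """Mutate each position of a program to every alternative command and
    count how many of these single mutations break the organism. A higher
    fraction of broken mutants means more of the program's information
    is vital, i.e. the program is more complex by this measure."""
    tried = broken = 0
    for i, original in enumerate(program):
        for alt in commands:
            if alt == original:
                continue                      # skip the non-mutation
            mutant = program[:i] + [alt] + program[i + 1:]
            tried += 1
            if not still_functions(mutant):
                broken += 1
    return broken / tried
```

A program padded with useless commands scores low, since most tampering leaves it working; a program that uses every command in a precise way scores high, since nearly any change is fatal.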
Following this method, Adami and his coworkers have measured the complexity of digitalia colonies as they’ve evolved through 10,000 generations. Overall, the complexity consistently rises until it levels off. Its ascent is jagged but is an ascent nevertheless. For digitalia, at least, evolution does have an arrow pointing toward greater complexity.
As interesting as these results are, any application to biological life comes with some important caveats. For one thing, the scientists only measured the information that was contained in the digitalia programs, which is akin to biologists measuring the complexity of information encoded in a genome. There’s no simple equation that enables a biologist to use genetic complexity to calculate the complexity of the things a genome creates.
Another caveat to bear in mind is that the complexity of digitalia increases in a fixed environment; that is, the rewards for processing data don't change. In the natural world, conditions are always changing, with an endless flow of droughts, floods, outbreaks of disease, and other life-altering events. Every time conditions change, genes that were specialized to deal with the old conditions become useless. The obsolete genes may mutate or even disappear, and in the process, the complexity of a species' genome dwindles. Only as the species adapts to new conditions may complexity increase again.
In the natural world, the arrow of complexity may get turned back too often to have any significant effect on long-term evolution. But just finding that arrow is an admirable start. Indeed, everything about digitalia is, at the moment, a start: the start of a new kind of science and just maybe the start of a new kind of life.
Copyright 2001, Carl Zimmer. Reproduction or distribution is prohibited without permission.