Darwin in the Machine
John Koza hovers over a computer terminal in a cramped office a mile from the building where he keeps his invention machine. Seated at the terminal is a clean-cut researcher named Lee Jones, one of Koza's two employees. His other employee, Sameer Al-Sakran, leans over a second terminal, stroking his facial stubble.
Jones is reviewing one of the invention machine's latest accomplishments, which Koza is preparing to present at the annual Genetic and Evolutionary Computation Conference, familiarly called GECCO. In this instance, the machine has created a complex lens system that outperforms a wide-field eyepiece for telescopes and binoculars patented just six years ago by lens designers Noboru Koizumi and Naomi Watanabe, and which does so, moreover, without infringing on the Koizumi-Watanabe patent.
Jones calls up an optical simulator known as KOJAC. From a prescription (which numerically describes the curvature, thickness and glass type of lens components), KOJAC predicts how the compound lens will function in the real world. The numerous variables make the effect of simple changes difficult to predict. As a result, lens designers are a creative bunch, who depend as heavily on intuition as on knowledge.
What Koza has done is to automate the creative process. To begin, the invention machine randomly generates 75,000 prescriptions. It then analyzes them in KOJAC, which assigns each a fitness rating based on how close it comes to a desired set of specifications: in this case, a wide field of view with minimal distortion. None of the 75,000 members of the first generation will be usable wide-field telescopic eyepieces. But a few of these primitive systems will be marginally effective at focusing a wide field of view, and a couple others might slightly reduce distortion in one way or another.
From there, it's Darwinism 101. The invention machine mates some systems together, redistributing characteristics from two parent lens systems into their offspring. Others it mutates, randomly altering a single detail. Other lenses pass on to the next generation unchanged. And then there are the cruel necessities of natural selection: The machine expels most lenses with low fitness ratings from the population, kills them off so their genetic material won't contaminate the others.
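The loop described above, mating, mutation, unchanged survivors, and culling of the unfit, can be sketched in a few dozen lines. This is a minimal illustration, not Koza's actual system: the fitness function here is a toy stand-in for the KOJAC simulator, and the population is scaled far down from 75,000.

```python
import random

POP_SIZE = 100        # scaled down from the machine's 75,000
GENOME_LEN = 8        # a stand-in for a lens prescription's parameters
TARGET = [0.5] * GENOME_LEN  # hypothetical "desired specification"

def fitness(genome):
    # Higher is better: how close the candidate comes to the spec.
    # (In the real system, KOJAC's optical simulation plays this role.)
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    # "Mating": redistribute characteristics of two parents into a child.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome):
    # Randomly alter a single detail.
    child = genome[:]
    child[random.randrange(GENOME_LEN)] += random.uniform(-0.1, 0.1)
    return child

def evolve(generations=200):
    # First generation: entirely random prescriptions.
    pop = [[random.uniform(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Natural selection: expel the low-fitness half of the population.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]  # these pass on unchanged
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fittest individuals survive unchanged, the best fitness in the population can never decrease from one generation to the next, which is why the run simply continues until the design specs are met or exceeded.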
Koza asks Jones to pull up the stats on the wide-field telescopic eyepiece. Amid a rush of figures, he reads off the number "295." That's how many generations it took for genetic programming to engineer around the Koizumi-Watanabe patent. In fact, the invention machine's lens is better than the Koizumi-Watanabe system: Because the machine keeps breeding until all design specs are met, some performance requirements are often exceeded by the end of the run. The final field of view for Koza's eyepiece is a remarkable 10 degrees higher than the 55 degrees achieved by Koizumi and Watanabe.
Jones swiftly rotates through several other recent inventions, all generated using the same technique as the lens system. There are logic circuits and amplifiers and filters, some of them suitable for the challenging low-power requirements of cellphones and laptops. Each took between one day and one month to evolve, generating an electricity bill of more than $3,000 a month.
The Breeding Grounds
Like every engineering breakthrough, genetic programming did not emerge fully formed from the ether. Rather it grew out of two promising yet unfulfilled lines of research in computer science: genetic algorithms and artificial intelligence.
Koza's thesis adviser at the University of Michigan was John Holland, the man widely regarded as the father of genetic algorithms. While Holland was a grad student studying mathematics at Michigan in the 1950s, he'd happened upon a book called The Genetical Theory of Natural Selection, written by English biologist Ronald Fisher in 1930. The book laid out, in strict mathematical terms, the basic mechanism of variation in plants and animals. "I thought I'd try to figure out a program that did that," Holland recalls. He envisioned a system that, through small, incremental improvements, would breed good code the way a farmer breeds good corn, an early forerunner to genetic programming in the sense that, say, Mendelian inheritance was the first step toward understanding Darwinian evolution.
Holland and his student David Goldberg implemented the idea in 1980. Goldberg had studied civil engineering before working on his doctorate and was interested in the practical problem of how computers could be used to optimize the capacity of gas pipelines. They began by creating a rough model of an efficient layout for the pipeline. The software then made small random changes to the system-alternately varying the pressure, flow rate and pumping schedule-and simulated the gas flow anew following each mutation cycle. The computer retained any alteration that improved performance, even by the slightest bit, and discarded all those that didn't. Over 20 or 30 generations, the system evolved, by almost imperceptible steps, to become markedly better than it was at the start.
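The retain-if-better loop Holland and Goldberg used can be sketched as follows. The cost function here is a hypothetical stand-in for their gas-flow simulation, and the variable names are illustrative; the real system varied pressure, flow rate and pumping schedule against a pipeline model.

```python
import random

def cost(settings):
    # Toy objective standing in for the gas-flow simulation:
    # pretend the ideal operating point is all 1.0s.
    return sum((s - 1.0) ** 2 for s in settings)

def optimize(settings, generations=30):
    current = settings[:]
    for _ in range(generations):
        # Make a small random change to one variable...
        candidate = current[:]
        i = random.randrange(len(candidate))
        candidate[i] += random.uniform(-0.2, 0.2)
        # ...resimulate, and retain the alteration only if
        # performance improved, even by the slightest bit.
        if cost(candidate) < cost(current):
            current = candidate
    return current

# Illustrative starting layout: pressure, flow rate, pumping schedule.
start = [0.0, 2.0, 0.5]
tuned = optimize(start)
```

Each pass through the loop is one mutation cycle; over 20 or 30 generations the accepted changes accumulate into a markedly better configuration, exactly the almost-imperceptible stepwise improvement described above.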
The power of genetic algorithms increased in step with the power of the processors they ran on. By the mid-1980s, Holland's process had spawned a small cottage industry, complete with dedicated academic conferences and myriad industrial applications.
Artificial intelligence, on the other hand, was decidedly less practical then. Its goal was (and largely remains) to model human cognitive functions, such as language use and pattern recognition, in computer systems: to make machines think. AI researchers were also hugely optimistic. (One conference topic, seriously debated, was "Should AI run for president?")
Koza had just left the lottery business. He was interested in the commercial potential of genetic algorithms and thought, given his financial success and the strong economy, that he would become a venture capitalist. He read Holland's book on genetic algorithms, subscribed to journals, and attended meetings, all of which reminded him how much he had enjoyed pure research as a grad student and how much he missed it. "I became more and more interested in the technical problems," he recalls. "I realized that venture capital was just another hectic business."
He took a position at Stanford as an adjunct professor, which gave him time to absorb the latest research in genetic algorithms and artificial intelligence. Yet he felt that both disciplines were somehow fundamentally lacking. Pragmatic by nature, Koza was frustrated, as many others would later become, by the growing gap between promise and performance in artificial intelligence. Meanwhile genetic algorithms presented a different kind of frustration: Although they proved to be excellent optimizers, perfect for tweaking well-defined systems, they lacked the creative capability to come up with novel solutions to their problems.
In 1987 Koza was on an airplane, returning to California from an AI conference in Italy, when he had the crucial insight that Holland himself would later deem revolutionary. AI was all promise, an underachieving prodigy. Genetic algorithms were all performance, reliable drones. Koza was 30,000 feet above Greenland when he asked himself why a genetic algorithm, so adept at refining pipelines, couldn't be used to evolve its own software. Why couldn't a computer program adapt itself and, in doing so, solve any problem fed into it?