There was a time when science produced robots, but a paper published recently in the journal Automated Experimentation suggests that in the future robots will autonomously produce science. It’s not just a matter of cheap labor or taking menial tasks off researchers’ hands; the authors argue that science needs to become more uniform and formalized, and AI robot scientists could help us get there by developing their own hypotheses and carrying out experiments with minimal human input.
As models, the authors cite a pair of robo-researcher prototypes, Adam and Eve. While Eve is still under development, she’s designed to demonstrate the automation of closed-loop learning, feeding the conclusions of each experiment back into her experimental models. Adam, in service since 2005, has already conducted yeast metabolism studies leading to a variety of conclusions, some of which have been verified in manual biological experiments.
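The closed-loop idea itself is simple enough to sketch. Here’s a toy Python illustration, entirely hypothetical and not based on the actual Adam or Eve software (whose internals the paper describes in far more detail): a loop that proposes a hypothesis, “runs” a simulated experiment, and feeds the result back to narrow the hypothesis space.

```python
# Toy closed-loop learning cycle: hypothesize -> experiment -> update.
# Purely illustrative; the real robot-scientist systems are far more elaborate.

def run_experiment(dose):
    """Stand-in for a lab experiment: does the yeast strain grow at this dose?
    The 'true' tolerance threshold (0.42) is hidden from the learner."""
    return dose < 0.42

def closed_loop(trials=20):
    low, high = 0.0, 1.0          # current hypothesis: threshold lies in [low, high]
    for _ in range(trials):
        guess = (low + high) / 2  # propose the most informative next experiment
        if run_experiment(guess): # carry it out...
            low = guess           # ...and feed the result back into the model
        else:
            high = guess
    return (low + high) / 2       # best current estimate of the threshold

print(round(closed_loop(), 3))    # converges on the hidden value, 0.42
```

The point of the sketch is the feedback arrow: every result immediately reshapes which experiment gets run next, with no human deciding in between.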
When Eve is ready, the two systems will be combined so they can cross-experiment with one another, which brings us to the real benefit of autonomous, robotic experimentation: formalization. While human experimentation has brought us this far, it comes with some built-in problems: incongruity in experimental methods, competition between researchers that discourages the sharing of information, and plain old human bias.
Then there’s the simple fact that robotic scientists are machines, capable of a battery of tasks that humans simply can’t match:
Computational closed-loop learning systems have certain advantages over human scientists: their biases are explicit, they can produce full records of their reasoning processes, they can incorporate large volumes of explicit background knowledge, they can incorporate explicit complex models, they can analyse data much faster, and they do not need to rest.
Autonomous research ‘bots could certainly help humans think outside our respective boxes; our own biases can produce tunnel vision in our hypothetical thinking as well as misinterpretation of our experimental results, either of which can lead to bad science. But if we turn over the pursuit of scientific knowledge to autonomous, computer-driven robots, will that lead to intellectual laziness on the part of humans? There’s a thin line to tread between increasing our capacity to hypothesize and experiment and creating a scientific community that lists toward complacency.
That is, at least until the robots hypothesize that they would be better off without their human overlords. But that’s a topic for another post.