What are the ethics of creating new life in a simulated universe?
The morality of intelligence in laboratory-made worlds.
The following is an adapted excerpt from A Big Bang in a Little Room: The Quest to Create New Universes by Zeeya Merali, available in stores now. In the book, Merali explores the possibilities of creating an infant universe in a laboratory. In this excerpt, she meets with noted futurist Anders Sandberg to discuss the ethics of potentially creating new intelligent life in a baby universe, or the possibility of sentience evolving in a computer simulation.
When Anders Sandberg was a kid in the 1980s, he enjoyed making simulations on his Sinclair ZX81, mocking up mini solar systems. Later, he graduated to designing artificial neural networks that use learning algorithms inspired by the brain. “Some people relax by watching television. I program simulations while listening to philosophy lectures,” Sandberg chuckles. One day back in 1999, he recalls, he deleted a copy of a neural network on his computer and got a “tinge of bad conscience.” He couldn’t help worrying: “Have I just killed a little creation?”
After feeling that pang of guilt at the loss of his neural network, Sandberg shifted gears toward philosophy, and now he writes about the ethics of simulations at the Future of Humanity Institute at Oxford University. He argues that people will have to tackle questions about how to treat machine entities with compassion sooner than they might think. Yet, he notes, there is a general reluctance to face these issues, not only among the broader population but among scientists too.
I have come up against that reticence when talking to physicists involved in universe building. Some have tried to evade questions about the moral implications of creating life in a lab-made cosmos, saying that such issues are beyond their purview. “Most people have a weirdness budget and you’re not really allowed to use up too much of that, because if you are overdrawn on the weirdness account, then obviously you can’t be taken seriously,” says Sandberg. “So a lot of people keep quiet about considerations that might actually matter.”
Sandberg can conceive of reasons why a superintelligent race might have created a simulation and put us in it; many of them are the same sorts of mundane justifications we currently have for running simulations ourselves. For instance, we are struggling to identify the most efficient way to spend limited health care funds. Is it better to have a world in which the overall health of the general population is higher but health care is unequally distributed, so that a minority suffers horribly? Or is it better to aim for a fairer society where everybody has access to the same level of health care, even though that level may actually be quite low? Simulating these two worlds may help you decide. As long as none of the simulated beings have conscious experiences, that’s fine. But if they evolve intelligence and feelings, then you may have accidentally created a great deal of suffering in your artificial world.
Sandberg believes it is possible that we could be part of a relatively small simulation that’s monitoring the outcomes of different spending policies by the National Health Service across the British population, for instance. In that case, the point of focus would be the individuals in the United Kingdom who use these resources, while the rest of the simulated universe might just be sketched in for color.
But now I want to examine what moral responsibilities we have as programmers of our own simulated universes. First, is there a serious danger that someone’s health care policy simulation could develop sentient life? “It’s less likely that artificial intelligence would arise accidentally than if someone deliberately set out to make it, but it wouldn’t surprise me if it could happen in principle,” says Sandberg. If it did occur, it would most likely be because we are creating increasingly smart pieces of software, which individually would not develop sentience but are being designed to interface with other pieces of smart software. The danger is that when linked together, the whole may become more than the sum of the parts.
Let’s say this does happen inadvertently and our health care beings develop experiences. Should we intervene, or should we pull the plug and end their lives? In terms of the health care simulation, Sandberg says, one suggestion for assuaging our guilt at forcing some of our creations to live through poverty and poor access to health care would be to reward them when the simulation is over by transferring them into another simulation where they can lead pleasurable lives.
“That sounds a lot like sending people to heaven,” I say.
“It is a stolen idea,” Sandberg concedes. But making an artificial heaven to compensate your beings raises a new problem: which version of your mistreated simulated entity do you upload to paradise? It would seem unfair to upload a person after her memories and brain function have been ravaged by Alzheimer’s disease, say, so perhaps you should upload a younger version. But it is difficult to decide at what point that entity should be transferred, and which life events should be regarded as crucial to the development of its identity and which should be wiped from its memory. Should you upload that entity from a point in its life before or after religious conversion, falling in love, having a child, or experiencing a traumatic incident? “If you think you have a moral responsibility for simulated entities, where it ends is a bit unclear,” says Sandberg. “Maybe you should resurrect copies of them at all points in their life.”
It would be a coup to make a universe in a particle accelerator. But given our current capabilities, it seems unlikely that we could wield in the lab the level of control Sandberg describes when talking about computer-simulated universes. In the LHC, for instance, researchers mainly employ a hit-and-hope strategy, with little room for nuanced tinkering with the products of particle collisions. We might therefore give rise to life inadvertently, with our beings able to experience its accompanying pains and pleasures, but we would have no control over their well-being afterward. So should we go ahead and do it anyway?
Though this is a classic problem that philosophers have thought long and hard about in the context of simulations, Sandberg notes that there’s no consensus. Perhaps the easiest answer is just to plainly say no. If there is any chance that your universe will involve the production of a sentient being who will suffer pain, you should not make it. Others will say that it’s the total sum of experiences within the universe that matters; if you add up the happy people, subtract the unhappy people, and come up with an overall positive answer, then go ahead and do it. Still others have argued that you need to have some measure of the average level of happiness in the universe. But there’s no clear mathematical answer for what constitutes a good universe. We’re back to the health care puzzle again, slightly restated: would a universe where almost everyone is mildly happy but a few people are being horrifically tortured be better or worse than one where half the population is deliriously happy and the other half is slightly miserable? “Any way you try to argue it, you can make a case, but then someone will come up with a counterexample showing why it’s bad,” says Sandberg.
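The rival tallies Sandberg describes, summing everyone's happiness versus averaging it, can be made concrete with a toy calculation. The populations and happiness scores below are purely illustrative assumptions, not anything drawn from the excerpt, but they show how the two measures can rank the very same pair of universes in opposite orders:

```python
# Two simple ways to score the "goodness" of a simulated universe:
# total utility (sum of every being's happiness) vs. average utility.
# All numbers here are illustrative assumptions.

def total_utility(happiness_scores):
    """Sum of happiness: positive scores are happy beings, negative are suffering."""
    return sum(happiness_scores)

def average_utility(happiness_scores):
    """Mean happiness per simulated being."""
    return sum(happiness_scores) / len(happiness_scores)

# Universe A: a small population of very happy beings.
universe_a = [10] * 10       # total = 100, average = 10.0

# Universe B: a vast population of only mildly happy beings.
universe_b = [1] * 1000      # total = 1000, average = 1.0

# The total-sum measure prefers B; the average measure prefers A.
print(total_utility(universe_a), average_utility(universe_a))  # 100 10.0
print(total_utility(universe_b), average_utility(universe_b))  # 1000 1.0
```

The disagreement is the point: by the total-sum rule, you should prefer the enormous universe of barely contented beings, while the average rule favors the small, blissful one, which is one reason, as Sandberg says, there is no clear mathematical answer for what constitutes a good universe.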
There may also be a case to make that creating intelligent observers would continually amplify the amount of good in the universe, even if we lose control of our creations. “One argument I would make is that intelligent life tends to try to take control of its environment to make things better for itself,” says Sandberg. “So you should actually expect that a universe that is overrun with intelligent observers would tend to become slightly better to live in than universes that don’t have any.”
It’s honestly a view that I hadn’t considered. Maybe we are morally obliged to try to bring more life into being. I thank Sandberg and say goodbye, feeling reassured. I hope that he is right, of course, because the stakes are, quite literally, astronomical.
Adapted excerpt from A Big Bang in a Little Room: The Quest to Create New Universes by Zeeya Merali. Copyright © 2017. Available from Basic Books, an imprint of Perseus Books, LLC, a subsidiary of Hachette Book Group, Inc.
Popular Science is delighted to bring you selections from new and noteworthy science-related books. If you are an author or publisher and have a new and exciting book that you think our readers would love, please get in touch! Send an email to firstname.lastname@example.org.