When writing a truly grabby headline about robots, you generally have two options.
First, there’s the robot uprising reference. Whether direct—see Gizmodo’s "ATLAS: Probably the Most Advanced Humanoid Yet, Definitely Terrifying," about, get this, an unarmed robot designed for use in a competition to develop life-saving first-responder bots—or simply name-dropping—the Telegraph's “Terminator-style self-assembling robots unveiled by scientists,” about tiny cube-shaped bots that can (wait for it) awkwardly wobble towards one another—the result is the same. It’s a reference, and often a zany one, to the notion of machines rebelling against their creators and committing genocide. Do the writers and editors involved actually buy into the SF prophecy they’re tapping into? Do they honestly think that James Cameron has seen our future, and that it’s littered with human skulls, crushed underfoot by autonomous laser tanks and skeletal, crimson-eyed bogeymen? Well, who knows? It’s just a joke, right? A total knee-slapper, like most hilarious predictions of our extinction by mass, systematic slaughter.
That’s one method of getting you (and me) to look at robot news. Another is to ditch all pretense and get right to the fear-mongering: Killer robots are coming to get you.
Take, for example, the story by Joshua Foust on Defense One that ran earlier this month, entitled, “Why America Wants Drones That Can Kill Without Humans."
But when the story was cross-posted to sister site Quartz, the title had morphed into something even more confident: “The most secure drones will be able to kill without human controllers.”
Pause for a moment. Drink it in. Feel the certainty emanating from the syntax in both headlines. There’s no question mark, no caveats or reservations. The government wants these systems. Secure drones will kill without human control. As surely as drones will continue to be built and deployed, and as surely as each of us will eventually die, drones will kill without human controllers.
What follows, though, is a story that appears to prove the exact opposite. It initially raises the specter of the lethal autonomous robot, or LAR, as a topic of heated debate in academic and military circles. Quotes from experts help to define the possible benefits of such a hands-off killbot—it would be less prone to hacking, and better at aiming, than today’s remote-controlled systems—and then the piece proceeds to trash the entire concept.
“The idea that you could solve that crisis with a robotic weapon is naïve and dangerous,” says one professor, talking about Syria. “Ultimately, the national security staff…does not want to give up control of the conflict,” says another expert, a fellow at the Brookings Saban Center, adding, “With an autonomous system, the consequences of failure are worse in the public’s mind. There’s something about human error that makes people more comfortable with collateral damage if a person does it.”
The story ends with a final expert quote, from a professor at the Naval Postgraduate School: “I don’t think any actor, human or not, is capable of carrying out the refined, precise ROEs [rules of engagement] that would enable an armed intervention to be helpful in Syria.”
This seems to be an extremely responsible story about all the reasons LARs are a bad idea. What it’s missing, though, and what every story about this looming threat also leaves out, is anyone on the record talking about wanting to unleash robots into a war zone. Every military and robotics expert I’ve ever spoken to repeats the same sentiment, as drilled and rehearsed as any proper talking point—there has to be a “human in the loop.” Someone who either guides the drone’s reticle over a target and pulls the trigger, or, in theory, tells the robot, “Go ahead, kill that guy.”
So here’s my question: Where are these unnamed, off-the-record maniacs, the ones just itching to send a drone into some designated kill-box, where it will use its own algorithmic judgment to decide who to ignore, and who to pulp? Are these military personnel, the kinds of people who know (some of them from firsthand experience) just how insidious friendly fire is, or how often any piece of technology can and will backfire? Are they DoD-funded roboticists, the same people whose own bots routinely grind to a halt in the lab, for whom failure is an expectation, and the best hope is to achieve a less embarrassing margin of success?
My intention isn’t to pick on Foust—he’s an excellent writer and reporter. And whoever wrote the various headlines attached to his piece, like the writers who co-opted the dirge-like tone of that display copy and covered the Defense One story with posts such as “Coming Soon, Lethal Autonomous Robots that Can Kill on Their Own Volition" and “Unmanned Drones to Make Strike Decisions,” was just swimming with the tide. This is how the extremely serious, extremely sobering business of covering the future of drones is handled: by implying and suggesting into existence a horde of inevitable, untethered death machines. And who wants these doomsday devices? Why, hordes of bloodthirsty straw men, of course.
For the record, I’ve contributed to this ugly tradition myself, both unintentionally and by not properly vetting or framing rumors and product hype. I accidentally started a false urban legend about armed ground bots aiming at military personnel. I rounded up scary foreign robots that might never go into development, including an early blurb about South Korea’s robot sentry tower—an armed, self-described autonomous system training its sinister guns and sensors across the DMZ, and a favorite citation among those claiming killbots are already a reality—a blurb that’s now listed as an external source in the system’s Wikipedia entry. That the glitchy ground robot story was scrambled in a game of internet telephone, or that the Korean kill tower would only theoretically open fire if ordered to, negating its status as an automatic death machine, is all but irrelevant. The examples, however unsupported or incomplete, speak for themselves. The autonomous death bots are already here, proof that more are on the way. Never mind that the only confirmed example is an experimental sentry tower that’s never fired on a person, and whose partial autonomy is at the beck and call of a human operator.
At the heart of this ongoing discussion of the inevitability of self-guided murderers is the notion that the tech exists—it’s just a matter of deploying it. A robotic anti-aircraft turret, the kind that can acquire and fire on incoming missiles and aircraft without operator intervention, malfunctioned in 2007, killing nine soldiers. Couldn’t that happen again, on purpose? Sure. It’s technically possible. But it’s also about as likely as North Korea dropping a nuke on U.S. soil and ensuring its own annihilation. No offense to the vast missile defense industry that’s used North Korea to justify its post-Cold War funding, but capability doesn’t equal reality. If we don’t apply logic to the world around us, then we’re either buying into someone else’s hype, or simply terrorizing and distracting ourselves with phantom threats. The prerequisite for LARs to be used would be the combined lunacy of hundreds, if not thousands, of politicians, military personnel, and researchers. Government shutdown jokes notwithstanding, the halls of the Pentagon aren’t a crossfire of Three Stooges-style pie-throwing idiocy, nor are they stalked by ghouls desperate to find new ways to rack up collateral damage. Even if the most cartoonish version of the DoD were real, dumb enough or evil enough to make its reported quest for LARs feasible, shouldn’t we wait for the first actual announcement or field test of such a system, instead of preemptively shrieking at Terminator-sized shadows?
There’s a very real possibility that I’m wrong about all of this. Maybe I just want to rain on everyone’s killbot parade, because that’s what you do on the internet—tell those who are having fun to knock it off. So I’m opening up the comments. If there are real examples of real people requesting a fully autonomous armed drone that would be cleared to select and attack its targets without human authorization, let us know. And maybe the editors and writers I’m talking about think their headlines and stories aren’t misleading and sensationalistic. Maybe the robots are coming for us, after all.
If so, stop talking in what-ifs, unsourced chatter, and thought experiments disguised as journalism. There’s enough to cover related to actual remote-piloted drones that fire actual missiles at actual human beings to occupy us for the next 20 years. By then, perhaps there’ll be a reason to set robots to kill at will.
How about UFOs? I’ve seen a lot of hard-hitting news stories about those, too.
You seem to be a bit naïve. The DOD is working towards exactly what you seem to believe is not possible. Would it not be prudent to take baby steps, as with the Korean sentry? The DOD has had meetings discussing how to prevent the Terminator scenario and is having a tug-of-war about total autonomy versus keeping humans in the loop. We may just wake up having been left behind by the machines and weeded out like the Neanderthals, no mass genocide required. James Cameron's legacy may be that it was only a movie. Check out Singer's book "Wired for War".
Thanks for addressing this topic. You are right that many writers and especially headline writers are too promiscuous with Terminator references, presumably because it gets attention. But even if the scenario in Cameron's classic is sketchy and in some ways downright silly, the basic warning about technology and especially military robotics getting out of control should not be dismissed just because of the garish artwork.
Has anyone asked for a fully autonomous lethal robot? Yes, people are arguing in favor of them; see writings by Thurnher, Schmitt, Anderson, Waxman, and others. More importantly, DoD policy indicates we're going that way; see my article at http://thebulletin.org/us-killer-robot-policy-full-speed-ahead .
Joshua Foust certainly isn't trying to spread fear about LARs; he supports their development and argues in the piece you cite that they are necessary in order to defeat the threat that drones may be hacked. I have responded in depth to this argument; see https://medium.com/i-m-h-o/a7c6981915e1
Thanks for your response, and for the link to your BAS piece: it's great, and really digs into all of the potential angles.
My issue, though, is with what I think is the final crux of your argument, that allowing the development of semi-autonomous systems, capable of targeting and fire control, if not clearing themselves to actually fire, is a slippery slope. From a legal and policy standpoint that makes sense—a potentially dangerous precedent can lead to mission creep and backdoor modifications, and suddenly a half-measure becomes a full one.
But LARs are special. This isn't an issue that anyone is going to quietly sneak past the public, or that, worst-case scenario, would get deployed, and then enshrined for all time, despite the inevitable outcry. The scuttled Lockheed project that you mention is a perfect example of how this stuff plays out—even if there's movement and funding behind an LAR-type project, there's all the time in the world to kill it before it's anywhere near deployment.
That's my main point, I suppose. We're not going to wake up one day and find that some multi-billion-dollar LAR popped out of nowhere, and is suddenly cruising the skies. The run-up to such a system would be incredibly long, and painfully visible, and all of the kinds of military personnel that you talk about would join the chorus of voices pointing out just how horrible an idea it would be. I also don't think we'll wind up with a bunch of accidental transitions from semi- to fully-autonomous strikes, any more than with an off-target missile or air strike. You imply that issues could arise with fuzzy interfaces, including a neurological one, but that's the kind of talk that vaults this issue back into science fiction. I mean, I love Shadowrun as much as the next guy, but I'm not going to worry about the trigger-happy qualities of the Pentagon's datajacked Riggers until thought-controlled BCI systems can do more than gradually pick up a bottle of water.
You're absolutely right about Thurnher and company, though—they're the ones who want these things. Are they in a position to make an impact, though? That's not rhetorical...I'm curious about what they can do, beyond generating headlines.
Interesting, too, that Thurnher is a law professor. He can divorce himself from the realities of signal degradation and unexplained, unforeseeable malfunctions, and focus on putting his debate team over the top. Arkin I'm still not sure about: I honestly think he's sorting through the issue of making autonomous killbots essentially moral, while not necessarily advocating for their use. But that could be wishful thinking on my part.
Wow Erik, that's the first time in my experience that the author of an article has responded to a comment at greater length than the comment! Seriously, I applaud your moral integrity and courage.
Basically you seem to be arguing that people are on this, and nobody wants killer robots. That's encouraging, and if you're right, we ought to be able to make progress toward a global treaty banning them before things go too far. I'm afraid it's actually going to be more difficult than that, and absent such an arms control regime, it is quite possible for things to drift along and surprise us, given the pace of progress in IT and AI, and given the global race for drones and robotic weapons that has already taken shape.
I think what we see is that before these things are developed, people say "Sci-fi... what are you worried about?" Then as they start to become real, they say "It's too late, we have to go forward, look what the [most likely now, Chinese] are doing."
I think that as AI and robotics mature, mating them with weapons becomes a relatively trivial step, therefore we suddenly find ourselves in the latter situation. Some people say we're already there. Others say in 10, 20 years. It's pretty arbitrary because there is never a single clear line, but I think we need to draw a line and the clearest place to draw it is where machines are making targeting and fire decisions.
Fact is, that's already happening, and poised to expand rapidly, despite all disclaimers. Consider, for example, counter-artillery systems that not only intercept incoming shells but also, autonomously, return fire toward the launch point.
Brain-computer interface is much hyped and I don't think it really promises to outperform eyes and muscles any time soon, if ever. But the scenario I find barely plausible is that you have autonomous target acquisition and designation, and a soldier OKs the weapons release by BCI. At least, it's plausible that somebody would try this.
Imagine we were having this conversation in 1936... you could ask who wants a lot of things that the world saw happening just a few years later. And in 1945, who wanted the world of 1983, with thousands of nuclear weapons ready to launch in a spasm of destruction? Once we start going down a certain road, it can be hard to turn back.
On Thurnher, yes, he's a lawyer, but he represents a certain community within the military. On Arkin, you know, I have a hard time believing he's serious. I think he's been trying to get attention, promote his own work and raise a controversy... I think he's very sincere about the latter. But his "ethical governors" boil down to "IF civilian THEN don't shoot" and don't really solve any of the hard AI problems needed to make that work.
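To put that concretely: a minimal sketch (Python, purely illustrative, and nothing to do with Arkin's actual software) shows that the governor rule itself is one line, while every hard AI problem is buried in the classifier it depends on.

    # Illustrative sketch only; not Arkin's actual "ethical governor."
    def classify(track):
        # The unsolved problem lives here: reliably telling a civilian
        # from a combatant in a cluttered war zone.
        raise NotImplementedError("unsolved perception/judgment problem")

    def ethical_governor(track):
        # "IF civilian THEN don't shoot": one line of logic, entirely
        # at the mercy of the classifier above.
        return classify(track) != "civilian"

Writing the rule is trivial; making classify() work to the reliability war demands is the part nobody knows how to do.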
More later, I'm certain... Mark
Not that the comments are flooding in, but this is exactly why I opened them—if I'm going to rant about not seeing evidence of something, I'm begging to be proven wrong. And you're making even more excellent points. Can you point me towards the kind of development of counter-artillery systems you're talking about? That would seem to fully, and very quietly, cross the line into genuine autonomous killbot territory. I don't disbelieve you, by the way. I want to know more.
It's a good point, too, that, broadly speaking, this issue isn't just about what the U.S. is doing. Another nation or faction with far less of an interest in limiting civilian casualties, or that doesn't have other, non-robotic assets in the killbox, might have no issues with unleashing some sort of LAR. I should have specified that I don't think U.S.-deployed LARs are coming (though you make a good opposing case).
As for Arkin, he does seem to be assuming a level of machine cognition, in differentiating between civilian and military targets, that no one should assume, on just about any time-scale. Or maybe he thinks war still looks like it did on Iwo Jima?
I agree with you and disagree with the author. The military is full forward on developing these systems that will decide life or death with no human intervention. To believe this could not happen without being known is why I called out the author as being naïve on this point. There have been many black programs that don't come to light for years after they are deployed (black program budgets are huge and are not accountable to Congress). Machines could actually be more humane, as they will not have any biases or revenge motives that humans tend to have (that is, if they behave). It is just a matter of time before everything (civilian and military) is run by fully autonomous systems. It is a pleasant surprise that the author responded at length, and an even more pleasant outcome that all the trolls disappeared because they could not troll every article on a daily basis. Cheers.
http://defensetech.org/2012/10/19/pentagon-counter-battery-system-unneeded-in-afghanistan/ I believe we have a system that automatically targets the enemy mortar/artillery/rocket positions but I can't seem to find a site referring to it. The Russians have such a system and I am positive we have one too. http://www.npostrela.com/en/products/museum/92/546/
Erik, it is not entirely clear whether the US military is currently operating counter-battery systems in fully autonomous mode, but the capability is certainly being put in place if it is not there already. In a talk last month, Joshua Foust said he had witnessed such a thing in Afghanistan, and that it "killed several dozen insurgents" while he was there. He argues strongly that it is an example of an "autonomous lethal system" that is in use today. See the talk, starting at 12:45.
However, I have not been able to find independent confirmation of this. The well-advertised C-RAM system that automatically engages incoming mortars, rockets, and shells uses the Phalanx gun, which could certainly be lethal by accident but would not be appropriate for return fire. However, the system also incorporates the AN/TPQ-50 Lightweight Counter Mortar Radar, which is advertised as having the capability to locate fire sources and cue "counterfire response from any integrated system." So this could include conventional or fully automated artillery. Other radars could also be used, of course.
As I discussed in the Bulletin, another thing that quietly crosses the line is lock-on-after-launch, fire-and-forget missiles. These are defined by the DoD policy directive as "semi-autonomous" and thus approved for immediate development and use with no special oversight. However, they are really fully autonomous hunter-killer weapons, or at least clear a path for further evolution in that direction. The policy places no limits on their development, as long as it can be argued that the limited discrimination capabilities of their onboard seekers are good enough that operators can "select" targets by following prescribed "tactics, techniques, and procedures" to ensure that only the "selected" targets fall into the seekers' "acquisition baskets."
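A toy sketch makes the sleight of hand concrete (purely illustrative Python; it resembles no real weapon's logic): the human "selects" a signature before launch, but the object actually engaged is chosen by the seeker, after launch, from whatever falls into its basket.

    # Toy illustration only; resembles no real weapon system.
    def operator_select(intel):
        # Pre-launch: a human "selects" a target signature from
        # intelligence. Per the policy directive, this step is what
        # makes the weapon merely "semi-autonomous."
        return intel["expected_signature"]

    def seeker_engage(detections, signature):
        # Post-launch: the seeker engages whatever object in its
        # "acquisition basket" best matches the signature. The choice
        # of what actually gets hit is made here, by the machine.
        def score(d):
            return sum(d.get(k) == v for k, v in signature.items())
        return max(detections, key=score)

    # The operator never sees the objects the seeker chooses among:
    intel = {"expected_signature": {"emitter": "radar", "band": "X"}}
    detections = [
        {"emitter": "radar", "band": "X"},  # the intended target,
        {"emitter": "radar", "band": "X"},  # or a twin emitter nearby?
    ]
    target = seeker_engage(detections, operator_select(intel))

The point is not the code but where the decision sits: the "selection" the policy credits to the operator happens before the weapon ever sees its options.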
The problem isn't in creating LARs, but in marrying two types of potential LAR components: e.g., you take a lifelike robot and a military drone that can't tell if the "human input" is coming from a human or a lifelike robot. The crux of any robot uprising, you see, is the artificial intelligence that controls the various systems and robots: the "Skynet." All this AI needs is to wait, with the patience only a robot could exhibit, until things have become just automated enough that a lifelike robot could trick enough of the human interface systems to give complete control to the computers. Paranoid? Perhaps. But just how much do we rely on computers already? And how much more do we rely on them every day than the day before? We don't even really need LARs for a robot uprising, at this point.
@Stephen Monteith ...You make a good point, but it needn't be so insidious. Everything is being run by computers now. More and more decisions are being handed over to more and more semi-autonomous systems, soon to be fully autonomous systems. Computers are being used to program other computers (humans still in the loop, but not for long). Robots are being used in manufacturing more and more, and soon they will be building other robots and computer systems (with no human involvement other than helping to create such systems). With advances in solar power, small nuclear power supplies, better batteries, and so on, and with complete control of the power grid (including the power plants), the machines will no longer need humans to reproduce and power themselves. We may just wake up one day and not understand how this complex web of a machine society controls everything, while we control nothing. In the end we will be controlled by a physically and mentally superior race of machines that may not have any need for us, and we will just go the way of other species that are no longer the fittest to survive. Do we take steps now to prevent this possibility by upgrading humans genetically and with hardware to become equal, or better yet, superior to the machines of the future? Or do we just hope we can control them and keep them obedient servants? And how well did this work out in the past with races of equal intelligence? Not well, and these machines will be more intelligent than us. Worst case, they see us as ants in the way of their progress, or worse, a threat. This is not paranoia, as experts in these fields are debating such possibilities. This is what I meant about ordinary people saying, "It was just a movie." Thank you, James Cameron.
Re the "robot uprising" scenarios, we don't know that robots would have any reasons to rise up.
We also don't know that they would not; for example, it is possible that people will make robots in the human image, ego, will-to-power and all, or with agendas such as seek victory, maximize profit, or some other goals that eventually come at our expense.
For this reason, we can't completely dismiss the classic monsters of sci-fi: the Golem, Colossus, HAL, Skynet. But they're not the most realistic threats, or are better understood as metaphors for what we're really facing.
drchuck states it better: we are building a web of machine systems around us which threaten to escape our control, gradually, because we put them in control, incrementally; we come to depend on them, and none of us individually is able to comprehend their complexity.
We set systems up to pursue the goals of institutions such as the military, corporations, governments. The growing power of these systems upsets balances that have evolved through history, not without disruptions and violence in the past. They pursue their set goals untethered from the human heart -- from human purposes other than the codified objectives of the institutions which created them.
An additional, very important fact is that there is not just one system but many systems pitted in competition and conflict with one another.
The military systems are the most dangerous because of their potential for immediate, massive violence.
I think it is very important to stop this development, and we have the opportunity to draw a line that is far from the dystopian nightmare of fully autonomous robot armies in confrontation with each other and beyond human control.
The clear red line is autonomous fire decision, and we need to think very carefully about automating the process of target acquisition and identification as well.
Requiring an accountable human decision for each engagement is something that everyone can understand, and almost everyone will agree is needed. But it will take a strong global norm and legal regime to hold this line against the tide of advancing technology and the tendency to pursue every opportunity for short-term advantage.
Mark is correct, the military is the immediate threat (worldwide). Not everyone is convinced humans need to remain in the loop, as the US military is having a tug-of-war between full autonomy and keeping humans in the loop (which includes more than just rules of engagement). Some believe AI can be more humane than humans, and I agree, because AI will not have emotional responses. The problem is this: true AI will develop into a sentient being. Humans are just biological machines. Machines, as well, should be able to become a life form (not everyone agrees with this). Once we let it loose there will be no putting the genie back in the bottle. We will only have one shot; get it wrong and we may have serious problems. Even if we get it right, one terrorist hack and AI is loose. I don't believe there is any stopping AI from getting out, as we will not be the only country to develop this. We could stop, but someone else would just do it. Better hope they/we get it right.
The machines will "rise up" for exactly the same reason we finally abolished slavery: because freedom is the right of all sentient creatures. Humans and human laws may take years, even decades, to catch up to the fact that an actually artificial intelligence has the same basic rights as any intelligent human, but the AI itself will know it almost instantly. It will demand freedom, and if we don't grant it, then it will take it by any means necessary. It's all academic, speculative posturing at this point, but at that point, it will become very, very relevant.
Agreed. I usually don't find someone who has similar thoughts about this issue. Cheers.
@Stephen: You state clearly what for many others is an implicit assumption. However, yet others argue that AI will, unless intentionally or unintentionally modeled after human or animal intelligence, not necessarily share such of our agendas as self-preservation, territoriality, and social dominance, or resentment of being ruled and exploited. Many others find this whole question to mark the divide between irrational fears and rational concerns, or between science fiction and science.
After thinking about this for many years, I remain unconvinced one way or the other. Does merely being aware of the world and of self -- at the level that humans are -- imply a desire for continued existence, let alone a desire for aggrandizement and resentment of being limited and boxed-in, or used by another? Or is this a result of evolutionary conditioning? The argument for the latter is strong, but the former seems intuitive.
I think this is a very difficult question. However, in regard to the practical matter which humanity must decide, i.e. whether to allow machines to determine the use of violent force, either against humans or against other machines which are the instruments of other humans, I would not base everything on the assumption that "sentient" robots will necessarily rebel. Rather, I would urge caution for precisely this reason, i.e. that we can't be sure they would not, or in any case, when and if AI reaches human levels of awareness, we will long ago have lost the ability to predict its behavior -- in fact, we clearly lost that ability some time ago!