“We are still at the ‘horseless carriage’ stage of this technology,” Peter Singer says in a new Brookings Institution piece on robotics. Singer, director of the 21st Century Defense Initiative and a senior fellow at Brookings, knows what he’s talking about. His book Wired for War is the seminal text on the topic of robotic warfare and the ongoing robotics revolution. And if you’re not going to read the book (and really, you should), he’s posted a fantastic piece on the state of the robotics revolution over at Brookings that can quickly bring you up to speed. As the U.S. prepares to open the national airspace to unmanned robotic systems and our various foreign engagements keep military robotics high on the defense budget, we’re all taking part in an era of unprecedented robotic research and development, prompting Singer to ask: “Are we going to let the fact that what is unveiling itself now seems like science-fiction to keep us in denial of the fact that it is already part of our technological and political reality?”

9 Comments

Automation brings with it the diminishing value of labor. Robotic war drones bring with them the diminishing value of life itself.

Humanity's imagination and ingenuity will automate its own demise. This futuristic, self-imposed doom is unstoppable, as we are blinded and bedazzled by its efficiency and effectiveness, like we have always been with automation from its start.

And yet, if I was in a battle situation, I would want the best available weapons.

Humanity is doomed.

Science fiction: if something not yet present happens in the future, does that mean it is fiction until it happens? How about crossing the DNA of a narwhal and a horse? Fantasy fiction appears enabled by the same science to be given presence. So why is it popular to ridicule? We become willfully blind when we reject things based on experience rather than on the full range of possibilities. Still, I don't see why robots should have lethal arms when they could be better equipped to subdue enemies. I don't think it's right to take a life if you aren't risking one, knowing you didn't need to design your engagement to end in death.

Capital increases the value of labor. If one person with automation can make what fifty would without, that one person's labor is fifty times more valuable.
The value of life itself does not depend on whether or not we have autonomous killing machines, thank God.

@ Robot

If you have 50 people working at a shoe factory, and automation comes in so you only need 5, then you have the same number of shoes, plus the work of 45 more people (that's 360 man-hours a day, at eight hours each) to make more hamburgers, or wallets, or jeans, or computers, or anything.

There isn't a finite number of jobs out there - a job is just an organized, directed effort of creating wealth. It's an abstract production method, not a commodity. It will always exist so long as there is human desire for things (and thank goodness that capacity is infinite).

Those 45 other people can go and do a different job, and now we have more total shoes, and cheaper at that, plus the wealth created by those 360 saved man-hours that can now be directed elsewhere.
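As a quick back-of-the-envelope check on that arithmetic, here is a minimal sketch in Python; the factory size is taken from the comment, while the eight-hour workday is an assumption made only for illustration:

workers_before = 50
workers_after = 5
hours_per_shift = 8                                   # assumed eight-hour workday

freed_workers = workers_before - workers_after        # 45 people no longer needed for shoes
freed_man_hours = freed_workers * hours_per_shift     # 360 man-hours a day freed up
productivity_ratio = workers_before / workers_after   # each remaining worker covers 10x the output

print(freed_workers, freed_man_hours, productivity_ratio)   # 45 360 10.0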

For someone with the name "robot" you sure don't seem to grasp the good implications of your own existence.

brain144,
It will be with humanity's great wisdom, efficiency, and effectiveness that we will automate our own demise.

Yes, I understand the wonders of automation very well. And just as a factory puts out widgets quickly, cheaply, and effectively, so will your life, mine, and other people's be removed; and whatever weapon or factory I can make, my enemy can make too.

The automated robotic machines will be the only things left standing once all the dust has settled.

A very interesting article (click the link and go read it, lazy internet people!).

He doesn't touch, however, on the one issue that always jumps to the fore with wartime robotics - at what point do we turn over the kill decision to the machine?

While many systems are still slow enough to keep a "man in the loop," at some point (likely with land-based turrets or air-to-air drones) the decision cycle will become too immediate for the "chain of kill" to play out. (Note that this is already the most contentious part of mankind's most dangerous unmanned weapon - the landmine.)

@ Robot.

Is it possible for us, at some point, to automate the entire war process? Yes.

Will we? No. Robots are too monumentally stupid. It isn't a question of processing power. There is a fundamental difference between the current logic structure of computers and a system that integrates/simulates consciousness, inventiveness, and self-awareness, like our brains.

Is it possible to simulate AI with our current hardware paradigm? Yes, maybe with a warehouse-sized supercomputer and some extremely elegant programming. But it will still be slow. Impossibly slow. The speed of light, believe it or not, is a real limitation on our computing ability.

Think of trying to simulate a brain with silicon as trying to simulate an exponential operator with only addition. If I want to find 2^10, I need to change that to (2) -> (2+2) -> (2+2)+(2+2) -> ((2+2)+(2+2)) + ((2+2)+(2+2)) -> ... and so on.
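To make that analogy concrete, here is a minimal sketch (in Python, purely for illustration) of computing a power of two when addition is the only primitive available:

def power_of_two_by_addition(n):
    # Build 2**n using nothing but repeated addition: each pass doubles
    # the running total by adding it to itself.
    value = 1
    for _ in range(n):
        value = value + value
    return value

print(power_of_two_by_addition(10))  # 1024, after ten rounds of addition
print(2 ** 10)                       # 1024, as a single native operation

The point of the analogy is the overhead of emulating one kind of operation with a much more primitive one.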

So, in my lifetime or yours, I do not expect any computer to be competent in decision making. Large or small. The decisions required in waging a war are far too complex.

Additionally, you act as though people don't understand that machines are fallible. Every time my spell check replaces a real word (typically scientific terms) with a wrong one, I am reminded of how fallible it is, as is the rest of the world's populace.

Oakspar77777 has a much better understanding of this. You can automate defenses. Identifying an object moving fast through the air and locking a missile onto it is well within a computer's capability. Even a landmine is an automated weapon. So when we deploy these automated weapons, we always take into account their ability to discriminate. Air defense has difficulty discriminating, so humans are kept in the loop. Landmines cannot discriminate friend from foe either, so humans are kept in the loop.

We deduce which physical locations people would only walk across if they were an enemy, and then place the landmines there. It is still a conduit of a human decision. We don't put landmines all over the place, because there are many places where that level of discrimination is insufficient. We recognize the limitation and conform our strategy to it.

As we make weapons better at discriminating, we let them operate with less oversight. But since history, policy, and general human distrust of machines all point to reduced oversight coming only from proven confidence... I don't expect humans to be removed from any "kill-chain" at any point where a human's discrimination is still needed.
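As a minimal sketch of that "human in the loop" idea (hypothetical Python, not modeled on any real targeting system), the machine only recommends, and engagement requires both a confidence threshold and an explicit human decision:

def engagement_decision(target_confidence, human_approves, threshold=0.99):
    # The automated system may only recommend; it never fires on its own.
    if target_confidence < threshold:
        return "hold"      # discrimination too weak: no recommendation is made
    if not human_approves():
        return "hold"      # the human operator declined
    return "engage"        # reached only with machine confidence AND human consent

# Example: the operator is consulted only when the classifier is confident.
print(engagement_decision(0.995, human_approves=lambda: False))   # hold

Removing the human from the loop would amount to deleting the second check, which is the step the comment argues will happen only once a machine's discrimination is proven.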

As for robots taking over the world... we can automate factories. We can't automate supply chains, and we can't automate maintenance. Any "rogue AI" out of scifi can have its batteries pulled. A broken valve somewhere, a few cut wires, and they are SOL. Sure, if they somehow launch a large series of nuclear warheads, in the doomsday scenario, it'll wipe out a lot of people. But they'll also decay. And the human race will live, and recover.

Dunno if it's stupidity or narcissism, but you have a very 'futurist', delusional idea of what robots are - or will be - capable of.

I read a recent article that made the case that the current UAVs used by the US military (e.g., Predator, Global Hawk) and other countries actually cost more to operate and require more total manpower than an equivalent manned system would. Regardless of cost, though, a UAV definitely doesn't put a pilot at risk.

As for industrial robots, they are quickly making cheap labor rates irrelevant. Within a few years, manufacturing robots will become sophisticated enough to displace even the cheap manual labor in places like India and China. This will allow a lot of the manufacturing that moved overseas for cheap labor to move back to the US, benefiting our domestic GDP, improving our trade deficit, and ultimately reducing our national debt.

