Robot Madness: Preventing Insurrection of Machines
In Robot Madness, LiveScience examines humanoid robots and cybernetic enhancement of humans, as well as the exciting and sometimes frightening convergence of it all. Return for a new episode each Monday, Wednesday and Friday through April 6.
A robotic future holds the promise of tireless workers and companions for humans, but it also evokes worries about an armed machine insurrection along the lines of the "Terminator" movies.
Experts consider that dark vision a distant prospect, but they point to nearer-term ethical issues raised by the growing presence of battlefield bots and their potential to make attack decisions on their own, possibly within the next 20 years.
For instance, the U.S. military alone has more than 5,000 unmanned aerial vehicles, such as the Predator, keeping watch from the skies, not to mention thousands more robots on the ground supplied by companies such as iRobot, maker of the Roomba vacuum bot.
Most current military robots have human handlers, but some can pull the trigger on their own. The U.S. Navy and Army use anti-missile systems resembling R2-D2 toting a Gatling gun, which can go into full-automatic mode to track and shoot down incoming missiles. Israel and South Korea have deployed robotic sentries along their borders that may shoot first and ask questions later.
Such defensive systems could give way to robots that make attack decisions "within this century if not in the next decade or two," said Patrick Lin, a researcher at California Polytechnic State University who compiled a report for the U.S. Navy on the ethics and risks of military robots. And that raises the question of how to keep robots in line during confusing battlefield situations.
Video: "Future 'Bots: Robot-Human Convergence Begins." They are increasingly made in our image, yet their core technologies are changing us into entities more like them. They will take care of us, one way or the other.
"Yes, a robot does not feel anger or revenge but neither does it feel empathy or compassion," said Noel Sharkey, a robotics expert at the University of Sheffield in the UK. Sharkey noted situations that may require decision-making beyond current robotic intelligence, such as civilians wandering into the field of fire or child soldiers forced into battle.
There are workarounds, however. Militaries will probably never replace humans entirely with robots, said Ronald Arkin, a robotics researcher at Georgia Tech; instead, robots will operate and fight alongside humans in specialized roles. Their tireless presence may even save lives in situations where weary human fighters might make bad decisions, abusing prisoners or killing civilians.
However, robotic perfection may not be feasible or even ideal in every situation, and it is unclear whether humans should demand more of robots, ethically, than they demand of themselves.
"It is not my belief that an autonomous unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of," Arkin told LiveScience.
And that question could become irrelevant as new technologies blur the line between robot and human.
Episode 2: Creating True Artificial Intelligence
- Video - Future 'Bots: Robot-Human Convergence Begins
- More Robot News and Information