Stopping Killer Robots at the Source (Code)
Not too long ago, a powerful collection of scientists, industry leaders and NGOs launched the Campaign to Stop Killer Robots, an activist group dedicated to preventing the development of lethal autonomous weapons systems. Among those who signed on to the cause: Stephen Hawking, Noam Chomsky, Elon Musk and Steve Wozniak.
Those high-profile names earned the cause a lot of attention and lent legitimacy to the notion that killer robots, once considered a science fiction fantasy, are actually a fast-approaching reality.
But are they, really? An intriguing study published in the International Journal of Cultural Studies takes a different approach to the idea of "killer robots" as a cultural concept. The researchers argue, in part, that even the most advanced robots are just machines, like anything else our species has ever made. If we're careful with the components we put in — both technologically and culturally — they won't just somehow turn on us in a future robot revolution.
"The point is that the 'killer robot' as an idea did not emerge out of thin air," said co-author Tero Karppi, assistant professor of media theory at the University of Buffalo. "It was preceded by techniques and technologies that make the thinking and development of these systems possible."
In other words, we're worried about killer robots because that's the story we keep telling ourselves and the terminology we keep using. The authors cite films like "The Terminator" or "I, Robot," in which it's just assumed that far-future robots will eventually turn on the human race. Those same assumptions are informing how we're preparing for a future of machine intelligence.
For instance, the paper cites a passage from the Campaign to Stop Killer Robots website:
Over the past decade, the expanded use of unmanned armed vehicles has dramatically changed warfare, bringing new humanitarian and legal challenges. Now rapid advances in technology are resulting in efforts to develop fully autonomous weapons. These robotic weapons would be able to choose and fire on targets on their own, without any human intervention.
The researchers respond that these alarmist, dystopian scenarios reflect a "techno-deterministic" worldview, in which technological systems, once granted too much autonomy, become destructive not only to society but to the human race itself.
"It implies a distinction between human and machine," the authors write. "It seems to offer a clear 'evolutionary' break or categorical distinction between humans-in-control of machines versus autonomous weapons as machines-in-control-of-themselves."
But what if we coded machine intelligence in such a way that robots don't even make a distinction between human and machine? It's an intriguing idea: If there's no "us" and no "them," there can be no "us versus them."
Indeed, Karppi suggested that we may be able to tweak the way future machines think about humans on a fundamental level.
"One possible scenario might be to try to think of robots and machine intelligence as social," he said. "How these systems are working together with humans — not independently and in opposition to humans."
By focusing on these cultural techniques, as the paper terms them, we can analyze and redirect the technologies that will determine the nature of our future robots. The authors cite a recent New York Times report that the Pentagon has allocated $18 billion of its latest budget to develop systems and technologies that could form the basis of fully autonomous weapons.
If we want to make changes to the way we develop these systems, the time is now. Simply banning lethal autonomous weapons down the line doesn't address the root causes of the dilemma. To really avoid the development of autonomous killing machines, we need to dig into the digital and cultural DNA at the root of the problem.
The key to creating a kinder, gentler robot future is to recognize that robots are ultimately a creation of humankind. The robots of the future won't be technological menaces that drop down from the stars (hopefully). They'll be built by humans, with all our attendant complexities.
"Machine intelligence is here and we need to learn to live with it," Karppi said. "What living with these systems means is not only a problem of technology or engineering, but a problem that involves culture, humanity and social relations."
Originally published on Seeker.