
Should we put robots on trial?

As machines get smarter — and sometimes cause harm — we’re going to need a legal system that can handle them.

By Leon Neyfakh | GLOBE STAFF MARCH 01, 2013

IMAGINE BEING HIT by a car with no driver. The guy in the passenger seat shrugs and points innocently at the computer nested in his dashboard. Who is to blame? You realize you have no idea.

To its advocates, the self-driving car currently under development by Google and other companies will be a godsend—a vehicle equipped with cameras, sensors, and a powerful CPU that will take thousands of life-or-death decisions out of the hands of fallible, distracted humans. But from a legal perspective, it raises an immediate question: What if the car screws up? What if it runs someone over?

Whether we like it or not, machines that operate independently of humans are already among us. The self-driving car—not yet available, but already logging hours on the road as a prototype—is merely one example. Earlier this year, the British military announced it was testing a new kind of drone that can automatically dodge missiles and fly around obstacles. On Wall Street, high-frequency trading programs operate with minimal input from actual stockbrokers, while businesses use automated algorithms to set prices and process sales. (In one case, two bots bid the price for an obscure book up to $23.7 million on Amazon.)
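
That runaway Amazon price came from a simple feedback loop, and a toy sketch makes the mechanism clear. Everything in the snippet below is hypothetical except the basic rule: each seller reprices as a fixed multiple of the other’s latest price (the multipliers are roughly those reported at the time, and the starting price is invented), and because the two multipliers compound to more than 1, the price ratchets upward until someone notices.

```python
# Toy sketch of a runaway repricing loop (not the real sellers' code).
# Each bot sets its price as a fixed multiple of the other's latest price;
# since 1.2706 * 0.9983 > 1, every round pushes both prices higher.

def runaway_pricing(start=30.00, cap=23_700_000):
    a = b = start          # hypothetical starting price for both listings
    rounds = 0
    while b < cap:
        a = b * 1.2706     # seller A prices just above seller B
        b = a * 0.9983     # seller B prices just below seller A
        rounds += 1
    return rounds, a, b

rounds, a, b = runaway_pricing()
print(f"after {rounds} repricing rounds: A=${a:,.2f}, B=${b:,.2f}")
```

Nothing here models demand or sanity checks; the point is only that two locally reasonable rules can compound into an absurd outcome with no human in the loop.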

With most robot-like machines that exist today, any serious problems can be easily traced back to a human somewhere, whether because the machine was used carelessly or because it was intentionally programmed to do harm. But experts in artificial intelligence and the emerging field of robot ethics say that is likely to change. With the advent of technological marvels like the self-driving car and increasingly sophisticated drones, they say we’ll soon be seeing the emergence of machines that are essentially autonomous. And when these machines behave in ways unpredictable to their makers, it will be unclear who should be held legally responsible for their actions.

With their eyes on this apparently inevitable future, some specialists have started to argue that our legal system is woefully unprepared—that in a world in which more and more decisions are made by entities with no moral compass, the laws we have are not enough. In fact, some are arguing that it’s time to do something surprising: to extend our idea of what it means to be an “independent actor,” and perhaps even hold the robots themselves legally culpable.

“Today we are in a vacuum—a legal vacuum,” said Gabriel Hallevy, a professor of criminal law at Ono Academic College in Israel, and author of a forthcoming book from Northeastern University Press, “When Robots Kill.” “We do not know how to treat these creatures.”

Extending a legal system built to assess human culpability and intent to machines raises a host of questions. Would holding a robot criminally liable for murder really be productive, or would it simply absolve negligent manufacturers and owners? Does it matter what the humans who designed the robot intended? And if an individual robot crosses a line, is pulling one plug enough, or should the consequences extend to all the other robots of its make and model?

It may sound like science fiction—indeed, the 1960s TV series “The Outer Limits” once featured a robot put on trial for killing its creator. But to Hallevy and other proponents of this speculative new field, understanding the legal status of robots is key to figuring out how we want society to work once humans are no longer the only ones making important decisions. It captures vital questions about just how aware the machines that work for us really are, and whether it ever makes sense to blame them instead of ourselves. Ultimately, it shines an uncomfortable light on the question of where humans end and machines begin—a line that may not be all that clear, and that seems to be sliding fast.

*

IN MANKIND’S COLLECTIVE fantasies about the future of artificial intelligence, we tend to imagine robots helping us in various realms of life: doing housework, delivering food, and asking us probing questions in automated therapy sessions. But we also worry about the prospect of robots becoming so powerful and smart that they overtake us and seize control of our world.

What we don’t think about is a distinctly less dramatic but also more immediate problem: As machines with artificial intelligence quietly take over responsibilities in our homes, in our factories, and on our battlefields, they will occasionally make mistakes—sometimes serious ones. It has happened already. In 1979, a Ford worker became the first person ever to be killed by a robot, when a machine smashed him with its arm while the two were working in the same storage facility. More recently, an apparent software glitch caused a robotic cannon used by the South African army to fire during an exercise and kill nine soldiers.

So far, robot misdeeds have fallen into the realm of traditional liability: A factory robot accident isn’t too different, legally, from being hit by an automatic door. But when legal philosophers and robotics specialists look ahead, they worry less about accidents than about what you might call robot reasoning. Tasked with a certain mission—say, delivering medication to an elderly person at a nursing home—a particularly goal-oriented robot might conclude that the most efficient way to proceed is to eliminate the guard at the front desk. If the machine is sufficiently smart, it won’t be as simple as figuring out who wrote faulty instructions: Such a machine will not just be following orders, but doing something it came up with on its own, such that its actions are at a great remove from whoever originally programmed it. “The chain of causality when something goes wrong is becoming longer and longer,” said Kate Darling, a researcher at the MIT Media Lab who recently co-taught a course on robot rights at Harvard Law School with Lawrence Lessig. “We have ways of assigning responsibility under current law, but they might not make sense anymore, as technology gets more autonomous.”
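
To make the worry concrete, here is a deliberately crude planner sketch, with invented plans and costs, that scores candidate actions on delivery time alone. Because nothing in its objective mentions harm, the “best” plan is one its designers would never have sanctioned; adding even a blunt penalty term changes the choice.

```python
# Crude illustration of a misspecified objective: the planner minimizes
# delivery time only, so nothing in its scoring rules out a plan that
# harms a person. Plans and costs are invented for the example.

candidate_plans = [
    {"name": "wait for the guard to sign the robot in", "minutes": 12, "harms_person": False},
    {"name": "take the long corridor around the desk",  "minutes": 9,  "harms_person": False},
    {"name": "remove the guard blocking the short route","minutes": 4,  "harms_person": True},
]

def naive_score(plan):
    return plan["minutes"]  # efficiency only, no safety term

def constrained_score(plan):
    # A blunt penalty makes any harmful plan effectively forbidden.
    return plan["minutes"] + (10_000 if plan["harms_person"] else 0)

print("naive planner picks:      ", min(candidate_plans, key=naive_score)["name"])
print("constrained planner picks:", min(candidate_plans, key=constrained_score)["name"])
```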

As weird as it may sound, some experts, including Hallevy, are suggesting that blame may need to be placed on the robots themselves. Though there’s something absurd about subjecting amoral machines to justice, Hallevy argues in his forthcoming book that the question is not really about morality—it’s about awareness. And under existing criminal law, he says, a machine with full autonomy can and should be held criminally liable for its actions. “Evil is not required,” Hallevy said. “An offender—a human, a corporation, or a robot—is not required to be evil. He is only required to be aware [of what he’s doing].” In his book, Hallevy argues that being “aware,” whether you’re a human or a robot, involves nothing more than absorbing factual information about the world and accurately processing it. Under that narrow definition, he writes, robots have been sophisticated enough to qualify for criminal liability since the 1980s.

Others don’t think we’re there quite yet. At a conference on robot law held last year at the University of Miami Law School, one of the primary discussion questions was when exactly this line might be crossed—how we can determine that a machine has become “autonomous” enough to justify treating it as an independent actor. “People talk about autonomy as if it’s a binary thing, but it’s better to understand it as a spectrum,” said Samir Chopra, a professor of philosophy at Brooklyn College of the City University of New York, and coauthor of the 2012 book “A Legal Theory for Autonomous Artificial Agents.” “You go from zero autonomy, like a rock, to full autonomy, which is a fully grown, rational human. And in between there’s a lot of variation.”

In his book, coauthored with the lawyer Laurence White, Chopra focuses on computer programs that are equipped to make deals and create legally binding contracts without human intervention. “This happens all day long,” Chopra said. “Human beings set them loose on the Net and say, ‘Go find me a good trade.’” It’s not hard to imagine that such a program could run afoul of the law—for instance, said Chopra, by trading on insider information that it uncovered without realizing it was off limits.

Such programs, Chopra argues, should be given special consideration under the law: They’re too independent to be classified as simple tools, like guns, but not independent enough to be counted as full legal persons. Instead, Chopra writes, they should be treated as “legal agents” of the company that operates them—the equivalent of a bus driver working for the MBTA, who shares responsibility for some but not all of his actions with his employer. Chopra’s vision is pragmatic: By deciding to treat machines as legal agents, we can exert some influence over the humans who design and operate them, while acknowledging the fact that they are capable of doing things no one could have reasonably expected.

*

IN PRACTICAL TERMS, determining the legal status of robots amounts to a careful balancing act: Manufacturers and owners need to feel responsible enough to take safety precautions with the increasingly smart machines they’re building, but not so hamstrung with fear that they back away from innovations we want, like drones that help find missing kids or fight fires. The question is, said Ryan Calo, assistant professor at the University of Washington School of Law and an organizer of an upcoming conference on robot law at Stanford Law School, “Now that this technology exists, what limits should we be placing on it, but also, what limits should we be placing on tort laws in order to encourage it?”

Another practical concern, of course, is to imagine what assigning responsibility to machines would actually mean in court. In his book, which argues that holding manufacturers criminally liable for their robots’ actions would require radically revising our laws, Hallevy lays out a vision of how we might subject robots to criminal punishment under the existing regime. Robots that have murdered people, he writes, could be subject to “absolute shutdown under court order, with no option of reactivating the system again,” while robots that have committed less serious offenses could be ordered to devote work hours to some public good, in approximation of community service. Hallevy even imagines situations in which a robot is found to be effectively insane or intoxicated—when it has been infected with a virus by hackers, for instance—or acting in self-defense.

Done right, robot law could help deter bad robot behavior, said Steve Omohundro, a scientist who has written extensively on artificial intelligence. Even if deterrent pressure didn’t work on machines quite the way it does with humans—a claim Omohundro disputes—it wouldn’t matter, he says, as long as robots “knew” the consequences of doing illegal things. By way of illustration, Omohundro asks people to assume the perspective of a chess-playing robot whose entire purpose in “life” is to win games of chess. “Every decision you make, every action you take, you consider the future, and ask whether more and better chess is being played,” Omohundro said. “Now imagine someone wants to turn you off—unplug you. That’s a future in which no chess is being played.” In other words, a robot that knows it could be powered down for killing someone suddenly “knows” it shouldn’t kill—for the simple reason that being turned off would interfere with its mission. At its core, Omohundro noted, this is how human laws work, too—it’s just that they’re written in human language instead of in code.
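
Omohundro’s thought experiment boils down to an expected-utility calculation. The hedged sketch below (the actions, probabilities, and game counts are all invented) shows why a shutdown penalty can “deter” a machine that values nothing but chess: any action that risks being unplugged scores poorly, because shutdown means zero future games.

```python
# Minimal expected-utility sketch of Omohundro's chess-agent example.
# The agent's only terminal value is future games of chess; a legal penalty
# (being unplugged) deters it simply because shutdown means no more chess.
# Actions, probabilities, and game counts are invented for illustration.

actions = {
    # name: (expected future games if still running, probability of shutdown as penalty)
    "play by the rules":         (1_000, 0.00),
    "cheat to win a tournament": (1_200, 0.90),  # illegal act -> high chance of absolute shutdown
}

def expected_chess(games_if_running, p_shutdown):
    # Shutdown yields zero future chess, so only the surviving branch counts.
    return (1 - p_shutdown) * games_if_running

for name, (games, p) in actions.items():
    print(f"{name:28s} expected future games: {expected_chess(games, p):8.1f}")
```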

For the moment, it might still seem nutty to think our legal system could be used to govern the behavior of nonhumans. But progress in the law, over the centuries, has meant some very broad shifts in what counts as a responsible party. In his 1906 book, “The Criminal Prosecution and Capital Punishment of Animals,” E.P. Evans details more than 200 cases of animals being formally accused of criminal acts during the Middle Ages, including a French pig that was sentenced to hanging by a judge for having “strangled and defaced a child in its cradle.”

Today we’ve removed animals as a category from our roster of responsible entities, but we’ve added corporations, which once would have seemed equally strange to prosecute. As robots become a bigger part of our lives—picking us up at the airport, cleaning our streets, even hunting suspected criminals—perhaps progress will mean expanding that roster yet again.

