kleinbl00  ·  3400 days ago  ·  post: When autonomous software breaks the law, who's to blame?

Whoever is legally responsible for the software, I would assume.

Software is a tool. Crimes committed with tools are the responsibility of the legal owner of the tool unless the owner of the tool can demonstrate that the tool performed in a manner notably out of line with expectations. Run over a pedestrian? The driver is at fault. Run over a pedestrian because the accelerator suddenly jammed down and the brakes spontaneously failed? Bet your ass the driver is going to try to pin that on the manufacturer.

It's not illegal across most of the USA to own a still. It's not illegal to manufacture stills. It's not illegal to sell stills. Actually using a still? Bet your ass that's illegal. Who gets busted? The guy who lights the burner.

There's nothing illegal about coding something that buys shit off Silkroad. However, executing code that says "buy me illegal shit off Silkroad" is criminal intent, open and shut.

But shared, because a bot that buys illegal shit off Silkroad to put it on exhibit next to a police station is fucking hilarious. God bless the Swiss.

Dendrophobe  ·  3400 days ago

    Whoever is legally responsible for the software, I would assume.

I'm inclined to agree, though I wonder if there could ever be a situation where courts would decide that the person using the software is innocent.

kleinbl00  ·  3400 days ago

There will be a test case and it will be f'ing bizarre.

Culpability, as I understand it, comes down to agency and intent. Those three words are only together because I've hung around enough law students to pick up a little by osmosis. Whippin' out that august body of legislative wisdom, Wikipedia:

    A person is culpable if they cause a negative event and

    (1) the act was intentional;

    (2) the act and its consequences could have been controlled (i.e., the agent knew the likely consequences, the agent was not coerced, and the agent overcame hurdles to make the event happen); and

    (3) the person provided no excuse or justification for the actions

So whatever test case we're gonna find, it's gonna have to be a fully autonomous event that somehow isn't negligent.

How 'bout...

Teen hacker creates a Roomba plugin ('Skynet 9000') that causes the celebrated line of robotic vacuums to chase any movement it detects (cats, etc.). However, the hacker programs a failsafe that stops the rogue Roomba when its collision sensors trigger.

iRobot pushes a software update that uses a different communication pathway between the collision sensors and the drive motors. As a consequence, Roombas running Skynet 9000 no longer stop upon bumping up against their targets, as Maude Smith discovers when her Roomba 'Mr. Finster' mauls her morbidly obese chihuahua 'Tiddlypoop,' whose short little legs and corpulent little belly prove no match for Mr. Finster's murderous rage, courtesy of some goofing around by her 13-year-old son Timmy.
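To make that failure mode concrete, here's a rough sketch (every class, event, and function name is invented for the hypothetical): a failsafe that subscribes to one sensor pathway goes silently deaf the moment an update reroutes the signal, without anything ever erroring out.

    # A rough sketch; all names here are invented for the hypothetical.

    class RoombaFirmware:
        """Tiny stand-in for the vacuum's internal event bus."""
        def __init__(self):
            self.handlers = {}
            self.driving = True

        def on(self, event, handler):
            self.handlers.setdefault(event, []).append(handler)

        def emit(self, event):
            for handler in self.handlers.get(event, []):
                handler()

    def install_skynet_failsafe(firmware):
        """Skynet 9000 subscribes to the pathway that existed when it was written."""
        def stop_motors():
            firmware.driving = False
        firmware.on("collision_sensor", stop_motors)

    bot = RoombaFirmware()
    install_skynet_failsafe(bot)

    # Pre-update firmware routes bumps through "collision_sensor": the failsafe fires.
    bot.emit("collision_sensor")
    print(bot.driving)   # False -- the chase stops at the first bump

    # Post-update firmware routes bumps through a new pathway. Nothing is subscribed
    # to it, so the failsafe never runs and Mr. Finster keeps on driving.
    bot.driving = True
    bot.emit("bumper_event_v2")
    print(bot.driving)   # True -- bad news for Tiddlypoop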

Mrs. Smith sues iRobot for the software update. iRobot argues that Mr. Finster was running malicious code. Mrs. Smith's attorneys point to iRobot's Create program and argue that Skynet 9000 was not dangerous to canines until they pushed their update.

Mr. Finster ran on a schedule - there was no agency that set him in motion. Timmy didn't intend for the dog to get mauled - he overcame some hurdles to accomplish his task but that was before the software update. iRobot's only culpability is in whether or not their software should have been so locked down that nobody could hack it, and whether or not they should have been cognizant of all unofficial patches.

I could see a court ruling that nobody was at fault there (see "moral evil" vs. "natural evil"). The question is what kind of precedent it would set.

veen  ·  3400 days ago

This is pretty much how I understand the culpability issue. Most of the problems with automated vehicles will be accidents, and in most of those situations the blame will be put on the company making the automated vehicle.

Automated vehicles will try their best to avoid accidents, and will make judgements just like a human does (the intent part of culpability). The question is whether those judgements are good or bad ones. If there is a clear misjudgement that could have been fixed by changing some code, then the car (and the company) is to blame. When the car couldn't have done it any better (aka best intentions), it's not to blame. If you were to fall in front of an automated vehicle going 100, no, it's not to blame.

There are already automated vehicles in positions where they could hurt people. ASI is a company which automates mining and farming operations. Out of interest I've emailed them to ask who they think is responsible when their equipment causes damage, and I've asked them for a license or terms of use for me to read. Let's hope they respond!

kleinbl00  ·  3399 days ago

You know you're a geek when you want to read TOS and licensing agreements for automated mining equipment for fun.

I'm gonna bet they've got boilerplate that says "no matter what happens, no matter where it happens, no matter how it happens, no matter why it happens, no matter when it happens, it isn't our fault." The question then becomes the enforceability of said boilerplate.