comment by kleinbl00
kleinbl00  ·  3539 days ago  ·  link  ·    ·  parent  ·  post: Should your robot driver kill you to save a child's life?

I love how this "thought experiment" proposes that your autonomous car is going so fast around blind corners that it's overshooting its braking distance. Let's rephrase the question, shall we?

"Should you buy an autonomous car so irresponsibly designed that it can practice reckless driving thereby involving you, the manufacturer and the highway authority in a joint criminal tort involving negligent homicide?"

I'm gonna go ahead and solve that one by inspection, thanks.





briandmyers  ·  3538 days ago  ·  link  ·  

It's easy to re-phrase the thought experiment without insisting on badly designed auto-drivers, surely.

A child can dash in front of a car from a hidden spot, and eclipse any reasonable braking distance. It's not a moot question - driverless cars WILL need to make these kinds of decisions, eventually - either explicitly or implicitly.

kleinbl00  ·  3538 days ago  ·  link  ·  

It isn't, though. See, you start with the presumption that any autonomous vehicle must be at least as competent as a human driver or else they have no business on the roads. Google data bears this out - where they drive, they're damn good at it.

So then you have to replace the autonomous driver with a human driver and give it a look. So now it's not "you're a passenger in a Google car down a winding mountain road" it's "you're driving down a winding mountain road." In order for thought experiments like this to work, you have to presume that you're driving at an unsafe speed - you're out-driving your brakes - and that you're going to speed up when presented with a tunnel. Which people don't do, by the way. They slow down.

In order to put an autonomous vehicle in a "choose you or choose the kid in the road" scenario you have to put a driven vehicle in a "choose you or choose the kid in the road" scenario. And those are few and far between but there's 100 years of case history behind them anyway so it's not really the thought experiment the pundits want it to be.

Disagree? Go ahead. Rephrase the question. It all comes down to the trolley problem which, lemme tell ya, BNSF has gone well out of its way to render moot through safeguards and best practices. This is the DOT we're talking about - you really think some columnist somewhere has come up with a Sophie's Choice problem that hasn't been ironed out over 100 years on 4 million miles of roads in the US alone?

There are real problems with autonomous vehicles and their implementation. Focusing on this one draws attention away from them.

briandmyers  ·  3538 days ago  ·  link  ·  

>> It isn't, though.

Yes, it is. I laid it out for you - a child dashing into the path of an autonomous car from a hidden spot. Simple.

I have no idea how you got "speed up" when presented with a tunnel, or any of that other rubbish about overdriving your brakes, from what I said.

I'm not particularly fussed, because although these scenarios are unsolvable, they are compensated for by the innumerable driver-error accidents that driverless cars are not prone to. But it is still a fact that these systems will need to make these kinds of decisions, or else be unaware that there was a choice to be made.

kleinbl00  ·  3538 days ago  ·  link  ·  

Look - we generally get along. And your opinion and observations matter to me. So how 'bout you pretend we're friends and I'll respond patiently and clearly? 'cuz this isn't the kind of problem you think it is, but I don't feel like discussing it with someone shouting at me.

briandmyers  ·  3538 days ago  ·  link  ·  

No worries mate. I'm not butthurt in the slightest, and I truly didn't mean to be anything other than friendly.

Just so it's clear, I agree 100% with this :

    There are real problems with autonomous vehicles and their implementation. Focusing on this one draws attention away from them.

However - I believe this issue IS worth discussing (and not dismissing), because sooner or later the google boffins WILL need to decide what autonomous cars will do when faced with "the trolley problem", regardless of the iffy journalism that got us on this track in the first place.

kleinbl00  ·  3538 days ago  ·  link  ·    ·  

Alright. So it comes down to "engineering problems" vs. "philosophical problems."

Engineers solve engineering problems all day long. Thing of it is, though, most of the solving is in formulating the question. Contrary to popular belief, an engineering degree is not a degree in solving story problems, it's a degree in writing them - and in writing the correct ones to give you the answers that matter.

Philosophers don't really solve anything: they look for the unsolvable. The qualia of color and Agrippa's trilemma can keep philosophers going for decades. Some problems are indeed unsolvable: if you step halfway to the wall with every step, will you ever actually touch the wall?

This is never an issue until an engineering problem is framed as a philosophical problem, or a philosophical problem is framed as an engineering problem. An engineer will take a look at qualia and go "approximate green as green", look at the Münchhausen trilemma and say "test for repeatability", and be done with it. Will you ever reach the wall? Sure. The tolerance on your ability to stand that still is about a quarter of an inch. Philosophy suddenly becomes a story problem and it's solvable.
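
Here's that story problem worked out as a quick sketch - the ten-foot starting distance is an assumed number for illustration, and the quarter-inch tolerance is the one above:

    # Zeno's wall as an engineer's story problem: how many half-steps until
    # the remaining gap is inside the tolerance?
    distance_in = 10 * 12      # assume you start 10 feet (120 inches) from the wall
    tolerance_in = 0.25        # "close enough", per the quarter-inch above

    steps = 0
    while distance_in > tolerance_in:
        distance_in /= 2       # each step covers half the remaining gap
        steps += 1

    print(steps)               # 9 - nine steps and you're touching the wall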

My least favorite story of all time is "The Cold Equations" by Tom Godwin, supposedly about a stowaway on a spaceship facing certain death. Why does she face certain death? Because Tom Godwin wanted to write about a girl facing certain death. Great stories can be written this way: "Kaleidoscope" by Ray Bradbury deals with the problem, as does Heinlein with "The Long Watch." However, both of those authors established up front that death was inevitable - because of a catastrophe in one case and an act of bravery in the other. Godwin, on the other hand, argued that death was inevitable because the supply ship the stowaway was aboard was so precisely engineered that an extra 100 lbs meant the difference between 100% success and a fiery death for the pilot, along with the destruction of his life-saving medicine.

The issue with phrasing a philosophical problem as an engineering problem is you are dismissing the universe. You are saying "it's not worth asking about solutions because there aren't any, I checked." Which forces the reader to poke at your problem and find all the unexamined issues. Which, if you're after a philosophical discussion, pisses off both parties - "any of that other rubbish" misses the philosophical issue of, in this example,

autonomous cars choosing to kill people.

    Consider this thought experiment: you are travelling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the centre of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. Both outcomes will certainly result in harm, and from an ethical perspective there is no “correct” answer to this dilemma. The tunnel problem serves as a good thought experiment precisely because it is difficult to answer.

Autonomous cars are an engineering problem. Everyone wants to discuss the philosophical issues - everyone wants to talk about the "choices" that an autonomous car will "make" and how that will endanger our lives and open up "dilemmas."

and it's bullshit.

It is purest, simplest, most saccharine bullshit because cars don't choose. Robots don't choose. Computers don't choose. Logic dictates operation based on inputs and the system is truly complex. But that doesn't even matter because the issue here isn't what a computer will do, it's what you'd do: the basic premise of autonomous cars is they have to perform at least as well as we do. So as soon as you swap yourself for the car, the question, when posed as a binary (kill you, kill the kid), becomes offensive as fuck.

Look. Let's get real, shall we? Let's pick a real tunnel.

That's Bluff Mountain Tunnel in Virginia. Here it is in Google Streetview. Seems to fit the bill - a car and a kid can't both fit through it, and there's a wall on either side to swerve into. Speed limit through there is 45mph, which is cruisin' fast enough that things are gonna get dicey. So let's take things at face value: swerve and hit a wall or run over the kid?

As a philosophical problem, it's clear-cut: the discussion is whether you're going to die or the kid is. As an engineering problem, it's anything but:

1) NHTSA safety standards require a car to protect its occupants from a 35mph front-end collision with an immovable object. You tellin' me you can't bleed off 10mph?

2) What the hell are you doing hauling ass so fast where you can't stop anyway? If you are, you're breaking the law - and guaranteed, your autonomous car won't break the law.

3) I'll bet you could bleed off some speed on that grass. And those trees are gonna slow you a whole lot more gently than the cliff face.

4) You know, if you run over the kid's legs, she'll live.

5) You know, if you glance the side of the tunnel with your passenger side, you and the kid will be fine even if you don't so much as tap the brakes.

And yeah - it's tough for you to make dispassionate decisions like this in the heat of the moment. But it isn't for an autonomous vehicle, because a real-live human programmed all this stuff in before it ever rolled off the factory floor. It's thinking at 3 GHz and it has absolutely no adrenaline in the game.

Besides which, the software is just responding to the conditions in the way it was programmed to respond by engineers who are dealing with real problems that they have to investigate and put numbers to and error-check and beta-test and otherwise suck all possible philosophy out of it. It really isn't the lady or the tiger, it's "how much braking traction does the vehicle have on these road conditions and is that adequate to address the limited sight distance of the blind corner ahead."
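
That check is easy enough to sketch. Rough numbers only - 45 mph, dry-pavement friction of about 0.7, a 60-meter sight line - none of them from Google or NHTSA, just to show the shape of it:

    # Rough sketch of the "can I stop inside my sight distance?" check.
    # Every number here is an illustrative assumption, not real vehicle data.
    G = 9.81  # gravity, m/s^2

    def stopping_distance_m(speed_mph, friction, reaction_s=0.2):
        """Reaction distance plus braking distance on a flat road."""
        v = speed_mph * 0.44704                    # mph -> m/s
        return v * reaction_s + v ** 2 / (2 * friction * G)

    # 45 mph on dry pavement (friction ~0.7), 60 m of visible road to the tunnel mouth:
    print(round(stopping_distance_m(45, 0.7), 1))  # ~33.5 m - stops with room to spare
    # The same corner on a wet road (friction ~0.4):
    print(round(stopping_distance_m(45, 0.4), 1))  # ~55.6 m - so the car slows down first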

Honda's rolling that shit out next year.

There was a science fiction story called "The Cold Solution," written in the long tradition of pragmatic thinkers bugged to distraction by philosophers attempting to call engineering problems "unsolvable." In it, the pilot and the stowaway cut off their legs and pitch them out the airlock. Everybody lands safely and the manufacturer of the space tug is sued for negligence.

The problem of autonomous cars is worth discussing. The problem of the "choices" they will have to make, as we wring our hands in panic about the Frankenstein we have unleashed on the countryside, is worth ignoring.

Any time someone tells you there are only two choices, you know at least one thing:

They haven't thought about the problem very hard.

briandmyers  ·  3537 days ago  ·  link  ·  

You've clearly thought a lot about this, and I do see what you're saying.

I'd still like to know what an autonomous car will do in these "oh shit" situations, and I don't believe it's mere sophistry to want those answers. I'd like to see some testing done (say, with pop-ups), designed to throw a monkey wrench into the car's navigation.

To put it really simply - if the car detects something unexpected in its path, does it:
A) Swerve to avoid, if it deems that to be safe? If so, what exactly does it consider "safe" to be?
B) Stay on course and slow itself as best it can?
C) Other?

I'd also be interested in what you (or anyone) thinks are appropriate thought experiments.

One I'd be interested in exploring is "How badly can a determined person or group fuck up an autonomous car?", or "How well would such a car handle a malicious agent intent on (say) causing harm to a passenger in such a vehicle?"

kleinbl00  ·  3537 days ago  ·  link  ·  

Ahhhhhhhhh. Now we have an engineering problem. More than that, we have an engineering problem involving human mortality. Which means we have an engineering problem involving liability and statistics. It's going to come down to inputs and outputs, of which I know neither. But I think I know a little more than you, so let me expound upon the shape of the problem as I understand it:

So a Google car relies on three things: LIDAR, vehicular telemetry and a phatty, phatty, phatty GIS database. Based on this post we know that Google does not unleash a car on a road that it hasn't mapped in 3D space down to the INCH. Let that sink in for a minute:

Google knows the road so well it can detect a chipmunk. It can probably detect a tin of Carmex. It knows about that Camel Lights hardpack box you threw out the window. In fact, if there was a google car in front of you and a google car behind you when you threw it out the window, it knows YOU threw it out.

So there's an ethical issue to discuss.

Google also knows the telemetry of every Google car that has driven that road, ever. It knows the deviation from its normative map as recorded by that Google car's LIDAR. It knows the tire traction, the ambient temperature, the lateral acceleration and speed of every google car to ever go around the corner. And it not only knows the speed limit on the Blue Ridge Parkway, it knows when it changes due to road conditions.

Google also knows everyone who lives around that tunnel, and probably some of the people vacationing near it. Google likely knows that your family rented a Winnebago, they know that you're a quarter mile up the river, and they know that you have an eight year old daughter.

Google can not only see a chipmunk in the road, it can predict that your eight-year-old has a possibility of jumping in front of your car as you round the bend.

There's another ethical issue to discuss.

So let's talk about your "oh shit situation." The car's not going to violate the law. The car is not going to exceed safe road conditions. The car is not going to outdrive its brakes. The car can tell a log from a person, and a deer from a 6-year-old. So if the car has passed all these checks and still finds an error in its programming (which is what a soon-to-be-dead 8-year-old is, when you get right down to it), it's certainly going to file a bug report. Which means the next time a google car goes around the corner, it'll probably come at the tunnel slower.
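
If you want that as code, here's a toy sketch of the checks just described - the names, categories and ordering are invented for illustration, not anything Google has published:

    # Toy sketch of the checks described above. Invented for illustration only.
    def respond(obstacle_kind, can_stop_in_time, clear_escape_path):
        # Standing constraints, enforced long before this point: the car is at
        # or under the legal speed, within safe road conditions, and never
        # outdriving its brakes.
        if can_stop_in_time:
            return "brake to a stop"            # the overwhelmingly common case
        if clear_escape_path:
            return "brake and steer around it"  # grass and trees beat the cliff face
        if obstacle_kind == "person":
            # An unavoidable person in the path is an error in the system's
            # assumptions about this road - log it so the next car comes in slower.
            file_bug_report(obstacle_kind)
        return "brake hard, minimize impact"

    def file_bug_report(obstacle_kind):
        print("anomaly report: unavoidable " + obstacle_kind + " in path")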

But that's just liability to Google's customers. Chances are good that since road conditions put Google in a position of liability, Google is going to sue the highway department for unintentional tort and get the speed limit reduced. Google is going to raise the issue of highway safety at that corner and get a fence put up to keep campers from wandering onto the road. And google is going to point out that its vehicle was obeying every aspect of the law and driving as safely as a human would, and accidents happen.

But really, the scenario that even brings all this up is basically someone lunging out in front of an autonomous vehicle with the deliberate intent of getting hit. Which is suicide, which also doesn't fault Google. What's the car gonna do? The car is going to consider an impermanent hazard in its path to weigh less than a permanent hazard not in its path and it's gonna hit it. It's gonna put on the brakes, it's gonna try to get around the hazard, but it's gonna hit it. Same as if the girl were a deer.

Right?

There's another ethical issue to discuss - the only real one to come out of this whole discussion. In this tiny, moot corner case, Google is saddled with the task of identifying a human as a human and responding to it differently than a deer. But in order to do that, we need to know how and if Google can tell the difference between a human and a deer with LIDAR. Hell, we need to know if and how Google can tell the difference between a person and a mannequin. And I'm willing to bet Google isn't interested in having that discussion. Which is okay for our purposes because the author of this very-not-good article didn't even think to ask it.

So here's the ethical issue at the heart of this: how much responsibility does Google have to road hazards that are in violation of the law? More than a human driver? Less? That's what we're supposed to be discussing. Any court in the land will say "the same" and move on.

There were some kids in my town that decided it would be funny to attach a scarecrow to an overhanging branch on Halloween. A car would come around a blind corner and they'd throw it down to dangle in front of the road from a noose around its neck. Ha ha. People swerved. Ha ha. People cussed. Not so ha ha. One of them wrecked and had to go to the hospital. Yer damn skippy the kids were charged with reckless endangerment.

A human might be better able to distinguish a scarecrow on a rope from a real live person than a Google car will be. One thing about the Google car, though - the actions it takes will be scripted by someone calm, rather than someone trying not to run over a suicide victim.

thundara  ·  3537 days ago  ·  link  ·  

    But really, the scenario that even brings all this up is basically someone lunging out in front of an autonomous vehicle with the deliberate intent of getting hit. Which is suicide, which also doesn't fault Google.

To be fair, this happens in Russia quite frequently, and people try to make easy money by suing if they survive (though usually it's aimed at only getting injured >_>). I wouldn't put it past people to target a vehicle that they /knew/ would respond in a predictable way. Doubtful Google would lose in court, but hey, no court case is the best court case.

kleinbl00  ·  3537 days ago  ·  link  ·  

Absolutely - thus the proliferation of dashcams and with them, hilarious .gifs of Russians faking traffic accidents. Because if you have documentation of it, you can prove it was deliberate.

Thing of it is, a Google car kicks the shit out of a dashcam. All the collision avoidance sensors, the LIDAR, any sort of video, they're all streaming. And LIDAR takes up a lot less data than video. If I were Google, I'd set it to cache several minutes' worth of data in RAM and then, if any sensor registered an anomalous event of any kind, I'd write that shit to storage and upload it to the mothership. I mean, Google is going to be in the business of wanting to know about unknown potholes and shit in the road, not just fraud-minded jumpers.
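
A minimal sketch of that cache-and-flush idea, if it helps - the window length, frame rate and upload step are all my assumptions, not anything Google has published:

    # Minimal sketch of "cache a few minutes in RAM, flush on anomaly".
    import time
    from collections import deque

    WINDOW_SECONDS = 180   # keep roughly three minutes of fused sensor frames
    SAMPLE_HZ = 10         # assumed frame rate for the fused sensor stream

    class SensorRecorder:
        def __init__(self):
            self.buffer = deque(maxlen=WINDOW_SECONDS * SAMPLE_HZ)

        def record(self, frame):
            """Append one fused frame (LIDAR summary, speed, GPS, ...)."""
            self.buffer.append({"t": time.time(), **frame})
            if frame.get("anomaly"):   # hard braking, impact, sensor fault...
                self.flush()

        def flush(self):
            """Persist the whole window and hand it off to the mothership."""
            snapshot = list(self.buffer)
            # write_to_disk(snapshot); upload_to_fleet_backend(snapshot)  # placeholders
            print("flushed " + str(len(snapshot)) + " frames around the anomaly")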

And they're Google. They could be emailing you a fully-rendered Sketchup flythrough of the accident scene in 3D space, raw LIDAR traces helpfully ghosted over the map, GPS coordinates accurate to the inch, timestamped to within 40 nanoseconds of the NIST atomic clock before you finish dialing 911.

Which is another ethical issue to consider: Google is going to have lots of data about you and they'll analyze the shit out of it whether you personally need it or not. As far as our fraudster, though, the last car you want to go toe to toe with in court is gonna be a self-driving car.

thundara  ·  3537 days ago  ·  link  ·  

100% agree, except:

    If I were Google, I'd set it to cache several minutes worth of data in RAM and then, if any sensor registered an anomalous event of any kind I'd write that shit to memory and upload it to the mothership

If at all manageable, I'd upload 100% if I were Google (or 0% if I weren't). You want big data? "Every car on the road" is huge data. Crowd-source street maps (they already do this with cell phones for roads, iirc), plot traffic patterns, study wildlife, get ambulances in the area before an accident even occurs, submit request tickets to cities to alter traffic laws where the traction has become a bit too low. A camera on every corner is both a sci-fi writer's and a data scientist's dream.

Even if they just took the position of selling (or opening) that data, that's A+ value to a business trying to pick out the next site to expand their offices / restaurants / outlets.

kleinbl00  ·  3537 days ago  ·  link  ·  

I'd expect some thinning, but yeah, that's about the gist of it.

mknod  ·  3039 days ago  ·  link  ·  

Thanks for posting this, kleinbl00 - these are basically my own thoughts, and the discussions I've had with my own engineering friends. I feel like all of us agree that the question is interesting, but ultimately not one that an engineer should answer wholly.

Another side to this is that right now, at this moment, there are hundreds of cars being driven by drunk or manic people who have very little control over themselves or their faculties. Self-driving cars aren't making some future decision to save a child's life or yours; they are paying back the debt of the hundreds of children who have died due to our own negligence and unsafe driving.

In the future, I imagine, we will be looked at like monsters for ever letting a single child get into a human-controlled monster.