Ahhhhhhhhh. Now we have an engineering problem. More than that, we have an engineering problem involving human mortality. Which means we have an engineering problem involving liability and statistics. It's going to come down to inputs and outputs, of which I know neither. But I think I know a little more than you, so let me expound upon the shape of the problem as I understand it:

So a Google car relies on three things: LIDAR, vehicular telemetry and a phatty, phatty phatty GIS database. Based on this post we know that Google does not unleash a car on a road that it hasn't mapped in 3d space down to the INCH. Let that sink in for a minute: Google knows the road so well it can detect a chipmunk. It can probably detect a tin of Carmex. It knows about that Camel Lights hardpack box you threw out the window. In fact, if there was a Google car in front of you and a Google car behind you when you threw it out the window, it knows YOU threw it out. So there's an ethical issue to discuss.

Google also knows the telemetry of every Google car that has driven that road, ever. It knows the deviation from its normative map as recorded by that Google car's LIDAR. It knows the tire traction, the ambient temperature, the lateral acceleration and speed of every Google car to ever go around the corner. And it not only knows the speed limit on the Blue Ridge Parkway, it knows when it changes due to road conditions.

Google also knows everyone who lives around that tunnel, and probably some of the people vacationing near it. Google likely knows that your family rented a Winnebago, it knows that you're a quarter mile up the river, and it knows that you have an eight-year-old daughter. Google can not only see a chipmunk in the road, it can predict that your eight-year-old has a possibility of jumping in front of your car as you round the bend. There's another ethical issue to discuss.

So let's talk about your "oh shit" situation. The car's not going to violate the law. The car is not going to exceed safe road conditions. The car is not going to outdrive its brakes. The car can tell the difference between a log and a person, between a deer and a six-year-old. So if the car has passed all these checks and still finds an error in its programming (which is what a soon-to-be-dead eight-year-old is, when you get right down to it), it's certainly going to file a bug report. Which means the next time a Google car goes around that corner, it'll probably come at the tunnel slower.

But that's just liability to Google's customers. Chances are good that since road conditions put Google in a position of liability, Google is going to sue the highway department for unintentional tort and get the speed limit reduced. Google is going to raise the issue of highway safety at that corner and get a fence put up to keep campers from wandering onto the road. And Google is going to point out that its vehicle was obeying every aspect of the law and driving as safely as a human would, and that accidents happen.

But really, the scenario that even brings all this up is basically someone lunging out in front of an autonomous vehicle with the deliberate intent of getting hit. Which is suicide, which also doesn't fault Google.

What's the car gonna do? The car is going to consider an impermanent hazard in its path to weigh less than a permanent hazard not in its path, and it's gonna hit it. It's gonna put on the brakes, it's gonna try to get around the hazard, but it's gonna hit it. Same as if the girl were a deer. Right?
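For what it's worth, that weighting is easy to make concrete. Here's a toy sketch in Python; the Hazard and Maneuver classes and the numeric weights are made up for illustration and have nothing to do with Google's actual planner. The point is just that a legal, within-limits trajectory that hits the transient obstacle scores better than one that hits the tunnel wall.

```python
# Toy sketch of the hazard weighting described above.
# Everything here -- Hazard, Maneuver, the numeric weights -- is invented
# for illustration; it is not Google's code or anything like it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hazard:
    name: str
    permanent: bool   # tunnel abutment or guardrail vs. something that wandered into the road

@dataclass
class Maneuver:
    name: str
    legal: bool            # stays within the speed limit and lane rules
    within_limits: bool    # doesn't outdrive the brakes or the tires
    hazards_hit: List[Hazard] = field(default_factory=list)

def cost(m: Maneuver) -> float:
    """Score a candidate trajectory; lower is better."""
    if not (m.legal and m.within_limits):
        return float("inf")   # the car won't violate the law or exceed safe conditions
    # A permanent hazard off the current path weighs more than an
    # impermanent one that just stepped into the path.
    return sum(100.0 if h.permanent else 10.0 for h in m.hazards_hit)

obstacle = Hazard("transient obstacle in lane", permanent=False)
wall = Hazard("tunnel abutment", permanent=True)

options = [
    Maneuver("brake hard, stay in lane", legal=True, within_limits=True, hazards_hit=[obstacle]),
    Maneuver("swerve into the abutment", legal=True, within_limits=True, hazards_hit=[wall]),
]
print(min(options, key=cost).name)   # -> brake hard, stay in lane
```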
There's another ethical issue to discuss - the only real one to come out of this whole discussion. In this tiny, moot corner case, Google is saddled with the task of identifying a human as a human and responding to it differently than to a deer. But in order to do that, we need to know how and if Google can tell the difference between a human and a deer with LIDAR. Hell, we need to know if and how Google can tell the difference between a person and a mannequin. And I'm willing to bet Google isn't interested in having that discussion. Which is okay for our purposes, because the author of this very-not-good article didn't even think to ask it.

So here's the ethical issue at the heart of this: how much responsibility does Google have toward road hazards that are in violation of the law? More than a human driver? Less? That's what we're supposed to be discussing. Any court in the land will say "the same" and move on.

There were some kids in my town who decided it would be funny to attach a scarecrow to an overhanging branch on Halloween. A car would come around a blind corner and they'd throw it down to dangle over the road from a noose around its neck. Ha ha. People swerved. Ha ha. People cussed. Not so ha ha. One of them wrecked and had to go to the hospital. Yer damn skippy the kids were charged with reckless endangerment.

A human might be better able to distinguish a scarecrow on a rope from a real live person than a Google car will be. One thing about the Google car, though - the actions it takes will be scripted by someone calm, rather than someone trying not to run over a suicide victim.