Alright. So it comes down to "engineering problems" vs. "philosophical problems." Engineers solve engineering problems all day long. Thing of it is, though, most of the solving is in formulating the question. Contrary to popular belief, an engineering degree is not a degree in solving story problems, it's a degree in writing them - and in writing the correct ones to give you the answers that matter. Philosophers don't really solve anything: they look for the unsolvable. The qualia of color and Agrippa's trilemma can keep philosophers going for decades. Some problems are indeed unsolvable: if you step halfway to the wall with every step, will you ever actually touch the wall?

This is never an issue until an engineering problem is framed as a philosophical problem, or a philosophical problem is framed as an engineering problem. An engineer will take a look at qualia and go "approximate green as green," look at the Münchhausen trilemma and say "test for repeatability," and be done with it. Will you ever reach the wall? Sure. The tolerance on your ability to stand that still is about a quarter of an inch. Philosophy suddenly becomes a story problem, and it's solvable.

My least favorite story of all time is "The Cold Equations" by Tom Godwin, supposedly about a stowaway on a spaceship facing certain death. Why does she face certain death? Because Tom Godwin wanted to write about a girl facing certain death. Great stories can be written this way: "Kaleidoscope" by Ray Bradbury deals with the problem, as does Heinlein with "The Long Watch." But both of those authors established as ground rules that death was inevitable - because of catastrophe and because of bravery, respectively. Godwin, on the other hand, argued that death was inevitable because the supply ship the stowaway was aboard was so precisely engineered that an extra 100 lbs meant the difference between total success and a fiery death for the pilot and the utter destruction of his life-saving medicine.
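The halfway-to-the-wall bit above is exactly the kind of "unsolvable" problem that evaporates the moment you write in a tolerance. A minimal sketch - the quarter-inch tolerance is from the text; the 120-inch starting distance is an assumption for illustration:

```python
# Zeno's wall, treated as an engineering problem: each step covers half
# the remaining distance, and you count as "touching" the wall once
# you're inside the tolerance of how still a person can stand.
distance = 120.0   # inches from the wall to start -- assumed for illustration
tolerance = 0.25   # the quarter-inch tolerance from the text

steps = 0
while distance > tolerance:
    distance /= 2.0  # half a step closer
    steps += 1

print(steps, distance)  # finitely many steps -- 9, not infinity
```

Infinite halvings become nine actual steps the instant "touch the wall" has a number attached to it.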
The issue with phrasing a philosophical problem as an engineering problem is that you are dismissing the universe. You are saying "it's not worth asking about solutions because there aren't any, I checked." Which forces the reader to poke at your problem and find all the unexamined issues. Which, if you're after a philosophical discussion, pisses off both parties - "any of that other rubbish" misses the philosophical issue of, in this example, autonomous cars choosing to kill people.

Autonomous cars are an engineering problem. Everyone wants to discuss the philosophical issues - everyone wants to talk about the "choices" an autonomous car will "make" and how that will endanger our lives and open up "dilemmas." And it's bullshit. It is the purest, simplest, most saccharine bullshit, because cars don't choose. Robots don't choose. Computers don't choose. Logic dictates operation based on inputs; the system is merely complex. But that doesn't even matter, because the issue here isn't what a computer will do, it's what you'd do: the basic premise of autonomous cars is that they have to perform at least as well as we do. So as soon as you swap yourself in for the car, the question, when posed as a binary (kill you, kill the kid), becomes offensive as fuck.

Look. Let's get real, shall we? Let's pick a real tunnel: Bluff Mountain Tunnel in Virginia. Here it is in Google Street View. It seems to fit the bill - a car and a kid can't both fit through it; swerve and you hit the wall. The speed limit through there is 45 mph, which is cruisin' fast enough that things are gonna get dicey. So let's take things at face value: swerve and hit a wall, or run over the kid? As a philosophical problem, it's clear-cut: the discussion is whether you're going to die or the kid is. As an engineering problem, it's anything but:

1) NHTSA safety standards require a car to protect its occupants in a 35 mph front-end collision with an immovable object. You tellin' me you can't bleed off 10 mph?
2) What the hell are you doing hauling ass so fast you can't stop anyway? If you are, you're breaking the law - and guaranteed, your autonomous car won't break the law.

3) I'll bet you could bleed off some speed on that grass. And those trees are gonna slow you a whole lot more gently than the cliff face.

4) You know, if you run over the kid's legs, she'll live.

5) You know, if you glance the side of the tunnel with your passenger side, you and the kid will both be fine even if you don't so much as tap the brakes.

And yeah - it's tough for you to make dispassionate decisions like this in the heat of the moment. But it isn't for an autonomous vehicle, because a real live human programmed all this stuff in before it ever rolled off the factory floor. It's thinking at 3 GHz and it has absolutely no adrenaline in the game. Besides which, the software is just responding to conditions the way it was programmed to respond by engineers who are dealing with real problems that they have to investigate and put numbers to and error-check and beta-test and otherwise suck all possible philosophy out of. It really isn't the lady or the tiger; it's "how much braking traction does the vehicle have in these road conditions, and is that adequate to address the limited sight distance of the blind corner ahead?" Honda's rolling that shit out next year.

There was a science fiction story called "The Cold Solution," written by one of a long line of pragmatic thinkers bugged to distraction by philosophers attempting to call engineering problems "unsolvable." In it, the pilot and the stowaway cut off their legs and pitch them out the airlock. Everybody lands safely, and the manufacturer of the space tug is sued for negligence.

The problem of autonomous cars is worth discussing. The problem of the "choices" they will have to make, as we wring our hands in panic about the Frankenstein we have unleashed on the countryside, is worth ignoring.
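The numbered objections above lean on nothing fancier than high-school kinematics (v² = u² − 2as). A back-of-the-envelope sketch - the 0.8 g braking figure is my assumption for hard braking on dry pavement, not an NHTSA number:

```python
# How much road does it take to bleed 45 mph down to 35 mph -- or down
# to zero -- under constant hard braking? v^2 = u^2 - 2*a*s, solved for s.
G = 9.81             # m/s^2
MPH_TO_MS = 0.44704  # meters per second per mph

def braking_distance_m(v_start_mph, v_end_mph=0.0, decel_g=0.8):
    """Distance in meters to slow from v_start to v_end at constant decel.
    0.8 g is an assumed figure for hard braking on dry asphalt."""
    v1 = v_start_mph * MPH_TO_MS
    v2 = v_end_mph * MPH_TO_MS
    return (v1 ** 2 - v2 ** 2) / (2 * decel_g * G)

bleed = braking_distance_m(45, 35)  # shed 10 mph: about 10 m of road
stop = braking_distance_m(45)       # full stop from 45 mph: about 26 m
print(bleed, stop)
```

Which is the point: if the sensors can see roughly 26 m into the bend, "is braking traction adequate for the sight distance ahead" is a number to check, not a dilemma to agonize over.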
Any time someone tells you there are only two choices, you know at least one thing: they haven't thought about the problem very hard. For reference, here's the framing that started all this:

"Consider this thought experiment: you are travelling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the centre of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. Both outcomes will certainly result in harm, and from an ethical perspective there is no 'correct' answer to this dilemma. The tunnel problem serves as a good thought experiment precisely because it is difficult to answer."