I didn't even think of this philosophical concern when considering self-driving cars. While I think the circumstance would be rare, when you consider that large transport trucks will probably be among the first automated vehicles like this, it seems like a valid concern. Should these cars/trucks follow Asimov's laws of robotics? Or should they not be as complex, and just have an "avoid all accidents at any cost" paradigm?
I've been meaning to ask you about that thread. Ethical discussions like the linked article tend to pop up regularly while researching automated vehicles. Could you see if I'm missing something in understanding your line of reasoning? Using the trolley problem as a way to think about the safety issues surrounding self-driving vehicles is inherently useless, because it reduces the complex, real world into a simplified scenario (A or B, kill you or kill me); engineers will just try to reduce all possible risk and anticipate situations that the car can't get itself out of.

I think a much more interesting conversation is how much risk we accept from an automated car. For example, the British press is going bananas over a rollercoaster accident last week. Never mind the fact that rollercoasters are incredibly well-engineered and much, much safer than any other transportation method (and that it was very likely a human error), a lot of people are now much more hesitant to step into a rollercoaster. Even if a fully automated car is 100 times safer than driving yourself, it will still have accidents and it will still remove power from people. While a super-safe network of automated vehicles is a great goal, the transition towards it won't be silky smooth, I fear.
You're describing the dirty bomb problem. To recap, a dirty bomb is a bunch of radioactive shit put somewhere that it will freak people out. The freakoutitude of most people associated with radioactivity is off-the-hook bad; an excellent example of the possible effects can be found in Brazil.

- 93 grams of radiology-grade cesium-137, stolen by scrap metal thieves
- contaminated scrap metal sold to a wholesaler
- thief's 6-year-old daughter painted herself up with enough glowing Cs-137 to kill her dead within a month
- thieves begin to think oh shit maybe we shouldn't have pried out that glowing blue shit we didn't understand
- total fatalities: 4
- total cases of radiation sickness treated: 20
- number of radiation screenings performed: 112,000

I can't find a good estimate of the Goiânia costs, but they were scraping topsoil, demolishing buildings, all sorts of oh-shit remediation the likes of which reminds one of anthrax island. It was a major infrastructure clusterfuck and a stupendous effort, wholly outsized when one considers the actual area denial and health impact. And that's because dirty bombs spook the shit out of people.

Here's the trick, though - they spook the shit out of people once. Ask any expert and he'll tell you that once people acclimatize to the fundamental danger level, as opposed to the hyperbolic danger level, people stop caring nearly as much. The first dirty bomb is gonna be hell on the economy. The second one? Not so much. Kiev is just another city. Radiation is just another hazard. People put up with contaminated air, contaminated groundwater, you name it.

Cars had a rough time initially. A horse-drawn culture wasn't ready for them. They were spectacles. Fast forward thirty years and they're commonplace. The step between "driven car" and "driverless car" is substantially less than the step between "horse" and "car" and we managed that just fine.
And that is my whole problem with the issue - we're seeing this from the point of view of "the car hits someone". But we're not willing to acknowledge that if the car functions correctly, literally the only way for that car to harm a human is for the human to put themselves in harm's way.
Yup. I'm not sure why this question is so popular when automated cars will always make sure they're driving safely. I do wonder what they'll do when faced with a bunch of people on a sidewalk or walking on the side of the road, though. Will the car calculate how fast a human could theoretically jump out in front of it, and drive assuming that that can happen at any time? How will it work in large cities where there may be a lot of people on the sidewalk at any given time? I guess just drive slowly enough that it can stop within a very short distance whenever it needs to.
In general, speed limits are chosen such that vehicles meeting safety standards can stop or otherwise maneuver away from hazards. This is why you can be cited for reckless driving if you're obeying the speed limit during inclement weather - most limits are based on safety. "People jumping from the sidewalk" is the sort of thing that is covered under this unless you're dealing with crazy or suicidal people. Driverless cars will necessarily obey the rules of the road, and won't be licensed if they can't safely do that. Anything outside the regime of legality becomes the fault of whoever broke the law, and an autonomous car won't break the law. Bet on it.
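To put rough numbers on "drive slowly enough that it can stop in time", here's a back-of-the-envelope sketch of stopping distance versus speed. The reaction time and deceleration figures are assumed values for illustration, not anything from a real planner or regulation.

```python
# Back-of-the-envelope stopping-distance model (illustrative only; a real
# planner would use detailed vehicle, road, and friction models).

def stopping_distance_m(speed_kmh: float,
                        reaction_time_s: float = 0.2,    # assumed sensing/compute latency
                        decel_ms2: float = 7.0) -> float:  # assumed hard braking, dry road
    """Distance travelled from hazard detection to standstill, in meters."""
    v = speed_kmh / 3.6                        # km/h -> m/s
    return v * reaction_time_s + v ** 2 / (2 * decel_ms2)

for kmh in (20, 30, 50, 100):
    print(f"{kmh:>3} km/h -> ~{stopping_distance_m(kmh):.1f} m to stop")
```

Under those assumptions the stopping distance grows from roughly 3 m at 20 km/h to over 60 m at 100 km/h, which is the basic reason speed limits scale with how predictable the surroundings are.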
Morality has no place in engineering. Morality has a place in the application of engineering. There is nothing inherently evil about chemical weapons - if a stockpile of sarin gas is what it takes to keep a maniacal despot from committing genocide against minorities, then the stockpile of sarin gas is being used in a moral way. Using that sarin gas, on the other hand, is almost always going to be an immoral act.

Caselaw is never about morality. It's always about culpability. And that is why "fuck everything about this entire line of questioning" - it replaces culpability with morality and goes "holy fuck! there's no moral framework here!" Fuckin' A there's no moral framework here. There's no morality to bulldozers, there's no morality to slip'n'slides, there's no morality to taxi services, there's no morality to take-out food. There's culpability, and when you insist that we now need to come up with a whole new way to understand a tool just because you can't wrap your head around it, I'm not only entitled to call you on it, I'm entitled to do so in a snarky tone of voice.
Yeah, I guess I was specifically thinking of the suicidal person scenario. I'm sure there will be nauseating amounts of research dedicated to safety, and I'm sure that the computer-controlled car will be safer than a human-controlled one any day. Given all that though, I guess the question given in the OP comes down to "be as safe as possible to people external to the car, but save the people inside the car at all costs". That's probably what I'd go with.
One, I will never, ever buy a self-driving car for as long as I'm able to drive myself. Two, if I ever do buy a self-driving car and it comes down to a choice between me and a bus of pre-schoolers, that car better throw my ass off a cliff. I don't think I could force myself to do that, just because our will to live is hard to override. However, if it's not in my hands, I think it'd be the socially responsible thing to do. Three, you mentioned an "avoid all accidents at any cost" paradigm, OP, but I didn't see that in the article. Unfortunately with driving, there's no such thing, automated cars or not. So many things outside of anyone's control can happen to cause accidents. The only way to avoid any and all accidents is to never get in a car.
I'd totally buy a self-driving car. Those things are going to completely change America - imagine if that hour and a half you spend every day driving to work could be spent chilling in the back working on spreadsheets? Watching Netflix? Knitting? I really enjoy driving, but there are big pieces of "driving" that aren't at all enjoyable.

The busload of preschoolers is safe for the simple reason that an autonomous driving system isn't going to drive if it fails any of its pre-flight checks. That's something the computer has a leg up on - it's not going to think you're okay to drive after three drinks, it knows that the brake pads should have been replaced six months ago, and based on the feedback it gets from using the brakes, it knows your tires aren't up to this rainstorm. Negligent homicide is going to plummet, and autonomous cars can't commit willful acts.
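Just to sketch what "fails any of its pre-flight checks" could mean in practice - the check names and thresholds below are invented for illustration and don't come from any real vehicle system:

```python
# Purely illustrative pre-drive go/no-go check in the spirit described above.

def pre_drive_checks(brake_pad_mm: float, tire_tread_mm: float,
                     sensors_ok: bool, rain_detected: bool) -> bool:
    """Refuse to drive unless every safety-relevant check passes."""
    if brake_pad_mm < 3.0:                       # assumed wear threshold: no-go
        return False
    if tire_tread_mm < 1.6:                      # below common legal tread depth: no-go
        return False
    if rain_detected and tire_tread_mm < 3.0:    # marginal tires in the wet: no-go
        return False
    return sensors_ok

print(pre_drive_checks(brake_pad_mm=2.5, tire_tread_mm=4.0,
                       sensors_ok=True, rain_detected=False))  # False: worn pads
```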
Re: two - an interesting thing here is that you presumably have some sort of implied contract (it will soon become obvious that I don't know anything about this) with the vehicle manufacturer in which your safety while using their product is a significant issue. Would the manufacturer have that same level of agreement with the general population? Presumably they are still liable to prosecution if their vehicle causes harm to anyone, but in terms of programming and contractual obligation, I wonder if it's 'customer first'? Edit: minor correction
The autonomous system vendor will use a whole bunch of parts that are certified for use on US roads by the Department of Transportation, just like seat belts, just like tires, just like brake fluid. It's funny how nobody wrings their hands over cruise control, yet autonomous vehicles are just a matter of degree beyond it.
Although... Could a company reduce their obligation to you via a EULA?
To me, this is the interesting part of the debate - not 'what will my car decide / be programmed to do?', but 'who will be legally responsible for the consequences of that decision?' I can't imagine a company attempting to bring an autonomous vehicle to market if there is any chance that they bear all responsibility for potential death and / or injury. So, if you are the only person in the vehicle, will you still be the legally designated driver even if you're spending the journey browsing on your iPad and at no point were you involved in the driving process? It's when the lawyers get involved in this that I think it's going to get interesting.
And again, 'who is my autonomous car going to kill today?' is the 'cool' end of the conversation, but I haven't seen a lot of people talking about what autonomous vehicles might mean once they're here. So, imagine when all vehicles are autonomous and traffic management itself is about the traffic system managing the movements of all cars.

Now imagine I'm wealthy and the rest of hubski isn't. I subscribe to a premium service called 'Get Me There On Time', which means that I get automatic preferential traffic treatment. Your cars will merge out of the passing lane and slow down when my car catches up to yours. Doesn't matter how late you are, you're a standard subscriber and I am not. Or imagine that as a premium subscriber, my journey is going to clash with the journey of an autonomous ambulance going into a poor neighbourhood. Who wins? Or, I'm a platinum premium subscriber, and one of my benefits is that on certain classes of suburban roads, all traffic will come to a stop as I pass, as long as the delay caused to others is calculated as being no more than an average of, say, 3 minutes. This is because, as a platinum premium customer, my status is immediately obvious because my car is moving and yours isn't.

There are some interesting possibilities and consequences wrapped up in this technology.
Okay, let's talk about this.

1) Google introduces a new tier of service: Google Diamond. Google Diamond works by deprecating the performance of Google Standard allowing subscribers a 6 minute edge in commutes in certain areas.

2) A million hungry lawyers launch class actions against Google for deprecating the performance of an existing product.

3) The press roasts Google alive for hobbling traffic nationwide in favor of the wealthy.

4) Delphi, sensing an in, advertises one level of performance priced in between Google Standard and Google Diamond. It sucks worse than both of them but they get a windfall because everyone is so pissed at Google.

5) Every company using Google threatens to embargo Google unless they retire Google Diamond.

It'd never get to Step (1), though, because Google isn't stupid. This is like RyanAir threatening to install pay toilets on their planes - it's such a painfully stupid idea that the only reason you would ever even mention it is to get press. There is so much legally and financially wrong with the idea that I can't see it ever making it out of marketing.
By the way, reading your other comments / threads leads me to think you're much better informed about this than I am, and your objections to my speculation are on target. It's been interesting to read the way you pull apart the essential problems. Just wanted to say I appreciate your contributions.
You could well be right, but when I imagine this happening, I don't see it as 'it will be everywhere, all at once', but instead 'it might be governing jurisdiction by governing jurisdiction.' I don't know at what level traffic management is governed in the US - is it at the state level? If so, I could imagine states like California or Nevada, which I understand are litigation-friendly to corporations (could be wrong about this), as places where 'automated traffic diversification' (can't wait to see this turn up in a serious article somewhere) could have more legs than elsewhere. Ultimately, it's about the potential value of a new market versus, as you say, the likely cost of exploiting it. In a state willing to pass favourable legislation, with enough cars to make it worthwhile, I could see serious thought given to it, if the technology was there to support it.

And my last thought is: is there anything about our current use of roads and cars and traffic management that accords us any specific rights to a standard of service beyond, perhaps, safety? A couple of nights ago I spent 10 minutes in a traffic jam caused by roadworks - at no point did it occur to me that I had any legal right that was potentially being compromised. If an ambulance had been caught in that same traffic jam and a patient had died 6 minutes before getting to a life support unit in a hospital, who would / could the family sue? Maybe there's already legislation / case law that covers these situations...

Or another example - when one of the big events comes to a city - the Olympics, or the G8 or whatever - and roads are shut down and priority routes are isolated away from normal traffic, causing chaos for ordinary residents, we are basically hard-wiring the kind of preferential treatment I'm talking about, and at a MUCH more intrusive level. Imagine a company's selling point being: we will seamlessly manage traffic around your special and extranormal events; in return you licence us the right to monetise traffic management at a subscriber level.

You're probably right that it will never happen, but I believe some people will be giving it serious thought as the technological potential approaches.
So the legal authority for roads is different than the legal authority for motor vehicles. The legal authority for operating motor vehicles is different than the legal authority for selling motor vehicles. This is why every time you register a car in a state for the first time it has to pass a state-specific safety inspection, but why the standard and mandatory equipment of vehicles is dictated by the NHTSA and DOT. General rule of thumb: the tallest rules govern. So New Mexico, which pretty much requires cars to have a mirror and a horn, has vehicles that are 50-state legal because cars legal for sale in one state must be legal for sale in all states. Does that make sense?

An autonomous driving system is vehicular equipment, which means its legality and governance will be covered by the DOT. Note that certain states have passed legislation making operation of autonomous vehicles legal; that's very different than sales. Sales will be governed by the same body that does seat belts, airbags, brakes, etc. So far so good?

Adding onto that: the discussion isn't "legality" so much as "contract liability." If you buy a self-driving car that will get you there the fastest way legal, and it suddenly becomes a car that will get you there "the fastest way you can afford", you're going to run afoul of price gouging statutes - after all, you bought one thing, you're getting another, and that's intentional tort. And there's no way to get around it. If Google Carbon and Google Diamond are on the same roads, obeying the same laws, and Google Carbon has to deprecate its service in order for Google Diamond to serve its customers, then Delphi Drive has a sales advantage over both because it's got one universal level of service. And since Google Diamond has no ability to enforce its deprecation on Delphi Drive, nobody buys Google Carbon, because you're suddenly the only loser that can't get to work on time. Simply offering two tiers also invites congressional inquiry into your methods... and Google don't want that.

You're correct: you have no "right" to speedy traffic. However, you absolutely have a "right" to a product you paid for.

By the way: life safety is a non-issue. 911 gets to say "there's an ambulance coming through" and every self-driving car slows down and pulls over before you even hear it. G8? Life safety. Whole 'nuther issue than "my car is faster than yours 'cuz I pay more per month."
There actually was a two-tiered system for a brief time in the early 2000s. California had recently passed stricter emissions standards, so all of a sudden the big three had a choice: make different cars for CA or make all their cars cleaner. They chose to temporarily make different cars for sale in CA while suing the state in court under the Commerce Clause (because option three, don't sell cars in CA, was a nonstarter due to the gigantic market there). Meanwhile, a bunch of other states signed onto CA's standards, and the problem 'fixed' itself. It was an interesting but very temporary situation in the industry. Everyone knew it was temporary and that's the only way it occurred in the first place. Not that this is very relevant to the discussion at hand; just wanted to share a bit of history that I doubt many people are familiar with.
They had a new regime, the Zero Emission Vehicle program, that took effect in 2003 and applied to all new cars starting in MY 05. It created a huge legal mess in which the EPA and the auto industry jointly sued the state of CA over its standards. In the end, CA prevailed, if I'm not mistaken. (I only know this because I interviewed for a job as an emissions engineer for DaimlerChrysler in late 2005 - didn't get it, but still had to study up a bit.) But you're correct that it had nothing to do with the safety of the car.
Ah, okay, so it would be materially different, then, from a company being able to deliver a value-to-the-business individualised experience on its incoming telephone network because it owns that network, whereas in the example of cars they don't own the 'network' (i.e., the roads) and therefore can't prejudice the experience of those using the network. Is that at all close to the mark?
Look at it this way: you've got a system of interconnected players. There's "you", "other drivers on the same autonomous network", "other drivers on different autonomous networks", "other drivers not on autonomous networks", "life safety vehicles", "local jurisdiction", "state jurisdiction", "national jurisdiction," etc. The situation you're describing covers "you" and "other drivers on the same autonomous network." There are lots of players that aren't covered. They all have "rights" governed by their social contract and actual contracts through citizenship, licensure, etc. Let's come up with some players:

- Adam is Google SelfDrive Carbon (cheap)
- Bob is Google SelfDrive Diamond (expensive)
- Charlie is Delphi Autocruise (untiered)
- Dave is in a '77 Nova
- Elliott is a cop
- Fred is a long-haul trucker
- Google is a company too smart to get themselves in this mess, but bear with me.

Adam is going to work. Bob is also going to work, but Bob has the added advantage of fucking Adam over whenever he feels like it. This is likely to create seething resentment of Google by Adam, but we'll disregard that for a moment. So Bob is bombing down the interstate and Google tells Adam's car to pull over out of Bob's way. Adam is going to cut in front of Charlie, if he can. Charlie's car has accident avoidance. But is Google going to let Adam's car drive aggressively enough to risk an incident? If it can be proved that they did, Google can be sued by Delphi. Charlie moves over and gets in front of Dave. Dave isn't paying much attention and catches it late - he rear-ends Charlie. Delphi can still sue Google, but now Dave could sue both Google and Delphi. Fred was asleep - his truck is driving itself. It slams on the brakes and performs a precision panic maneuver to end up on the margin so that Charlie and Dave aren't street meat. Fred can likely be fired for being asleep. Fred's trucking company can sue Google and Delphi, and maybe Fred can sue his trucking company. Elliott watches this pigfuck of an operation and files a report. The highway patrol subpoenas Google's data and discovers that none of this shit would happen if Google wasn't favoring Bob. Meanwhile Bob has caused a pile-up simply for owning Google Diamond, which makes him a likely target of litigation, which adds to the existing caselaw against Google Diamond, so his insurance rates go up. Meanwhile, he's not actually any faster to work, since the only person he has power over in this entire pigfuck is Adam.

This all came about because Google chose not to drive the best they could in two separate instances solely so they could make a buck. There will be plenty of curious litigation associated with autonomous vehicles anyway - the cost/benefit analysis of Google sticking their neck out on this one just doesn't pan out. Neither will it pan out for anyone else - the acquisition cost for a network of the scale necessary is staggeringly high and you don't jeopardize its certification for penny-ante shit like this. And that's really the bottom line - because it's a network, rather than an individual car system, everyone has to be on the same page. Scribble on that page and it's scribbled for everyone. Remember how Audi had to virtually retreat from the US market because mmmmmmmmmaybe their gas pedals were getting stuck? This is like that, only voluntary. It won't happen. Not in any country on the planet. Autonomous networks will drive to the best of their legal abilities, period.
I wonder if this would ever be feasible in other markets? Japan, China, Europe, Russia etc. Edit: removed Hong Kong due to inclusion of China, and added Russia just because.
My personal theory is that transportation is going to become more and more customized and tailored as the years and the technology progress. In a sense, Google could use the same strategy with their self-driving car as they can with Android: build a proprietary software platform, license it to hardware manufacturers and let others come up with services to use it (apps). Imagine not owning a car, but instead summoning a car on an as-needed basis. The company who owns the car can offer you a range of cars from small hatchbacks to SUVs. It's similar to regular car rental but could be much more granular. If you really want the comfort of a Mercedes over a simple Chrysler, you can just pay more. Or if you want a red car. Or a car with [insert preference here]. Whatever company owns the car can retrofit some sensors and get it out on the road.

Your scenario of pay-to-be-faster depends on whether fully automated cars will be so-called 'connected cars' or not. Will they talk to each other? Car companies are really pushing for connected cars, partly because they produce so much data. But fully automated cars can totally work without being connected. There are added benefits like the ones you described, but connecting all cars to each other could be an insurmountable challenge, or just not worth the added cost.
Google is going to lease their dataset and bundle a precertified constellation of sensors and servos as a VAR. They're going to be Delphi. Their contracts are going to require two way communications such that Google gets to retread and revise their master database everywhere their cars travel. The Google-Car-As-Uber discussion is wholly separate; the actual hardware won't be any different, just the ownership. Zipcars are well-established and legally uninteresting and I can totally see Google operating a vast fleet of autonomous vehicles at a loss in order to increase the penetration and acceptance of their offerings in order to drive B2B sales. Come to think of it, that's a good reason to short sell Uber and Lyft - there isn't a single market that has survived once Google decides to enter it. Google has a solid and logical reason to get into car hire and they will beat Uber and Lyft by inspection. And I think Google could structure their services in such a way that they couldn't be barred from use at airports the way Uber and Lyft are... there might be an in using membership modeling that would make them livery rather than for-hire. But that's well beyond my expertise.
This is great. Everyone assumes that with these changes and advanced technologies, our human nature changes and advances too. -not the case. However, where in the public sector do we see things like "platinum" status? I'm not saying it doesn't exist, but I can't think of it. The roads aren't going to be privatized. However, the maps are. At what point does google maps become a utility that is somehow regulated by the state? I don't recall the post, but at some point kleinbl00 made the point that Google will eventually own the digital toll booths of the world. -powerful stuff.
Bing is at least giving it a crack.
There are pay roads in California, at least, designed to save you commuting time. Important distinction: those roads in no way interfere with the commutes of people not on them. And yes, they're privatized. Google Maps becomes a regulated utility the minute it's used in operating a vehicle on a public road. At that point the entire system needs to be certified by the DOT, just like everything else that goes on a car. Here's the conversation you are looking for: In my opinion, the issues discussed on that page are a hundred times more important than yet another hack journalist misapplying the trolley problem to a subject they don't understand.
Can't think of anything in the public sector, but the private sector of course puts pressure on governments to privatise public services and assets. The platinum service idea itself is already here - when you call your cable provider, the fact that you're talking to someone in another country who is obviously just following prompts that appear on their screen (i.e., they have no specific training in the transaction you have contacted them about) is taking place for 1 of 3 (among several other) reasons:

1) you weren't identifiable when you called in (e.g. you called from your work phone to talk about your home service)

2) you identified by your selections that you are calling to perform a revenue-neutral or revenue-negative transaction (you want to complain, you want to ask a question about your bill, etc.)

3) in real time it was determined that you are a low-value customer to the company

So, matching the overall value of customers to a particular customer service experience already exists, to the level where priority treatment in incoming call queues can be given based on your value to that company. Worth a lot of money? You're definitely going to be answered next, regardless of whether others have been waiting longer. So, what I'm talking about above with autonomous cars and traffic management just pushes that idea to one of commercialising a currently latent market - once traffic can be truly centrally controlled down to the individual vehicle, you have a potential market for 'diversification of the individual traffic experience™.'
It's not interesting, though. Responsibility will go to the responsible party. Your mother's GoogleCar spins out on ice and hits a tree. She wasn't driving; she was a passenger. Who's at fault?

- Should Google have detected ice but didn't? Google is responsible.
- Were the conditions atypically icy due to a sprinkler left running, for example? Then the owner of the sprinkler is responsible.
- Did your mother tamper with the instrumentation of the Google Car in any way? Then your mother is responsible.
- Was the road constructed in such a way that it is abnormally dangerous in icy conditions? Then the highway department is responsible.

Seat belt manufacturers aren't liable if you die in a crash and you were wearing your seat belt. Seat belt manufacturers are liable if their product doesn't conform to the standards set forth in certification by the DOT. It really is this simple. It's why these discussions bug me.

There's a reason Google is going fully autonomous first: they don't have to worry about being blamed for a bunch of chuckleheads that decide to take the wheel. They get blamed for the things they're responsible for, and it's stupid simple to keep their liability in a regime where there's almost nothing that ends up as their fault.
What I meant by "avoid all accidents" was that in the hypothetical situation given (drive you into a bus of kids or into a wall), the AI makes its decision based on whatever results in the least harm/damage/risk, without weighing any one human life above any number of others.
As a software developer, this was one of the very first questions that occurred to me when considering self-driving cars. First off, you simply can't "avoid all accidents at any cost". Accidents will happen, and the car has to be programmed to do something when they are about to, even if that something is just shutting down the car's autonomous systems. The way computers work, you must at some point reach a line of code where the car decides whether it goes left into a pole (killing you) or right into the crowd of people (saving you, but killing them). For the time being at least, some human being has to write that code. That means that ultimately some software developer is going to be the one making that choice. If I were that developer, the only sane choice would be to minimize the total loss of life. I've always thought that deontology was mostly bullshit anyway, but in this case a real person, not the car, is going to have to make an active decision one way or the other.
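To make that point concrete, here's a deliberately simplified sketch of what that "someone has to write this line" decision point might look like if the developer went with "minimize total loss of life". Every name, number, and the harm model itself is invented for illustration; a real system would rely on far richer state estimation and validated risk models.

```python
# Hypothetical sketch of the emergency-maneuver decision point described above.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float   # estimated deaths across everyone involved (made-up numbers)
    expected_injuries: float     # estimated non-fatal injuries (made-up numbers)

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option minimizing total expected loss of life, then injuries."""
    return min(options, key=lambda m: (m.expected_casualties, m.expected_injuries))

options = [
    Maneuver("swerve left into pole", expected_casualties=0.9, expected_injuries=1.0),
    Maneuver("swerve right into crowd", expected_casualties=2.5, expected_injuries=4.0),
    Maneuver("brake hard in lane", expected_casualties=0.3, expected_injuries=1.5),
]
print(choose_maneuver(options).name)   # -> "brake hard in lane"
```

The ethically loaded part isn't the `min()` call, it's whoever decides what the estimates are and whether occupants and bystanders count the same.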
Turning off the autonomous systems in an emergency situation is probably the worst decision possible. Human reaction times are orders of magnitude slower than a computer's, provided the human is even paying any attention to the road when the emergency situation happens. I probably wouldn't be, as I'd trust the car much more than I would trust myself. Your example situation neglects to mention why the car would be going what would have to be at least 100 km per hour in that situation. If it was going at a safe speed of, say, 35 km/h (as there are potential hazards around, namely the crowd of people near the road), it would slam on the brakes and not hit anything. Thinking about how to get out of these situations is a flawed plan. Thinking about how to avoid getting into these situations in the first place is a much better one. The easiest way being to slow down when there are hazards close to the car.
Of course turning off autonomous systems would be the worst decision possible. My point was simply that the developer must make some decision about what to do. Your thinking is naively optimistic. Yes, the system will do its best to avoid undesirable situations, but unless you reduce the speed of the vehicle to a point where the vehicle's stopping distance is effectively 0 any time there is a non-autonomous mobile object (or a place where such an object might be hiding) in view, the system will need a contingency plan. A system designed to be perfectly safe is also going to be constrained to approximately walking pace any time it's operating inside an urban area.
It wouldn't need zero stopping distance, but rather a low enough stopping distance that the risk of death is very low. Hitting someone at 20 km/h doesn't carry very much risk, I think. 30 km/h with half a meter of clearance to the sides and 2 meters in front, combined with fast reaction times, is probably enough to avoid the vast majority of deaths in emergency situations.
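As a rough sanity check on those numbers (the 7 m/s² emergency deceleration is an assumed figure for dry pavement, and reaction time is ignored, so this is purely illustrative):

```python
import math

def impact_speed_kmh(speed_kmh: float, gap_m: float, decel_ms2: float = 7.0) -> float:
    """Speed the car would still be carrying after braking across gap_m meters."""
    v0 = speed_kmh / 3.6                       # km/h -> m/s
    v_sq = v0 ** 2 - 2 * decel_ms2 * gap_m     # v^2 = v0^2 - 2*a*d
    return math.sqrt(max(0.0, v_sq)) * 3.6

# Someone steps out 2 m ahead of a car doing 30 km/h: it can't stop in time,
# but it only connects at roughly 23 km/h, which is usually survivable.
print(round(impact_speed_kmh(30, 2.0)))   # ~23
# At 20 km/h the same 2 m gap brings the impact down to about 6 km/h.
print(round(impact_speed_kmh(20, 2.0)))   # ~6
```

Under those assumptions, even when the car can't fully stop it sheds enough speed in the available gap that the residual impact is in the low-risk range, which is basically the argument for slowing down near pedestrians rather than optimizing the crash itself.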