Just over two years ago, I had a few discussions on here about self-driving cars, which led to me writing a blog post about Google's car, which in turn led to an article of mine getting published on a Dutch platform called De Correspondent. (It's basically Pando, but better.)
I'd say writing that article is what solidified my interest in the subject. Since then, I've written my bachelor's thesis on it, have been invited to a few meetups, and have started my master's degree at the university with the best experts in the field. Next year I'll likely do my master's thesis with one of those experts.
This summer I decided to spend my free time researching one of the aspects of self-driving cars that I found difficult to grasp: machine ethics. Much inspired by kleinbl00's rant, I wanted to delve deeper into the topic and explore just why the trolley problem and the issue of machine ethics for self-driving cars kept coming up.
De Correspondent: "Does your car need a conscience?"
It's in Dutch, but a proper English translation is in the works. Google Translate works a-okay, even if the result reads like it was written by a caveman.
________________________
My argument roughly goes as follows. First, I present an example of the kind of dilemma you so often find in articles about self-driving cars (AVs) and ethics. An AV is driving down a twisty mountain road when a girl crosses the street, trips, and falls. The car comes around a blind corner at that same moment. It can save the girl, but only by driving you, its passenger, off the cliff to a certain death.
Hypothetical dilemmas like this are fundamentally misleading. Traffic doesn't work like that. First of all, difficult traffic scenarios are much more complex, with many more factors coming into play. The weather, people, and surroundings are inconsistent and can be unpredictable. The car needs to drive in a complex traffic system. What such a dilemma does is strip away all the parts that make up that system so that only the moral choice remains. It essentially assumes nothing else matters, which is not the case at all (no matter what James Hetfield would like you to believe).
So we need cars to drive in this complex traffic system, and we need the end result to be ethically justified. Does that mean the car itself needs to reason ethically? Ideally, you'd have it make a moral assessment of the value of each life around it. In practice this is impossible: you can't get all the information you'd need (e.g. the age of the people around you) without serious privacy breaches. And even if you had that information, there is no reliable, generally agreed-upon way to weigh those factors.
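To make that last point concrete, here's a deliberately naive sketch of what such a weighing function would have to look like. Every factor and weight in it is an arbitrary invention of mine, not something from the article or any real system, and that arbitrariness is precisely the point:

```python
# A deliberately naive "value of each life" function. Every factor and
# weight below is arbitrary, which is exactly the problem.
def value_of_life(age: int, is_pedestrian: bool) -> float:
    weight = 1.0
    if age < 18:
        weight *= 2.0   # children count double? Says who?
    elif age > 75:
        weight *= 0.5   # the elderly count half? Good luck defending that.
    if is_pedestrian:
        weight *= 1.5   # pedestrians over passengers? Also debatable.
    return weight

print(value_of_life(age=8, is_pedestrian=True))    # 3.0
print(value_of_life(age=80, is_pedestrian=False))  # 0.5
```

Two "reasonable" parameterizations of this function will disagree about who to save, and the car can't even gather the inputs without spying on everyone around it.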
Moral rule-based solutions have also been put forward. These are just as difficult to get right, because the traffic system is so complex. For every moral rule you can think of, there is some imaginable situation that forms an exception to it. And those exceptions have exceptions of their own. We humans use common sense and moral intuition to know when to follow a rule and when to break it; machines have neither.
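A minimal sketch of where the rule-based approach leads. The rule, the exceptions, and the `Situation` fields are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    obstacle_ahead: bool
    oncoming_traffic: bool
    obstacle_is_ambulance: bool
    oncoming_is_far_away: bool

def may_cross_solid_line(s: Situation) -> bool:
    # Rule: never cross a solid line.
    if not s.obstacle_ahead:
        return False
    # Exception: unless an obstacle is blocking your lane...
    if not s.oncoming_traffic:
        return True
    # Exception to the exception: ...but not into oncoming traffic...
    if s.obstacle_is_ambulance and s.oncoming_is_far_away:
        # ...unless the obstacle is a stopped ambulance and the oncoming
        # car is still far away... and so on, forever.
        return True
    return False
```

Every exception you add spawns new edge cases; the list keeps growing without ever being complete.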
So then what can an AV do? Well, at its core is a fairly simple process. First, it measures its surroundings. Then, it tries to classify and interpret those measurements (e.g. that tree-like object is probably a tree). It then tries to find out what is moving and will try to predict where those moving objects will go. Finally, it will plan its path where the main goal is to avoid hitting any of the objects.
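As a toy illustration of steps 3 and 4, here's a minimal sketch in Python. All names and numbers are mine, and real planners are vastly more sophisticated:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    x: float   # position along the road, in meters
    vx: float  # estimated velocity, in m/s

def predict(obstacles: list[Obstacle], horizon: float) -> list[float]:
    """Step 3: a constant-velocity guess at where each obstacle will be."""
    return [o.x + o.vx * horizon for o in obstacles]

def plan_speed(car_x: float, car_v: float, predicted: list[float],
               safety_gap: float = 10.0, horizon: float = 2.0) -> float:
    """Step 4: choose a speed that keeps a safe gap to the nearest obstacle.

    Note what is NOT in here: no weighing of anyone's moral worth,
    just "don't hit anything".
    """
    ahead = [p for p in predicted if p > car_x]
    if not ahead:
        return car_v  # nothing in our path, carry on
    usable_gap = min(ahead) - car_x - safety_gap
    return max(0.0, min(car_v, usable_gap / horizon))

# Steps 1 and 2 (measuring and classifying) are assumed to have already
# produced this list of tracked objects:
obstacles = [Obstacle(x=50.0, vx=-2.0)]  # something 50 m ahead, approaching
new_speed = plan_speed(car_x=0.0, car_v=20.0,
                       predicted=predict(obstacles, horizon=2.0))
print(new_speed)  # 18.0 - the car slows down to keep its distance
```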
Note how there is no ethical step in this process. That's because an AV doesn't need a conscience to produce ethically sound results. It can do just fine with those four steps, and engineers should focus on making those steps better. As the car becomes more and more reliable, it will be able to detect, predict, and drive around nearly all dangers in its path.
That doesn't mean it will be perfect. The real question, then, is who to hold responsible for getting the car to drive ethically. The recent fatal Tesla accident shows how difficult this can be. On the one hand, the driver wasn't paying attention and was driving too fast for the road. On the other hand, the Tesla didn't see a truck coming, largely because it relies on only two sensors for detecting dangers and both failed.
Besides Tesla and the driver, more parties share some of the responsibility, precisely because of the complex environment a car has to drive in. Think of the NHTSA, which allowed the Autopilot system onto the road. Or Consumer Reports, slamming Tesla to protect consumers. In AV accidents, liability cannot reasonably be pinned on a single party. There is a division of blame, and we need to think about how to balance it properly. If we wait for companies to strike that balance, they can tip it in their favor. How this should be done is not yet clear, but it is in our own interest to start thinking about it now, while we can still steer the technology in the right direction.
________________________
This post is partly to share what I've done, partly to thank you guys for motivating me. Thanks.
(It's also partly to share the amazing artwork they made for the article. Like, damn, it fits so perfectly.)
Thanks. I offered to translate it myself first, but they preferred to put their own people on it. (And the NHTSA part is probably not entirely accurate - over here, the Dutch equivalent of the NHTSA road-approved the Tesla for the whole of Europe. But it is safe to say that Tesla isn't being kept off the road by transportation authorities, which is the point I was trying to make.)
Even when it comes to ethical problems, people forget one thing: they don't HAVE to be perfect. They just have to be better than us. Spoiler alert: they already are.
They have to be better - and then some. An ideal future would be one where automated vehicles do most of the transportation for us in such a way that it's as safe as, say, air travel. But to get there, you can't just present a safe AV and be done. They need to gain people's trust. They need to drive courteously and reliably. They shouldn't randomly try to kill their owners. I don't think merely being safe enough will cut it. Safety, especially traffic safety, is an abstract and distant concept until it's not and you're in the hospital. I recently rode in one of the first self-driving buses in existence, and it was a bit of a shaky ride, as the bus had a hard time accelerating and braking smoothly. Even though I knew that bus, with its own bus lane, hadn't been in a serious accident in ten years, my gut feeling was that it wasn't much safer than a regular bus.
One problem I have with every AV conversation is that people are trying to create a system that can drive in traffic better than humans can. Practically speaking, I think AVs are far more likely to travel their own routes rather than sit in traffic waiting for some granny trying to parallel park.

Consider this: When traffic got bad in Seattle (late 70's? early 80's?), the city decided to reduce the number of vehicles on the road by incentivizing people to carpool. More people in each car would equal fewer cars, right? So they built the Express Lanes (now HOV lanes), an extra lane added to the left side of every freeway specifically for carpoolers.

Consider this: When bus traffic got bottled up in downtown Seattle, the city tried something different: during rush hour, 3rd Ave is closed to all vehicles except buses. Now the buses get through downtown much faster.

Consider this: AVs could travel in much tighter packs, because they talk to each other (rough numbers sketched below). And when running in a communicating pack, they can hold a consistent speed, which eliminates the need to go fast.

Consider this crazy idea: What if we eliminated left turns? (In countries that drive on the right side of the road.) The left turn is the most dangerous maneuver in traffic, and the primary reason traffic lights exist. And if there are no left turns, the left lane can effectively become the "AV Lane", like the HOV lanes of yesteryear. This would allow MUCH dumber AVs.

Cities regularly build light rail, trams, electric (overhead-wire) buses, etc. All of these need a dedicated or semi-dedicated lane. Providing AVs with that kind of framework would greatly simplify many of the practical problems of integrating AVs into normal traffic flows. A car that drives like a human? Not likely in my lifetime. A framework within which autonomous vehicles can operate in parallel with regular traffic? Shit, we could do that next year, if we really wanted to.
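On the "tighter packs" point, a rough back-of-the-envelope sketch. The reaction-time and latency figures are my own assumptions, not measured values:

```python
# Back-of-the-envelope: following distance is roughly reaction time x speed.
# Both reaction-time numbers below are illustrative assumptions.
speed = 100 / 3.6        # 100 km/h in m/s (~27.8 m/s)

human_reaction = 1.5     # s, a common rule-of-thumb for human drivers
v2v_latency = 0.1        # s, assumed V2V message + braking actuation delay

print(f"human following distance: {human_reaction * speed:.0f} m")  # ~42 m
print(f"V2V following distance:   {v2v_latency * speed:.0f} m")     # ~3 m
```

An order of magnitude less road per car is what makes the dedicated AV lane so much more attractive than another general-purpose lane.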
Congratulations! Well-deserved, veen. One thing that comes to mind is that AVs can see things that people cannot. The kid's sneakers could carry a registered RFID tag that lets the car know she is in the area and approaching, long before the car needs to start slowing down. The Tesla truck accident wouldn't have happened if AVs were ubiquitous, because the truck and the car would have been talking to each other.