Setting aside that I don't think a car will ever have such a slider, giving people the option to set a slider from "selfish" to "Gandhi" doesn't seem like a great idea. Why would you give a user, ignorant of the actual algorithms used by an extremely complex device, control over something like that other than to make them feel better?
I don't think this is as big of an issue as these researchers are making it out to be. I have this post saved for a reason. What this issue boils down to is that the real world doesn't work like the abstract moral world posited by articles like these. They never ask real-world questions, like how road accidents actually occur. Ever heard of the Swiss Cheese Model by James Reason?
The idea is that bad actions and errors will happen, but that there are layers of defense that prevent them from turning into accidents most of the time. Say, for example, your driver is inattentive and drifts out of their lane on a highway. Will it turn into an accident? Likely not - highways are very wide, so drifting a little isn't a big deal. When you do cross the road markings, you can often hear a sound from the tires, alerting inattentive drivers that they're crossing the line. Even if they do go well over the line, the driver can still jerk the steering wheel back - the last layer of defense.
If an accident happens, it means that multiple things have gone wrong at the same time - the holes in the slices of cheese have lined up. Only when all of the big safety nets fail, when you can draw a straight arrow through every hole in the cheese, does an accident occur.
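To make that concrete, here's a toy sketch of the model: if each layer of defense independently catches most errors, the chance of an error slipping through every layer is the product of the individual failure rates. The layer names and probabilities below are made-up illustrative numbers, not real crash statistics.

```python
# Toy Swiss Cheese Model: an accident requires an error to pass
# through the "hole" in every defensive layer at once.
# All probabilities here are invented for illustration.

# Probability that each layer FAILS to catch the error:
layers = {
    "wide lane margins": 0.10,
    "rumble strips / tire noise": 0.05,
    "driver corrects in time": 0.02,
}

accident_probability = 1.0
for name, p_fail in layers.items():
    accident_probability *= p_fail  # layers assumed independent

print(f"P(error slips through every layer) = {accident_probability}")
# With these toy numbers, only 1 in 10,000 errors becomes an accident.
```

The point of the multiplication is the whole argument: any single layer can be fairly leaky, yet stacked together they make the all-layers-fail case rare.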
There are two major categories of 'things that can go wrong' here: human errors and unsafe actions. Autonomous vehicles will be better at both, as they can greatly reduce the number of errors and are increasingly good at preventing unsafe actions. For an autonomous vehicle to be forced into a 'kill 1 passenger vs. kill 10 others' decision, it needs to have failed to see the dangerous situation well in advance, it needs to have literally no room to divert to, it needs to be on a road where danger can arise even under the speed limit, and those 10 others need to be standing in the only place the car can steer its momentum toward. In the real world, that means one of two things: either the car is buggy or broken (computer error), or those ten people are doing something extremely dumb (unsafe action). In the former scenario, Google will pay; in the latter, Google has gigabytes of sensor data to prove just how recklessly those people behaved.
Look - the goal of autonomous vehicles is to drive like a human driver, only safer. How often does a moral dilemma like the trolley problem happen right now? How often have you made that decision, or pondered it during an unsafe situation? Never, or almost never. Just because driving is written down in code doesn't mean ethics need to be coded as well.