A recent survey asked the following hypothetical question: if a driverless car is on a collision course with a group of pedestrians and it is impossible for the car to stop, should the car be programmed to plough into the pedestrians, killing many of them, or to swerve off the road, avoiding the collision but possibly killing the passengers in the car?
People answering the survey no doubt thought this way: “I am not a sociopath, so I don’t want my car to kill people; but on the other hand I don’t want it to kill me and my family, so I won’t buy one.”
But shouldn’t this question have been asked long before now, every time someone bought an SUV as big as a small house and built like an armored truck? People buy these vehicles to protect themselves and their families. Protect themselves from the other driver, that is.
These larger, heavier vehicles are then a greater hazard to every other road user: pedestrians and cyclists especially, and even the occupants of smaller compact cars. Yet the question never arises, because each individual SUV buyer sees himself as a good and safe driver; it is always the other driver who is the problem.
Isn’t the whole purpose behind the driverless car to eliminate driver error, the cause of the majority of collisions ever since the automobile was invented? You will notice I said “collision” and not “accident.” That is deliberate.
When human error is a factor, it is easy to say “Whoops-a-daisy,” it was just an accident. As I previously pointed out, most people are not sociopaths; they don’t intend to kill people. But drive in a reckless and dangerous fashion, and someone’s death is a likely outcome.
But what will happen when the robots take over and all cars are driverless? Without the human error factor, you can no longer call it an accident when someone dies, either inside or outside the car. Who gets sued? Not the driver, because there isn’t one. The robotic system will have failed, so the car manufacturer will be held responsible.
And can a corporation even program a computer-controlled car to decide who lives or dies? I can see that one going all the way to the Supreme Court.
If cars become driverless, speeds will have to come down dramatically. A pedestrian hit at 30 mph or less has a good chance of survival. Above that speed the odds worsen, and above 50 mph death is almost certain.
Robotics does not overcome physics. A vehicle traveling at 50 mph still needs about 125 feet to stop. That doesn’t include human driver reaction time; I am assuming a computer will react faster.
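The 125-foot figure follows from the standard constant-deceleration formula d = v²/(2a). A minimal sketch, assuming a braking deceleration of about 0.67 g (a typical dry-pavement value I am assuming, not one stated above):

```python
# Braking distance under constant deceleration: d = v^2 / (2 * a).
# decel_g = 0.67 is an assumed dry-pavement braking rate; it reproduces
# the ~125 ft figure quoted for 50 mph. Reaction time is excluded.

G_FT_S2 = 32.174           # standard gravity, in ft/s^2
MPH_TO_FT_S = 5280 / 3600  # 1 mph = ~1.467 ft/s

def braking_distance_ft(speed_mph, decel_g=0.67):
    """Distance in feet to stop from speed_mph, ignoring reaction time."""
    v = speed_mph * MPH_TO_FT_S  # speed in ft/s
    a = decel_g * G_FT_S2        # deceleration in ft/s^2
    return v ** 2 / (2 * a)

for mph in (25, 30, 50):
    print(f"{mph} mph -> {braking_distance_ft(mph):.0f} ft to stop")
```

Because distance grows with the square of speed, a car at 25 mph stops in roughly a quarter of the distance needed at 50 mph, which is the physics behind the low-speed argument later on.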
Another factor to consider: cars may be robotic, but pedestrians and cyclists are not. Will pedestrians learn that if you step out in front of an approaching driverless car, it will stop? Will cyclists realize that a car on autopilot will not pass a rider holding the middle of the lane unless it is safe to do so? How will that go over on a morning commute, following a cyclist at 10 or 15 mph?
I think fully driverless cars are a long way off. The delay will not be a technical issue; it will be one of acceptance: will the general populace accept it, is this what people want, and will they buy it?
A more sensible approach would be to concentrate on reliable, low-cost public transport (possibly driverless, to cut costs). Another way robotics could come into play is to restrict speeds in heavily congested areas. The current system, in which everyone drives at least 5 mph over the limit, is ludicrous.
Maybe put a few slow-moving driverless cars into the traffic stream just to slow everyone down. You can honk and cuss at a robot as much as you like; it won’t do you any good. Speed is the culprit: if everyone were forced to drive at 20 or 25 mph, there would be time to react and avoid collisions even when the other driver makes a mistake or does something stupid.