Opinion: Are driverless cars immoral?

By Andrew Bell

Perhaps the most famous problem in ethics involves a speeding tram hurtling toward five people on one track, with a single person on the other. Philippa Foot first posed the problem in 1967: is it ethical for the driver to steer the tram onto the alternative track, towards a lone worker, in order to save the group of five? Or is it more ethical to take no action, and thereby avoid actively killing one man?

Publications such as The Guardian and Medium argue that this problem exposes a fundamental fault in the driverless car, questioning how autonomous vehicles could ever be programmed to act ethically and safely in such dangerous situations. I would argue that the nature of the self-driving vehicle actually lets us reframe the problem: it cannot apply to autonomous vehicles in the same way it applies to an individual driver, or 'tram driver'.

As driverless technology and smart city infrastructure become more widespread, ethical questions about individual agency will become less relevant, because driverless vehicles will synchronise both with one another and with that infrastructure. At that point, vehicles will be able to respond to accidents collectively: those involved will execute the least damaging manoeuvre together, drawing on each other's IoT data and on the nearby smart infrastructure.

Additionally, as Google's engineers have pointed out, the runaway tram is simply not comparable to the driverless car: autonomous vehicles are carefully programmed to act preventatively, so situations like this should not arise in the first place. The trolley problem implies a degree of negligence that allowed the situation to occur – perhaps someone was asleep at the controls, or a barricade was breached. The railroaded tram is out of control, making it the very antithesis of the autonomous car, which constantly maintains situational awareness precisely to prevent such emergencies.

Moreover, machine learning means that self-driving vehicles' reactions to dangerous situations will be dynamic, not restricted to the foresight of a few engineers' programming. Autonomous vehicles continuously collect accident data in order to learn safer ways to react in different situations. However, unlike the speeding tram, driverless cars are far more likely simply to brake than to attempt elaborate evasive manoeuvres, according to Google's engineers.

In conclusion, I would argue that driverless cars are actually better equipped than humans to deal ethically with problematic situations: they react faster and are designed to obey the law at all times. Their inbuilt radar systems and IoT connectivity eliminate immediate blind spots, giving them situational awareness superior to a human driver's. For evidence, watch this footage of a Tesla reacting to an accident as it unfolds immediately in front of it.