Many people, myself included, sometimes dismiss the Trolley Problem. However, as technology creeps ever further into our everyday lives, these moral problems are becoming ever more relevant.
How do you think a self-driving car should respond to an accident?
I had come across the Trolley Problem before, although I have never studied philosophy. I always thought the scenario was not reflective of real-world dilemmas, so I never fully considered the ethical questions it raises.
What is the thought experiment?
A trolley, or a train, is speeding down a track towards a junction. A villain (I always imagine a black-and-white movie felon) has tied five people to the track ahead and one person to the branch line. You are standing next to the lever that controls the junction. The only way to save the five is to divert the trolley onto the branch line; if you do, the one person there will die, but the five will be saved.
What would you do? More specifically, what is the ethical course of action?
Why am I blogging about this?
Self-driving and driverless cars have made this problem notorious again. A truly self-driving car will have to be given ethical instructions of some sort by human programmers. You can imagine the pleasure of ethicists, whose expertise is suddenly in high demand.
Nature published a paper in which psychologists and computer scientists took a different approach to deciding the ethics. Rather than asking a small band of philosophers for their professional opinions, Edmond Awad and his colleagues at MIT thought it best to ask the general public.
How did they get the public involved?
The team created a website, the Moral Machine, where visitors are presented with a series of choices about whom to save and whom to run over with the trolley. The example I watched in ‘The Good Place’ included five workmen on one side of the track and a close friend on the other.
An applicable example: the brakes on a self-driving vehicle fail while it is heading towards a pedestrian crossing. If the car stays in a straight line it will hit a woman and her dog; if it swerves, it will hit two business executives of no particular gender. What should the car do? What do you think you would do?
The website became hugely popular; it was mentioned on Reddit and by YouTube stars. So much so that nearly 40 million decisions were recorded across 233 countries, territories and statelets.
What were the preferences?
Unsurprisingly, the strongest universal response was saving human lives over animal lives. Preferring to save many rather than few, and prioritising children over the old, were also common themes.
The result I found most surprising, and in my opinion the scariest, was how little value was placed on criminals. They were the second-lowest category in the public’s priority list – even below dogs – ranking only just above cats.
However, I also have a problem with treating criminals as a single category. Surely it would depend on the type of crime committed. Are they an ex-criminal, or have they only just committed a crime? How serious was it? I struggle to see how a driverless vehicle would weigh up these factors, or whether it could ever have the capacity for this kind of deep human judgement.
The utilitarian perspective dictates that the most appropriate action is the one that achieves the greatest good for the greatest number. Meanwhile, the deontological perspective asserts that certain actions – like killing an innocent person – are simply wrong, even if they have good consequences. In the trolley scenarios above, utilitarians say you should sacrifice one to save five, while deontologists say you should not.
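To make the contrast concrete, here is a minimal sketch of how the two perspectives might be encoded as decision rules. This is purely illustrative: the `Outcome` class and the two `choose` functions are names I have invented for this post, and nothing here comes from the Nature paper or from any real vehicle’s software.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    """One possible action and the harm it would cause."""
    action: str
    casualties: int
    requires_intervention: bool  # must the agent actively redirect harm?

def choose_utilitarian(outcomes: List[Outcome]) -> Outcome:
    # Utilitarian rule: minimise total casualties, full stop.
    return min(outcomes, key=lambda o: o.casualties)

def choose_deontological(outcomes: List[Outcome]) -> Outcome:
    # One crude deontological rule: never actively redirect harm onto
    # someone, even if intervening would save more lives.
    passive = [o for o in outcomes if not o.requires_intervention]
    return passive[0] if passive else min(outcomes, key=lambda o: o.casualties)

# The classic trolley set-up from earlier: do nothing and five die,
# or pull the lever and one dies.
trolley = [
    Outcome("do nothing", casualties=5, requires_intervention=False),
    Outcome("pull the lever", casualties=1, requires_intervention=True),
]

print(choose_utilitarian(trolley).action)    # -> pull the lever
print(choose_deontological(trolley).action)  # -> do nothing
```

Even this toy version exposes the difficulty: the ‘right’ answer depends entirely on which rule the programmer picks, and neither function knows anything about who is actually on the tracks.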
So what perspective should a driverless vehicle take?
Cultural differences
As we can imagine, results differed between countries. Interestingly, the preference for saving women was stronger in places with higher levels of gender equality. Eastern countries (Eastern Europe, the Middle East, China and India) showed a weaker preference for saving the young over the elderly. Southern nations (Latin America) showed a less pronounced preference for humans over animals.
So, will there need to be a different moral compass for different regions of the world? People view moral dilemmas differently in different countries, but how, or indeed whether, should technology reflect this? Many further debates arise from this issue.
Self-driving cars in practice
Iyad Rahwan, a computer scientist at MIT, comments that the team do not intend for their findings to be translated directly into policy or legislation.
The only real-world example so far is Germany, which has implemented ethical rules for self-driving and driverless cars. These conflict with the team’s results in places: discrimination by age, for instance, is forbidden under the German rules.
Concluding thoughts
Some concluding questions, which I will be thinking about and invite you to consider too:
- Should machines be making human decisions?
- Should technology have this much power?
- As morals differ geographically, how should this be reflected in technology?
- If different technologies embody different moral values, what does this mean for trade and for machinery that moves between regions?
- How should the law reflect these problems?
Thank you for reading my article. I hope you have found it interesting and a topic to think about or discuss with friends. I know I will be doing so.