Proposition for Solving Ethical Dilemmas of Autonomous Cars

Zolboo Erdenebaatar
3 min read · Feb 14, 2021

As technological progress happens at an unprecedented rate, there are many dilemmas we need to consider. Especially in the realm of artificial intelligence, we have to make careful decisions, because we are putting more and more trust, and more and more responsibility, in the hands of machines. A prime example of this is the implementation of autonomous vehicles. Most machines before this have performed their duties in isolated settings; a factory machine, for example, can only cause so much damage outside the factory. Autonomous vehicles, however, will have to interact with humans, drive on human streets, and coexist with us even though they have the ability to run over a person at any given moment.

An ethical dilemma that arises from this can be explained through a simple scenario: consider a car that sees a group of people running across the road. Suppose the car is going too fast to stop in time to avoid hitting them, and the only alternative is to swerve into a roadblock. In this case, what decision should the autonomous vehicle be designed to make?

[Image: the ethical dilemma, pictured]

We see that if the car stays on route, it will kill five people, and if it swerves, it will kill the passengers inside. As this is a moral choice, there is no single answer; we know that different people have different ethical values and priorities. But it is difficult for the car to know what the driver would be thinking, and it has to decide quickly. If the driver chose to kill the people crossing the street, would they be responsible for murder, since it was a deliberate choice? What if the passengers could have survived hitting the roadblock? What is the moral answer here?

An easy answer to this problem, I believe, is the use of mathematics in two ways. First, we can use machine learning on artificially simulated scenarios to teach the machine how much damage each choice could cause. Then, if the vehicle faces the dilemma in real life, it will know exactly how much damage each choice will do (i.e., it will know whether the passengers will die if the car swerves). Second, we can use math to make a fair judgment between human lives. Speaking from a purely utilitarian perspective, killing an old person results, on average, in fewer "years of life lost" than killing a kid; killing a kid robs them of decades of life. So, if someone HAS to die, an easy way to decide who lives is for the autonomous vehicle to perform facial recognition, estimate the age of each potential victim, and pick the choice that gives the fewest "years of life lost", so that we minimize the damage. It is the only fair way.
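The "years of life lost" rule above can be sketched in a few lines of code. This is only an illustrative toy, not a real system: the function names, the single life-expectancy constant, and the example ages are all my own assumptions here. In practice, the ages would come from an estimated (and error-prone) perception model, and life expectancy would be drawn from actuarial tables rather than one fixed number.

```python
# Illustrative sketch of a utilitarian "years of life lost" decision rule.
# LIFE_EXPECTANCY is an assumed flat average; a real system would use
# actuarial data, and ages would come from an age-estimation model.

LIFE_EXPECTANCY = 80  # assumed average life expectancy, in years

def years_of_life_lost(ages):
    """Total expected years of life lost if everyone in `ages` dies."""
    return sum(max(LIFE_EXPECTANCY - age, 0) for age in ages)

def choose_option(options):
    """Pick the action whose victims lose the fewest combined years of life.

    `options` maps an action name to a list of estimated victim ages.
    """
    return min(options, key=lambda action: years_of_life_lost(options[action]))

# Hypothetical scenario: stay on route and hit five pedestrians,
# or swerve into the roadblock and kill the two passengers.
scenario = {
    "stay_on_route": [8, 12, 35, 40, 70],  # estimated pedestrian ages
    "swerve": [30, 32],                    # estimated passenger ages
}
print(choose_option(scenario))  # → "swerve" (98 years lost vs. 235)
```

Under this rule the car swerves, because 98 combined years of expected life lost is less than 235 — exactly the kind of cold arithmetic the utilitarian framing commits us to.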
