Algorithmic Design and the Resolution of Moral Dilemmas in Autonomous Vehicles

Why Is the Implementation of Machine Learning Important for the Development of AVs?

Companies like Google, Baidu, Tesla, and Uber are heavily invested in the production and deployment of Autonomous Vehicles (AVs). While each company is motivated by its own set of incentives, financial or otherwise, one concern warrants consistent and universal attention: safety. The large-scale deployment of AVs can only be realized insofar as citizens trust that their safety will never be compromised, or at least that it will be compromised less than it would be if humans were driving.

There are a number of benefits intimately linked to AVs and their potential widespread implementation, from a reduction in carbon emissions and city-scale footprints to cheaper and more efficient transportation supported by IoT-powered smart grids that streamline traffic flow and reduce accident rates. AVs can not only pave the way for sustainable technological and infrastructural growth, but also increase equality of opportunity in vulnerable or impoverished areas.

People living in inner cities or rural areas will soon have access to affordable ride-sharing platforms provided by companies like Baidu and Uber, where individuals can essentially ‘rent out’ their AVs for the day as public transport vehicles. The citizens who own the vehicles receive a small percentage of the fees charged, as does the company that provides the software, while riders benefit from an easily accessible and affordable public transport service.

None of the AV benefits I have mentioned thus far would be achievable without high-level computer vision, image classification/representation and motion detection, vehicle-to-vehicle communication and data transfer, as well as autonomous navigation. These functions are directly supported and powered by Machine Learning.

Machine Learning is at the heart of all AVs; without it, an AV, just like a human without a brain, cannot make sense of or learn from all the data it encounters and processes. And, while the quality of the data will ultimately have the most significant bearing on how well a Machine Learning algorithm is able to optimize for particular functions, there are nonetheless certain moral dilemmas these algorithms must be able to solve, a feat that requires refining algorithmic design. In other words, we have to imbue AV algorithms with the capacity for moral, value-laden decision-making.

The Trolley Problem

The Trolley Problem is one of the most well-known moral dilemmas in modern philosophy. Often analyzed through utilitarian logic, the dilemma frames the following hypothetical scenario: a runaway trolley is headed toward five people, and the only way to save them is to divert the trolley onto another set of tracks, where it will kill one person. Is it morally acceptable to do so?

Depending on your personal philosophical and psychological inclinations, you might feel that the decision to kill one person to save five is not only morally acceptable but required. In that case, you would be engaging in utilitarian moral judgment, which quantifies the right course of action in terms of a maximization of benefits and a minimization of harms.

Consider the following scenario: an AV is carrying an old couple home from grocery shopping. The AV is taking them through a busy suburban neighborhood inhabited mostly by families with small children. One of these children, while playing with their friends, jumps out into the middle of the road without noticing the oncoming vehicle. The AV, which does not have enough time to brake and avoid a collision, must decide whether to swerve around the child and risk serious harm, potentially even death, to the old couple it carries, or to hit the child, likely killing them, thereby preserving the safety of the old couple.

I have chosen a child and an old couple as the potential victims to highlight how sticky this kind of decision could be, especially since, as humans, we have a tendency to weigh the importance of life differently as people age. This is not to say that one life is inherently worth more or less than another, but rather to show that in scenarios like this one, such judgments are necessary to avoid cases in which everyone involved risks serious harm or even death.

If we choose to design an AV algorithm with utilitarian logic, we must consider not only the direct maximization of benefits and minimization of harms; we must also evaluate how personhood, at the individual level, is quantified, and whether discrete metrics for this kind of quantification are not only definable but morally acceptable. Some would argue that building a system that makes decisions in this way is fundamentally inhumane.
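To make this concrete, here is a minimal sketch in Python of what a naive utilitarian decision rule might look like. Everything here is a hypothetical illustration rather than a real AV planner: the `Party`, `harm_probability`, and `life_value` names are invented for this example, and `life_value` is precisely the contested quantification of personhood discussed above.

```python
from dataclasses import dataclass

@dataclass
class Party:
    """A hypothetical person (or group) affected by a candidate maneuver."""
    description: str
    harm_probability: float  # estimated probability of serious harm, 0..1
    life_value: float        # the morally fraught part: a numeric weight on a life

def expected_harm(parties: list[Party]) -> float:
    """Utilitarian aggregate: sum of probability-weighted, value-weighted harms."""
    return sum(p.harm_probability * p.life_value for p in parties)

def choose_action(options: dict[str, list[Party]]) -> str:
    """Pick the maneuver that minimizes total expected harm."""
    return min(options, key=lambda name: expected_harm(options[name]))

# The dilemma from the text: swerve (endangering the couple) vs. brake-and-hit.
options = {
    "swerve":        [Party("old couple", harm_probability=0.7, life_value=1.0)],
    "brake_and_hit": [Party("child",      harm_probability=0.9, life_value=1.0)],
}
print(choose_action(options))  # -> "swerve" under these invented numbers
```

Note that the algorithm itself is trivial; the moral weight hides entirely in whoever sets `life_value` and estimates `harm_probability`.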

Deontological Ethics

Another approach to the Trolley Dilemma involves deontological ethics, whereby universal maxims or moral imperatives dictate whether or not an action is morally acceptable. On this view, an act is morally justifiable insofar as anyone else is equally capable of performing it, such that the maxim behind it could effectively be willed into universal law.

Consider the maxim “killing is wrong” and how it would affect any decisions made with respect to the Trolley Dilemma. In the case of the child and old couple, any solution involving harm brought upon either party is fundamentally unjustifiable because it directly contradicts the universal maxim. As such, deontological ethics, in this form, do not provide any conclusive solution to the Trolley Dilemma. 

However, if we restructure our maxim so that it includes a set of necessary conditions governing its application, we might find that our ability to arrive at a sound moral conclusion improves. For instance, if we say “killing is wrong only in cases in which there exist other, less harmful solutions,” we may find that an AV algorithm powered by deontological logic can still provide an answer to the dilemma.
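As a sketch of how such a conditional maxim might operate, consider the following Python fragment, in which the maxim acts as a hard constraint that filters candidate maneuvers before anything else happens. The action dictionaries, the `may_kill` flag, and the harm estimates are hypothetical placeholders, not a real planner's interface.

```python
# A minimal sketch of a conditional maxim as a hard constraint.
# Actions and harm estimates are hypothetical; real AV planners are far richer.

def violates_maxim(action: dict, alternatives: list[dict]) -> bool:
    """'Killing is wrong only when a less harmful alternative exists':
    a potentially lethal action is forbidden if any alternative inflicts less harm."""
    if not action["may_kill"]:
        return False
    return any(alt["expected_harm"] < action["expected_harm"] for alt in alternatives)

def permissible_actions(actions: list[dict]) -> list[dict]:
    """Filter out actions the maxim forbids; whatever survives is permissible."""
    return [
        a for a in actions
        if not violates_maxim(a, [alt for alt in actions if alt is not a])
    ]

actions = [
    {"name": "swerve",        "may_kill": True, "expected_harm": 0.7},
    {"name": "brake_and_hit", "may_kill": True, "expected_harm": 0.9},
]
print([a["name"] for a in permissible_actions(actions)])  # -> ['swerve']
```

The filter works only because the sketch pretends that `expected_harm` is a single number comparable across individuals, which is exactly the assumption the next paragraph calls into question.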

Nonetheless, we would inevitably be pushed to clarify what we mean by “less harmful solutions” and whether the prospect of death for a child carries the same level of moral relevance and importance as it does for the old couple. Moreover, the whole concept of harm is intrinsically relativistic; I may have a particularly high pain tolerance and require less anesthetic when I go to the dentist than someone else does, but this has no bearing on my capacity to experience pain in the first place. 

If AV algorithms were designed with deontological logic in mind, they would have to be capable of evaluating an individual’s quality of life and well-being in conjunction with their capacity to experience harm within a justificatory framework. The prospect of a Machine Learning algorithm deciding how much your life is worth feels deeply problematic, even if it retains the ability to solve the Trolley Problem. 

So, where does this leave us? 

A Hybrid Approach?

Both the utilitarian and deontological approaches fall short in their ability to provide conclusive answers to the Trolley Dilemma within the context of AV algorithmic design. Unfortunately, AVs still require the ability to make such decisions, even if these decisions make our stomachs churn.

As such, it could be beneficial to break these decisions down into their evaluative components. At the most basic level, we must deal with total harm inflicted; this is a quantifiable scenario, defined by the maximization of benefits and minimization of harms for the parties involved, in which utilitarian logic can provide the answer. However, we also need a way to quantify the degree of harm inflicted at the individual level, a feat that can be accomplished using conditional universal maxims.

Finally, we also need to evaluate the quality and worth of an individual’s life. This is the hardest part because it involves judgments regarding an individual’s character attributes and moral standing in society. Utilitarianism and deontology are not well suited to this kind of character evaluation; Aristotelian virtue ethics, however, is.

Virtue ethics evaluates how ‘good’ an individual is based on their possession of virtuous character traits such as courage, integrity, wisdom, and loyalty, to name a few. While it still feels dubious to allow an algorithm to make these kinds of judgments, they are nonetheless possible. The sheer wealth and quality of currently available behavioral data have allowed Machine Learning algorithms to curate highly individualized personality profiles that reflect not only consumer purchasing and spending habits but also certain moral attributes and tendencies.

By designing an AV algorithm that employs utilitarian, deontological, and virtue ethics as logical frameworks at different stages of the moral decision-making process, we may be able to derive an acceptable solution to the Trolley Problem. However, it would still be necessary to assign weights to each stage of the decision process so that we understand the relative importance of the conclusions or outputs at which they arrive. In this case, it could be beneficial to consider who or what would bear responsibility in these kinds of scenarios.
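A minimal sketch of such a weighted, multi-stage scorer might look like the following. The three stage functions and the `WEIGHTS` values are assumptions invented purely for illustration; choosing them responsibly is precisely the open problem.

```python
# A hypothetical three-stage hybrid scorer. The stage functions and weights
# are placeholders; choosing them is exactly the open problem discussed above.

WEIGHTS = {"utilitarian": 0.5, "deontological": 0.3, "virtue": 0.2}  # assumed

def utilitarian_score(action: dict) -> float:
    """Stage 1: negated total expected harm, so higher scores are better."""
    return -action["expected_harm"]

def deontological_score(action: dict) -> float:
    """Stage 2: 0/1 permissibility under the conditional maxims."""
    return 1.0 if action["permissible"] else 0.0

def virtue_score(action: dict) -> float:
    """Stage 3: a (deeply contestable) character evaluation of those affected."""
    return action["virtue_estimate"]

def hybrid_score(action: dict) -> float:
    """Weighted combination of the three moral frameworks."""
    return (WEIGHTS["utilitarian"]   * utilitarian_score(action)
          + WEIGHTS["deontological"] * deontological_score(action)
          + WEIGHTS["virtue"]        * virtue_score(action))

actions = [
    {"name": "swerve",        "expected_harm": 0.7, "permissible": True,  "virtue_estimate": 0.5},
    {"name": "brake_and_hit", "expected_harm": 0.9, "permissible": False, "virtue_estimate": 0.5},
]
best = max(actions, key=hybrid_score)
print(best["name"])  # -> "swerve" under these assumed weights
```

Whoever sets `WEIGHTS` is, in effect, legislating the relative authority of three moral theories, which is why the question of responsibility matters so much.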

Who or What Should Be Held Accountable?

The question of accountability, and more importantly, legal liability, should be carefully considered. There are three potential approaches: 

  1. We hold the owners of the AV accountable. 
  2. We hold the AV itself accountable.
  3. We hold the company that manufactured the AV accountable. 

If we hold the owners of the AV accountable, we essentially attribute blame and legal liability to individuals who, at the time of a collision, were not exercising any physical control over the vehicle in question. However, it could be claimed that by purchasing the AV in the first place, the consumer has already implicitly accepted the risks associated with AV use, and should therefore be held responsible when those risks turn into real-life scenarios.

If we hold the AV itself accountable, we are, without reservation, establishing a norm that inanimate autonomous objects, if they possess some degree of computational intelligence, warrant moral status and, therefore, agency. From a legal standpoint, if this is the case, the AV in question would need to be prosecuted as if it were another human being, a procedure so extreme that it almost feels satirical.

Finally, if we were to hold the company that manufactures the AV accountable, our approach would be in line with current practice in corporate law and regulation. This solution, from an intuitive standpoint, feels most appropriate. When a product is being designed, it is the company’s responsibility to ensure that it will benefit consumers safely and, more importantly, that consumers are not exposed to unknown risks that carry potentially life-threatening consequences.
