The Ethical Implications of Autonomous Weapons Systems

Autonomous Weapons Systems (AWS) are a direct product of modern technological innovation; there are no historical precedents, legal or moral, that currently dictate how these systems should be designed, implemented, and used. This highlights the necessity for an ethical and legal framework, in particular one that takes into account the following notions: 

  1. The importance of an empirical understanding of the function and design of these weapons, especially with respect to the structure, explainability, and predictability of their decision-making algorithms.
  2. The significance of the mass deployment of AWS in combat as it relates to ontological questions of moral value regarding human agency and dignity.
  3. The significance of the mass deployment of AWS in combat as it relates to the Laws of War and decisions to use lethal force.
  4. The question of moral responsibility, specifically which individuals or entities should be held accountable when violent actions taken by an AWS produce morally reprehensible results.

Before delving into the details of what has been outlined above, let us first define what we mean by an Autonomous Weapons System. The International Committee of the Red Cross (ICRC) has broadly defined an AWS as “a weapon system that can select and attack targets without human intervention”. Several countries have already deployed semi-autonomous armed drones, much like those used by the US government to target al-Qaeda strongholds and militants in Syria. 

However, because their targeting function is remotely controlled by a human pilot, these drones cannot be considered AWS. As such, it is important to recognize that the essence of an AWS lies in its capacity to identify, select, and subsequently administer force (lethal or non-lethal) against a desired target; this entire process, including the decision to use force, occurs without any human intervention. 

Any weapons system that cannot autonomously execute a targeting function is not an AWS, even if it can modulate and control certain physical functions (e.g., flight, navigation, or robotic limb movement) without human intervention. 

Artificial Intelligence and Unpredictability

The targeting function of an AWS works by employing a generalized target profile, which the AI algorithm uses to establish the timing, location, and characteristics of the target in question. This means that the AWS, whether it is a drone, an ICBM system, or any other kind of military technology, does not actually know who or what it will target until it has identified the target and decided to administer force. Even if the targeting algorithm is highly accurate, it cannot reliably anticipate civilian presence at potential target sites, as this would require intelligence that is difficult to receive and process in real time. 

If the target is a human, the AWS may be able to mitigate civilian casualties insofar as the algorithmic design of the targeting function includes the capacity for facial recognition and image classification. This would allow an AWS to identify specific human targets and then decide whether to administer force depending on the number of civilians present. However, there remains no concrete way to conclusively eliminate the risk of civilian casualties, especially in cases in which the targets are physical strongholds or bases.
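To make the point concrete, here is a minimal, purely illustrative sketch of the kind of decision flow described above. Every name, class, and threshold in it is an assumption invented for this example; it does not describe any real or proposed system.

```python
# Hypothetical sketch of an autonomous engagement decision, for illustration only.
# All names and thresholds are invented assumptions, not drawn from a real system.
from dataclasses import dataclass

@dataclass
class Detection:
    profile_match: float   # how closely the detection matches the target profile (0 to 1)
    civilians_nearby: int  # estimated number of civilians near the detection

def decide_engagement(detection: Detection,
                      match_threshold: float = 0.95,
                      max_civilians: int = 0) -> bool:
    """Return True if this hypothetical system would administer force."""
    if detection.profile_match < match_threshold:
        return False  # not confident the detection fits the generalized target profile
    if detection.civilians_nearby > max_civilians:
        return False  # abort: estimated civilian presence exceeds the configured limit
    return True

# A high-confidence match with one estimated civilian nearby is refused.
print(decide_engagement(Detection(profile_match=0.98, civilians_nearby=1)))  # False
```

Notice how every moral consideration in the paragraph above collapses into two numeric thresholds and an estimate of civilian presence, which is precisely where the ethical difficulty lies.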

More generally, an AWS whose targeting function is driven by machine learning should be adequately scrutinized with respect to algorithmic explainability. Neural networks, for instance, are renowned for their ability to identify patterns and make predictions from large data sets, including within unsupervised learning frameworks. However, even the most proficient programmers often do not possess a holistic understanding of the process by which such an algorithm arrives at a given output, even if that output is accurate; such algorithms are colloquially referred to as “black boxes”.
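As a toy illustration of the “black box” problem (not a depiction of any deployed system), consider the tiny feed-forward network below. Its weights are random placeholders, but the point holds for real networks: every parameter that produces the output is visible, yet the reason the model scores one input higher than another cannot simply be read off those numbers.

```python
# A toy feed-forward classifier, to illustrate why inspecting parameters alone
# rarely explains a decision. The weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)   # layer 1 parameters
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)    # layer 2 parameters

def classify(features: np.ndarray) -> float:
    """Return a 'target' score in (0, 1) for an 8-dimensional input."""
    hidden = np.maximum(0.0, W1 @ features + b1)          # ReLU hidden layer
    logit = (W2 @ hidden + b2).item()
    return 1.0 / (1.0 + np.exp(-logit))                   # sigmoid score

score = classify(rng.normal(size=8))
print(f"target score: {score:.3f}")
# Every number that produced this score is defined above (W1, b1, W2, b2),
# yet none of them, taken individually, explains why this input received this score.
```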

Consider the following scenario: a targeting function has an accuracy rate of 99%. This means that 1% of the time it misidentifies its target yet still makes the decision to administer lethal force. The fact is that any decision to end human life, regardless of the justification provided, should necessarily be explainable. In other words, we should be able to show clearly why a targeting function has an error rate of 1% and identify which specific parts of the algorithm are responsible. 
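To see why a 1% error rate is not a negligible footnote, a quick back-of-the-envelope calculation helps; the engagement count below is an arbitrary assumption chosen purely for illustration.

```python
# Back-of-the-envelope arithmetic for the 99%-accuracy scenario above.
# The number of engagements is a hypothetical figure chosen for illustration.
error_rate = 0.01        # a 99% accurate targeting function errs 1% of the time
engagements = 10_000     # assumed number of autonomous engagements
expected_misidentifications = error_rate * engagements
print(expected_misidentifications)  # 100.0 expected wrongful targeting decisions
```

One hundred expected wrongful uses of lethal force, none of which can currently be traced back to an identifiable, correctable part of the algorithm.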

Moreover, the lack of algorithmic explainability, even when error rates are low, makes it particularly difficult to anticipate how a targeting function will make decisions in novel situations. While a neural-network-driven targeting and classification algorithm would presumably have the ability to learn from novelty, it is still constrained by structural paradigms that make certain learned functions non-transferable across domains; after all, we have to assume that these algorithms do not possess general reasoning ability. Simply put, it would be like asking someone who has studied physics their whole life to solve a chemistry problem. 

Human Agency in Decisions Regarding the Use of Force

An AWS, by definition, functions without any human intervention. Yet, at the end of the day, wars are still fought by people. And those people, importantly, must comply with the combat regulations set out in International Humanitarian Law (IHL), which typically require context-specific justifications for the initiation of violent military action. 

IHL is a vastly complex legal domain, and a discussion of all its rules and regulations is beyond the scope of this article. However, there are three important rules to consider in the context of AWS use: 

  1. The Rule of Distinction 
  2. The Rule of Proportionality
  3. The Rule of Precautions

The Rule of Distinction requires parties engaged in military action to distinguish between civilians or civilian objects and legitimate military objectives or targets. This rule is intuitively grounded in a human’s ability to make context-relevant judgments. These judgments must be defined and approved prior to the initiation of an attack and, more importantly, must remain valid until the attack is carried out. While this rule provides a framework within which to evaluate the actions of an AWS, it does not allow us to build a direct connection between human agency and the actions of an AWS, especially since an AWS targeting function does not preemptively establish and identify targets.  

The Rule of Proportionality is a logical extension of the Rule of Distinction: it requires military personnel to justify an attack by showing that the harms brought to civilians do not outweigh the benefits generated by achieving the military objective. It is difficult to imagine how building this kind of utilitarian logic into an AWS targeting function could be morally justified. To apply the rule to an AWS, we would need a pre-defined, exact threshold for what counts as an acceptable ratio of civilian harm to military benefit. Given the importance of contextual interpretation and subjective reasoning in arguments of proportionality, this seems fundamentally infeasible. 
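Purely as an illustration of why this is troubling, encoding the rule in software would force us to reduce it to something like the hypothetical check below, in which incommensurable quantities (civilian harm, military benefit, an acceptable ratio between them) are treated as bare numbers. Every name and value here is invented.

```python
# Illustration only: what a hard-coded proportionality test would reduce to.
# The ratio and the inputs are invented; no real doctrine specifies such numbers.
def proportionality_check(expected_civilian_harm: float,
                          expected_military_benefit: float,
                          max_ratio: float = 1.0) -> bool:
    """Approve the attack only if harm divided by benefit stays below a fixed ratio."""
    if expected_military_benefit <= 0:
        return False
    return expected_civilian_harm / expected_military_benefit < max_ratio

print(proportionality_check(expected_civilian_harm=2.0,
                            expected_military_benefit=5.0))  # True
```

The difficulty is not writing such a function; it is that no fixed max_ratio, and no procedure for estimating its two inputs, can substitute for the contextual judgment the rule presupposes.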

The Rule of Precautions is exactly what it sounds like: parties engaged in military conflict must seek to minimize the level of harm inflicted upon civilians. The entire purpose of an AWS is to optimize the targeting and elimination of enemy combatants. If an AWS had a targeting accuracy of 100%, this rule could actually be invoked in favor of deploying such a system. However, since a 100% accuracy rate is unrealistic, the most defensible way to justify the use of AWS under this rule is to compare cases in which an AWS outperformed human or semi-autonomous military tactics. Unfortunately, the data on this matter are currently insufficient. 

As we can see, under the guidelines provided by IHL, it is difficult to draw a clear connection between human agency and the use of an AWS, especially since such systems have not yet been widely deployed in combat. However, the structure of IHL can provide us with a vital legal framework upon which we can model future legal regulations and ethical imperatives. Importantly, we must be sure to highlight how an AWS is an extension of human agency rather than a nullification of it. 

Human Dignity – What Does This Mean for How Wars Are Fought?

The Principle of Human Dignity, as applied in the context of war, entails a necessity not only to preserve civilian life during combat, but also to systematically restrict military practices that objectify civilian bodies (e.g., the holding of hostages or the use of civilians as leverage in the completion of military objectives). In the context of AWS use, however, this principle must be extended. 

Civilians trapped in wartime still possess the “right to life”, a fundamental existential right that combatants have, at least in theory, forfeited through their willingness to engage in lethal military operations. However, events such as the torture of detainees at Abu Ghraib, along with the historical precedents set by POW camps during WWII and the Vietnam War, have provided ethicists and international humanitarian lawmakers with an important message: human life, whether it is the enemy’s or yours, is intrinsically valuable.

The way human life is ended, especially within the context of war, demands moral scrutiny and investigation. It is commonly accepted that killing is both a condition and a reality of war. By contrast, we have not accepted torture, or weapons and military practices that prolong death or induce suffering; consider the banning of mustard gas after its widespread use in the trenches of WWI, or the transition from the electric chair to lethal injection in US state-administered executions. 

What all these banned practices have in common is a fundamental detachment from the harm they cause and the resulting dispossession of human bodies. The implementation of AWS in wartime should lead us to the same moral conclusions. Ask yourself the following question: which is worse, to be killed by an intelligent machine or by a human being? If you answer “machine”, your moral intuition is leading you in the right direction.

An AWS cannot understand the significance or value of human life, and therefore it cannot comprehend what it means to take a life and carry that burden indefinitely. Soldiers are instruments of war; yet we pity some of them when they return, because we can sympathize with the horror they have witnessed and taken part in, and we fear others who display no remorse or empathy. Why is this the case? Because killing is, and always will be, personal. It is deeply unsettling to view the act of killing as a cold calculation. 

There is one way around all of this: consider how war would look if it were fought entirely by AWS. In that case, the consequences of war would include the loss of machinery, national territory, and perhaps some, although minimal, human life. If this method of war were ‘perfected’, so to speak, we might find ourselves morally compelled to pursue it with the aim of preserving human life. 

That being said, we encounter a paradox here: if wars are not fought by people, and lives are not lost, then what is the point of fighting at all? Would we not be better off communicating our differences instead of spending billions of dollars on expensive machinery whose only purpose is to destroy other expensive machinery? 

How and Where Do We Attribute Moral Responsibility?

It has taken us a while to get to this point, so let us take a breath and regather our thoughts. 

We have evaluated the unpredictability of AI-powered targeting algorithms in conjunction with the problem of algorithmic explainability. We have also established that the current precedents set by IHL do not adequately encapsulate regulations and ethical frameworks concerning the use of AWS, although they could provide the necessary groundwork to do so. Furthermore, we have argued that the implementation of AWS fundamentally undermines human dignity, unless wars come to be fought entirely by these technologies. Keeping all this in mind, who should be held responsible? 

An AWS functions without human intervention, so it would be unreasonable to hold individual operators accountable for the consequences these technologies produce. Manufacturers of these systems, while they may bear a somewhat higher moral burden, are similarly absolved of responsibility insofar as their primary objective is to sell products that meet market demands. Those demands, however, are dictated by governments, private military organizations, and individuals with particularly high degrees of military influence; these are the most obvious culprits.

In conclusion, we should ensure, through grassroots political activism, the revision of IHL, and consultation with relevant legal experts and policy think tanks, that those entities which represent the pinnacle of power in modern society are adequately regulated, monitored, and, where necessary, punished for their use and implementation of AWS technologies.  
