Deane Baker is a Senior Lecturer in the School of Humanities and Social Sciences at UNSW Canberra, and a Visiting Senior Research Fellow in the Centre for Military Ethics at King's College London. He is a panelist on the International Panel on the Regulation of Autonomous Weapons (IPRAW); however, his comments here are his own and do not represent the position of IPRAW. You can follow him at @DPBEthics.
A significant shortcoming in the ongoing debate over whether autonomous weapons systems (AWS) should be regulated under the Convention on Certain Conventional Weapons (CCW) is that so few of those engaged in the debate have a strong grasp of the nature of military operations. Without a strong dose of military realism to balance the laudable idealism and concern to prevent human suffering that drive many of the contributors to the debate, there is a degree of unreality to some of the positions that have been taken in this context. In some cases it seems that a form of contingent pacifism is at play, in which what is really being challenged is the right of militaries to use force at all.
A good example of this is the discussion over the idea of ‘meaningful human control’. I don’t find this term particularly helpful (it’s not clear how ‘meaningless’ control would constitute control at all), and I prefer instead to consider what I call the Morally Appropriate Degree of Control (MADOC). Broadly speaking, there is a correlation between the potential for causing unjustified harm and the degree of control required – the greater the potential for causing unjustified harm, the higher we (figuratively) set the level of MADOC. But context also matters. Today (unfortunately) we find ourselves in an environment in which many police officers around the world are armed with essentially the same rifles as those carried by members of the armed forces. The bullets fired from those rifles have precisely the same capacity to cause harm, whether justified or unjustified. But the key difference between the police rifles and the military rifles is that (usually) the police rifles do not have an automatic fire capability. Firing bursts from a rifle on fully automatic, even controlled bursts, reduces the operator’s control over where each individual round impacts. There are circumstances in which this is nonetheless appropriate for military personnel engaged in combat (i.e. the MADOC is met), but there are almost no circumstances in which it would be appropriate for a local law enforcement officer to fire an automatic weapon in the line of duty (the MADOC would not be met). The MADOC is defined by the context.
Context also matters in evaluating the MADOC for AWS. Very often in the discussion over AWS the implicit MADOC standard being applied is that appropriate to the individual infantry soldier exercising control over his rifle fire in a low-intensity counterinsurgency operation. Consider, for example, this statement from Human Rights Watch: “Determining whether an individual is a legitimate target often depends on the capacity to detect and interpret subtle cues, such as tone of voice and body language. Humans usually understand such nuances because they can identify with other human beings and thus better gauge their intentions.” (HRW 2016) The fairly obvious problem here is that very few humans involved in selecting and engaging enemy targets on today’s battlefields are even remotely in a position to interpret cues such as tone of voice and body language. Yes, this might sometimes apply to an infantry soldier manning a checkpoint during a counterinsurgency campaign. But the sniper providing overwatch will not be able to hear the target’s tone of voice, and the Reaper pilot observing through the cameras located under his remotely piloted aircraft’s fuselage will have even less ability to interpret such cues. And what of the fast jet pilot delivering a 500-pound bomb onto a building that intelligence has identified as an insurgent command post? Her ability to pick up on tone of voice and body language is nil. So does that mean that in carrying out her mission the pilot fails to meet the MADOC requirement? Of course not – because the context is different. If we apply the MADOC standard appropriate to the infantry soldier at the checkpoint across all military uses of force, we would essentially rule out the use of almost all standoff weapons, not just AWS. Part of the problem here, it seems to me, is that the armed conflicts most present on our television screens, in movies and in the popular media over the last decade and more have primarily been low-intensity counterinsurgency conflicts, and many commentators have unthinkingly taken the constraints that apply in such conflicts to be the norm for all forms of war. That is, of course, not at all true – the appropriate application of the legal and ethical principles that regulate war looks significantly different in the context of high-intensity wars between state peer adversaries.
The ICRC expresses a similar concern focused on the distinction between what they call ‘specific’ and ‘generalised’ targeting:
“The key difference between a human or remote-controlled weapon and an autonomous weapon system is that the former involves a human choosing a specific target – or group of targets – to be attacked, connecting their moral (and legal) responsibility to the specific consequences of their actions. In contrast, an autonomous weapon system self-initiates an attack: it is given a technical description, or a “signature”, of a target, and a spatial and temporal area of autonomous operation. This description might be general (“an armoured vehicle”) or even quite specific (“a certain type of armoured vehicle”), but the key issue is that the commander or operator activating the weapon is not giving instructions on a specific target to be attacked (“specific armoured vehicle”) at a specific place (“at the corner of that street”) and at a specific point in time (“now”). Rather, when activating the autonomous weapon system, by definition, the user will not know exactly which target will be attacked (“armoured vehicles fitting this technical signature”), in which place (within x square kilometres) or at which point in time (during the next x minutes/hours). Thus, it can be argued, this more generalized nature of the targeting decision means the user is not applying their intent to each specific attack.” (ICRC 2018, 12)
This description clearly fits well with something like the Harop loitering munition manufactured by Israel Aerospace Industries. When employed in autonomous mode (it can also be operated with a human in the loop), the Harop is designed to loiter over the target area, using its in-built sensors to detect radio emissions associated with air defence systems. Once one is detected, the Harop homes in on the signal and detonates its explosive warhead. Clearly this is the sort of generalized targeting that the ICRC is concerned about – the commander or operator of the system is not instructing the Harop to engage a particular radar system at a particular location and at a specific time. But so what? The fact is that the level of specificity being demanded here is often not met when traditional standoff weapons systems are used. An ASW helicopter weapon systems operator who launches a Mk 46 acoustic homing torpedo in response to a fleeting sonar contact does not direct that torpedo to engage a particular submarine at a particular time and in a particular location. Rather, her intent is that the weapon should engage any submarine within range of the system, at some time during the system’s endurance, when and if the torpedo’s active/passive sonar homing system detects one. If there is more than one enemy submarine in the vicinity, and the torpedo strikes one that was not the original sonar contact, so be it! The Mk 46 torpedo is essentially the underwater version of a loitering munition – yet no-one has ever suggested that when a Mk 46 torpedo destroys a sub-surface target “the user is not applying their intent” to the attack (the Mk 46 has been in service since the early 1960s).
Consider also an artillery battery conducting counter-battery fire, directed by a radar system that can track and plot incoming enemy artillery shells. The battery will seek to hit not only the artillery pieces the enemy is using to fire those incoming shells, but also any enemy personnel, vehicles and equipment in the unit operating them. Though the target is thus somewhat ‘generalized’ (to use the ICRC’s term), and those conducting the counter-battery fire do not know precisely which enemy personnel they will be hitting in the near vicinity of the enemy’s artillery pieces, killing or disabling all enemy personnel within that geographical space is clearly within the scope of their intent. The addition of AWS to the battlefield seems, if anything, to enable a more specific application of intent in the use of standoff weapons. Rather than being limited to saturating a specific geographical area with high-explosive munitions, employing a well-designed AWS potentially allows for the addition of further parameters, giving greater opportunities for specificity in the application of the targeteer’s intent.
The real concern underlying the ICRC’s objection is not really about intent, but rather (like the Human Rights Watch objection considered earlier) about the appropriate epistemic conditions for killing in war. That’s a legitimate concern, but one that must be weighed appropriately – we cannot apply epistemic standards to AWS that we do not apply to other types of weapons systems that are already in use and accepted.
Image: IAI Harop UAV at Paris Air Show 2013, by Julian Herzog, via Wikimedia.
You can also read this post on the University of New South Wales site here.
What a clear and balanced articulation of the problem. The wider debate also seems somewhat devoid of an understanding of the application of force, in that it assumes most state players would develop and deploy a system they couldn’t control, in my opinion. Therefore the doomsday scenario that is often painted – a system that self-initiates, self-selects targets and then kills people – lacks the very understanding of the use of the military instrument you are talking about.
I really enjoyed your piece.