Artificial Intelligence versus mission command

DR KENNETH PAYNE

In a new paper, Kareem Ayoub and I explore how artificial intelligence will shape strategy. Here, I focus on one important aspect of that: the ability of leaders to control the use of force.

Technology is sometimes seen as a threat to the British military’s philosophy of mission command. When it works as intended, mission command allows subordinates to exercise their on-the-spot judgement about how best to realise the intentions of their superiors. Commanders describe what they want to achieve, but leave the execution to those who are perhaps better sighted because they are nearer the action. This requires good judgement in subordinates, but imbues the military practising it with the flexibility to adapt to novel and unexpected developments. As an additional benefit, those carrying out the orders feel a degree of ownership and control that perhaps contributes to their fighting power, building cohesion and resilience.

That’s the theory. The ability of senior commanders, lawyers and even politicians to scrutinise tactical activities from afar has increased dramatically in recent decades. Frustration at those wielding the fabled ‘long screwdriver’ is a hardy perennial of conversation with tactical commanders. The demands of the strategic level, with its differing perspective on risk and on wider alliance considerations, are nothing new: Churchill liked to involve himself in tactical matters too, often to the consternation of the Chief of the Imperial General Staff, General Alan Brooke. But thanks to satellite communications, remote sensors and broadband internet, the ability of senior commanders to exercise control of tactical actions has increased. At the same time, the smaller scale of modern conflict and the perception of risk-averse domestic populations have created a sensitivity to tactical events at the top level. And so both the desire and the ability of senior commanders to control the action push against mission command.

I’ve always been a bit sceptical of this critique. The ability of commanders to micromanage is fundamentally constrained by their attention – likely to be in short supply. Perhaps it’s more often a case of self-policing by subordinates: not adopting the optimal approach for fear of being monitored.

In any event, my argument here is that the advent of fully autonomous tactical AI capabilities will shift the balance again, this time back towards the local decision-maker, precisely because the tactical decision-maker of the near future will not be human. Artificial intelligence challenges mission command from the other direction – limiting the commander’s ability to manage the interpretation and execution of intent.

AIs bring distinct tactical advantages – in speed, precision, endurance and raw information-processing power, among others. These characteristics mean that the AI can act too quickly for senior command to control. Communication with superiors distorts messaging, slows the speed of execution, and creates vulnerabilities that an enemy will seek to exploit. Tactical AI overcomes these problems by reaching and executing decisions locally – with great speed and precision, and without being susceptible to a large range of human biases.

The sort of platforms and systems I am referring to are currently in their infancy but are developing extremely rapidly. AI has already mastered complex control problems – including learning to fly a helicopter. It increasingly has the ability to comprehend its environment in rich detail, and to generate innovative approaches to solving problems within its universe. AIs can identify and track targets in noisy and cluttered environments – as with one that accurately distinguishes mosquitoes by the sound of their wing beats. When it comes to decision-making, AIs are moving beyond the constrained universe of the chess computer, bounded by strict rules and with perfect knowledge of possible options. Clausewitz likened war most to a game of cards – bound up with uncertainty and chance. As it happens, an AI has recently ‘solved’ poker – a game of asymmetric knowledge – sufficiently to come out ahead of humans on a consistent basis. It did that by recognising statistical patterns in adversary behaviour.
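As a very rough illustration of what ‘recognising statistical patterns in adversary behaviour’ can mean in practice, the Python sketch below counts how often a hypothetical opponent’s bets turn out to be bluffs and adjusts its response accordingly. The data, threshold and names are all invented for illustration; this is not a description of how any particular poker programme works.

```python
from collections import Counter

# Hypothetical record of hands where the opponent bet and their cards were
# later revealed at showdown.
showdowns = ["bluff", "value", "bluff", "bluff", "value", "bluff"]

counts = Counter(showdowns)
p_bluff = counts["bluff"] / len(showdowns)   # estimated bluffing frequency

# A crude exploitative rule: call with a marginal hand if the opponent
# bluffs more often than some threshold (the 0.33 here is arbitrary).
decision = "call" if p_bluff > 0.33 else "fold"
print(f"estimated bluff frequency {p_bluff:.2f} -> {decision}")
```

The real systems are vastly more sophisticated, but the underlying move is the same: the adversary’s past behaviour becomes data, and the data shapes the next decision.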

In the next decades, AIs will assume increasing importance in information acquisition and analysis, sorting and categorising vast databases. In ISR they will obtain and interrogate huge quantities of information in near real time. In manoeuvre and fires, progress is slower, but AIs will ultimately outperform humans and manned systems, with their physical, biological constraints. Swarming AIs will storm defended beaches; AI UAVs will outperform the best Top Guns – manoeuvring more sharply, reacting faster and less predictably.

AIs will generate pattern-of-life information on human targets in urban environments, discerning subtle biological signatures; nano-AIs will surveil targets. Data-crunching AIs will track after-action data and develop new operational concepts and tactics. In war colleges, AIs will red-team war games. In R&D laboratories, AIs will ‘evolve’ new weapons platforms – challenging ingrained and deeply held organisational proclivities for manned fighters, or aircraft carriers.

Yet there are at least two fundamental issues worth considering as we move towards tactical battlefield AI.

First, there is an ethical conundrum to address: one that humans already face, but that they might feel less comfortable outsourcing to a machine. This is the question of risk, and of whose life to endanger. War entails violence and death and, like humans, AIs will need to weigh the risk to lives before acting. When humans do this, we often draw on two distinct philosophical approaches – consequentialism and deontology. Should we prioritise the greatest good for the largest number of people (and, if so, which people), or do we have a duty to each individual life? Let’s suppose our AI must choose, in a flurry of combat, between sacrificing the life of a friendly soldier and killing two other people, likely to be civilians. Humans make these sorts of choices already, of course, but outsourcing them to an impersonal, non-biological agent feels pretty uncomfortable.
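To make the contrast concrete, here is a deliberately crude Python sketch of the two decision rules applied to that dilemma. The option names, casualty figures and weights are entirely hypothetical; the point is only that a consequentialist rule changes its answer as soon as we weight some lives more heavily than others, while a deontological side-constraint does not.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    friendly_deaths: int
    civilian_deaths: int

# The dilemma from the text: accept the loss of one friendly soldier,
# or act in a way expected to kill two probable civilians.
options = [
    Option("hold fire",  friendly_deaths=1, civilian_deaths=0),
    Option("engage now", friendly_deaths=0, civilian_deaths=2),
]

def consequentialist(options, friendly_weight=1.0, civilian_weight=1.0):
    # Minimise weighted expected deaths; the weights encode "which people".
    def cost(o):
        return friendly_weight * o.friendly_deaths + civilian_weight * o.civilian_deaths
    return min(options, key=cost)

def deontological(options):
    # Treat knowingly killing civilians as an absolute side-constraint,
    # then minimise friendly losses among what remains.
    permissible = [o for o in options if o.civilian_deaths == 0]
    return min(permissible, key=lambda o: o.friendly_deaths) if permissible else None

print(consequentialist(options).name)                       # hold fire: 1 death < 2
print(consequentialist(options, friendly_weight=3.0).name)  # engage now: 3.0 > 2.0
print(deontological(options).name)                          # hold fire, whatever the weights
```

Even this toy version shows how much of the ethics sits in numbers someone must choose in advance.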

The second problem is in deciding which goals to pursue and in what order. How do we know what it is we want our tactical AI to achieve? Victory is too trite an answer, because of its subjective and contingent meanings. How hard should our machine fight, and for what end? What is an acceptable use of resources, and an acceptable level of violence, to achieve a particular goal? This is the problem of revealed preferences – establishing intent ex ante is problematic if one’s goals are sometimes revealed via action, and if one cannot readily communicate these new or revised goals to agents in sufficient detail and in a timely fashion.

Tactical AI will, in theory, still reflect the intent of its superiors – specified in the protocols that establish its goals and rules of engagement. But there are problems in fully anticipating such contingencies – the goals we seek and our willingness to fight for them are often only revealed as events unfold. We can hazard a guess at some of the parameters that should constrain or guide a tactical AI, and these may suffice to deliver the desired performance. But social complexity will limit our predictive accuracy and, additionally, AIs will, in the normal run of things, develop unexpected solutions.
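One way to picture those ‘protocols’ is as a fixed set of parameters written down before the engagement. The sketch below, with entirely hypothetical field names and thresholds, shows what such an ex-ante encoding of commander’s intent might look like, and hints at why it struggles with preferences that are only revealed as events unfold.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EngagementProtocol:
    objective: str                      # a plain-language statement of intent
    max_collateral_estimate: int = 0    # civilian casualties tolerated per strike
    min_target_confidence: float = 0.9  # identification threshold before engaging
    weapons_released: bool = True       # whether lethal force is authorised at all
    no_strike_zones: tuple = ()         # areas the system must not engage in

protocol = EngagementProtocol(
    objective="deny the crossing point to enemy armour",
    max_collateral_estimate=0,
    min_target_confidence=0.95,
    no_strike_zones=("grid reference withheld",),
)

# The difficulty described above: these values express intent ex ante, but the
# commander's real tolerance for risk and violence may only become clear once
# events unfold, and by then the protocol is already in force.
```

However carefully drafted, a parameter set like this is a guess about future preferences, not a record of them.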

Both these issues can be ameliorated by intervening ahead of the machine’s decision to offer guidance: keeping the fabled ‘man in the loop’. The difficulty here is that of speed and communication. The benefits of AI, in addition to its raw computational power and unbiased decision-making, include its capacity to decide with alacrity and without human guidance in complex and rapidly changing environments. Intervening undermines both autonomy and speed. In some circumstances that will be fine – better to be right and lose a robot than to be wrong and kill the wrong people. But in situations where enemies have the capacity to respond autonomously and quickly themselves, pausing for reflection and higher guidance is a poor strategy.
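The speed penalty of intervention can be made vivid with a toy calculation. All of the timings below are invented purely for illustration; the only point is that a human review step adds seconds or tens of seconds to a cycle that an autonomous adversary might complete in a fraction of a second.

```python
# Hypothetical timings, chosen only to show the order-of-magnitude gap.
AUTONOMOUS_DECISION_S = 0.05   # on-board sense-decide-act cycle
COMMS_ROUND_TRIP_S    = 1.5    # relay to a remote operator and back
HUMAN_REVIEW_S        = 20.0   # operator assesses and approves the engagement
ADVERSARY_CYCLE_S     = 0.5    # assumed autonomous opponent's decision cycle

def decision_latency(human_in_loop: bool) -> float:
    if human_in_loop:
        return AUTONOMOUS_DECISION_S + COMMS_ROUND_TRIP_S + HUMAN_REVIEW_S
    return AUTONOMOUS_DECISION_S

for human_in_loop in (False, True):
    latency = decision_latency(human_in_loop)
    label = "human in the loop" if human_in_loop else "fully autonomous"
    outpaced = latency > ADVERSARY_CYCLE_S
    print(f"{label}: {latency:.2f}s per decision; outpaced by adversary: {outpaced}")
```

Whatever the true numbers, the structure of the trade-off is the same: each layer of oversight buys assurance at the price of tempo.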

These problems won’t go away and aren’t easily answered. The solution, however, is not to resist the adoption of autonomous weapons systems, or to seek to outlaw them – both because adversaries are unlikely to cooperate, and because technologies are changing so fast that reaching mutually agreeable and enforceable definitions is highly problematic. Ultimately, weapons that can identify, track and prosecute targets with inhuman speed are likely to confer battle-winning advantages. In doing so, they will stretch mission command to its limit – the commander’s intent may not bound the scope of tactical action sufficiently to cover all possibilities, while their capacity to monitor and impose themselves on the action will be curtailed.

Image: Northrop Grumman Bat, via Wikimedia Commons.
