Leadership in an age of Artificial Intelligence

Ken Payne, Defence Studies Department, King’s College London

Military leadership has some distinctive and enduring features, some of which look set to be challenged by the increasing adoption of autonomous systems, including AI weapons. When the British Chief of the Defence Staff speculates about an army of 30,000 robots, it’s probably a good time to explore the implications.

First, a large caveat. We’re not talking about the Terminator or HAL. Today’s cutting-edge AI is closer to an abacus than it is to Arnie. Despite lots of hullabaloo, combat will retain its distinctively human nature for a long time to come. The capabilities of robots and computers simply aren’t good enough. More caveats will be along shortly.

The battlefield of the near future

In my view, leadership today is often about the psychology of groups – with leaders acting, often instinctively, to shape the identity and norms of their group. That’s true too of military leadership, with one distinctive feature: motivating and directing soldiers in combat. Tactical leadership sometimes calls for physical courage to inspire followers into action, and moral courage to make the right ethical choices in complex and emotionally charged circumstances. 

Now throw 30,000 robot systems into the mix. What changes? There’s lots of speculation, and no one knows for sure. We are in the Wacky Races phase of AI experimentation. It’s clear though that new platforms and processes are likely to emerge. New packages of combined arms are likely too: why emphasise stealth and armour when there’s no crew to protect? Perhaps speed, manoeuvrability and endurance will be more important attributes for unmanned platforms. I certainly think so. Add into the mix AI’s accuracy, increasing mastery of control, and its ability to coordinate huge, distributed swarms before rapidly concentrating. I think we can start to make some informed speculation about what will eventually be possible.

Some big themes:

  1. AI allows mass – it’s possible to replicate code far more easily than pilots. That suggests to me large volumes of disposable platforms using saturation to defeat active defences. (Another caveat – the swarm and the long-range submersible both need fuel).

  2. And AI does search – it’s excellent at finding patterns in huge quantities of noisy data – making it increasingly difficult to hide on the battlefield. (Another caveat – some AIs can be ‘spoofed’, or deceived, by image masking. And deepfakes – realistic, artificially created signatures – might induce attacks on false positives. Still, targets typically give off more than one signature).

  3. Lastly, AI is fast – if you’re going to lead those robots, you’ll need to get inside an OODA loop that’s going at warp speed, whether that’s the automated cyber battle for control of C3 networks, or the blurring combat of agile physical platforms.
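The multiple-signatures point in the second theme can be caricatured in a few lines. A toy sketch, with every channel name and probability invented: if each sensor false-alarms independently, demanding agreement across channels multiplies the false-positive rate down – which is why spoofing a single signature is rarely enough.

```python
# Toy illustration of multi-signature targeting: independent sensor channels
# that each false-alarm with some probability. All figures here are invented
# for illustration, not real sensor performance.

p_false = {"radar": 0.05, "infrared": 0.08, "acoustic": 0.10}

def fused_false_positive(channels):
    """P(all independent channels false-alarm at once) = product of each."""
    p = 1.0
    for c in channels:
        p *= p_false[c]
    return p

single = p_false["radar"]                                        # 5% alone
fused = fused_false_positive(["radar", "infrared", "acoustic"])  # far smaller
```

On these made-up numbers, a deepfaked radar return alone false-alarms one time in twenty, but fooling all three channels at once is orders of magnitude harder.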

So my vision for an automated battlefield is one that’s far larger and deeper, with fighting power distributed across huge distances in great numbers, but with excellent situational awareness and an ability to concentrate force rapidly and decisively at weak points. How best to lead in that intensely hostile environment? Some more hazy speculation follows.

Mosaic warfare

First, AI will push humans away from combat. Sure, ground-based robots today are clunky; sure, ‘human-machine teaming’ is the military’s preferred way to think about AI warfare. And yes, only infantry can hold ground, digging in grimly in the most inhospitable terrain. So in the first instance, expect tactical robots pushed down into human platoons and squads.

But the impact of AI won’t end there. Starting in the air and under the sea, and eventually on land too, increasingly adroit robots will come to dominate tactical action. We’ll need lots of thinking about how to employ them, and how to lead them. One interesting concept has already emerged.

In DARPA’s mosaic warfare concept, the tactical leader designs and then pulls forward from the reserves a package of capabilities to defeat the enemy in any given moment. Plus ça change? Perhaps so, but now the tactical commander is a machine, and the capabilities are automated too. And the pull-through is also automated – the tactical robot’s not asking the humans for reinforcements: that would take far too long.
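That pull-through can be caricatured as a matching problem. A minimal sketch, using a greedy set-cover heuristic with every capability name and pairing invented for illustration – real mosaic planning would be vastly more sophisticated:

```python
# Toy sketch of automated "package" assembly, loosely in the spirit of
# DARPA's mosaic warfare concept. Capabilities and the vulnerabilities they
# cover are all invented; the algorithm is a plain greedy set-cover heuristic.

reserve = {
    "decoy_swarm":        {"air_defence", "sensors"},
    "loitering_munition": {"armour"},
    "ew_pod":             {"c3", "sensors"},
    "strike_uav":         {"armour", "c3"},
}

def assemble_package(vulnerabilities, reserve):
    """Greedily pull capabilities until every enemy vulnerability is covered."""
    uncovered = set(vulnerabilities)
    package = []
    while uncovered:
        # Pick whichever capability covers the most remaining vulnerabilities.
        best = max(reserve, key=lambda c: len(reserve[c] & uncovered))
        if not reserve[best] & uncovered:
            break  # nothing left in the reserve can help
        package.append(best)
        uncovered -= reserve[best]
    return package

package = assemble_package({"armour", "c3", "sensors"}, reserve)
```

The point of the caricature is the tempo: a machine can run this matching continuously, at machine speed, without asking anyone’s permission.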

So tactical leadership won’t be human, if this vision pans out. There’ll be no need to inspire through physical courage. And no time for authentic, morally courageous leaders to make tough choices about who to deploy. Just overwhelming force assembled against enemy vulnerabilities, in a flash.

Robot Kingfishers

Still, what about the operational artists? Surely back at brigade and division HQ there’s still scope for ‘kingfisher moments’, as talented commanders plan and execute operations? We know that AI lacks the sort of understanding humans possess, that it’s still not good at prediction in complex social environments, and that its creativity is of a narrow sort – parsing gazillions of options, rather than making novel, even transformative connections. So there ought to be space somewhere in conflict where actions can be knitted together to achieve goals. Maybe at the operational HQ AI can act as a decision-aid, or even offer a way of rehearsing options synthetically. As with ‘centaur chess’, where human and machine work as a team, it might even enrich the creativity of commanders without entirely substituting for their human ‘genius’. Operational leaders will still inspire followers with the bold sweep of their vision, and with their dynamism and willingness to seize the initiative. Won’t they?

For now, definitely. But leadership in this middle ground of war will be under siege from two directions – the autonomous commanders pulling resources forward into combat in sequences they determine; and strategic leaders far from the action, translating their political goals into broad military objectives – providing rules of engagement for the warbots. In the squeezed middle, operational artists may find themselves increasingly focused on overseeing the reserve – the mosaic from which robotic warfighters will be drawn. That calls for leadership, sure, but also a heavy dose of managerialism – it’s about keeping the show on the road. Less ‘rise of the machines’, more ‘rise of the loggie’. In the armed forces of the future, we’ll need plenty of technologists, systems specialists and skilled managers. But possibly fewer kingfishers.

Cybernetic strategists

Meanwhile, back home, military leadership will also involve more of these technical competencies, in order to assemble and direct this huge, integrated warfighting system. There’ll be plenty of scope for automation in this back-office operation, with AI involved in weapons design, concept development and testing. AI designers will certainly challenge existing service shibboleths – as with, in the UK, the Royal Air Force’s proclivity for inhabited aircraft, and the Royal Navy’s for large, crewed surface combatants. And there’ll be scope for technical competence in the formulation and execution of strategy: for example, how can a given strategy be represented to the AI system in a way that adequately captures our goals ahead of the action? Not human ‘in the loop’, or even ‘on the loop’, but human ‘before the loop’ will be the order of the day.

But surely here at home, where the pressures of combat are most distant, there’s still space and time for creativity and ingenuity. And also space to lead, through inspiration and example. Leadership here will involve articulating strategy in a compelling fashion, to forge a sense of shared endeavour and to bring followers along through difficult times, whether those followers are in uniform or part of wider society. Strategic leadership will even involve building support for the vision I’ve set out here – of a largely autonomous armed force, or of awesomely powerful surveillance machinery. Both are controversial and dangerous undertakings. The military leader can play a part in all that. But leadership in the traditional sense, of directing and inspiring uniformed followers to fight – perhaps not.

As today’s military leaders climb the ranks, their tactical acumen arguably becomes less relevant to senior command. But the military rightly prizes their leadership ability: they know its value. Perhaps that’s why the teeth arms are so well represented at the top of the military.

Today’s uniformed strategists in the UK emerge blinking into the light of Downing Street towards the very end of their operational careers with a deep understanding of their organisation. With any luck they also have strategic acumen, though that’s certainly not a given. But what they do have in spades is leadership experience, earned along their ascent of a steep, competitive pyramid. What happens when that goes? What career pyramid will tomorrow’s officers have ascended, and what will they have learned on the way about leadership?

On the edge of tomorrow

The speculative vision I’ve sketched is far distant. It won’t arrive with the first RAF swarming squadron, Russian nuclear-powered submersible drone, or Chinese mobile battery of loitering cruise missiles. Have you seen the Tom Cruise/Emily Blunt film, Edge of Tomorrow? That’s my take on autonomous warfare – the aliens, that is, not Cruise’s plucky time traveller. Blisteringly fast, autonomous, cloned and networked aliens making a real mess of a conventional-looking human army. Spoiler alert: Cruise eventually leads his ragtag platoon to victory over the alien onslaught via conventional leadership attributes – inspirational courage, physical prowess, humour in a tight spot, and self-aware authenticity. Success for the human way of war? Not really – Cruise learns how to win via thousands of repeated goes at it – just like a deep learning AI, in fact.
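That reset-and-retry loop is, loosely, reinforcement learning. A toy sketch – tabular Q-learning on an invented five-cell ‘corridor’, where an agent learns over repeated goes that victory lies at the far end (the states, rewards and hyperparameters are all illustrative, not any real military system):

```python
# Toy tabular Q-learning, in the spirit of the film's reset-and-retry loop.
# The agent starts each "life" at one end of a five-cell corridor and learns,
# over thousands of episodes, to walk to the far end. Everything here is an
# invented illustration of learning by repetition.
import random

random.seed(0)
N = 5                          # corridor states 0..4; reaching state 4 is "victory"
q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

def policy(s, epsilon):
    """Epsilon-greedy over the two moves, breaking ties randomly."""
    if random.random() < epsilon or q[(s, -1)] == q[(s, 1)]:
        return random.choice((-1, 1))
    return 1 if q[(s, 1)] > q[(s, -1)] else -1

def run_episode(epsilon=0.1, alpha=0.5, gamma=0.9):
    s = 0
    for _ in range(50):                    # one "life", then reset
        a = policy(s, epsilon)
        s2 = min(max(s + a, 0), N - 1)     # walls at both ends
        r = 1.0 if s2 == N - 1 else 0.0    # reward only for victory
        # One-step Q-learning update from this attempt.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        if s2 == N - 1:
            return True
        s = s2
    return False

# Thousands of repeated goes, just like the film's protagonist.
wins = sum(run_episode() for _ in range(2000))
```

Nothing in the loop resembles courage or authenticity; the lesson is learned purely by dying, resetting and trying again.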
