Artificial Intelligence

Breaking: Opening salvo fired in coming war with machines

Dr. Ken Payne

DeepMind, the world’s leading Artificial Intelligence outfit, has released a remarkable new study with implications for those of us interested in war, cooperation, and the strategic ramifications of AI.

You can read and watch it here.

In short, their agents demonstrated the ability to relate socially in a competitive environment. When resources (green apples) were plentiful, the agents cooperated happily. When they were scarce, all hell broke loose – DeepMind had endowed the agents with the ability to shoot each other, and that’s exactly what they did.

So what?

In my next book, I take a long-run view of strategy, all the way from early human evolution through to the advent of AI. It’ll be out soon, but since DeepMind has made the subject topical, here is a preview of the argument.

First things first – DeepMind’s agents may have cooperated or fought; but they didn’t do so for the same reasons we did. They were making ‘rational’ decisions about whether to cooperate on the basis of individual gain, of the sort that will be familiar from game theory puzzles, like the famous Prisoner’s Dilemma.
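To see that logic spelled out, here is a minimal sketch of the Prisoner’s Dilemma in Python – the textbook payoff values, not anything from DeepMind’s study. It shows why a purely ‘rational’ individual defects, even though both players would be better off cooperating.

```python
# Minimal Prisoner's Dilemma sketch (textbook payoff values, higher is better).
# Standard ordering: Temptation > Reward > Punishment > Sucker's payoff.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),   # sucker's payoff vs temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # punishment for mutual defection
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximises my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is the best response whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (1, 1) leaves both players worse off than mutual cooperation (3, 3).
print(PAYOFFS[("defect", "defect")], "<", PAYOFFS[("cooperate", "cooperate")])
```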

Human cooperation is a puzzle – why do we cooperate when the benefits from slacking off can be substantial, especially if we deceive others about it? Natural selection happens at the level of the gene, not the group – so why should I risk my genes to help the group out?

One answer (mine, in fact) is war: the pressure of intergroup conflict, which we now think was pretty ubiquitous. Groups that cooperate together win against groups that do not. This is particularly true when weapons are pretty primitive and fighting takes the form of a melee rather than a one-on-one duel. If you don’t cooperate with group members there’s a good chance that both you and your group will go out of business.

Frederick Lanchester did the maths back in 1916, in his catchily named ‘square law’ of armed conflict. When the force of many could be concentrated against the few, there were disproportionate gains from scale. The takeaway: there is a massive military advantage to be had from being in a larger group; and larger groups that cooperate will win against smaller groups that do not.
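The square law can be seen at work in a few lines of arithmetic. The toy simulation below uses illustrative attrition rates (nothing from Lanchester’s own tables): each side’s losses per time-step are proportional to the size of the opposing force, and the bigger side wins while losing disproportionately few.

```python
# Toy simulation of Lanchester's square law (aimed fire): each side's losses
# per time-step are proportional to the *size* of the opposing force.
# Attrition rates and force sizes here are illustrative, not historical.

def fight(red: float, blue: float, red_rate: float = 0.1, blue_rate: float = 0.1,
          dt: float = 0.01) -> tuple[float, float]:
    """Run mutual attrition until one side is wiped out; return survivors."""
    while red > 0 and blue > 0:
        red, blue = red - dt * blue_rate * blue, blue - dt * red_rate * red
    return max(red, 0), max(blue, 0)

# Equal skill, but 2:1 numbers.
red_left, blue_left = fight(red=200, blue=100)
print(f"red survivors: {red_left:.0f}, blue survivors: {blue_left:.0f}")
# Square law prediction: red's survivors satisfy 200^2 - survivors^2 = 100^2,
# so roughly sqrt(200**2 - 100**2) ≈ 173 remain -- concentration of force
# yields returns that grow with the square of numbers.
```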

Two results follow. First, we instinctively cooperate, especially with those we identify as being in our group (conversely, we are chippy xenophobes towards outgroup people). Second, there’s a clear incentive to form larger groups – including of non-related people. Hence a motive for the development of cultural identities – a shorthand way of saying who can be trusted in a tussle. Over a long period of time, beginning some half-million years ago, humans developed unique social abilities – including a sophisticated empathy and theory of mind, allowing us to gauge what other individuals believed. Could we trust them not to malinger? In war we, like chimpanzees today, worked the odds in our favour – fighting primarily by raid and ambush, and attacking with surprise and massively advantageous force ratios. Victory goes to the big battalions.

The arrival of culture, thanks to all that cooperation, modified the situation somewhat: new strategies were available – including, of course, defence in depth! Scale mattered, but clever thinking might offset it.

Back to DeepMind, and the impending rise of the machines. Their agents cooperate to harvest digital apples; but the logic of that cooperation is not the same as that which drove humans to develop culture, empathy, theory of mind, and instinctive cooperation. Their cognitive architectures are far different from ours: they are not embodied biological intelligences, whose very survival depends on navigating a rich social terrain. They are not enmeshed in a biological world of natural selection; driven – often unconsciously – by the imperative to propagate their genetic code. They do not exist in groups that are in a constant state of conflict with neighbours. Harvesting a digital apple does not require the same cooperation as a mammoth hunt. DeepMind’s ‘toy universe’ is a much simpler affair – like an 80s arcade game, but with better baddies.

So, let’s all relax, no?

Well, to a point. Artificial General Intelligence, if it ever manifests, is unlikely to mirror human intelligence, which evolved as the answer to a particular environmental problem – replete with its emotions, massively parallel unconscious deliberation, and narrowband, self-aware, reflective consciousness. You could perhaps model that in cyberspace – but it would remain just a model. Philosophers like to pose the conundrum: what is it like to experience life as a bat? Answer: we’ll never know, but you can bet it’s much closer to us than the ‘life’ of a machine.

Still, groups of AI will face very similar meta-problems: scarce resources; the possibility of conflict to secure them; and a need to understand what other agents are likely to do. Their inclination to cooperate or compete with those agents may differ radically from our own. Watch this space.

Image courtesy of Wikimedia Commons.

Consciousness and strategy: What will AI want?

DR KENNETH PAYNE

This is the second of a two-parter on AI and strategy – the first, dealing with creativity, is here. In this part, I reflect on the goals for which strategy is a servant. Are the machines coming to take over, and destroy human life as we know it? Or will they be our faithful servants?

Much hangs on the question of motivation. That’s a large topic, and here I want to focus on one aspect – conscious reflection on goals. For humans, consciousness is both the result of our core biological motivations and a way of mediating them: deciding what to do. Would an AI develop something like consciousness, and so be able to reason about the world and its place within it? If so, might it generate its own goals?

Biological motivations beget conscious humans

Our ultimate motivation as humans is to remain alive at least until we can pass along our genes. We are embodied agents, and our cognition is inherently linked to our bodily processes. All animals have that instinct to survive and reproduce – it’s the basis of natural selection. But no other animal has developed human-level consciousness, which gives us the ability to reflect on our desires, and to strategise ways of attaining them.

Our consciousness is a biological adaptation that evidently confers important advantages. Consciousness, Nicholas Humphrey suggests, evolved in humans from the mind’s embodied relationship with the environment – a way of monitoring those interactions, and our responses to them, in a recursive manner. It gives us the ability to focus attention, and to edit and integrate complex mental imagery in new and useful combinations. Consciousness, argues Stanislas Dehaene, is a ‘global workspace’, like a clipboard on a computer.
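As a loose illustration of that ‘clipboard’ metaphor – a toy sketch of my own, not Dehaene’s actual model – imagine several specialist processes bidding for a single shared workspace, with only the most salient content broadcast back to all of them:

```python
# Toy illustration of the 'global workspace' metaphor (not Dehaene's model):
# many specialist processes run in parallel, but only the most salient item
# wins access to a single shared workspace and is broadcast to all of them.
from dataclasses import dataclass

@dataclass
class Bid:
    source: str      # which specialist process produced the content
    content: str     # the mental 'image' or message
    salience: float  # how urgently it demands attention

def broadcast(bids: list[Bid]) -> Bid:
    """Admit only the most salient bid to the workspace and share it widely."""
    winner = max(bids, key=lambda b: b.salience)
    for bid in bids:                      # every process 'hears' the winner
        print(f"{bid.source} receives broadcast: {winner.content!r}")
    return winner

bids = [
    Bid("vision", "movement in the long grass", salience=0.9),
    Bid("memory", "where did I leave the spear?", salience=0.4),
    Bid("hunger", "those berries look ripe", salience=0.6),
]
broadcast(bids)   # the threat wins the narrowband 'clipboard'
```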

Among other benefits, consciousness allows us to track our social interactions, reflecting on what others intend; it allows us to generate and (via language) to share abstract meanings; and it allows us to think about time – using experience, memory and creativity to imagine possible futures. Consciousness, language and abstract reasoning are interwoven features of a cognitive package that permits a rich human culture.

All animals share the basic survival motivation, and many may have similar ways of mapping their bodies’ interaction with the world around them – including their social world – but none have developed the same degree of self-awareness as us. I suspect that our intense sociability is the reason for that. If survival is our ultimate motivation, the search for meaning – an understanding of our place in the world – may, as Viktor Frankl suggests, be the main proximate motivation shaping human behaviour. Without a deep understanding of the social world, we are vulnerable. Part of that happens instinctively, unconsciously, as with empathy – but much of our social understanding entails conscious reflection and exchange. If consciousness evolved to map our place in the world, human consciousness extends that mapping to a rather abstract level.

Artificial consciousness

Would an AGI share that core animalistic motivation to survive? Perhaps; if so, there would be something like evolutionary pressure forcing it to iteratively refine better solutions to environmental challenges. Ultimately, that could result in a conscious AI. It could, but I am sceptical that it would look much like human consciousness.

In humans, cognitive function follows the biological form. One approach to developing AGI proceeds in the opposite direction – with form following function. We start with the human-like functions we wish to model, and work out what architectures can deliver them. AGI that is modelled on human cognitive functions may exhibit broad parallels – a synthetic emotional ‘module’; another to model episodic memories; perhaps even a deliberative ‘clipboard’ or workspace model to ape consciousness. There are attempts to do precisely this – but they are not particularly convincing, because both the broad functions (emotions, episodic memory) and their particular content are responses to a particular human problem. An AGI assembled from cognitive models would be like some Frankenstein’s monster – a kit of parts, rather than an evolved response to evolutionary pressures.

A more promising approach allows the AI to develop autonomous solutions to its own environmental challenges, so that its motivations follow in a more naturalistic fashion from the environmental challenge. This ‘emergent’ AGI need not be seeking to pass along ‘genes’ or any synthetic proxy (after all, it would not be subject to the same perishability as a biological life-form). But it would be seeking at a minimum to perform some function – and that mission itself would require it to preserve its autonomy. In that case, the AI would need to observe its relationship to the surrounding environment, and respond to its interactions. A complex, recursive relationship between agent and environment would ensue. If consciousness emerged in humans from a similar sort of process, an autonomous artificial agent might produce something ostensibly similar.

Perhaps, but I would venture that any emergent consciousness would be radically different, because its physical manifestation would be so very different.

Philosophers are apt to ask ‘what does it feel like to be a bat?’ – very different from being a human, no doubt – but it’s a good bet that it feels like something. Recent neuroscientific work suggests a limited form of consciousness in other creatures, including lowly insects – again as a way of mapping their relationship with their environment. The contention is that it feels like something to be a bee. They have a similar cognitive architecture to us, albeit with greatly reduced neuronal complexity. They are made of similar material, subject to similar biological laws of entropy. We cannot share their subjective experience, of course – and it’s a fair bet that bees and bats aren’t discussing Kierkegaard – but the evolutionary rationale for consciousness as a way of focussing awareness on feelings and emotions is striking, as are their behavioural responses to environmental stimulation. They possess the motive, means and method to develop consciousness – all highly suggestive of some form of animal consciousness.

The contrast with the emergent AGI is striking. No inevitable mortality, like that facing a biological body, and no genetic selection to underpin evolution. As a result, there’s nothing a priori to suggest that the AI will use similar cognitive mechanisms to humans or other animals. Its hugely parallel processing might not need to focus narrowband attention in the same manner as our consciousness, or a bee’s proto-consciousness – it can handle the urgent tasks and the less important ones without fatal loss of performance, because its cognitive architecture is so radically different. And it might not need emotions to tag and prioritise mental imagery for attention. The prioritisation rationale for emotions and consciousness doesn’t stack up for a massively parallel processing AGI, even if it needs to orient itself in response to its environment.

So it might not feel like anything to be an emergent AGI; and it might not need to focus a single beam of attention on priority messages from the myriad simultaneous processes of cognition. The AGI, in sum, may seek to preserve its agency and to interact with other agents in ways that are radically different from animals.

And yet something like consciousness may still emerge – perhaps as a means of rehearsing strategies, or of framing intuitive connections in ways that have contextual meaning, or perhaps of providing some executive capacity to coordinate responses. As Tononi argues, there is nothing per se to preclude the development of consciousness in machines – consciousness, on his account, is an emergent function of material and of the way that material is organised.

Nonetheless, I would expect human consciousness and subjective experience to be considerably closer to that of bees and bats than to that of an emergent AGI housed in a distributed network of hardware and capable of lightning-fast parallel processing. An AGI like that might generate consciousness, but not necessarily. It may be parallel but sufficiently integrated that we can talk of an ‘agent’, without being synchronised to the extent that a phase transition results in subjective awareness. It would not feel like anything to be such an AI, and its motivations would differ correspondingly. For example, while it might operate socially, its reproductive chances would not hinge on its social status, so it would not feel obliged to jealously defend its prestige, nor to instinctively favour kin over non-kin.

Without consciousness, the machine would be an unreflective executor of whatever goals it was tasked with. Were consciousness to develop, we might expect the machine to reflect on the big question – why? But the ways in which the machine reflected and the conclusions it reached would be very different indeed from our human, embodied reflections and our timeless search for meaning.

An ominous sense of impending doom attends discussion of super-intelligent AGI, pursuing ends inimical to humankind. There is a particularly well-worn sci-fi trope featuring malign AI that understands humans but does not care about them – this is AGI as psychopath, amoral and unempathetic. 2001’s HAL and Ava from Ex_Machina fit the bill nicely: they are intelligent enough to understand human motivation, and then to manipulate humans by exploiting that knowledge.

It’s not clear what’s driving those fictitious AIs – curiosity? Selfishness? Perhaps even a similar urge to survive – like the short-lived Replicants of Blade Runner? But what they all have in common is that they are human artistic inventions – anthropomorphic projections of our own imagination, imbued with a human sort of consciousness.

I doubt that will happen. Our cognition is shaped by our bodies, with their biological constraints and emotional wiring. Our motivations originate in that biology – even our higher order reasoning, with its incessant search for meaning, its curiosity and imagination.

There are dangers here for humans – but not necessarily of a passionate, self-aware Intelligence relentlessly pursuing its own agenda, even where that clashes with ours. If anything, the danger is of an unemotional lack of understanding and empathy as AGIs execute their allotted task, because the machines do not employ emotion as a way of organising their cognition.

Strategic AGI

What’s the relevance of all this for strategic studies scholars and practitioners? We are soon to enter a world with multiple artificial agents, flexibly pursuing varied and sometimes antagonistic ends – on our behalf initially and maybe one day on their own. Google’s DeepMind aims to deliver Artificial General Intelligence in relatively short order. It’s not entirely clear what that would mean in practice – but broadly speaking, such an AGI ought to be able to tackle a variety of cognitive challenges in a reasonably complex environment. That does not imply consciousness, or intrinsic motivation.

Image: Matrix wallpaper, via flickr.


Strategy, creativity and artificial intelligence

DR. KENNETH PAYNE

I’m working on a book about evolution and strategy. It ends with me wondering whether Artificially Intelligent (AI) machines will be good strategists. My BLUF – yes, but not yet, or any time soon.

Here I’m going to rehearse some of that argument. I identify two big issues in AI that bear on its strategic impact – creativity and motivation. This post is on creativity, the next (coming out next week) on motivation. The two are related: creativity is a powerful way of realizing our goals, especially in complex, uncertain and fast-changing environments. AI will need creativity to tackle … well, what, exactly? That’s the subject of my second post – what do machines want?

Creativity

Strategy is a creative process, and at present the ability of AI to perform strategic activities is severely limited by its inability to think creatively.

But what is creativity? I argue that it requires imagination. This is the ability to make unexpected connections; to intuit meanings that are not immediately obvious, partly based on experience. AlphaGo seemingly managed something like that in its recent match against the human world champion. But was it really creative? I say no: its ‘intuition’ was illusory – mere anthropomorphism on our part, as we project back from behaviours we observe but did not expect and cannot readily explain, to intuit some plausible human cognitive process underlying them.

Humans are creative as a result of the interactions of their conscious and unconscious minds. To be sure, the unconscious inner workings of the mind play an important part in creativity, as artists of all stripes readily testify. Robert Louis Stevenson argued strikingly that his inner mind was the hub of all his creativity, telling him his stories while he slept.

But that’s not enough. To be really creative, we need to be conscious and self-reflective, grounding our unbridled imagination within a framework of meaning. So consciousness serves an important creative purpose in humans. Creativity is not just random connections – ideas have meaning in a given context. The capacity to reflect on those meanings typically requires conscious thinking, even allowing that instinct is important. There’s a strategic rationale for consciousness: some evolutionary theorists suggest that human consciousness evolved and deepened as a response to our increasing sociability, and our efforts to gauge what our peers were up to.

AlphaGo was making its choices on an entirely different basis. It was engaged in some complex information processing that weighed probabilities and recalled previous games to produce a move unexpected by on-looking humans. In a two-step process, it combined a reinforcement learning algorithm that had ‘trained’ by playing itself at Go many times over with a probabilistic algorithm that narrowed its many possible options to those most feasibly useful, thereby reducing the vast number of possible moves to something more computable. That’s not creative, in the human sense – it’s calculating. The machine lacked that human ability to reflect on the meaning of its move. In Go, that was no big problem – the universe of Go, while containing unfathomable complexity in terms of combinations of possible moves, is pretty simple in terms of types of move. That’s a structured universe that lends itself to computation. The moves favoured by the machine were those most likely to succeed – indeed, those that had succeeded before.
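A heavily simplified sketch of that two-step pattern might look like the following – the policy prior and playout function here are arbitrary stand-ins of my own, not DeepMind’s code: a learned policy prunes the options, and simulation evaluates what remains.

```python
import random

# Sketch of the 'prune with a learned policy, then evaluate by simulation' pattern.
# The prior and simulate functions below are dummy stand-ins for trained networks.

def select_move(moves, policy_prior, simulate, top_k=5, rollouts=200):
    """Keep only the moves the policy rates highly, then pick the candidate
    whose simulated playouts win most often."""
    # Step 1: the policy narrows a vast move set to a handful of candidates.
    candidates = sorted(moves, key=policy_prior, reverse=True)[:top_k]
    # Step 2: estimate each candidate's value by playing out many games.
    def win_rate(move):
        return sum(simulate(move) for _ in range(rollouts)) / rollouts
    return max(candidates, key=win_rate)

# Dummy stand-ins: 361 board points, a prior that favours the centre, noisy playouts.
moves = list(range(361))
prior = lambda m: 1.0 / (1 + abs(m - 180))
simulate = lambda m: random.random() < 0.4 + 0.5 * prior(m)   # True = simulated win

print("chosen move:", select_move(moves, prior, simulate))
```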

DeepMind’s computer made some radical moves (by human standards), and the reasons are hard for humans to intuit. The temptation to anthropomorphise is strong. But there’s no need. The beautiful simplicity of Go constrains the ability of the artist to be creative, but it’s perfect for a devastatingly efficient machine learning algorithm.

Sceptics have long argued that AIs shuffle symbols or tokens that represent reality, without actually understanding what they mean. This is the basis of John Searle’s famed ‘Chinese Room’ thought experiment, in which a machine translates to and from Chinese via protocols, without any innate understanding of what all these symbols mean. This lack of native understanding would certainly limit AI’s capacity to take over the universe – or even to perform many useful functions on our behalf. Without grasping what things mean, AIs are just hugely sophisticated data processors.

Not all AIs work by shuffling tokens in that ‘cognitivist’ fashion, where reality is represented symbolically, so that cognition is merely the processing of symbols. Recent breakthroughs in AI like AlphaGo use an alternative approach modelled closely on our biological cognition. Our own minds are the product of complex, typically recursive networks of neurons. Human intelligence emerges through the development of these networks, via evolutionary inheritance and then experience. AIs employing Artificial Neural Networks work similarly, although in a rather coarser, more simplified fashion than the billions of neurons and trillions of connections at our disposal. Like us, they work not by blindly processing symbolic representations of reality, but by learning which connections produce the optimal response. Machine intelligence emerges from this evolution of network connections. Perhaps such ‘emergent’ approaches allow the machine to ‘understand’ like we do – getting round Searle’s objection?
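Before answering that, it is worth seeing just how mechanical this kind of learning is. The deliberately tiny network below – nothing like AlphaGo’s scale, and the architecture and learning rate are arbitrary choices of mine – simply adjusts its connection strengths until its responses to a toy problem come out right:

```python
import numpy as np

# A tiny neural network learning XOR by gradient descent: a toy illustration of
# 'adjusting connection strengths until the response is right'.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output connections
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(20000):
    # Forward pass: signals flow through weighted connections.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: nudge each connection in proportion to its share of the error.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;       b1 -= lr * d_hid.sum(axis=0)

print(np.round(output, 2))   # typically converges towards [[0], [1], [1], [0]]
```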

No. While we also learn that way at the neuronal level, rather more is involved at a functional level, thanks to the subjective richness of those associations in our conscious mind. In the inner workings of the human mind, the content of mental images and the neuronal networks that generate them are not available via introspection – we only experience the subjective outcome of those processes, at a higher functional level. It’s there, in our conscious mind, that things really ‘mean’ something. And it is here, too, that creativity is fully manifest – as our experience is ceaselessly edited and manipulated in new combinations.

That emergent, first-person experience is what we understand by ‘meaning’. It’s rather more than the simple output of the neural network: it’s a phase transition, qualitatively different. Moreover, one where the emergent property, our mind, feeds back recursively into the network, so that cognition is always dynamic, recursive and multi-causal.

Artificial General Intelligence and creativity

AI cannot do this at the moment, but I reckon it will eventually. Such an AI will rightly be called an Artificial General Intelligence (AGI). Maybe all that’s lacking is the complexity of the system. The idea that consciousness emerges from systemic complexity and integration comes from Giulio Tononi; others, like Christof Koch, take a similar view – consciousness per se is independent of the ‘platform’ on which the information processing happens.

Many AI experts disagree – for them there is something fundamentally different about human cognition and consciousness. Some, like Antonio Damasio, argue that there is something particular about the biology of the brain that generates consciousness – the material is at least as important as the function, and can’t be modelled in silicon. Fine; but then, what happens when biological material is used to construct artificial neural connections – couldn’t consciousness emerge then? Functionalists like me don’t think Damasio’s biological argument is right, but even if it is, biotechnology means it’s not the end of the road for strong, human-like AI; just that it would likely be some way distant, given current capabilities.

Whoever is right, it’s surely the case that when AI cognition achieves that phase change, its consciousness will involve a qualitatively different sensation to that of humans – the subjective experience of being that AI will be radically different, because it will be the product of a different type of agent, with different environmental pressures acting on it. Its phylogeny and development will differ from ours, and so its conscious experience will be different too. That thought gets us towards part 2 – motivation.

It is possible that the AI may develop into an AGI without aping the human brain’s twin-track architecture: its distributed, parallel unconscious processing, and its narrowband consciousness, always editing and interpreting in a rich web of meaning. Instead, the AGI’s ‘creativity’ may be a souped-up version of AlphaGo’s sophisticated calculation – one that is able to model and train in a less rigid, deterministic universe than the Go board. That, however, would require a new approach to AGI, since the huge complexities and ambiguities of the real world are not readily solvable by clever calculation of probabilities combined with reinforcement learning.

AGI and strategy

How does this discussion on artificial creativity affect strategic studies?

Strategy is dynamic, interactive and hugely uncertain. It requires the ability to reflect on past experience and project into the future. The possible combinations are not computable by brute force logic. DeepMind recently demonstrated their algorithm navigating a maze, like a rat in search of rewards, and the achievement is phenomenal – not least because the AI is coping with the problem of delayed rewards. But pseudo-rat level intelligence is still a very long way from AGI that can master strategic interactions between competing groups of humans.
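The nub of that delayed-reward problem can be seen in a toy example far simpler than DeepMind’s maze, and nothing like their algorithm: a tabular learner on a five-cell corridor, rewarded only at the far end, gradually propagates value back to the earlier steps.

```python
import random

# Toy tabular Q-learning on a five-cell corridor with a single delayed reward
# at the far end -- a minimal illustration of learning from delayed rewards.

N_STATES, GOAL = 5, 4          # states 0..4, reward only on reaching state 4
ACTIONS = (-1, +1)             # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0   # nothing until the very end
        # Bellman update: today's estimate leans on tomorrow's best estimate,
        # which is how a reward at the end seeps back to the earlier steps.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({s: round(max(Q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)})
# Values rise towards the goal (roughly 0.73, 0.81, 0.9, 1.0 for states 0-3),
# showing the delayed reward propagating back along the corridor.
```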

Far more likely, as Kareem Ayoub and I argue in the Journal of Strategic Studies, the military impact of AI will be felt in the more constrained universe of tactical, domain-specific AI that can handle logistics, make sense of vast datasets, or even master some control problems, like flying and driving.


Image: Image to Represent AI. Courtesy of Wikimedia Commons.

Artificial Intelligence versus mission command

DR KENNETH PAYNE

In a new paper, Kareem Ayoub and I explore how Artificial intelligence will shape strategy. Here, I focus on one important aspect of that: the ability of leaders to control the use of force.

Technology is sometimes seen as a threat to the British military’s philosophy of mission command. When it works as intended, mission command allows subordinates to exercise their on-the-spot judgement about how best to realise the intentions of their superiors. Commanders describe what they want to achieve, but leave the execution to those perhaps better sighted, being nearer the action. This requires good judgment in subordinates, but imbues the military practicing it with the flexibility to adapt to novel and unexpected developments. As an additional benefit, those carrying out the orders feel a degree of ownership and control that perhaps contributes towards their fighting power, building cohesion and resilience.

That’s the theory. The ability of senior commanders, lawyers and even politicians to scrutinise tactical activities from afar has increased dramatically in recent decades. Frustration at those wielding the fabled ‘long screwdriver’ is a hardy perennial of conversation with tactical commanders. The demands of the strategic level, with its differing perspective on risk and on wider alliance considerations, are nothing new: Churchill liked to involve himself in tactical matters too, often to the consternation of the Chief of the Imperial General Staff, General Alan Brooke. But thanks to satellite communications, remote sensors and broadband internet, the ability of senior commanders to exercise control over tactical actions has increased. At the same time, the smaller scale of modern conflict and the perception of risk-averse domestic populations have created a sensitivity to tactical events at the top level. And so both the desire and the ability of senior commanders to control the action push against mission command.

I’ve always been a bit sceptical of this critique. The ability of commanders to micromanage is fundamentally constrained by their attention – likely to be in short supply. Perhaps it’s more often a case of self-policing by subordinates: not adopting the optimal approach for fear of being monitored.

In any event, my argument here is that the advent of fully autonomous tactical AI capabilities will shift the balance again, this time back towards the local decision-maker, precisely because the tactical decision-maker of the near future will not be human. Artificial intelligence challenges mission command from the other direction – limiting the commander’s ability to manage the interpretation and execution of intent.

AIs bring distinct tactical advantages – in speed, precision, endurance and raw information processing power, among others. These characteristics mean that the AI can act too quickly for senior command to control. Communicating back up the chain of command introduces distortions in messaging, slows execution, and creates vulnerabilities that an enemy will seek to exploit. Tactical AI overcomes these problems by reaching and executing decisions locally – with great speed and precision, and without being susceptible to a large range of human biases.

The sort of platforms and systems I am referring to are currently in their infancy but are developing extremely rapidly. AI has already mastered complex control problems – including learning to fly a helicopter. It increasingly has the ability to comprehend its environment in rich detail, and to generate innovative approaches to solving problems within its universe. AIs can identify and track targets in noisy and cluttered environments – as with one that accurately distinguishes mosquitoes by the sound of their wing beats. When it comes to decision-making, AIs are moving beyond the constrained universe of the chess computer, bounded by strict rules and with perfect knowledge of possible options. Clausewitz likened war most to a game of cards – bound up with uncertainty and chance. As it happens, an AI has recently ‘solved’ poker – a game of asymmetric knowledge – sufficiently to come out ahead of humans on a consistent basis. It did that by recognising statistical patterns in adversary behaviour.

In the next decades, AIs will assume increasing importance in information acquisition and analysis, sorting and categorising vast databases. In ISR it will obtain and interrogate huge quantities of information in near real time. In manoeuvre and fires, progress is slower, but it will ultimately outperform humans and manned systems, with their physical, biological constraints. Spherical AIs will storm defended beaches; AI UAVs will outperform the best Top Guns – manoeuvring more sharply, reacting faster and less predictably.

AIs will generate pattern-of-life information on human targets in urban environments, discerning subtle biological signatures; nano-AIs will surveil targets. Data-crunching AIs will track after-action data and develop new operational concepts and tactics. In war colleges, AIs will red-team war games. In R&D laboratories, AIs will ‘evolve’ new weapons platforms – challenging ingrained and deeply held organisational proclivities for manned fighters, say, or aircraft carriers.

Yet, there are two fundamental issues, at least, worth considering as we move towards tactical battlefield AI.

First, there is an ethical conundrum to address: one that humans already face, but that they might feel less comfortable outsourcing to a machine. This is the question of risk, and of whose life to endanger. War entails violence and death and, like humans, AIs will need to weigh the risk to lives before acting. When humans do this, we often draw on two distinct philosophical approaches – consequentialism and deontology. Should we prioritise the greatest good for the largest number of people (and, if so, which people), or do we have a duty to each individual life? Let’s suppose our AI must choose, in a flurry of combat, between sacrificing the life of a friendly soldier, or killing two other people, likely to be civilians. Humans make these sorts of choices already, of course, but outsourcing them to an impersonal, non-biological agent feels pretty uncomfortable.

The second problem is in deciding which goals to pursue and in what order. How do we know what it is we want our tactical AI to achieve? Victory is too trite an answer, because of its subjective and contingent meanings. How hard should our machine fight, and for what end? What is an acceptable use of resources, and acceptable level of violence, to achieve a particular goal? This is the problem of revealed preferences – establishing intent ex ante is problematic if one’s goals are sometimes revealed via action, and if one cannot readily communicate these new or revised goals to agents in sufficient detail and timely fashion.

Tactical AI will, in theory, still reflect the intent of its superiors – specified in the protocols that establish its goals and rules of engagement. But there are problems in fully anticipating such contingencies – the goals we seek and our willingness to fight for them are often only revealed as events unfold. We can hazard a guess at some of the parameters that should constrain or guide a tactical AI and these may suffice to deliver the desired performance. But social complexity will limit our predictive accuracy and, additionally, AIs will, in the normal run of things, develop unexpected solutions.

Both these issues can be ameliorated by intervening ahead of the machine’s decision to offer guidance: keeping the fabled ‘man in the loop’. The difficulty here is one of speed and communication. An AI’s benefits, in addition to its raw computational power and unbiased decision-making, include its capacity to decide with alacrity and without human guidance in complex and rapidly changing environments. Intervening undermines both autonomy and speed. In some circumstances that will be fine – better to be right and lose a robot than to be wrong and kill the wrong people. But in situations where enemies have the capacity to respond autonomously and quickly themselves, pausing for reflection and higher guidance is a poor strategy.

These problems won’t go away and aren’t easily answered. The solution, however, is not to resist the adoption of autonomous weapons systems, or seek to outlaw them – both because adversaries are unlikely to cooperate, and because technologies are changing so fast that reaching mutually agreeable and enforceable definitions is highly problematic. Ultimately, weapons that can identify, track and prosecute targets with inhuman speed are likely to confer battle-winning advantages. In doing so, they will stretch the scope of mission command to its limit – the commander’s intent may not bound the scope of tactical action sufficiently to cover all possibilities, while the commander’s capacity to monitor and insert themselves into the action will be curtailed.

Image: Northrop Grumman Bat, via wikimedia commons.