Breaking: Opening salvo fired in coming war with machines

Dr. Ken Payne

DeepMind, the world’s leading Artificial Intelligence outfit, has released a remarkable new study with implications for those of us interested in war, cooperation, and the strategic ramifications of AI.

You can read and watch it here.

In short, their agents demonstrated the ability to relate socially in a competitive environment. When resources (green apples) were plentiful, the agents cooperated happily. When they were scarce, all hell broke loose – DeepMind had endowed the agents with the ability to shoot each other, and that’s exactly what they did.

So what?

In my next book, I take a long-run view of strategy, all the way from early human evolution through to the advent of AI. It’ll be out soon; but by way of preview, and since DeepMind has made it relevant, here are some preliminary thoughts.

First things first – DeepMind’s agents may have cooperated or fought; but they didn’t do so for the same reasons we did. They were making ‘rational’ decisions about whether to cooperate on the basis of individual gain, of the sort that will be familiar from game theory puzzles, like the famous Prisoner’s Dilemma.
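To see why that ‘rational’ logic pulls against cooperation, here is a minimal sketch of the Prisoner’s Dilemma in Python – the payoff numbers are the standard textbook illustration, not anything taken from the DeepMind study:

```python
# Minimal Prisoner's Dilemma sketch. Payoff numbers are illustrative only.
# Each entry maps (my_move, their_move) to (my_payoff, their_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # I'm the sucker
    ("defect",    "cooperate"): (5, 0),  # I exploit them
    ("defect",    "defect"):    (1, 1),  # mutual punishment
}

def best_response(their_move: str) -> str:
    """Return the move that maximises my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Whatever the other player does, defection pays better for the individual,
# even though mutual cooperation beats mutual defection for the pair.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

Individually sensible, collectively worse – which is exactly the tension the apple-gathering agents face once the orchard thins out.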

Human cooperation is a puzzle – why do we cooperate when the benefits from slacking off can be substantial, especially if we deceive others about it? Natural selection happens at the level of the gene, not the group – so why should I risk my genes to help the group out?

One answer (mine, in fact) is war: the pressure of intergroup conflict, which we now think was pretty ubiquitous. Groups that cooperate win against groups that do not. This is particularly true when weapons are pretty primitive, and fighting takes the form of a melee rather than a one-on-one duel. If you don’t cooperate with group members, there’s a good chance that both you and your group will go out of business.

Frederick Lanchester did the maths back in 1916, in his catchily named ‘square law’ of armed conflict. When the force of many could be concentrated against the few, there were disproportionate gains from scale. The takeaway: there is a massive military advantage to be had from being in a larger group; and larger groups that cooperate will win against smaller groups that do not.
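For the curious, here is a rough Python rendering of the square law’s attrition equations. The force sizes and effectiveness values are invented purely for illustration:

```python
# Rough sketch of Lanchester's square law (aimed fire):
#   dA/dt = -b * B,  dB/dt = -a * A
# Each side's losses are proportional to the *size* of the opposing force.
def fight(a_size: float, b_size: float, a_eff: float = 1.0, b_eff: float = 1.0,
          dt: float = 0.01) -> tuple[float, float]:
    """Step the attrition equations until one side is annihilated."""
    while a_size > 0 and b_size > 0:
        a_size, b_size = (a_size - b_eff * b_size * dt,
                          b_size - a_eff * a_size * dt)
    return max(a_size, 0.0), max(b_size, 0.0)

# 1,000 fighters against 500 of equal individual skill: the larger side wins
# with roughly sqrt(1000**2 - 500**2) ≈ 866 survivors left standing, not 500.
survivors_a, survivors_b = fight(1000, 500)
print(round(survivors_a), round(survivors_b))  # ~866, 0
```

Under the square law, fighting power scales with the square of numbers: double your group and you quadruple its effective strength – which is precisely the incentive to band together.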

Two results follow. First, we instinctively cooperate, especially with those we identify as being in our group (conversely, we are chippy xenophobes towards the outgroup). Second, there’s a clear incentive to form larger groups – including of non-related people. Hence a motive for the development of cultural identities – a shorthand way of saying who can be trusted in a tussle. Over a long period, beginning some half-million years ago, humans developed unique social abilities – including a sophisticated empathy and theory of mind, allowing us to gauge what other individuals believed. Could we trust them not to malinger? In war we, like chimpanzees today, worked the odds in our favour – fighting primarily by raid and ambush, and attacking with surprise and massively advantageous force ratios. Victory goes to the big battalions.

The arrival of culture, thanks to all that cooperation, modified the situation somewhat: new strategies were available – including, of course, defence in depth! Scale mattered, but clever thinking might offset it.

Back to DeepMind, and the impending rise of the machines. Their agents cooperate to harvest digital apples; but the logic of that cooperation is not the same as that which drove humans to develop culture, empathy, theory of mind, and instinctive cooperation. Their cognitive architectures are very different from ours: they are not embodied biological intelligences whose very survival depends on navigating a rich social terrain. They are not enmeshed in a biological world of natural selection, driven – often unconsciously – by the imperative to propagate their genetic code. They do not exist in groups that are in a constant state of conflict with neighbours. Harvesting a digital apple does not require the same cooperation as a mammoth hunt. DeepMind’s ‘toy universe’ is a much simpler affair – like an 80s arcade game, but with better baddies.

So, let’s all relax, no?

Well, to a point. Artificial General Intelligence, if it ever manifests, is unlikely to mirror human intelligence, which evolved as the answer to a particular environmental problem – replete with its emotions, massively parallel unconscious deliberation, and narrowband, self-aware, reflective consciousness. You could perhaps model that in cyberspace – but that would just remain a model. Philosophers like to pose the conundrum: what is it like to experience life as a bat? Answer: we’ll never know, but you can bet it’s much closer to us than the ‘life’ of a machine.

Still, groups of AI will face very similar meta-problems: scarce resources; the possibility of conflict to secure them; and a need to understand what other agents are likely to do. Their inclination to cooperate or compete with those agents may differ radically from our own. Watch this space.

Image courtesy of Wikimedia Commons.
