AI Arms and Influence

Kenneth Payne

Thomas Schelling, greatest of the nuclear strategists, would have appreciated Jaguar Land Rover’s concept for a self-driving car. As it approaches a pedestrian crossing, the cartoon-like eyes above the car’s front bumper flick towards a woman who is about to step out. An intricate dance of minds ensues.

We do this instinctively, every day – I know that you’ve seen me, and you know that I’ve seen you. You should know that I intend to step out. You should stop. Or, if it’s not a crossing – you know it’s my right of way; I know that too. So, don’t step out, because I’m not stopping. Psychologists call this dance ‘theory of mind’, and humans are pretty good at it.


JLR’s concept AI car watches a pedestrian crossing the road. Picture: Jaguar Land Rover

Madmen at the brink

This sort of negotiation appealed to Schelling, who in the 1960s brought his academic expertise in bargaining to the study of nuclear confrontation. The big challenge – how to make the threat of nuclear retaliation credible? If the enemy didn’t believe you were serious about using your nuclear weapons if it came to it, there was no point in having them. But actually using them would inevitably risk a devastating response: mutually assured destruction.

Schelling made an enduring contribution to strategic theory with his idea of the ‘threat that leaves something to chance’. No sane person would choose their own annihilation, but what if they weren’t quite in control of their actions? What if it were rational to appear irrational?

Schelling famously imagined two people roped together on a cliff-top. The way to intimidate the other is to dance wildly close to the edge – almost in control of your jerky movements, but not quite. ‘Brinkmanship,’ he wrote, ‘involves getting onto the slope where one may fall in spite of his own best efforts to save himself, dragging his adversary with him.’

Leaders since have drawn on the idea – Nixon called it his ‘madman’ theory of statecraft, and admittedly did a good job of appearing convincingly erratic.

So, the first lesson for strategy: A powerful system that is not entirely under the control of its owner is one worth respecting. Think twice before you escalate against it.
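For the numerically minded, here is a toy sketch of the logic – the payoffs are illustrative numbers of my own, not anything from Schelling. Even a small, genuinely uncontrolled chance of disaster can be enough to make backing down the adversary’s cheaper option:

```python
# A toy expected-cost sketch of the 'threat that leaves something to chance'.
# All payoffs are illustrative assumptions, not drawn from Schelling.

COST_OF_BACKING_DOWN = 1.0   # adversary's cost of conceding: lost face, lost position
COST_OF_DISASTER = 100.0     # adversary's cost if the confrontation spirals out of control

def expected_cost_of_standing_firm(p_disaster: float) -> float:
    """Adversary's expected cost of refusing to back down, when my partial loss
    of control means disaster strikes with probability p_disaster."""
    return p_disaster * COST_OF_DISASTER

# How much risk must I manufacture before conceding becomes their better bet?
for p in (0.001, 0.005, 0.01, 0.02, 0.05):
    firm = expected_cost_of_standing_firm(p)
    better = "back down" if firm > COST_OF_BACKING_DOWN else "stand firm"
    print(f"p(disaster) = {p:.1%}: standing firm costs {firm:.2f} in expectation -> {better}")
```

With these made-up numbers, a manufactured risk of only a couple of per cent tips the calculation towards conceding – which is why a leader who seems not quite in control can be so intimidating.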

The highway is for gamblers

Schelling subsequently employed another useful analogy for brinkmanship: a game of ‘chicken’, in which ‘two teenage motorists head for each other on a highway – usually late at night, with their gangs and girlfriends looking on – to see which of the two will first swerve aside.’

This, he thought, was ‘a universal form of adversary engagement’. In fact, ‘the more instructive automobile form of the game is the one people play as they crowd each other on the highway, jockey their way through an intersection, or speed up to signal to a pedestrian that he’d better not cross yet.’

Do you see where I’m going with this?

With so much confusion going on, the crowded highway looks ripe for Schelling’s first strategy – a threat that leaves something to chance. Can you be sure I will back down and give way? Perhaps if I drive erratically you’ll be less sure.

So you try to read what the other driver is thinking – is he glaring at you, or signaling with his eyes that it’s safe to cross? Getting your way becomes a nerve-wracking test of mind-reading: What do you intend?

But what if the driver communicates to you that there is no gamble – back down, or you die? Now, Schelling argued, ‘the game virtually disappears’. The very opposite of the threat that leaves something to chance is the threat that leaves nothing to chance.

And so Schelling proposed a bold strategy for teenage delinquents – throw out the steering wheel. This would signal that your next move is no longer a matter of chance: it is entirely set and unstoppable. But of course, you have to let your adversary know, and hope they haven’t done likewise!

The second lesson for strategy: A system that has total automaticity built in looks like the wheel-less hot-rod. It cannot be compelled or deterred, regardless of what its owner intends. So you’d better be sure that your adversary knows the wheel is gone when you take it out.
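The chicken game itself can be put as a toy payoff table – again, the numbers are mine, purely for illustration. Once my move is visibly fixed at ‘straight on’, the adversary’s best reply flips to ‘swerve’; but if both wheels have gone out of the window, both sides are locked into the worst cell:

```python
# A toy payoff matrix for 'chicken'. Payoffs are illustrative: (mine, theirs).
PAYOFFS = {
    ("swerve",   "swerve"):   (0, 0),        # both lose a little face
    ("swerve",   "straight"): (-1, 1),       # I back down, they win
    ("straight", "swerve"):   (1, -1),       # they back down, I win
    ("straight", "straight"): (-100, -100),  # head-on collision
}
MOVES = ("swerve", "straight")

def their_best_reply(my_move: str) -> str:
    """The adversary's best response once they know my move is fixed."""
    return max(MOVES, key=lambda theirs: PAYOFFS[(my_move, theirs)][1])

# Throwing out my steering wheel fixes my move at 'straight' -- and they know it.
print(their_best_reply("straight"))        # -> 'swerve': they concede

# But if they have thrown out their wheel too, neither side can respond,
# and the outcome is simply the bottom-right cell:
print(PAYOFFS[("straight", "straight")])   # -> (-100, -100)
```

Jaguar’s car, as we’ll see, is the mirror image: its move is fixed at ‘swerve’, and everybody knows it.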

The eyes have it

Back to Jaguar’s test facility, where a game of chicken is underway. Their AI car should stop if you step out, regardless of whether you’re at a crossing point. After all, this isn’t a nuclear standoff. But it’s still a game of brinkmanship – dare you step out in front of it, especially on an open road?

What’s going on? The car’s ‘eyes’ have seen you. There is no mind at work here, at least, no human mind. The eyes themselves aren’t ‘seeing’ you – the car has sophisticated radar sensors for that, spinning away on the roof. Instead, they’re there to communicate, to broadcast, not to receive.

And what are they broadcasting? Uncertainty. If there were no eyes, you could still be pretty certain that the car had seen you and would stop safely – after all, it’s an advanced AI, certified for use on public roads. So, why not just step out?

Roads full of AI cars like that probably wouldn’t work. People would step out into oncoming traffic without hesitation, and any remaining human drivers would drive cavalierly, safe in the knowledge that AI cars would take evasive action. Risks would be eliminated, or at least massively reduced. But jaywalking chaos would ensue. The situation would be like the hot-rod chicken standoff, except 180 degrees reversed: you could be near certain that your ‘adversary’ would back down.

So the eyes: they’re there to anthropomorphize the machine, fooling you into assuming an uncertainty that isn’t there. I’ve seen you. We both know it’s my right of way. I don’t plan on stopping for you. Dare you step out? Risky. But without risk, there is no brinkmanship. And brinkmanship sometimes brings stability.

Wargames

In the near future, AI systems will square off on the battlefield, alone, or as part of a human-machine hybrid system. Let’s call one such system a ‘Universal Schelling Machine’, and let’s assume it has the independent ability to escalate the intensity of a confrontation. One reason for that is speed – if both sides can move faster than a human decision-maker, there will be pressure to automate.

What sort of brinkmanship will ensue?

There aren’t any eyes on the USM, but you and I might nonetheless anthropomorphize it, assuming some sort of familiar human intelligence is at work behind the scenes. That would be foolish: the system is certainly weighing the odds, just not like we do. It will theorise about its rival’s decision-making, but not via human ‘theory of mind’.

There’s another difference too. Whilst Jaguar’s car wants you to give way, it will ultimately protect your safety, no matter what. My USM’s primary goal won’t be keeping you safe, but getting what I want – by compelling you, if necessary. If you step in front of it, it might just run you down.

Perhaps I’ll also include instructions not to devastate the planet, but squaring those goals could be difficult: how much escalation is too much?

If I turn the USM on without the ability to intercede afterwards, I’ve effectively ceded all my control to it. It might escalate, it might not: a threat that leaves something to chance.

That’s a terrifying prospect that should give my enemies pause for thought before they goad it. But the fear it induces is only useful in compelling them if they haven’t simultaneously fired up their own USM. It’s not that the two opposing USMs would inevitably escalate and counter-escalate until we reached total devastation. But it’s not clear at which point one would back down and accept defeat. After all, they aren’t intimidated or fearful in the same way we are.

So, an AI-powered confrontation initially looks like Schelling’s first analogy. Something is left to chance, on both sides – especially because we don’t know what’s going on inside the AIs, on either side. Modern AI systems are ‘black boxes’, with little transparency about how exactly they reach particular decisions. That should give everyone pause – a force for stability.

But turn it on and the decision on whether to escalate is not mine. I’ve ceded control, and we’re in the second analogy. Our two USMs are thundering towards each other on the California highway, wheels held aloft, each determined to prevail.

What a strange game, as a Universal Schelling Machine once said. Perhaps it’s better not to play.

Featured image – Sea Hunter, an autonomous US warship, pictured in 2016.
