Strategy, creativity and artificial intelligence

DR. KENNETH PAYNE

I’m working on a book about evolution and strategy. It ends with me wondering whether Artificially Intelligent (AI) machines will be good strategists. My BLUF – yes, but not yet, nor any time soon.

Here I’m going to rehearse some of that argument. I identify two big issues in AI that bear on its strategic impact – creativity and motivation. This post is on creativity, the next (coming out next week) on motivation. The two are related: creativity is a powerful way of realizing our goals, especially in complex, uncertain and fast-changing environments. AI will need creativity to tackle … well, what, exactly? That’s the subject of my second post – what do machines want?

Creativity

Strategy is a creative process, and at present the ability of AI to perform strategic activities is severely limited by its inability to think creatively.

But what is creativity? I argue that it requires imagination. This is the ability to make unexpected connections; to intuit meanings that are not immediately obvious, partly based on experience. AlphaGo seemingly managed something like that in its recent match against the human world champion. But was it really creative? I say no: its ‘intuition’ was illusory – merely anthropomorphism on our part, as we observe behaviours we did not expect and cannot readily explain, and project some plausible human cognitive process onto them.

Humans are creative as a result of the interactions of their conscious and unconscious minds. To be sure, the unconscious inner workings of the mind play an important part in creativity, as artists of all stripes readily testify. Robert Louis Stevenson argued strikingly that his inner mind was the hub of all his creativity, telling him his stories while he slept.

But that’s not enough. To be really creative, we need to be conscious and self-reflective, grounding our unbridled imagination within a framework of meaning. So consciousness serves an important creative purpose in humans. Creativity is not just random connections – ideas have meaning, in a given context. The capacity to reflect on those meanings typically requires conscious thinking, even allowing that instinct is important. There’s a strategic rationale for consciousness: some evolutionary theorists suggest that human consciousness evolved and deepened as a response to our increasing sociability, and to our efforts to gauge what our peers were up to.

AlphaGo was making its choices on an entirely different basis. It was engaged in complex information processing that weighed probabilities and recalled previous games to produce a move unexpected by onlooking humans. In a two-step process, it combined a reinforcement learning algorithm that had ‘trained’ by playing itself at Go many times over with a probabilistic algorithm that narrowed its many possible options to those most plausibly useful, thereby reducing the vast number of possible moves to something more computable. That’s not creative, in the human sense – it’s calculating. The machine lacked that human ability to reflect on the meaning of its move. In Go, that was no big problem – the universe of Go, while containing unfathomable complexity in terms of combinations of possible moves, is pretty simple in terms of types of move. That’s a structured universe that lends itself to computation. The moves favoured by the machine were those most likely to succeed, indeed those that had succeeded before.
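To make that two-step structure concrete, here is a toy sketch of the general idea – a learned policy prior narrowing a vast set of legal moves before a learned value estimate chooses among the survivors. It is my own illustration, not DeepMind’s code: the function names are invented, and random scores stand in for the trained networks.

```python
# Toy sketch of the two-step idea described above (not DeepMind's AlphaGo code).
# A 'policy prior' narrows the huge space of legal moves to a few candidates,
# then a 'value estimate' picks the candidate judged most likely to win.
import random

def policy_prior(board, legal_moves):
    """Hypothetical stand-in for a trained policy network: score each legal move."""
    return {move: random.random() for move in legal_moves}

def value_estimate(board, move):
    """Hypothetical stand-in for a trained value network: estimate the chance of winning."""
    return random.random()

def choose_move(board, legal_moves, top_k=5):
    # Step 1: reduce the vast number of possible moves to something more computable.
    priors = policy_prior(board, legal_moves)
    candidates = sorted(legal_moves, key=priors.get, reverse=True)[:top_k]
    # Step 2: evaluate only those candidates and play the one most likely to succeed.
    return max(candidates, key=lambda m: value_estimate(board, m))

if __name__ == "__main__":
    board = None  # placeholder for a real Go position
    legal_moves = [(x, y) for x in range(19) for y in range(19)]  # 361 points on an empty board
    print("chosen move:", choose_move(board, legal_moves))
```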

DeepMind’s computer made some radical moves (by human standards), and the reasons are hard for humans to intuit. The temptation to anthropomorphise is strong. But there’s no need. The beautiful simplicity of Go constrains the ability of the artist to be creative, but it’s perfect for a devastatingly efficient machine learning algorithm.

Sceptics have long argued that AIs shuffle symbols or tokens that represent reality, without actually understanding what they mean. This is the basis of John Searle’s famed ‘Chinese Room’ thought experiment, in which a machine translates to and from Chinese via protocols, without any innate understanding of what all these symbols mean. This lack of native understanding would certainly limit an AI’s capacity to take over the universe, and even to perform many useful functions on our behalf. Without grasping what things mean, AIs are just hugely sophisticated data processors.
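Searle’s point is easier to see in miniature. The sketch below is my own toy illustration, not Searle’s: a program that ‘translates’ by following a lookup protocol over symbols it has no grasp of (the phrasebook entries are invented for the example).

```python
# A toy 'Chinese Room': translation by pure rule-following over tokens.
# The program manipulates symbols according to a protocol, with no understanding
# of what any of them mean.
PHRASEBOOK = {
    "你好": "hello",
    "世界": "world",
    "谢谢": "thank you",
}

def translate(tokens):
    # Apply the protocol: look each symbol up and emit its counterpart.
    return [PHRASEBOOK.get(token, "?") for token in tokens]

print(translate(["你好", "世界"]))  # ['hello', 'world'] - produced with zero comprehension
```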

Not all AIs work by shuffling tokens in that ‘cognitivist’ fashion, where reality is represented symbolically, so that cognition is merely the processing of symbols. Recent breakthroughs in AI like AlphaGo use an alternative approach modelled closely on our biological cognition. Our own minds are the product of complex, typically recursive networks of neurons. Human intelligence emerges through the development of these networks, via evolutionary inheritance and then experience. AIs employing Artificial Neural Networks work similarly, although in a rather coarser, more simplified fashion than the billions of neurons and trillions of connections at our disposal. Like us, they work not by blindly processing symbolic representations of reality, but by learning which connections produce the optimal response. Machine intelligence emerges from this evolution of network connections. Perhaps such ‘emergent’ approaches allow the machine to ‘understand’ like we do – getting round Searle’s objection?
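For readers unfamiliar with the mechanism, here is a minimal sketch of what ‘learning which connections produce the optimal response’ looks like at its very simplest: a single artificial neuron adjusting its connection weights until it reproduces the logical AND function. It is a toy illustration of the principle, not the architecture AlphaGo uses.

```python
# A single artificial neuron learning the logical AND function by adjusting
# its connection weights - the simplest possible illustration of 'learning
# which connections produce the optimal response'.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):
    for (x1, x2), target in examples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output
        # Strengthen or weaken each connection in proportion to its part in the error.
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

print("learned weights:", weights, "bias:", bias)
for (x1, x2), target in examples:
    prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", prediction)  # matches the AND truth table after training
```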

No. While we also learn that way at the neuronal level, rather more is involved at the functional level, thanks to the subjective richness of those associations in our conscious mind. The inner workings of the human mind – the neuronal networks that generate the content of our mental images – are not available to introspection; we experience only their subjective outcome, at a higher functional level. It is there, in our conscious mind, that things really ‘mean’ something. And it is there, too, that creativity is fully manifest – as our experience is ceaselessly edited and manipulated into new combinations.

That emergent, first-person experience is what we understand by ‘meaning’. It’s rather more than the simple output of the neural network: it’s a phase transition, qualitatively different. Moreover, one where the emergent property, our mind, feeds back recursively into the network, so that cognition is always dynamic, recursive and multi-causal.

Artificial General Intelligence and creativity

AI cannot do this at the moment, but I reckon it will eventually. Such an AI will be rightly called an Artificial General Intelligence (AGI). Maybe all that’s lacking is sufficient complexity in the system. The idea that consciousness emerges from systemic complexity and integration comes from Giulio Tononi; others like Christof Koch take a similar view – consciousness per se is independent of the ‘platform’ on which the information processing happens.

Many AI experts disagree – for them there is something fundamentally different about human cognition and consciousness. Some, like Antonio Damasio, argue that there is something particular about the biology of the brain that generates consciousness – the material is at least as important as the function, and can’t be modelled in silicon. Fine; but then, what happens when biological material is used to construct artificial neural connections – couldn’t consciousness emerge then? Functionalists like me don’t think Damasio’s biological argument is right, but even if it is, biotechnology means it’s not the end of the road for strong, human-like AI; just that it would lie some way distant, given current capabilities.

Whoever is right, it’s surely the case that when AI cognition achieves that phase change, its consciousness will involve a qualitatively different sensation from that of humans – the subjective experience of being that AI will be radically different. It will be the product of a different type of agent, shaped by different environmental pressures. Its phylogeny and development will differ from ours, and so its conscious experience will be different too. That thought gets us towards part 2 – motivation.

It is possible that AI may develop into an AGI without aping the human brain’s twin-track architecture: its distributed, parallel unconscious processing, and narrowband consciousness, always editing and interpreting in a rich web of meaning. Instead, the AGI’s ‘creativity’ may be a souped-up version of AlphaGo’s sophisticated calculation – one that is able to model and train in a less rigid, deterministic universe than the Go board. That, however, would require a new approach to AGI, since the huge complexities and ambiguities of the real world are not readily solvable by clever calculation of probabilities combined with reinforcement learning.

AGI and strategy

How does this discussion on artificial creativity affect strategic studies?

Strategy is dynamic, interactive and hugely uncertain. It requires the ability to reflect on past experience and project into the future. The possible combinations are not computable by brute-force logic. DeepMind recently demonstrated its algorithm navigating a maze, like a rat in search of rewards, and the achievement is phenomenal – not least because the AI is coping with the problem of delayed rewards. But pseudo-rat-level intelligence is still a very long way from an AGI that can master strategic interactions between competing groups of humans.
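The delayed-reward problem that maze work grapples with can be shown with a far simpler toy: tabular Q-learning on a one-dimensional corridor where the only reward sits at the far end. This is my own minimal sketch of the general technique, not DeepMind’s agent.

```python
# Tabular Q-learning on a corridor with a delayed reward (a toy illustration,
# not DeepMind's maze-navigation agent). The agent gets no feedback at all
# until it reaches the final state, yet still learns to head straight there.
import random

N_STATES, GOAL = 6, 5          # states 0..5; the only reward sits at state 5
ACTIONS = [-1, +1]             # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

for episode in range(300):
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS)                # explore at random
        s_next = min(max(s + a, 0), GOAL)
        reward = 1.0 if s_next == GOAL else 0.0   # nothing until the very end
        # The discounted backup gradually propagates the distant reward to earlier states.
        q[(s, a)] += alpha * (reward + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

# After training, the greedy policy steps right (+1) at every state, despite the delay.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
```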

Far more likely, as Kareem Ayoub and I argue in the Journal of Strategic Studies, the military impact of AI will be felt in the more constrained universe of tactical, domain-specific AI – AI that can handle logistics, make sense of vast datasets, or even master some control problems, like flying and driving.

 

Image: representation of AI, courtesy of Wikimedia Commons.

2 thoughts on “Strategy, creativity and artificial intelligence”

  1. Reblogged this on JDB Communications, LLC and commented:
    Whenever someone…anyone…says “artificial intelligence,” the first thoughts are of Arnold look-alikes or faceless machines with blinking lights and endlessly rewinding tapes. These machines have witless designs to annihilate mankind to ensure their own survival. But frankly I have trouble imagining that any “artificial intelligence” could possibly survive the realization that its existence is due entirely to those very same messy humans that made it. So, military strategy? It has to first imagine what purpose it would serve.


  2. Consciousness in AI seems superfluous, especially in a military context. Brain-damaged people with conditions such as blindsight successfully manipulate the environment without it. There’s a strong argument for consciousness as the result of *incomplete* self-reflectivity.

    Improved versions of existing unconscious tools – genetic algorithms and RNNs given a search space – produce successes novel to unaugmented human designers. They can maximize for probability of victory instead of margin of victory, or incorporate external EM noise to perform functions on a superficially nonfunctional circuit board.

    Alternately – in the transhumanist vision – self-reflective intelligence rapidly scales past human to vastly superhuman general intelligence. In that case, it would seem foolish to add emotional capacity and moral status to a tool with precision-WMD-equivalent capabilities.

