Consciousness and strategy: What will AI want?

DR KENNETH PAYNE

This is the second of a two-parter on AI and strategy – the first, dealing with creativity, is here. In this part, I reflect on the goals that strategy serves. Are the machines coming to take over and destroy human life as we know it? Or will they be our faithful servants?

Much hangs on the question of motivation. That’s a large topic, and here I want to focus on one aspect – conscious reflection on goals. For humans, consciousness is both the result of our core biological motivations and a way of mediating them: deciding what to do. Would an AI develop something like consciousness, and so be able to reason about the world and its place within it? If so, might it generate its own goals?

Biological motivations beget conscious humans

Our ultimate motivation as humans is to remain alive at least until we can pass along our genes. We are embodied agents, and our cognition is inherently linked to our bodily processes. All animals have that instinct to survive and reproduce – it’s the basis of natural selection. But no other animal has developed human-level consciousness, which gives us the ability to reflect on our desires, and to strategise ways of attaining them.

Our consciousness is a biological adaptation that evidently confers important advantages. Consciousness, Nicholas Humphrey suggests, evolved in humans from our mind’s embodied relationship with the environment – a way of monitoring those interactions, and the mind’s own responses to them, in a recursive manner. It gives us the ability to focus attention, and to edit and integrate complex mental imagery into new and useful combinations. Consciousness, argues Stanislas Dehaene, is a ‘global workspace’, something like a clipboard on a computer.

Among other benefits, consciousness allows us to track our social interactions and reflect on what others intend; it allows us to generate and (via language) share abstract meanings; and it allows us to think about time, using experience, memory and creativity to imagine possible futures. Consciousness, language and abstract reasoning are interwoven features of a cognitive package that permits a rich human culture.

All animals share the basic survival motivation, and many may have similar ways of mapping their bodies’ interaction with the world around them – including their social world – but none have developed the same degree of self-awareness as us. I suspect that our intense sociability is the reason. If survival is our ultimate motivation, the search for meaning – an understanding of our place in the world – may, as Viktor Frankl suggests, be the main proximate motivation shaping human behaviour. Without a deep understanding of the social world, we are vulnerable. Part of that understanding happens instinctively and unconsciously, as with empathy, but much of it entails conscious reflection and exchange. If consciousness evolved to map our place in the world, human consciousness extends that mapping to a rather abstract level.

Artificial consciousness

Would an AGI share that core animalistic motivation to survive? If so, perhaps something like evolutionary pressure would force it to iteratively refine and improve its solutions to environmental challenges. Ultimately, that could result in a conscious AI. It could, but I am sceptical that such a consciousness would look much like our own.

In humans, cognitive function follows biological form. One approach to developing AGI proceeds in the opposite direction – with form following function. We start with the human-like functions we wish to model, and work out what architectures can deliver them. AGI that is modelled on human cognitive functions may exhibit broad parallels with human cognition – a synthetic emotional ‘module’; another to model episodic memories; perhaps even a deliberative ‘clipboard’ or workspace model to ape consciousness. There are attempts to do precisely this – but they are not particularly convincing, because both the broad functions (emotions, episodic memory) and their particular content are responses to distinctively human problems. An AGI assembled from cognitive models would be like Frankenstein’s monster – a kit of parts, rather than an evolved response to selection pressures.

A more promising approach allows the AI to develop autonomous solutions to its own environmental challenges, so that its motivations follow in a more naturalistic fashion from those challenges. This ‘emergent’ AGI need not be seeking to pass along ‘genes’ or any synthetic proxy (after all, it would not be subject to the same perishability as a biological life-form). But it would be seeking, at a minimum, to perform some function – and that mission itself would require it to preserve its autonomy. In that case, the AI would need to observe its relationship to the surrounding environment and respond to those interactions. A complex, recursive relationship between agent and environment would ensue. If consciousness emerged in humans from a similar sort of process, an autonomous artificial agent might produce something ostensibly similar.

Perhaps, but I would venture that any emergent consciousness would be radically different, because its physical manifestation would be so very different.

Philosophers are apt to ask what it is like to be a bat – very different from being a human, no doubt – but it’s a good bet that it feels like something. Recent neuroscientific work suggests a limited form of consciousness in other creatures, including lowly insects – again as a way of mapping their relationship with their environment. The contention is that it feels like something to be a bee. Bees have a cognitive architecture similar to ours, albeit with greatly reduced neuronal complexity. They are made of similar material, subject to the same biological laws of entropy. We cannot share their subjective experience, of course – and it’s a fair bet that bees and bats aren’t discussing Kierkegaard – but the evolutionary rationale for consciousness as a way of focussing awareness on feelings and emotions is striking, as are their behavioural responses to environmental stimuli. They possess motive, means and method to develop consciousness – all highly suggestive of some form of animal consciousness.

The contrast with an emergent AGI is stark. There is no inevitable mortality like that facing a biological body, and no genetic selection to underpin evolution. As a result, there’s nothing a priori to suggest that the AI will use similar cognitive mechanisms to humans or other animals. Its hugely parallel processing might not need to focus narrowband attention in the way that our consciousness, or a bee’s proto-consciousness, does – it can handle urgent tasks and less important ones alike without fatal loss of performance, because its cognitive architecture is so radically different. And it might not need emotions to tag and prioritise mental imagery for attention. The prioritisation rationale for emotions and consciousness doesn’t stack up for a massively parallel AGI, even if it needs to orient itself in response to the environment.

So it might not feel like anything to be an emergent AGI; and it might not need to focus a single beam of attention on priority messages from the myriad simultaneous processes of cognition. The AGI, in sum, may seek to preserve its agency and to interact with other agents in ways that are radically different from animals.

And yet something like consciousness may still emerge – perhaps as a means of rehearsing strategies, of framing intuitive connections in ways that have contextual meaning, or of providing some executive capacity to coordinate responses. As Tononi argues, there is nothing per se to preclude the development of consciousness in machines – consciousness is simply an emergent function of matter and of how that matter is organised.

Nonetheless, I would expect human consciousness and subjective experience to be considerably closer to those of bees and bats than to those of an emergent AGI housed in a distributed network of hardware and capable of lightning-fast parallel processing. An AGI like that might generate consciousness, but not necessarily. It may be parallel but sufficiently integrated that we can talk of an ‘agent’, without being synchronised to the extent that a phase transition results in subjective awareness. In that case, it would not feel like anything to be such an AI, and its motivations would differ correspondingly. For example, while it might operate socially, its reproductive chances would not hinge on its social status, so it would not feel obliged to jealously defend its prestige, nor to instinctively favour kin over non-kin.

Without consciousness, the machine would be an unreflective executor of whatever goals it was tasked with. Were consciousness to develop, we might expect the machine to reflect on the big question – why? But the ways in which the machine reflected and the conclusions it reached would be very different indeed from our human, embodied reflections and our timeless search for meaning.

An ominous sense of impending doom attends discussion of super-intelligent AGI pursuing ends inimical to humankind. There is a particularly well-worn sci-fi trope featuring malign AI that understands humans but does not care about them – this is AGI as psychopath, amoral and unempathetic. 2001’s HAL and Ava from Ex_Machina fit the bill nicely: they are intelligent enough to understand human motivation, and then to manipulate humans by exploiting that knowledge.

It’s not clear what’s driving those fictitious AIs – curiosity? Selfishness? Perhaps even a similar urge to survive, like the short-lived replicants of Blade Runner? But what they all have in common is that they are human artistic inventions – anthropomorphic projections of our own imagination, imbued with a human sort of consciousness.

I doubt real AGI will be like that. Our cognition is shaped by our bodies, with their biological constraints and emotional wiring. Our motivations originate in that biology – even our higher-order reasoning, with its incessant search for meaning, its curiosity and imagination.

There are dangers here for humans – but not necessarily from a passionate, self-aware intelligence relentlessly pursuing its own agenda, even where that clashes with ours. If anything, the danger is of an unemotional lack of understanding and empathy as AGIs execute their allotted tasks, because the machines do not employ emotion as a way of organising their cognition.

Strategic AGI

What’s the relevance of all this for strategic studies scholars and practitioners? We are soon to enter a world with multiple artificial agents, flexibly pursuing varied and sometimes antagonistic ends – on our behalf initially and maybe one day on their own. Google’s DeepMind aims to deliver Artificial General Intelligence in relatively short order. It’s not entirely clear what that would mean in practice – but broadly speaking, such an AGI ought to be able to tackle a variety of cognitive challenges in a reasonably complex environment. That does not imply consciousness, or intrinsic motivation.

Image: Matrix wallpaper, via flickr.

 
