Rod Thornton is a Senior Lecturer in the Centre for Defence Education Research and Analysis, UK Defence Academy/Defence Studies Department, King’s College London.
I recently attended a UK government-sponsored workshop on Artificial Intelligence (AI). At the start, the convenors asked the participants – from the commercial world, the Third Sector, academia and government agencies – to define AI. This was a difficult task. There was, though, some coalescing around the idea that AI involves machines carrying out tasks with varying degrees of autonomy from human control. This question about defining AI was not simply used as a workshop ice-breaker; definitions are essential as the use of AI increases and as, concomitantly, world-wide debates grow about its use. One of the sharpest of these relates to AI as an enabler in the military realm. And it is the UK that appears to have put itself at the centre of this particular debate. The fact that it has, however, could have negative downstream consequences for the conduct of future operations by the UK’s armed forces.
Among the world’s major states an AI arms race is underway. The US and China, with their huge spending on AI and with their ability to draw on the expertise of indigenous high-tech firms (such as Google, Amazon, Huawei, Tencent, etc), are way ahead of other states in the development of AI systems for their militaries. They understand that to be left behind in such development risks facing not just battlefield disadvantage but also actual strategic defeat. Weapons based on AI have the potential to become even more powerful than nuclear weapons. One Russian source, for instance, sees a future ‘Third World War’ being won ‘within seconds’ by using AI-enabled cyber warfare. As Vladimir Putin has said on several occasions, ‘whoever becomes the leader in this [AI] sphere will become ruler of the world’.
Given that this race is on and that the stakes are high, what role is there for ethical constraints? Going forward, each country with the capacity to deploy AI within its arsenal of weapons of war will have to make some fundamental decisions about how they will be used. Perhaps the most high-profile question is over the use of ‘killer robots’ – machines technically known as Lethal Autonomous Weapon Systems (LAWS). These have been unofficially defined as ‘weapons that can select and attack individual targets without meaningful human control’ (although ‘meaningful’ may be seen as a problematic modifier). The main advantage of LAWS in conflict scenarios is that they offer the ability to deliver ordnance with speed and accuracy but without putting human actors (pilots, tank and ship crews, etc) in harm’s way. The main problem with them is that without any human actor in the decision-making loop their ability to apply either discrimination or proportionality is supposedly limited. LAWS, moreover, will also never perform perfectly. As with any AI system, their algorithms will only ever allow them to act on probabilistic reasoning and not on certainty. LAWS will always, to a degree, be ‘guessing’ when they engage their targets. Thus they cannot, in essence, be trusted in the way that a human actor conscious of the laws of war can be. And if mistakes are made or crimes committed by a human when it comes to targeting – if innocents die – then individuals can be held accountable. If a machine makes a similar ‘mistake’, then who is to be held accountable? The laws of war could be bypassed by AI and no longer have any meaning.
The potential use, thus, of AI in weapons systems has raised controversy, not least in the UK. A recent report by the House of Lords Select Committee on AI recognised that, while UK spending on AI could not match that of the US or China, the UK could still be a world leader in terms of the ethics involved. The report warned: ‘The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence’. It advised that the UK should be acting to ‘lead the international community in AI’s ethical development, rather than passively accept its consequences’. It went on to urge the UK to ‘forge a distinctive role for itself as a pioneer in ethical AI’.
The UK is, it seems, to be a global leader in the governance of the ethical use of AI. In light of this report, one journal article rather pointedly carried the headline: ‘The UK says it can’t lead on AI spending, so will have to lead on AI ethics instead’.
With such sentiments abroad, it is no surprise then that the official UK line is that none of its offensive weapons systems will be capable of attacking targets without some degree of human control. As one Ministry of Defence spokesperson put it: ‘The United Kingdom does not possess fully autonomous weapon systems and has no intention of developing them’. This is a laudable but debatable statement. Surely, the UK will be developing purely defensive AI systems that are fully autonomous – anti-missile missile systems, for instance – where speed of reaction cannot be left to dithering humans. In a broader sense, though, this is a declared restriction that could put UK forces at a serious disadvantage on future battlefields when it comes to the employment of LAWS. It cannot really be imagined that the likes of China and Russia, as they develop their AI systems, will feel limited by ethical sentiment. Their view will be that they cannot afford to be. They both see themselves as weaker militarily than the combined forces of NATO and its partner countries and, as such, have doctrinally declared that they will be seeking out any asymmetric advantage they can. If these Western powers – including the UK – want to self-restrict their use of LAWS, for instance, then this will be seen by Beijing and Moscow as a weakness to be exploited in an asymmetric sense.
There may then come a future scenario where UK force elements, facing adversaries with different ethical standards and free to deploy their ‘killer robots’, would be unable to reciprocate with their own. They could be left exposed; fighting with one arm tied behind their back.
Given its declared position, it might seem logical for the UK to push for an international ban on the use of LAWS. Trying to level the playing field so that no other state possessed them would seemingly work to the UK’s advantage. A ban is also the favoured UN option. UN Secretary General António Guterres has, for instance, described LAWS as ‘morally repugnant’. Within the UN, however, the UK is part of a group of states (alongside Australia, Israel, Russia and the US) that has collectively stated that currently they do not want to see any regulation that forbids the use of LAWS. To explain the UK’s position, an MOD spokesperson said that, ‘We believe a pre-emptive ban is premature as there is still no international agreement on the characteristics of lethal autonomous weapons systems’. We are thus back to the thorny problem of definitions. If we do not know what something is, then how can it be banned?
The question here, though, is why the UK is trying to prevent a ban on a weapon it has ‘no intention’ of developing itself. This does not look very ethical or, indeed, sensible. It seems to be giving licence to potential adversaries to continue with their own development of LAWS while the UK sits on its AI hands.
Whatever the UK’s position, it seems that LAWS will prove impossible to ban anyway. Firstly, because the world’s major states will be seeing the benefits of LAWS, there will probably (and maybe conveniently?) never be an internationally agreed definition of them – and without such a definition no ban can follow. Secondly, the technology that underpins any ‘killer robot’ will come to be developed anyway in the civilian sector – with systems designed, for instance, to deliver parcels or to tackle forest fires. Any military organisation could simply buy such systems off the shelf and convert them readily into LAWS. The genie will thus be out of the bottle on LAWS fairly soon anyway and can never be put back in. It will therefore, and unfortunately, be very hard for the UK to maintain a credible stance as a ‘pioneer in ethical AI’.
Image: a Reaper Remotely Piloted Air System (RPAS) of 39 Squadron Royal Air Force, via Defence Images.
There are three problems with AI:
1. How to define what it must never do.
2. How to get it to explain what it does.
3. How to test it.
Answering these questions is less a matter of money than of intelligence.