Artificial Intelligence in the Integrated Review

Dr Kenneth Payne, Defence Studies Department

Note from the Editor of Defence-in-Depth: Dr Kenneth Payne’s piece is the first in a series presenting the evidence given by members of staff of the Defence Studies Department of King’s College London to the UK government’s Integrated Review process. The Review issued a call for evidence in the fields of Security, Defence, Development and Foreign Policy to help the government define its vision for the UK’s role in the world over the next decade.

Artificial Intelligence is poised to make a significant contribution to many aspects of defence and national security. As it conducts the Integrated Review, the government will need to be cognisant of some of the following challenges:

Industry

  1. The UK is a genuine world leader in basic AI research and has been for many decades. But considerable expertise resides outside the defence-industrial base, and many AI researchers are reluctant to see their expertise used for defence.
  2. Moreover, the practical application of research has lagged behind pure research – partly a product of the disconnect between university research and industry, and partly of weaknesses in UK management. There is a significant UK high-tech sector; but there are no UK tech giants, and, more broadly, British productivity is notoriously weak.
  3. AI research is characterised by a highly skilled, highly mobile multinational workforce, and by international capital flows. Britain has been an attractive environment for both, but there are consequent vulnerabilities: British universities have been slipping in global league tables; Brexit has increased uncertainty for those seeking work in the UK research sector; and COVID-19 has further compounded that uncertainty.
  4. The government will want to foster an environment that is congenial to advanced AI research. This includes attracting researchers to our university sector, and attracting inward investment, especially from the US technology giants. But all this raises concerns about foreign ownership of strategic assets, industrial espionage and brain-drains.
  5. The US model features extensive Pentagon funding of basic and applied research via the Defense Advanced Research Projects Agency (DARPA) and other agencies. Recently that highly successful model has evolved with the emergence of fabulously wealthy internet corporations, eager to use AI for commercial gain. Many research breakthroughs of recent years have resulted from corporate research, not government. Though there are similarities, neither of these models translates directly to the UK context, partly because we lack scale and a similar culture of university-defence contracting.
  6. My recommendation: establish DARPA UK, but with an explicit defence focus; and not as an instrument for basic AI research, of which there is already plenty, ably conducted through existing channels. Strapping a solicitation/competition process onto the Defence Science and Technology Laboratory (DSTL) would be a straightforward way to achieve this.
  7. My recommendation: the government should support, protect or take an ownership stake in key research activities, especially where these are excessively risky for the private sector. State aid might include investments in biotechnology, space systems and quantum computing.
  8. My recommendation: Tax offshore technology companies more equitably. AI-enabled corporations are earning huge profits from UK consumers while engaging in considerable tax avoidance. The same companies sometimes bid for government AI and IT contracts. As more economic activity moves online, the government will need to extract more revenue from these few, large providers. It may also need to regulate to encourage a diversity of actors. National security ultimately depends on economic security.
  9. My recommendation: Increase direct funding for university education in computer science, but also in applied social sciences. The latter is important, since the application of AI technologies necessarily involves consideration of their social impact – as recent controversies on A-levels and facial recognition demonstrate. Expand, in particular, post-graduate and post-doctoral funding for UK computer science students. We are presently training a generation of Chinese researchers and must, at minimum, ensure that we do likewise for British nationals.

Military Organisation

  1. Hierarchy and tradition have proven military value, but also foster cultures that are resistant to rapid change, including changes in technologies and concepts. In some respects the military is eagerly embracing AI (as with Professional Military Education, the jHub, and the Navy’s Project Nelson). But elsewhere, especially in procurement, long lead times and conservative mindsets are impeding progress. Contrast the ambition of the scheme to build a new fighter aircraft, Project Tempest (a conservative design, including a cockpit), with the RAF’s very small-scale experimental swarming squadron. Moreover, the various projects are disjointed: there is presently limited understanding of where expertise and interest lie within the officer corps.
  2. AI will create systems that operate across domains, demanding greater ‘jointery’ in military activity. For example, is tactical airspace featuring loitering munitions and long-range hypersonic missiles the business of the Army or the RAF? AI will push joint activity to progressively lower levels.  It will also blur the distinction between teeth and support arms, as uninhabited systems assume a greater role in both.
  3. AI may demand different combinations of combined arms and favour different performance attributes.  For example, stealth and armour may become less important for uninhabited and disposable platforms. Conversely, speed, manoeuvrability and the capacity to saturate through firepower may become more valuable.
  4. Similarly, the existing balance between quantity and quality of platforms may shift. If AI decision-making is the decisive capability in uninhabited platforms, this will absorb proportionately more development effort. But it is relatively trivial to replicate and update this across platforms, and the bottleneck for recruiting, training and protecting humans is removed.  Expect to see disposable platforms, and more emphasis on saturation than survivability.
  5. The cost of developing exquisite technologies has driven the trend for multi-role platforms like the F-35 and Type 31 Frigate. AI may see a rebalancing towards specialised (and cheaper) systems.
  6. These attributes necessitate new conceptual thinking – for example about using swarms, or just-in-time assembly of force packages (per the Chinese PLA’s ‘battlefield intelligentization’ concept, and the US DoD’s ‘Mosaic warfare’ concept). The UK lags both China and the US in this conceptual thinking.    
  7. The lead time for developing military systems is incompatible with the pace of change in AI technology. Exquisite technologies have driven defence inflation and extended development times. Changing requirements during this period have further extended cost and timescales. All this has led to reduced quantities. This model is not fit for purpose.
  8. My recommendation: Create a centralised Office for Defence AI, perhaps within UK Strategic Command, to consider the strategic and operational implications of AI and to give direction to the wider UK defence establishment, including procurement. This office should have sufficient (perhaps 3*) clout to drive change throughout the establishment. AI technologies will challenge traditional service shibboleths – e.g. for inhabited platforms and for concentrating fighting power in a few, often large and slow, platforms. That challenge requires senior and coordinated support.
  9. My recommendation: Create a joint technologist stream within and between the three armed services, intelligence agencies and civil service, with personnel specialised in decision-making technologies of the sort that will shortly be employed. Recruit laterally, including via multiple re-entry. Ingest recruits via non-traditional routes. Age and physical fitness are important attributes for infantry, less so for technologists. Ditto the need to wear a uniform. Paying more for fewer people with the appropriate skills is preferable to extensive recruitment of poorly suited personnel. This technologist stream should offer an elite career path, not a specialist ghetto. Sponsor students through university for direct entry into this technologist stream.
  10. My recommendation: Expand the employment of synthetic environments like Project Improbable to experiment on concepts and platforms. In particular, sponsor the research into ‘meta-AI’ systems that design weapon-systems.
  11. My recommendation: Prepare leaders for command of autonomous weapon systems, via existing Initial Officer Training and Professional Military Education establishments. All commanders should be AI literate as a generalist skill.

Geopolitics

  1. AI technologies will have an impact on fighting power, whether employed in Intelligence Surveillance and Reconnaissance, as Lethal Autonomous Weapons, or even to sort the mail more efficiently. Military powers will succeed to varying degrees in instrumentalising AI, with implications for the existing international balance of power.
  2. Some traditional military powers will struggle to develop cutting-edge AI systems. Examples here are Russia and India. Both lack the scale of advanced AI research, have sclerotic bureaucracies and strained budgets. Lagging powers will likely focus on asymmetric offsets for AI – perhaps prioritising nuclear deterrence and unconventional warfare. They will also field dubious AI systems, like the supposedly stealthy Sukhoi S-70, in an effort to project AI competence.
  3. China is a leading state in instrumentalising AI research for security. Its use of AI technologies for internal security provides a vivid illustration of the need to think carefully about robust institutional safeguards in the UK. However, China has been derivative in military innovation, and its weapons platforms qualitatively lag western equivalents. Similarly, it produces abundant AI research and engages in industrial espionage against rivals, but again its research outputs qualitatively lag those of the US and UK. Among other challenges, China lacks comparably robust intellectual property laws and suffers from extensive corruption, including in research. Its model for scientific innovation is radically different from that of the US and UK, and as yet its ability to generate cutting-edge breakthroughs is unproven. While its quality likely lags, however, the shift towards quantity of platforms and saturation attack favours China.
  4. China’s other big advantage may lie with ‘augmented intelligence’, blending ‘classical’ AI and biotechnologies. The UK has an advanced biotechnology sector, but China has a more permissive approach to ethics in genetic research. Augmented biological intelligence promises enhanced cognition and health benefits but is very high risk. The use of AI in biotechnology also has the potential to create potent new bioweapons; and, again, the ethical constraints that obtain in the UK but not in China work to the latter’s advantage.
  5. Uncertainty over the relative performance of AI systems has combined with a widespread suspicion that they are ‘offence-dominant’ to create a potent security dilemma and the prospect of AI arms-racing. The UK is well placed for this race, but to date has displayed insufficient ambition. Taranis is now a decade old. Why was there no UK entry in the DARPA dogfighting challenge? Small-scale experimental work and off-the-shelf procurement are insufficient.
  6. Alliance relations, particularly with the US, will be challenged by varied abilities in AI. The US has scale and budgets to outpace everyone, including the UK. Shared projects, like MoD participation in Project Maven, offer a partial solution; but the UK must also prioritise indigenous AI systems in its acquisition programme.
  7. My recommendation: Deepen defence cooperation with allies, particularly those with limited indigenous AI research capabilities. Some legacy military systems will be of limited utility for AI-enabled forces, so alliance effectiveness depends on effective technology transfer. In particular, the UK should partner extensively with European autonomy projects, like Dassault’s nEUROn UCAV. Ensuring that close allies like Denmark, the Netherlands and France have the ability to operate AI alongside the UK is imperative.
  8. My recommendation: Refocus intelligence activities away from low-consequence terrorist activities towards collection on scientific and industrial research in near-peer rivals.
  9. My recommendation: Rebalance away from legacy systems, accepting some risk. Carrier strike is a sunk cost; so too the attendant surface combatants and the Army’s medium weight Strike brigade concept. But the UK needs to fund hypersonic missile research, swarming technologies, loyal wingmen, and uninhabited attack submarines, inter alia. Mothballing the Prince of Wales, Type 45 and Challenger 2 may be necessary, so too further reducing infantry manpower.
  10. My recommendation: SSBNs may be vulnerable to AI-enabled search, for example via distributed hydrophone arrays. The UK should consider re-establishing another leg of the deterrent – likely via stand-off air- or ground-launched missiles.

Regulation

  1. The UK has committed not to develop fully autonomous lethal weapons. This is unsustainable: swarming technologies will soon enable swarms too large for fine-grained human control. Other technologies, including hypersonic missiles, highly manoeuvrable unmanned aircraft and long-endurance submersibles, push the human progressively further from the loop. Pressures of time and degraded communication work against ‘meaningful human control’ in highly contested environments. Near-peer AI combat need not look like a Reaper on station in Syria.
  2. Agreeing what constitutes an AI ‘weapon’ will confound international efforts to regulate AI in military activity. Autonomy is already a feature of many military activities, especially in ISR, but also in active defence. Even agreeing what constitutes AI is problematic – it being more a philosophy than a discrete technology.
  3. AI is a dual-use technology, useful for both civilian and military applications, which further complicates efforts to reach international agreement on regulation. So too does the ease of defection from any arms control regime.
  4. My recommendation: Prepare guidelines for the soon-to-arrive moment when lethal weapons will necessarily be operating without direct ‘on the loop’ human control.
  5. My recommendation: These guidelines should emphasise ex ante control – ‘human-before-the-loop’ – via accurate coding of commander’s intent. Specifying instructions for AI systems that adequately capture our intent is a priority. Research on safe and explainable AI is imperative, especially in systems with the ability to initiate or escalate combat. 
  6. My recommendation: Empower the Information Commissioner with appropriate regulatory powers and resources to scrutinise the collection and use of AI technologies in British society – for example, biometrics, facial recognition, airborne surveillance and predictive policing technologies, health technologies, and even the recent exam results algorithm. The public must have confidence that there are robust institutional checks in place as AI technologies become more prevalent throughout society, including in national security.
  7. My recommendation: Routinely engage with opponents of Lethal Autonomous Weapons, especially from within the AI research community. Establish an AI ethics panel or forum to facilitate this. Fruitful exchange of ideas can help develop systems that mitigate risk, including the risk of ‘normal accidents’ inherent in complex systems, and unanticipated departures from our intent.

Further reading:

  • Kareem Ayoub & Kenneth Payne (2016) ‘Strategy in the Age of Artificial Intelligence’, Journal of Strategic Studies, 39:5-6, 793-819, DOI: 10.1080/01402390.2015.1088838
  • Kenneth Payne, Strategy, Evolution and War: From Apes to Artificial Intelligence (Georgetown University Press, 2018)
  • Kenneth Payne, ‘Artificial Intelligence: A Revolution in Strategic Affairs?’ Survival 60, no. 5 (2018): 7-32. DOI: 10.1080/00396338.2018.1518374
  • Kenneth Payne, I, Warbot: The Dawn of Artificially Intelligent Warfare (London: Hurst, forthcoming 2021)
