UK Defence’s Artificial Intelligence (AI) Landscape: Is More Effort Required?

STUART MACCRIMMON

Stuart is a British Royal Marines officer studying for a Masters by Research degree while attending the UK Defence Academy’s Advanced Command and Staff Course. His research has examined how human-machine teaming can deliver operational success, focussing on coherent approaches to technology, people and processes. This, the first in a series of three blogs, questions whether UK Defence should apply more effort to creating an AI strategy. Follow-on blogs consider Defence’s future workforce needs when working with human-machine teams, and future command and control decision making.

Why is Human-Machine Teaming Important and What are Other Countries Doing?

Vladimir Putin’s September 2017 comments that the country dominating the AI sphere will rule the world are now commonly known. AI’s promise is widely publicised, researched and welcomed in areas including transport, economics, medicine, environmental protection and industrial development. Many consider that countries strategically implementing effective AI capabilities will grow faster, have cities that are more efficient, run businesses that map consumer needs quicker, house citizens who live longer, and maintain a military which projects power more effectively. In this ubiquitous space of enabling technological development, it is difficult to find areas where AI’s boundless potential could not drive advancement. So how does UK Defence fare against other leading powers?

Russia’s 2018 ten-point AI plan has Defence at its heart, its Defence Ministry deeply integrated in strategy development alongside its Departments for Education and Science and its Sciences Academy. This includes forming an AI and big data consortium and creating a state system for AI training and education. Russia has also fielded lethal autonomous weapon systems (LAWS) on operations in Syria with its Uran-9 system, albeit with results not necessarily being as expected. China is also advancing, its industry expecting LAWS to be commonplace by 2025 and its Ministry of National Defence intending to ‘leap-frog’ other global powers by leveraging AI. Similar to Russia, China’s 2017 New Generation AI Development Plan has Defence in lock-step with economic and social innovation plans, with the aim of making China the world’s primary AI innovation centre by 2030. The President of the United States launched America’s AI Initiative in February 2019, and the Department of Defense’s (DoD’s) AI Strategy was released the following day. The national initiative called for strong cohesion with allies and partners and acknowledged that experimentation will likely bring the greatest gains through data sharing, frameworks, standards and cloud services. It additionally put a 3*-led Joint Artificial Intelligence Center (JAIC) at the focal point for DoD AI strategy delivery. Its roles include accelerating military AI implementation, synchronising DoD AI activities and building a world-class AI team. The strategy meshes with the 2012 DoD Directive 3000.09, which stipulates autonomous and semi-autonomous weapons must be used with “appropriate human judgment”, yet it also calls for a worldwide set of military AI standards to which signatories comply.

What about the UK?

The UK is also embracing AI, the British Prime Minister using the 2018 World Economic Forum to announce the country was to be “a world leader in AI, building on the success of British companies like DeepMind.” The AI Sector Deal followed in March 2018, yet it contained no mention of Defence. This appeared at odds with the UK MOD’s most recent strategic trends publication, which considered that countries integrating human-machine teams (HMTs) most effectively may achieve decisive advantage at or before 2050, plausibly changing the character and nature of war. The Sector Deal’s core focus is rather to harness AI to further the UK’s industrial strategy and economy, its development pillars being ideas, people, infrastructure, business environment and places. Was any inference to Defence omitted accidentally? Or was the UK trying to do too much, making the Sector Deal all things to all government departments?

The recent government guide to using artificial intelligence in the public sector gives no further direction to Defence on AI implementation strategies. It helpfully explains fundamentals including machine learning techniques and applications, asks important questions about data readiness, and considers what specialisations should be included in a multi-disciplinary team. However, one could argue it misses the point for internal Defence users, given there is little to no current uniformed expertise or experience in AI.

Internal to the UK MOD, much work is occurring on AI and human-machine teaming. The MOD’s Development, Concepts and Doctrine Centre (DCDC) highlights AI as having a significant potential impact (Figure 1). DCDC’s 2018 human-machine teaming concept note addresses numerous subjects, including the evolution of systems and how AI could impact conflict. Single Services are implementing AI-centred projects, with examples including predictive engineering failure systems for the Royal Navy and autonomous logistic resupply from the Army. So what is the problem?


Figure 1: Impact of factors set against the uncertainty of realisation, via Global Strategic Trends.

What’s Missing?

Despite single Services progressing individual projects, there is surprisingly little AI coherence across UK Defence. Without an internal AI strategy, single Services are working in isolation as to the best way to implement capabilities, and without pan-departmental oversight. The Defence Science and Technology Laboratory (Dstl) coordinates the only pan-departmental events with biannual autonomy meetings, all other efforts being ad hoc. The departmental focus is therefore on longer-term science and technology projects as opposed to what capabilities are deliverable now. Unsurprisingly, those involved in operations want cutting-edge capabilities fielded as soon as possible in an agile manner. This is not to suggest science and technology is not essential, but rather that it must be better balanced against available AI capabilities. The overall result is inefficiencies in cost and time, with single Services potentially running overlapping projects. Better coordination from the highest level could deliver more effectively. Without an organisation akin to the JAIC to cohere work, Defence risks multiple stove-piped areas of excellence that could consume considerable resource to make technologies interoperable during operations. These issues will only be magnified further when working with other government departments or allies.

Notably, an interview with the Department for Digital, Culture, Media and Sport’s Office for AI questioned whether Defence needed a strategy, despite there being an appetite for cross-government coherence. Is a Defence light touch slowing progress and potentially undermining long-term success?

What are the priorities?

A key question is whether Defence should be doing things better (efficiencies and optimisation) or doing better things (revolutionising processes and cultural change). This is routinely debated in the West at national levels, and most of the author’s interviewees considered it should be a combination of the two, cognisant that evolutionary processes are relatively slow, but that extinction is possible when environmental change is too rapid. Cultural barriers can also prevent fast propagation of revolutions in military affairs, despite technologies being available for rapid progress. Whether Defence wants to be fully interoperable with allies is a good starting consideration yet, fundamentally, the key will be to maintain effective sovereign capabilities. Wider relationships will be determined with time and depend on budgets. Two themes, however, kept recurring in interviews with single Services, allies, academics and industry experts: data problems, and agile capability delivery.

Data Problems: The April 2019 Digital and Information Technology Strategy acknowledges Defence has “too many different, unconnected systems, of non-standard design that are difficult and expensive to maintain, often running highly bespoke and costly software.” At the core of this statement are the significant quantities of incompatible data, which are difficult to access and exploit with AI algorithms. With data being the lifeblood of AI, weak data results in poor algorithm outputs. The Royal Navy’s internal AI consultancy team, Project NELSON, also considered data a continuing blocker, with availability, coherency and classification routinely slowing progress. Accepting incompatible technologies can mesh, this takes time, effort and unnecessary expense. Although a future key role could be that of a ‘system meshing specialist’ to resolve these issues, the most cost-effective data delivery method must be considered first. Enforcing open data standards is an absolute must, yet more imaginative solutions could include investigating data being purchased from trusted suppliers or shared amongst allies for best effect.

Agile Capability Delivery: When interviewed, a senior Royal Navy officer drew parallels between Defence today and BP before the Deepwater Horizon disaster. They considered some areas in Defence were progressing but, on the whole, it remained an inflexible organisation, with a waterfall procurement cycle unsuitable for 2019. The pace of change in AI is so great that an agile ‘fail fast’ methodology is needed so technology can be in the user’s hands quicker. These comments on cultural and conceptual change should not be lost, many thinking Defence routinely applies industrial age processes in the information age. More sensationalist views may even argue the MOD, like BP at the Deepwater Horizon disaster, risks mission failure if agility is not injected into procurement and operating models rapidly. Although it is accepted this is not solely a human-machine teaming problem, the pace of AI development makes it particularly acute. Industry experts and academics proposed ideas including reflecting agile procurement in policy such that it became routine for single Services. Holding money centrally, but releasing it quicker for concerted industry efforts, was also thought an excellent way of having industry pull together its expertise for maximum cost effectiveness.

Conclusions

Developing an AI strategy in Defence will assist in improving everything from humanitarian assistance and disaster relief through to national resilience and expeditionary operations. Having a Defence AI strategy will contribute to delivering coherence between single Services, other governmental departments and allies. It will further increase cost-effectiveness, mitigate operational risk and, most importantly, contribute to maintaining competitive strategic advantage. In the words of Alan Turing, “we can only see a short distance ahead, but we can see plenty that needs to be done,” and writing a Defence AI strategy has the capacity to deliver considerable benefit.[1]

Image via Defense.gov.

[1] Alan Turing, “Computing Machinery and Intelligence,” Mind, Volume LIX, Issue 236 (October 1950): 433–460.
