On MOD AI and Data Science

The UK MOD’s increasingly urgent pivot to develop, embed, and deploy data science and AI (hereafter DS/AI) technologies, and their associated practices, is a critical play in many ways. Arguably, the priority that Russia has given to DS/AI supremacy is one of a few major drivers. Another is the consequent development of our relationship with our key ally, the US, during this whole paradigm shift.

We do not doubt the veracity of this diagnosis, yet we want to examine a few possibly missing elements within the MOD’s approach to this one-off transformational change. These relate to people matters, technology matters, and agility matters.

MacCrimmon has set out a short summary of the UK’s present position, asking “should we be doing more?”. He contrasts the strategic programmes within other countries with our own seemingly piecemeal efforts, disconnected from the rest of government and from any national drivers. Some of the underlying barriers are not new, but they are writ large and anew within fast-paced digital worlds; and the old reliance on contractor “primes”, rather than on creating a national R&D network, may well constrain progress.

The DCDC Joint Concept Note 1/18, “Human-Machine Teaming”, provides a revealing focus on a number of identified human-machine issues, for both AI and robotics (the particular interest of the former MOD CSA), and it highlights a number of challenges:

  • Gaining access to cutting-edge AI, by fair means or foul.
  • Defending such civil, commercial, and military AI assets.
  • Robotic and artificial intelligence systems that are likely to revolutionise the battlespace.
  • Creating mass effect via human-machine teaming.
  • Optimising human-machine teaming.
  • Trust and assurance for artificial intelligence.
  • Accessing skills and the race for technological advantage.
  • The new economics of warfare.


What is not there?

In this note we argue, beyond MacCrimmon, that the analysis to date (including the joint concept note), together with the behaviours exhibited, implicitly signals an improper framing of the true challenges and opportunities ahead.

At the first level, such analyses assume that almost all of the assets required are available or accessible, so that it becomes a question of teaming, of deployment, of optimisation, and so on. This vocabulary is loaded. Some necessary assets are not mentioned at all, perhaps because they are, rightly, assumed to be the very currency of military thinking and physical operations: leadership and strategy (for DS/AI).

The two are related, but they represent distinct and separate omissions when it comes to DS/AI. The recent book “Leading Within Digital Worlds” argues that the pace of change, the sequential disruptions, the pervasive mass-adoption, and the prevalent reliance upon opaque technologies within digital R&D and deployments represent hard challenges at the strategic level (what should a programme involve, and how should missions be framed?) and for leadership (since we require at least some creative data scientists and operators to super-perform). That book argues that the various attributes of digital transformation, and of becoming reliant on DS/AI, require care, not least because of the timescales and the nature of those involved, who (necessarily) have a creative rather than an institutionalised mindset. The two mindsets are not mutually exclusive: there are honourable exceptions.

The excellence developed through leading military (physical) field operations, in encouraging people to sacrifice and achieve together, may actually be relatively ineffective within digital worlds. Worse, we have to plan for technology upgrades occurring almost continuously, within months and years, and some of these will be disruptive step-changes. That is unlike any other sector of capability development, which is usually conducted with support from primes adhering to their own longish-term road maps (primes have a business interest in mopping up any innovation funding and starving new entrants and disrupters – call it the “King Herod principle”).

We must lead and strategize for the upcoming digital worlds, and not just the present one. This is obvious (isn’t it?), yet time after time full immersion in the details of particular data science platforms and methods narrows the framing and focusses the effort far too tightly (making progress with the trees rather than any progress at all with the woods). Below we give some explicit examples of technologies that really ought to be included.

The DS/AI challenges principally concern the organisation of activities (what is to be achieved as proximate goals, delivering tangible advances, while still innovating and disrupting within DS/AI?) and our mindsets. How do we set proper achievement targets across a wide and wise portfolio of DS/AI activities?

The fear of (any) failure is really corrosive: each individual leading a project wants to succeed and will cleave to relatively straightforward (low-risk) and attainable targets, verifying that all elements are meeting their gates and milestones, and confusing activity with real progress. Yet if we take the transformation programme as a whole, then we should want some risk, some super-performance, some black swans. The taxpayer should expect that. It is this issue that begat the UK’s new thinking about setting up an ARPA (like DARPA): we cannot limit R&D spending to consensual, risk-averse programmes with nobody taking the portfolio view.

When six-year-olds start playing soccer they all chase the ball in a big scrum, searching for their own piece of possession and glory. They do not frame the session from the perspective of the team. “Spread out, create space, …” we shout. This is not the wild west. So also with DS/AI.

In fact, the “human-machine teaming” riff developed within the DCDC document may itself be a red herring, or more precisely a “fig leaf” covering some more essential, yet more difficult, strategic and leadership challenges. Perhaps it merely resolves an accessible question while ducking some more important ones. Finessing human-AI interaction and optimising existing capabilities is fine, but we must also discover novel, radical analytics and applications that deliver real benefits to the war-fighter.

Similarly, recent position papers, such as that from Dstl, IBM and the Defence Academy on achieving the benefits of AI within defence, do not address leadership issues.


Some possible DS/AI topics to include

How should we decide and settle on components of a DS/AI programme? By doing what is presently possible? By doing what our allies do? By avoiding risk?

The experience of many UK government horizon-scanning functions, and especially those of the MOD, is that they are generally too closed (living “inside the box”, comprising military staff, civil servants, and domain experts who are mired within the present internal politics, internal culture, known constraints, and mindsets). They should instead take in external, entrepreneurial, and zealous views from those who can think in terms of disruptive early-mover opportunities. This requires a mindset that is trophy-driven and takes a skewed view of risk.

There is also the issue of setting expectations (internally, above and below; and externally). This is dangerous territory, since non-subject specialists (military careerists, not data scientists) are frequently placed in charge of technical DS/AI teams during development and early deployments. What should such leaders themselves expect? Often what is said between the scientists is not heard correctly by those above them, and is morphed via subsequent Chinese whispers.

Panels of experts making sub-programme decisions are also dangerous. Some radical ideas will never achieve a consensus. For almost twenty years UK research councils have wrestled with the definitions of truly adventurous and potentially disruptive research. The problem is that the councils, their programmes, and their peer reviewers all frame the challenges around individual calls and/or technologies. They create tension within competitions, yet they penalise “out of the normal” thinking because money is tight. That framing breeds a risk-averse nature and a lack of entrepreneurial research zeal.

What might we presently be doing? Here are a few fresh ideas. Is each covered adequately?

  • Rapid and flexible data ingestion from (local/relevant) external data sources: the ability to exploit novel data sets for sub-populations – data from local MNOs, ISPs, and peer-to-peer web platforms (Gab, Twitter, Telegram, Reddit, …).
  • Agile analytics deployment. (1) Wide and wise: open to highly novel methods – not one-club golfers becoming stale; the need to employ diverse, competitive, and tensioned options. (2) Fail-fast proofs of concept and rapid prototypes. (3) Understanding the fundamental limits of deployable technologies. (4) Push technologies – insights into palms. (5) A support network of technical/scientific contributors.
  • Successful adversarial attacks on ML/DL are inevitable – this needs a rapid growth in deep understanding. (1) Recent research (2020) shows that both small noise perturbations and physical stickers will inevitably cause spoofing (how do our own and allies’ existing technologies perform? – a minimal illustration follows this list); are we developing both offensive and defensive capabilities? (2) Placing “traffic lights” on automated deliverances – establishing the fundamental limits – building trust in AI deliverances, challenging existing platforms.
  • Behavioural biometrics: the next phase of identity validation. The US Army holds a massive “hard” biometrics ABIS database (fingerprints, DNA, facial images, …) with 7.4 million IDs – this should be complemented by behavioural biometrics (vocabulary, style, voice, gait, behaviour, …).
  • Quantum computing and post-quantum cryptography. What if we had a quantum computer: how would it change the game? What if other actors had one? While GCHQ must protect the UK nationally, such a machine would disrupt all local offensive and defensive live capabilities.
  • Strategic diagnosis – what is going on? What will challenge us next? What digital environments will be in play – including IoT?
  • Setting priorities for technical activities (avoiding tick-boxing; not confusing activity with progress – a confusion that appears common in the MOD). Should the UK be positioned for thought leadership and high-end expertise?
  • Leading within digital worlds – how to get the best from data science teams and evolving technologies. (1) Do we have, or can we access, visionaries? When they speak, everybody else shuts up. (2) Leaders at the strategic level without data science backgrounds: what should they know?
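
To make the spoofing point concrete, here is a minimal sketch of the classic one-step “fast gradient sign” attack (FGSM) from the open research literature, written in Python with PyTorch. The model, data, and perturbation budget are placeholders, and any serious challenge capability would run far more sophisticated probes; the sketch is only meant to show how little machinery an attacker needs.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step fast gradient sign attack (illustrative sketch only).

    model: any differentiable classifier; x: input batch scaled to [0, 1];
    y: true labels; epsilon: the (small) perturbation budget.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # how wrong is the model right now?
    loss.backward()                      # gradient of the loss w.r.t. the input pixels
    # Nudge every pixel by +/- epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # still a valid image, barely changed
```

The unsettling point is that a perturbation too small for a human to notice can flip the model’s verdict, and physical-world variants (printed stickers and patches) behave similarly; probing our own and allied platforms with such attacks is exactly the “fundamental limits” work argued for above.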


How do our allies see us?

Set aside what is written down or said aloud by our allies: what does their behaviour towards us and our offerings really tell us? Are we (in ascending order):

  • useful idiots – expendable?
  • an extra resource to be directed?
  • make-weights in their other relationships?
  • cheerful and knowledgeable challengers, and critical friends?
  • a source of radical thought-leadership?

Obviously, we would wish to be in the latter categories rather than the former.

Consider the position that arises if or when the US develops a novel AI platform, say one for recognizing opposing assets within the field of combat based on images, sounds, or videos. Very likely this would emerge through DARPA or other programmes and would involve large IT/AI prime contractors (Google, MS, Apple, IBM, LM, etc.).

Having funded the initial development (perhaps collaboratively with contractors), the US DOD might invite the UK and its other allies to adopt that solution, arguing on the grounds of interoperability, common vocabulary, common operational factors, mutual reliance, and so on. It might also wish US allies to populate the platform with their own data, as a way of exposing both good and bad performance. Alternatively, perhaps the contractors worked with the US precisely in order to sell licences to other countries’ operations.

How would such an opportunity allow the UK to move up the food chain (introduced above)? We should not simply deploy; instead we should run some investigative programmes to find the existing limits of such technologies – seeking to spoof them, break them, disrupt them.

Then we should go back to the US military and their contractors with a challenging critique and a set of suggestions for next-phase improvements. We should also learn, and consider how we might behave, assuming adversaries deployed similar platforms. Such an effort would place us within the critical-friend category, and further up into the thought-leadership bracket – keeping one step ahead.


Trust and reliability

If we are to build substantial trust in AI deliverances for field-based commanders, we must increase transparency (in general) and/or flag up the relative level of reliability of every deliverance (case by case). The AI technology itself simply has no skin in the game, and the fighters know this. We must also create knowledge, and set proper expectations, about the fundamental limits and sensitivities of deployed DS/AI technologies. At a crude level, all soldiers will know that the technology must be thrashing through (very large amounts of) available data, but they cannot fathom the details and they do not know where limitations, and thus errors, may lie; whether accidental, from training biases, or from adversarial attacks, including noise and spoofing. This is arguably of far more importance than any technical progress being made within programmes such as DAIS-ITA, which might also offer an opportunity for the UK to develop and signal its thought leadership within “digital worlds”, thinking about implementations far beyond the technical agenda for US/UK collaboration.
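
As an illustration of what case-by-case “traffic lights” might look like, the Python sketch below attaches a reliability label to a single model output. The thresholds and the out-of-distribution score are assumptions made purely for illustration; real values would have to come from calibration studies against operationally relevant data.

```python
import numpy as np

GREEN, AMBER, RED = "GREEN", "AMBER", "RED"

def reliability_flag(probs, ood_score, amber_conf=0.75, green_conf=0.90, ood_limit=0.5):
    """Attach a traffic-light label to one automated deliverance (illustrative only).

    probs: the model's predicted class probabilities for this case.
    ood_score: a 0..1 estimate of how far the input sits outside the
        training distribution (higher means more out-of-distribution).
    """
    confidence = float(np.max(probs))
    if ood_score > ood_limit:
        return RED    # input unlike anything seen in training: do not trust
    if confidence >= green_conf:
        return GREEN  # high and (assumed) well-calibrated confidence
    if confidence >= amber_conf:
        return AMBER  # usable, but seek corroboration before acting
    return RED        # low confidence: treat this deliverance as unreliable
```

The design point is that a RED flag does not mean the system has failed; it means the commander should not lean on that particular deliverance without corroboration, which is precisely the expectation-setting argued for here.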


Growing Leadership Talent

The data science skills shortage is almost surely at a deep technical level; this is very often discussed and identified as a constraint. Yet what about those with the ability to lead the data scientists, and their programmes, through to capability? Dstl and others are engaged in some collaborative activities with the academic sector, but these appear to be almost totally challenge-led (“pull”) rather than open-ended (“push”). They also contain very few elements of data science leadership development, and do not address the chasm between the culture of creative chaos that is best for developing disruptive applications (including the development of, and support for, zealous and passionate innovators) and the usual civil service mindset and defence institutional culture. Is this activity at the right scale, given both the urgency of the need and the scale of expertise available across the HEIs? This issue was an essential strand of the more general DSAC critique, and it has not gone away.

Dstl’s strategy is actually dominated by a rather wide range of data science technologies, yet it certainly does include a commitment to “become a more agile organization that is fit for the future… This includes leadership, accountability and governance; our investment in the talent, skills and careers for our people.” By which they mean scientific leadership.

The Defence Academy has a possible weak spot that one might term “strategic science”. It puts on many courses with introductions to DS/AI technologies (network analytics, machine learning, and so on), as well as leadership courses designed to develop R&D management ability, but these offer relatively little discussion of the rapidly changing face of science, and of DS/AI in particular. Their focus appears to avoid topics such as (i) the changing nature of intrinsic scientific innovation, (ii) the miniaturization of science, (iii) the ongoing amateurization of technology and its deployment, (iv) game-changing research and development, and (v) trust, limitation, conflation, and hype.

Instead, by their nature, the introductory or taster courses necessarily lag behind the true leading edge. The same problem afflicts data science courses within standard business school MBAs, of course: the taster sessions appear more like the 101 of stats, analytics, and machine learning (or similar technologies), including longish explanations of the underlying technical principles of AI and advanced analytics, real-world case studies, and some of the challenges associated with developing and implementing advanced analytics-based systems. Many organisations try to get around this problem with alternative teaching and experiences.

There is a huge opportunity here to deal with the issues arising when generalists are placed in charge of highly technical DS/AI teams: how to strategize, lead, motivate, and deploy such technical talent. The need to document all R&D work within individuals’ lab books (just as experimental physicists do), and to document all algorithms and models within log books that develop with the applications over their whole lifecycle, is also all too apparent, since the algorithms and applications are often opaque. These “soft” technical issues are all covered in some depth elsewhere, as well as within prototype Codes of Conduct developed for corporates. Essentially the latter is a risk-control play.
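
By way of illustration only, the sketch below shows what a single machine-readable log-book entry for a deployed model might record. The field names are assumptions, loosely echoing the “model card” idea from the open literature, rather than any prescription.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelLogEntry:
    """One dated entry in a model's lifecycle log book (illustrative fields)."""
    entry_date: date
    model_name: str
    version: str
    training_data: str                                    # provenance of the training data
    validation_summary: str = ""                          # how performance was checked, and against what
    known_limits: list = field(default_factory=list)      # sensitivities, biases, failure modes
    approved_uses: list = field(default_factory=list)     # contexts in which deployment is sanctioned

# A hypothetical entry, recorded whenever the model is retrained or redeployed:
entry = ModelLogEntry(
    entry_date=date(2020, 6, 1),
    model_name="asset-recognition",                       # hypothetical name
    version="2.3",
    training_data="imagery batch 14 (allied exercises)",  # hypothetical provenance
    validation_summary="held-out accuracy 0.91; untested against sticker attacks",
    known_limits=["degrades in low light", "spoofable by patch attacks"],
    approved_uses=["analyst triage only; not autonomous targeting"],
)
```

The particular fields matter less than the discipline: every change to a model leaves an auditable trail over its whole lifecycle, just as a lab book does for an experiment.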


Conclusion

There is much to celebrate within MOD DS/AI, especially at the technical level. So here we have focused on what is not there, and on how such omissions lie largely at the visionary, strategic, and leadership levels of deployment within the MOD and the military (with respect to “digital worlds” – novel operating environments). These omissions induce a possible lack of both ambition and super-performance, and thus they represent “true risk” – the risk of the UK having no supremacy in anything. By contrast, such threats are squarely addressed within corporates that are dependent upon the next generation of DS/AI: they must disrupt or become disrupted.

We have discussed a range of relevant people issues, as well as suggesting some technology options to be addressed for the UK’s own advantage.

Besides the obvious leadership, developmental, and deployment challenges, we also suggest that data science expertise represents a real and current opportunity for the UK to become a more highly regarded ally of the US and others. That window will not stay open for long. In particular, this demands that the UK should have a challenge capability, to gain an understanding of the fundamental limits of performance for all novel technologies (and not merely passively deploy them); and that it should develop some world-class, leading-edge expertise, in alignment with national centres for internationally leading DS/AI research within UK universities. In turn, this also requires a wider set of opinions and external influences within MOD horizon scanning, so as to determine the future opportunities within all fast-paced, uber-competitive DS/AI sub-sectors.

We should challenge all of our present assumptions and empower some visionaries. The time to act decisively and to catalyze change is now upon us.


Peter Grindrod is a British mathematician, data scientist, and entrepreneur, having spent half his career within industrial roles (including founding successful data science start-ups) and half within academia. He has written widely on research strategy, innovation, and research leadership. He regularly consults to customer/citizen-facing industries within the retail, consumer goods, energy, digital media and marketing, fintech, and telecoms sectors. He has served on both EPSRC and BBSRC Councils and the MOD’s former DSAC. He was a founding trustee of the Alan Turing Institute.

Featured Image – An MQ-8 Fire Scout, an autonomous-capable UAV in service with the US Navy.
