British Defence Policy

Beyond Effectiveness on the Battlefield: reframing Military Innovation in terms of time, networks and power

This is one in a series of occasional posts from scholars outside of the Defence Studies Department. If you would be interested to contribute to this series please contact the editors: Dr Amir Kamel and Dr David Morgan-Owen


Dr Matthew Ford is a Senior Lecturer in International Relations at the University of Sussex. His book Weapon of Choice – small arms and the culture of military innovation is published by Hurst & Co, London and Oxford University Press, New York 2017. This blogpost draws on ideas that I have used to frame a discussion panel for the Annual Society for Military History conference this coming March-April in Florida. If you have found this of interest then it would be great to continue the debate alongside my fellow panellists Lt-Col. Dr J.P. Clark (U.S. Army) and Dr Laurence Burke (curator at the Smithsonian National Air and Space Museum) at the conference. J.P. Clark’s book Preparing for War: the Emergence of the Modern U.S. Army, 1815-1917 is published by Harvard University Press.

‘…it would appear that many battalion commanders are not really qualified to comment usefully on their weapons’

– Major J.A. Barlow, Tunisia, May 1943

In May 1943 Weapons Technical Staff (WTS) from the Ministry of Supply visited 18 Army Group in Tunisia. Led by Major J.A. Barlow, a captain of the Army Eight shooting team and two-time winner of the King’s Prize at Bisley, the WTS sought to understand what soldiers in 18 Army Group wanted from their equipment. To their astonishment, having tabulated 15,000 replies to their questionnaire, the WTS found that soldiers’ opinions on their kit were ‘frequently conflicting, if not directly contradictory, as between different units and formations’.

Although surprising, the WTS’ findings were not unique, nor were they confined to the conscripted armies of the Second World War. For example, in 2002, during Operation JACANA, Royal Marines were deployed to Afghanistan in support of America’s Operation Enduring Freedom. The Marines had been newly issued with the upgraded SA80/A2 rifle, yet news reports soon emerged indicating that, despite the euphemistically named Mid-Life Upgrade by Heckler & Koch, the new version of the assault rifle still didn’t work. Recognising that the Ministry of Defence was facing a PR disaster, a team of civil servants and H&K engineers was sent to Afghanistan. This team showed beyond reasonable doubt that the malfunctions were, in fact, caused by the Bootnecks, who hadn’t followed the cleaning drills they’d been issued. As in 1943, the interface between the soldier and his weapon proved to be a complex and problematic one.

Little known anecdotes like these might seem irrelevant to the much larger question of military effectiveness, but I want to suggest that these specific instances of ambiguity offer considerable opportunity to generate insights into military-technical change. Superficially these stories indicate that the military can struggle to inculcate standardised doctrine, drills and techniques into soldiers. More fundamentally, they point to the way that effectiveness is shaped by a whole series of functions that aren’t simply the responsibility of the armed forces but also reach back to matters concerning engineering design and even defence industrial policy.

Part of the reason for highlighting these examples is to draw attention to the complex socio-technical web of relationships that need to be understood if effectiveness on the battlefield is to be adequately explained and theories of military innovation developed. In this respect the first wave of innovation literature (literature that looked at military-technical change as a top-down exercise driven through by politicians, generals or forced on the organisation by events) is of limited use. While there was an impulse to change small arms in the Second World War, the driver of this change came not from frontline commands but from the Director of Infantry at the War Office. In the same way, not even the shock of the SA80’s failures during the First Gulf War could force the Conservative Government to admit that it had got the privatisation of Royal Ordnance dramatically wrong. Instead the Ministry of Defence had to content itself with tinkering with the SA80/A1 until a Labour Government came to power, after which the atmosphere in Whitehall changed and a systematic evaluation and refurbishment of the Army’s assault rifle became possible.

Similarly, some caution is required when trying to fit these anecdotes into more recent literatures that emphasise adaptation on the battlefield. In the case of 18 Army Group in 1943, even with the support of the Director of Infantry, British engineers were unable to introduce the kinds of assault weapons they recognised could help the infantry win the tactical battle. Weapon designers were hemmed in both by the Government’s over-riding focus on mass-producing existing technological devices and the logistical and supply chain challenges that making a switch implied. Moreover, they also had to recognise that the tactical engagement had been displaced as the Army’s foremost priority by commanders like Montgomery, who favoured combined arms in order to deliver success at the operational level.

By contrast, recent operations in Iraq and Afghanistan point to the way that the battlefield has become less important for defining military effectiveness. As the evolution of components for the SA80/A2 (grips, picatinny rails, range finders, grenade launchers, sights etc.) indicates, the British Army has been excellent at accelerating its “lesson-learnt activity”, adapting its technology and technique in the face of changing insurgent tactics. However, indicators normally associated with the development of civil-society became proxies for overall campaign success even as Special Forces sought to win tactical engagements as part of kill/capture operations. Consequently, even as the pace of battlefield adaptation has increased, it has simultaneously exerted a decreasing effect upon the outcome of operations.

Thus, irrespective of whether military organisations continue to think of success in the same way as in Iraq and Afghanistan, events over the past 15 years present an opportunity to rethink how we theorise military innovation. Traditionally three conditions had to be met for a change to count as military innovation. First, innovation had to change the manner in which military formations functioned in the field. Secondly, an innovation had to be significant in scope and impact. Lastly, an innovation needed to result in greater military effectiveness. At the very heart of this definition, then, was the necessary link between effectiveness and the battlefield. What happens to a theory of military innovation, however, if we challenge the basis of the theory and reframe an analysis of change in ways that don’t make this assumption?

One possible way to generate insights into the process of change is to take what Dr J.P. Clark, my fellow panellist at a forthcoming Society for Military History conference, has done and look at innovations through the lens of time. Central to his approach is the recognition that there can be stark generational differences in professional expectations within the military profession. Among an older generation, even among those officers most supportive of reforms, there can be a tendency towards supporting innovation as a way to preserve the best of what was old. At the same time and by way of contrast, younger officers can regard reforms as the means by which to overturn old institutions and replace them with a new collectivist approach centred upon organisations filled with like-minded staff officers trained in accord with an approved doctrine. Far from constituting revolutionary change, innovation, as understood by Clark in his analysis of the U.S. Army during the 19th and early 20th centuries, may also represent an evolutionary inter-generational moulding of the military.

Alternatively, Dr Laurence Burke offers a more explicitly theorised approach, drawing on Science and Technology Studies (STS) as a way to reflect on processes of military innovation. By applying Bruno Latour’s Actor-Network Theory, Burke relates the experiences of the US military in adopting the aircraft in the early decades of the 20th century, demonstrating the way in which a range of actors work to enrol groups into networks. His analysis shows how interests are constructed and re-worked so as to dictate socio-technical outcomes.

Developing this point, my own approach recognises the value of STS and of Clark’s consideration of time, and seeks to develop theoretically informed insights into the importance of the concept of power in framing military innovation. By showing how innovation sits within a broader set of industrial and alliance relationships, relationships that demand engineering, scientific and bureaucratic mediation, I reveal how frontline requirements are reframed in ways that make them intelligible for wider circulation. More than this, I can show how users themselves have contested different views of their own requirements as they seek to enrol other groups and dictate the kinds of innovation that might emerge. In this respect I hope to draw scholarship back not just to discussion of top-down or bottom-up change but also to a consideration of what might be caricatured as middle-out change.

Image: SA-80 rifle stripped (1996), via wikimedia commons.


The British Army’s role in defending NATO’s Eastern Border


This post summarised some of the evidence Dr Chin gave to the House of Commons Defence Select Committee on the British Army and SDSR 15 in October. A recording of the session is available here.

SDSR 15 acknowledged the increased threat posed by Russia to NATO and made clear its intention to deter any future Russian aggression. Most interesting is the role assigned to the British Army, which may be called upon to fight and defeat Russian forces. The British Army’s thinking on this issue seems to contain echoes of the Cold War on the central European front, in that our conception of defence rests on being able to fight a combined arms battle employing armour, infantry and artillery supported by airpower. As then, we face the problem of how to fight outnumbered and win, with the added complications that NATO is unlikely to have control of the air domain and that the army lacks any integrated air defence capability. As in the past, we have also fallen back on a traditional western solution to ensure ends, ways and means are in balance, which is to employ technology as a force multiplier so that quantity is overmatched by quality. A number of questions arise regarding this approach.

First, a great deal seems to depend on the successful acquisition of the Ajax armoured vehicle and the family of related tracked vehicles. However, the history of British weapons acquisition does not inspire confidence that this programme will be delivered on time and to cost. A good example of the kind of technical problems that could be encountered is the recently cancelled FRES programme, which shared many of the same technical features. The current CGS emphasised that a great deal of thought was being invested in this programme: ‘we are building the capability in a methodical and deliberate fashion over time, as this equipment rolls off the production line. Rather like we did in the 1930s, the idea is to test it to destruction and to experiment with it, in the same way we did with the mechanisation of the force in the 1930s, so that we get the doctrine and the concept right.’ However, whilst I agree that the UK played a leading role in experimenting with armour in the inter-war period, and that by the start of the Second World War it possessed the most mechanised army in the world, we also need to note that this still resulted in tanks which were poorly armed and protected, and in a divisional organisation and doctrine so deeply flawed that it was still being changed in the midst of battle in the summer of 1944. My point is that the successful introduction of technology depends on many factors, some of which are beyond the control of the army, and this could inhibit the success of the Ajax programme.

A second concern relates to the belief that combat power is best articulated via the division. Gen Nick Carter explained: ‘The rise of [the] state on state threat post 2012 made it important that the UK be able to deploy [a] war fighting division.’ The justification given for focusing on this formation is that, like an aircraft carrier, it possesses a range of capabilities and it is where the orchestra comes together – ‘it is where all the capabilities that you need to compete in the state-on-state space happen.’ In his view, the possession of a division signals to friends and enemies alike how powerful you are in the land domain. There are two points I would make here. First, aircraft carriers are high-value assets, extremely vulnerable in the modern battle space, and because of this are unlikely to be deployed at the forward edge of battle – something a division might have to do if the data links between it and its remote theatre HQ do not work or are hacked. Second, in an age of fiscal austerity we need to think more boldly about how we organise and use military power, and I am not certain enough thought has been given to this matter in the land domain. We should also remember that this formation dates back to the Seven Years War, which raises the question of why, in the twenty-first century, we still need an organisational structure which grew out of the requirements of warfare over two hundred years ago. In other forms of human activity, technology is supposedly leading to flatter and more decentralised command structures, where power is devolved to the lowest levels of decision making.

One might argue that this trend can be accommodated within a divisional structure, and that might indeed be the case. However, let us not forget that one of the reasons we introduced a divisional command structure into Afghanistan in the latter part of the war was to centralise rather than decentralise command and control. Finally, whilst the division remained part of the orbat of some western armies in the post-Cold War era, the brigade rose in importance and, for a time at least, became the most important unit of currency. I raise this question because there are those who assert that the covert goal of creating a war-fighting division has a lot to do with protecting the army from the prospect of further cuts: a division is between 10,000 and 20,000 strong, and because of the teeth-to-tail ratio of modern armies it will require at least a similar number, if not more, to keep it operational in battle. In essence, we need to be bolder and more radical in our efforts to address this security challenge.

Image: UK Warrior Armoured Fighting Vehicles on Exercise in Poland, November 2014, via flickr

The Significance of Suez 1956: A Reference Point and Turning Point?

This is the third in a series of posts drawn from an event to mark the 60th anniversary of the Suez Crisis which the Defence Studies Department Strategy and Defence Policy Research Centre hosted on November 7th, 2016. Recordings of the papers will be posted shortly to the Department soundcloud.


From a British perspective, 60 years after the crisis, Suez has an almost iconic status, often used as a shorthand for everything ‘wrong’ in foreign policy and decision making. It is said to be the moment when Britain’s status and reputation as a global power ended, bringing with it a decline in British moral power and prestige; the ultimate exemplar of Albion’s perfidy. In this way ‘Suez’ evokes a specific response intended to tap into a shared meaning that is still used today.

For example, in the context of the Brexit debate, Matthew Parris wrote in The Times on 15 October: ‘As in a bad dream, I have the sensation of falling. We British are on our way to making the biggest screw-up since Suez and, somewhere deep down, the new governing class know it. We are heading for national humiliation, nobody’s in charge, and nobody knows what to do. This Brexit thing is out of control’.

In Britain and the Suez Crisis, the historian David Carlton argued that ‘No event in the post-war period has so divided the nation as the Suez crisis; in none has the government so adamantly obscured the truth, and there has been much controversy as to its effect on Britain’s standing in the world. In consequence, many will see 1956 as one of the turning points in Britain’s post-war history’.

In these ways then Suez is both a reference point and a turning point.


Background to the Suez Crisis

So what was the crisis about? What was at stake that produced what Enoch Powell later called ‘a national nervous breakdown’?

First of all, it was not about the Canal Zone or the Suez Canal Company and if it had been it could have been solved peacefully, through the UN. Instead it was a multi-crisis at the international, regional and state levels, and only Nasser’s removal would resolve the crises because he was perceived to be at the centre of them all.

But was it really mainly about prestige? We are used to arguments suggesting that Britain’s interests in the Middle East and the maintenance of her informal empire were linked primarily to the control of important resources and the security of essential military facilities. Yet Britain did not seek to retain its military presence in the Middle East to protect oil. In 1956 there were 16 plans for unilateral British action in the region: fifteen were for national evacuation operations and only one was for a conventional war, to support Jordan against Israel. Neither did Britain seek to remain in Egypt because of the importance of her military facilities. This may have been the case in the Second World War and the early post-war period, but by 1956 the Suez base was considered to be of no military importance in peacetime. Yet the British still refused to meet Egyptian demands for evacuation because, significantly, they feared this would be seen as being forced out and, therefore, as damaging to their prestige and influence in the rest of the Middle East.

Traditional accounts of Britain and the causes of Suez highlight British defence of her longstanding interests and influence in the Middle East, dating back to the 1870s, to protect the vital trade and communications route through the Suez Canal to the remainder of the British Empire in the Far East. In these versions, the main threats to British influence were the lack of resolution to the Arab-Israeli dispute, the rise of Arab nationalism and the threat of communism.

When Nasser became President of Egypt he was initially seen in a positive light, treated as a client of the west and regarded as key to a number of British and American policies in the Middle East. For example, Egypt was central to Anglo-American Cold War strategy in the Middle East, which aimed to create a Middle East defence organisation along the lines of NATO. For the United States this would act as a bulwark against Soviet penetration of the region. For Britain it would have the added advantage of formalising Britain’s bilateral arrangements in the region and becoming an umbrella collective defence organisation covering existing British defence interests with Egypt, Jordan and Iraq. Britain and the United States also sought a resolution to the Arab-Israeli dispute through plan ALPHA, essentially an early version of a land-for-peace deal: territorial compromises and an agreement to recognise borders.

But in 1953 American policy was re-evaluated. John Foster Dulles, President Eisenhower’s Secretary of State toured the region and concluded that the British role in Middle East defence and Anglo-Egyptian relations hindered rather than served Western interests. He believed that the lack of settlement on the Suez Canal base undermined potential Arab unity and alignment with the west.

Nasser was increasingly perceived to be a threat to western interests. While the 1954 Anglo-Egyptian agreement, which gave Britain 20 months to withdraw its troops from the Canal Zone and the right to reactivate the base if the freedom of the Canal was threatened by external powers, seemed to indicate a resolution to the problem of the 1936 Anglo-Egyptian treaty, Nasser undermined the British-sponsored Middle East defence organisation, the Baghdad Pact, by pressuring Jordan not to join.

Nasser’s opposition to Israel threatened to renew armed conflict in the Middle East. As a result his requests for military equipment from the west were refused, and in July 1955 he turned instead to the Eastern bloc with an agreement with Czechoslovakia. Crucially, however, while there was western agreement that Nasser had to go, the reasons differed: for the United States it was because Nasser stood in the way of Middle Eastern unity in opposition to the USSR; for Britain it was because Nasser was undermining Britain’s position in the region and the rest of the British empire. Opposition to Nasser’s policies led Britain and the US to withdraw their promised financing of the Aswan High Dam in mid-July 1956. Nasser found an alternative source of income in his nationalisation of the Suez Canal Company on 26 July.

That evening, when the news came in, Eden was having dinner with the King and Prime Minister of Iraq. He declared that Nasser had to go: he could not be allowed to ‘have his hand on our windpipe’, and Britain would ‘knock him off his perch’. But this was not going to happen quickly or decisively, owing to problems with military capabilities and readiness.

In private, preparations were made for the use of force, including collusion with Israel and France to create a pretext for military action, which led to the Sevres Protocol on 22 October. In public, however, Britain pursued a diplomatic settlement through negotiation: a Maritime Conference of 22 Nations in August and the American-sponsored Suez Canal Users’ Association in September.

The military operation ended abruptly when the UN called for a ceasefire on 2 November. The conflict led to a run on the pound and a sudden decline in Britain’s gold reserves. Although loans from the IMF would have eased the pressure, American backing for this was essential, and so Britain had to bow to Washington’s demand for a ceasefire. The British had miscalculated, holding faulty perceptions of US policy: they believed the Americans would support them, or at least remain benignly neutral. Eisenhower summed up the American position when he addressed the National Security Council on 1 November: ‘How could we possibly support Britain and France if in doing so we lose the whole Arab world?’


Results of the Suez Crisis: a Turning Point?

The crisis changed the regional balance of power: although the Egyptian air force was destroyed, Nasser emerged as the only Arab leader capable of challenging the west. Israel also gained: although the war did not depose Nasser, the UNEF guaranteed freedom of shipping in the Gulf of Aqaba, giving Israel a Red Sea port. France applied her lessons when de Gaulle became President 18 months later, adopting a European focus to French foreign policy. Part of de Gaulle’s veto of British entry into the EEC can be explained by the Suez experience and a determination not to allow Britain to act as a Trojan horse for American interests. France withdrew from the military structure of NATO and refused to support American policy in Lebanon and Vietnam.

Globally it can be argued that the crisis formalised the dominance of the two superpowers and established a balance of power that remained effective until the collapse of the Berlin Wall.

Some see Suez as confirmation that Britain was hopelessly overstretched, that if a global role was to be retained it would have to be subordinate to superpower interests. The limits of post-war British power were demonstrated and the further British decline as an imperial power in Middle East, Africa and South East Asia was presaged. Others look at the relationship between Suez and the British decision to join the EEC, as if that decision was a result of Britain acknowledging and adjusting to a new reality – where it had lost an empire and was seeking a new role.

Margaret Thatcher certainly saw Suez as both a turning point and reference point. She believed the impact of Suez on British policy making thereafter, a “Suez syndrome”, was negative: ‘having previously exaggerated our power, we now exaggerated our impotence’. And she drew on Suez to enhance her foreign policy achievements: “The significance of the Falklands War was enormous, both for Britain’s self-confidence and for our standing in the world. Since the Suez fiasco in 1956, British foreign policy had been one long retreat”. (The Downing Street Years).

It is also important to remember that at the time British policy assumptions remained the same. Britain still saw itself as a great power and still aimed to maintain global influence. And while Britain continued to exercise influence globally, on decisive issues it would do so only in close consultation with the US. In this way Britain continued to exercise its influence and remained active in the Middle East. British power may have diminished, but her interests remained the same. Britain remained concerned about Arab nationalism, communism and the Arab-Israeli dispute. Britain used military force in 1958 to intervene in support of Jordan and Kuwait in 1961, counterinsurgency campaigns were fought in Aden and Dhofar and Britain remained active and engaged even after the East Suez decision down to 1991 and beyond.

Whether Suez was a turning point or a reference point, it magnified British unpreparedness to undertake a limited war and the incoherence of British ends, ways and means. The fear that a failure to tackle Nasser would be disastrous for British prestige ended in disaster and ignominy. And in this way Anthony Nutting was surely right to suggest that the enduring significance of the crisis lies in its being, as the title of his memoir puts it, No End of a Lesson.

Image: Smoke rises from oil tanks beside the Suez Canal hit during the initial Anglo-French assault on Port Said, 5 November 1956, via the Imperial War Museum.

Chilcot: The Lessons of Iraq vs The Reality of Interventions


Chilcot’s exhaustive enquiry into the origins, undertaking, and consequences of the Iraq war has been published. In turn, this (rather less than) exhaustive analysis of certain of its conclusions seeks to explore two of the critical components of the faulty pre-war decision-making process identified by the report. It will propose that, despite Chilcot’s pertinent and well-meaning observations in this respect, and despite any prospective efforts to abide by those observations and incorporate them into our planning and strategising for future interventions should they occur, mistakes similar to those made in Iraq (not to mention Afghanistan and Libya) will most probably continue to be made.

What precisely, then, are the observations and recommendations referred to? They are, firstly, that ‘[W]hen the potential for military action arises, the Government should not commit to a firm political objective before it is clear that it can be achieved’. Secondly, presuming that the achievability of the political objective has been recognised, that ‘[A]ll aspects of any intervention need to be calculated, debated and challenged with the utmost rigour’. One would no doubt agree that these are highly pertinent observations, and that any rational interpretation of the events surrounding the Iraq intervention and its consequences would support such recommendations. Indeed, in theory I would absolutely agree, and a recent article of mine has identified similar themes to Chilcot in these important respects. But that same article also identifies certain crucial elements that will always infect the rational use of military power for the purpose of liberal intervention, regime change and stabilisation, and which will always have the potential to derail rational political processes and designs.

With regard to the first point, relating to the ‘achievability’ of political objectives, the former soldier turned academic Christopher Bassford puts it best in some ways. Responding to the oft-bowdlerised warning that ‘[T]hose who do not remember the mistakes of the past are condemned to repeat them’, Bassford adds the refinement that even those who do remember the mistakes of the past are still condemned to repeat them, because that’s what people do. It is an unhelpful aspect of human nature: not that lessons can’t be learned (of course they can), but the notion that on this particular occasion they simply don’t apply (when of course they do). Although Bassford’s observation may have been intended as a throwaway quip, it is rooted in scientific realities. For the purposes of this post, it draws attention to Construal Level Theory (CLT), a field of psychology that examines how people form evaluations of distant actions and outcomes, and in particular how they evaluate the latter phases of a sequence of actions.

Obviously, the keen-eyed observer will note that I’m not a psychologist. But thanks to the research of Aaron Rapport, in his article The Long and Short of It: Cognitive Constraints on Leaders’ Assessments of Post-war Iraq, the non-psychologist becomes immediately aware of how much these cognitive processes matter when political actors choose to use force for interventionist purposes, particularly when the objective is to dismantle or alter political and social structures in target societies. By extension, therefore, one becomes aware of the potential for Chilcot’s warnings to remain unheeded in future.

Essentially, Rapport’s research argues that policymakers approach different aspects of an intervention such as Iraq in different ways. The first is the ‘near’ problem, which in the case of Iraq was the initial military campaign and the process of regime change. This is generally assessed on the basis of feasibility, i.e. can we do this? However, the ‘far’ problem, in this case the subsequent long game involving the transformation of Iraq from totalitarian dystopia to functioning democratic and unitary state, is subject to different criteria. In this instance the determining factor is ‘desirability’, i.e. how much do we want this to happen? According to Rapport’s analysis, when ultimate objectives are so highly prized, policymakers tend to focus almost exclusively upon the benefits that will accrue rather than the intricate steps necessary to make them happen. As his conclusion states, this encouraged overly optimistic assessments of the political conditions that would exist in Iraq in the late stages of the intervention, a laissez-faire attitude that was not shared by those involved in the short-term planning for the initial military invasion and the humanitarian crisis expected to follow. Simply put, the more distant the event, the more likely policymakers are to attribute positive outcomes to it. This has obvious implications for the mechanics of intervention, and for the likelihood of political actors failing to properly conceptualise and resource the ‘long game’ because of their over-optimistic belief in the satisfactory conclusion of their ultimate grand designs.

Chilcot’s second observation, that ‘all aspects of the intervention’ require the necessary ‘debate, calculation and challenge’, is similarly problematic. ‘All aspects’ of an intervention must, by definition, include the period after initial military operations. Yet, as my article points out, military interventions of the type the West has recently engaged in tend to transform the known into the unknown. The demolition of Ba’athist Iraq and the toppling of Ghaddafi released a vicious, swirling, directionless mass of competing ethnic groupings, tribes, sects, gangs, militias, warlords, terrorists and foreign elements, each with their own peculiar local, regional, national and transnational allegiances, alliances, economic interests and political aspirations. The notion that policymakers could have accurately considered and debated the innumerable permutations that might or might not have arisen is laughable. Donald Rumsfeld may have got many things wrong on the subject of Iraq, but his much-derided articulation of ‘unknown unknowns’, i.e. ‘things you don’t know you don’t know’, perfectly highlights the problem facing those seeking to abide by Chilcot’s recommendations. Because what Chilcot is advocating, in reality, is that policymakers and their advisors, both military and civilian, must ‘debate, calculate and challenge’ not only the unknown, but potentially the unknowable too.

Image: Tony Blair and George W. Bush at Camp David in March 2003, during the build-up to the invasion of Iraq, via wikimedia commons.


In Defence of Military History

This post follows on from an entry by Dr. Matthew Ford, Dr James Kitchen and Dr Stuart Mitchell on Chilcot and the Politics of Britain’s Military History.


The notion of an academy remote from public discourse and uninterested in government policy is an attractive stereotype. Aspects of the academic discipline of history could certainly produce such an impression. There is a strong body of thought within certain areas of history that considers any attempt to use the past to inform current debate as bordering on ‘instrumentalising’ previous experience. For some scholars the past was simply an entirely different world, one which ought to be understood solely in its own terms and not compared to the present lest such an endeavour lead to inaccurate and misleading deductions.

This argument has always appeared less convincing to those engaged in the ‘traditional’ areas of historical enquiry – political, diplomatic and military history – which have their roots in statecraft and military staff colleges. From their outset these sub-disciplines have sought to influence and educate practitioners. Yet in twenty-first century Britain, their ability to do so appears to be in an alarming state of decline.

Historians engaged in the study of politics, power and military force find themselves firmly out of fashion and under-represented within the academy. Outside of the staff college environment, one can count the number of chairs in military or naval history at UK institutions on the fingers of two hands. The ‘cultural turn’ in history in evidence since the 1960s may have produced a valuable, more egalitarian record of the past, but it has been accompanied by a noticeable contraction in support for academics studying topics which can inform government and the public about the use of armed force.

Indeed, a divide now exists between the academic study of war, the general public, and the policy-making community. This matters because it has a direct bearing on the UK’s capacity and willingness to use its armed forces in an intelligent and well-informed manner to defend its people and its interests.

Defence was conspicuous largely by its absence in the 2015 general election debates. Similarly, this week’s parliamentary debate on the renewal of Trident has been conducted with minimal public engagement or (seemingly) interest. If the population is disengaged from key issues of national security and strategy, it is unlikely fully to support or believe in decisions taken by government. Equally, the absence of history from discussions of international affairs risks each issue we face looming large as a challenge without precedent or answer. Such obstacles may exist, but if we lack an understanding of historical context, how can we identify them as such and focus our actions on them appropriately?

If we are to create a more beneficial interaction between the academic study of war, the general public and the body politic, three key issues must be surmounted:

The Diminishing Role of History in Education and Public Life

The majority of the population does not take an active interest in historical research, even if they are engaged with history in a broader sense. Modern media presents a wealth of alternative entertainment offerings and whilst historically based content does feature regularly, the marketplace is hugely competitive. Thus, whilst reading history does remain a popular leisure activity, reading in the traditional style is in precipitous decline – particularly amongst younger people. This presents a major challenge to a discipline whose pinnacle of achievement is likely to remain the sole-authored monograph.

History also appears to be of decreasing importance within secondary education. Leaving university-level education aside – after all, most people’s exposure to history ends after leaving secondary school – the quality and purpose of history teaching has been an area of intense debate in recent years. Critics like Niall Ferguson note the huge gaps in understanding and knowledge with which many students arrive at university (and those studying history at a higher level are presumably amongst the most interested in the subject and thus a sample of the most knowledgeable about it). In a survey conducted at one UK university, it was found that only 34% of arriving undergraduates reading history knew who the monarch was at the time of the Spanish Armada, 31% knew the location of the Boer War, 16% knew who commanded British forces at the Battle of Waterloo and just 11% were able to name a single British Prime Minister from the nineteenth century. Ferguson blames a curriculum which provides no real picture of the grand sweep of history, focusing instead on distinct episodes with no apparent relation between them.

Regardless of what one views the specific shortcomings of secondary school history to be, it seems reasonably clear that history is not viewed as a core subject. Thus, even if people are minded to study the past independently upon leaving education, they may lack the foundations necessary to do so in the most effective and enjoyable manner.

Availability of Cutting-Edge Research

Those of the public who are minded to engage in depth with historical research are constrained from doing so by the academic publishing process, which is simply not aimed at providing a conduit between the latest scholarship and the general reader. Whilst the growth of open access publications such as the British Journal of Military History is a welcome innovation in this regard, the fact remains that the majority of top journals appear unlikely to follow this route in the near future. So long as the Research Excellence Framework (REF) and other career considerations push academics in the humanities and social sciences to prioritise quality of research (one indication of which can be place of publication) over reach, this situation seems unlikely to change.

Much the same can be said of book publishing. The desirability of writing for highly regarded university presses is constantly emphasised and re-emphasised to academic staff by their institutions, yet even if one is successful in doing so the result is likely to be a monograph with a price tag of £60 or more. This is far outside what a general reader is likely to consider a reasonable price for a book, yet because such a monograph is essentially the gold standard for promotion, it will remain what many academics strive for. The result is that many of the most talented historians who write about conflict are faced with a choice between following the ‘safe’ route of academic publishing and risking the opprobrium of their institution for ‘wasting’ valuable research time by writing a ‘popular’ book, which may not be eligible for submission to the REF. Thus, the ‘military’ history available to the general reader on the high street is totally unrepresentative of the output of the academy as a whole and continues to be dominated by ‘big books by blokes about battles’. Some books of exceptional quality certainly do bridge the divide, but if only a small cross-section of the academy penetrates the popular market (and these are often senior professors, for whom the REF is less of an immediate consideration), then those scholars producing excellent new work within the field cannot be faulted for operating within a system of promotion they did not create or for following the ‘academic career’ track.

The government’s attempt to remedy this situation – ‘impact’ – raises as many problems as it solves. Impact is defined as ‘any effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’. Education, or the transmission of knowledge or ideas, is not prioritised. Thus, academic staff are encouraged to participate in endeavours which produce a measurable change in public views or attitudes, rather than simply seeking to share their work. How the public can change their minds on a topic about which they have no fixed view is unclear. The practical result is to disincentivise some activities which might serve to bridge the gap between academia and a general audience.

Mass Media

If new forms of media represent a challenge to the popularity of history, they also surely present a wealth of opportunities for disseminating research and engaging a wider audience. Blogs, videos, podcasts and social media are all exploited widely by the academic community as means of providing free access to new research and ideas. However, mass media – radio and television – continue to command a wider audience than even the most successful blogs and Twitter accounts, often reaching far into the millions. This presents a hugely powerful vehicle for reaching a broad audience, but also confers a weighty responsibility on programme makers who may provide the totality of an individual’s knowledge of an historical episode.

This produces difficulties for both editors and academics. Those in the media need the courage to strike the appropriate balance between education and entertainment, providing opportunities for academics to translate their work for a popular audience. Academics must become more willing and more able to articulate their research in an accessible, engaging manner that does not presume foreknowledge or descend into scholarly minutiae. This ought to be more effectively supported by government policy and by universities, as media work currently falls outside the ‘impact’ criteria in the majority of circumstances.


Britain undoubtedly needs to develop a more sophisticated understanding both of its own armed forces and of the role military power can play in international affairs. If not, we risk ill-informed public pressure obliging the government to shy away from important decisions that may be in all of our best interests. Military history can and should provide an excellent means of improving this understanding. But to blame historians for failing to engage, to produce work quickly enough or to write with a broad audience in mind would be to consider but one small part of a far larger problem. Until more people are provided with the necessary tools and interest to engage with the past in a meaningful way, until academics are freed to reach a more popular audience without compromising their careers and until the media and the academy partner more effectively, the best new military history is unlikely to reach as far beyond the walls of the academy as is necessary to have a discernible impact upon wider society.

Image: Book stacks, The British Library (1978-97) by Colin St John Wilson via flickr.

Iraq: not the first British disaster … and it’s unlikely to be the last


After seven years, the Chilcot inquiry into the circumstances surrounding Britain’s involvement in the Iraq war has finally been released. Its conclusions are an excoriating critique of the limitations in British strategy and policy in 2003. The inquiry has identified a raft of issues: that war was not the last resort and that alternatives to military action were not fully explored; that the public case for war was built on evidence that did not reflect the uncertain nature of the actual intelligence; that the government was woefully unprepared for the post-conflict context; and that, in the end, Britain failed to achieve its key objectives. There may be many consequences. The Chilcot inquiry may reflect, as Sir Martin Gilbert has hoped, ‘an important milestone in government willingness to confront contentious issues’; or it may result in, as Alex Salmond has called for, the beginning of a ‘political reckoning’ for those most associated publicly with Britain’s decision to go to war. But Philippe Sands QC has noted that the inquiry’s crucial role should be to ensure that ‘lessons will be learned that will allow us to make sure it never happens again’. Lessons undoubtedly will be identified, but whether they make another Iraq debacle impossible is more doubtful.

That the events of 2003 are eminently repeatable becomes evident when one examines two overarching themes identified by the Chilcot inquiry that weave themselves throughout the detail of the government’s decision-making over Iraq. The first is internal in nature and concerns the government’s decision-making processes; the second is external: the priority accorded in British calculations to the ‘special relationship’ with the United States.

In terms of government’s decision-making processes in 2003, Chilcot notes their informality and ad-hoc nature. Cabinet often was informed of decisions rather than debating them. The inquiry identifies, in consequence, that there needed to be a ‘collective discussion by a Cabinet Committee or a small group of ministers’ on a number of crucial issues, including the political and legal implications of recourse to military options, and the potential difficulties in the post-conflict situation. In future, Chilcot recommends the creation of ‘a more structured process’ to ‘probe and challenge’ government options. Indeed, in such structures as the National Security Council, Britain already has more refined security policy decision-making mechanisms than existed in 2003. But it is doubtful if such changes would effect any revolutionary improvement in the quality of British strategy-making. The challenge for the Blair government in 2003 was the operation of two pervasive policy influences: uncertainty and beliefs.

There is generally in international relations an enormous gap between what decision-makers actually know as objective fact and what they would need to know to make fully considered, rational decisions. Decision-makers fill this gap with beliefs: beliefs about what it is right to do; beliefs about what will work and what the outcomes will be. Chilcot identifies Blair’s belief that Saddam Hussein was ‘a monster’ and that his regime represented a threat. This was reinforced by a set of ‘ingrained beliefs’ in government that Saddam had Weapons of Mass Destruction and that he would continue to develop them. Blair needed to make policy decisions but faced such uncertainties as the qualified conclusions of the Joint Intelligence Committee (JIC), and competing perspectives on the nature of the post-conflict context. Political crises typically short-circuit formal decision-making processes and reduce the size of decision-making groups. Facilitated by the nature of the British political system, which accords great informal powers to the Prime Minister, Blair did what many British Prime Ministers before him had done, and took the lead in driving foreign policy. From a 2003 perspective, he also had perceived successes in Sierra Leone, Kosovo, and Afghanistan to support belief in his judgement. Blair has noted in his memoirs that in acting over Iraq he was doing what he believed was ‘right morally and strategically’. In conditions of uncertainty, what is believed to make strategic sense often becomes a function of what a decision-maker believes it is right to do. Tinkering with government decision-making processes cannot eliminate the uncertainty problem in future; nor can it eliminate the psychological factors that have such an important bearing on crisis decision-making.

Shaping Blair’s belief in the necessity of action was the second theme: the influence of the United States on British policy considerations. The Chilcot report concludes that the UK’s relationship with the US was ‘a determining factor in the Government’s decisions over Iraq’. This influence is a long-standing theme in British foreign policy. But what the inquiry also illustrates is that, time and again, British influence over US decision-making was minimal. Britain’s shift towards involvement in the Iraq war was influenced powerfully by the Blair government’s belief, as Chilcot notes, that supporting the US over Iraq was necessary in order to sustain cooperation in other areas; and that the UK could best influence US policy towards Iraq ‘from the inside’. But generally, Blair’s government proved unable to exert a decisive influence on the US – indeed, the reverse was true: by prioritising relations with the US, British policy was forced by degrees into alignment with that of the US. As Chilcot illustrates, despite Blair’s post-9/11 commitment that the UK would stand ‘shoulder to shoulder’ with the US, he was keen on reining back the US focus on military options, preferring instead a gradualist approach that would maintain international support and that might at some point look towards regime change. Progressively, however, in attempting to rein the US back, the UK was instead simply dragged forwards. Blair’s long note of 28 July 2002 included the phrase ‘I will be with you, whatever’. This phrase was contained in a missive whose general thrust was a desire to slow the US’ moves to the military option; but it also expressed a general truth about the realities of the British position. The Chilcot inquiry notes that, in 2003, Britain should have adopted a more questioning attitude. But whether, especially post-Brexit, Britain would in future be more willing to risk a rift in Anglo-American relations is a matter of debate.

The specific issues identified by the Chilcot inquiry are a devastating critique of the Blair government’s handling of the Iraq crisis in 2003. However, it would be unwise to assume that the roots of the problems identified are new or that they will not recur in the future. The decision-making difficulties that manifested themselves in 2003 reflect pervasive problems in foreign-policy decision-making relating to uncertainty. Equally, the priority placed upon the ‘special relationship’, and the influence therefore on the UK of US policy priorities, is a long-standing theme that is likely to endure. These factors can generate great policy difficulties but they do not make failure inevitable. For a war fought on questionable legal foundations, for example, see Kosovo in 1999; or for policies driven forward by Prime Ministerial fiat, see the Falklands War in 1982. Blair has argued that his decisions over Iraq were taken ‘in good faith and what I believed to be the best interests of the country’. It is entirely possible that this is true; but, unlike in Kosovo and the Falklands, Blair’s great problem is that Britain lost.

Image: Jacques Chirac, George W. Bush, Tony Blair and Silvio Berlusconi during the G8 Summit in Évian, June 2003, via wikimedia

Brexit and International Security: A Guide for Undecided Voters


The most recent polls for the referendum on Britain leaving the European Union suggest that neither the ‘Brexit’ nor the ‘Bremain’ camps have mustered the necessary support to win today. The still undecided voters will certainly play a crucial role. So, how should these voters take their decision? The most obvious approach is to gather as much impartial information as possible. Admittedly, in the present climate of the referendum campaign identifying such information is a challenging task. However, I argue in this blog post that academic scholarship can offer useful remedies. To be sure, academics have been accused of a clear Bremain bias. After all, a substantial number of academics have come out in favour of Britain staying in the EU. Universities UK, the umbrella organization for British universities, strongly supports the Bremain campaign and, according to The Independent, ‘vice chancellors from almost every major higher education institution in Britain say they are “gravely concerned” about a vote to leave’. At a recent workshop on the security implications of a potential Brexit, which I co-organized at King’s College London, it was difficult to find pro-Brexit security experts. Nonetheless, academic scholars have demonstrated that they are capable of providing much needed, impartial information, as Anand Menon of the UK in a Changing Europe initiative has argued forcefully in a recent article for The Guardian. In a recent contribution to this blog, I have already refuted the arguments by both Brexit and Bremain supporters who have tried to use defence-related arguments in their campaigns.

In this contribution, I will go beyond the narrow focus on military defence. Using basic insights from International Relations theory, I will offer an impartial examination of British membership in the EU in the context of international security. From an International Relations perspective, the EU is basically a very advanced form of inter-state cooperation. And the classical International Relations theories tell us that states cooperate because it is in their national interest to do so. Historically, the main examples of international cooperation are alliances. You do not have to be a military genius to realize that it was easier to defeat Nazi Germany or to oppose the Soviet Union as a block of states rather than each country for itself. However, International Relations scholars also tell us that effective international cooperation always comes with a price tag. Especially for major players like the UK it is very difficult to ‘free-ride’ on the efforts of others. This is, if you will, the fundamental issue of this referendum: whereas the Brexit supporters believe that the price tag of EU cooperation is too hefty, Bremain supporters argue that the benefits of EU membership outweigh its costs.

But what do International Relations scholars consider to be ‘costs’? Too much focus in this regard has been on the misleading figures of the UK’s financial contributions to the EU. More important are costs in terms of national independence. In abstract terms, cooperation always entails some sort of compromise. In other words, if a nation state cooperates with other nation states its narrow national interests will be ‘compromised’ in one way or another. Let’s take an easy example: NATO and its leadership. The Alliance members have accepted that the Supreme Allied Commander Europe is always a US commander. This might be a small price to pay for America’s continuing commitment to NATO, but it is still a significant concession in terms of national military independence. Consequently, (neo) realist scholars believe that strong international cooperation only occurs – and should occur – in the rare instances when (minor) limits on national independence offer far superior benefits in terms of national interests. As the new head of the Department of War Studies at King’s College London, Prof. Michael Rainsborough, argued in The Telegraph, ‘What remains permanent in Europe and the world are nation states that ultimately have no permanent friends and no permanent enemies, only permanent interests.’

There is no doubt that the EU encroaches on the UK’s national independence – every international organization of which the UK is a member does so in one way or another. In economic terms this encroachment is arguably more obvious than in the case of international security. After all, in the realm of international security, the EU remains a largely intergovernmental organization, where decisions are still taken by consensus. Most notably, the EU does not infringe the right of the UK – or any other member state for that matter – to take fundamental national security decisions on its own, e.g. the decision to go to war with Iraq in 2003 or the decision about the renewal of Trident. However, research shows that it is also true that many security-relevant decisions in Europe are no longer taken in national capitals in isolation, but rather by national representatives in Brussels. Conceptually, this is called supranational intergovernmentalism. Another example where the UK has lost some of its security-relevant national independence is border control. Although Britain has never joined the Schengen Agreement and remains formally in control of its borders – hence the long queues at the border control posts at UK airports whenever we try to enter the country! – the border-free Schengen zone and the free movement of persons in the EU have certainly limited the UK’s ability to control its borders.

However, all these costs in terms of national independence also have clear benefits for the UK. First, EU membership reduces uncertainty. Although the EU might have its shortcomings, at least we know what we have. And this might be better for the UK’s national interests than going it alone in an increasingly turbulent world. As the saying goes, a bird in the hand is worth two in the bush. To be clear, nobody knows what will happen if the UK actually leaves the EU. There is certainly the possibility that the UK will be better off after Brexit. But there is also a high risk that the UK will be worse off. As the historian Lawrence Freedman pointed out in a recent article for Survival, ‘Extracting the United Kingdom from the European Union is not going to make either body stronger or better able to cope with the current set of security challenges, whether from Russia or ISIS. It could leave both in a much weaker position. With so little clarity on what Brexit is intended to achieve, it is hard to think of a greater test of the law of unintended consequences’.

Second, international organizations such as the EU help to lower transaction costs, as (neo) liberal scholars have argued since the 1980s. What this means in practice will be all too familiar to those readers with small children. As the eminent International Relations scholar Stephen Walt explained in a blog post for Foreign Policy, ‘My kids might like to negotiate every single aspect of their lives, but who has time? And as with most norms, failures in the short-term are less important than success in the long run’. In other words, cooperation with like-minded countries in an institutionalized setting like the EU tends to be much more efficient in the long-term than negotiating new forms of cooperation from scratch, whenever the need for working with other nations arises.

Third, many of today’s major security issues are global in nature. Transnational crime, the proliferation of WMD, climate change, energy security or the rise of China are issues that affect most nation states in one way or another, including the UK. Likewise, these issues cannot be addressed effectively by individual nation states, even the most powerful ones. For instance, if the UK tackles climate change nationally, but China and other major actors continue with their greenhouse gas emissions, British policies will not have a major impact and the UK is still likely to face the consequences of climate change. In International Relations theory, these kinds of challenges are known as collective action problems. And the only way to overcome such problems is through powerful international organizations such as the EU.

So, what does all this theorizing about security cooperation tell the undecided voters today? Clearly, they should not cast their vote based on an ill-defined gut feeling but on a fundamental decision about what each individual voter values most: national independence, though without being able to reap fully the benefits of security cooperation with the EU and its member states; or the ability to shape collective responses to common problems, but with less national independence. The ideal solution – full sovereignty and full benefits from cooperation – is unfortunately simply a pipe dream. As all too often in life, we can’t have our cake and eat it too.

Image via pixabay.

Why ‘defence’ does not serve as a suitable argument in the Brexit debate


Only one month remains until British voters can decide if the UK should leave or stay in the EU. Naturally, the debate about the benefits and disadvantages of British membership in the EU is heating up. Almost every day, the supporters of ‘Brexit’ and ‘Bremain’ vie with each other for the best punch line in the national media. Whereas most of the original debate centred on economic issues, the discussion now covers virtually every imaginable policy field. Defence – or rather, military cooperation in the EU – has become one of them. Both the ‘leave’ and ‘in’ campaigns have come up with terrifying visions should the UK either leave or stay in the EU. Vote Leave claims that ‘A vote to remain in the EU is a vote to keep giving the EU more and more power over our military and defence.’ In contrast, Britain Stronger in Europe argues that ‘If we left, the UK would lose its veto and its influence over EU policy [in the area of defence]’. Both campaigns have also been able to muster support from some of the UK’s most senior military leaders. Major General Julian Thompson, the British land commander during the Falklands War, expounded his views in The Sun: ‘I find it quite extraordinary that they are trying to set up a separate defence organisation. It makes us less safe. It muddies the water.’ Yet, his comrade Colonel Angus Loudon MBE comes to a very different conclusion in The Telegraph: ‘Only by remaining and leading European defence co-operation can we protect ourselves from the threats and instabilities we may face in the future.’ So, the question is: who is right? The short answer is: none of them. Defence, in the narrow military sense, is simply not an issue on which leaving or staying in the EU will have major repercussions for the UK. After all, the EU is not, by and large, a defence organization.

To be sure, the EU has a defence policy in the form of the so-called Common Security and Defence Policy (CSDP). However, it is not a well-developed policy field. The notion of a common defence policy dates back only to 1998, with the European Security and Defence Policy. Interestingly, it was actually an Anglo-French bilateral document, the so-called St Malo Declaration, that kick-started EU defence cooperation. In this Declaration, the governments of the EU’s two most powerful military actors agreed that ‘the Union must have the capacity for autonomous action, backed up by credible military forces, the means to use them, and a readiness to do so, in order to respond to international crises.’ However, the British and French governments did not agree on the reasons behind this initiative. While the British thought it would strengthen the European pillar in NATO, the French saw it as a way to become independent of the United States in military affairs. Be that as it may, in the following years EU member states adopted a number of ambitious ‘headline goals’ to implement the new-born European and – since the entry into force of the 2009 Lisbon Treaty – Common Security and Defence Policy. Yet, these goals have never been met. This does not mean that the CSDP has ceased to exist. On the contrary, the EU has implemented over thirty CSDP missions all around the world. There have even been missions with real military teeth, in particular the anti-piracy operation off the Somali coast, but the majority of them were civilian rather than military missions. So, in many respects, the EU has remained a military dwarf.

What does this mean for the Brexit vs. Bremain debate? The low level of defence cooperation in the EU leads to a rather paradoxical situation for both camps.

On the one hand, the supporters of Brexit can argue – quite convincingly – that leaving the EU would not have major repercussions in the defence field. Institutional integration is almost non-existent. For instance, there is no permanent EU military headquarters. So, the institutional disentanglement could almost be done overnight. On the other hand, EU defence cooperation looks in many respects like the type of cooperation that the Brexit supporters want for the whole of the EU: it is almost purely intergovernmental, each member state has a de facto veto power on all decisions, most financial contributions are paid by the member states on a voluntary basis, and the European Court of Justice (as opposed to the European Court of Human Rights, which is not an EU institution) has virtually no power. And this creates two problems for the Brexit arguments. First, defence does not appear to be a particularly good reason for leaving the EU; second, if the rest of the EU were run like defence cooperation, would new forms of cooperation in Europe be as ineffective in all other areas as they are in defence?

At the same time, defence is also a double-edged sword for the supporters of Bremain. On the one hand, they can point out that Britain remains fully in control of defence cooperation, i.e. nothing can be done against her will. Furthermore, while there are only a few positive outcomes of CSDP missions, in particular off the Somali coast, there have been no missions with clearly negative outcomes. Yet, on the other hand, the supporters of Bremain cannot point to any major example showing that Britain really needs to be in the EU for defence purposes. In fact, the UK could still cooperate with her European partners in defence matters even from outside the formal EU structures – and she might be even more eager to do so, as the example of France shows: after leaving NATO’s integrated military structures in 1966, France continued to cooperate closely with the alliance.

At the end of the day, the bottom line is this: in narrow military terms, staying in the EU does not make the UK a safer place – but neither does leaving it! Especially in the short and medium term, the Brexit referendum will not have major defence implications for the UK (though it might for the EU). NATO is, and will remain, Europe’s main organisation for defence cooperation; nuclear deterrence, in particular, is dealt with exclusively in the NATO framework. National security in a broader sense is, of course, a very different matter. International terrorism, the proliferation of weapons of mass destruction and transnational criminal networks are very different types of threat to the UK and cannot be compared to defence cooperation in a strict military sense. They might indeed be very good reasons to vote one way or the other on 23 June. But military defence in a strict sense is not one of them.

Image: NATO forces practice amphibious assault near Ustka, northern Poland, on 17 June 2015, via NATO image library.

A Capital Mistake: Evidence and Defence in the Brexit Debates

Professor Matthew Uttley & Dr. Benedict Wilkinson

In one of his more exasperated moments, Sherlock Holmes turns to his long-term companion, Dr. Watson and chides him for his impatience, saying ‘It is a capital mistake to theorize before you have all the evidence. It biases the judgment.’ Strong words they may be, but wise ones too. And yet, those who watch defence matters will have noticed that we are approaching a Brexit referendum in much the same position that Holmes warns Watson to avoid: lots of arguments and theories and ideas, but little evidence to back any of it up.

Over the last few weeks, the national security and foreign policy implications of a Brexit have become a key battleground. Everyone seems to have an opinion, even US President Barack Obama, who wrote in the Daily Telegraph that ‘[t]his kind of cooperation – from intelligence sharing and counterterrorism to forging agreements to create jobs and economic growth – will be far more effective if it extends across Europe’. The arguments on both sides are well-rehearsed: those in the ‘Remain’ campaign repeatedly claim that leaving the EU would ‘threaten’ the UK’s national security and global influence. They point to ‘grave security challenges’ and existential threats, including the rise of so-called Islamic State (DAISH) and resurgent Russian nationalism, and assert that the UK is in a ‘stronger’ position to deal with them from inside the EU. Those in the ‘Leave’ campaign have responded by accusing their opponents of exaggeration, egregious scaremongering and ‘Project Fear’ tactics.

Amongst the op-eds, the interviews and the speeches, there is little evidence or rigorous analysis to substantiate the claims made by either side. In our view, this is worrying: in the first place, it means that key elements of national security have been overlooked in what has been described as a ‘blizzard’ of sweeping claims and counter-claims over whether Britain’s defence and international status would be undermined by departure from the EU. Indeed, as we have argued in the International Affairs journal published by Chatham House and elsewhere, one of the most important omissions in the debates thus far has been any consideration of what a Brexit might mean for Britain’s defence procurement and domestic defence industries.

Secondly, and perhaps more importantly, the lack of evidence and analysis raises the spectre of UK voters being forced to make their referendum choices without key information on what a Brexit might imply for a vital sector – one that provides secure military supply chains and ‘technology advantage’, and whose domestic industry has an annual turnover of £30 billion, employs 215,000 predominantly skilled personnel, and supports a further 150,000 jobs in supply chains.

Without the data and evidence, it is difficult to understand what a Brexit might mean for defence procurement and defence industries. This is worrying because the Brexit debate is highly partisan and ideological, so there is the real possibility that long-term choices will be coloured by the politics of sovereignty versus the politics of integration, rather than evidence relating to the defence-industrial base and defence acquisition.

As things stand, the debate is being played out between four factions. On the one hand, there are pro-Brexit and pro-Remain ‘factions’ that emphasise the importance of national sovereignty, but disagree on the implications of a British exit. On the other, there are pro-Brexit and pro-Remain ‘factions’ that subscribe to the goal of integration through ‘ever closer union’, but disagree on whether the UK is essential for, or an impediment to, that goal.

The domestic British political debate is likely to be dominated by the cases presented by the two ‘factions’ – one pro-Brexit, one pro-Remain – that emphasise the ‘so what?’ for national sovereignty and independence. These factions are likely to rehearse the predictable and well-worn claims and counter-claims characterising the Brexit debate as a whole. The argument of the ‘pro-UK, pro-Brexit’ camp will be that ‘leaving will not undermine the national defence procurement options or industrial capabilities’: EU integration in this sector has thus far been limited, so Britain would remain free to pursue a ‘sovereign’ defence procurement policy. Set against this, the ‘pro-UK, pro-Remain’ camp would argue that ‘there’s nothing to lose by staying in, but there are plenty of risks for the UK in leaving’ – if the UK would be no worse off by leaving, then it would equally be no worse off by staying. Correspondingly, it is likely to reiterate the broader mantra that a Brexit might deter future foreign direct investment, and that Britain would have to comply with EU regulations when trading with Europe but without influence over the future content and direction of those regulations.

The two ‘factions’ subscribing to the goal of ‘ever closer union’ – again, one pro-Brexit and one pro-Remain – are likely to produce narratives premised on the assumed benefits of integration. A ‘pro-EU, pro-Remain’ faction is likely to argue that ‘leaving will undermine the EU’s defence industry so that the EU and UK will rely on the US to an even greater extent’. This follows well-worn assumptions in Brussels that the reluctance of EU member states to relinquish sovereignty has created protectionism and fragmentation in Europe’s defence procurement and industrial spheres. The solution to the ‘costs of non-Europe’ is a strategically independent European Defence Technological and Industrial Base (EDTIB) able to compete with large US defence contractors. On this basis, a Brexit would undermine the emergence of a genuinely competitive and strategically autonomous EDTIB, which in turn risks undermining the future ‘security of supply’ of defence equipment sourced from within Europe, leading to greater EU reliance on US-sourced defence systems. Against this, and as we argue in the IA article, we envisage a ‘pro-EU, pro-Brexit’ faction arguing that ‘a British exit will remove a barrier to other member states’ desire for “ever closer union” and a European Defence Union’. This perspective is likely to emanate from those in other European member states who feel that the UK is an impediment to EU integration.

Nonetheless, all of this is difficult to prove – not least because of the dearth of data, the paucity of evidence and the absence of analysis. The real threat, amongst all this, is that British voters will be forced to choose between partisan and ideologically motivated claims and counter-claims. The risk is that they end up like Dr. Watson – making judgments without all the evidence and, perhaps, coming to regret those judgments.


Image: 47th Munich Security Conference 2011: David Cameron (left), Prime Minister, Great Britain, Dr. Angela Merkel (right), Federal Chancellor, Germany, and Kevin Rudd, Minister of Foreign Affairs, Australia. Courtesy of Wikimedia Commons.

Understanding a different ‘holy trinity’: procurement and British defence policy, part 3: Time


My previous two blogposts on the procurement trinity covered capability and cost. Many people see the problems of defence procurement as a trade-off between these two factors, but there is also a third, forgotten element: time. Delays in projects affect the other two elements of the trinity: capability suffers through the need to keep ageing, less capable equipment in service, while costs increase through the need to spend more both to keep that old equipment working and to maintain a production line for the delayed equipment for longer.

Time can often be the most insidious problem in procurement projects: using it holds out the false promise of solving a problem (for now) but in fact serves only to make the problem much worse and to leave a bigger mess for others to clear up in a few years’ time. At first glance, nothing could be simpler than stretching out a project – either manufacturing over a longer period, or keeping the design in development purgatory before finally committing to placing orders for manufacture. Short-term battles within the MoD are averted, the budget profile immediately looks better, and the problems will only surface when most of those involved – politicians, staff officers, many officials – have moved on to other, and perhaps better, things.

An example was given by Sir Bernard Gray (whom we met in my previous blogpost) in a speech to the International Institute for Strategic Studies at the end of last year, as he retired from his position as head of the Defence Equipment and Support organisation, the body within the MoD that manages procurement and acquisition. He recalled a proposal in the late 1990s to delay aspects of the avionics for the Typhoon jet aircraft: it would save £20m over the next two years, but it would mean delaying the in-service date of the aircraft by eighteen months. This delay would result in the under-employment of personnel in the factories making the aircraft and the maintenance of a full range of overheads and working capital for the additional period, which over the life of the whole procurement project would add between £70m and £100m to the total Typhoon procurement cost. A small short-term saving would have been bought at the price of a long-term cost increase three and a half to five times greater.
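The arithmetic of the trade-off Gray described can be sketched in a few lines. The figures are those quoted above; the calculation itself is purely illustrative:

```python
# Illustrative cost-of-delay arithmetic for the Typhoon avionics example.
# Figures are those quoted by Sir Bernard Gray; this is a sketch, not an
# official MoD costing model.

saving_m = 20            # short-term saving over two years, £m
added_cost_low_m = 70    # low estimate of whole-life cost added by the delay, £m
added_cost_high_m = 100  # high estimate, £m

# Net effect over the life of the programme.
net_low = added_cost_low_m - saving_m    # £50m worse off at best
net_high = added_cost_high_m - saving_m  # £80m worse off at worst

# Ratio of long-term cost to short-term saving.
ratio_low = added_cost_low_m / saving_m    # 3.5x
ratio_high = added_cost_high_m / saving_m  # 5.0x

print(f"Net whole-life cost of the delay: £{net_low}m to £{net_high}m")
print(f"Each £1 'saved' now costs £{ratio_low:.1f} to £{ratio_high:.1f} later")
```

The point of laying it out this way is that the ratio, not the absolute saving, is what makes the bow wave so damaging: the £20m never disappears from the programme, it returns multiplied.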

In MoD argot, this is known as creating a ‘bow wave’, after the phenomenon in which a wave forms at the bow of a ship as it sails through the sea. The greater the bow wave, the slower the vessel goes, as energy is dissipated in pushing the water forward rather than in propelling the ship. In short, energy (money) is wasted in the bow wave at the expense of pushing the ship (the procurement project) forward.

The procurement bow wave comes in a number of forms, and one example should highlight the difficulty of understanding all the costs associated with stretching out or delaying procurement projects. Largely unreported from the recent 2015 SDSR was the decision to push back the procurement of the successor to the current Vanguard class nuclear-powered submarines – which carry the British nuclear deterrent in the form of Trident D5 ballistic missiles – by what seems to be between three and five years. In the 2010 SDSR the Trident successor was expected to cost £20bn at 2006 prices (£26bn in today’s prices) and enter service in the ‘late 2020s’. The 2015 SDSR announced that it would cost £31bn, with a £10bn contingency, and enter service in the ‘early 2030s’. This does not include the replacement of the warheads on the Trident missiles these boats will carry, which was originally planned to take place in the early 2030s (SDSR 2010), then the late 2030s (SDSR 2015), and is now planned to occur in ‘the 2040s’ (Ministry of Defence, January 2016). (SDSR 2010, pp. 37-39; SDSR 2015, pp. 35-36; Ministry of Defence Policy Sheet, January 2016)
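On the published figures, the scale of the cost growth can be worked out directly. This is an illustrative like-for-like comparison using only the numbers quoted above, not an official estimate:

```python
# Rough comparison of the published Successor cost estimates, in £bn.
# All figures are those quoted in the SDSR documents cited above; the
# restatement of the 2010 figure at today's prices is the one given there.

sdsr_2010_at_todays_prices = 26  # £20bn at 2006 prices, restated
sdsr_2015_core = 31
sdsr_2015_contingency = 10

# Cost growth if none, or all, of the contingency is used.
growth_low = sdsr_2015_core - sdsr_2010_at_todays_prices
growth_high = sdsr_2015_core + sdsr_2015_contingency - sdsr_2010_at_todays_prices

print(f"Like-for-like cost growth: £{growth_low}bn to £{growth_high}bn")
```

The resulting range of £5bn to £15bn is worth keeping in mind when reading the consequences of the delay discussed next.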

Setting aside the warheads, there are at least three consequences of delaying the ‘successor’ class of submarines themselves. The first is the standard bow wave: keeping the production facilities going will mean additional overheads at the end of the programme, while staff will be underemployed at the start. It appears that this increase will be between £5bn and £15bn, depending on whether, and how much of, the contingency is used. The second is that the existing and increasingly ageing Vanguard boats will have to be kept in service for longer. Older equipment, systems and platforms need more maintenance and servicing; this will cost money and mean a reduction in capability as these boats soldier on past their planned retirement dates. Third, because of the need to maintain a skills base in nuclear submarine-building, the government has committed to ensuring that there will be no substantive gaps in submarine construction. Were such gaps to occur, highly trained shipyard workers and engineers would be laid off, probably never to return, and even larger amounts of money would have to be spent re-creating a skills base when production restarted. It is therefore almost inevitable that construction of the last vessels of the preceding Astute class hunter-killer submarines will be stretched out to prevent a gap emerging. A bow wave in the successor class has thus probably created another bow wave in the Astute class, with the inevitable consequences for costs.

It seems likely that these delays have little to do with technological problems with the successor boats – they will probably utilise much of the technology already developed for the Astute class – and much to do with the wider ‘portfolio’ issue of fitting the whole procurement programme into the expected budget levels of the next ten to fifteen years. Of the capability enhancements announced in the 2015 SDSR, the most significant (and expensive) was the decision to procure nine P-8 maritime patrol aircraft, after the 2010 SDSR had removed that capability entirely. A large part of the justification for re-introducing this capability was its role in patrolling the approaches to the nuclear deterrent submarine base at Faslane to prevent surveillance by other countries’ navies (for ‘other countries’ read Russia). So, to add to the layers of complexity, the new P-8s are in fact part of the wider capability to provide a continuous at-sea nuclear deterrent that also encompasses the Vanguard class, the successor boats and the nuclear warheads. Looked at in this way, the whole capability provision could cost much more than the currently planned range of £31bn to £41bn.

The aim of this post and the preceding two has been to highlight the complexities and difficulties of getting defence procurement right. The procurement trinity needs to be balanced, and no decision-maker should ever fool themselves that a change to one element of the trinity – be it stretching a project timescale, tweaking requirements or accepting restrictive support contracts in exchange for cheaper up-front costs – will not impact one or both of the other elements. The examples given in these blogposts have each been placed under one of the three trinity elements, but as readers should by now realise, at least one of the other two elements will always come into play. How much capability should be traded off to get a project completed on time and to budget? How much money should be spent to try to maintain a project’s capability or to keep it to schedule? Is the time bought by a delay worth the money and the reduction in existing capability in the meantime?

The problems discussed here have been acknowledged for many years. The late-1990s SMART procurement reforms resulted in a concerted effort to view procurement in terms of a whole life cycle: they introduced the ‘CADMID’ procurement stages mentioned in my second post and began to bring together the disparate procurement and logistics organisations in the MoD. Some of these reforms were pulled off track by the post-9/11 wars in Iraq and Afghanistan, but two reports towards the end of this ‘long wars’ period refocussed activity on reforming systems and structures.

The Gray report of 2009 recommended a series of changes to procurement, including regular defence reviews to help keep the programme in budget, a better delineation of responsibilities between the commands and the Defence Equipment and Support organisation (‘DES’), and – controversially – the recommendation that the latter should be privatised. Bernard Gray himself was subsequently brought in to head DES and lead reforms based on his proposals, although the privatisation attempt eventually fell through. In 2011 the Levene report, written by a former head of one of DES’ predecessor organisations, proposed a whole series of reforms across the Ministry of Defence as a whole, not just in procurement. The budgetary autonomy of the individual commands was enhanced in return for clearer lines of delegated responsibility – in procurement as well as other areas – while staff officers were encouraged to remain in their posts for longer to prevent the loss of corporate memory. These and a range of other reforms are currently being implemented across the MoD, while, as has been seen, the economies of the 2010 SDSR apparently dealt with the £36bn projected overspend the National Audit Office predicted in 2009. Many of the new processes and systems are undoubtedly making a significant difference to the effectiveness of British defence procurement, but it would be a brave person who predicted that the issues set out in these three posts are now a thing of the past.

Image: A Trident II missile fires its first stage after an underwater launch from a Royal Navy Vanguard class ballistic missile submarine via wikimedia commons.