Legacies of the Great War: the Experiences of the British and American Legions during the Second World War


Ashley is a DPhil student in the Globalising & Localising the Great War programme at the University of Oxford. You can hear a recording of the talk associated with this post here.

The year 2017 marks the centenary of American involvement in the First World War, but it is unlikely to draw the same level of public attention as the 2014 anniversary has in Britain. The Great War does not hold such prominence in the American national consciousness, a reality which is often attributed to its more limited role in the conflict. The United States entered the war three years into the fighting and lost approximately 53,400 men killed in combat (although including influenza deaths among servicemen raises the tally of American dead to more than 115,000). Britain, by comparison, suffered more than 700,000 dead during the conflict. It could be argued that such figures explain why the First World War has receded in American public memory while it retains such prominence in Britain, but it is worth noting that this was not the case in the years immediately following the war. As scholars such as Jennifer Keene, G. Kurt Piehler, Mark Snell, and Steven Trout have argued, the war left a considerable mark upon America and a culture of commemoration developed in the post-war years just as it did in Britain and other former belligerents. So when – and how – did these memory trajectories come to diverge so markedly?

Naturally, our thoughts turn to subsequent historical developments for this answer, and particularly to the Second World War, which is the predominant twentieth-century war remembered in the United States. How this latter conflict came to affect the memory of its predecessor is an intriguing question into which ex-servicemen’s organisations such as the British and American Legions can provide unique insights.

The Legions’ shared characteristics provide a baseline for comparison that may help illuminate the unique national contexts in which they were situated. The membership, leadership, structure, and relationship to the state of both groups mirrored one another – as prior work by Niall Barr, Graham Wootton, William Pencak, Thomas Rumer, and Stephen Ortiz has demonstrated. Former officers and the upper classes were over-represented among the national leaderships of both groups, while white middle-class men of small towns dominated the rank-and-file membership. Hierarchically structured with local, regional, and national outposts, both groups enjoyed close working relationships with their respective states, thanks to conservative political agendas. Perhaps the most significant similarity, however, is their common mission to perpetuate the memory of the First World War. This agenda came to inform their political and cultural engagements in Britain and America throughout the interwar period.

Yet despite the similarities in demographic and cultural terms, and the shared background and aims of these groups, in-depth research comparing the two is lacking. This is due in part to the differing national contexts mentioned earlier, but also because of important distinctions between the organisations themselves. The American Legion was considerably larger and more powerful politically than its British counterpart, claiming between 15 and 25 per cent of all Americans mobilised for the First World War as members and enjoying support from political elites such as Theodore Roosevelt, Jr. The British Legion, in contrast, represented at most 10 per cent of British veterans during its interwar peak. Its national presence was felt more through its annual Poppy Day appeal than through its influence on official policies.

Yet these differences only make the question of divergent memory trajectories even more pronounced, since it is in the United States – with its larger and more politically influential Legion – where the memory of the Great War subsides most. Perhaps the answer can be found in the differing national experiences of the Second World War?

That the Second World War delivered a blow to such groups so firmly anchored in the Great War is unsurprising. The onset of another global conflict forced both organisations to re-evaluate the legacy of the preceding war. Comparing the First World War with the Second thus became a frequent theme in British and American Legion discourses – especially early on in each nation’s war effort. Placing Great War veterans in relation to those being mobilised for the new fight was particularly important for the groups, whose membership rolls might increase via these future ex-servicemen later on.

At the heart of wartime discussion was a debate about comradeship – which my paper to the First World War Research Group at the Joint Services Command and Staff College on 14 February 2017 (available here) analysed in detail. Participation in the First World War served as a cornerstone in the collective identities of the British and the American Legions. Incorporating ex-servicemen who had not experienced the Great War challenged existing ideas of who could be considered a “comrade in arms.” Deviating too far from past views might jeopardise the memory of the First World War, both in terms of upholding its broader historical significance and its personal import. At the same time, recruiting Second World War ex-servicemen offered the chance to secure their futures as organisations. Discussions, therefore, needed to appeal to this generation, too.

Deciding who belonged and who did not boiled down to a much larger question with significant implications: why did the service of veterans from both the First and Second World Wars matter?

Examining discussions among Great War ex-servicemen in America and Britain offers a helpful case study demonstrating how the Second World War impacted narratives of the First within these differing national contexts. The extent to which the Legions continued to uphold the Great War as significant raises interesting questions about wider developments in national memory discourses. Understanding the conflict’s place in British or American national consciousness in 2017 is not only a matter of grasping these states’ respective war experiences, but of discovering how subsequent events served to shape its narratives as well.

Image: Crowd at an American Legion convention in New Orleans, 1922, via wikimedia commons. 

Conference Report: Commemorating the Centenary of the First World War


This post reflects upon an event held on January 12th in the River Room at King’s College London. The symposium featured contributions from Prof Jay Winter, Dr Helen McCartney, Prof Annika Mombauer, Hanna Smyth, Dr Jenny Macleod, Dr Heather Jones, and Dr Catriona Pennell. Recordings of all of the day’s proceedings are available online and can be found by clicking on the name of the individual participant.

How the conflict which subsequently became known as the First World War ought to be interpreted, understood, and given meaning became a hotly contested topic almost immediately after the outbreak of hostilities in the summer of 1914. Debates over what the War meant displayed, and continue to display, a multiplicity of interpretations, attitudes, and agendas – which often reveal far more about those who formed them than the events they aim to discuss. The centenary of the conflict – and the accompanying raft of commemorative activities and spike in public interest – has presented a unique set of challenges to historians, but also a valuable opportunity to reflect upon the relationship between their craft and broader society. This event, held at King’s College London on January 12th, brought together scholars from a range of backgrounds to discuss the varying national approaches to the centenary, and what these might tell us about how the First World War is perceived and understood in the twenty-first century.

(Contested) Identities of Remembrance

What is the future of identities in the process of commemoration? Jay Winter’s provocation proved a key theme that ran through the event’s proceedings. In the aftermath of Brexit, and given the increasingly pluralised nature of identities in the modern age, participants were invited to consider how these identities might become contested and fluid, rather than temporally fixed. Vladimir Putin’s use of the ‘sacred memory’ of the First World War as a way of rehabilitating the Russian Empire and providing a ‘militarist narrative for popular consumption’ is just one example of the slippery way in which identities can be mobilised for political gain. Other speakers tapped into this pervasive theme. Hanna Smyth touched on these contested identities when speaking about the work of the Vimy Foundation. For Canada, national and imperial identities of remembrance were not binary. The idea of a Canadian national identity can be broken down further: how does Newfoundland – a separate dominion during the war, but now part of Canada – remember the First World War? What about the Quebecois? What about those from the First Nations? These contested identities are further compounded by the problematic narrative of ‘brave soldiers’ who died for freedom – a narrative that is by no means unique to Canada. In the case of Ireland, the tense, often divisive, nature of identities of remembrance has supposedly been tackled head on during the centenary commemorations. Catriona Pennell spoke of the ‘de-orangification’ of the First World War narrative, and the move towards equality of sacrifice in Ireland’s commemoration. As historians, we need to be mindful of the inherent complexity associated with the construction and presentation of national identities; the centenary has certainly reminded us of this.

Silences of commemoration

Despite the high level of commemorative activity across many of the main belligerents, there remain obvious silences of commemoration. Refugees and the reconfiguration of imperialism offer just two broad examples. While attempts have been made to uncover and reintegrate the stories of the Canadian First Nations and Indigenous Australians into national commemorative narratives, there is still a continuing problem of visibility. Heather Jones spoke of the removal and muting of the ‘national’ narrative from France’s commemorative activity. While the international and the European have been a key focus of France’s commemoration, the continuing trauma of the nation’s colonial legacy and the often white, male face of commemoration has – unwittingly or not – proved another means of silencing complicated aspects of France’s past. From a British perspective, the focus on 1 July 1916 as a key focal point in the Somme commemoration is just one of the silences apparent in British commemorations. Cherry-picking certain operations or campaigns for commemoration, particularly those dominated by the army, is problematic. We are faced with similar problems when looking at the contributions of the army’s sister services. The British war in the air has been sidelined. In spite of its ubiquity, it will be commemorated in April 2018, aligning with the centenary of the birth of the RAF. The war at sea has been both marginalised and militarised, overlooking the important contributions made by the Merchant Navy to the war effort. In many respects, commemoration activity in Britain runs the risk of distorting our own popular perceptions of the conflict, particularly in terms of who fought and their relative contribution. What happens then when we widen our view to look beyond the national to the international? What implications does this cleft between historical reality and remembrance have both during and beyond the centenary?

The Historian and the Centenary & Democratisation of commemoration

The complex relationship between historical accuracy and commemorative activity, and thus between the historian and the centenary, was also evident in the participants’ discussion of the democratisation evident in the activities undertaken since 2014. Quite naturally, the speakers welcomed initiatives intended to encourage broader participation in the centenary and engagement with the First World War. Schemes such as ‘We’re here because we’re here’ and the poppy display at the Tower of London attracted widespread public interest; however, questions remain over the extent to which they prompted people to reflect upon the conflict and its meaning. Helen McCartney highlighted how programmes such as Letter to an Unknown Soldier produced a degree of engagement with the historical detail that suggests a greater familiarity with the record than critics might fear; however, there is good reason to doubt the extent to which the centenary has genuinely changed the well-established narratives about the War evident prior to 2014. As Annika Mombauer highlighted in relation to Germany, even scholarship that penetrates into the popular domain – as Christopher Clark’s The Sleepwalkers has done – tends to be simplified to the point of gross reductionism in popular debates, which are as much about the realities of the present as they are about the lost world of the past.

This all raises the question – what is the role of the historian during the centenary? Hanna Smyth observed that there is an implicit tension in those studying commemorative practice and the centenary being involved in shaping its conduct. What effect does this have on the scholarship of those involved? And, in turn, ought the academic study of commemorative practice to play a role in shaping how we commemorate? If the centenary is as much about the future as the past, what claim can historians make to inform a debate about events yet to pass?

Power & modern agendas – government, organizations, & the centenary

Ultimately, how we commemorate the First World War will always be determined by the needs of the moment. The iconic image of President François Mitterrand and Chancellor Helmut Kohl standing hand in hand in the pouring rain before the memorial at Verdun is one of the most powerful encapsulations of European unity and of a future devoid of conflict on the continent. Moments such as these are as much about power and political narrative as they are about historical accuracy, yet by attempting to mobilise the past for the needs of the present they also speak to the never-ending debate as to what history is, and ought to be, ‘for’. Indeed, the laudable inclusion of German and French representatives – alongside the British, Irish, and Commonwealth forces – at the centenary service for the Battle of the Somme at Thiepval mirrored the move towards increasingly transnational, inclusive approaches within the discipline of history itself.


The timing of the UK’s referendum on its membership of the European Union – coming as it did days before the July 1st service – underlined how far we still are from a common narrative or understanding of the conflict. The War was mobilised in support of both the leave and remain arguments, often with precious little care for historical realities. Historians have no claim over this process, but do have an obligation to engage with it and to work against the crude instrumentalisation of the past for the needs of the political moment. This process is ongoing, and will be the subject of further discussion by the First World War Research Group as we approach the culmination of the centenary cycle in 2018-19.

Image: Poppies At The Tower Of London 23-8-2014 via Flickr.

The Operational Level as Military Innovation: Past, Present and Future


As Defence-in-Depth once again spends time exploring the concepts of the operational level and operational art, it seems an appropriate time to relate my previous contribution on the subject to the other research strand that I have previously blogged about: military innovation. Though the popular focus of military innovation tends to be on new technologies and weaponry, much of the theorising about the causes of military innovation takes evolutions in doctrine as its starting point. I will return to the different theoretical approaches to military innovation in a future post but, for now, the important point is that the operational level is, first and foremost, a doctrinal innovation and that this is crucial to any debate about its current and future worth. As discontent with the current form of the operational level grows, placing the debate in appropriate context becomes ever more important.

Before exploring the history of the operational level, we need to understand why doctrine has often been the source of scholar-practitioner theorising about the causes of innovation. First, a critical practical issue for any academic is the quality of primary source material on a subject. For historians trying to understand the dynamics of military reform in a given era, shifts in doctrine offer concrete evidence of change being enacted by the armed force in question. One can trace a doctrine’s origins back through the system and glean invaluable insights into how and why it came into being because, most obviously, it is written. Further, the formal character of its codification increases the likelihood of this traceable genealogy. Second, though the exact purpose of doctrine varies from military to military, its basic function is to provide authoritative guidance that helps militaries fulfil their raison d’être: usually, to be prepared to successfully wage war. Certainly, ‘field manuals’ and ‘warfighting doctrine’ have that purpose (the clue being in the titles), and so it is a reasonable assumption that they should also reflect the most current, institutionally agreed, thinking on how to actually conduct warfare. Inevitably, the more rapidly the character of conflict is changing, the harder it is for doctrine to keep pace but, sooner or later, it either reflects successful innovation or fails. It is no coincidence, therefore, that Barry Posen chose inter-war doctrine in Britain, Germany and France to analyse the drivers of innovation and that studies of doctrine formulation have been an integral part of military innovation theory ever since.

This is relevant here because the operational level, now integral to how we think about warfare, is, at its heart, doctrine. It makes its way into our consciousness because it takes hold as a concept that relates to bigger issues of strategy and campaigning but it formally originates in a specific piece of doctrine: US FM 100-5 of 1982. The distinction between the operational level and operational art was subsequently made in the 1986 variant. Ok, fine, so what? Well, though the formalisation of the operational level originates in the United States Army in 1982, thinking about ‘operational art’ long pre-dates it and, in each of its guises, is also a doctrinal response to specific circumstances. Taking three highly influential moments in turn: first, the Prussian General Staff sought to apply the enduring lessons of Clausewitz and their practical experiences in the Austro-Prussian (Seven Weeks) War of 1866 and the Franco-Prussian War of 1870-71 to a highly innovative intellectual debate about the future of war. This debate encompassed several related innovations in warfare including the physical expansion of armies and of the battlespace and the impact of related technological advances in firepower, mobility and communications. Emphasis on decisive battle remained in the thinking of Moltke the Elder and the officer class, but appreciation of the inter-connected nature of the battlespace grew, presaging modern thinking about operational art. We see these developments in the writings of key thinkers, in the Prussian Staff College, in ‘doctrine’ (such as it was) and, of course, in practice.

Second, after the First World War, the Germans and Soviets in particular respond to their own very specific experiences by developing cutting-edge combined arms and armoured manoeuvre concepts. Their shared experience of defeat and the Soviet experience of a subsequent civil war fought over huge distances encouraged radical experimentation and boldness when thinking about future war. In both countries, doctrine again reflected this innovation and though the Germans remained resistant to any formalisation of an operational level they pushed the technological boundaries and skill at campaigning to far greater effect. The Soviets, by contrast, fell behind in technological terms once Stalin imposed his own brutal control on the military but their doctrinal innovations of the 1920s and early 1930s advanced thinking about the link between strategy and tactics, operational art in other words, in a profoundly important manner. I would argue that they are actually more important in this respect than the Germans. Again, both eventually test their theories in the crucible of war and while German combined arms brilliance influences the physical component (how to conduct high-intensity warfare) to this day, Soviet thinking has had the greater impact on the conceptual (how to conceive of warfare).

Nowhere is this more evident than in the final snapshot: the US formalisation of an operational level. Partly in response to defeat in Vietnam in the 1970s and to the Soviet creation of Operational Manoeuvre Groups (OMGs) in the 1980s, the US military formalises the operational level. The concept spreads into NATO and then more broadly. Again, this innovation is doctrinal in origin and conceived in response to very specific challenges. Further, despite recent caricatures of the US military debate as founded on fundamental misunderstandings of the historical evolution of operational thinking, closer study of the genealogy of the doctrine actually reveals an open, intellectual and sophisticated analysis of what had gone before that is much more in keeping with the traditions of the Prussians and Soviets. True, there are misunderstandings in US application of the concepts but, arguably, they served a very practical purpose in the context of the 1970s and 1980s. It, too, has been tested in battle with great success in the first Gulf War (1991) and Iraq (2003), but has proved increasingly problematic in dealing with the kinds of complex conflicts presented by Iraq and Afghanistan. These problems have inevitably led to the present debate about its current and future utility.

What are the implications of all of this for academics and modern militaries trying to think critically about operational art and the operational level? Well, there are lots of interesting lessons about the drivers of military innovation but a more profound lesson perhaps relates to the point that the concept is, first and foremost, doctrinal. The operational level does not have any intrinsic right to remain at the heart of how we conceive of modern warfare. I have argued in the past that only operational art, in its various guises, is a constant in campaigning. Thinking about a ‘level’ evolves in response to very particular threats in very specific circumstances and only becomes formal in the 1980s. It changes in form throughout history and is not a constant in warfare: you don’t necessarily need an operational level to enable operational art. Critics of the ‘level’ therefore have a point but, as an advocate of its continued utility, I would argue that its failings are not evidence of its redundancy and inevitable demise but rather the consequence of far too little time in recent decades spent on genuinely innovative thinking about its current and future form. Reminding ourselves that the operational level is an example of innovation in military thinking, of purposeful doctrine, should also serve as a reminder that good doctrine requires constant critical engagement to remain relevant. Time, perhaps, to stop bashing the concept and start thinking innovatively about it once again?

Image: Soviet stamp depicting Marshal Mikhail Tukhachevsky (wikicommons). Tukhachevsky was executed during Stalin’s Purges but rehabilitated as a national hero along with several other key military thinkers during the 1960s.

The operational level of war and maritime forces


The recurrent debate over whether or not the operational level of war exists can sometimes feel like the land component talking to itself.  The vast majority of what is written about the operational level, and operational art, focusses predominantly on land operations.  It is rare to find an acknowledgement of the significance of the other components or other government agencies, let alone a systematic consideration of how they might view or fit into the operational level.  Yet one of the strongest arguments for the reality of the operational level – and for its continuing utility despite the changing character of conflict – is that this is where the different components come together, along with tools of government policy other than military power, for the design and implementation of a campaign plan to achieve political objectives.  This approach to understanding the operational level is rather different to that envisaged by Soviet thinkers in the interwar period, German panzer commanders in 1940 or even NATO in the 1980s.  Yet this formulation – which reflects modern British and NATO doctrine – is not a twisting of the original concept but rather a pragmatic updating of it, an evolution to allow it to fit contemporary circumstances.

The point remains, however, that the literature on the operational level and operational art is dominated to an unhealthy degree by land power.  It was for this reason that I wrote an article for the RUSI Journal that considered the operational level from the maritime perspective.  I argued that the operational level does apply to the maritime environment, albeit in ways that to some extent differ from the land.  These differences matter because maritime forces contribute to operational and strategic goals in distinctive ways; a maritime commander needs to understand these differences in order to provide the best support to a joint campaign, while a joint commander needs to understand them in order to get the most utility out of the maritime component, as well as to appreciate that it might have its own requirements to enable it to make this contribution.

The key differences in the operational level for maritime forces are that the relationship between attack and defence is more fluid than on land; that the levels of war are more often short-circuited (the current understanding of the operational level is sufficiently flexible to acknowledge that the strategic and the tactical can sometimes be directly linked); and, in particular, that distance and time apply in different ways.  Frequently, when those from the land (and even to some extent air) component are seeking to understand how or why a maritime campaign or contribution to a joint campaign differs, the answer lies in them considering a bigger map or a broader timescale.

For the land component, maritime forces might be acting in close conjunction with them at the tactical level, for example when conducting an amphibious landing, or providing fire support or surveillance over the battlefield.  The Iraq invasion of 2003, in particular the landings and subsequent operations on the Al Faw peninsula, provides a case in point.

More often, maritime forces will be acting out of sight of the land but as part of a coordinated joint campaign, where the effects of the activities of the different components (and other government agencies) come together at the operational level.  The Mediterranean theatre in the Second World War offers several fine examples of this; others might include the US amphibious feint during the 1991 Gulf War, which helped to fix and divert a significant proportion of the Iraqi ground forces away from the intended advance of the coalition.  In both of these cases, of course, maritime forces provided broader support to land forces and land-based air forces over a prolonged period before, during and after specific operations, from initial deployment to post-war recovery.

Sometimes, the activities of maritime forces might be conducted quite separately from those of land forces, in campaigns which complement those ashore with their combined effect coming together at the strategic level.  The misnamed ‘Battle of the Atlantic’ in both world wars had to be won to further a range of military and broader strategic objectives – not least to allow other campaigns to be fought (including, in the Second World War, the strategic bombing offensive).  Both of these campaigns were strikingly joint, as well as requiring the carefully tailored input of other arms of government, from intelligence to diplomacy.

Another fascinating example of operational thinking in the maritime environment is the US forward maritime strategy of the mid-1980s.  This represented an imaginative attempt to apply the US and NATO advantage in naval power to gain leverage over the Soviet Union in a crisis or to put significant military pressure on it in the event of war.  Rather than sit back defensively behind the Greenland-Iceland-UK gap and fight to protect Atlantic shipping from there, this new approach envisaged pushing carrier battle groups well to the north (primarily, though also into the eastern Mediterranean and the Far East to threaten the Pacific coast of the USSR).  By threatening Soviet territory and the bastions for their nuclear missile-armed submarines, this action would compel the USSR to pull its naval and maritime air forces out of the Atlantic (thereby achieving a defensive aim of protecting shipping that was carrying allied reinforcements for the land campaign) and also to divert air and even land forces from the battle on the Central Front, thereby indirectly supporting NATO land forces – in addition to any direct support that US carriers could subsequently provide with air and missile strikes.  The concept was, to say the least, not without its critics, yet two points stand out.  First, it was without doubt taken very seriously by the USSR.  Second, it represented thinking at the operational level, aiming to use maritime forces in a creative way to achieve campaign and wider strategic objectives at sea while also supporting activities on land from the sea.  In passing, of course, it also provides a useful pointer as to how the West might make creative use of its maritime power to put pressure on an aggressive Russia or an assertive China, not least in threatening to spread any conflict to areas other than those in which they would ideally prefer to fight.

The debate over the operational level could therefore usefully raise its gaze from land warfare alone.  Doing so could clarify the existence and utility of the operational level, while also improving the coordination of military and non-military instruments of policy that is needed for a successful campaign.

Image: US Navy (USN) F-14A Tomcat, Fighter Squadron 211 (VF-211), Naval Air Station (NAS) Oceana, Virginia Beach, Virginia (VA), in flight over burning Kuwaiti oil wells during Operation DESERT STORM, via wikimedia commons


Using Military History: Doctrine as an Analytical Tool for Historical Campaigns


James Wolfe was a great advocate of using military history to help inform his understanding of new situations and challenges he faced throughout his career. ‘The more a soldier thinks of the false steps of those that have gone before him, the more likely is he to avoid them’, he wrote after visiting the battlefield of Culloden in 1751. ‘On the other hand, The Examples worthiest of imitation should never be lost sight of, as they will be the best & truest guides in every undertaking.’

Wolfe had fought at Culloden, but he found a visit to the field highlighted errors which in the heat of the moment had not been apparent to him on the day itself. A few years later, he wrote to the father of a newly commissioned officer with his advice on what to read to get a better understanding of his profession. ‘In general, the lives of all the great Commanders and all the good Histories of Warlike Nations, will be very instructive’, Wolfe advised. ‘In these days of scarcity & in these unlucky times, it is much to be wish’d that all our young soldiers of birth & education, would follow our brothers, steps, and as they will have their turn to command, that they would try to make themselves fit for that important trust.’

Half a century later, a young Rifleman named Thomas Mitchell was deployed to the Peninsula as part of Wellington’s Army. Like Wolfe, Mitchell had an abiding interest in military history. ‘Rivers, woods, ravines, mountains etc etc together form the great book of war;’ Mitchell had noted, ‘and he who cannot read it must for ever be content with the title of a brave soldier, and never aspire to that of a great general.’ A skilled draftsman, Mitchell was spotted by Wellington’s Quartermaster-General and appointed a mapmaker. Mitchell’s sketches identified the routes along which the British Army was to advance in its ultimately successful campaign against the French in 1813.

Even more recently, a senior general in the US Marines noted the true value of studying history. ‘The problem with being too busy to read is that you learn by experience (or by your men’s experience), i.e. the hard way. By reading, you learn through others’ experiences, generally a better way to do business, especially in our line of work where the consequences of incompetence are so final for young men.’ This quote has had quite an airing in the last few weeks. The general’s name: James Mattis.

Let’s assume, then, that the study of military history is a useful way of learning about modern military planning and campaigns. One way of creating a linkage between the past and the present is using modern doctrine to analyse the challenges and obstacles facing historical commanders. Modern doctrine is, in part, based on the lessons of history: a distillation of what worked in the past, updated to fit within the contemporary context.

The term ‘Centre of Gravity’ means something very particular in modern doctrinal terminology, but let’s take the concept more broadly and apply the original Clausewitzian definition: ‘The hub of all power and movement, on which everything depends. That is the point against which all our energies should be directed.’ It then becomes somewhat easier to identify and debate what the centre of gravity of any power, of any military, fighting at any point in history, might have been.

Take another doctrinal term, Decisive Conditions. Again, a long and impenetrable definition: ‘A combination of circumstances, effects, or a specific key event, critical factor, or function that when realised allows commanders to gain a marked advantage over an opponent or contribute materially to achieving an operational objective.’ Put another way: ‘The conditions General X needed to achieve in order to execute his operational plan’. Every military campaign occurred in a sequence. Those campaigns might not have been planned using terms such as decisive conditions, but all of them would have been planned sequentially, with the key decision-maker knowing full well that he would have to achieve a number of small-scale objectives before he could focus on the main event.

Nor is such an analysis of use only to the military practitioner. Military historians frequently overlook the importance of seemingly insignificant details in the planning and execution of campaigns: their focus is on the big picture, the decisive battle. Little do they realise that without a seemingly unimportant detail, the whole operation might have gone awry. Doctrine, then, as a derivation of what has worked in history, might be a little dry, and filled with jargon that renders its meaning almost impenetrable, but cut through that noise and see the concepts defined in broad terms, and they become a useful analytical tool for learning from, and learning about, history.

Image: Queen’s Royal Hussars (The Queen’s Own and Royal Irish) (QRH) Junior Non Commissioned Officer (JNCO) Cadre course on Sennelager Training Area

The Russian military’s view on the utility of force: the adoption of a strategy of non-violent asymmetric warfare

By Dr. Rod Thornton

Russian military thinking seems to have reached the point now where the idea of using force intentionally in conflicts with peer-state adversaries has been almost completely ruled out. This seems a radical move. But there has been a clear recognition within this military that better strategic outcomes for Russia will result from the use of non-violent ‘asymmetric warfare’ activities rather than those which will or can involve the use of force – such as conventional war or hybrid warfare.

Asymmetric warfare, of course, and in a nutshell, is a method of warfare employed by the weak against the strong, where the former seeks to level the battlefield with the latter. The weaker party, using its own relative advantages, attempts to turn the strengths of its opponent into vulnerabilities, which can then be exploited. The means used are ones which, in essence, cannot be used in return – reciprocated – by the target (‘asymmetrical’ means that which cannot be mirror-imaged). Fundamentally, asymmetric warfare is all about activity that, rather than bludgeoning a target into strategic, operational and tactical defeats, actually manipulates it into them. And it is all done, ideally, with no use of force. As Sun Tzu, the ‘father’ of asymmetric thinking, told us, the acme of skill in the conduct of warfare is to defeat the adversary without the use of any force. See, for instance, my book Asymmetric Warfare: Threat and Response in the 21st Century.

It was President Vladimir Putin who, back in 2008, first pointed his military in the direction of asymmetric warfare. In suggesting ways to counter what was accepted as western military superiority, Putin advised that the armed forces ‘should not chase after quantitative indicators … our responses will have to be based on intellectual superiority. They will be asymmetrical, less costly’. Putin understood that efforts to match NATO’s military power, especially in terms of technology, would be unavailing and prove ruinous for the Russian economy. The ‘cost’ of engaging in open warfare was also unsupportable. In essence, the Russian military would have to become more subtle – it would have to employ ‘intellect’ in attempts to create strategic effect and do so, ideally, without the use of force. For what Russia needs to avoid, of course, is the use of any military violence in situations that might cause NATO to invoke Article 5 and thereby set in train a costly conventional war.

Surprisingly, in many ways, the Russian military has readily adopted asymmetric thinking. Russian military journals have come to be suffused over the last few years with articles lauding the qualities of ‘asymmetric warfare’ (asymmetricheskie voina). Among the senior officers pushing for the tenets of asymmetric warfare to be adopted throughout the armed forces is Col.-Gen. Andrei Kartapolov, the current Deputy Chief of the General Staff (and aged only 53). It is significant that such a high-flyer (he previously held the prestigious post of commander of the Western Military District) is among those urging the capture of asymmetric warfare techniques in doctrine and for its methods to be taught in military academies ‘down’, he says, ‘to a very low level’. Such methods, he goes on, will ‘enable the levelling of the technological superiority of the enemy’. In his ‘principles of asymmetric operations’, Kartapolov talks of the ‘concentration of efforts against the enemy’s most vulnerable locations (targets) [and the] search for and exposure of the enemy’s weak points’. The specific emphasis, he points out, will be on ‘non-violent’ (nenasil’stvennoe) methods of asymmetric warfare.

Other articles present similar arguments for the use of asymmetric warfare by the Russian military. The overall message for this military, as the influential military newspaper Red Star (Krasnaya Zvezda) summed up last year, is that when it comes to the conduct of warfare in the current era, ‘The main emphasis must be placed on asymmetrical means and methods’.

The principal aim of Russian asymmetric warfare is to create degrees of destabilisation (destabilizatsiya) within targeted states and within collectives of targeted states (e.g. NATO, EU). A target that is destabilised (in whatever sense) is one that, in Russian military thinking, is more susceptible to Russian leverage, i.e. it can be manipulated more easily. The range of methods used to engineer such outcomes is mostly based on the use of information (for more on this, see my paper in the RUSI Journal titled ‘The Changing Nature of Modern Warfare: Responding to Russian Information Warfare’). Information warfare targets the strengths of NATO states – the fact, for instance, that they are democracies and have free media – and turns them into vulnerabilities: elections can be manipulated; opinions can be altered to Moscow’s advantage; agents provocateurs can operate with impunity; journalists and academics can be paid to present a certain line, etc. The West’s use, moreover, of high-tech information systems in all forms of social, financial, economic and industrial life, again, while providing great strengths, also presents vulnerabilities to Russian cyber operations – in both the cyber-psychological (most important in Russian thinking) and the cyber-technical realms. Perhaps, however, the greatest degree of Russian leverage/manipulation will be generated by the targeting of individuals – decision-makers, political and military leaders, etc. These can often be co-opted or blackmailed if the right incriminating information – kompromat – is available.

And all this plays to the Russian military’s own strengths – its ‘own relative advantages’. While it might lack ‘quantitative indicators’ – the tanks, aircraft and ships – it does have a massive capacity to gather information, to disseminate (mis)information and to employ considerable cyber abilities. There is also, and importantly, a history and a culture in both the Russian and Soviet militaries of emphasising and employing to good effect non-violent military means. Perhaps the key term here is maskirovka, one which covers considerably more than just the use of ‘camouflage’.

Conventional military assets are still needed, of course. But these days they may be seen to be acting in a supporting role for the asymmetric warfare campaign against NATO interests. Their outwardly sabre-rattling movements, deployments and activities are seen as means of creating ‘indirect leverage’ that can, in turn, manipulate western actors into making counter moves that actually suit Moscow’s purposes.

The Russian military is now also employing asymmetric warfare methods that these western actors find very difficult to retaliate against on a like-for-like basis – reciprocity is largely denied. Russian democracy has become very much a ‘managed’ one and this closes down many avenues of retaliation. Russia is also not open to cyber attack in the same way that western states are and defences in the country are more pronounced.

The Russian military can use, and is using, non-violent asymmetric means to considerable strategic advantage against NATO. It is, wherever one looks, destabilising and manipulating to good effect. Given this continuing situation and the strategic results that are patently being produced in NATO countries, why would the Russian military need to consider the conventional use of force? What utility does it have?

Image courtesy of Wikimedia Commons.

Breaking: Opening salvo fired in coming war with machines

Dr. Ken Payne

DeepMind, the world’s leading Artificial Intelligence outfit, has released a remarkable new study with implications for those of us interested in war, cooperation, and the strategic ramifications of AI.

You can read and watch it here.

In short, their agents demonstrated the ability to relate socially in a competitive environment. When resources (green apples) were plentiful, the agents cooperated happily. When they were scarce, all hell broke loose – DeepMind had endowed the agents with the ability to shoot each other, and that’s exactly what they did.

So what?

In my next book, I take a long-run view of strategy, all the way from early human evolution through to the advent of AI. It’ll be out soon, but since DeepMind has made it relevant, here is a short preview.

First things first – DeepMind’s agents may have cooperated or fought; but they didn’t do so for the same reasons we did. They were making ‘rational’ decisions about whether to cooperate on the basis of individual gain, of the sort that will be familiar from game theory puzzles, like the famous Prisoner’s Dilemma.
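The payoff logic behind that kind of ‘rational’ cooperation decision can be sketched in a few lines. This is an illustrative textbook Prisoner’s Dilemma, not code from the DeepMind study; the payoff numbers are the conventional ones and are assumed here purely for illustration:

```python
# Illustrative textbook Prisoner's Dilemma (not DeepMind's code).
# Payoffs follow the standard ordering T > R > P > S.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_response(their_move):
    """Pick whichever of my moves maximises my own payoff."""
    return max("CD", key=lambda my: PAYOFFS[(my, their_move)])

print(best_response("C"), best_response("D"))  # prints "D D"
```

Whatever the other player does, defecting yields the higher individual payoff, which is why mutual defection is the one-shot equilibrium even though mutual cooperation would leave both players better off.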

Human cooperation is a puzzle – why do we cooperate when the benefits from slacking off can be substantial, especially if we deceive others about it? Natural selection happens at the level of the gene, not the group – so why should I risk my genes to help the group out?

One answer (mine, in fact) is war: the pressure of intergroup conflict, which we now think was pretty ubiquitous. Groups that cooperate together win against groups that do not. This is particularly true when weapons are pretty primitive, and fighting is in the form of a melee, rather than a one on one duel. If you don’t cooperate with group members there’s a good chance that both you and your group will go out of business.

Frederick Lanchester did the maths back in 1916, in his catchily named ‘square law of armed conflict’. When the force of many could be concentrated against the few, there were disproportionate gains from scale. The takeaway: there is a massive military advantage to be had from being in a larger group; and larger groups that cooperate will win against smaller groups that do not.
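The square law can be sketched numerically. The sketch below is my own illustration, not Lanchester’s original presentation: it integrates the paired attrition equations dA/dt = -bB and dB/dt = -aA with simple Euler steps, and the effectiveness coefficients and force sizes are arbitrary assumptions:

```python
# Illustrative sketch of Lanchester's square law (assumed parameters):
# two forces attrit each other at rates proportional to the enemy's size,
#   dA/dt = -b*B,   dB/dt = -a*A,
# integrated here with simple Euler steps.

def fight(A, B, a=0.01, b=0.01, dt=0.01):
    """Attrit two forces until one is destroyed; return the survivors."""
    while A > 0 and B > 0:
        A, B = A - b * B * dt, B - a * A * dt
    return max(A, 0.0), max(B, 0.0)

# A 2:1 numerical advantage with equal per-soldier effectiveness:
survivors_a, survivors_b = fight(200.0, 100.0)
print(round(survivors_a), round(survivors_b))  # larger side keeps ~173 of 200
```

With equal effectiveness, a 2:1 advantage doesn’t just win: because the square-law invariant aA² − bB² is conserved, the larger side finishes with roughly √(200² − 100²) ≈ 173 soldiers, about 87% of its strength, which is the ‘disproportionate gain from scale’.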

Two results follow. First, we instinctively cooperate, especially with those we identify as being in our group (conversely, we are chippy xenophobes towards outgroup people). Second, there’s a clear incentive to form larger groups – including of non-related people. Hence a motive for the development of cultural identities – a shorthand way of saying who can be trusted in a tussle. Over a long period of time, and beginning some half-million years ago, humans developed unique social abilities – including a sophisticated empathy and theory of mind, allowing us to gauge what other individuals believed. Could we trust them not to malinger? In war, we, like chimpanzees today, worked the odds in our favour – fighting primarily by raid and ambush, and attacking with surprise and massively advantageous force ratios. Victory goes to the big battalions.

The arrival of culture, thanks to all that cooperation, modified the situation somewhat: new strategies were available – including, of course, defence in depth! Scale mattered, but clever thinking might offset it.

Back to DeepMind, and the impending rise of the machines. Their agents cooperate to harvest digital apples; but the logic of that cooperation is not the same as that which drove humans to develop culture, empathy, theory of mind, and instinctive cooperation. Their cognitive architectures are far different from ours: they are not embodied biological intelligences, whose very survival depends on navigating a rich social terrain. They are not enmeshed in a biological world of natural selection; driven – often unconsciously – by the imperative to propagate their genetic code. They do not exist in groups that are in a constant state of conflict with neighbours. Harvesting a digital apple does not require the same cooperation as a mammoth hunt. DeepMind’s ‘toy universe’ is a much simpler affair – like an 80s arcade game, but with better baddies.

So, let’s all relax, no?

Well, to a point. Artificial General Intelligence, if it ever manifests, is unlikely to mirror human intelligence, which evolved as the answer to a particular environmental problem – replete with its emotions, massively parallel unconscious deliberation, and narrowband, self-aware, reflective consciousness. You could perhaps model that in cyberspace – but it would just remain a model. Philosophers like to pose the conundrum: what is it like to experience life as a bat? Answer: we’ll never know, but you can bet it’s much closer to us than the ‘life’ of a machine.

Still, groups of AI will face very similar meta-problems: scarce resources; the possibility of conflict to secure them; and a need to understand what other agents are likely to do. Their inclination to cooperate or compete with those agents may differ radically from our own. Watch this space.

Image courtesy of Wikimedia Commons.

2017 – the Year of the Royal Navy: time to get real?

Professor Andrew M Dorman and Professor Matthew R H Uttley

Centre for British Defence and Security Studies

As we entered 2017 the Ministry of Defence earmarked 2017 as the ‘year of the Royal Navy (RN)’. In the press release that accompanied the announcement key milestones for 2017 were highlighted, including the new aircraft carrier HMS Queen Elizabeth leaving Rosyth and commencing sea trials, the launch of her sister ship HMS Prince of Wales and the fourth Astute-class SSN, the arrival in the UK of the first of four new Tide-class tankers and the opening of the first permanent RN base East of Suez in more than half a century.

This built on the government’s 2015 National Security Strategy and Strategic Defence and Security Review (NSS/SDSR pp.30-1). As part of Joint Force 2025, the RN would continue to maintain the continuous at sea deterrent with four new nuclear powered ballistic missile submarines. The NSS/SDSR also pledged to bring into service both of the large aircraft carriers currently under construction in order to have ‘one available at all times’. The government also promised to bring forward the purchase of F-35B Lightning II aircraft so that there would be 24 aircraft available by 2023. Looking further ahead, the 2015 review committed the government to buy three new logistic ships to support the fleet, in addition to the four tankers that were due to have entered service from 2016. The government also confirmed that a fleet of 19 destroyers and frigates would be maintained with the hope that ‘by the 2030s we can further increase the total number of frigates and destroyers’ (NSS/SDSR pp.30-1).

Since then, the Defence Secretary, Sir Michael Fallon, has confirmed ‘… that the expansion of the Royal Navy is fully funded’ (Oral Questions on defence, 30 January 2017). Yet behind this rosy façade is a somewhat different picture. In the second half of 2016 the financial pressure on the RN’s budget became evident. Over the summer, technical problems with the Type 45 destroyer’s power plant emerged, leading to all the ships being temporarily moored alongside. In November 2016, it emerged that the Harpoon anti-ship missile would leave service in 2018 without a replacement in the near term, rendering RN ships reliant on their deck guns until the Wildcat helicopters are equipped with an air-to-surface missile. Since Christmas the government has been plagued by revelations concerning the test of one of its Trident missiles last June.

Looking behind the veneer of the 2015 NSS/SDSR, a whole series of other cutbacks are evident. The Landing Platform Helicopter (LPH) HMS Ocean is scheduled to leave service in 2018 without replacement. Instead, the second aircraft carrier will be equipped with some amphibious capability. The obvious question this raises is what happens if HMS Prince of Wales is fulfilling the strike carrier role and the government needs both a strike carrier and an LPH? The pledge to bring forward the acquisition of the planned 138 F-35Bs so that 24 frontline aircraft would be available from 2023 sounds like a positive development for the RN. However, with each carrier capable of carrying 36 F-35Bs in the strike role, the planned frontline of 24 F-35Bs by 2023 leaves the UK dependent on the US Marine Corps to fill the deficit. Moreover, sustaining the planned Maritime Task Group will be hampered by delays in the delivery of the four new tankers and the continuing absence of an order for the promised three stores ships.

At the same time, the RN is beset with personnel challenges, as the most recent personnel statistics have shown, with shortages in a number of specialist areas (MoD 2017). As a consequence, the MoD has acknowledged that one of its frigates, HMS Lancaster, was being effectively mothballed pending a refit later this year. Similarly, as the LPD HMS Albion is brought out of reserve and refit, her sister ship will be put into reserve ahead of a forthcoming refit. These factors suggest that the uplift of 400 in personnel numbers announced by the 2015 NSS/SDSR is insufficient to allow the RN to crew its existing ships, let alone ensure that one of the new aircraft carriers is always available. As a result, there are rumours that Royal Marine numbers will be cut to free up posts for the dark blue element of the navy.

Personnel shortages only partially explain the decision not to restore the amphibious brigade capability, which was cut in the 2015 review, despite growing fears about Russia and the need to support the UK’s NATO partners. Instead, much of the amphibious capability is fulfilling other tasks in place of other ships. Thus, HMS Ocean is currently acting as the command ship for the US/UK deployment to the Gulf. At the same time, the RN has struggled to commit ships to the various NATO standing forces and some of its tasking is being gapped. Put simply, the RN appears too small for its mandated tasks, but the government remains unwilling to acknowledge or address this.

One might be forgiven for holding out hope for the longer term. Before Christmas, Sir John Parker published his report designed to influence the forthcoming ‘National Shipbuilding Strategy’, which called for major changes and investment in the UK’s naval shipbuilding industry. Many of the recommendations appear sound, including gearing the new Type 31 frigate for export and seeking to break BAE Systems’ monopoly on the construction of major warships. Such recommendations are, however, strangely familiar: both the Type 23 frigate, which the Type 31 will partially replace, and the Upholder class of conventionally powered submarines were originally designed in the 1980s with the export market in mind. It is noteworthy that no foreign sales were achieved, and the Upholders and three of the Type 23s were ultimately sold second-hand to Canada and Chile respectively. Moreover, if the government truly wants to implement a viable long-term national shipbuilding strategy, then it needs to bear in mind the life-cycle of its ships and how this will influence the RN’s force size. For example, a RAND study of the UK’s nuclear submarine industrial base concluded that to maintain the industry’s capacity a submarine needed to be ordered every two years (Schank 2005). If one assumes an average lifespan of 30 years, then the submarine force needs to comprise some 15 boats. Currently the force comprises just four SSBNs and seven SSNs, with no planned future increases.
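The steady-state arithmetic behind that RAND drumbeat figure is simple enough to sketch. This is a back-of-the-envelope illustration, not the RAND model itself, and the function name is mine:

```python
# Back-of-the-envelope sketch (not the RAND model): ordering one boat every
# `order_interval_years` years, with each boat serving `lifespan_years` years,
# sustains a steady-state force of lifespan / interval boats.
def steady_state_fleet(lifespan_years: int, order_interval_years: int) -> int:
    return lifespan_years // order_interval_years

print(steady_state_fleet(30, 2))  # prints 15, against a current force of 11
```

The gap between the 15-boat steady state implied by the industrial drumbeat and the current 11-boat force is the point at issue: either the drumbeat slows (putting the industrial base at risk) or the force must grow.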

Moreover, lurking in the background are question marks over the wider affordability of the Ministry of Defence’s (MoD) overall Equipment Plan between now and 2026. The most recent edition was published in January 2017 and the financial risks contained within were highlighted in the accompanying National Audit Office report. Four risks stand out. First, previous iterations of the Equipment Plan had contained significant amounts of uncommitted ‘contingency’ funding to cover unforeseen programme cost increases and new requirements. This reserve has been almost entirely allocated to new programmes, with the result that there is little flexibility in the budget despite the MoD’s extensive previous experience of unforeseen programme overruns and cost increases. Second, one of the results of the Brexit referendum vote has been a fall in the value of Sterling against both the US Dollar and the Euro. Whilst the MoD has taken some mitigation steps, the January NAO report highlights that these ‘hedges’ will be insufficient unless the value of Sterling starts to rise. In particular, the significant cost of existing equipment orders in US dollars from the US – including the Boeing P-8A, Apache AH-64E, F-35B and Successor missile compartment tube programmes – means that further cuts to the MoD’s equipment programme are almost certain. Third, the affordability of the Equipment Plan is predicated on a shift of funds from other areas of the defence budget. The risk here is that the MoD’s assumptions – that personnel costs will rise below the rate of inflation, that significant income can be generated from the sale of assets, and that major efficiency savings can be achieved – might prove overly optimistic. Indeed, failure to achieve the requisite savings in any of these areas could derail defence budgeting assumptions and, by implication, the future viability of the MoD’s Equipment Plan. Finally, the budget is predicated on a 1% real-terms increase in defence spending for each of the next ten years.
Ironically a similarly optimistic outlook was ultimately the undoing of the ill-fated 1981 Nott Review.

These factors, together with concerns over whether the UK’s GDP will continue to grow in the post-Brexit era, raise serious doubts about whether 2017 will be the ‘year of the Royal Navy’ or merely the nadir at which the financial chickens finally come home to roost.

Image: HMS Ark Royal, courtesy of Wikimedia Commons.

Suggested further reading

‘2017 is the Year of the Navy’, Ministry of Defence Press Release, 1 January 2017, https://www.gov.uk/government/news/2017-is-the-year-of-the-navy

‘An Independent Report to inform the UK National Shipbuilding Strategy’, Ministry of Defence, 29 November 2016, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/572532/UK_National_Shipbuilding_Strategy_report-FINAL-20161103.pdf

Cabinet Office, ‘National Security Strategy and Strategic Defence and Security Review 2015: A Secure and Prosperous United Kingdom’, Cm. 9161 (London: TSO, 2016), https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/555607/2015_Strategic_Defence_and_Security_Review.pdf

Ministry of Defence, ‘Royal Navy and Royal Marines Monthly Personnel Situation Report for 1 January 2017’, 9 February 2017, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/590163/20170207_-_FINAL_-_RN_RM_Monthly_Situation_Report_January_2017-rounded.pdf

National Audit Office, ‘Ministry of Defence – The Equipment Plan 2016 to 2026’, HC.914, session 2016-17, (London: TSO, 2017), https://www.nao.org.uk/wp-content/uploads/2017/01/The-Equipment-Plan-2016-2026.pdf

John F. Schank, Jessie Riposo, John Birkler and James Chiesa, ‘The United Kingdom’s Nuclear Submarine Industrial Base, Volume 1: Sustaining Design and Production Resources’ (RAND, 2005).

Beyond Effectiveness on the Battlefield: reframing Military Innovation in terms of time, networks and power

This is one in a series of occasional posts from scholars outside of the Defence Studies Department. If you would be interested to contribute to this series please contact the editors: Dr Amir Kamel and Dr David Morgan-Owen


Dr Matthew Ford is a Senior Lecturer in International Relations at the University of Sussex. His book Weapon of Choice – small arms and the culture of military innovation is published by Hurst & Co, London and Oxford University Press, New York 2017. This blogpost draws on ideas that I have used to frame a discussion panel for the Annual Society for Military History conference this coming March-April in Florida. If you have found this of interest then it would be great to continue the debate alongside my fellow panellists Lt-Col. Dr J.P. Clark (U.S. Army) and Dr Laurence Burke (curator at the Smithsonian National Air and Space Museum) at the conference. J.P. Clark’s book Preparing for War: the Emergence of the Modern U.S. Army, 1815-1917 is published by Harvard University Press.

‘…it would appear that many battalion commanders are not really qualified to comment usefully on their weapons’

– Major J.A. Barlow, Tunisia, May 1943

In May 1943 Weapons Technical Staff (WTS) from the Ministry of Supply visited 18 Army Group in Tunisia. Led by Major J.A. Barlow, a captain of the Army Eight shooting team and a two-time winner of the King’s Prize at Bisley, the WTS sought to understand what soldiers in 18 Army Group wanted from their equipment. To their astonishment, having tabulated 15,000 replies to their questionnaire, the WTS found that soldiers’ opinions on their kit were ‘frequently conflicting, if not directly contradictory, as between different units and formations’.

Although surprising, the WTS’ findings were not unique, nor confined to the conscripted armies of the Second World War. For example, in 2002, during Operation JACANA, Royal Marines were deployed to Afghanistan in support of America’s Operation Enduring Freedom. The Marines had been newly issued with the upgraded SA80/A2 rifle, and news reports soon emerged indicating that, despite the euphemistically named Mid-Life Upgrade by Heckler & Koch, the new version of the assault rifle still didn’t work. Recognising that the Ministry of Defence was facing a PR disaster, a team of civil servants and H&K engineers was sent to Afghanistan. This team showed beyond reasonable doubt that the issues were, in fact, caused by the Bootnecks, who hadn’t followed the cleaning drills they’d been issued, and that as a result the SA80/A2 was malfunctioning. As in 1943, the interface between the soldier and his weapon proved to be a complex and problematic one.

Little known anecdotes like these might seem irrelevant to the much larger question of military effectiveness, but I want to suggest that these specific instances of ambiguity offer considerable opportunity to generate insights into military-technical change. Superficially these stories indicate that the military can struggle to inculcate standardised doctrine, drills and techniques into soldiers. More fundamentally, they point to the way that effectiveness is shaped by a whole series of functions that aren’t simply the responsibility of the armed forces but also reach back to matters concerning engineering design and even defence industrial policy.

Part of the reason for highlighting these examples is to draw attention to the complex socio-technical web of relationships that need to be understood if effectiveness on the battlefield is to be adequately explained and theories of military innovation developed. In this respect the first wave of innovation literature (literature that looked at military-technical change as a top-down exercise driven through by politicians and generals, or forced on the organisation by events) is of limited use. While there was an impulse to change small arms in the Second World War, the driver of this change came not from frontline commands but from the Director of Infantry at the War Office. In the same way, not even the shock of the SA80’s failures during the First Gulf War could force the Conservative Government to admit that it had got the privatisation of Royal Ordnance dramatically wrong. Instead the Ministry of Defence had to content itself with tinkering with the SA80/A1 until a Labour Government came into power, after which the atmosphere in Whitehall changed and a systematic evaluation and refurbishment of the Army’s assault rifle became possible.

Similarly, some caution is required when trying to fit these anecdotes into more recent literatures that emphasise adaptation on the battlefield. In the case of 18 Army Group in 1943, even with the support of the Director of Infantry, British engineers were unable to introduce the kinds of assault weapons they recognised could help the infantry win the tactical battle. Weapon designers were hemmed in both by the Government’s over-riding focus on mass-producing existing technological devices and the logistical and supply chain challenges that making a switch implied. Moreover, they also had to recognise that the tactical engagement had been displaced as the Army’s foremost priority by commanders like Montgomery, who favoured combined arms in order to deliver success at the operational level.

By contrast, recent operations in Iraq and Afghanistan point to the way that the battlefield has become less important for defining military effectiveness. As the evolution of components for the SA80/A2 (grips, Picatinny rails, range finders, grenade launchers, sights, etc.) indicates, the British Army has been excellent at accelerating its “lessons-learnt activity”, adapting its technology and technique in the face of changing insurgent tactics. However, indicators normally associated with the development of civil society became proxies for overall campaign success, even as Special Forces sought to win tactical engagements as part of kill/capture operations. Consequently, even as the pace of battlefield adaptation has increased, it has simultaneously exerted a decreasing effect upon the outcome of operations.

Thus, irrespective of whether military organisations continue to think of success in the same way as in Iraq and Afghanistan, events over the past 15 years present an opportunity to rethink how we theorise military innovation. Traditionally three conditions had to be met for a change to count as military innovation. First, innovation had to change the manner in which military formations functioned in the field. Secondly, an innovation had to be significant in scope and impact. Lastly, an innovation needed to result in greater military effectiveness. At the very heart of this definition, then, was the necessary link between effectiveness and the battlefield. What happens to a theory of military innovation, however, if we challenge the basis of the theory and reframe an analysis of change in ways that don’t make this assumption?

One possible way to generate insights into the process of change is to take what Dr J.P. Clark, my fellow panellist at a forthcoming Society for Military History conference, has done and look at innovations through the lens of time. Central to his approach is the recognition that there can be stark generational differences in professional expectations within the military. Among an older generation, even those officers most supportive of reforms can tend to support innovation as a way to preserve the best of what was old. By contrast, younger officers can regard reforms as the means by which to overturn old institutions and replace them with a new collectivist approach centred upon organisations filled with like-minded staff officers trained in accord with an approved doctrine. Far from constituting revolutionary change, innovation, as understood by Clark in his analysis of the U.S. Army during the 19th and early 20th centuries, may also represent an evolutionary inter-generational moulding of the military.

Alternatively, Dr Laurence Burke offers a more explicitly theorised approach, drawing on Science and Technology Studies (STS) as a way to reflect on processes of military innovation. By applying Bruno Latour’s Actor-Network Theory, Burke relates the experiences of the US military to the adoption of the aircraft in the early decades of the 20th century, demonstrating the way in which a range of actors work to enrol groups into networks. His analysis shows how interests are constructed and re-worked so as to dictate socio-technical outcomes.

Developing this point, my own approach recognises the value of STS and of Clark’s consideration of time, and seeks to develop theoretically informed insights into the importance of the concept of power in framing military innovation. By showing how innovation sits within a broader set of industrial and alliance relationships, relationships that demand engineering, scientific, and bureaucratic mediation, I reveal how frontline requirements are reframed in ways that make them intelligible for wider circulation. More than this, I show how users themselves have contested different views of their own requirements as they seek to enrol other groups and dictate the kinds of innovation that might emerge. In this respect I hope to draw scholarship back not just to discussion of top-down or bottom-up change but also to a consideration of what might be caricatured as middle-out change.

Image: SA80 rifle stripped (1996), via Wikimedia Commons.


Sea Power, Alliances, and Diplomacy: British Naval Supremacy in the Great War Era


Louis is a current DPhil student at the University of Oxford. He holds an MA in History from the University of Calgary. Louis is co-organiser of the upcoming ‘Economic Warfare and the Sea’ Conference, to be held at All Souls College in July 2017.

A recording of the talk this post is drawn from is available here.

President Donald Trump’s statements over the continued viability of NATO have raised questions about the relevance and utility of alliances in 21st century international politics. Who gains most from alliance structures and collective security? What are the benefits for a global power in leading alliances? These questions appear particularly pertinent with the end of the ‘American moment’ and the return to a degree of multipolarity in world affairs, where the rise of China, with its aspirations of a blue water navy, and an emboldened Russia are challenging the status quo with increasing regularity.

Fresh as they may appear, many of these issues have long historical antecedents. At the start of the 20th century the British Empire faced a changing global environment – with rising powers on the Continent, in the Americas, and in Asia – which forced statesmen to confront the dilemma of how to guarantee the security of Britain’s maritime empire without overstraining public finances on defence expenditure. The supremacy of the Royal Navy had ensured the safety of Britain’s dominions and colonies throughout the 19th century, both through its physical might and as a symbol of prestige. However, with the rise of new naval powers, chiefly Imperial Germany across the North Sea, seeking local dominance in all theatres simultaneously would be needlessly expensive. Maintaining a policy of ‘splendid isolation’ might leave Britain vulnerable in secondary theatres as it was forced to out-build the German navy so as to command home waters. Consequently, British statesmen turned to diplomacy to underwrite maritime security elsewhere, developing alliances and strategic alignments to tilt local balances in Britain’s favour, neutralising potential threats in the process.

The first move towards this was the alliance with Japan, first struck in 1902, and renewed in 1905 and 1911. This settled concerns in Whitehall over the threat to British possessions in the Far East, with Japan turning from potential danger to guardian of these interests. This was an important embarkation point as British statesmen began to explore the opportunities that such agreements presented.

Pressure to find a similar solution in European waters began to mount as the costs of winning the Anglo-German naval race soared. Winston Churchill, First Lord of the Admiralty from 1911, argued that Britain must prioritise a ratio of 60% superiority over the German High Seas Fleet, leaving little in the naval estimates for a Mediterranean fleet to protect this key imperial artery. The solution advanced was an accord with France (with whom ties had strengthened following the Anglo-French Entente, signed in 1904). The French navy could, with the support of a diminished British force, control the Mediterranean against a combination of Austria-Hungary and Italy, while the bulk of the Royal Navy took on the German navy in the North Sea.

Churchill’s predecessor at the Admiralty, Reginald McKenna, had argued that Britain must spend whatever was necessary to give it domination in both seas without having to rely on France. However, this would require cuts to social programmes at home, which a Liberal government committed to welfare reform could not countenance. The Anglo-French naval agreement, signed in 1913, was therefore a means of securing British interests in the Mediterranean at minimal cost. It was not a sign of weakness: Britain was the senior partner in the agreement, giving little in return for the security of the Mediterranean (not least because it was not bound to supporting France in the event of war with Germany). Paris raised concerns over this imbalance, but made little headway.

When war broke out in the summer of 1914, these arrangements came into play, and proved largely effective at safeguarding British maritime interests in the Far East and Mediterranean. The July Crisis demonstrated the limits of what British diplomacy and sea power could achieve: it was not able to prevent war from breaking out. Nevertheless, they did put Britain in a commanding position to wage war at sea: containing the battle fleets of the Central Powers, protecting British shipping, and enabling blockade to begin.

From 1914, Britain used its status as the world’s leading naval power to dominate the naval coalition, directing the maritime elements of the Entente’s strategy. It left the smaller issue of the Austro-Hungarian navy to France (joined by Italy in the Adriatic from spring 1915), while focusing on the more potent German threats in the North Sea and Atlantic. However, when Germany carried its underwater guerre de course into the Mediterranean as 1915 progressed, the Admiralty sought to develop an operational leadership role in this theatre too: partly for reasons of prestige, but primarily to address the exigencies of war. Yet the Mediterranean was important to Paris and Rome for reasons of prestige as well – the source of many Franco-Italian disagreements – and their naval establishments prevented the Royal Navy from taking the reins entirely.

The United States Navy, on the other hand, was more content to act as an auxiliary in European waters once the U-boats had forced American entry into the war in 1917. While the White House was keen to work closely with the Admiralty at the operational level, however, there were problems when it came to long-term grand strategy. President Woodrow Wilson had wanted to avoid becoming embroiled in the conflict; now that this was unavoidable, he sought to maintain independence from London and Paris by becoming an associated, rather than allied, power. Moreover, the United States was engaged in a large programme of naval construction, which would produce a powerful battle fleet that might rival the Royal Navy. David Lloyd George, the British Prime Minister, wanted this suspended so that American shipyards could be directed to the construction of smaller craft suitable for anti-submarine warfare. Yet American leaders feared this would leave them vulnerable in the post-war world, and so refused. Arthur Balfour, the Foreign Secretary, came up with a solution: a general naval alliance in which Britain would guarantee American security at sea while capital ship construction caught up. Moreover, Balfour worked on plans which would bring together the Allied navies (including France, Italy, Russia, and Japan) with the US under an umbrella agreement of mutual assistance against maritime attack, lasting for four years after the conclusion of the war.

This was anathema to the White House, with Wilson unwilling to bind his hands. Nevertheless, this episode demonstrates the evolution of British strategic thinking on alliances and their utility. With Britain at the centre of a web of mutually supporting navies, of which the Royal Navy would be the greatest, its partners could help to extend the security of the empire, affording London potential auxiliaries in war and neutralising possible rivals. Of such future challengers, the United States – poised to assume second place in the naval rankings if Germany was defeated and disarmed – was the greatest. The prospect of an Anglo-American rivalry gathered pace as the U-boat threat receded and the Americans increased the pace of capital ship construction. Yet neither side wanted a costly naval arms race, and following victory in 1918 they soon found renewed common cause in the League of Nations project. The prospect of a post-war strategic alignment (if not a formal alliance) was on the table at Versailles in 1919. British and American diplomats managed to suppress the nascent competition between their sailors, with Robert Cecil of the Foreign Office and Colonel Edward M. House, Wilson’s chief lieutenant, reaching a compromise through which Britain could carefully manage the US’ rise as a naval power via bilateral talks. Meanwhile, Wilson was prepared to make a guarantee of French security with the British. A new world order was set to emerge, with an Anglo-American alignment at its centre (a dream which seemingly still resonates in Whitehall a century later).

Yet the gentlemen’s agreement struck in Paris collapsed in Washington later that year. The result was that in 1921 the Lloyd George government had to negotiate in a multilateral environment at the Washington Naval Conference. While the decisions reached there allowed an Anglo-American agreement on the naval balance of power, the proposed strategic alignment could not be recovered, and the settlement came at a higher price than would have been paid had the Cecil-House understanding been implemented. One such cost was the end of the Anglo-Japanese Alliance. The potential benefits of that agreement were driven home two decades later, when the Japanese ran riot across British possessions in the Far East, dealing an irreversible blow to the integrity of the British Empire. Coming after two years of war against Nazi Germany, this defeat left Britain beleaguered and appeared to leave India open to the Japanese. Yet for the period 1939-41, the US had refrained from active military support for Britain. Alliances and strategic alignments, then, can offer significant benefits to global powers. To reject or lose them can have repercussions. Certainly, isolation is rarely a better alternative – a point worth remembering in the 21st century.

Featured image: A Middleweight bout at the Grand Fleet Boxing Tournament in 1918 between Chief Carpenter’s Mate Gartner (US Navy) and Leading Stoker Roberts (Royal Navy), via the Imperial War Museum