Missile Defence and Spacepower: A Panel Proceeding at ISA 2017

Dr. Bleddyn Bowen

This blog post is a short summary of a panel, on which I presented a paper, at the 2017 International Studies Association (ISA) Annual Convention in Baltimore, Maryland. The panel I was kindly invited to present on was titled ‘Missile Defenses, Space Weapons, and Advanced Conventional Weapons: Strategic Choices in Troubling Times for Arms Control and Disarmament,’ and was held on the 25th of February, 2017. It was expertly chaired by Dr Rachel Whitlark of the Georgia Institute of Technology, who also acted as a superb discussant. I would like to extend my thanks to Dr Whitlark, my fellow panellists, the audience at the panel, and ISA for an excellent conference.

Katarzyna Kubiak, German Institute for International and Security Affairs: ‘Strategic Culture and German Policy on Ballistic Missile Defence’

Katarzyna argued that the preferred German approach of pursuing Ballistic Missile Defence (BMD) as a NATO-wide policy has fallen into disuse due to the deterioration in NATO-Russian relations and continued disagreements within NATO over how to deploy the European BMD system. Amidst this shift, Katarzyna examined German policy preferences on missile defence through the lens of strategic culture, and found that existing strategic culture did not help German policymakers adapt and pursue a new position on BMD. With no sign of a rapprochement between NATO and Russia, Germany now has to balance its residual preference for supporting BMD through NATO against the risk of further antagonising Russia.

Marco Fey, Peace Research Institute Frankfurt, ‘US Strategic Missile Defense Politics: Do We Really Witness an Emerging Consensus in Congress?’

Marco delivered a presentation that examined US Democratic and Republican attitudes towards missile defence between 1995 and 2014 using content analysis. Historically, the Democrats staunchly opposed missile defence whilst the Republicans faithfully advocated it. Reagan’s Strategic Defense Initiative, known as ‘Star Wars’ by its detractors, made BMD a straightforwardly partisan issue in the 1980s. Many observers claim that, under Barack Obama, the Democratic Party came to support missile defence. This, however, may not be the case. Marco, analysing speeches, debates, and legislative documents, found that although Democratic support for BMD has grown since 1995, in 2014 there was still no consensus within the Democratic Party over it. The shift towards supporting BMD is undeniable, however.

Andrew Futter, University of Leicester, ‘Full-Spectrum US Missile Defense: Toward a New Era of Instability?’

Andrew presented his thoughts on full-spectrum BMD, which includes ‘left of launch’ options such as cyber infiltration and the disabling of enemy ballistic missile systems before they are launched. Although the US is considering cyber capabilities to complement its kinetic missile interceptors, China and Russia are likely to remain unconcerned about US hacking initiatives against missile control systems, given their own extensive cyber capabilities. Full-spectrum BMD is likely to pose a more significant threat to smaller and poorer nuclear powers. Andrew also raised the possibility that this shift towards pre-emption could have dire consequences for strategic stability, and that American cyber defences around missile command and control may not be that robust should potential adversaries copy American thinking. Perhaps antiquated computer systems using large floppy disks are a safer option than the lampoons of US Air Force missile control systems suggest.

Namrata Goswami, Senior Analyst and Minerva Grantee, Maxwell AFB, ‘China’s Attitudes and Aspirations toward Expansionism, Territoriality and Resource Nationalism in Space’

Namrata presented her analysis of evolving Chinese thought and discourse on space exploration, particularly regarding lunar exploration, asteroid redirection and mining, and space-based solar power. China’s plans for lunar missions and resource extraction, asteroid mining, and space-based solar power currently have deadlines in the 2020s, but the Chinese space achievements we have all become familiar with – manned missions, a space station, and robotic missions – were first articulated in the early 1990s. A degree of credibility should therefore be given to lofty Chinese goals, so long as the funding and political stability that underpin any such expensive technical programme remain. As it stands, such ambitions enjoy the full support of Xi Jinping, the Central Military Commission, and the 8th generation of space scientists. Namrata also hypothesised some extremely interesting land-grab scenarios in which China rushes to harvest resources on the moon, whilst copying the United States’ own private space resources law (the SPACE Act of 2015) to justify its own mineral extraction and profiteering in space.

Bleddyn E. Bowen, Defence Studies Department, King’s College London, ‘Down to Earth: The Influence of Spacepower Upon Future History’

A detailed summary of this paper was provided previously on Defence in Depth, and can be found here. I noted that Chinese and American war plans over Taiwan, which are now dependent on their own precision-strike weapons systems and space infrastructure, must exploit, and adapt to, the dispersing influence of spacepower. I also challenged the view, prevalent among many analysts, that such a war will start with a ‘Space Pearl Harbor’ – a major first strike from China against US space assets. I posited that such an astrostrategy is only one option, and that holding weapons systems in reserve until a critical moment in the terrestrial campaign is a plausible alternative. I argued that deciding when to strike against space systems may be determined by when and where either side wishes to exploit and deny the dispersing effects of spacepower upon terrestrial warfare.

Image courtesy of Wikimedia Commons.

Institutional Complexity and the Fight against the Proliferation of Nuclear Weapons

Dr. Benjamin Kienzle

When future historians look back at the late 20th and early 21st century, one of the most remarkable features of the international system they will note is the exponential growth of international institutions. At the beginning of the 20th century there were only slightly more than 30 international intergovernmental organizations in the world, and in the first decade after the end of World War II this number was still relatively low, at slightly more than 100 organizations. Yet by 2016 the total number of international intergovernmental organizations had risen to a staggering 7,657! And this still excludes international agreements, conventions, informal groups of states and international non-governmental organizations. So the total number of ‘international institutions’ – broadly defined – is even higher. All in all, today’s international system is characterized by a puzzling maze of thousands of international organizations, treaties, agreements, conventions, protocols and informal arrangements.

One way to make sense of this institutional maze is to examine international institutions in specific issue areas such as climate change, international trade or the non-proliferation of nuclear weapons. In fact, a closer look at international institutions reveals that they tend to cluster around certain issue areas. In other words, there are usually several international institutions designed to address the same global issue or problem. The big question is, of course, whether this matters at all. Maybe the growth in the number of international organizations is merely a reflection of nation states learning that cooperating on certain issues or problems is more effective than trying to deal with them in isolation or, even worse, in competition? After all, intuitively, few people would doubt that international organizations or agreements are inherently a good thing. The more organizations and agreements there are at the international level, the easier it is to solve certain global issues or problems! Or is it?

In recent years, International Relations scholars have developed a new concept to come to grips with the maze of international institutions in different issue areas: ‘regime complexity’. This concept helps researchers go beyond the traditional piecemeal approach of analysing organizations and agreements individually. Rather, it assumes that the international organizations and agreements in a certain issue area such as climate change or nuclear non-proliferation form a single system or ‘complex’ of interlinked organizations and agreements. In this way, the concept of regime complexity offers a comprehensive view of international organizations and agreements, which may provide new insights into the impact they have on solving global issues and problems. As Karen J. Alter and Sophie Meunier, two of the leading scholars on regime complexity, point out, ‘Scholars who study complexity note that within complex systems, knowledge of the elementary building blocks—a termite, a neuron, a single rule—does not even give a glimpse of the behavior of the whole, and may lead to faulty understandings of the building blocks themselves’.

In a recent paper that I presented at the Annual Convention of the International Studies Association, I examined to what extent the concept of regime complexity actually helps us understand the implications of international organizations and agreements in a concrete issue area, namely the non-proliferation of nuclear weapons, which is widely recognized as one of the most serious global security issues. Nuclear non-proliferation is also a conceptually very useful issue area, as it is regulated by over 40 international organizations, agreements, conventions and protocols. Even experts easily lose count of organizations and agreements as diverse as the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials, the International Convention on the Suppression of Acts of Nuclear Terrorism, the Nuclear Suppliers Group or the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies.

As can be expected from a system of international organizations and agreements as complex as the one addressing nuclear non-proliferation, its impact on the fight against the spread of nuclear weapons is complex, too. On the one hand, there is strong evidence that the increasing complexity of international non-proliferation organizations and agreements has strengthened non-proliferation. Most notably, new institutions have often closed loopholes in previously existing ones. For example, the 1968 Nuclear Non-Proliferation Treaty (NPT), one of the key international agreements in the fight against nuclear proliferation, does not address the export of sensitive nuclear technologies or items to countries with potential nuclear weapon programmes. Yet in 1974, only a few years after the treaty’s entry into force, India tested a nuclear device using a civilian nuclear reactor built with technical expertise imported from Canada. India’s test, codenamed ‘Smiling Buddha’, triggered the establishment of a new non-proliferation institution to prevent the use of exported civilian nuclear technology and expertise for military purposes: the Nuclear Suppliers Group.

Another important advantage of complex sets of international organizations and institutions is that they can increase the commitment of nation states to an issue such as nuclear non-proliferation. Usually, a state’s commitment to an issue is seen as stronger if it has signed up to several relevant international organizations and treaties rather than just one. In other words, it is one thing to sign and ratify just the NPT; it is quite another to sign and ratify the NPT, the Comprehensive Nuclear-Test-Ban Treaty and the so-called Additional Protocol of the International Atomic Energy Agency.

On the other hand, however, it is all too often forgotten that complexity also creates a number of institutional problems. In my paper for the Annual Convention of the International Studies Association I highlighted three in particular. First, the system of international non-proliferation organizations and agreements has grown in such a way that it has strengthened only non-proliferation in a strict sense. Originally, however, the international non-proliferation commitment was seen as a ‘grand bargain’ between nuclear weapon states and non-nuclear weapon states. As part of this bargain, the nuclear weapon states also committed to nuclear disarmament and to guaranteeing uninhibited access to peaceful nuclear energy. Yet only very few of today’s relevant international organizations and agreements address either nuclear disarmament or the promotion of nuclear energy. Hence, the current system of international non-proliferation organizations and agreements undermines the basis of the ‘grand bargain’. At some point, frustrated non-nuclear weapon states may well conclude that the ‘grand bargain’ has failed.

Second, a complex system of organizations and agreements inhibits the free flow of crucial information. For example, the International Atomic Energy Agency may have sensitive information that is relevant for the Nuclear Suppliers Group – and vice versa. But they usually do not share their information. Third, if there are many organizations and agreements addressing in one way or another the same global issue or problem, nation states tend to cherry pick those organizations and agreements that are most suitable for their narrow national gains rather than for addressing the global issue in the most effective way. In other words, by facilitating cherry-picking (or ‘forum-shopping’ in academic parlance) complexity undermines, once more, the ultimate goal of international organizations and agreements, in this case the non-proliferation of nuclear weapons.

All in all, while the increasing complexity of nuclear non-proliferation organizations and treaties has strengthened the regime as a whole so far, it has also caused new or exacerbated existing problems that should not be ignored. These problems may still get worse in the coming years and have the potential to undermine the very foundations on which international non-proliferation efforts are built.

Image: The IAEA in Vienna. Courtesy of Wikimedia Commons.

Were the Attacks in Paris and Brussels an Intelligence Failure?

Dr. Emmanuel Karagiannis

During 2015-2016, ISIS cells and ISIS-inspired lone wolves launched a series of terrorist attacks against European cities. On 13 November 2015, a group of ISIS assailants launched coordinated attacks on civilian targets in central Paris. They killed 132 people and injured 352. It appears that there were three teams of attackers, nine gunmen in total. Three suicide bombers attacked the national sports stadium during a friendly match between the national soccer teams of France and Germany. Then attackers shot at people outside several cafes and restaurants. Finally, gunmen entered the Bataclan concert hall and killed dozens of people before detonating their suicide vests. Most of the gunmen were Belgian or French citizens. The next day, ISIS claimed responsibility for the attacks.

According to Edoardo Camilli, the Paris attacks constitute an intelligence failure for three reasons. First, there was a failure in the detection and prioritization of threats. Some of the ISIS attackers were known to the French authorities, yet the authorities clearly failed to identify these individuals as imminent threats. Second, surveillance was inadequate and ineffective. The French authorities had information about Abdelhamid Abaaoud, who masterminded the attacks, but they did not manage to monitor his movements in France and Belgium. Third, the assailants were able to travel freely across the Schengen Zone. Member states failed to share information and coordinate their efforts. Moreover, the Turkish authorities gave information about Omar Ismail Mostefai, one of the Bataclan bombers, to their French counterparts, but it was ignored. Most of the perpetrators had fought in Syria and Iraq as members of ISIS. To sum up, the French security agencies had enough information about the perpetrators, but they failed to take action.

Following the Paris attacks, the Belgian authorities decided to raise the terror alert to the highest level. On 22 March 2016, however, a group of ISIS-affiliated assailants attacked the city of Brussels. Two suicide attacks occurred at the airport and one at the Maalbeek metro station. As a result, 30 people were killed and more than 300 were injured. ISIS claimed responsibility for the attacks. The two suicide bombers who attacked the airport were Najim Laachraoui and Ibrahim el-Bakraoui, both Belgian citizens of Moroccan origin. It soon became clear that the two attacks were linked. Again, most of the assailants had either travelled or attempted to travel to Syria.

As with Paris, most analysts have described the Brussels attacks as an intelligence failure. Krishnadev Calamur has blamed the fragmentation of the Belgian intelligence community for the apparent failure to prevent the attacks; the capital city, for example, is served by six different police forces. Brussels had already witnessed an attack on the Jewish Museum in May 2014; the perpetrator was a French national of Algerian origin who had spent some time in Syria and had been recruited by ISIS. Despite its long experience in dealing with terrorism, the Belgian intelligence community apparently failed to prevent the attacks.

Could the attacks have been prevented? Did they constitute an intelligence failure? There is no easy answer to these questions. The Paris attacks not only led to the tragic deaths of scores of civilians, but also signified the end of terrorism as we know it. While jihadi groups have attacked non-military targets in Europe again and again, this was the first time that multiple soft targets were hit in an unprecedented series of assaults; the London and Madrid bombings, for instance, targeted the transportation system. To a certain extent, the Paris massacre resembles the 2008 Mumbai attacks more than any terrorist attack seen before. Despite tactical differences (e.g. the use of suicide vests in Paris), the two attacks were based on the same strategy: small teams of heavily armed jihadis simultaneously attacking many people in order to maximize casualties.

The multiple attacks against soft, but high-profile, targets in Paris indicate a level of organization and sophistication that clearly took the French authorities by surprise. The country’s intelligence community functions on the basis of a Cold War model that is largely outdated. For many decades, intelligence agencies focused almost exclusively on foreign governments. As a result, the classic intelligence cycle cannot cope with the complexities of transnational Islamist networks. Human intelligence is usually poor, and perpetrators increasingly use encrypted technology to communicate. Geospatial intelligence is not much help either. Most of the assailants had European passports and were members of local Muslim communities. Thus, they were able to benefit from open borders and utilize family networks to organize attacks.

The tragic events inevitably raised questions about interstate intelligence cooperation and border controls, the EU’s refugee policy, and eventually the whole project of European integration. The Brussels bombings confirmed what many Europeans had suspected since the November 2015 Paris attacks: the EU has failed dramatically to protect its citizens from terrorism. Many European countries have dealt with terrorism before, although not always effectively. However, the Irish Republican Army, the Basque ETA, the German Baader-Meinhof group, the Italian Red Brigades and the Belgian Cellules Communistes Combattantes either had limited capabilities or, most of the time, avoided the intentional targeting of civilians.

Now European governments face a new type of terrorism which seeks to inflict massive casualties on the population, for two main reasons. First, European public opinion has been identified as the Clausewitzian ‘Center of Gravity’, namely the source of strength and legitimacy for governments in Europe. ISIS’s actions aim at a repetition of the ‘Spanish scenario’, that is, the electoral defeat of politicians who favor military action against militants – as happened to Spanish Prime Minister Jose Maria Aznar following the 2004 Madrid bombings. If this does not happen and there is instead more European military involvement in the Middle East, then a clash of civilizations between the West and Islam could become a self-fulfilling prophecy. Second, by targeting civilians, ISIS also hopes to spark a racist backlash against Europe’s Muslim communities and thus gain more recruits. It is essentially a win-win situation for the group, and there are no easy solutions.

Under such circumstances, European governments must lower their expectations for the prevention of violent attacks. The simplicity of the Nice and Berlin attacks has revealed that there is no effective way to prevent a determined individual from committing an act of mass murder. In fact, there is an endless list of soft targets that can be hit by terrorists. If there is a lesson to be learned from the events of 9/11, it is that the evil of terrorism cannot be defeated with security measures alone. Contrary to public perceptions, jihadi terrorism has been a phenomenon primarily concerning the Middle East. Successful attacks against Europeans have been the exception, not the rule. Several factors account for this. First, intelligence agencies have been largely effective in preventing attacks. Following the Paris and Brussels attacks, ISIS lost valuable human assets; it is no coincidence that most recent attacks have been conducted by lone wolves. More importantly, jihadi groups have been unable to recruit significant numbers of European Muslims. The vast majority of them remain law-abiding and peaceful.

Intelligence failures can stem from a lack of information or from inaccurate information, either of which distorts the analytical process. This can occur either through ignoring data or through its mistaken interpretation. The analysis of the terrorist attacks in Paris and Brussels suggests there is a new form of terrorism, leading to an unpredictable kind of intelligence failure. The asymmetric character of jihadi attacks means that success in combatting terrorism no longer relies just on the magnitude of available resources. Unlike in other fields, identifying the causes of errors in intelligence activity is especially difficult, given that its main resource – information – is difficult to quantify. Thus, one can legitimately ask: are we talking about a failure of the intelligence services, or a failure of the public policies that determine the direction of action of these organizations?

Image: Bataclan memorial. Courtesy of Wikimedia Commons.


Dr. Geraint Hughes

In early March 1947 the US President Harry S. Truman faced a political and diplomatic quandary. His administration had been informed by the Labour government of Clement Attlee that Britain could no longer afford to provide military aid to Greece and Turkey after the end of that month. Both countries had become embroiled in the nascent Cold War between the Soviet Union and the West, and the USA was the only state in a position to offer their governments defence assistance. However, Truman also had to contend with Senators and Representatives who were bound to question his proposed $400m aid package to Athens and Ankara.

The President received advice from what would today be seen as a peculiar source. The Republican Senator Arthur Vandenberg was the Chair of the Senate Foreign Relations Committee. He was also a pre-war isolationist who had changed his mind as a result of the Second World War, and regarded it as inevitable that the USA would now assume a world role. He also understood that any justification of aid to the Greeks and Turks based on the complexities of geopolitics and strategy would fall flat with both the Republican and Democratic parties, and the electorate. Vandenberg told Truman that he had to ‘scare the hell out of the American people’. The President followed this advice in his speech to their elected representatives on 12th March 1947, outlining what was subsequently dubbed the Truman Doctrine.

Today, there is something almost quaint about a Democratic President being offered advice – and taking it – from the other side of Congress. Furthermore, while Truman was making the case for America to see its security interests as being interconnected with peace and stability in the Eastern Hemisphere, the 45th President seems bent on demonstrating the opposite. Nonetheless, Truman’s presentation of his aid programme to Greece and Turkey as a matter almost of life and death for American democracy has parallels with future Presidencies, for example with Jimmy Carter’s State of the Union speech in January 1980 (in the aftermath of the Soviet invasion of Afghanistan), or George W. Bush’s ‘Axis of Evil’ speech 22 years later.

By early 1947 the wartime alliance between the USA, the USSR and Great Britain had disintegrated. The imposition of Communist rule in Soviet-occupied Eastern Europe, quarrels over the fate of a divided Germany, and Josif Stalin’s uncompromising speech on East-West relations on 9th February 1946 testified to the collapse of trust between the Soviets and their former partners in the Grand Alliance, but it was a series of crises in the ‘Near’ and Middle East in 1946-1947 that helped precipitate the Cold War. In Iran, Moscow refused to remove Soviet troops deployed to the country as part of an Allied occupation during WWII, and appeared to sponsor separatist republics in Azerbaijan and Kurdistan. Stalin also tried to bully the Turks into revising the 1936 Montreux Convention governing the movement of shipping through the Straits and into making territorial concessions to the USSR; these demands were accompanied by troop movements in the Caucasus and Soviet-occupied Bulgaria. Greece was mired in a civil war pitting the British-backed royal government against a Communist insurgency aided by the Albanians, Yugoslavs and Bulgarians. Britain was the principal source of military aid to Turkey and Greece, but WWII had left it economically exhausted, and no longer able to sustain a Pax Britannica in the Eastern Mediterranean.

The intellectual foundations of the Truman Doctrine – and subsequent US Cold War policy – came from the ‘Long Telegram’, drafted on 22nd February 1946 by George F. Kennan, then chargé d’affaires at the US Embassy in Moscow. Kennan’s argument, which was refined in an anonymous article published in the July 1947 edition of Foreign Affairs, was that the Soviet regime was implacably hostile to the USA and the West, and justified its despotic rule over its subjects by presenting America and its allies as a menace to the USSR. Kennan also stated that Moscow was committed to expanding its power and influence throughout Eurasia, taking advantage of the instability and chaos caused by the recent world war. He argued that the USA needed to block the expansion of the USSR’s influence through diplomatic, economic and military means, adopting a policy of ‘containment’ to ensure that the Soviet empire did not expand beyond its own borders and its sphere of influence in Eastern Europe. Kennan’s appointment as the director of the newly-established Policy Planning Staff in the US State Department a month after Truman’s speech gave him the opportunity to help shape America’s Cold War strategy, although he subsequently argued that his concept of containment had been militarised.

In his speech to Congress on 12th March, Truman outlined the problems that Greece faced: impoverished and wrecked by Axis occupation in WWII, it was now vulnerable to take-over by a guerrilla movement backed by Stalin and his Balkan clients. Turkey had been neutral during the war, but was now open to pressure from Moscow and lacked the means to defend itself. He then commented on the imposition of pro-Soviet regimes in Warsaw, Bucharest and Sofia, stating that:

At the present moment in world history nearly every nation must choose between alternative ways of life. The choice is too often not a free one.

One way of life is based upon the will of the majority, and is distinguished by free institutions, representative government, free elections, guarantees of individual liberty, freedom of speech and religion, and freedom from political oppression.

The second way of life is based upon the will of a minority forcibly imposed upon the majority. It relies upon terror and oppression, a controlled press and radio; fixed elections, and the suppression of personal freedoms.

I believe that it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.

Congress approved his aid package to the Greeks and Turks, and in retrospect the Truman Doctrine also provided the basis for the Marshall Plan later that year, the signing of the North Atlantic Treaty in April 1949, and ultimately the construction of an array of multilateral and bilateral security pacts which currently tie the USA to its allies in Europe and Asia. However, it is worth noting that Truman’s speech also had a controversial legacy.

The first problem with the Doctrine concerned applicability. What was good for Greece and Turkey was perhaps good for Nationalist China in 1949, South Korea in 1950, and South Vietnam in the early 1960s. Just over two years after Truman’s enunciation of his doctrine, his administration was pilloried by the Republicans for the ‘loss of China’, namely Mao Zedong’s victory in the Chinese civil war. Republican Senators and Representatives loudly demanded why the Democrats had allowed a stalwart ally to fall, and decried the package of military aid that Truman had provided to the Guomindang as insufficient. The logical implication here was that if arms, money and advisors were insufficient to stop a ‘free’ nation from being subjugated by Communism, then what was next? More military instructors? More equipment and munitions? The overt introduction of US combat troops?

Secondly, who exactly qualified as ‘free’? East Berliners revolting against the GDR in 1953, and Hungarians involved in the 1956 revolution, found that the criteria did not apply to them. Liberating the Eastern bloc countries would have meant a Third World War, and no US administration confronted with turmoil in the Soviet empire was prepared to accept that outcome. In practice, ‘free’ throughout the Cold War essentially meant ‘anti-Communist’. Ngo Dinh Diem, Fulgencio Batista, Mobutu Sese Seko, Manuel Noriega and Augusto Pinochet were no more committed to contested elections, an independent judiciary, a free press and the concept of a ‘loyal’ legislative opposition than the USSR, China, North Korea or East Germany were. Yet their regimes were still deemed worthy of US backing because they were run by pro-American thugs, rather than pro-Communist ones.

Thirdly, there was a key question related to strategic priorities. Kennan subsequently argued that America would squander its means if it responded to every single Communist encroachment across the globe. Some countries and some regions mattered more than others from a power-political perspective, and in certain cases zero-sum thinking about successes and failures detracted from long-term calculations on strategic outcomes. NATO powers saw Soviet dominance of Eastern Europe as a strategic threat to the Western half. But this could also be viewed as an encumbrance to Moscow, particularly with the troop presence in the GDR, Poland, Hungary and (after 1968) Czechoslovakia that was needed to keep loyal regimes in power in the region. Further abroad, South Yemen (after its independence from Britain in 1967) was a basket-case. The Ethiopian and Angolan regimes needed substantial financial and military aid to fight off powerful insurgent movements, while a unified Vietnam required Moscow’s protection from China, particularly after the Sino-Vietnamese war of 1979. Furthermore, the Afghan ‘revolution’ in April 1978 proved to be a disaster for Soviet interests, requiring an extensive military intervention to save a client regime from downfall, leading to a war from 1979 to 1989 which contributed to Moscow’s economic and strategic woes.

Fourthly, there was the image of the monolithic Communist conspiracy that distracted US policy-makers from the multifaceted challenge they faced. The opening of Russia’s archives after 1991 showed that Stalin was not applying a master plan for world domination in the late 1940s, but instead adopted an opportunistic response to post-war crises. His territorial claims on Turkey were in part an attempt to appease the Communist Party of the Georgian Soviet Socialist Republic, demonstrating that even the ‘captive nations’ could have a significant impact on the USSR’s foreign policy. The coup in Prague in February 1948 was largely the initiative of the Czechoslovak Communist Party. Later that year the Soviet and Yugoslav leaderships had a bitter and public schism arising from Marshal Josip Tito’s clear frustration with Stalin’s efforts to direct his regime; the split removed one of the key sources of support for the Greek Communist rebellion. Twenty-one years later, China and the USSR were on the brink of war over a series of border clashes in Siberia/Manchuria and Central Asia, and in February 1972 Mao welcomed President Richard Nixon – formerly an avowed enemy of the People’s Republic – to Beijing.

With the benefit of hindsight, it is all too easy to judge the Truman administration for failing to foresee the eventual fragmentation of the Communist ‘bloc’. Statesmen, diplomats and senior military officers have fragmentary and contradictory information to guide them, and rarely have the ability to see ‘the other side of the hill’. There is also a tendency in international politics for policy-makers to overrate both the strength and the strategic acumen of an adversary. It is worth noting, too, that the USA’s alliance with its European partners did give it an advantage over its superpower adversary, insofar as political differences (such as France’s withdrawal from NATO’s military command structure in 1966, or Britain’s refusal to send troops to fight in Vietnam or to back the USA during the Yom Kippur War) could be mitigated by the established arts of democratic compromise. Debates over Communist dogma could not be managed in the same manner.

Nevertheless, Truman’s speech to Congress on 12th March 1947 still represents a turning point in US foreign policy, as he was able to do what his predecessor Woodrow Wilson had failed to achieve after WWI: persuade the American body politic and the electorate that the national security interests of the USA and the survival of its constitution depended on its ability and willingness to protect its friends worldwide. This was reciprocated by what the Norwegian historian Geir Lundestad called the process of ‘empire by invitation’, in which allies (notably Britain in this case) solicited US diplomatic and military intervention to bolster their own security interests. Truman told his audience that America needed a stable world order as much as the world needed America. It remains to be seen whether the current US President can persuade Congress and the electorate that the reverse is true, and also what the consequences of such an outcome will be.

Image: President Harry S. Truman addressing a joint session of Congress asking for $400 million in aid to Greece and Turkey. This speech became known as the “Truman Doctrine” speech. Courtesy of Wikimedia Commons.


Dr. Sukanya Podder

Several rounds of peace negotiations between the main protagonists – Riek Machar and Salva Kiir – have not yet brought an end to the violence that is tearing the social fabric of South Sudan apart. The cycle of events shows how the many can suffer at the hands of the few. At the same time, the Afghan technocrat-cum-warlord government of President Ghani and CEO Abdullah attempts to increase security and prosperity while also maintaining networks of customary authority, patronage and grand corruption – rendering the war of attrition against the Taliban unwinnable. It shows how the forces of modernity and traditional rule compete and mesh in the cauldron of conflict. Many conflict-affected and fragile settings fall somewhere between these examples. Negotiating peace in complex wars in Syria, Libya or Yemen on the one hand, and efforts to rationalise ‘hybrid’ governance arrangements in Somalia, the DRC, and Lebanon on the other, all offer contemporary illustrations of the challenges involved. What these cases share, despite their numerous differences, is that international actors increasingly analyse their domestic power balances through the concept of a ‘political settlement’.

A political settlement is shorthand for the set of (in)formal representation, control and distribution rules between national political elites that guide governance and resource allocation in a particular country. These elite groups negotiate the extent to which they can pursue their interests on the basis of their relative power and skill within the boundaries of what their constituencies tolerate. The settlement that is the outcome of these negotiations is argued to influence the type of institutions that can exist and the nature of their performance. For example, an informal parameter of Iraq’s political settlement is that, whatever their differences, its main Shi’a parties unite when it matters to retain their dominance over the central Iraqi state.

The focus on politics and power that the concept of political settlement brings to the analysis of developmental problems is a welcome analytical tool for development and humanitarian policy makers. It is especially relevant in fragile or conflict-affected societies where lower levels of social capital reduce the ability of the population to contest elite preferences and where violence is reinforced by social norms that facilitate its contagion. Nevertheless, a more critical analytical perspective on the application of the concept of political settlements is warranted for several reasons. Key among those is that its use is likely to perpetuate existing ‘arrangements to rule’ between the selected few. This can reinforce structural conflict drivers that mostly benefit those with guns, funds or status and entrench inequalities that are difficult to remedy.

From this perspective, the establishment of a Peace- and Statebuilding Goal (PSG) entitled ‘achieving legitimate politics through fostering inclusive political settlements and conflict resolution’ by the New Deal for Engagement in Fragile States strengthens the conception that the procedural aspects of legitimacy can be furthered through elections that in turn advance democracy. In practice, international statebuilding’s focus on elections as a viable pathway out of conflict tends to legitimise the status quo. This perpetuates the dominance of powerful elites that were often at the root of the conflict. In Afghanistan, for example, competition between elite networks over the state has shaped the very nature of politics since 2001. Elections in 2009 and more recently have failed to overcome the logics of ethno-regional solidarity and patronage relations. Unlike previous political settlements such as the Rawalpindi Accord of 1989, the Peshawar Accord of 1992 and the Mecca Accord of 1993, the 2001 Bonn agreement was largely socially and politically constructed by external powers.

According to Heathershaw and Sharan (2011), the 2009 presidential election marked the negotiation and re-negotiation of these societal logics. It was a forum for conflict and compromise between two opposing elite networks – namely the oppositional former Northern Alliance (NA) Jihadis, in particular the Panjesheri in Shura-yi Nezar, the military wing of the Jamiat Tanzim, represented by Dr Abdullah Abdullah, and the incumbent President Karzai network, represented by the post-Bonn Western-educated technocratic elites who were brought in from the diaspora.

The former elite network emerged during the Jihad years, in particular 1992-2001, while the latter emerged with the outcome of the political settlement at the Bonn Conference in 2001. Both networks represent politically constructed ethno-regional factions which have been resourced by decades of intervention and interference by Western, Soviet and regional powers. The 2009 presidential election was a last attempt by the former network, the Jamiat Tanzim (predominantly ethnically Tajik), to regain its political dominance of 2001-4.

However, these two political networks are fluid as they have been reinforced, renewed, and reproduced in the post-Bonn statebuilding process and regime formation. Most of the Northern Alliance elites, especially those not belonging to the Jamiat Tanzim, have been effectively co-opted to the dominant network through bargains and exchange.

Elite bargaining within a loose and evolving political settlement remains a key source of instability as evidenced by the 2014 electoral compromise. The government of national unity has been far from smooth sailing, with the two leaders disagreeing on a number of issues. Meanwhile, the Taliban have gained strength, and President Ghani has not succeeded in convincing them to come to the negotiating table.

The consequence of the idea that elections can significantly influence the composition and dynamics of a political settlement is that elections as events tend to sanction a status quo that effectively perpetuates the dominance of powerful elites that were often at the root of the conflict. The fact that these elites are elected is of course convenient for international actors, as it relieves them of the much more complex task of engaging with hybrid governance structures, confers greater legitimacy on their support for the central state, and provides them with a convenient exit strategy. It also overlooks ‘hidden’ social capacities and tends to underemphasise alternative political pathways to developmental change.

Image: Salva Kiir Mayardit, President of the Government of Southern Sudan, speaking to news reporters outside the Security Council chamber at United Nations Headquarters in New York, United States of America. Courtesy of Wikimedia Commons.

The Harmel Report Anniversary

Dr. Tracey German

2017 marks fifty years since the publication of NATO’s seminal Harmel Report, which reasserted the basic principles of the alliance and introduced the concept of cooperative security based on deterrence and dialogue. The Report committed the alliance to a twin-track policy, advocating the need to seek a relaxation of tensions between East and West, whilst maintaining adequate defence. Fifty years later, these issues are back at the top of the security agenda as relations between Russia and the West reach a new low. The post-Cold War evolution of NATO, which has seen it expand its membership and shift focus away from a purely defensive role towards out-of-area operations, is coming under pressure, as the alliance once again seeks to remain relevant and united. Russia is both a security problem for NATO and part of the solution, demonstrated most recently by the twin challenges of Ukraine and Syria.

Among the key themes of the 1967 report was the USSR’s place in the European security order and NATO’s quest to define a political role for itself, rather than a purely military one focused on collective defence: a state of affairs that resonates today. While there are similarities between the challenges facing the alliance in 1967 and the contemporary strategic environment, not least the disparity between the power of the United States and that of the European pillar, as well as the ongoing debate about Russia’s role in the European security order, the report’s key concern was the perceived continuing expansion of Soviet influence around the world, particularly in Asia and the Middle East. This stands in stark contrast to the situation today. Now it is Russia that has expressed its grave concerns about the perceived continuing expansion of NATO’s influence (and that of the West more generally) around the world, and more particularly within its ‘zone of privileged interest’. In the context of the Soviet challenge, the Harmel Report stated that the security of member states rested upon two pillars: the maintenance of adequate military strength and political solidarity to deter aggression and other forms of pressure and to defend the territory of the NATO countries if aggression should occur, as well as realistic measures to reduce tensions and the risk of conflict. While the tables have been turned in the twenty-first century, Harmel’s twin pillars of deterrence and dialogue remain central to Euro-Atlantic security, particularly for the alliance’s newer members. This was underlined by the focus of the 2016 NATO summit in Warsaw on the continuing threat to Euro-Atlantic security from Russia, leading to an emphasis on deterrence and a strengthening of the alliance’s defence posture. However, against a backdrop of continuing tensions between NATO and Russia, and futile attempts at dialogue, the deterrence pillar appears to be by far the more resilient of the two.

The political (dialogue), rather than military (deterrence), aspect of the alliance has always been the more controversial, particularly when connected to the question of enlargement, which has exposed tensions within the alliance with regard to these twin pillars of the Harmel report. The post-Cold War policy of enlargement has brought the alliance into competition, and in some cases direct confrontation, with Moscow, the very opposite effect to that intended: NATO’s own 1995 study on the topic maintained that enlargement was only one ‘element of a broad European security architecture that transcends and renders obsolete the idea of “dividing lines” in Europe’.

Since its establishment in 1949, the alliance has more than doubled its membership from 12 to 28 states, and the majority of the new entrants have joined since the end of the Cold War. The accession of Montenegro, expected to be completed in 2017, will take the total membership to 29. These enlargements have, to some extent, undermined NATO’s stated objectives in incorporating new members, and have exposed tensions within the alliance over deterrence and dialogue, the twin pillars of the 1967 Harmel Report on ‘the future security policy of the alliance’. These outcomes are the direct result of the enlargements of the post-Cold War era being motivated by political considerations rather than, as the enlargements of 1952 and 1955 had been, military ones. In a recent article in International Affairs, I argue that in the light of the fundamental tension between its current ‘open door’ policy and Moscow’s desire to preserve its ‘zone of privileged interest’, NATO needs to revisit the purpose of enlargement and the balance between the two core pillars of the Harmel Report. Only then can it address fundamental questions of why (and if) it should continue to enlarge. Enlargement has become a symbolic act rather than one of defensive necessity, as the recent incorporation of members from the Balkans demonstrates. Montenegro’s accession is a vital demonstration of the alliance’s continuing commitment to its promises regarding its ‘open door’ policy, indicating the primacy of the political, rather than military, aspects of enlargement. However, Montenegro is likely to be the last new member state for some time to come, with alliance consensus regarding further expansion proving elusive in the face of a combination of ‘enlargement fatigue’ among western allies (many of which are focused on internal challenges), concern about the apparent threat from Moscow, and a lack of non-contentious candidate states.

Image: Pierre Harmel. Courtesy of Wikimedia Commons.

When Learning Goes Bad


Jonathan is a Senior Lecturer in Modern History at the University of Birmingham. His first book, Winning and Losing on the Western Front: The British Third Army and the Defeat of Germany in 1918 was published by Cambridge University Press in 2012. An audio recording of a paper detailing some of his new research on German command on the Western Front can be found here.

On the Western Front in September and October 1917, during the Third Battle of Ypres, the British army employed a new operational approach known as ‘bite and hold’. Rather than trying to drive deep into the German defences and break through, the BEF sought instead to limit any advance to the range of its artillery cover, driving a thousand yards or a mile into the enemy trenches, digging in quickly and then defeating the inevitable German counter attack. This approach posed a significant challenge to the German defenders. Based on new research into the papers of the army group commander opposite, Crown Prince Rupprecht of Bavaria, this article explores how they adapted to the new British method. It demonstrates three points relevant to modern commanders:

  • Find solutions which address the real problems you face, not those which you best know how to fix;
  • Don’t assume that a solution exists, much less that you’re the person to find it;
  • Intellectual honesty about the past is crucial to the integrity of ‘lessons learned’ processes. Infection by present-day concerns risks misrepresenting the past and drawing the wrong conclusions.

In late September 1917 the Third Battle of Ypres burst back into life with a series of resource-intensive, limited-objective British attacks. In the battles of the Menin Road Ridge and Polygon Wood (20 and 26 September, respectively), troops of the British Second Army used ‘bite and hold’ tactics to chew their way through the enemy defences. The Germans, practising an elastic defence in depth, seemingly had no answer. Their forward garrisons were too weak to beat off the first assault. And poor communications and the difficulty of movement across a devastated and lethal battlefield made it impossible to launch counter attacks to regain lost ground in time. For the first time in the Flanders campaign, Rupprecht needed to call in reinforcements. The search for counter-measures began.

According to the German Official History, the defensive expert Fritz von Loßberg proposed moving away from defence in depth and increasing the strength of the forward crust to prevent any initial British break-in. This was adopted on 30 September but did nothing to prevent another defeat in the Battle of Broodseinde (4 October). Consequently the Germans reverted to a (slightly reformed) elastic defence on 7 October. Within two weeks, then, they had employed three different styles of defence: an impressive level of flexibility.

This narrative was false in three particulars. First, Loßberg’s was merely one of several senior voices, including some at Supreme Command (OHL), advocating a crust defence. Secondly, whatever the orders, it is far from clear that every front line unit was able to adjust their tactics in time. Some could but many could not. The 119th Infantry Division, for instance, which was at the front for 67 straight days from 11 August to 18 October, pointed out that new orders incorporating the latest lessons learnt were of only limited use with no opportunity to train. Thirdly, there were far simpler and more traditional explanations for the problems the Germans were facing, which spoke far more to the operational level of war than the tactical. The cumulative effect of attrition was beginning to make itself felt, with both the quantity and quality of replacements slipping. Poor leadership was another concern.

Nonetheless, the Official History narrative held considerable attractions for the German military writers who constructed it. First, by attaching responsibility for mistakes to Loßberg, it deflected blame from OHL and the General Staff more widely. Secondly, it simultaneously emphasised how flexible, rational and systematic the German approach usually was. Thirdly, it reinforced the case for elastic defence, which was an important tenet of German military thought between the wars. It is no accident that the Official History was compiled by former officers of the German General Staff, many of whom had served at OHL during the war. When the Treaty of Versailles demanded the abolition of the General Staff, the army transferred its finest doctrinal thinkers, steeped in the manoeuvrist approach of Schlieffen and Moltke the Elder, to the apparently civilian Reichsarchiv. There they were to keep the General Staff flame alive and produce a history designed to help train and teach the army’s officer cadre.

As a matter of fact, the revised elastic defence brought in after Broodseinde was never really tested. Continued British attacks at Ypres in autumn 1917 were handicapped more by weather and logistics than by German resistance. Thereafter there was little opportunity to try elastic defence until the Allied offensives of July-November 1918. It failed. But, since the German army was much weaker by then, and Allied attacks much stronger, comparisons with 1917 are tricky.

The lessons of this episode are threefold. First, the German general staff sought tactical solutions to what was in fact the operational challenge of ‘bite and hold’ and attrition. Culturally they were, like most militaries, ‘can-do’ institutions and natural problem solvers; but they were more comfortable offering tactical tweaks than in confronting operational reality. The tendency of the German army to offset operational weakness with tactical brilliance and to seek military solutions to political problems is a recurring theme in its history from Schlieffen to Stalingrad. Secondly, the experience of Flanders highlights the intellectual arrogance of its commanders. Men such as Erich Ludendorff and his entourage at OHL were convinced not only that a single solution to their difficulties existed but also that they could find it. This blinded them to the possibility that there might be no panacea, and that different situations might require different responses. It also meant that doctrine formulation became increasingly centralised and dogmatic, restricting the initiative of subordinate commanders and rendering the Germans predictable to their enemies. Rupprecht criticised this tendency, pointing out that ‘there is no cure-all. A pattern is harmful. The situation must be dealt with sometimes one way, sometimes another.’ Thirdly, the interwar German army’s prime mechanism for lesson-learning was distorted by official historians pursuing their own agenda. By misrepresenting the process of adaptation in contact during the Third Battle of Ypres they encouraged a fascination with tactical detail which helped distract the Wehrmacht from the strategic and political horrors it was soon to face. Their example reminds us that more history is not necessarily the answer. But better history may be.

Image: The view from a captured German pill-box, showing the burst of a shell of the German barrage searching British reserve trenches as part of the Battle of Polygon Wood within the Battle of Passchendaele. Taken near the Wieltje-Grafenstafel Road (Rat Farm), 27th September 1917, via the Imperial War Museum.

Legacies of the Great War: the Experiences of the British and American Legions during the Second World War


Ashley is a DPhil student in the Globalising & Localising the Great War programme at the University of Oxford. You can hear a recording of the talk associated with this post here.

The year 2017 marks the centenary of American involvement in the First World War, but it is unlikely to draw the same level of public attention as the 2014 anniversary did in Britain. The Great War does not hold such prominence in the American national consciousness, a reality which is often attributed to its more limited role in the conflict. The United States entered the war three years into the fighting and lost approximately 53,400 men killed in combat (although including influenza deaths among servicemen raises the tally of American dead to more than 115,000). Britain, by comparison, suffered more than 700,000 dead during the conflict. It could be argued that such figures explain why the First World War has receded in American public memory while it retains such prominence in Britain, but it is significant to note that this was not the case in the years immediately following the war. As scholars such as Jennifer Keene, G. Kurt Piehler, Mark Snell, and Stephen Trout have argued, the war left a considerable mark upon America and a culture of commemoration developed in the post-war years just as it did in Britain and other former belligerents. So when – and how – did these memory trajectories come to diverge so markedly?

Naturally, our thoughts turn to subsequent historical developments for this answer, and particularly to the Second World War, which is the predominant twentieth-century war remembered in the United States. How this latter conflict came to affect the memory of its predecessor is an intriguing question into which ex-servicemen’s organisations such as the British and American Legions can provide unique insights.

The Legions’ shared characteristics provide a baseline for comparison that may help illuminate the unique national contexts in which they were situated. The membership, leadership, structure, and relationship to the state of both groups mirrored one another – as prior work by Niall Barr, Graham Wootton, William Pencak, Thomas Rumer, and Stephen Ortiz has demonstrated. Former officers and the upper classes were over-represented among the national leaderships of both groups, while white middle-class men of small towns dominated the rank-and-file membership. Hierarchically structured with local, regional, and national outposts, both groups enjoyed close working relationships with their respective states, thanks to conservative political agendas. Perhaps the most significant similarity, however, is their common mission to perpetuate the memory of the First World War. This agenda came to inform their political and cultural engagements in Britain and America throughout the interwar period.

Yet despite the similarities in demographic and cultural terms, and the shared background and aims of these groups, in-depth research comparing the two is lacking. This is due in part to the differing national contexts mentioned earlier, but also because of important distinctions between the organisations themselves. The American Legion was considerably larger and more powerful politically than its British counterpart, claiming between 15% and 25% of all Americans mobilised for World War I as members and enjoying support from political elites such as Theodore Roosevelt, Jr. The British Legion, in contrast, represented 10% of British veterans at most during its interwar peak. Its national presence was felt more through its annual Poppy Day appeal than through its influence on official policies.

Yet these differences only make the question of divergent memory trajectories more pronounced, since it is in the United States – with its larger and more politically influential Legion – that the memory of the Great War has faded most. Perhaps the answer can be found in the differing national experiences of the Second World War?

That the Second World War delivered a blow to such groups so firmly anchored in the Great War is unsurprising. The onset of another global conflict forced both organisations to re-evaluate the legacy of the preceding war. Comparing the First World War with the Second thus became a frequent theme in British and American Legion discourses – especially early on in each nation’s war effort. Placing Great War veterans in relation to those being mobilised for the new fight was particularly important for the groups, whose membership rolls might increase via these future ex-servicemen later on.

At the heart of wartime discussion was a debate about comradeship – which my paper to the First World War Research Group at the Joint Services Command and Staff College on 14 February 2017 (available here) analysed in detail. Participation in the First World War served as a cornerstone in the collective identities of the British and the American Legions. Incorporating ex-servicemen who had not experienced the Great War challenged existing ideas of who could be considered a “comrade in arms.” Deviating too far from past views might jeopardise the memory of the First World War, both in terms of upholding its broader historical significance and its personal import. At the same time, recruiting Second World War ex-servicemen offered the chance to secure their futures as organisations. Discussions, therefore, needed to appeal to this generation, too.

Deciding who belonged and who did not boiled down to a much larger question with significant implications: why did the service of veterans from both the First and Second World Wars matter?

Examining discussions among Great War ex-servicemen in America and Britain offers a helpful case study demonstrating how the Second World War impacted narratives of the First within these differing national contexts. The extent to which the Legions continued to uphold the Great War as significant raises interesting questions about wider developments in national memory discourses. Understanding the conflict’s place in British or American national consciousness in 2017 is not only a matter of grasping these states’ respective war experiences, but of discovering how subsequent events served to shape its narratives as well.

Image: Crowd at an American Legion convention in New Orleans, 1922, via Wikimedia Commons.

Conference Report: Commemorating the Centenary of the First World War


This post reflects upon an event held on January 12th in the River Room at King’s College London. The symposium featured contributions from Prof Jay Winter, Dr Helen McCartney, Prof Annika Mombauer, Hanna Smyth, Dr Jenny Macleod, Dr Heather Jones, and Dr Catriona Pennell. Recordings of all of the day’s proceedings are available online and can be found by clicking on the name of the individual participant.

How the conflict which subsequently became known as the First World War ought to be interpreted, understood, and given meaning became a hotly contested topic almost immediately after the outbreak of hostilities in the summer of 1914. Debates over what the War meant displayed, and continue to display, a multiplicity of interpretations, attitudes, and agendas – which often reveal far more about those who formed them than the events they aim to discuss. The centenary of the conflict – and the accompanying raft of commemorative activities and spike in public interest – has presented a unique set of challenges to historians, but also a valuable opportunity to reflect upon the relationship between their craft and broader society. This event, held at King’s College London on January 12th, brought together scholars from a range of backgrounds to discuss the varying national approaches to the centenary, and what these might tell us about how the First World War is perceived and understood in the twenty-first century.

 (Contested) Identities of Remembrance

What is the future of identities in the process of commemoration? Jay Winter’s provocation proved a key theme that ran through the event’s proceedings. With the aftermath of Brexit and the increasingly pluralised nature of identities in the modern age, participants were invited to consider how these identities might become contested and fluid, rather than temporally fixed. Vladimir Putin’s use of the ‘sacred memory’ of the First World War as a way of rehabilitating the Russian Empire and providing a ‘militarist narrative for popular consumption’ is just one example of the slippery way in which identities can be mobilised for political gain. Other speakers tapped into this pervasive theme. Hanna Smyth touched on these contested identities when speaking about the work of the Vimy Foundation. For Canada, national and imperial identities of remembrance were not binary. The idea of a Canadian national identity can be broken down further: how does Newfoundland – a separate dominion during the war, but now part of Canada – remember the First World War? What about the Quebecois? What about those from the First Nations? These contested identities are further compounded by the problematic narrative of ‘brave soldiers’ who died for freedom – a narrative that is by no means unique to Canada. In the case of Ireland, the tense, often divisive, nature of identities of remembrance has supposedly been tackled head on during the centenary commemorations. Catriona Pennell spoke of the ‘de-orangification’ of the First World War narrative, and the move towards equality of sacrifice in Ireland’s commemoration. As historians, we need to be mindful of the inherent complexity associated with the construction and presentation of national identities; the centenary has certainly reminded us of this.

Silences of commemoration

Despite the high level of commemorative activity across many of the main belligerents, there remain obvious silences of commemoration. Refugees and the reconfiguration of imperialism offer just two, broad examples. While attempts have been made to uncover and reintegrate the stories of the Canadian First Nations and Indigenous Australians into national commemorative narratives, there is still a continuing problem of visibility. Heather Jones spoke of the removal and muting of the ‘national’ narrative from France’s commemorative activity. While the international and the European have been a key focus of France’s commemoration, the continuing trauma of the nation’s colonial legacy and the often white, male face of commemoration have – unwittingly or not – proved another means of silencing complicated aspects of France’s past. From a British perspective, the focus on 1 July 1916 as a key focal point in the Somme commemoration is just one of the silences apparent in British commemorations. Cherry-picking certain operations or campaigns for commemoration, particularly those dominated by the army, is problematic. We are faced with similar problems when looking at the contributions of the army’s sister services. The British war in the air has been sidelined: in spite of its ubiquity, it will not be commemorated until April 2018, to align with the centenary of the birth of the RAF. The war at sea has been both marginalised and militarised, overlooking the important contributions made by the Merchant Navy to the war effort. In many respects, commemorative activity in Britain runs the risk of distorting our own popular perceptions of the conflict, particularly in terms of who fought and their relative contribution. What happens, then, when we widen our view to look beyond the national to the international? What implications does this cleft between historical reality and remembrance have both during and beyond the centenary?

The Historian and the Centenary & Democratisation of commemoration

The complex relationship between historical accuracy and commemorative activity, and thus between the historian and the centenary, was also evident in the participants’ discussion of the democratisation evident in the activities undertaken since 2014. Quite naturally, the speakers welcomed initiatives intended to encourage broader participation in the centenary and engagement with the First World War. Schemes such as ‘We’re here because we’re here’ and the poppy display at the Tower of London attracted widespread public interest; however, questions remain over the extent to which they prompted people to reflect upon the conflict and its meaning. Helen McCartney highlighted how programmes such as Letter to an Unknown Soldier produced a deeper engagement with the historical record than critics might fear; however, there is good reason to doubt the extent to which the centenary has genuinely changed the well-established narratives about the War evident prior to 2014. As Annika Mombauer highlighted in relation to Germany, even scholarship that penetrates into the popular domain – as Christopher Clark’s The Sleepwalkers has done – tends to be simplified to the point of gross reductionism in popular debates, which are as much about the realities of the present as they are about the lost world of the past.

All of this raises the question: what is the role of the historian during the centenary? Hanna Smyth observed that there is an implicit tension in those who study commemorative practice and the centenary also being involved in shaping its conduct. What effect does this have on the scholarship of those involved? And, in turn, ought the academic study of commemorative practice to play a role in shaping how we commemorate? If the centenary is as much about the future as the past, what claim can historians make to inform a debate about events yet to pass?

Power & modern agendas – government, organizations, & the centenary

Ultimately, how we commemorate the First World War will always be determined by the needs of the moment. The iconic image of President François Mitterrand and Chancellor Helmut Kohl standing hand in hand in the pouring rain before the memorial at Verdun is one of the most powerful encapsulations of European unity and of a future devoid of conflict on the continent. Moments such as these are as much about power and political narrative as they are about historical accuracy, yet by attempting to mobilise the past for the needs of the present they also speak to the never-ending debate as to what history is, and ought to be, ‘for’. Indeed, the laudable inclusion of German and French representatives – alongside the British, Irish, and Commonwealth forces – at the centenary service for the Battle of the Somme at Thiepval mirrored the move towards increasingly transnational, inclusive approaches within the discipline of history itself.


The timing of the UK’s referendum on its membership of the European Union – coming as it did days before the 1 July service – underlined how far we still are from a common narrative or understanding of the conflict. The War was mobilised in support of both the leave and remain arguments, often with precious little care for historical realities. Historians have no claim over this process, but they do have an obligation to engage with it and to work against the crude instrumentalisation of the past for the needs of the political moment. This process is ongoing, and will be the subject of further discussion by the First World War Research Group as we approach the culmination of the centenary cycle in 2018-19.

Image: Poppies At The Tower Of London 23-8-2014 via Flickr.

The Operational Level as Military Innovation: Past, Present and Future


As Defence-in-Depth once again spends time exploring the concepts of the operational level and operational art, it seems an appropriate time to relate my previous contribution on the subject to the other research strand that I have previously blogged about: military innovation. Though the popular focus of military innovation tends to be on new technologies and weaponry, much of the theorising about the causes of military innovation takes evolutions in doctrine as its starting point. I will return to the different theoretical approaches to military innovation in a future post but, for now, the important point is that the operational level is, first and foremost, a doctrinal innovation and that this is crucial to any debate about its current and future worth. As discontent with the current form of the operational level grows, placing the debate in appropriate context becomes ever more important.

Before exploring the history of the operational level, we need to understand why doctrine has often been the source of scholar-practitioner theorising about the causes of innovation. First, a critical practical issue for any academic is the quality of primary source material on a subject. For historians trying to understand the dynamics of military reform in a given era, shifts in doctrine offer concrete evidence of change being enacted by the armed force in question. One can trace a doctrine’s origins back through the system and glean invaluable insights into how and why it came into being because, most obviously, it is written. Further, the formal character of its codification increases the likelihood of this traceable genealogy. Second, though the exact purpose of doctrine varies from military to military, its basic function is to provide authoritative guidance that helps militaries fulfil their raison d’être: usually, to be prepared to wage war successfully. Certainly, ‘field manuals’ and ‘warfighting doctrine’ have that purpose (the clue being in the titles), and so it is a reasonable assumption that doctrine should also reflect the most current, institutionally agreed, thinking on how to actually conduct warfare. Inevitably, the more rapidly the character of conflict is changing, the harder it is for doctrine to keep pace but, sooner or later, it either reflects successful innovation or fails. It is no coincidence, therefore, that Barry Posen chose inter-war doctrine in Britain, Germany and France to analyse the drivers of innovation, and that studies of doctrine formulation have been an integral part of military innovation theory ever since.

This is relevant here because the operational level, now integral to how we think about warfare, is, at its heart, doctrine. It makes its way into our consciousness because it takes hold as a concept that relates to bigger issues of strategy and campaigning, but it formally originates in a specific piece of doctrine: the US Army’s FM 100-5 of 1982. The distinction between the operational level and operational art was subsequently made in the 1986 edition. Ok, fine, so what? Well, though the formalisation of the operational level originates in the United States Army in 1982, thinking about ‘operational art’ long pre-dates it and, in each of its guises, is also a doctrinal response to specific circumstances. Taking three highly influential moments in turn: first, the Prussian General Staff sought to apply the enduring lessons of Clausewitz and their practical experiences in the Austro-Prussian (Seven Weeks) War of 1866 and the Franco-Prussian War of 1870-71 to a highly innovative intellectual debate about the future of war. This debate encompassed several related innovations in warfare, including the physical expansion of armies and of the battlespace and the impact of related technological advances in firepower, mobility and communications. Emphasis on decisive battle remained in the thinking of Moltke the Elder and the officer class, but appreciation of the inter-connected nature of the battlespace grew, presaging modern thinking about operational art. We see these developments in the writings of key thinkers, in the Prussian Staff College, in ‘doctrine’ (such as it was) and, of course, in practice.

Second, after the First World War, the Germans and Soviets in particular responded to their own very specific experiences by developing cutting-edge combined arms and armoured manoeuvre concepts. Their shared experience of defeat and the Soviet experience of a subsequent civil war fought over huge distances encouraged radical experimentation and boldness when thinking about future war. In both countries, doctrine again reflected this innovation and, though the Germans remained resistant to any formalisation of an operational level, they pushed the boundaries of technology and campaigning skill to far greater effect. The Soviets, by contrast, fell behind in technological terms once Stalin imposed his own brutal control on the military, but their doctrinal innovations of the 1920s and early 1930s advanced thinking about the link between strategy and tactics – operational art, in other words – in a profoundly important manner. I would argue that they are actually more important in this respect than the Germans. Again, both eventually tested their theories in the crucible of war and, while German combined arms brilliance influences the physical component (how to conduct high-intensity warfare) to this day, Soviet thinking has had the greater impact on the conceptual (how to conceive of warfare).

Nowhere is this more evident than in the final snapshot: the US formalisation of an operational level. Partly in response to defeat in Vietnam in the 1970s and to the Soviet creation of Operational Manoeuvre Groups (OMGs) in the 1980s, the US military formalised the operational level. The concept spread into NATO and then more broadly. Again, this innovation was doctrinal in origin and conceived in response to very specific challenges. Further, despite recent caricatures of the US military debate as founded on fundamental misunderstandings of the historical evolution of operational thinking, closer study of the genealogy of the doctrine actually reveals an open, intellectual and sophisticated analysis of what had gone before that is much more in keeping with the traditions of the Prussians and Soviets. True, there are misunderstandings in the US application of the concepts but, arguably, they served a very practical purpose in the context of the 1970s and 1980s. The operational level, too, has been tested in battle – with great success in the Gulf War of 1991 and in Iraq in 2003 – but has proved increasingly problematic in dealing with the kinds of complex conflicts presented by Iraq and Afghanistan. These problems have inevitably led to the present debate about its current and future utility.

What are the implications of all of this for academics and modern militaries trying to think critically about operational art and the operational level? Well, there are lots of interesting lessons about the drivers of military innovation but a more profound lesson perhaps relates to the point that the concept is, first and foremost, doctrinal. The operational level does not have any intrinsic right to remain at the heart of how we conceive of modern warfare. I have argued in the past that only operational art, in its various guises, is a constant in campaigning. Thinking about a ‘level’ evolves in response to very particular threats in very specific circumstances and only becomes formal in the 1980s. It changes in form throughout history and is not a constant in warfare: you don’t necessarily need an operational level to enable operational art. Critics of the ‘level’ therefore have a point but, as an advocate of its continued utility, I would argue that its failings are not evidence of its redundancy and inevitable demise but rather the consequence of far too little time in recent decades spent on genuinely innovative thinking about its current and future form. Reminding ourselves that the operational level is an example of innovation in military thinking, of purposeful doctrine, should also serve as a reminder that good doctrine requires constant critical engagement to remain relevant. Time, perhaps, to stop bashing the concept and start thinking innovatively about it once again?

Image: Soviet stamp depicting Marshal Mikhail Tukhachevsky (wikicommons). Tukhachevsky was executed during Stalin’s Purges but rehabilitated as a national hero along with several other key military thinkers during the 1960s.