International Relations (IR) theory is certainly very useful for some things, and in a previous post I pointed out just how important it is. But by now you may have noticed some of its limitations as well. Because most of the ‘big’ theories operate at the systemic or structural level, they don’t provide much detail. You can explain the general reactions of governments to threats like ISIL or Putin’s incursion into Ukraine with a general theory, for instance, but it is hard to explain the particular decisions of leaders this way. Why, for instance, did John Kennedy decide to institute a naval blockade in response to the news that the Soviets had placed nuclear missiles on the island of Cuba? Why did Lyndon Johnson decide to escalate US involvement in Vietnam? Why did Anthony Eden decide to take back the Suez Canal from the Egyptians using military force? And why did Margaret Thatcher decide to retake the Falkland Islands by force?
When it comes to explaining particular decisions like these, general theories like realism, liberalism and constructivism offer clues of the broadest kind, but they don’t provide the kind of detail we need to explain ‘why X did Y’. For this reason, the late neo-realist Kenneth Waltz always disavowed any ability to explain the policy decisions of states with his theory, although his followers haven’t always heeded his advice. Fortunately, Foreign Policy Decision-Making (FPDM) theory can provide such detailed answers, and scholars have been exploring this approach since the 1950s. Some have focused on the state level of analysis, looking at the ways in which democracies supposedly differ from non-democracies in how they formulate foreign policy, for instance. Others focus on the ways in which policy preferences are filtered through what has become known as ‘strategic culture’, which has to do with a nation’s particular ways of war and peace. Although culture may be as much a ‘grab bag’ into which actors delve to justify a given course of action as it is a ‘determinant’ of that action, such state-level considerations at least help flesh things out a bit.
When I teach this topic as a special subject or elective at the UK Defence Academy, though, I tend to focus most of all on the bureaucratic, group and individual levels, since these seem to provide the most acute detail of all. The bureaucratic politics approach, for instance, is most associated in the United States with Graham Allison’s Essence of Decision. That book was in part an analysis of bureaucratic infighting within the US and Soviet governments during the 1962 Cuban missile crisis, but such infighting is just as rife (if not more so) within the British system of government. Readers may be familiar with the internecine conflicts which raged between the Royal Air Force and the Royal Navy in the 1960s, for example. Each fought the other so successfully that they effectively destroyed each other’s pet projects (the TSR2 aircraft in the case of the RAF, the CVA-01 carriers in the case of the RN). In 2010, the debate was revived as the two argued behind the scenes over the Queen Elizabeth-class carriers, the Nimrod maritime patrol aircraft and the Harrier jump jet. Such infighting can lead to decision-making outcomes which are at the very least suboptimal, and may even be downright irrational. Decisions can end up being the ‘lowest common denominator’, representing what nobody really wanted in the first place. Or they can simply reflect what key organizations happen to know how to do, rather than what is actually needed in a strategic or operational sense.
Group-level forces such as ‘groupthink’ – an all-embracing tendency to think alike within a foreign policy group, first identified by Irving Janis in his book Groupthink – can have a similarly damaging effect. When Lyndon Johnson came to make the critical decisions in 1965 about whether or not to escalate America’s involvement in Vietnam, he increasingly ‘hunkered down’ and marginalized dissenters like George Ball and Bill Moyers who disagreed with the headlong rush to war. Similarly, the decision-making of Bernard Montgomery seems to have been characterized by this kind of stultifying group dynamic. When it came to the decision to go ahead with the ill-fated Operation Market Garden plan in 1944, the often rigid and opinionated Field Marshal refused to listen to members of his group who warned that the Allies were heading towards disaster. When outsiders like Dwight Eisenhower’s Chief of Staff Walter Bedell Smith and the British Intelligence Chief Bill Williams warned Montgomery of German Panzer divisions in the Arnhem area, they were simply waved away by the Field Marshal himself. And when Major Brian Urquhart – later a senior United Nations official but in 1944 still a very junior officer – reached similar conclusions and pressed these on General Frederick ‘Boy’ Browning, a doctor was assigned to tell the young Major that he was ‘sick’ and ‘in need of a rest’, a classic method of defusing dissent (Janis, by the way, called this ‘domestication’, suggesting that leaders sometimes treat dissenters more like family pets than trusted advisers). These events are vividly portrayed in some early scenes of the movie A Bridge Too Far, where Urquhart appears as the character of ‘Major Fuller’.
It is sometimes hard to distinguish what are essentially group-level forces (shared by large numbers of people) from the cognitive psychological peculiarities and errors of particular leaders. But decisions sometimes do emanate from the psychologies of individuals rather than the broader group. Prime Minister Anthony Eden was enormously taken by the historical analogy between Gamal Abdel Nasser and Benito Mussolini, for instance, just as that comparison (together with the more common parallel with Adolf Hitler) influenced how Margaret Thatcher looked at Leopoldo Galtieri. Thatcher never thought that the Argentines would invade, just as Galtieri believed that Thatcher would not react if he did. Eden fell victim to selective perception or wishful thinking as well, ignoring an explicit warning from Eisenhower – made in advance of the UK’s military action – that the United States would not support any British attempt to retake the Suez Canal. And in what should count as one of the greatest British decision-making fiascoes of all time, Lieutenant General Arthur Percival ignored repeated warnings in 1941 and 1942 that the Japanese were about to capture Singapore. Assuming that Japan could not invade via the seemingly impenetrable jungles of Malaya – in fact, they would stream down the peninsula on bicycles with relative ease – Percival turned his guns out to sea instead of towards the enemy. Although some revisionist historians have subsequently tried to remove some blame from Percival himself, the result was the humiliating military disaster portrayed at the beginning of films like The Railway Man, and the untold suffering of thousands of British and other servicemen who were forced to toil long hours on the Burma railroad in appalling conditions.
It is still not entirely clear why Percival ignored the evidence presented to him, but cognitive consistency theory tells us that human beings routinely ignore information which does not ‘fit’ their established beliefs. Much as Dick Cheney refused to accept in 2003 that the evidence supposedly demonstrating that Saddam Hussein still possessed ‘Weapons of Mass Destruction’ (WMD) was flimsy at best, so Percival may have seen what he wanted to see and disregarded the rest. Or he may simply have seen what he expected to see, a slightly different proposition which is derived from an approach called schema theory. As Daniel Kahneman shows in the popular Thinking, Fast and Slow, human beings regularly and predictably make certain kinds of errors, mistakes that are easy to see in retrospect but may not be readily apparent at the time. Those interested in exploring a fuller range of organizational and psychological errors as applied to foreign policy decision-making cases might take a look at my The Decision Point, which is designed to explain a range of theoretical approaches as simply and directly as possible. Hopefully, however, I have given you enough examples already to suggest that FPDM theory is worth looking into. That kind of theorizing is quite simply indispensable to anyone who is trying to understand what makes leaders ‘tick’, and especially why they so often get things so badly wrong.
Image: Vice President Hubert Humphrey and President Lyndon Johnson discuss the Vietnam War, via Wikimedia Commons.