Author Information: Matthew R. X. Dentith, Institute for Research in the Humanities, University of Bucharest, firstname.lastname@example.org.
Dentith, Matthew R. X. “Politics, Deception, and Being Self-Deceived.” Social Epistemology Review and Reply Collective 8, no. 4 (2019): 38-43.
Sometimes you read a blurb and think “That dovetails with my work; I must read that!” Then, when you finally manage to get hold of a copy of the text, you either devour it in as few sittings as possible… Or you sit on it for months because whatever excitement you had dissipated by the end of the introduction.
This book is, unfortunately, an example of the latter. It is a book with a very interesting central thesis—the development of a theory about political self-deception—which suffers both from overreach and a poorly structured argument. Let me start with the former issue, since the latter is really a problem of presentation rather than of the argument's substance.
To Deceive Oneself…
Anna Elisabeth Galeotti argues that not only can agents self-deceive, but there is a special species of self-deception which is political in nature. So, what is self-deception, let alone political self-deception? According to Galeotti—although it is a contested concept—self-deception is “the distortion of reality against the available evidence and according to one’s wishes” (p. 1) and “a form of motivated irrationality, displayed by usually rational subjects, as a rule capable of responding to evidence adequately and forming and holding beliefs appropriately” (p. 19). That is, when self-deception occurs, it really is the case that the agent wants to believe p, and thus comes to believe p, despite the evidence.
Why is self-deception a contested concept? Well, because people in both Philosophy and Psychology have questioned whether someone can really self-deceive. After all, to deceive someone else is one thing (and, it seems, unfortunately quite common), but to deceive yourself is another thing entirely.
Some justify this scepticism about the possibility of self-deception by preferring accounts that, say, characterise seeming self-deception as just post facto rationalisation (an agent acted as if p were true despite knowing it wasn’t, and then justified their actions after the fact by saying they believed p to be true at the time). Others explain away such apparent self-deception by saying agents are simply subject to biases (there is no motivation behind the irrational belief that p, just unconscious bias).
Galeotti argues that to ignore the possibility of self-deception, however, means “giving up a whole body of phenomena, amply described and reported in literary works and novels, and deeply entrenched in our common experience” (p. 25). Yet tropes of fiction are not evidence of the existence of actual mental phenomena; at best such tropes suggest that in our folk-psychologies we think it is possible to self-deceive.
Better is her argument that we seem to personally experience self-deception, although even then that is questionable: not only might we retroactively perceive biased thinking as a case of self-deception (since it moves our fallacious reasoning from the cold to the hot domain of cognitive activity), but there is a huge literature—particularly in the Philosophy of Mind—which questions whether our self-descriptions of mental activity reflect at all how we actually think.
Galeotti also argues that there is evidence elsewhere for self-deception, although this evidence is hard to parse. In the first half of the book many such examples of this evidence are introduced, but then never explained properly. An experiment by C. Michel and A. Newen is mentioned in passing, the results of which are said to support the self-deception thesis (p. 39).
But there isn’t enough detail in the description of the experiment to show this. Similarly, when discussing voice recognition experiments by R. Gur and H. Sackeim, Galeotti claims they support the ascription of self-deception as a mechanism for avoiding embarrassment (p. 24). Yet, once again, we are told this rather than shown it. There are also times when we are told things like “Yet philosophers regard this solution as less than desirable” (p. 73) without ever being told who these philosophers are.
Add to this the problem that if self-deception occurs, it occurs and operates selectively. As Galeotti writes, self-deception:
[I]s not a general causal mechanism countering any unwelcome data, but it activates only under specified circumstances, namely, in case the negative evidence threatens an important belief for S’s well-being which S cannot (or believes he cannot, or discounts he can) counteract or finds too costly to counteract (p. 45).
That is, we have to ascertain when something is self-deception rather than mere deception. As to why people might engage in such self-deception—aside from being motivated to irrationally believe p over ~p—in part it is because, in the short term, self-deception can be good for reducing anxiety. After all, the self-deceiver really wants p, rather than ~p, to be true.
However, the problem with being self-deceived is that, in the long term, such self-deception leads to bad consequences. P, after all, turns out not to be true, and your subsequent decisions based upon p being true will reflect badly on you and—as we will see in the political case—on your mates.
Galeotti argues that not only is it not a contradiction in terms to deceive oneself, but that there is a species of political self-deception which explains away not just bad policy decisions but also the failure of policy-makers to see the consequences of such decisions.
Now, Galeotti is quick to point out that her conception of self-deception does not absolve self-deceivers of the responsibility for their subsequent actions. Once an agent realises they have been self-deceived, then they should correct their future behaviour.
Not just that, but Galeotti suggests it might even be possible to design political institutions to guard against self-deception in the first place. Yet, despite this, her case studies—the Bay of Pigs fiasco, the Gulf of Tonkin Incident, and the Invasion of Iraq—end up reading as apologetics rather than excoriations.
Case Studies of Deception or Deceptive Case Studies?
The second half of the book concerns the application of her theory to the aforementioned three cases, all of which have been put forward as examples of political deception. As such, the problem with Galeotti’s characterisation of these events turns on the primary issue for self-deception itself: does it occur?
Because if we deny the existence of self-deception, then we still have explanations based on deception for these three cases. Not just that, but even if we grant there can be cases of self-deception, why think these are notable cases of such self-deception? Once again, there are rival and well-accepted explanations for the same phenomena.
In part my scepticism is based upon Galeotti’s dismissal of conspiracy theories for some of these cases. I mostly work on the epistemology of conspiracy theory, and I am somewhat used to people outside the small community of philosophers who bother to treat conspiracy theory seriously talking about said theories as if Karl Popper was the last word on the topic.
So Galeotti’s dismissive take on conspiracy theories—when it comes to accounts of what went wrong with the Gulf of Tonkin Incident in 1964, or the “case” for the Invasion of Iraq in 2003—was no surprise. She casts the conspiratorial explanations of these events to one side because of their supposed implausibility, and this is awkward because—just as I defend conspiracy theorising in a range of cases—she is seeking to show that self-deception, rather than deception, best explains these curious sets of events. However, unlike self-deception—which is contested—we know conspiracies occur. What we probably do not need are rival explanations which seek to explain away deceptive political practices as something else.
Part of the problem is a certain naive reading by Galeotti of the minutes, diaries and other official records around these events, which she takes to show that the agents involved self-deceived rather than sought to deceive. Galeotti assumes that the extant records are honest testaments of the intentions and thoughts of the political actors involved.
Yet I can’t help but be reminded of Julius Caesar’s Commentāriī dē Bellō Gallicō (Commentaries on the Gallic Wars). These were ostensibly accurate recollections of the events of Caesar’s campaigns in Transalpine Gaul. Yet most Classicists and Ancient Historians now think these were written primarily to garner support and sympathy for Caesar back in Rome: Caesar characterised the Gauls as an almost existential threat to the Roman Republic which only a general like Caesar could counter. Yet, from what we now know about the Gauls and their civilisation, Caesar was hamming it up for his Roman readers.
My point here is that whilst it is possible the recollections of the people involved in Galeotti’s case studies are sincere, it is also possible that these recollections were written to excuse mistakes, or hide intentions. This is sometimes done intentionally, and sometimes unintentionally (and not necessarily in a self-deceptive fashion).
The Benefit of the Doubt?
Indeed, reading this book I constantly asked myself why we should give these politicians the benefit of the doubt and say they deceived themselves.
Take, for example, the Gulf of Tonkin affair—discussed in chapter 5—where the USS Maddox was alleged to have been attacked by North Vietnamese forces on August 4th, 1964. Galeotti thinks that, as there is evidence that senior naval personnel and members of the U.S. Government initially thought the attack was genuine, the resulting deception—after it was discovered to be otherwise—means a conspiratorial explanation is untenable.
Yet even if the president and his advisors thought the attack on the USS Maddox was genuine in the first instance, their subsequent actions—once it became clear the “attack” had been misreported and then misrepresented to Congress—do not take conspiracy out of the equation. Sometimes conspiracies emerge when events you had no control over end up being used to serve a political purpose, especially if you think you can control the narrative. You do not have to fake an event to conspire after the fact.
Not just that, but she then claims the resulting twenty-year cover-up of what really happened also does not support the existence of a conspiracy. Yet that is precisely what a cover-up is: a kind of conspiracy.
This is all very reminiscent of the Popperian notion of conspiratorial success: in The Open Society and Its Enemies Popper argues that a conspiracy is only successful if it ultimately achieves all its ends. Thus the Holocaust, according to Popper, was not a successful conspiracy because the Nazis did not ultimately achieve their goal.
Using the same kind of reasoning, Galeotti claims that as the story of what really happened to the USS Maddox eventually came out—some twenty years later, mind—this means the cover-up fails to be suitably conspiratorial. Indeed, Galeotti makes a similar argument for the dodgy dossier that “justified” the 2003 Invasion of Iraq, claiming the conspiracy theories:
[C]annot explain why a group of so skilled liars and cynics, so good at manipulating people and masterminding the attack, failed so miserably in managing the aftermath of the invasion and did not serve their own vested interests as well (p. 202).
Yet, as Charles Pigden argued in Popper Revisited, or What Is Wrong With Conspiracy Theories?, people do not believe conspiracies are only successful if the conspirators ultimately get everything they want. Despite Popper’s claim, the Holocaust was, unfortunately, a success, even though the Nazis desired more.
Meanwhile, just because Blair, Bush and company might not have been able to ultimately control the narrative around the invasion of Iraq and its aftermath, they still got what they wanted at the time (at least according to the conspiracy theorists) and—notably—do not seem to have suffered much for it either (which raises the question of whether they really were ultimately unsuccessful in the first place).
Now, it may seem that this review is more a rumination on conspiracy theories than it is a discussion of Galeotti’s theory of political self-deception. This is a fair criticism, but I have focussed on these conspiratorial explanations precisely because she is putting forward alternative explanations which strip away deception and replace it with a story that the political actors in question were merely mistaken. Galeotti claims she is not absolving the culprits of responsibility for their actions, but it is hard not to take her case studies as tailor-made to say that mistakes were made, rather than attributing them to disturbing political manoeuvres.
And Another Thing…
Which brings me to the second part of my complaint, which relates to the poor structure and presentation of Galeotti’s argument.
Political Self-Deception is a long book, and its chief fault is that it takes far too long to get to the examples—the Bay of Pigs fiasco, the Gulf of Tonkin Incident, and the Invasion of Iraq—which motivate the use of political self-deception as a potential explanation for certain kinds of political activity. Before we get to these case studies—which would justify the analysis—we have to go through the process of distinguishing the author’s view of self-deception from those of her contemporaries and forebears.
Whilst this is all well and good, spending half a book on a literature review before getting to the meat of the discussion makes the text lopsided. Galeotti might have thought the examples had to wait until after the theory was expounded, but without meaty examples which suggest political self-deception might explain away certain events, the first half of the book is a long, dry slog through theory which lacks any sense of motivation.
The text also has a number of problems beyond its structure. It is badly edited (and its lopsided structure could also be seen as evidence of that): “WDM” in place of “WMD” (p. 109); “Turn” rather than “turns” (p. 117); a missing “they” (p. 129); “real fact” (p. 171); “thepresident” (p. 179); a reference back to a section which has yet to occur (p. 217); and the author of the Chilcot Report is named “Sir John Chilcots” (p. 243). The book is littered with such mistakes, big and small, which is all a little off for a CUP publication.
This review might come across as harsh. But I want to stress that nothing I have said means self-deception does not occur: I am somewhat inclined to think it is a likely contender in a range of cases. I was even struck by how closely Galeotti’s work on self-deception dovetails with a thesis I have been developing elsewhere (the notion of the “polite society”).
Rather, my point is that almost any complex political event involving multiple actors is going to be amenable to a range of different explanations, some of which will turn out to be complementary. As such, this book is for people who already think there is something to the thesis of self-deception, and who thus want to know how it applies in political cases; it is not likely to convince people who are sceptical of the notion that agents can self-deceive.
The case studies, meant to show that self-deception should at least be considered a potential explanation, never quite show that self-deception is the only viable explanation on offer. Thus, unless you already think political self-deception is common, this book is just not that compelling.
Contact details: email@example.com
Galeotti, Anna Elisabeth. Political Self-Deception. Cambridge: Cambridge University Press, 2018.
Pigden, Charles. “Popper Revisited: Or, What Is Wrong With Conspiracy Theories?” Philosophy of the Social Sciences 25, no. 1 (1995): 3-34.
Popper, Karl. The Open Society and Its Enemies. Princeton: Princeton University Press, 2013.