Author Information: Patrick Stokes, Deakin University, patrick.stokes@deakin.edu.au

Stokes, Patrick. “Reluctance and Suspicion: Reply to Basham and Dentith.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 48-58.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3qM

Please refer to:

Basham, Lee. “Between Two Generalisms: A Reply to Stokes.” Social Epistemology Review and Reply Collective 5, no. 12 (2016): 4-12.

Dentith, Matthew R. X. “In Defence of Particularism: A Reply to Stokes.” Social Epistemology Review and Reply Collective 5, no. 11 (2016): 27-33.

Image credit: Thomas Huang, via flickr

I am grateful to both Matthew Dentith and Lee Basham for their thoughtful and generous replies to my barging into their discussion of particularism and generalism about conspiracy theory. An over-long reply is a rather poor way to repay that generosity, but here goes.

Conspiracy Theory vs. Conspiracy Narrative

A central part of my argument in Stokes is that there is a gap between how epistemologists use the term “conspiracy theory” and how the term is popularly used.[1] My concern is that by defining “conspiracy theory” so broadly, epistemologists end up losing sight of the recognizable cultural practice of conspiracy theorizing. It’s well established by this point in the debate that there is no prima facie reason to reject conspiracy theories on the basis of their formal explanatory structure alone. But that level of abstraction is not, so to speak, where we live, and nor is it the level on which social critiques of conspiracy theory operate.

Dentith and Basham respond to this concern in different ways. Dentith argues that some of my worries about conspiracy theory are really concerns about certain types of conspiracy narrative. The problem is not the simple act of forming (or asserting) explanations of observed events that involve two or more actors conspiring in secret, but the deployment of particular narratives about specific conspiracies; for instance, the “Jewish World Conspiracy” narrative (or overlapping narratives, perhaps) promulgated by figures as diverse as the Tsarist Okhrana, Henry Ford, Nesta Webster, Adolf Hitler, and David Duke. “To theorise about a conspiracy—to wit, to engage in conspiracy theorising—is a different task from hooking into an existing conspiracy narrative to press a point,” and accordingly, the two should be evaluated separately.[2]

At first blush, such a distinction maps neatly onto my own concern to differentiate conspiracy explanation as a formal category from conspiracy theory as a recognizable social practice and cultural formation. And in terms of the debate between generalism and particularism, adopting this distinction would seem to leave open the possibility of maintaining particularism about conspiracy theorizing while adopting generalism about certain conspiracy narratives—something very like the “defeasible generalism” or “reluctant particularism” I endorsed.

In practice, however, it’s not clear how sharp a line we can draw between conspiracy theory and conspiracy narrative as Dentith construes these terms. Dentith invites us to “imagine someone in a room, dispassionately coming up with conspiracy theories, and then getting her lackeys to see if they have any merit.”[3] But if this conspiracy theorist is anything like most conspiracy theorists, her theories, however dispassionate, are going to draw upon existing conspiracy theory tropes and narrative structures. It is remarkable how strongly the same tropes recur in otherwise disconnected conspiracy theories: for instance, the near-ubiquity of “false flag” explanations. Say Dentith’s speculator sees reports of a mass shooting event, and wonders: “Perhaps this shooting is a false flag designed to prepare the ground for disarming the population.” That is not a stand-alone explanation, but one embedded in a tradition of “the government is coming to take your guns” anxieties. It sits within a long, ongoing, evolving, recognizable history of interpretation. These days, it re-emerges, fully formed, within minutes of any major mass shooting, regardless of context or location.

Of course, one could reply here that there’s no reason to think conspiracies won’t tend to resemble each other: the similarity of conspiracy narratives may simply reflect the finite repertoire of strategies available to conspirators. Moreover, conspiracy theories generally posit fairly powerful actors, which in turn limits the pool of possible perpetrators, so we’d expect to see recurring villains in these explanations. In short, there are only so many possible conspirators, and only so many possible ways for them to conspire effectively. Even so, in considering any individual act of conspiracy theorizing it’s difficult to see how we could differentiate between what is genuinely original (even if isomorphic with other conspiracy theories) and what borrows its form—and a large part of its sanction—from existing conspiracy narratives.

However, let’s assume that Dentith’s lackey-dispatching idle speculator is somehow oblivious to conspiracy theorizing as a social practice—perhaps she, in a nod to Frank Jackson’s “Mary,” has been raised in an environment where she has never been exposed to any existing conspiracy theories or conspiracy tropes.[4] Her conspiracy theories are, let’s stipulate, self-standing and sui generis alternatives to “official” explanations of given events. Does that entitle all her theories to be considered in a particularist way?

Accusation and Reluctance

This question connects us to what I described as “reluctance,” which should attach both to conspiracy theorizing and to indulging in particular conspiracy narratives. Dentith’s conspiracy theorist spins her theories “dispassionately.” But then, what motivates them? Dentith tells us that the question of whether mass shootings are a government plot designed to curb gun rights is “a perfectly interesting question” and that “entertaining that notion is something someone, somewhere should engage in.”[5] It’s not clear, however, where the “should” comes from here. Of course, one can “dispassionately” speculate about anything. I could, for instance, walk into any room and try to calculate the probability that anyone in that room is plotting to kill me. Despite being a fairly anxious sort, I’d probably do so calmly, because I am not actually entertaining the prospect that some of these people want to do me in. I’m just idly playing with the idea. But it is far from clear why I should speculate like this, and likewise it is far from clear why I should speculate about whether mass shooting events were hoaxed by the government.

OK, we might think, but surely such speculation is both harmless enough on its own terms and potentially exposes genuine plots, however unlikely? After all, insists Dentith, “you can theorise about conspiracy theories without making accusations.”[6] Dentith here specifies that “the threshold for accusation here [must be] something higher than simply saying ‘They are up to something…’”[7] But just how far can we go down that path before we’re making accusations? We can certainly avoid blaming anyone specific by offering explanations so under-described they barely seem to warrant the name “theory” (“Things are not as they seem,” “I’ll bet they are behind this,” etc.). But this doesn’t get us very far. It’s not clear how far you can go with suggesting a mass casualty event was really a false flag exercise without impugning someone. We might try to find a redoubt between accusation and non-accusation to hide in; we might want to call that redoubt “expressing suspicion” or, more commonly, “just asking questions” (less charitably known as “JAQing off”). But just asking questions that call someone’s innocence into question is not a morally neutral act. Dentith’s dispassionate speculator may not be doing very much practical harm, but she is nonetheless engaging in a practice with a moral cost. My walking into a room and idly wondering if you’re planning to kill me may not cause you much upset—mostly because I wouldn’t mention doing so, as that would make things pretty awkward—but I’ve still entertained the idea that you might be a murderer, and thereby done you a passing wrong. There are of course circumstances where that’s a warranted suspicion or even a necessary prudential response; but those circumstances are, precisely because they violate the background trust intrinsic to human sociality (more on this below), abnormal, even when pervasive and persistent.

For Dentith, distinguishing between conspiracy theorizing and conspiracy narrative does allow us to avoid certain narratives that are discredited or problematic. But the motivation here remains, on his telling, fundamentally epistemological rather than ethical:

After all, if the evidence is “This looks like a redressed version of a Jewish banking conspiracy narrative,” then the appropriate evidential response is to ask “Hasn’t this been debunked?” Because if it has, then we will have evidence to mount against the new version. If it has not, then we need to investigate the claim further.[8]

That may well be a perfectly valid evidential response. But we do not apply our evidential reasoning in a vacuum; we do so from within historically conditioned and epistemically finite situations, in a world already freighted with moral and political meanings. We do not step out of the world when we think and reflect; our thinking, reflecting, and suspecting are all actions we perform, and so subject to moral inspection. In that context, an at least equally appropriate response is:

Entertaining theories about a global Jewish world conspiracy is a well-recognized anti-Semitic practice, and I will not engage in such a practice by taking this theory seriously enough to investigate it.

It remains logically possible such a theory is true, but not only are we not morally or rationally obliged to entertain every theory, we are morally obliged to reject some theories even at the risk of occasionally being wrong. Basham claims it is a virtue of particularism that it “directly confronts theories that are unwarranted (Jews are trying to destroy Western civilization),” but as he presents particularism here, it doesn’t look like this is the sort of confrontation he has in mind.[9]

Generalism and Ethics

Unlike Dentith, Basham evidently doesn’t want to buy into a distinction between conspiracy theory as a cultural phenomenon and conspiracy theory as a particular form of explanation. He instead defends a thoroughgoing particularism without even the evidentiary heuristics Dentith wants to develop, insisting that conspiracy theories “should be evaluated solely case by case, on the basis of evidence, without any epistemic mal-biasing.”[1] Basham claims that my “reluctant particularism” or “defeasible generalism” is an unstable binary: it either collapses into generalism (given that generalists preserve some sliver of defeasibility) or is simply particularism.

Here’s the argument Basham attributes to me:

1) Epistemic generalism is true; epistemic issues are “off the table” except in extremely rare cases (traditional generalism);
2) Many popular conspiracy theories cause harm;
3) If a theory causes harm, it is morally suspect (consequentialism);
4) Particularism claims we should evaluate conspiracy theories on the evidential warrant of each;
5) Unwarranted conspiracy theories are popularly believed for long periods of time without evidence (the “unreasoning masses” gambit);

So, Particularism is not the correct approach to conspiracy theorizing.[2]

Basham also adds what he takes to be a missing premise here:

6) Our default analysis of conspiracy theories should not be in terms of evidential merit, but in terms of how they promote or undermine our political projects; those that undermine these should be rejected, those that promote these should be promoted.[3]

I don’t recognize my position in this argument, though I’ve no doubt this is down to imprecision on my part and not Basham’s. I do assert premises 2) and 3). Premise 5), as defined here, doesn’t really amount to an “unreasoning masses” gambit: conspiracy theorists rarely form a mass and are not necessarily irrational. For instance, with respect to my example of deaths from improperly treated or untreated AIDS in South Africa, it is of course no part of my original claim that the 330,000+ people who died necessarily believed in the conspiracy theory themselves, let alone that they were irrational; it is enough that the government (or even senior figures in the government) believed it and acted accordingly in framing their policy responses to the HIV epidemic.[4]

Premise 6) casts what is an essentially moral claim—show reticence in suspecting or accusing others of malfeasance—in political terms. Basham takes my view to be a version of the Public Trust Approach (PTA). But PTA is still an argument about the epistemic reliability of institutions; it’s “trust” in the sense of “I trust this ladder to bear my weight,” not trust in the sense of “I trust the people in this room not to kill me.” The latter is not merely predictive (“I’m 98% sure you’re not planning to kill me right now”) but an expression of a moral relation: I’m in your hands, and the fact I am so enjoins you not to act against me. This is not to deny that conspiracy theory can have dramatically corrosive effects on the body politic; indeed we’re arguably seeing that right now amidst the apparently tectonic shifts occurring in the relationship between media, politics, and citizenry. Nonetheless my point is primarily a scaled-up moral one rather than a scaled-down political one.

This brings us to the central point of disagreement here, which is premise 1). At least as phrased here, 1) seems to separate moral and epistemic issues that are in fact coimbricated right from the outset. That there is nothing prima facie epistemically false about conspiracy explanations simply as such is, to reiterate, now well established. But, as noted above, we never form our views in a moral vacuum, and that will (or should) have implications for the sort of theories we are prepared to entertain. In discussing my “reluctant particularism,” Basham notes that:

If “reluctant” means we will not immediately embrace a theory, but seek significant evidence for or against, then this is simply the particularist position. We have the same “reluctance” towards any scientific theory. This reluctance doesn’t view the theory as prima facie false. Saying a theory is not yet warranted is not to say it probably never will be, just because of the sort of theory it is.[5]

Quite right. But the comparison with science only goes so far, for we do not stand in a moral relation to the objects of scientific inquiry, at least as regards the purely scientific questions we pose of them; we do not do wrong by subatomic particles or nebulae by postulating theories about them that turn out to be false. Levelling a false accusation has a moral cost to it that proposing a flawed hypothesis in physics or chemistry, in itself at least, does not.

The Payoffs of Particularism

Basham takes it that when I discuss the moral cost of conspiracy accusation in this way, “the ‘immoral’ is a simple consequentialism.”[6] Consequences matter, and that is why I noted them in the case of AIDS denialism[7] in South Africa, but the claim is not fundamentally or solely a consequentialist one. If I publish a blog insisting without anything like credible evidence that Prince Philip had MI6 murder Diana, I’ve still wronged Prince Philip even if he never finds out, doesn’t care, or suffers no other unwelcome effects of my accusation. But let’s dwell on consequences for a moment, as that is where Basham launches a defense of particularism.

Basham claims that particularism about conspiracy theory, characterized by “evidence-dissemination and open debate,” has in practice yielded various dividends, both in terms of confirming some conspiracy theories and refuting others. Two things need to be noted in response. The first is that all of the conspiracy theories Basham claims to have been defeated are alive and well: it will come as cold comfort to CDC employees harassed by anti-vaccination activists outside their workplace to hear that “The anti-vaccination movement has been profoundly undermined,” and even less comfort to parents in places like the Northern Rivers region of New South Wales, where vaccination levels, thanks to denialism, remain dangerously below herd immunity levels.[8] The President of the United States has publicly supported the idea of a link between vaccines and autism, and has reportedly discussed appointing anti-vaccination activist Robert F. Kennedy, Jr. to chair a commission on the subject.[9] If this is a movement that has been profoundly undermined, one shudders to think what it looks like in rude health. It may also be true that, as Basham claims, “Many of the tenets of the 9/11 truth movement have been abandoned by its own members,”[10] but that movement has likewise hardly vanished; as Alex Jones has recently demonstrated, you can go on TV, publicly call 9/11 an inside job and Sandy Hook a hoax, and still have the President-Elect of the United States call to thank you and your viewers for their support.[11]

Secondly, Basham claims that particularism has made it possible for certain conspiracy theories to be confirmed. Specifically, he claims that “the Iraq war is now widely recognized in the West to be an act of political conspiracy on the part of the US and other Western governments, particularly those of Bush and Blair.”[12] But both “political conspiracy” and “widely recognized” (note that Basham does not simply say “widely believed”) are ambiguous here. If the claim is that the West unjustly pursued self-interested motives in invading Iraq under the cover of overblown WMD threats, that seems clearly true, but doesn’t necessarily rise to the level of a conspiracy. One can act in self-interested ways without conspiring with others.[13] If the claim is rather that Bush, Blair, and other actors actively and explicitly colluded to fake intelligence about WMDs to provide a false justification for invading Iraq, then this is far from a “widely recognized” fact.

The Chilcot Report, for instance, is comprehensively damning about the UK Government’s decision to go to war, yet even it stops short of alleging a conspiracy, unless we think that a grotesque combination of motivated willful ignorance, hubris, and negligence somehow meets the definition of conspiracy used by epistemologists. Of course, it may yet emerge someday that there was a conspiracy: a phone transcript might yet surface of Bush telling Blair “Let’s milk this 9/11 thing by pretending Iraq has WMD and then invading to take their oil.” But I’d be willing to bet that if that does happen, it won’t emerge from the ranks of those now popularly referred to as conspiracy theorists. It will come, as it usually does, from whistleblowers and journalists. (Until recently, I’d have included Wikileaks in that list…)

That in no way invalidates the important point made, by Pigden and others, that the pejorative use of the term “conspiracy theory” makes it easier for political actors to deflect attention from legitimate questions. But then, if we want to stop the term being used to shut down proper scrutiny, we need to be honest about why the term has the pejorative connotations it has: the tradition to which the term is characteristically applied, and the attitudes, tropes, and patterns of argumentation employed by that tradition.

The Tracy Affair

I raised the case of James Tracy as an instance of morally reprehensible behavior licensed by conspiracy theory. I think this case illustrates a very specific problem: the way conspiracy theories tend to (and note I do not say any more than “tend to”) cause conspiracy theorists to make purely defensive accusations. Basham insists however that while Tracy’s actions were “misguided” as well as “immoral and imprudent,” the Tracy affair has “no epistemic relevance to how we should approach conspiracy theories as such.”[14] The “as such” clause here makes a degree of sense if, like Basham, one is committed to a purely epistemological analysis of conspiracy theory. But only a degree. The behavior in this case is not simply a matter of insensitivity or imprudence grafted onto an otherwise unrelated belief system. It’s a direct result of trying to defend that belief system from disconfirmation.

Imagine you meet someone who tells you their child has been killed. What would need to be the case for you to begin to suspect that they are lying not merely about the death of that child, but about the child’s very existence? Now imagine how strong those suspicions would need to be for you to demand that the person you’re talking to prove, to your satisfaction, that their child had existed. The evidentiary bar here would have to be very high indeed.

But now imagine that the story of the dead child (call this story or set of propositions x) is flatly incompossible with another set of beliefs you happen to hold (call this set c). You have four options, schematized below:

1) Accept x is true and accept c is false;
2) Reject x and insist c is true;
3) Accept x is true but try to find a way to make this fact compossible with the truth of c;
4) Remain agnostic as to which, if either, of x and c is true.
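To put the structure in schematic form (the notation is mine, added purely for illustration and found nowhere in the original exchange): incompossibility means x and c cannot both hold, and option 3), on this gloss, amounts to revising c into some weakened c* rather than preserving it intact:

\[
\neg(x \wedge c) \quad\Longrightarrow\quad
\begin{cases}
1)\; x \wedge \neg c \\
2)\; \neg x \wedge c \\
3)\; x \wedge c^{*}, \ \text{for some revision } c^{*} \text{ of } c \text{ consistent with } x \\
4)\; \text{suspend judgment on both } x \text{ and } c
\end{cases}
\]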

In this case, the more committed you are to c, the stronger the reasons you’ll have for rejecting 1) and 4). That leaves you with either 3)—which is hard work and may turn out not to be possible in a given case—or 2). In this case, Tracy’s c was the belief that Sandy Hook was staged, and he took option 2). It strains credulity, to say the least, to claim that Tracy simply noticed, independently of his antecedent commitment to Sandy Hook being a hoax, problems with the Pozners’ story and accused them on that basis. He accused them because their story contradicted an interpretation of the events of 14 December 2012 that he accepted. Moreover, such an accusation of deceit is easier to make, because more parsimonious, if one is already committed to the existence of a conspiracy not simply to commit the act, but to hide the truth. That doesn’t mean such accusations are always and necessarily a feature of conspiracy theorizing.

Again, my claim goes to the typical features of conspiracy theory as a social phenomenon rather than a specific form of explanation. And it is frequent enough to be a particularly salient feature of the phenomenon. Tracy, after all, is not the only person to confront Sandy Hook parents and witnesses and accuse them of being crisis actors. Nor is Sandy Hook Trutherism the only form of conspiracy theory that generates this class of accusations.[15] When journalist Alison Parker and her cameraman Adam Ward were shot dead on live television in August 2015, Parker’s boyfriend Chris Hurst found his grief compounded by conspiracy theorists insisting that Parker was a crisis actor, that she was not dead, that Hurst too was a crisis actor, that they had never had a relationship, and so on.[16] Again, this doubt is motivated not by any evidence that would be compelling independently of a conspiracy theory, but solely by a pre-existing disposition to believe the shooting was staged and that Parker and Ward (and by extension Hurst) must therefore be crisis actors—a claim made by, among others, James Tracy’s blog.[17]

As I understand it, Dentith’s current project seeks to develop heuristics for determining when a conspiracy theory claim is and is not worthy of being taken seriously enough to investigate—in other words, something like the non-absolutist particularism I’m endorsing and Basham rejects. If we’re developing heuristics for when we should and should not investigate conspiracy claims, then

Does taking this theory seriously enough to investigate it require me to dismiss grieving parents as frauds, under conditions in which there exist no compelling theory-independent reasons to think they are? If so, don’t take this theory seriously enough to investigate it.

— isn’t a bad start.

A Final Word on Trust

One thing that this discussion has made clear to me is that radically different foundational views of the role of trust are in play here. In my initial reply I only alluded to this parenthetically, and it is clear that more needs to be said, if only to clarify what underlies the divergences. A fuller working out of this point will need to wait for another occasion. For now, it’s worth simply noting where the underlying views of the normativity of trust differ.

The philosophical literature on conspiracy theory largely embeds a calculative view of trust. When most philosophers ask “How much should we trust our society’s sources of information?” they are asking a question about reliability: “On past performance, how much confidence should we have that these institutions are telling the truth and/or acting in a way consistent with their stated commitments to acting in our interests?” There is, as Dentith notes, no way of determining in advance just how conspired the world really is.[18] But nonetheless, it is not unconspired—conspiracies occur, and most philosophers working on this topic take conspiracy to be a more pervasive feature of social and political life than we usually assume, and think we should calibrate our suspicions accordingly.

David Coady, for instance, explicitly endorses a sort of Aristotelian account of trust, according to which “the intellectual virtue of realism is a golden mean between the intellectual vices of paranoia and naivety.”[19] Thus, our phronetic judgement should aim to be just suspicious enough. Alasdair MacIntyre[20] has offered a similar account of ideal trust as a mean between excessive suspicion and credulity, arrived at through a long process of moral training: learning who to trust, and when, and how much.[21]

Yet trust as an interpersonal and moral phenomenon is not simply a matter of calculating and responding to reliability. For one thing, it involves mutual responsiveness to need: taking the fact that the other person knows I am reliant on them as a reason for them to act in ways consistent with my interests.[22]

We know that not everyone is trustworthy in that sense. Basham tells us that “Human life is conspiratorial. We can face this, embrace it, but if we deny it, we empower it in the worst way.”[23] People lie, cheat, and steal, and sometimes they conspire in order to do so. But human life is also predicated on foundational, non-calculative trust. When I walk into a room I don’t mentally calculate the odds of you trying to kill me, not because I’ve previously assured myself that the odds are too low to worry about, but because of that default background trust that is a condition for social life. As K.E. Løgstrup put it, trust is both conceptually and ontogenetically primary, distrust secondary; without that foundational trust the sphere of human life falls apart.[24] Accordingly, our judgments of what to believe of other people are guided by heuristics that are not merely epistemic in character, but also ethical. Giving “the benefit of the doubt” is not, or not typically, merely a judgement about the reliability of the other party, but an expression of that normative default attitude towards others.

This picture of foundational trust sits awkwardly, to say the least, with the standing vigilance required to maintain a democratic polity. There are always good reasons to be suspicious of power of all forms, both overt and covert, explicit and intrinsic. The work of identifying and uncovering power relations is indispensable, and it seems to involve a relentless and remorseless hermeneutics of suspicion. That tension—between foundational trust and vigilance—is a real and seemingly permanent feature of political and social life. What I have called “reluctance” here is an expression of that tension, an awareness of being caught between the duty to view others as good faith interlocutors and the duty to uncover wrong-doing. The sort of generalized, eager suspicion involved in entertaining and advancing conspiracy theories abandons that reluctance, and thereby misses that central dimension of human sociality. In a world full of untrustworthy people, the demand of trust remains.

Or, to quote the US President who presided over the Gulf of Tonkin conspiracy, himself misquoting W.H. Auden: “We must love each other, or we must die.”

References

Basham, Lee. “Between Two Generalisms: A Reply to Stokes.” Social Epistemology Review and Reply Collective 5, no. 12 (2016): 4-12.

Coady, David. “An Introduction to the Philosophical Debate about Conspiracy Theories.” In Conspiracy Theories: The Philosophical Debate, edited by David Coady, 1-12. Aldershot: Ashgate, 2006a.

Coady, David. “Conspiracy Theories and Official Stories.” In Conspiracy Theories: The Philosophical Debate, edited by David Coady, 115-127. Aldershot: Ashgate, 2006b.

Dentith, Matthew R. X. The Philosophy of Conspiracy Theories. Palgrave Macmillan, 2014.

Dentith, Matthew R. X. “In Defence of Particularism: A Reply to Stokes.” Social Epistemology Review and Reply Collective 5, no. 11 (2016): 27-33.

Jackson, Frank. “Epiphenomenal Qualia.” Philosophical Quarterly 32 (April 1982): 127-36.

Jones, Karen. “Trustworthiness.” Ethics 123, no. 1 (2012): 61-85.

Løgstrup, Knud Ejler. The Ethical Demand. Translated by Theodor I. Jensen, Gary Puckering, and Eric Watkins. Notre Dame, IN: University of Notre Dame Press, 1997.

MacIntyre, Alasdair. “Human Nature and Human Dependence: What Might a Thomist Learn from Reading Løgstrup?” In Concern for the Other: Perspectives on the Ethics of K. E. Løgstrup, edited by Svend Andersen and Kees van Kooten Niekerk, 147-166. Notre Dame, IN: University of Notre Dame Press, 2007.

Pigden, Charles. “‘Popper Revisited,’ or What Is Wrong With Conspiracy Theories?” Philosophy of the Social Sciences 25, no. 1 (1995): 3-34.

Stokes, Patrick. “Between Generalism and Particularism about Conspiracy Theory: A Response to Basham and Dentith.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 34-39.

Stokes, Patrick. “Spontaneity and Perfection: MacIntyre vs. Løgstrup.” In What is Ethically Demanded? K. E. Løgstrup’s Philosophy of Moral Life, edited by Hans Fink and Robert Stern, 275-299. Notre Dame, IN: University of Notre Dame Press, 2017.

[1] Basham, “Between Two Generalisms,” 5.

[2] Ibid., 9.

[3] Ibid., 10-11.

[4] Hence I don’t see how my paper “implies the existence of popular conspiracy theory at work in the populace and then infers that this belief must be efficacious in apparent medication refusal” (Basham 2016, 10 n.23).

[5] Basham, “Between Two Generalisms,” 6.

[6] Ibid., 8.

[7] Basham (2016, 10) is right to note that denialism per se is not the same thing as conspiracy theory. But AIDS denialism of various forms, much like other familiar forms of denialism—climate, vaccination etc.—does end up embedding conspiracy explanations either on the level of core theory or on the level of auxiliary hypotheses meant to sandbag the theory against disconfirmation. If I insist the world isn’t warming due to human activity, or that HIV doesn’t cause AIDS, and yet the knowledge-generating mechanisms of society (academia, government research bodies, public health authorities etc.) keep insisting the contrary, I am forced to conclude the people who populate these mechanisms are collectively deluded, incompetent, or corrupt. The denialists just mentioned tend, with dispiriting regularity, to plump for the last option, even if they are not logically required to.

[8] Basham, “Between Two Generalisms,” 8.

[9] http://www.abc.net.au/news/2017-01-11/donald-trump-appoints-vaccine-sceptic/8174560

[10] Basham, “Between Two Generalisms,” 8-9.

[11] http://www.politico.com/story/2016/11/trump-thanked-alex-jones-231329

[12] Basham, “Between Two Generalisms,” 10.

[13] Consider the category of ‘quasi-conspiracies’: if all actors in a given context know that if they all act in certain ways the outcome will be better for all of them, and know that all the other actors know this too, they can act in a way that looks co-ordinated but in fact involves no actual collusion (Pigden 1995, 32 n.30; Coady 2006a, 5-6). Hence when an apprehended criminal gang all refuse to confess, this isn’t strictly a ‘conspiracy of silence’: they all just know if they each keep their mouth shut, they’ll all be better off than if any one of them spills the beans.

[14] Basham, “Between Two Generalisms,” 12.

[15] As I write this, local media is reporting that a conspiracy theorist phoned a Melbourne hospital posing as a friend of a patient injured in a mass-casualty event, apparently hoping to prove the event was staged and the injured woman’s story was fake. http://www.news.com.au/national/victoria/news/australian-actor-impersonated-family-of-bourke-st-victims-in-calls-to-hospitals/news-story/d9be5da3a809ddf7bdaa58a96a54fc4e

http://www.thedailybeast.com/articles/2015/09/13/what-do-you-say-to-a-roanoke-truther.html This ‘the bereaved aren’t visibly upset enough in public so they must be lying’ trope is a depressingly recurrent one that extends far beyond conspiracy theory. Australians a few years older than myself will recall Lindy Chamberlain being accused of seeming too composed to be what she claimed to be: the grieving mother of a baby taken by a dingo. Chamberlain was convicted of murder, imprisoned, and subsequently exonerated when new evidence emerged; in 2012 a coroner found that a dingo had, in fact, taken baby Azaria. So much for the wisdom of crowds.

[17] http://memoryholeblog.com/2015/08/30/crisis-actors-alison-parker-and-adam-ward/ (Warning: on my most recent attempt to access this page [9 February 2017], Safari returned a malware warning)

[18] Dentith, The Philosophy of Conspiracy Theories.

[19] Coady, “Conspiracy Theories and Official Stories,” 126.

[20] MacIntyre, “Human Nature and Human Dependence.”

[21] On MacIntyre’s Aristotelian account of trust, which he offers in opposition to Løgstrup’s view of trust as foundational, see Stokes 2017.

[22] Jones, “Trustworthiness.”

[23] Basham, “Between Two Generalisms,” 13.

[24] Løgstrup, The Ethical Demand.

Author Information: Frank Scalambrino, University of Akron, franklscalambrino@gmail.com

Scalambrino, Frank. “Employees as Sims? The Conflict Between Dignity and Efficiency.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 35-47.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3rP


Image credit: Aaron Parecki, via flickr

“… that which mediates my life for me, also mediates the existence of other people for me.” —Karl Marx[1]

Today’s technological mediation allows for unprecedented amounts and depths of surveillance. Those who advocate for such surveillance tend to invoke a notion of public safety as justification. On the one hand, if acceptance of being surveilled follows a philosophy, it would seem to be a kind of “greatest good for the greatest number” philosophy. However, it may be the case that the philosophy functions as an after-the-fact excuse, and people are simply willing to accept surveillance so long as they are able to use their technological devices. On the other hand, it is interesting to note that in contexts where such a philosophy can no longer justify surveillance, a philosophy of ownership may be the only viable justification. Yet, insofar as we are discussing the freedom of individuals, e.g. “employees,” we should be critical regarding surveillance justified by a philosophy of ownership.

This article seeks to provide a critique of surveillance in situations where surveillance thrives despite the tension between freedom and ownership. Specifically, this article examines the development of workplace surveillance—through technological mediation—from “loss prevention” to “profit protection.” The tension between freedom and ownership in this context may be philosophically characterized as the tension between dignity and efficiency. After describing an actual workplace situation in which a retailer uses technological mediation to surveil employees for the sake of “profit protection,” a critique of surveillance will emerge from a discussion of the notions of efficiency and dignity in relation to freedom. Rather than determine the justification of surveillance through technological mediation in terms of the “justified true belief” of “profit protection,” this article—from the perspective of social epistemology—takes for its point of departure a conception of knowledge in terms of the “social justification of belief” (Rorty, 1979: 170). Hence, the policy recommendations regarding technological mediation with which this article concludes may be understood as developed through social epistemology and a concern for freedom most often associated with existential philosophy.

Employees as Sims?

It is already the case that business owners may use their smartphones to access “real time” audio and video surveillance of their employees. This article considers a retail business with stores in more than one US state; speaking with individuals who have worked under such profit-driven surveillance is illuminating. The retail space in question was small enough to have audio and video surveillance covering the entire premises where employees and customers could interact. One employee described how “the boss” was “on a beach somewhere having a drink” while watching the employee in question work. The “boss” would then periodically call the business to have “middle management” ask this employee why he was doing whatever it was he was doing. The employee described the experience as “stressful.” Further, he described feeling “paranoid,” at times, not knowing for certain how closely he was being surveilled from moment to moment.

The idea of using technology to surveil a workplace is not new. However, the kinds of technology available today allow for unprecedented levels of surveillance. Whereas less technologically-mediated work environments could have justified surveillance in terms of employee safety and loss prevention, e.g. theft and accidental destruction, today’s technologically-mediated workplace allows for greater depths of “micro-managing” through surveillance. What we will see is that despite any negative connotation associated with the notion of “micro-managing,” when understood along a spectrum of “loss prevention” and in conjunction with the technological mediation which allows for it, the use of surveillance for the purpose of micro-managing employees can seem as justifiable as locking the door when you close shop for the night.

Originally the idea of “loss prevention” included concerns to monitor for theft. If setting up video surveillance will deter theft or help you recover lost property after theft, then the calculation seems straightforward enough that the video surveillance of your business is a good investment. Further, if video surveillance helps defend business owners against unwarranted worker compensation claims by employees who were hurt on the job through no fault of the business, then again the calculation seems straightforward enough. In fact, retail businesses often employ an entire “loss prevention” department tasked not only with monitoring video surveillance of the business’s premises but also with appearing as customers among the customers to ensure shoplifters are quickly caught and detained. From the perspective of a philosophy of ownership, the idea is that you own property which you are offering to sell to others, and if others attempt to take your property without compensating you as you deem appropriate, then it seems straightforward enough that your rights regarding your property have been violated.

Now, the idea of “profit protection” may be understood as an extension of “loss prevention.” Moreover, it should be kept in mind that such “profit protection” would not be possible without today’s technological mediation. “Profit protection” is supposed to refer to the reduction of the preventable loss of profit, and “the preventable loss of profit” refers to actions performed inadvertently or deliberately. Thus, notice how surveillance for the sake of “profit protection” may technically extend beyond theft and accidental destruction of property. In other words, if employees are not performing their job duties in a way that allows for the sale of your property, then the profit which you could have reasonably earned through their labor is lost.

There are a number of ways technological mediation allows for “profit protecting” surveillance. First, just like the popular smartphone applications which allow individuals to monitor their property while away from their homes or apartments, business owners may monitor not only their property but also the individuals tasked with facilitating the sale of their property. Second, a business owner could easily isolate which employees are not performing as efficiently as they should by simply tracking sales. Given a reasonable expectation of sales—whether determined by season and time of day or by the ratio of sales to customer traffic—business owners can determine when their property is not being sold as efficiently as it should be. Lastly, then, business owners may use technology to surveil those particular employees who are working during the times when business operations are not as efficient as they should be. In doing so, business owners could learn what these employees are doing “wrong.”
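To see how little machinery the sales-to-traffic flagging just described requires, consider a deliberately toy sketch. All names, fields, and thresholds here are hypothetical illustrations of the mechanism, not a description of any actual retail system:

```python
from dataclasses import dataclass

@dataclass
class Shift:
    employee: str          # who was on the floor (hypothetical field)
    customer_traffic: int  # customers counted entering during the shift
    sales_count: int       # transactions completed during the shift

# Hypothetical benchmark: the conversion rate (sales per customer) the owner
# "reasonably expects" for this season and time of day.
EXPECTED_CONVERSION = 0.30
TOLERANCE = 0.80  # flag anyone below 80% of the expected rate

def flag_for_surveillance(shifts: list[Shift]) -> list[str]:
    """Return employees whose sales-to-traffic ratio falls short of the
    benchmark: the candidates for 'profit protecting' surveillance."""
    flagged = []
    for s in shifts:
        if s.customer_traffic == 0:
            continue  # no traffic, nothing to infer from this shift
        conversion = s.sales_count / s.customer_traffic
        if conversion < EXPECTED_CONVERSION * TOLERANCE:
            flagged.append(s.employee)
    return flagged

shifts = [
    Shift("A", customer_traffic=100, sales_count=33),  # 0.33: not flagged
    Shift("B", customer_traffic=120, sales_count=21),  # 0.175 < 0.24: flagged
]
print(flag_for_surveillance(shifts))  # ['B']
```

Note what the sketch does not contain: any measure of why a shift underperformed. It is precisely this kind of thin, purely quantitative signal that then licenses turning the cameras on a particular employee.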

Notice, if such surveillance is framed as a “teaching opportunity,” then an employer could construe the whole surveillance operation as benevolent and caring, without even needing to mention “profit protection.” However, to whatever extent there would be a calculation involved to justify the use of management time to surveil such employees, then the notion of “profit protection” could be easily revealed as operable, despite denial on the part of the business. In either case, notice how the surveillance of such employees seems to justify such “micro-managing” as questioning sales techniques, and such a technologically-mediated relation to the employee would extend all the way to monitoring what employees say and how they say it. After all, even an employee’s relation to customers, if understood in terms of cybernetics[2] (cf. Scalambrino, 2014 & 2015b) may be quantified in terms of variables which correlate with successful sales. Thus, a business owner may be seen protecting profit by micro-managing the facial expressions, tone of voice, and suggestions made by their employees.

On the one hand, if all this is beginning to sound as if technologically-mediated business may make employee management and relations into a kind of video game (such as, for example, “the Sims”), then you are following the argument of this article.[3] On the other hand, there are three points to keep in mind. First, it would be too cumbersome to conduct such management and relations to employees, as if they were Sims, without technological mediation. Second, notice how framing the micro-management associated with such surveillance in terms of “profit protection” makes the enterprise sound like good (cybernetic) science and a wise business investment. Third, we will consider the question: How does such surveillance and micro-managing affect employees and relate to the constitution of their employee-identity? As we will see, whereas the second point may be rightfully characterized in terms of the efficiency of an employee in regard to the performance of assigned tasks, the third, which we will characterize in terms of the “dignity of the person” who is the employee, is not a simple question to answer. Moreover, as we shall see, the efficiency made possible by technological mediation seems to have tipped the balance in favor of efficiency over dignity.

The Conflict Between Efficiency and Dignity

There are a number of ways to articulate the conflict between efficiency[4] and dignity, and in doing so a distinction may be made between the rationale and the value[5] of such micro-managing and surveillance of employees through technological mediation. Privileging efficiency, it may be argued that the feelings and self-identity of an employee need not be included in the concerns of a reasonable business owner. In this way, it may be said that business owners need not include concerns for employee feelings and self-identity in their rationale for implementing various surveillance and management practices. Yet, insofar as employee feelings and self-identity have value which can be correlated with profit, then it becomes an issue of efficiency to control these variables as much as possible. That is to say, a cost/benefit analysis may be called for in which the impact of such variables on profit could be determined.

Considering that profit is necessary to sustain a business, a cost/benefit analysis of the appropriate relation to employee dignity can be quite complicated. For the purposes of this article, consider the following possibilities. The value of privileging dignity may run directly counter to “profit protection.” That is to say, venturing into the dimension of surveilling employees to promote various dignity-related psychological features may seem counter-intuitive, not only because a certain amount of disgruntlement may be constitutionally the norm for some individuals but also because it may be difficult to control the cost of sustaining such a workplace environment. Further, it is not immediately clear that surveilling, micro-managing, and subsequently firing an employee for an inability to sustain a profit margin is necessarily contrary to that employee’s dignity. Whereas it may be more consistent with “profit protection” to screen potential employees for job aptitude, rather than hire individuals and subsequently surveil them for aptitude, to determine for an individual that they are not good at performing a task may be seen as providing helpful guidance consistent with respecting their dignity.

The “helpful guidance” framing of firing an employee is reminiscent of the “teaching opportunity” framing of surveillance and micro-management. In other words, though it may seem intuitively beneficial for an employer to appear to its employees as concerned with employee dignity in its various rationales for investing in surveillance and micro-managing, again it seems concern for profit would be the ultimate determining factor in whether the costs associated with maintaining such an appearance to its employees constitutes a good investment for the business. Moreover, on the one hand, it could be construed as a kind of alternative compensation, so business owners could justify keeping larger amounts of profit, e.g. “At our workplace managers will work with you to ensure you love your job.” On the other hand, establishing a workplace in which it is a requirement of employment that employees appear happy at all times may be considered unreasonably oppressive.

Hence, it seems even if a business were to remain neutral in expressing its rationale regarding dignity, there may be a spectrum along which businesses cannot help but be placed regarding how they value employee dignity. At the end of the spectrum privileging efficiency would be automatons, produced by cost/benefit analyses and established through an investment in future profit; at the end privileging dignity would be autonomous persons, perhaps involved in a “profit-sharing” business.

Autonomy and Self-Awareness: The Scope of Simulation

There are three distinctions, now classic in the history of Western philosophy, which will help articulate the conflict between efficiency and dignity. These distinctions come from Immanuel Kant’s (1724-1804) ethics. The three distinctions are: the “three natural pre-dispositions to the good,” the “principle of ends” (the second formulation of Kant’s famous Categorical Imperative), and the difference between “a person of good morals” and “a morally good person.”[6]

Building on Aristotle’s divisions of the soul, Kant distinguishes between the “animal,” “human,” and “personal” dimensions. Each of these dimensions has a corresponding type of “self-love,” which individuals use to determine self-worth. At the level of animality, self-love is “mechanical” and determined by physical pleasure. Individuals centered on this level determine the value of their existence by how much physical pleasure they experience in life. At the level of humanity, self-love is “comparative.” This is due to the fact that rationality cannot help but determine ratios. Individuals centered on this level determine the value of their existence by comparing aspects of their lives to the lives of others.

Finally, at the level of personality, according to Kant, the “predisposition to personality is the capacity for respect for the moral law as in itself a sufficient incentive of the will.” (Kant, 1960: 34). Thus fully actualized individuals determine their self-worth as “a rational and at the same time an accountable being” (Ibid), and the difference most relevant for our discussion is the sense in which a person has self-respect beyond the natural human tendency to compare oneself with others. In other words, though someone has more money or better possessions than you (cf. Epictetus, 1998: §6), you may value yourself in terms of your disciplined harmony with right living. Insofar as “right living” is meaningful, then its truth and reality precedes an individual’s acceptance of it. That is to say, it is true that touching the hot stovetop will hurt you, prior to your touching it and independent of your beliefs regarding it.

Hence, there are two conclusions to be drawn here. First, “dignity of the person” is meaningful, whether the self-respect associated with it is actualized by individuals or not. Second, “dignity” refers to the self-actualization which corresponds (as we will see more completely in a moment) with the highest natural capacity for living in humans. That is to say, individuals who have not actualized the personal dimension, and thereby self-respect, are individuals who are not living the most excellent life available to humans.

Two brief references to other philosophers may be helpful here for clarification. In regard to the second point, Friedrich Nietzsche’s (1844-1900) statement, “the seal of liberty” is “no longer being ashamed in front of yourself” (1974: 220) need not be understood as a philosophy of “anything goes,” but rather may be understood as indicating liberation from a life of self-shaming in regard to a comparison with the rest of humanity. Further, the first point, above, invokes a classic passage in Plato’s Republic where Socrates notes that rulers (i.e. employers and bosses) “in the precise sense” are people who “care for others” (Plato, 1997: 340d). This is, of course, juxtaposed with the definition of justice offered by Thrasymachus, namely, that “Rulers make laws to their own advantage.” (Ibid: 338c).

The next distinction from Kant is his “principle of ends.” This is the second formulation of his famous “Categorical Imperative,” and it suggests you should act in such a way “that you use humanity, whether in your own person or in the person of another, always at the same time as an end, never merely as a means.” (Kant, 2002: 38). On the one hand, notice how this suggests we should not use others as a means to determine our own self-worth.  On the other hand, it also points to the dignity of persons as ends in themselves. That is to say, the principle of ends suggests a person should not use others in such a way that it is merely for utility. As we will see, for Kant this goes beyond J.S. Mill’s “principle of liberty”[7] in that to treat another person—even a consenting person—merely as a means, and thereby not as a self-respecting person, may be construed as a kind of harm to their person insofar as their ability to self-actualize their personhood is conditioned by their capacity for self-respect.

The final distinction from Kant, then, is the one between “a person of good morals” and “a morally good person” (cf. Scalambrino, 2016c). What is fascinating about this distinction is that it is not in terms of the actual action that the different types of individuals perform. Both persons may perform the same action; however, the latter type of person is motivated in terms of the self-respect of personhood, and the former is motivated in terms of a different pre-disposition to goodness. Notice that because all of the pre-dispositions are “to the good,” it is not in terms of the goodness of the action that its performance should be evaluated. Rather, it is the motivation that determines which performance of the action is better. This will be important for the thesis of this article, as there is no attempt being made to suggest that profit is “not good.”

To synthesize these distinctions from Kant, notice he believes the “morally good person” is freer and is existentially-situated better than the “person of good morals.” Further, he thinks the “morally good person” is living a more excellent life than the “person of good morals,” and all of this is despite the fact that both individuals may be performing the same actions. How is this the case?

Because the three pre-dispositions to the good constitute a hierarchy, in order for an individual to actualize the highest capacity, i.e. for personhood, the existentially-prior capacities must first be actualized.[8] This means “personhood” is a higher excellence than mere “humanity,” and personhood is existentially-situated in a better way, therefore, since the person has a wider horizon of evaluation available to it than in terms of mere humanity. For example, even if someone merely at the level of humanity were hoping for the best means to manipulate others, having a wider horizon of evaluation would provide a wider range of potential justifications, i.e. this may be seen in the attempt to suggest that profit-driven surveillance is somehow for the benefit of the surveilled—when the motivation determining the performance of the action is clearly “profit protection.”

In order to understand how the “morally good person” also lives the better life, a brief reference to Aristotle’s Nicomachean Ethics may be helpful. As Aristotle goes through the various types of life in his search to discover the best life for humans, he notes, “The life of money-making is one undertaken under compulsion, and wealth is evidently not the good we are seeking; for it is merely useful and for the sake of something else.” (2009: 1096a5). The idea here is that to ask regarding the natural purpose of human life is to ask what human life is in itself, i.e. as an end for itself and not as a means to be expended for something else. This points directly to the synthesis of Kant’s distinctions as justifying how the “morally good person” lives the better, i.e. the most excellent, life available to humans, in that the natural presence and hierarchical order of the dispositions suggests that life was made to fully actualize itself.[9] To be fully actualized means to actualize the highest pre-disposition, the predisposition in which life treats itself as an end in itself, whether in its own person or in that of another, and thereby constitutes the dignity of personhood through its self-respect.[10]

Lastly, notice how the above explication of Kant’s ethics regarding the dignity of personhood may be characterized in terms of “self-awareness” and “autonomy.” The individual who has actualized the capacity for personhood may relate to itself in terms of a greater number of dimensions than the “person of good morals,” who is not performing actions with the full[11] actualization of their self. In this way, the “morally good person,” in expressing the self-respect associated with the dignity of personhood, is more self-aware. Were this a matter of content, then age would determine the greatest amount of self-awareness; however, it is a matter of capacity, not content. In a similar way, Kant characterizes the autonomy of an individual, not in terms of content but rather, in terms of relation (cf. Scalambrino, 2016b).

Thus, it is the “autonomy” of the fully actualized person which makes them freer. According to Kant, the “principle of autonomy” is “The principle of every human will as a will giving universal law through all its maxims [i.e. its code of conduct].” (Kant, 2002: 40). Notice, because both the “person of good morals” and the “morally good person” perform the same action, it may be said that they are following the same “law.” However, it is not the following of the law but the relation to the law when following it that differentiates these two types of individuals. In other words, because the “morally good person” understands its self-worth in terms of its accountability to the Natural Moral Law, it is motivated in terms of self-respect exemplary of the dignity of personhood. In this way, this type of person is freely choosing to follow the law. Because other types of individuals have motivations other than the accountability determining personal dignity, their decisions to follow the law are compelled by other motivations. The motivation to follow the law for its own sake is not a motive additional to, but rather the very motive made possible through, the actualization of personhood.

Efficiency and Dignity

In what way does the above section illustrate “the limits of simulation,” and how do the limits of simulation relate to the conflict between efficiency and dignity? Again, it is, of course, technological mediation that conditions the whole problem under discussion. In other words, it is the amount and depth of surveillance made possible today by technological mediation which has allowed for the shift from “loss prevention” to “profit protection.”

On the one hand, the above section helps illustrate that though loss prevention and profit protection may be good, the surveillance of employees for the sake of these goods is founded upon a relation in terms of “humanity,” at best, and not “persons.” In other words, it seems neither to treat employees with dignity nor to provide an environment which may help them fully actualize self-respect as an employee. Like “persons of good morals” in Kant, employees under surveillance may perform the right action, the same action that an employee with dignity and self-respect may perform; however, also like “persons of good morals,” employees under surveillance may lack the best motivation to perform their work “duties.”

On the other hand, it is autonomy and self-awareness that limit the scope of possible simulation. What this ultimately means is that if the goal is efficiency, then approaching it through technological mediation, as if to make employees simulations of the desires and knowledge of their employers, may only lead to short-term, capped gains in efficiency. In other words, it seems consistent with the above Kantian discussion of self-actualization to note that employees who respect themselves as persons who do the kind of work they are employed to do should make for the best employees. That is, long-term efficiency seems predicated upon autonomous employees who are self-aware for their own sake. Simulation is ultimately limited by the lack of autonomy and self-awareness associated with employees motivated at Kant’s level of “humanity”; even when performing the correct actions, they do so like “persons of good morals,” not “morally good persons.”

For those who advocate efficiency, even at the cost of dignity, the above discussion suggests promoting dignity might be a better way to promote efficiency. One, it is inefficient to “micro-manage” employees. Two, even with the use of cybernetics and technological mediation to help indicate where such “micro-management” may increase efficiency, such practices may work against efficiency to the extent that they undermine employee dignity. As the above discussion suggests, employee dignity indicates more self-actualization, i.e. a freer and better existentially-situated employee. It may be true that if an employee will not submit to conditions of technological mediation, then a replacement who will may be easy to find. However, the ease with which individuals with less self-respect and dignity, or under more compelling conditions, may be found neither resolves the conflict between efficiency and dignity nor ensures efficiency.

Excursus: Control & Inauthenticity: Simulation, “Legacy Protection,” and Despair

Some readers of our edited volume Social Epistemology & Technology: Toward Public Self-Awareness Regarding Technological Mediation have recognized, at least, an analogy between society and families in regard to the control for which technological mediation allows. Though we cannot work out every detail here, we can provide a sufficient sketch of the analogy to, if nothing else, provoke deeper thinking and self-awareness regarding the potential effects of technological mediation. In general, this question relates to the chapters located in the second half of Social Epistemology & Technology, and specifically in regard to my chapter “The Vanishing Subject: Becoming Who You Cybernetically Are.” Of particular interest regarding this topic may be the section of that chapter titled “Pro-Techno-Creation: Stepford Children of a Brave New Society (?),” though if read in isolation from the rest of the chapter, that section may seem obscure. Since my second article in this SERRC Special Issue will be devoted to discussing the theme to which the second part of Social Epistemology & Technology was devoted, i.e. the theme of “changing conceptions of humans and humanity,” we will not engage such a discussion in this excursus (cf. Scalambrino, 2015b & 2015c).

In regard to the analogy, “profit protection” is to the use of technological mediation in business as “legacy protection” is to the use of technological mediation in the family. The basic idea is that just as technological mediation may be used to control employee actions, so it may be used to constitute select attributes of a child (e.g. IVF, PGD, CRISPR-Cas9, etc.) and to promote and sustain a select identity for the child. The motivation may be characterized as “legacy protection,” since the ends afforded by technological mediation constitute a kind of investment made by parents. In this way, the dynamics of the problem we uncovered above concerning employees, employer desires, and technological mediation manifest analogously in regard to the family. That is to say, the question of the employee’s existential-freedom becomes the question of the child’s existential-freedom, and the dilemma regarding whether to risk losing profit to allow for the individual’s autonomy and increased self-awareness becomes the dilemma regarding whether to risk losing one’s legacy and “investment” in one’s children.

Given the large cost associated with what amounts to genetically engineering one’s children, it is clear that parents have some goal(s) in mind when selecting various attributes for a child (cf. Marcel, 1962). Whether or not this initial investment is made, some see it as the technologically-mediated equivalent of mate selection; however, notice that, equivalent or not, the level of control increases significantly through technological mediation. Beyond the birth of the child, then, there is the question of how to sustain the initial investment made—whether through mate selection or genetic engineering—to ensure “legacy protection.” The idea here is that whatever goal(s) parents have in mind when selecting, perhaps as best they can, various attributes for a child, those goals point to the legacy the parents are attempting to protect.

As the technological mediation of a child’s life increases, so too does the potential to surveil and control the child. Since the idea of increasing surveillance should be obvious (e.g. checking to see what websites they view, what they text to friends, GPS data showing where they go, and so on), we will focus only on the control piece here. Control is understood here in the sense of limiting the full self-actualization associated with personhood above and discussed through the philosophy of Immanuel Kant. That is to say, if you are able to limit an individual’s self-actualization to the level of “humanity,” then they will continually constitute their identity through comparison with others. Just as I indicated in my second chapter of Social Epistemology & Technology, the way to “lock down” such self-awareness is by “misunderstanding nothing.” What this means is that if you can provide an individual with a worldview that seems to provide an account for everything in terms of that individual’s comparative self-worth to others, then you control that individual’s ability to interpret their own existence.

When this can be anchored through a talent in which the individual excels, then the comparative model may be all the more effective, since the individual sees themselves as “winning” or a “winner” based on an identity which takes itself as able to account for whatever happens in life. The problem, Kant would say, is that the individual is not fully autonomous. The “law” given to them is not of their own choosing. There are a number of ways to use technological mediation to control individuals, and thereby to ensure “legacy protection.” A discussion of inauthenticity and memes would be appropriate here, since it becomes possible to understand the whole enterprise of “legacy protection” as founded upon the comparative understanding. Thus, the agency more commonly attributed to the parental desire to ensure legacy protection may instead be attributed to the transmission of the comparative worldview itself from generation to generation—like the transmission of thought memes—in that the parent evidently operates with the same worldview which, once successfully engineered into the child, should likewise promote that child’s desire to pass on the same worldview valuing “legacy protection” to their own children, and so on.

In this way, cybernetic theories of human existence function as a kind of support for holding individuals at the human level, in which self-worth is determined through comparison and self-awareness and autonomy are thereby diminished. The phrase “cybernetic theories of human existence” refers precisely to any theory of existence according to which all of existence can be explained. The sense in which such “epistemic closure” misunderstands nothing suggests to the individuals who inhabit it that it is a worldview that can provide them with the truth in regard to everything (cf. Scalambrino, 2012). “Existentialists” resist such systematization because it treats life like “a problem to be solved,” rather than (as Kierkegaard phrased it) “a mystery to be lived.” It is worth noting that Kierkegaard characterized such an inauthentic relation to life as “despair” (cf. Scalambrino, 2016b).

Some of the memes that are easy to notice are phrases such as “a gap year.” When an individual looks at the time of existence as though it is merely fulfilling a pre-established form, like a “cookie cutter,” then we should ask: How did that form get there? Notice how the perfect example here would be to invoke the self-understanding of individuals in “third world” locations, and ask what a “gap year” is for them. The idea is not that “gap year” has no reference. Rather, the idea is that individuals who truly believe that their lives are, and should be, following a pre-established pattern are individuals who are neither fully autonomous nor fully self-aware (cf. Marcuse, 1991). Of course, proponents of “legacy protection” may suggest that insofar as the individual in question is not from a “third world” location, then understanding the time of one’s existence in terms of “gap years, etc.” is a privilege to be coveted. Why is it a privilege to be coveted? Perhaps because such a self-understanding is more efficient for the individual to live (and pass on) the privileged existence which is their legacy.

Beyond any technological mediation used to genetically engineer a child, technological mediation helps hold individuals at the human level, in which self-worth is determined through comparison, by helping to sustain an identity, however explicit it may be to the individual, anchored in a cybernetic worldview. Technological mediation does this in all the ways philosophers have been saying it does since at least Plato’s discussion of the technē of “writing” and its effects on human self-understanding. Yet, more to the point, when Heidegger and Jünger discuss the “form” in which humans understand themselves as “standing reserve” or as “workers,” we can see the insidious influence of technological mediation as twofold. First, the efficiency allowed for by technology becomes an expectation. For example, the expectation is common today that we should have all our email accounts consolidated in an app on a smartphone, so that we can receive emails with a level of efficiency as if they were all text messages. Second, the idea that you may have some self-understanding other than legacy “protector” or germ-line “curator” comes to seem really just the folly of an inefficient employee or the noise of a malfunction in a cybernetic human machine.

References

Aristotle. Nicomachean Ethics. Translated by Roger Crisp. Oxford: Oxford University Press, 2009.

Ashby, William Ross. An Introduction to Cybernetics. London: Filiquarian Legacy Publishing, 2012.

Ellul, Jacques. The Technological Society. Translated by J. Wilkinson. New York: Vintage Books, 1964.

Epictetus. Encheiridion. Translated by Wallace I. Matson. In Classics of Philosophy, Vol I, edited by L. P. Pojman. Oxford: Oxford University Press, 1988.

Fuller, Steve. “The Place of Value in a World of Information: Prolegomena to Any Marx 2.0.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 15-26. London: Rowman & Littlefield International, 2015.

Heidegger, Martin. Being and Time. Translated by John Macquarrie and Edward Robinson. New York: Harper & Row, 1962.

Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David F. Krell, 307-343. New York: Harper Perennial, 2008.

Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. London: The MIT Press. 2008.

Jünger, Ernst. “Technology as the Mobilization of the World Through the Gestalt of the Worker.” Translated by J. M. Vincent, revised by R. J. Kundell. In Philosophy and Technology: Readings in the Philosophical Problems of Technology, edited by Carl Mitcham and Robert Mackey, 269-89. New York: The Free Press, 1963/1983.

Kant, Immanuel. Groundwork of the Metaphysics of Morals. Translated by Mary J. Gregor and Jens Timmermann. Cambridge: Cambridge University Press, 2002.

Kant, Immanuel. Religion Within the Limits of Reason Alone. Translated by T.M. Greene and H.H. Hudson. New York: Harper & Row, 1960.

Lyotard, Jean-François. The Postmodern Condition: A Report on Knowledge. Translated by Brian Massumi. Minneapolis, MN: University of Minnesota Press, 1984.

Marcel, Gabriel. “The Sacred in the Technological Age.” Theology Today 19: 27-38, 1962.

Marcuse, Herbert. One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. Boston: Beacon Press, 1991.

Marx, Karl. “The Power of Money.” In Economic and Philosophic Manuscripts of 1844. Translated by M. Milligan, 136-141. New York: Dover Publications, 2007.

Nietzsche, Friedrich. The Gay Science. Translated by Walter Kaufmann. New York: Vintage Books, 1974.

Plato. Republic. Translated by G. M. A. Grube, revised by C. D. C. Reeve. In Plato: Complete Works, edited by John M. Cooper. Indianapolis, IN: Hackett Publishing, 1997.

Rorty, Richard. Philosophy and the Mirror of Nature. Princeton, NJ: Princeton University Press, 1979.

Scalambrino, Frank. Full Throttle Heart: Nietzsche, Beyond Either/Or. New Philadelphia, OH: The Eleusinian Press, 2015a.

Scalambrino, Frank. Introduction to Ethics: A Primer for the Western Tradition. Dubuque, IA: Kendall Hunt Publishing Company, 2016a.

Scalambrino, Frank. “The Shadow of the Sickness Unto Death.” In Breaking Bad and Philosophy, edited by Kevin S. Decker, David R. Koepsell and Robert Arp, 47-62. New York: Palgrave, 2016b.

Scalambrino, Frank. “Social Media and the Cybernetic Mediation of Interpersonal Relations.” In Philosophy of Technology: A Reader, edited by Frank Scalambrino, 123-133. San Diego, CA: Cognella, 2014.

Scalambrino, Frank. “Tales of the Mighty Tautologists?” Social Epistemology Review and Reply Collective 2, no. 1 (2012): 83-97.

Scalambrino, Frank. “Toward Fluid Epistemic Agency: Differentiating the Terms ‘Being,’ ‘Subject,’ ‘Agent,’ ‘Person,’ and ‘Self’.” In Social Epistemology and Epistemic Agency, edited by Patrick Reider, 127-144. London: Rowman & Littlefield International, 2016c.

Scalambrino, Frank. “The Vanishing Subject: Becoming Who You Cybernetically Are.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 197-206. London: Rowman & Littlefield International, 2015b.

Scalambrino, Frank. “What Control? Life at the Limits of Power Expression.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 101-111. London: Rowman & Littlefield International, 2015c.

Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1965.

[1] From Economic and Philosophical Manuscripts of 1844, translated by M. Milligan (1964).

[2] Cybernetics may be understood as a kind of science of life. For our purposes, it refers to a relation to life such that events in life are understood as capable of being fully quantified and subjected to calculations which would render the eventual outcomes predictable. Thus, proponents of such a relation to life tend to hold that the only limitation on the total cybernetic revelation of life is processing power in regard to the requisite quantification and calculation. Its continued relevance for conversations regarding technology and freedom is that if cybernetics is correct, then human freedom is a kind of illusion which results from the inability to calculate (what cybernetics considers to be) the fully deterministic nature of events. In short, according to cybernetics, it would be as if life were a machine with completely calculable motions (cf. Ashby, 2012; cf. Johnston, 2008; cf. Heidegger, 2008; cf. Wiener, 1965).

[3] For those unaware of the “Sims” reference: The Sims is a video game series in which players “simulate life” by controlling various features of automatons and surveilling their activity. The series was developed by EA Maxis and published by Electronic Arts.

[4] For a discussion of “efficiency” as indicative of the “Postmodern Condition,” see Lyotard, 1984.

[5] Cf. Fuller, 2015.

[6] I present the distinctions in this way for the sake of brevity and clarity; however, it should not escape Kant scholars that these three distinctions in essence represent a movement along Kant’s three different formulations of the Categorical Imperative, respectively, i.e. the principle of the law of nature, the principle of ends, and the principle of autonomy.

[7] Mill’s “Liberty Principle” suggests you are at liberty to act as you please so long as you are not harming others, i.e. so long as others consent to the treatment to which your actions subject them.

[8] Before even considering other reasons to justify this claim, notice the word “rational” in Kant’s articulation of the pre-disposition to personality.

[9] In Nietzsche’s language it is “to overcome itself.”

[10] This is, of course, why Kant thinks we naturally have a “duty” to be excellent.

[11] Cf. Scalambrino, 2015a.

Author Information: Rebecca Lowery, University of Texas at Dallas, rsl160530@utdallas.edu

Lowery, Rebecca. “Our Filtered Lives: The Tension Between Citizenship and Instru-mentality.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 21-34.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3rf


Image credit: Daniela Munoz-Santos, via flickr

The central problem to be examined here is that the loss of the private self is a threat to the theory of citizenship, which rests upon the idea that a citizen is a person with both a private life and a public life, a distinction inherent in many traditional theories of citizenship. Without the restoration of a potent private sphere in the individual life, citizenship becomes thin and shallow, an unnecessary and antiquated theory, useful only as a convenient tool for organizing the masses.

The private life of the individual in today’s society is now intricately linked with technology. Thus it is impossible to explore the loss of the private self without also looking at the role of technology in the life of the citizen, specifically the sense in which a citizen’s relation to their own existence is technologically-mediated. To such an end, I will have recourse to Martin Heidegger as a thinker who explicates how technology transforms our relation with existence, or to use his term “being.”

Technology gives us an opportunity to relate to the environment, others, and ourselves differently. Rather than experience being as present to us, we have the opportunity for a mediated experience with being because of the power of technology. In itself, technology is a tool; it is a means to an end, not an end in itself, a mediator between person and reality. By allowing technology to mediate our experiences, we are succumbing to what Heidegger will call “ge-stell,”[1] or enframing, with the result that we see everything as instrumental (a sunset is no longer a sunset, but something to be captured by technology in the form of a photograph for the sake of posting). Today, relating to the world instru-mentally is more pervasive and more difficult to resist because of social media, a new phenomenon particular to postmodernity.

In order to see how technology influences citizenship, I am dependent on Hannah Arendt’s characterization of the social, private, public and political realms. One consequence of social media is that the sharing of one’s private life (be it sentiments, activities, or opinions) is acceptable and expected in the public sphere; indeed, it seems that more and more, the public sphere is constituted by private stories. Further, because technology operates through enframing, both the private and public spheres have become spaces of utility. This stands in contrast to how Arendt characterizes these spheres and to how Heidegger juxtaposes enframing with the more primordial poiesis as a mode of relation. It would seem, then, that the private sphere is receding into the public. To regain a thriving theory of citizenship, one in which participation as a citizen is an honor both for the state and for the private self, means a move away from the functionalism that encapsulates us today.

And finally, the enframing that results in the loss of the public and private boundary is harmful not only to the theory of citizenship, but to our own beings as well. If we can return to a state of relating to the revealing of nature as non-mediated, and privilege the poetic over the technological, then our own beings will return to a more natural state that nurtures and values the private life. If such a change is made, then the new, substantial private life will be prepared to contribute to an equally substantial public sphere.

Instru-mentality, Technological Mediation and Enframing

Heidegger, in “The Question Concerning Technology,” provides the philosophical underpinnings that illuminate the core of the postmodern problem with regards to technology.[2] With some of his contributions in hand, the links between the technology of social media, the public and private spheres, and citizenship become clearer.

According to Heidegger, as humans, our natural and primary way of relating to being is through poiesis, which is a “bringing into appearance” or a “bringing-forth”[3] of the essence of a thing, where essence is understood in terms of presencing. For example, an oak tree brings forth its essence, gives its presence to us by revealing itself to us as what it is. Yet, another way to relate to beings is through technology, and when beings are revealed to us through technological mediation, they are revealed in terms of instru-mentality. When beings are revealed in terms of instru-mentality, they are revealed as instruments to be used for some end. When beings are revealed in terms of presencing their essence, they are revealed as ends in themselves, that is, not for some other—instrumental—purpose.

The concept that Heidegger introduces in order to relate modern technology with being is that of “ge-stell,” perhaps best translated as “enframing.” Just as poiesis is a revealing of essence, so too is technology a way of revealing. Heidegger suggests that technology reveals beings as “standing reserve,” meaning that “everywhere, everything is ordered to stand by, to be immediately at hand, indeed to stand there just so that it may be on call for a further ordering.”[4] Consider the oak tree. What are the thoughts that cross the mind in the presence of the tree? If the thought is something along the lines of “I could use that tree to build a table” or “I should take a picture of that and publicly share it on social media so that my friends will know, for sure, that I appreciate nature,” then the tree is being experienced as standing reserve. Rather than appreciating the tree for itself, it is subjected to order based on how useful it is, and what it can be used for.

For Heidegger, enframing “challenges [man] forth, to reveal the real, in the mode of ordering, as standing reserve.”[5] Enframing itself is a summons, and the summons expresses itself every time we relate to reality with technology in mind. That which “challenges forth” is the summons, but it is not an external factor. Rather, the summons is purely internal, somewhat similar in experience to what we refer to as the “call of conscience,” an instinct, a desire, or a need to behave and act in a certain way. Just as an instinct is first present in the thought, and then brought either to action or non-action, so too does enframing involve two steps: the summons is heard to relate to an experience through the medium of technology; the response is either to act on the summons, or to turn away from the summons.

In fact, to act on the summons does not require the physical apparatus of technology. For example, my experience of my life becomes enframed when I think, “I am going to update my Facebook status.” When I hear the internal summons to share my current situation or disposition via social media it shows that I already have a relation to my existence as if it were standing reserve. Becoming habituated to the summons is like exchanging my own mentality for the instru-mentality of technological mediation. I relate to my existence as if it were something to be “posted,” and as a means to whatever ends may come from such “posting.” In other words, as my existence is revealed to me, being filtered through technological mediation, social media orders my understanding to see itself in terms of instru-mentality (cf. Scalambrino, 2015). In this way, acting on the summons completes the presencing of existence in terms of enframing and, done repeatedly, becomes habitual.

One way social media, such as Facebook, establishes control (that is, orders our existence into standing reserve) is through the “inter-face” mechanisms users must learn in order to successfully navigate the technology (cf. Scalambrino, 2014). A subtle example of this enframing is the media’s attractive terminology. Status updates are an opportunity for “sharing” with friends and family. Under the guise of human relations, Facebook becomes the mediator. Enframing is always present whenever the technological tools of social media are present. If I am enjoying the company of friends, and yet I have my iPhone always at hand, ready to take pictures, respond to the people not really there (cf. Engelland, 2015), etc., then I have opened myself up to answer the summons immediately, and to express the summons—to allow technological mediation to order me—by actually taking the picture, and actually texting someone back. The idea of a social situation that is not mediated by technology is a rare find now. Even if I am physically present with a friend, my phone is still mediating my experience. In fact, the pervasiveness of social media and smartphones coincides with enframing becoming usual and customary in regard to social interactions.

The habit of living life ordered through technological mediation, and therefore as standing reserve, is what we are up against. With the habituality and the instru-mentality sustained through technological mediation, especially social media, the issue of standing reserve appears even more pressing. Enframing does not mean that a person looks at life as though through a picture frame fit with technological lenses. Rather, enframing is a summons that calls to us, internally, to relate to the world and to each other in terms of standing reserve, or, as instruments, that is, objects to be used in some fashion (cf. Scalambrino, 2015). When the relation of presencing (or “essencing”) occurs through enframing rather than poiesis, then we relate to the being of another person in terms of their utility. Social media allows us to create a representation of ourselves for others to encounter.

Through a cyber dimension, we become distanced from others, but the great guise is that we think we are becoming closer to them. In our private lives, we live with a fear of always imposing on others because we are so used to the non-imposition that is associated with media communication (cf. de Mul, 2015). We sacrifice presence for absence in that we are merely present in terms of instru-mentality, when our relations are technologically-mediated.

Enframed identities compete for us to be them, like Ernst Jünger’s insights regarding the identity of “worker” or the Hollywood-fashioned identity of “celebrity,” and as if possessed by the efficiency of instru-mentality, we work to be our own paparazzi. Of course, there are a multitude of examples that can be drawn from social media that illustrate just how easy it is to live a filtered life, where relation to being becomes mediated. Postmodern technology looks like the publication of the private; it is “the manipulation of man by his own planning.”[6] Consider, for example, the invitation to “personalize your LinkedIn profile page” with an image that describes you, your interests, etc. Physically present personalities are readily substituted for the chance to control which aspects of your personality you want projected for others to see.

Yet Facebook is perhaps the prime example of people giving the public updates on their private lives, updates which can then be liked, shared, and commented on. The original meanings of words such as “sharing” and “liking” receive a second definition based on the instru-mentality of social media. In other cases, people submit themselves to technology, and thus lose themselves. The technology is too powerful. One of the popular hashtags in social media is #besomebody. The idea behind the trend is that you are told you are being somebody for yourself, but really it is directed outward, trying to tell others that you are somebody. Thus, you allow your own being to be hijacked into pure judgment, the judgment of others, and their enframed judgment at that.

The role of enframing has been present for as long as technology has been present. Today, social media is one particular instance of technology, but it is one that makes enframing more and more difficult to escape because we are constantly and physically around the tools that make social media possible: the computer at home and at work, the phone in the pocket or hand, the tablet always within reach. One of the reasons social media deserves consideration is because for the first time in history technology is in the hands of the everyman. We no longer just have the technology of big machines. Now it is big machines in addition to the technology of the masses, the technology of social media.

Gianni Vattimo, in “Postmodernity, Technology, Ontology,” comments on how “Heidegger … remained stuck in a vision of technology dominated by the image of the motor and of mechanical energy.”[7] Nevertheless, though Heidegger wrote about technology in his own historical situation and relates enframing to modern technology (machines powered by motors directed at the control of nature), his ideas are still highly relevant in today’s culture (and perhaps more so than ever before, considering that technology permeates all sectors of society).

Thus, there is also a historical motivation behind this paper. To fully appreciate the state we are in today, it is helpful to look at how technology, in our postmodern condition, is one of the reasons why the issues here deserve (perhaps urgent) consideration. The historical evaluation will not be a lengthy one: it is not necessary to trace technology beyond the historical transition from modernity to postmodernity to gain an understanding of why and how technology today has become a (seemingly) essential part of everyday life, and a factor of everyday-ness that is not without consequences.

While Heidegger’s account of how technology alters our relationship with being can be traced back to the origin of technology, in more recent history the shift from modernity to postmodernity provides an explanation for how and why the concept of enframing deserves particular attention today, in our postmodern world. Richard Merelman’s article “Technological Cultures and the Liberal Democracy in the United States”[8] highlights the shift from modern technology to postmodern technology in order to suggest a reason for the change in how citizens view American government and liberal democracy. His distinction between the directions of technology (which serves as the groundwork for his entire essay) is important here, because it reinforces the urgency of the social media and enframing issue.

Merelman points to the modern era, when technology was directed outwards, towards the control of nature. The entire culture of technology during that era was translucent; the average citizen was able to understand how technology operated.

In the modern era, as Merelman writes, “the self acted, technology responded, and nature yielded to the civilized control of society.”[9] Thus Bacon was justified and Descartes was fulfilled. In his New Organon, Bacon’s third axiom reads, “Human knowledge and human power come to the same thing, because ignorance of cause frustrates effect. For Nature is conquered only by obedience; and that which in thought is a cause, is like a rule in practice.”[10] Bacon was the first to introduce the idea of controlling nature, and thus he introduced this era of modernity. Extending this project, Descartes succinctly writes in his Discourse on Method that we must “render ourselves, as it were, masters and possessors of nature. This is desirable not only for the invention of an infinity of devices that would enable one to enjoy trouble-free the fruits of the earth and all the goods found there, but also principally for the maintenance of health …”[11] As Descartes points out, such mastery of nature is made possible by physics. The important point about modern technology is that it was directed outwards.

Furthermore, because the technology was directed outwards, the effects, as Merelman writes, were immediately observable and calculable. We do not see the same possibility for calculation in postmodern technology, because enframing is an internal summons. What is internal to the person is much more complicated than the control of nature. The results of enframing are much more subtle, less clear, less comprehensible, and ultimately less scientific.

Modern technology lasted through World War II, and indeed it continues today. Much of our technology is still meant to master nature. However, its prominence has receded. The transition to postmodernism began in post-World War II American culture, and was in full force by the 1960s. Why did modernism end? Perhaps our control of nature, as Merelman suggests, went too far. Why else would the rise of environmentalism occur simultaneously with the shift to postmodernism? We controlled too much of nature, and we drew back. This is one interpretation. But perhaps it is more likely that environmentalism is also the control of nature; it is just cleverly disguised. By focusing less attention on the control of nature, it became possible for technology to be redirected towards the human person. The technology is still external to us, but its effects are now seen in the workings of the person, not just in nature. Soon, we may realize that this too must be reined in. The other cause of the transition to postmodern technology is more natural and obvious: technology and science strive on. Man is not content with the domination of nature; he must also dominate the two extremes sandwiching our earth: the solar system on one hand and the human person on the other.

Thus, with the transition to postmodern technology, the emphasis of invention became directed at the human person, rather than nature. New technologies geared towards human development and health allowed the former focus on nature to be redirected. Now, in the postmodern condition, one of the main purposes of technology is to understand the self. In some ways this has been successful; research regarding the human genome and mental illness are two examples that aid in understanding the self (though in no way is this meant to suggest that human persons can be reduced to their mental faculties and their inherited genetic traits). But what does technological enframing look like today? We will see that rather than aiding in understanding the self, we are compromising and sacrificing the self. This is done under the great guise of technology. Postmodern technology promises self-fulfillment, life improvement, self-betterment … but it is, for the most part, a deceit, and the repercussions extend into many areas of life, including that of citizenship.

Though I am focused on the so-called communication technology of social media as representative of postmodern technology, I do not think it can be separated from the technology directed towards understanding man’s biology, in other words, medical technology. All of these divisions still fall within the technology of information; it is merely expressed differently in specific areas. For example, medical technology allows for the illusion of facial reconstruction; communication technology allows for the illusion of the media persona, a not-there identity, entirely fabricated (not only by the fabricator, but also by others who can say what they want about others within this technology). It is interesting that, with regards to medical technology, Descartes was in a way foreshadowing the evolution of postmodernism when he speaks of the “maintenance of health” as one of the benefits of mastering nature.

So far, we have seen that technology, as a source of revealing, reveals being to us as standing reserve. Also examined was the historical perspective: the transition from modernity to postmodernity, culminating in the social media that permeates our world today, brings the concept of enframing to the forefront due to the extreme accessibility and habitual use of social media. Now, with the previous progress in mind, we will begin to turn our attention to the effects of enframing in the realm of citizenship, which will necessarily mean the effects on our own beings as well. To the extent that enframing is a part of our everyday life, I will argue that enframing is contributing greatly to the loss of the sense of the private self, without which the theory of citizenship cannot remain meaningful to the citizen.

From Enframing to the Efficiency of Postmodern Technology

For Arendt, society, and thus the social realm, is where “private interests assume public significance”[12] which takes the form “of mutual dependence for the sake of life and nothing else…and where the activities connected with sheer survival are permitted to appear in public.”[13] What is necessary for survival? Eating, shelter, and the education of the young become some of the constituents of the social realm. It seems that social media should not be called social media. There is nothing about social media that makes it necessary for survival.

The private on the other hand is a “sphere of intimacy”[14] where the happenings of the private life need not extend into the social realm. It is closed off from the eyes of others, except those personally involved in the sphere. Furthermore, it ought to revolve around real presencing. However, Arendt points out that in the modern era “modern privacy in its most relevant function, to shelter the intimate, was discovered as the opposite not of the political sphere but of the social, to which it is therefore more closely and authentically related.”[15]

For Arendt, it is clear that the private sphere is closely linked with the social (and not the public) sphere. Does this then mean that the social and the private have nothing to do with citizenship since they are thus severed from the political realm? By no means. We shall see that Arendt is drawing a chain that connects the social sphere with the public sphere. For Arendt, the public and private do not co-exist snugly side-by-side. Rather, the social realm falls between them and knits them together, while at the same time allowing the two spheres to remain distinct. Some private issues (such as education) appear in the social realm, and then the social realm contributes to the public sphere.

Arendt has a specific definition of the private sphere. Shiraz Dossa summarizes Arendt’s conception of the private as such: “that privacy is the natural condition of men is a truism for Arendt: the needs and wants of the human body and the natural functions to which the body is subject are inherently private.”[16] Further, Arendt contrasts the category of the private with that of the public. The public realm is fascinating because it can be either social or political.[17] Traditionally, the public was aligned with the political. However, the larger the community, the more social the public will be. We are therefore losing our sense of the political and the private to the social and the public.

Arendt constitutes the public realm in two ways. The first is “that everything that appears in public can be seen and heard by everybody and has the widest possible publicity.”[18] However, Arendt’s public was not infiltrated with social media as it is today; thus our public realm has become a filtered reality. In another sense, for Arendt the public “signifies the world itself, in so far as it is common to all of us and distinguished from our privately owned place in it.”[19] How awesome is it that we have private ownership in this world! And it is equally awesome that there is a public sphere that balances the private. However, it is not necessary for social media to publicize that the world is common to all; the commonness should be enough in itself and has no need to be enframed.

The other point that Arendt is making is that the public realm is receding. During her time, the state of the public realm was no longer permanent. The permanency of the public sphere is highly important in Arendt’s philosophy because it means that what we create today is not only for our generation; the public today ought to take the future into consideration as well since “It is the publicity of the public realm which can absorb and make shine through the centuries whatever men may want to save from the natural ruin of time.”[20] The idea is to live in a world, and to create a world, that is strong enough to withstand time.

To overcome time suggests a worthiness of the pursuits engaged in creating something in the public sphere, because then the works escape the condemnation of mortal decay. They participate in and gain access to an eternal realm (though an eternal realm still confined in the physical world). Perhaps Arendt is right: how much of our public world will withstand time? But in another sense, the opposite is happening: all is falling into the public. The private is being subsumed under the public, and the public now has its identity as social, and not political. If all that is left is the public sphere, then without the opposition of another sphere there can be no loss of the public: its permanency is parallel to a dictator, ruling with no contestants. Rather than being like a dictator, the public should retain a healthy tension with the private sphere, each of the two acting as a balance for the other.

Presented above are Arendt’s definitions of the social, private, public and political realms, and how each relates to the others. The most significant for present purposes is the distinction between the public and the private. It is clear that Arendt elevates the public realm, whereas I elevate the private realm. She speaks of rising from the private to the public. But I would not say the move from public to private is an ascent. I would rather say that the two are related horizontally, rather than vertically.

From Postmodern Technology to Boundary Blurring Between the Public and the Private

The enframing that occurs with social media is mediating our relation to real presences and thus necessarily it is directly affecting our private and public lives. When our private lives bleed into the public sphere via social media, the public sphere itself becomes a mirror image of mediated personalities. For Arendt, the public sphere means “something that is being seen and heard by others as well as by ourselves.” Granted, social media is seen and heard via technological devices, however the relation to what appears via technology is once removed from reality: it is a copy, and it is also an illusion.

As social media makes a stronger and more permanent presence in the world, the private realm becomes less and less significant because what used to be strictly present in the private realm can now easily be projected into the public realm. While social media exacerbates enframing, the issue at hand is nothing new. Arendt notes how in modernity “functionalization makes it impossible to perceive any serious gulf between the two realms.”[21] Thus it is function, enframing, and usefulness that blur the boundary between the public and private.

In addition to Arendt, Vattimo argues “what concerns us in the postmodern age is a transformation of (the notion of) Being as such—and technology, properly conceived, is the key to that transformation.”[22] Indeed, our notion of being is transformed, or at least filtered, by technology because of enframing. Vattimo characterizes enframing as “the totality of the modern rationalization of the world on the basis of science and technology.”[23] Thus, it is impossible to conceive of being as extending beyond enframing. As we have already seen, the rationalization that Vattimo speaks of is the utilitarian nature of enframing, an aspect that coincides with the pragmatism originating in the 20th century.

The very utility that is necessarily attached to pragmatism continues to presence itself today through enframing, made easy by social media. Vattimo clearly states: “I don’t believe that Pragmatist and Neopragmatist arguments are strong enough to support a choice for democracy, nonviolence, and tolerance.” Therefore, he supports an ontological rather than a pragmatic point of view, which, as a philosophical position, prefers “a democratic, tolerant, liberal society to an authoritarian and totalitarian one.”[24] To have a life not dominated by the enframing of technology is more conducive to democratic ideals. While the private and public spheres are necessary in any political system, democracy is our own current situation, which adds a definite relevance to the experience of enframing as opposed to other ways of relating to reality.

Before moving on to discussing how the lack of a boundary between the public and private influences the individual life of the citizen, there is a final point to be made about the republic, one that speaks to the very lifeblood of citizenship as a theory. Wilson Carey McWilliams, drawing on Tocqueville, states “freedom is not the mastery of persons and things; it is being what we are, subject to truth’s authority. No teaching is more necessary if the technological republic is to rediscover its soul.”[25] What we are sure to lose in our current trajectory is the soul of our nation. In an illusionary manner, social media is about mastery and the sense of feeling like we are in control. It is the delusion that we can control a relationship in a text message. It is becoming evident that time is a huge factor with social media: how quickly can an image go viral? How quick is the response to messages?

As we can control this factor of time while participating in social media, we allow ourselves to fall prey to the illusion of power. In social media, there is no subjection of the self, there is only self-proclamation. When the citizens of our republic have no soul, the soul of the republic suffers. The soul of the republic is only as great as the people who make up the republic. Nietzsche, drawing on Aristotle, asks if greatness of soul is possible.[26] If it is, social media is not helping in the nurturing of greatness, since a soul that relates to being as not exceeding standing reserve loses all sense of mystery. When the souls of a nation are suffering, infected with a continually enframed view of being, then the very soul of the nation suffers as well, as its lifeblood is slowly shut off.

Some encourage the publication of the private as a signal of the advancement of mankind in the social realm. If the social realm were the highest, then such would be the case. But there are reasons why I hold the private to be of great significance: people begin their role as citizens in the private realm. Remedying this problem is necessary if we are to remain citizens, if citizenship itself is going to survive. All can be traced back to what is going on in the private realm. It determines our identities, which we then carry into the public realm.

A healthy citizen is a citizen who is able to distinguish the private from the public, and retain a balance between the two. To lose this is to lose the capacity to be a citizen, and thus we face the collapse of the theory of citizenship. This theory only has existence in so far as we as individuals uphold it through our own existences as public and private beings. Thus, as we continue to sacrifice our private selves, we are slowly chipping away at the theory of citizenship. Arendt approaches the same problem, but subordinates the private to the public. For her, a well-lived public sphere trickles down to the private sphere and improves it. Her ordering is necessary if the public sphere is where man truly fulfills his nature (the guiding principle of Civic Republicanism). The conclusion is the same for both of us: an identity as a citizen that involves both the public and the private spheres. We merely diverge on the privileging of spheres.

Furthermore, the boundary between the public and private self is a condition for citizenship in that a strong identity of the private self serves as preparation for a well-constituted public sphere. The enframing by technology today that is weakening the boundary between the private and the public thus has implications for the theory of citizenship. If a citizen lacks a foundation in their private life, then that citizen may as well be a foreigner to the system of citizenship in which they are attempting to participate. Just as a foreigner will lack the disposition to give credibility and care to a style of citizenship that is either not their own or that they have no intention of participating in, so too will the citizen who attempts to participate in the public sphere while lacking a hidden and private life. Since the public sphere is made of citizens, the only way to have a thriving citizenship is for citizens to have a strong sense of personal identity with the state in which they reside. The personal identity is established in the private sphere, where the soul learns to relate to reality, and then brings itself to help constitute the reality of the public. A citizen with no private life is like an apple with no core: it is all façade, with nothing substantial to contribute to permanency and foundation.

Finally, the private realm ought to remain unpublicized for the sake of retaining a unified self, and for the sake of self-reverence and mystery. Once publicized, reverence and mystery become obsolete. Paul A. Cantor and Cardinal Ratzinger offer ideas on what it means for the human person to exist without reverence and without mystery, two aspects of the human race that technology helps make disappear. When we then lose our sense of private identity we are losing a part of ourselves. Though we are incomplete beings, we accentuate and magnify our incompleteness through technology. It is entirely voluntary, and entirely unnecessary.

Paul A. Cantor writes: “when man chooses to revere nothing higher than himself, he will indeed find it difficult to control the power of his own technology.”[27] Social media is followed with an attentive reverence, but since social media is a platform for the self, reverencing social media is essentially reverencing one’s media self, and nothing higher. When the media acts as such a vice grip, it is difficult to remember to revere anything else. Reverence does not have to pertain to religion or belief systems. It can mean to honor the internal difference of the human person, out of humility recognizing that no representation ever captures the greatness of man. Why would we choose to honor media personas that strive so hard for coherence over the contemplation of actuality?

The reverence that Cantor is talking about is similar to that of Cardinal Ratzinger who, in his Introduction to Christianity, asks,

But if man, in his origin and at his very roots, is only an object to himself, if he is ‘produced’ and comes off the production line with selected features and accessories, what on earth is man then supposed to think of man? How should he act toward him? What will be man’s attitude toward man when he can no longer find anything of the divine mystery in the other, but only his own know-how?[28]

Our publication of the private dehumanizes us, reduces us, and secludes us. I argue that it is not part of the fabric of reality. We see in the face of the other, not their inherent mystery, but a shell of their opinions. Our participation further reduces the mystery that we hold within ourselves. If we are to truly have a public sphere that lasts more than a generation, then a “production line” creation is far too weak and fallible, since it is so easily changed and manipulated to match the going trends and styles of the day. The weakness of such a system is then compounded when it applies not just to the manufacturing of things, but to the manufacturing of people as well. Not only is the result a loss of beauty in the creation of the public sphere, but man is also demoted to robotic-like expectations, devoid of all “divine mystery.”

Ratzinger’s characterization of the manufactured person, and its implications, is the same as Heidegger’s exploration of standing reserve, since standing reserve fully embraces utility and leaves no room for mystery. As previously illustrated, enframing harms the private life and destroys hiddenness. Thus, the experience of reality (including the human person) as standing reserve that occurs through enframing is detrimental to the mystery of the person. Though the mystery of the person is explored in the public sphere, it finds its root and primary expression in the private sphere. But what is the point of divinity, or eternity, when there is no birth of such things in the private sphere, and no sustenance for them in the public sphere?

A Public, Shallow Life

Arendt provides a succinct summation of the problem: “A life spent entirely in public, in the presence of others, becomes, as we would say, shallow.”[29] Our life is constituted by physical presences, both in the public and the private spheres. However, added to the real flesh of the physical world is the prominence of media presences (which are immaterial) that allow the individual to have a constant presence in the public sphere. When media presences become the main way in which we relate our lives to the world around us, then we are looking at a great private loss. Along with the loss of the private self comes the loss of a profound and real theory of citizenship. Thus, if the overarching idea to be preserved is citizenship, then we must search for a way to preserve the hidden life, the private life. It is possible that such a reversal will change our embodiment in the fibers of apathy that currently constitute the general perception of citizenship.

If enframing occurs because we respond to the summons that results in standing reserve, then a change in perception, an internal change, will radically derail enframing. An internal change towards external reality means escaping from enframing and (perhaps) returning to what Heidegger calls a more “primordial” relation to the world, one that was possible prior to the power of technology that allowed for enframing in the first place. Ideally, it means seeking the inherent value present in the world, rather than living by standing reserve alone. It means returning to reverence, to soul, and to mystery, as opposed to total revealing in utility and a life that does not extend beyond what is manufactured and functional. Though utility cannot (and need not) be totally eradicated, utility also need not be privileged above other paths of relation.

Once enframing is held in check, the private realm will not sink so quickly into the public, and the two realms will once again become distinct. The internal opposition to enframing will halt the constant filtration of reality, and thus allow for a wellspring of endurance, a new revealing of truth not based in usefulness, and a return to the hiddenness of the private sphere. The re-established privacy then re-draws the boundary between the public and the private, such that a newly well-established private sphere provides for a stronger sense of self, a better preparation for entering the public sphere. The strength of a self thus unhindered in the public sphere infuses the soul of citizenship, and thereby saves citizenship.

References

Arendt, Hannah. The Human Condition. 2nd ed. Chicago: The University of Chicago Press, 1998.

Bacon, Francis. The New Organon. Edited by Lisa Jardine and Michael Silverthorne. Cambridge: Cambridge University Press, 2000.

Bambach, Charles. “Heidegger on The Question Concerning Technology and Gelassenheit.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 115-127. London: Rowman & Littlefield International, 2015.

Brenner, Leslie. “Goodbye, avatar.” Dallas Morning News, October 30, 2014.

Cantor, Paul A. “Romanticism and Technology: Satanic Verses and Satanic Mills.” In Technology in the Western Political Tradition, edited by Arthur M. Melzer, Jerry Weinberger, and M. Richard Zinman. Ithaca, New York: Cornell University Press, 1993.

de Mul, Elize. “Existential Privacy and the Technological Situation of Boundary Regulation.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 69-79. London: Rowman & Littlefield International, 2015.

Descartes, Rene. Discourse on Method. 3rd ed. Translated by Donald A. Cress. Indianapolis: Hackett Publishing Company, 1998.

Dossa, Shiraz. The Public Realm and the Public Self: The Political Theory of Hannah Arendt. Waterloo: Wilfrid Laurier University Press, 1989.

Eliot, T.S. “Burnt Norton.” In The Complete Poems and Plays, 117-22. New York: Harcourt, Brace & World, 1971.

Engelland, Chad. “Absent to Those Present: The Conflict between Connectivity and Communion.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 167-177. London: Rowman & Littlefield International, 2015.

Heidegger, Martin. The Question Concerning Technology and Other Essays. Translated by William Lovitt. New York: Harper & Row, 1977.

McWilliams, Wilson Carey. “Science and Freedom: America as the Technological Republic.” In Technology in the Western Political Tradition, edited by Arthur M. Melzer, Jerry Weinberger, and M. Richard Zinman. Ithaca, New York: Cornell University Press, 1993.

Merelman, Richard M. “Technological Cultures and Liberal Democracy in the United States.” Science, Technology, & Human Values 25, no. 2 (Spring 2000): 167-94.

Ratzinger, Joseph Cardinal. Introduction to Christianity. Translated by J.R. Foster and Michael J. Miller. San Francisco: Ignatius Press, 2000.

Scalambrino, Frank. “Social Media and the Cybernetic Mediation of Interpersonal Relations.” In Philosophy of Technology: A Reader, edited by Frank Scalambrino, 123-133. San Diego, CA: Cognella, 2014.

Scalambrino, Frank. “What Control? Life at the Limits of Power Expression.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 101-111. London: Rowman & Littlefield International, 2015.

Vattimo, Gianni. “Postmodernity, Technology, Ontology.” In Technology in the Western Political Tradition, edited by Arthur M. Melzer, Jerry Weinberger, and M. Richard Zinman, 214-228. Ithaca, New York: Cornell University Press, 1993.

[1] Heidegger, The Question Concerning Technology and Other Essays, 19.

[2] Cf. Bambach, “Heidegger on The Question Concerning Technology and Gelassenheit.”

[3] Heidegger, The Question Concerning Technology and Other Essays, 10.

[4] Ibid., 17.

[5] Ibid., 20.

[6] Ratzinger, Introduction to Christianity, 66.

[7] Vattimo, “Postmodernity, Technology, Ontology,” 223.

[8] Merelman, “Technological Cultures and Liberal Democracy in the United States.”

[9] Ibid., 168.

[10] Bacon, The New Organon, 33.

[11] Descartes, Discourse on Method, 35.

[12] Arendt, The Human Condition, 35.

[13] Ibid., 46.

[14] Ibid., 38.

[15] Ibid., 38.

[16] Dossa, The Public Realm and the Public Self: The Political Theory of Hannah Arendt, 59.

[17] Arendt, The Human Condition, 43.

[18] Ibid., 50.

[19] Ibid.

[20] Arendt, The Human Condition, 55.

[21] Ibid., 33.

[22] Vattimo, “Postmodernity, Technology, Ontology,” 214.

[23] Ibid., 222.

[24] Ibid., 226.

[25] McWilliams, “Science and Freedom: America as the Technological Republic,” 108.

[26] Nietzsche, Beyond Good and Evil, 139.

[27] Cantor, “Romanticism and Technology: Satanic Verses and Satanic Mills,” 127.

[28] Ratzinger, Introduction to Christianity, 18.

[29] Arendt, The Human Condition, 71.

Author Information: Zachary Willcutt, Boston College, willcuttz@bc.edu

Willcutt, Zachary. “The Enframing of the Self as a Problem: Heidegger and Marcel on Modern Technology’s Relation to the Person.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 11-20.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3qj

Image credit: A Health Blog, via flickr

Discourse today often includes phrases such as “my neurons made me do it,” or “my brain does this or that.” Popular opinion increasingly maintains that the mind is identical to the brain. That is, social consciousness views the person as nothing more than a collection of chemicals and cells, resulting in the phenomenon, or perhaps the epiphenomenon, of consciousness, which has nothing incorporeal or interior about it. It is, following the pattern of the things in the world, supposed to be another physical thing. The contemporary collective consciousness knows that the human being is just another wholly material object, subjected to the same laws of causal determination as plants, atoms, and stars.

Following Heidegger, I will show that such social knowledge is the product of the present scientific and technological understanding of the self, which subsumes consciousness, thought, emotions, passions, and choices as objects of empirical, scientific study, and which uses various instruments to purportedly show that the person is her brain, converting the self into what Marcel calls a problem. This conflicts with the traditional perspective that the human is an immaterial soul. To defend the latter position, this article will deconstruct the claims of modern neuroscience so as to prevent the de-humanization of individuals that now occurs as the result of these claims.[1]

Enframing the Person as Brain

This understanding of the human person is consequent upon modern science, in particular neuroscience and psychology, which depend wholly on modern technology. There is thus a mutual relation between technology and science, leading to a process of the enframing of the person as the brain. Martin Heidegger in The Question Concerning Technology sets forth the particulars of this process. He notes that there is a social awareness that modern technology “is based on modern physics as an exact science” (QCT, 14). In general, individuals are aware that their computers, cars, electricity, and other modern items depend on scientific activity. Technology requires as its condition the development of scientific knowledge, in particular physics, without which microwaves and electricity would not be possible. However, just as technology depends on science, science depends on technology, since “modern physics, as experimental, is dependent upon technical apparatus and upon progress in the building of apparatus” (QCT, 14). The work of scientists in general is rooted in technology, which provides cyclotrons, electron tunneling microscopes, and spaceships that further scientific cognition.

Thus, the relation between science and technology is reciprocal; neither can exist without the other. Such reciprocity is becoming more clearly understood (QCT, 14). Modern technology and modern science mandate one another, one aiding the other, while each stands on the ground set forth by the other. In the context of mind-brain identity debates, this reciprocity appears in the public awareness, carried by social cognition, that the mind and person are nothing more than the brain, a view commonly held to be scientific knowledge and one dependent on modern technology such as brain scans. Without technology, contemporary neuroscience and cognitive science could never have developed. These fields require technology, which serves as their condition and has thus led to the furthering of mind-brain identity theories as social cognition.

The public predominance of such theories answers to what Heidegger calls Gestell, usually translated into English as enframing: “the challenging claim which gathers man thither to order the self-revealing as standing-reserve” (QCT, 19). Standing-reserve occurs when “[e]verywhere everything is ordered to stand by, to be immediately at hand, indeed to stand there just so that it may be on call for a further ordering” (QCT, 17). The things of the world, and humans subsumed as part of the world, are arranged solely with respect to their use insofar as they may be employed for continued utilization. Entities and, more importantly, persons are reduced to their mere possibility of being used for some end. That persons have become standing-reserve is demonstrated by “the current talk about human resources” (QCT, 18). Individuals are integrated as parts into the whole of the technological systems dominating life, where individuals have value only insofar as they can be incorporated into the technological whole. Thus, enframing indicates the gathering and ordering of persons and things so that they are revealed as available for use.

In this context, Kisiel interprets enframing as ‘synthetic compositioning,’ indicating “artificiality to the system of positions and posits” (Kisiel 2014, 138). This translation for Gestell fully encompasses the meaning that Heidegger is trying to indicate, that the world and persons are brought together to be used for further instrumentality. That is, Gestell signifies the functionalization of persons and things into the disposability of the standing-reserve, which is ordering for the end of more ordering, with no end beyond that of such ordering. All that is, is reduced to its functionality. For the person to be synthetically composed as an instrument, he must be understood in terms of his instrumentality, his submission and application to technology, for he has become “a commodity to be stored, shipped, handled, delivered, and disposed of” (Bambach 2015, 10). In this state, humans “become the functionaries of technological positioning, we put ourselves in position to be stockpiled and surveyed” (Ibid.). Technological-functionalization de-humanizes the person, whose individuality disappears within the system.

In this way, the person is a mere component of a machine, a machine that in the framework of mind-brain identity debates turns the self into a brain. Humans are synthetically composited as brains, destroying their uniqueness as persons, as they have become only material. The self is eclipsed by the impersonality of matter (cf. Scalambrino, 2015). Having no characteristics particular to a person, the brain belongs to no one, and vanishes into the nothingness of pure matter. For every brain is equally exchangeable with any other brain. As has been established, technology requires modern science; thus, humans become objects of science in social consciousness, that they too might be ordered according to the orders of the ordering. This ordering, itself in its essence technological, necessitates that the person is considered as nothing more than his brain. For if he were not just a brain, he would have some aspect that escaped instrumentality; having been reduced to the order of instrumentality, he must therefore be thought of only in his physicality. Human beings are problematized as objects of natural scientific study, the socially common view among many scientists and much of western society today.

However, the claim that the mind is identical to, or emerges from, the brain should be confronted with great scrutiny, despite the apparent scientific support for this conclusion. For historically, most philosophers, religions, and cultures have maintained a soul or spirit, rather than mere matter, as the ground of personality. That such a view was so widely held dictates that it be considered seriously.

On Reducing the Human Person

Augustine remarks in the narrative of Confessions that his mother Monica brought him “to birth, both in her flesh, so that [he] was born into this temporal light, and in her heart, so that [he] might be born into eternal light” (C 9.8.17). Here, Augustine distinguishes between the flesh and the heart, exteriority and interiority, that is, matter and spirit, respectively. For Monica gave birth to him in body, but also, by her prayers for his soul, gave birth to him in her heart as well. Her heart is in no way physical, being contrasted with the flesh; it is spiritual, which indicates it is not composed of anything material. The affective center of the human person, the seat of the emotions, belongs to the soul, indicating that the fundamental being of the self cannot be located in the material order.

During the Medieval period, Bonaventure writes, with respect to the journey toward God, that “[w]e must also enter into our soul, which is God’s image, everlasting, spiritual, and within us” (Journey of the Soul into God, 60). The soul, the self, is clearly considered spiritual and interior, preventing it from being observed as if it were a material object. Humans are spiritual rather than physical beings. To be spiritual means that one thinks, desires, loves, cares, intends, and feels emotions and affects such as anger or joy.

Though he does not explicitly enter debates on the nature of the person in the Critique of Pure Reason, Kant makes clear that the self is not physical. For “although all our cognition commences with experience, yet it does not on that account all arise from experience” (CPR B1). Experience is merely the stimulus for knowledge, rather than its ground; Kant observes that aspects of cognition do not find their source in the experience of the empirical world. Thus, there must be a transcendental, a priori root of knowledge, which indicates that the person is not restricted to a mere body.

Now turning to the Mystery of Being, Marcel argues that modernity has reduced the human person to a problem as opposed to a mystery. A problem is that which “I find complete before me, but which I can therefore lay siege to and reduce … A genuine problem is subject to an appropriate technique by the exercise of which it is defined” (MB, 211). Problems are objective, and can be answered by a definite, adequate formula that will yield the requisite result. The human mind is capable of grasping problems as a whole, so that all aspects become visible, enabling the problem to be analyzed into its components. This is the process of the natural sciences. But when what is not genuinely a problem is considered as such, the result is a broken world. The latter consists in the reduction of personal identity to a “few sheets of an official dossier,” which is how “I am forced to grasp myself” (MB, 29).

Persons are compelled to understand themselves as mere instruments in the system set forth by the utilization of technology, where technocrats use science to justify their policies. As such, the human must be reduced to the brain, for if he had a mind, he would not be wholly subservient to the synthetic compositioning. To subject a person to technology mandates that he consider himself nothing more than a collection of neurons. Social consciousness leads individuals to submit to the control of those who produce scientific knowledge that furthers the ordering of society through technology under the reign of science. However, “there is within the human…something that protests against the sort of rape or violation of which he is the victim; and this torn, protesting state of the human creature is enough to justify us in asserting that the world in which we live is a broken world” (MB, 33).

The realm of technology destroys love, emotion, and care. The person is losing himself to his functionalization, in which he is a functionalized self that operates according to the deterministic laws of science; questions about his being are to be answered by examining him as if he were merely another object in the world, a tree, planet, or mineral. He dwells, or more accurately fails to dwell, “in a mechanized world, a world deprived of passion” (MB, 24). Through rigorous scientific analysis, all that is valuable in the person is detected and employed. The world of humanity is converted into a set of functionalized selves in a techno-scientific system that has as its purpose only its own furtherance; the world is broken, life is extinguished.

By the interposition of a cybernetic or techno-scientific self-understanding, such as the mind-brain identity thesis, into social consciousness, “the will is re-directed toward a virtual dimension” (Scalambrino 2015, 5). Taken radically, moving beyond the dangers of virtual reality, Scalambrino is pointing to the general threat posed by the technologically-conditioned reduction of the person to the brain. When humans believe that they are nothing more than piles of chemicals, their wills are oriented toward the possibilities appropriate to a pile of chemicals. They live for, and deliberate in terms of, a pile of chemicals, rather than for themselves qua persons. For them, to be is to be a brain, with no meaning or purpose greater than that of a toad, snake, or some other animal with a brain.

However, as daily experience testifies, as persons, individuals have a feeling of their being-beyond-the-world. The person is not his body, requiring a different approach, that of mystery. In this way, Marcel understands the person as mysterious, being that which “transcends every conceivable technique,” and “is itself like a river, which flows into the Eternal, as into a sea” (MB, 211, 219). A mystery is infinite; it is a vast depth that cannot be sounded. There is no method to a mystery; it cannot be represented or known as such, for it exceeds the capacity of the mind to represent it (MB, 69). An individual can only move about, may only live, in the mystery, a reservoir of inexhaustible richness. Unlike the problem, the mystery draws the person out of himself, and he is himself a mystery, as exemplified by characteristics of his own being. Marcel notes that “the act of thought itself must be acknowledged a mystery; for it is the function of thought to show that any objective representation, abstract schema, or symbolical process, is inadequate” (MB, 69). Thus, humans shatter the boundaries of the physical, even in their thinking, and so cannot be reduced to the brain.[2]

The Libet Experiment

Among the most famous experiments that reduce the mind to the brain by interpreting free will out of existence is the Libet experiment, which, as interpreted by Benjamin Libet himself, purports to show that human behavior can be accurately predicted by brain events prior to such behavior actually occurring. Specifically, this test asked persons who were watching a dot moving along a circle to flick their wrists when they “freely wanted to do so” (Libet 2002, 553).[3] After doing this, they reported W, “the clock-time associated with the first awareness of the wish [or urge] to move” (Libet 2002, 553). An increase in readiness potential (RP) began 550 msec before muscle movement. For Libet, “an appearance of conscious will 550 msec…before the act seemed intuitively unlikely” (Libet 2002, 553). Two types of tests were performed on the subjects, and one of these yielded two sets of results. In the first, subjects were asked to move spontaneously, in which case they would at times report a “general intention…to act within the next second or so,” or would have no such planning; in the other, subjects responded to a randomly given stimulus, the timing of which they did not know in advance (Libet 2002, 554).

With respect to when the subject freely acted without planning, there was a buildup of RP, which has been termed RP II, and when the subject acted with prior intention, there also was a buildup of RP, identified as RP I. In the trial with the stimulus, there was no buildup of RP. With prior intention, RP I accumulated 1000 msec before muscle movement, while in the absence of pre-planning, RP II built up 550 msec prior to muscle movement, and 350 msec prior to the wish to act, which itself was 200 msec before the act (Libet 2002, 557). As the result of the buildup of RP, in particular RP II, Libet states that the “volitional process is…initiated unconsciously” (Libet 2002, 551).
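Since the argument that follows turns on these intervals, it helps to lay the reported timings out explicitly. The following minimal sketch in Python restates only the figures just cited from Libet (2002, 557); the labels and the choice of muscle movement as the zero point are mine, and the code is a consistency check of the arithmetic, not a model of the experiment.

```python
# Event times relative to muscle movement (t = 0), in milliseconds,
# restating the figures reported above from Libet (2002, 557).
events_ms = {
    "RP I onset (with prior planning)": -1000,
    "RP II onset (no pre-planning)": -550,
    "W: reported awareness of the wish": -200,
    "muscle movement": 0,
}

# RP II precedes the reported wish by 350 msec, and the wish precedes
# the act by 200 msec, so RP II onset falls 550 msec before movement.
gap_rp2_to_w = events_ms["W: reported awareness of the wish"] - events_ms["RP II onset (no pre-planning)"]
gap_w_to_act = events_ms["muscle movement"] - events_ms["W: reported awareness of the wish"]
assert gap_rp2_to_w == 350 and gap_w_to_act == 200
assert gap_rp2_to_w + gap_w_to_act == -events_ms["RP II onset (no pre-planning)"]

# Print the timeline in temporal order.
for label, t in sorted(events_ms.items(), key=lambda item: item[1]):
    print(f"{t:>6} ms  {label}")
```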

A superior perspective on the Libet experiment instead indicates that the brain is subsequent to the mind, such that mental states precede brain states, and this for several reasons. Firstly, every instance of a build-up of RP in the brain, and of the wrist movement of the body, was correlated in some way with the mental state of the desire to move. Readiness-potential and wrist movement only occurred in relation to the desire to move, indicating an intrinsic relation between conscious willing and physical action, both cerebral and kinesthetic. The build-up of RP was always temporally determined by its relation to the desire to move, so that brain states correspond to mental states. Given that hands can only be moved by the person through commands sent from the brain, hands being corporeal, some sort of modification of the brain would be required to move the hand. That this modification exists is not perplexing, and provides nothing against free will.

RP I, the RP observed with respect to previous intention, showed a significant increase only at the time of initial planning reported by subjects, who were aiming to move around a second before muscle movement. The significant increase in RP occurred at the same time plans were reported to be developing, at 1000 msec, indicating that the muscles were being primed for motion by the intentions of the subject. With respect to RP II, that the increase in RP came 350 msec before the urge to move is not an indicator of the absence of free volition. For a methodological and substantive issue with the Libet experiment is the definition of the conscious urge to move, which carries a variety of significations, especially in the word ‘urge’; Libet also conflated urge with will, or the active wish to move. This suggests that a person is contemplating whether she has an urge to move, a process that could itself lead to a build-up of RP in the brain. She is deliberating whether she has such an urge at this particular instant. For an individual may have an urge to do something, urge understood as the feeling of desire, and yet hesitate to act on that desire. The decision to act on a desire is distinct from the presence of this desire. What the Libet experiment shows most clearly is that humans can feel impulses, upon which they then decide to act. Often, a person eats when his stomach feels empty, an emptiness that can be registered by monitoring the brain. But to say that the person is determined by such emptiness is absurd, as demonstrated by those who are gluttons or go on hunger strikes. Further, the self might not be hungry at all and yet still indulge in food. Such is the significance of the delay between RP II and W.

As the result of his lack of philosophical comprehension, Libet could not distinguish between the wish to move and the urge or impulse to move. If urge is understood as ‘wish,’ the appearance of such wish is arbitrary, a decision of the will; the mind contemplates enacting this will, and wishing to accomplish such a deed. The determination of this wish, only after which one would be conscious of the wish to wish this activity, would of course result in some type of brain activity in order to prepare the body. But this brain activity occurs as the result of the spiritual deliberation requisite for determination of will. Thus, the buildup of RP may either indicate that the person is determining his will with respect to the sensation of physical need or impulse, or merely anticipating the becoming of his wish, expecting that he will soon in the future wish this. That is, in order to move at an instant, the body must be primed, causing a buildup of RP, which on this count is not an argument against free will. For apart from the instant of conscious wish itself, a person is, even without pre-planning, still in a certain sense mentally planning his action. For as one must make the arbitrary choice of suddenly flicking his wrist, an action that he knows he will soon perform, his body is able to respond to the consciousness of the impending deliberate mental choice to flick the wrist by being primed in what is observed as readiness potential.

Analyzing Soon and Libet’s Work

Another scientific experiment, conducted by Soon et al., tested the ability to predict the decisions of a subject before the decisions were consciously made, by having the person press a button with either the left or right index finger upon feeling the urge to do so (Soon et al. 2008, 543). The researchers claimed to have predicted subject choices 10 seconds prior to such choices; however, that the accuracy of these predictions was a mere sixty percent should also lead to hesitation in leaping to the conclusion that this experiment is evidence against free will. Sixty percent is a mere ten percentage points above the fifty percent one would expect from randomly guessing between the two buttons. A random game of probability would provide results not significantly different from those of the Soon experiment. Thus, that the experimenters were successful in sixty percent of cases is only evidence that they are but halfway decent at guessing games.
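How far a sixty percent hit rate stands from chance guessing depends on how many predictions were made. As a minimal illustration, the sketch below computes the exact binomial tail probability of reaching that accuracy by coin-flipping; the trial counts are hypothetical placeholders chosen for exposition, not figures reported by Soon et al.

```python
from math import comb

def tail_probability(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting at least
    k of n binary predictions right by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers of predictions; Soon et al. (2008) are not being
# quoted here -- only the ~60% accuracy cited above is taken from the text.
for n in (20, 100, 500):
    k = round(0.60 * n)  # 60% reported decoding accuracy
    print(f"n={n:>3}: P(at least {k} correct by chance) = {tail_probability(k, n):.4f}")
```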

In nearly half of all instances, Soon was unable to predict physical movement on the basis of the build-up of readiness-potential. In nearly half of all instances, brain states gave no evidence for future movement. In nearly half of all instances, brain states examined at the scientific, technological, empirical level, concerning among the most basic physical functions of the human person, were unable to yield a causal account of behavior. To say that brain states caused the movement, that they caused the mental urge, is wholly unwarranted. Causality is necessary and universal, yet here it is neither: the buildup of readiness-potential is not necessary for the conscious wish to move, nor is it on any interpretation universally present prior to mental states. No causal link whatsoever has been demonstrated by Soon.

Both Libet and Soon have offered a paradigmatic example of the argument from ignorance: because they can see no other potential cause for the actions performed by their subjects, they conclude that the brain is the source of those actions; unknown brain events, they say, are the source of human actions. But they cannot point to these brain events, for none exist that are the causes of action. As they have committed themselves to materialism, they cannot think in terms of a spiritual cause that alters matter. Yet such a spiritual cause, readily experienced as the conscious choice of a mind, is the obvious genesis of action and behavior.

This inability to causally account for even the simplest of motor functions, among the most basic thoughts or commands that a person can issue, implies that more complex choices are impossible to study in this way. Since the command to issue motor controls has a genesis outside of the brain, all other more complex mental activities must similarly find their ground beyond a mere physical organ.

As previously noted, readiness-potential is always related to conscious deliberation, anticipation, and choice; the former is in the brain, while the latter three are mental events. Qua mental events, they are subjective, and never in themselves come under the observation of technological instruments. They are interior, not exterior, contrasting with brain events, which are observable. That brain events follow mental events seems to be shown by the Libet experiment, as no readiness-potential occurs without mental events being reported by the subject. The suggestion, then, is that brain events, such as the build-up of readiness-potential, are causally dependent on mental states, as there is in fact both a necessary and universal connection.

If anything, the Libet experiment indicates, as the result of the difference between mental and brain events, that mental and brain events cannot be equated, which is simply further evidence for the traditional theory that interiority precedes exteriority; the spiritual precedes the corporeal. More evidence for this is available from one of the most well-documented medical occurrences, the placebo effect and its lesser-known twin, the nocebo effect. These in particular show that mental states are in no way reducible to, or causally contingent on, brain states, yet that brain states depend on mental states.

The placebo and nocebo effects “are treatment effects, unrelated to the treatment mechanism, which are induced by patients’ expectations of improvement or worsening respectively” (Bartels et al. 2014, 1). That is, the placebo and nocebo effects are fundamentally cognitive, determined by the expectations of individuals. These expectations are mental, not physical, and wholly subjective; yet, despite their subjectivity and their existence in the mind rather than the brain, they have an established effect on the outcomes of treatments. Thus, mental states have a direct causal influence on the physical world. The mind influences the brain.

For the brain, through which pain is felt, does not know that the person is taking a drug with no active power to reduce an illness, while the mind does know that the individual is using this drug, causing the placebo or nocebo effect as the result of the anticipation of success, or the absence thereof. Were there not a mind independent of the brain, the placebo and nocebo effects could never happen, since intentionality is not characteristic of the brain, but only of the conscious mind. All intentional states are mental, and the mental must therefore be assigned real existence as the result of its causal power. Knowledge and expectation exist in the subject, in the mind alone. The brain does not think, and no brain has ever thought.

On the Status of Mental States

All contemporary neuroscience rests on a fundamental assumption: that mental states do not exist; they are mere figments of the brain, which, qua matter, is reality. All that is, is corporeal matter, and consciousness is an illusion. Thus, to study the person, the scientist should study the physical world. The subjective states of the individual are ultimately nothing, and should not be trusted in determining the scientific view of the self. Modern science, with its emphasis on the empirical and observable, must as a methodological consideration use this assumption, for were it not to do so, it would be compelled to admit that there exists that which is beyond its capacity to know.

Yet this assumption, that all is matter and mental states are an illusion, terminates in a reductio ad absurdum. Beginning with the proposition that mental states are an illusion, let this be applied to mental states regarding external, physical objects. For instance, take the physical object Saturn; Saturn is known qua physical object. It is seen, a process that according to neuroscience occurs by various neurons firing together in the brain. Thus, Saturn only exists on the basis of its existing in the brain, because we only know of Saturn by a seeing that takes place in the brain, which by analogy holds true for all physical objects, including brains. The physical does not need to exist outside of the mind; it could very well be a mere construct of the brain.

The fact that Saturn is seen by multiple persons is irrelevant, for this only means that persons opine that they share the seeing of the same Saturn. The possibility still exists that each individual sees a different Saturn, with humans merely being stimulated with the sensation of an apparently identical object. The physical world is known only through mental states; thus, the physical world is an illusion. If neuroscientists want to say that emotions are not real merely because they occur in the brain, they must likewise say that Saturn is not real, as it too exists only in the brain. All that is, whether mental or physical, becomes an illusion, including the brain itself, as brains are known only by technological observation of them. Brains are known by brains. But Saturn does exist apart from the brain; therefore, mental states also have real existence not reducible to the brain.

All subjective mental states must consequently be given actual reality, and must be considered to have the same level of reality as is had by the corporeal world. This necessitates with the force of law that the brain not be hypothesized as the source of mental activity. Any attempt to reduce mental states to brain states results in the absurdity of the whole of existence, including the spatial, becoming an illusion, for matter only exists as a representation to the conscious subject.

Following from these problems associated with the synthetic compositioning of the self as the brain, the person is not reducible thereto, even by modern technology. Humans should not be taken as objects of technological and scientific study, but rather approached in accord with their own way of being, one that respects their unique status as humans. Man must not be reduced to a material brain by instrumentality, but rather acknowledged as the center of the world of his own first-person subjectivity. The reductionism of neuroscience must be overcome to keep humanity human. Marcel in Creative Fidelity reflects this task of rejecting de-humanization: to “strengthen the fierce resolution of those who reject the consummation by themselves or others of man’s denial of man, or…the denial of the more than human by the less than human” (CF, 10).

References

Augustine. Confessions. Translated by John K. Ryan. New York: Image Classics, 2014.

Bambach, Charles. “Heidegger on The Question Concerning Technology and Gelassenheit.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 115-126. Lanham, MD: Rowman and Littlefield, 2015.

Bartels, Danielle et al. “Role of Conditioning and Verbal Suggestion in Placebo and Nocebo Effects on Itch.” Public Library of Science One 9 (2014): 1-9.

Bonaventure. Bonaventure: The Soul’s Journey into God, The Tree of Life, The Life of Saint Francis. Translated by Ewert Cousins. New York: Paulist Press, 1978.

Heidegger, Martin. The Question Concerning Technology and Other Essays. Translated by William Lovitt. London: Harper Perennial, 2013.

Kant, Immanuel. The Critique of Pure Reason. Translated by Paul Guyer and Allen Wood. Cambridge: Cambridge University Press, 1999.

Kisiel, Theodore. “Heidegger and Our Twenty-first Century Experience of Ge-Stell.” Research Resources Paper 35 (2014): 137-151. http://fordham.bepress.com/phil_research/35

Libet, Benjamin. “Do We Have Free Will?” In The Oxford Handbook of Free Will, edited by Robert Kane, 551-564. Oxford: Oxford University Press, 2002.

Marcel, Gabriel. Creative Fidelity. Translated by Robert Rosthal. New York: Fordham University Press, 2002.

Marcel, Gabriel. The Mystery of Being, Vol. I: Reflection and Mystery. Translated by G.S. Fraser. South Bend: St. Augustine’s Press, 1950.

Scalambrino, Frank. “The Vanishing Subject: Becoming who You Cybernetically Are.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 197-206. Lanham, MD: Rowman and Littlefield, 2015.

Soon, Chun et al. “Unconscious Determinants of Free Decisions in the Human Brain.” Nature Neuroscience 11, no. 5 (2008): 543-545.

[1]. This is neuroscience in the reductionist sense that seeks to state that the mind is an illusion; it is true that there are neuroscientists who reject reductionism, and they are not those against whom this essay is articulated, insofar as they recognize the independence of the mind from the brain.

[2]. This first requires the deconstruction of the functionalized and de-humanized self to restore the mystery about the person.

[3]. Libet, “Do We Have Free Will?”

Author Information: Nick Bostrom, University of Oxford, nick@nickbostrom.com

Bostrom, Nick. “In Defense of Posthuman Dignity.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 1-10.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3py

Reprint from Bostrom, Nick. “In Defence of Posthuman Dignity.” Bioethics 19, no. 3 (2005): 202-214.[1]


Image credit: RHiNO NEAL, via flickr

Abstract

Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.

Transhumanists vs. Bioconservatives

Transhumanism is a loosely defined movement that has developed gradually over the past two decades, and can be viewed as an outgrowth of secular humanism and the Enlightenment. It holds that current human nature is improvable through the use of applied science and other rational methods, which may make it possible to increase human health-span, extend our intellectual and physical capacities, and give us increased control over our own mental states and moods.[2] Technologies of concern include not only current ones, like genetic engineering and information technology, but also anticipated future developments such as fully immersive virtual reality, machine-phase nanotechnology, and artificial intelligence.

Transhumanists promote the view that human enhancement technologies should be made widely available, and that individuals should have broad discretion over which of these technologies to apply to themselves (morphological freedom), and that parents should normally get to decide which reproductive technologies to use when having children (reproductive freedom).[3] Transhumanists believe that, while there are hazards that need to be identified and avoided, human enhancement technologies will offer enormous potential for deeply valuable and humanly beneficial uses. Ultimately, it is possible that such enhancements may make us, or our descendants, “posthuman,” beings who may have indefinite health-spans, much greater intellectual faculties than any current human being—and perhaps entirely new sensibilities or modalities—as well as the ability to control their own emotions. The wisest approach vis-à-vis these prospects, argue transhumanists, is to embrace technological progress, while strongly defending human rights and individual choice, and taking action specifically against concrete threats, such as military or terrorist abuse of bioweapons, and against unwanted environmental or social side-effects.

In opposition to this transhumanist view stands a bioconservative camp that argues against the use of technology to modify human nature. Prominent bioconservative writers include Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben. One of the central concerns of the bioconservatives is that human enhancement technologies might be “dehumanizing.” The worry, which has been variously expressed, is that these technologies might undermine our human dignity or inadvertently erode something that is deeply valuable about being human but that is difficult to put into words or to factor into a cost-benefit analysis. In some cases (e.g. Leon Kass) the unease seems to derive from religious or crypto-religious sentiments whereas for others (e.g. Francis Fukuyama) it stems from secular grounds. The best approach, these bioconservatives argue, is to implement global bans on swathes of promising human enhancement technologies to forestall a slide down a slippery slope towards an ultimately debased posthuman state.

While any brief description necessarily skirts significant nuances that differentiate writers within the two camps, I believe the above characterization nevertheless highlights a principal fault line in one of the great debates of our times: how we should look at the future of humankind and whether we should attempt to use technology to make ourselves “more than human.” This paper will distinguish two common fears about the posthuman and argue that they are partly unfounded and that, to the extent that they correspond to real risks, there are better responses than trying to implement broad bans on technology. I will make some remarks on the concept of dignity, which bioconservatives believe to be imperiled by coming human enhancement technologies, and suggest that we need to recognize that not only humans in their current form, but posthumans too could have dignity.

Two Fears About the Posthuman

The prospect of posthumanity is feared for at least two reasons. One is that the state of being posthuman might in itself be degrading, so that by becoming posthuman we might be harming ourselves. Another is that posthumans might pose a threat to “ordinary” humans. (I shall set aside a third possible reason, that the development of posthumans might offend some supernatural being.)

The most prominent bioethicist to focus on the first fear is Leon Kass:

Most of the given bestowals of nature have their given species-specified natures: they are each and all of a given sort. Cockroaches and humans are equally bestowed but differently natured. To turn a man into a cockroach—as we don’t need Kafka to show us—would be dehumanizing. To try to turn a man into more than a man might be so as well. We need more than generalized appreciation for nature’s gifts. We need a particular regard and respect for the special gift that is our own given nature.[4]

Transhumanists counter that nature’s gifts are sometimes poisoned and should not always be accepted. Cancer, malaria, dementia, aging, starvation, unnecessary suffering, cognitive shortcomings are all among the presents that we wisely refuse. Our own species-specified natures are a rich source of much of the thoroughly unrespectable and unacceptable—susceptibility for disease, murder, rape, genocide, cheating, torture, racism. The horrors of nature in general and of our own nature in particular are so well documented[5] that it is astonishing that somebody as distinguished as Leon Kass should still in this day and age be tempted to rely on the natural as a guide to what is desirable or normatively right. We should be grateful that our ancestors were not swept away by the Kassian sentiment, or we would still be picking lice off each other’s backs. Rather than deferring to the natural order, transhumanists maintain that we can legitimately reform ourselves and our natures in accordance with humane values and personal aspirations.

If one rejects nature as a general criterion of the good, as most thoughtful people nowadays do, one can of course still acknowledge that particular ways of modifying human nature would be debasing. Not all change is progress. Not even all well-intended technological intervention in human nature would be on balance beneficial. Kass goes far beyond these truisms, however, when he declares that utter dehumanization lies in store for us as the inevitable result of our obtaining technical mastery over our own nature:

The final technical conquest of his own nature would almost certainly leave mankind utterly enfeebled. This form of mastery would be identical with utter dehumanization. Read Huxley’s Brave New World, read C. S. Lewis’s Abolition of Man, read Nietzsche’s account of the last man, and then read the newspapers. Homogenization, mediocrity, pacification, drug-induced contentment, debasement of taste, souls without loves and longings—these are the inevitable results of making the essence of human nature the last project of technical mastery. In his moment of triumph, Promethean man will become a contented cow.[6]

The fictional inhabitants of Brave New World, to pick the best-known of Kass’s examples, are admittedly short on dignity (in at least one sense of the word). But the claim that this is the inevitable consequence of our obtaining technological mastery over human nature is exceedingly pessimistic—and unsupported—if understood as a futuristic prediction, and false if construed as a claim about metaphysical necessity.

There are many things wrong with the fictional society that Huxley described. It is static, totalitarian, caste-bound; its culture is a wasteland. The brave new worlders themselves are a dehumanized and undignified lot. Yet posthumans they are not. Their capacities are not super-human but in many respects substantially inferior to our own. Their life expectancy and physique are quite normal, but their intellectual, emotional, moral, and spiritual faculties are stunted. The majority of the brave new worlders have various degrees of engineered mental retardation. And everyone, save the ten world controllers (along with a miscellany of primitives and social outcasts who are confined to fenced preservations or isolated islands), is barred or discouraged from developing individuality, independent thinking, and initiative, and is conditioned not to desire these traits in the first place. Brave New World is not a tale of human enhancement gone amok but a tragedy of technology and social engineering being used to deliberately cripple moral and intellectual capacities—the exact antithesis of the transhumanist proposal.

Transhumanists argue that the best way to avoid a Brave New World is by vigorously defending morphological and reproductive freedoms against any would-be world controllers. History has shown the dangers in letting governments curtail these freedoms. The last century’s government-sponsored coercive eugenics programs, once favored by both the left and the right, have been thoroughly discredited. Because people are likely to differ profoundly in their attitudes towards human enhancement technologies, it is crucial that no one solution be imposed on everyone from above but that individuals get to consult their own consciences as to what is right for themselves and their families. Information, public debate, and education are the appropriate means by which to encourage others to make wise choices, not a global ban on a broad range of potentially beneficial medical and other enhancement options.

The second fear is that there might be an eruption of violence between unaugmented humans and posthumans. George Annas, Lori Andrews, and Rosario Isasi have argued that we should view human cloning and all inheritable genetic modifications as “crimes against humanity” in order to reduce the probability that posthuman species will arise, on grounds that such a species would pose an existential threat to the old human species:

The new species, or “posthuman,” will likely view the old “normal” humans as inferior, even savages, and fit for slavery or slaughter. The normals, on the other hand, may see the posthumans as a threat and if they can, may engage in a preemptive strike by killing the posthumans before they themselves are killed or enslaved by them. It is ultimately this predictable potential for genocide that makes species-altering experiments potential weapons of mass destruction, and makes the unaccountable genetic engineer a potential bioterrorist.[7]

There is no denying that bioterrorism and unaccountable genetic engineers developing increasingly potent weapons of mass destruction pose a serious threat to our civilization. But using the rhetoric of bioterrorism and weapons of mass destruction to cast aspersions on therapeutic uses of biotechnology to improve health, longevity and other human capacities is unhelpful. The issues are quite distinct. Reasonable people can be in favor of strict regulation of bioweapons while promoting beneficial medical uses of genetics and other human enhancement technologies, including inheritable and “species-altering” modifications.

Human society is always at risk of some group deciding to view another group of humans as fit for slavery or slaughter. To counteract such tendencies, modern societies have created laws and institutions, and endowed them with powers of enforcement, that act to prevent groups of citizens from enslaving or slaughtering one another. The efficacy of these institutions does not depend on all citizens having equal capacities. Modern, peaceful societies can have large numbers of people with diminished physical or mental capacities along with many other people who may be exceptionally physically strong or healthy or intellectually talented in various ways. Adding people with technologically enhanced capacities to this already broad distribution of ability would not need to rip society apart or trigger genocide or enslavement.

The assumption that inheritable genetic modifications or other human enhancement technologies would lead to two distinct and separate species should also be questioned. It seems much more likely that there would be a continuum of differently modified or enhanced individuals, which would overlap with the continuum of as-yet unenhanced humans. The scenario in which “the enhanced” form a pact and then attack “the naturals” makes for exciting science fiction but is not necessarily the most plausible outcome. Even today, the segment containing the tallest ninety percent of the population could, in principle, get together and kill or enslave the shorter decile. That this does not happen suggests that a well-organized society can hold together even if it contains many possible coalitions of people sharing some attribute such that, if they ganged up, they would be capable of exterminating the rest.

To note that the extreme case of a war between humans and posthumans is not the most likely scenario is not to say that there are no legitimate social concerns about the steps that may take us closer to posthumanity. Inequity, discrimination, and stigmatization—against, or on behalf of, modified people—could become serious issues. Transhumanists would argue that these (potential) social problems call for social remedies. One example of how contemporary technology can change important aspects of someone’s identity is sex reassignment. The experiences of transsexuals show that Western culture still has work to do in becoming more accepting of diversity. This is a task that we can begin to tackle today by fostering a climate of tolerance and acceptance towards those who are different from ourselves. Painting alarmist pictures of the threat from future technologically modified people, or hurling preemptive condemnations of their necessarily debased nature, is not the best way to go about it.

What about the hypothetical case in which someone intends to create, or turn themselves into, a being of so radically enhanced capacities that a single one or a small group of such individuals would be capable of taking over the planet? This is clearly not a situation that is likely to arise in the imminent future, but one can imagine that, perhaps in a few decades, the prospective creation of superintelligent machines could raise this kind of concern. The would-be creator of a new life form with such surpassing capabilities would have an obligation to ensure that the proposed being is free from psychopathic tendencies and, more generally, that it has humane inclinations. For example, a future artificial intelligence programmer should be required to make a strong case that launching a purportedly human-friendly superintelligence would be safer than the alternative. Again, however, this (currently) science-fiction scenario must be clearly distinguished from our present situation and our more immediate concern with taking effective steps towards incrementally improving human capacities and health-span.

Is Human Dignity Incompatible with Posthuman Dignity?

Human dignity is sometimes invoked as a polemical substitute for clear ideas. This is not to say that there are no important moral issues relating to dignity, but it does mean that there is a need to define what one has in mind when one uses the term. Here, we shall consider two different senses of dignity:

  1. Dignity as moral status, in particular the inalienable right to be treated with a basic level of respect.
  2. Dignity as the quality of being worthy or honorable; worthiness, worth, nobleness, excellence (The Oxford English Dictionary[8]).

On both these definitions, dignity is something that a posthuman could possess. Francis Fukuyama, however, seems to deny this and warns that giving up on the idea that dignity is unique to human beings—defined as those possessing a mysterious essential human quality he calls “Factor X” [9]—would invite disaster:

Denial of the concept of human dignity—that is, of the idea that there is something unique about the human race that entitles every member of the species to a higher moral status than the rest of the natural world—leads us down a very perilous path. We may be compelled ultimately to take this path, but we should do so only with our eyes open. Nietzsche is a much better guide to what lies down that road than the legions of bioethicists and casual academic Darwinians that today are prone to give us moral advice on this subject.[10]

What appears to worry Fukuyama is that introducing new kinds of enhanced person into the world might cause some individuals (perhaps infants, or the mentally handicapped, or unenhanced humans in general) to lose some of the moral status that they currently possess, and that a fundamental precondition of liberal democracy, the principle of equal dignity for all, would be destroyed.

The underlying intuition seems to be that instead of the famed “expanding moral circle,” what we have is more like an oval, whose shape we can change but whose area must remain constant. Thankfully, this purported conservation law of moral recognition lacks empirical support. The set of individuals accorded full moral status by Western societies has actually increased, to include men without property or noble descent, women, and non-white peoples. It would seem feasible to extend this set further to include future posthumans, or, for that matter, some of the higher primates or human-animal chimaeras, should such be created—and to do so without causing any compensating shrinkage in another direction. (The moral status of problematic borderline cases, such as fetuses or late-stage Alzheimer patients, or the brain dead, should perhaps be decided separately from the issue of technologically modified humans or novel artificial life forms.) Our own role in this process need not be that of passive bystanders. We can work to create more inclusive social structures that accord appropriate moral recognition and legal rights to all who need them, be they male or female, black or white, flesh or silicon.

Dignity in the second sense, as referring to a special excellence or moral worthiness, is something that current human beings possess to widely differing degrees. Some excel far more than others do. Some are morally admirable; others are base and vicious. There is no reason for supposing that posthuman beings could not also have dignity in this second sense. They may even be able to attain higher levels of moral and other excellence than any of us humans. The fictional brave new worlders, who were subhuman rather than posthuman, would have scored low on this kind of dignity, and partly for that reason they would be awful role models for us to emulate. But surely we can create more uplifting and appealing visions of what we may aspire to become. There may be some who would transform themselves into degraded posthumans—but then some people today do not live very worthy human lives. This is regrettable, but the fact that some people make bad choices is not generally a sufficient ground for rescinding people’s right to choose. And legitimate countermeasures are available: education, encouragement, persuasion, social and cultural reform. These, not a blanket prohibition of all posthuman ways of being, are the measures to which those bothered by the prospect of debased posthumans should resort. A liberal democracy should normally permit incursions into morphological and reproductive freedoms only in cases where somebody is abusing these freedoms to harm another person.

The principle that parents should have broad discretion to decide on genetic enhancements for their children has been attacked on grounds that this form of reproductive freedom would constitute a kind of parental tyranny that would undermine the child’s dignity and capacity for autonomous choice; for instance, by Hans Jonas:

Technologically mastered nature now again includes man who (up to now) had, in technology, set himself against it as its master… But whose power is this—and over whom or over what? Obviously the power of those living today over those coming after them, who will be the defenseless other side of prior choices made by the planners of today. The other side of the power of today is the future bondage of the living to the dead.[11]

Jonas is relying on the assumption that our descendants, who will presumably be far more technologically advanced than we are, would nevertheless be defenseless against our machinations to expand their capacities. This is almost certainly incorrect. If, for some inscrutable reason, they decided that they would prefer to be less intelligent, less healthy, and lead shorter lives, they would not lack the means to achieve these objectives and frustrate our designs.

In any case, if the alternative to parental choice in determining the basic capacities of new people is entrusting the child’s welfare to nature, that is, blind chance, then the decision should be easy. Had Mother Nature been a real parent, she would have been in jail for child abuse and murder. And transhumanists can accept, of course, that just as society may in exceptional circumstances override parental autonomy, such as in cases of neglect or abuse, so too may society impose regulations to protect the child-to-be from genuinely harmful genetic interventions—but not because they represent choice rather than chance.

Jürgen Habermas, in a recent work, echoes Jonas’ concern and worries that even the mere knowledge of having been intentionally made by another could have ruinous consequences:

We cannot rule out that knowledge of one’s own hereditary features as programmed may prove to restrict the choice of an individual’s life, and to undermine the essentially symmetrical relations between free and equal human beings.[12]

A transhumanist could reply that it would be a mistake for an individual to believe that she has no choice over her own life just because some (or all) of her genes were selected by her parents. She would, in fact, have as much choice as if her genetic constitution had been selected by chance. It could even be that she would enjoy significantly more choice and autonomy in her life, if the modifications were such as to expand her basic capability set. Being healthy, smarter, having a wide range of talents, or possessing greater powers of self-control are blessings that tend to open more life paths than they block.

Even if there were a possibility that some genetically modified individuals might fail to grasp these points and thus might feel oppressed by their knowledge of their origin, that would be a risk to be weighed against the risks incurred by having an unmodified genome, risks that can be extremely grave. If safe and effective alternatives were available, it would be irresponsible to risk starting someone off in life with the misfortune of congenitally diminished basic capacities or an elevated susceptibility to disease.

Why We Need Posthuman Dignity

Similarly ominous forecasts were made in the seventies about the severe psychological damage that children conceived through in vitro fertilization would suffer upon learning that they originated from a test tube—a prediction that turned out to be entirely false. It is hard to avoid the impression that some bias or philosophical prejudice is responsible for the readiness with which many bioconservatives seize on even the flimsiest of empirical justifications for banning human enhancement technologies of certain types but not others. Suppose it turned out that playing Mozart to pregnant mothers improved the child’s subsequent musical talent. Nobody would argue for a ban on Mozart-in-the-womb on grounds that we cannot rule out that some psychological woe might befall the child once she discovers that her facility with the violin had been prenatally “programmed” by her parents. Yet when it comes to, e.g., genetic enhancements, arguments that are not so very different from this parody are often put forward as weighty if not conclusive objections by eminent bioconservative writers. To transhumanists, this looks like doublethink. How can it be that to bioconservatives almost any anticipated downside, predicted perhaps on the basis of the shakiest pop-psychological theory, so readily achieves the status of deep philosophical insight and knockdown objection against the transhumanist project?

Perhaps a part of the answer can be found in the different attitudes that transhumanists and bioconservatives have towards posthuman dignity. Bioconservatives tend to deny posthuman dignity and view posthumanity as a threat to human dignity. They are therefore tempted to look for ways to denigrate interventions that are thought to be pointing in the direction of more radical future modifications that may eventually lead to the emergence of those detestable posthumans. But unless this fundamental opposition to the posthuman is openly declared as a premiss of their argument, they are forced to use a double standard of assessment whenever particular cases are considered in isolation: for example, one standard for germ-line genetic interventions and another for improvements in maternal nutrition (an intervention presumably not seen as heralding a posthuman era).

Transhumanists, by contrast, see human and posthuman dignity as compatible and complementary. They insist that dignity, in its modern sense, consists in what we are and what we have the potential to become, not in our pedigree or our causal origin. What we are is not a function solely of our DNA but also of our technological and social context. Human nature in this broader sense is dynamic, partially human-made, and improvable. Our current extended phenotypes (and the lives that we lead) are markedly different from those of our hunter-gatherer ancestors. We read and write; we wear clothes; we live in cities; we earn money and buy food from the supermarket; we call people on the telephone, watch television, read newspapers, drive cars, file taxes, vote in national elections; women give birth in hospitals; life-expectancy is three times longer than in the Pleistocene; we know that the Earth is round and that stars are large gas clouds lit from inside by nuclear fusion, and that the universe is approximately 13.7 billion years old and enormously big. In the eyes of a hunter-gatherer, we might already appear “posthuman.” Yet these radical extensions of human capabilities—some of them biological, others external—have not divested us of moral status or dehumanized us in the sense of making us generally unworthy and base. Similarly, should we or our descendants one day succeed in becoming what, relative to current standards, we may refer to as posthuman, this need not entail a loss of dignity either.

From the transhumanist standpoint, there is no need to behave as if there were a deep moral difference between technological and other means of enhancing human lives. By defending posthuman dignity we promote a more inclusive and humane ethics, one that will embrace future technologically modified people as well as humans of the contemporary kind. We also remove a distortive double standard from the field of our moral vision, allowing us to perceive more clearly the opportunities that exist for further human progress.[13]

References

Annas, George J., Lori B. Andrews and Rosario M. Isasi. “Protecting the Endangered Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations.” American Journal of Law and Medicine 28, no. 2&3 (2002): 162.

Bostrom, Nick. “Human Genetic Enhancements: A Transhumanist Perspective.” Journal of Value Inquiry 37, no. 4 (2003): 493-506. http://www.nickbostrom.com/ethics/genetic.html.

Bostrom, Nick, et al. “The Transhumanist FAQ, v. 2.1.” World Transhumanist Association, 2003. www.transhumanism.org/resources/faq.html.

Fukuyama, Francis. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus and Giroux, 2002.

Glover, Jonathan. Humanity: A Moral History of the Twentieth Century. New Haven: Yale University Press, 2001.

Habermas, Jürgen. The Future of Human Nature. Oxford: Blackwell, 2003.

Jonas, Hans. Technik, Medizin und Ethik: Zur Praxis des Prinzips Verantwortung. Frankfurt am Main: Suhrkamp, 1985.

Kass, Leon R. Life, Liberty, and Defense of Dignity: The Challenge for Bioethics. San Francisco: Encounter Books, 2002.

Kass, Leon R. “Ageless Bodies, Happy Souls: Biotechnology and the Pursuit of Perfection.” The New Atlantis, no. 1 (2003).

Simpson, John and Edmund Weiner, eds. The Oxford English Dictionary, 2nd ed. Oxford: Oxford University Press, 1989.

[1]. Our thanks to Professor Bostrom for his permission to re-print this article in our Special Issue of the SERRC.

[2]. Bostrom et al., “The Transhumanist FAQ, v. 2.1.”

[3]. Bostrom, “Human Genetic Enhancements: A Transhumanist Perspective.”

[4]. Kass, “Ageless Bodies, Happy Souls: Biotechnology and the Pursuit of Perfection,” 1.

[5]. See e.g. Glover, Humanity: A Moral History of the Twentieth Century.

[6]. Kass, Life, Liberty, and Defense of Dignity: The Challenge for Bioethics, 48.

[7]. Annas, Andrews and Isasi, “Protecting the Endangered Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations,” 162.

[8]. Simpson, John and Edmund Weiner, eds., The Oxford English Dictionary, 2nd ed.

[9]. Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution, 149.

[10]. Fukuyama, op. cit. note 9, 160.

[11]. Jonas, Technik, Medizin und Ethik: Zur Praxis des Prinzips Verantwortung.

[12]. Habermas, The Future of Human Nature, 23.

[13]. For their comments I am grateful to Heather Bradshaw, John Brooke, Aubrey de Grey, Robin Hanson, Matthew Liao, Julian Savulescu, Eliezer Yudkowsky, Nick Zangwill, and to the audiences at the Ian Ramsey Center seminar of June 6th in Oxford, the Transvision 2003 conference at Yale, and the 2003 European Science Foundation Workshop on Science and Human Values, where earlier versions of this paper were presented, and to two anonymous referees.

In this Special Issue, our contributors share their perspectives on how technology has changed what it means to be human and to be a member of a human society. These articles speak to issues raised in Frank Scalambrino’s edited book Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation.

Shortlink: http://wp.me/p1Bfg0-3qa

Special Issue 4: “Social Epistemology and Technology”, edited by Frank Scalambrino


Author Information: Stephen Howard, Kingston University London

Shortlink: http://wp.me/p1Bfg0-3pi



Image credit: Rowman & Littlefield International

Socrates Tenured: The Institutions of 21st-Century Philosophy
Robert Frodeman, Adam Briggle
Rowman & Littlefield International, 2016
182 pp.

Funding is being cut from humanities departments. Tenure-track jobs in philosophy are drying up. Governments and funding bodies are increasingly demanding that the research they fund delivers clear and measurable ‘impact’. Our globalised, technoscientific culture is throwing up a host of urgent ethical, political, even existential questions. Any answers we have come from technocrats or Silicon Valley technologists, futurists and entrepreneurs. In this context, the mainstream of philosophy is failing to address its own impending crisis or enter these major discussions. Philosophers are indulging in insular debates on narrow topics, writing only for their peers: the result of a natural-scientific academic model that encourages intense specialisation.

This, crudely put, is the bleak context that Robert Frodeman and Adam Briggle present at the outset of their lively and provocative new book. In response, Socrates Tenured: The Institutions of 21st-Century Philosophy offers an argument for a reconceived philosophy for the twenty-first century. The thesis can be summarised as follows: philosophy must escape its primarily departmental setting and its primarily disciplinary nature to become ‘field philosophy’. The argument emerges through the book’s curious layered structure. The general thesis is stated upfront, with layers of support and detail added by the subsequent chapters. This structure risks being repetitive, but the quality of the writing prevents the reiteration of the core thesis from becoming tedious, and the central notion of ‘field philosophy’ shimmers into shape by the penultimate chapter of the book.

Part One diagnoses the current crisis in philosophy as double-edged. On the one hand, the discipline finds itself in an institutional setting—the neoliberal university—that is increasingly hostile to the prevailing model of philosophy. In a world of shrinking budgets and ever greater demands for return on investment and direct societal impact, professional philosophy’s self-conception as the pursuit of disinterested, pure thought for its own sake seems increasingly passé. On the other hand, the mainstream of philosophy is failing to engage with the major questions of our times. The debate over our technological modernity takes place in magazines and blogs, in what the authors call our ‘latter-day Republic of Letters’. Insofar as academics are consulted for help with answers to contemporary societal challenges, it is scientists and economists who tend to be called upon.

Part Two evaluates three attempts to remedy this predicament. These are the ‘applied philosophy’ that first appeared in the 1980s, environmental ethics, and bioethics. Only the last of these provides a salutary example for Frodeman and Briggle’s field philosophy, which is finally outlined in Part Three.

What, then, is field philosophy? It would see philosophers ‘escaping the department’. They would move between the university and non-academic sites: NGOs, laboratories, community groups, businesses, think tanks, policy units, and so on. Philosophers may be institutionally based in other departments: medicine, law, the sciences; or they might yo-yo between a philosophy department and wider society. This physical movement would be mirrored by an intellectual one: instead of consisting of closed debates among specialists, the content of the field philosopher’s work would to a great extent be given by the needs of the non-academic field to which they are seconded. Frodeman and Briggle envisage the field philosopher in a dialectical movement, in both mind and body: between urgent, given problems and considered, rational reflection; and between the ‘fray’ of non-academic sites and the ‘armchair’ of the university. As the title shows, this represents a return to a Socratic ideal of the philosopher, embedded in the polis and attuned to the needs of their time.

Frodeman and Briggle acknowledge that this might be seen as a capitulation to neoliberal demands for immediate economic utility. True, many of their statements about the ‘hand-waving’ response of professional philosophy to the demands for increased accountability are not as far from neoliberal critiques of the ‘useless’ humanities as they might be. There is a much-cherished idea that the very conduct of non-utilitarian, specialised humanities research itself represents a performative resistance to a neoliberal agenda. But the authors’ main point is that philosophy should be more pluralistic. Alongside ‘pure’ philosophical work – which might continue in the wealthiest universities, most independent of external pressures – Frodeman and Briggle wish to see alternative models of the figure of the philosopher, which can include the non-disciplinary field philosopher.

Yet a potentially important issue not broached by the authors is: what gives the philosopher the right to pronounce on societal, non-academic issues? Without explicit justification, philosophers appear to risk suggesting that it is simply because we think we’re smart. Admittedly, Frodeman and Briggle insist that the field philosopher’s engagement should be ‘interstitial, horizontal, and reciprocal’, and they give an example of a modest, semi-successful philosophical mediation between community groups and utility companies in a debate over an environmental energy plan.

Nevertheless, such a justification of the philosopher’s input seems to me necessary, and I have two suggestions. Firstly, we might point to the resources that philosophical history offers those who have studied it. This is not just the common, narrow defence of secondary school philosophy as providing tools for logical analysis. Rather, we might point to the synthetic approach to previous systems and ideas that characterises thinkers from Aquinas to Kant to Deleuze. A further resource is the sensitivity to rhetoric and context-sensitive argument, which we see in philosophers like Leibniz or Arendt.

Secondly, we might indicate recent examples of philosophical public intellectuals, who do indeed conceive of their work as an engagement with given societal problems. I am thinking not of purveyors of inoffensive, philosophically-tinged panaceas, such as Alain de Botton, but instead the likes of Foucault or the Frankfurt school. Both of these points serve to underline the fact that it is particularly contemporary Anglo-American analytic philosophy that is the target of Frodeman and Briggle’s critique. While the authors acknowledge that the predominant Anglo-American, disciplinary version of continental philosophy has also become inward-facing and exegetical, we might emphasise that the engaged ‘field philosopher’ is perhaps not such a new figure but was rather active in pre-war, wartime and post-war France and Germany, and has not yet died out in the French-speaking world (and tiny pockets of other countries), at least.

Nonetheless, Socrates Tenured offers a bold diagnosis of philosophy’s malaise and a proposed means to escape it: whatever your view of the proposals, they are worth exploring and debating—even, perhaps, outside of the academy.

Author Information: Matthew R. X. Dentith, University of Bucharest, m.dentith@episto.org; Martin Orr, Boise State University, orr.martin@gmail.com

Dentith, Matthew R. X. and Martin Orr. “Clearing Up Some Conceptual Confusions About Conspiracy Theory Theorising.” Social Epistemology Review and Reply Collective 6, no. 1 (2017): 9-16.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3oY



Image credit: Jes, via flickr

In volume 5, issue 10 of this journal, we—along with five other conspiracy theory theorists (Lee Basham, David Coady, Kurtis Hagen, Ginna Husting, and Marius Raab)—took the authors of an opinion piece in Le Monde to task for advocating a cure to conspiracy theorising (Basham and Dentith 2016). The authors of that piece—Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Sebastian Dieguez, Nicolas Gauvrit, Anthony Lantian, and Pascal Wagner-Egger (that is, all of its authors with the exception of Karen Douglas)—have since replied with a lengthy response, in which they claim:

What “they” had in mind, as must be clear by now, was to study how people, on their own or under some external influence, think and come to endorse some beliefs about such things. That, “they” think, would need some data, rather than wishful thinking, ideological clamours or armchair reasoning (Dieguez et al. 2016, 32).

So, we (at least two of us) are glad we could be of service, helping them elicit their purpose from the opinion piece they penned for Le Monde. However, we think that the lengthy response they have written raises more questions about their project than it answers. In this short reply we will look at three systemic issues in their response: misrepresenting the work of the scholars they are responding to; the naive nature of their scientific research project; and the worry that they are engaged in special pleading.

Misrepresentation En Masse

A curious feature of their response is to try and make out that the authors and co-signatories of the response to the Le Monde piece are inconsistent, or even hypocritical. We take issue with that for two reasons.

The first issue is the simplest to explain: yes, some of the earlier work of the co-signatories (some of it ten years old) is no longer reflective of their current thinking. You would think that people changing their minds, or refining their views, would be considered an academic virtue, but it seems we are expected to hold fast to outdated views, or toe certain disciplinary lines. As is to be expected, any group of scholars is bound to advance work that, despite broad-based agreements, will provide evidence of differences in approaches and conclusions.

The second issue is that where our respondents try to make out that our work is inconsistent, they achieve that only by misrepresenting said work. The errors in their piece are too numerous to catalogue in this short response, so let us just point out four examples, ranging from the bizarre (yet oddly mundane) to the worrying.

First, the mundane. They make much of the claim that Lee Basham is the sole author of the co-signed letter, writing that ‘[Th]e article is referenced with Lee Basham as the sole author’ (Dieguez et al. 2016, 20). Yet, as is clear from the article itself, Matthew R. X. Dentith is listed as its co-author. This seems a simple mistake, but it is one that vexes them so much that they devote an entire footnote to what is an error in their collective reading of the piece.

More troubling is how they present our work. For example, they misrepresent the work of one of the co-authors by claiming ‘Dentith seems very worried by those he calls “conspiracists”’ (Dieguez et al. 2016, 26). They seem to have missed section 7, ‘Stipulating Conspiracism’, where Dentith states quite clearly:

It might also be the case that once we investigate Conspiracism, it turns out to be a fairly useless thesis, especially if it turns out there are not many (if any) conspiracists. However, if we are going to treat the thesis of Conspiracism seriously—and investigate it—we need to keep in mind that conspiracists are simply one kind of conspiracy theorist. The putative existence of such conspiracists does not tell us that belief in conspiracy theories generally is problematic. The question should be ‘When, if ever, is a conspiracy theorist a conspiracist?’ rather than presupposing that conspiracy theorists suffer from conspiracist ideation (Dentith, forthcoming).

‘When, if ever’ are hardly the words of someone who is vexed or troubled by the existence of conspiracists.

This is not the only example of such misrepresentation. Another of our works, a piece co-written by Ginna Husting and Martin Orr, gets similar treatment. Rather than attempting to ‘delegitimize the claims of alien believers’ (Dieguez et al. 2016, 26), Husting and Orr write:

While it is tempting to argue that Hofstadter is simply pointing to certain claims and claimants who seem truly misguided—for example, those who argue that aliens walk among us—this conclusion neglects a fundamentally important process (Husting and Orr 2007, 140 [emphasis added]).

Husting and Orr’s meaning is clear, and the use of the example is to make a point about our inability to establish a priori the truth of a belief or claim (whether a theory or not) simply by fixing the label ‘conspiracy theory’ to it. Likewise, when we characterized the belief that the death of Elvis Presley was faked as ‘extreme’ (Dieguez et al. 2016, 26), we were objecting to the use of this example, and only this example, to reject all ‘conspiracy theories’ as a class of knowledge-claim. When we argue that ‘some claims characterized as conspiracy theories are false’ (Husting and Orr 2007, 131), the qualifier ‘characterized as’ is rather important to our meaning. Perhaps we should have been more direct: the point is that not all claims characterized as conspiracy theories are false.

We can debate the willfulness or sloppiness of these misrepresentations, but what is even worse is that they misrepresent the central argument of the piece they are directly replying to. By dropping essential qualifiers from the co-signatories’ argument they commit us to views we never expressed.

They claim that our position:

[C]an thus be framed as the following two-fold hypothesis: because real conspiracies have happened and still happen, conspiracy theories are not only warranted but necessary; the only reason this is not obvious to everyone is that “conspiracy theories” have been made to reflect badly on those who assert them by the very people they purport to unmask, and their enablers (Dieguez et al. 2016, 21).

Yet that is not what we said. Indeed, we are not committed to any general claim that ‘conspiracy theories are not only warranted but necessary’; at best we are committed to the two claims that:

1. We should not dismiss theories as unwarranted merely because they are called ‘conspiracy theories’, and

2. We should not downplay the necessity of conspiracy theorising. There should be no proscription of theorising about conspiracies, especially in a democracy, even if it turns out that some of those conspiracy theories will be pernicious, even damaging.[1]

So, at best, we agree that conspiracy theories are necessary, in that open democracies should tolerate (if not promote) investigating claims of conspiracy (the investigation of which will be predicated on the expression of conspiracy theories), but nowhere do we claim that conspiracy theories are in all cases warranted.

Now, it seems that what our colleagues meant to say is that we think conspiracy theorising is warranted, given that they claim:

Basham et al. (2016) essentially claim that conspiracy theorizing is generally warranted because there are conspiracies: that is a generalist view (Dieguez et al. 2016, 23).

Do we think conspiracy theorising is generally warranted? We certainly think it is warranted on a case-by-case basis, and we think that we should not dissuade people from theorising about conspiracies. Perhaps, then, we might extend an olive branch and say, yes, we think that—on some level—conspiracy theorising is generally warranted. There is, however, a huge difference between talk of conspiracy theorising and conspiracy theories. Thinking we should not dissuade people from theorising about conspiracies is a long way from saying that we think conspiracy theories are in all cases warranted and necessary. Perhaps our permissiveness about conspiracy theorising makes the existence of conspiracy theories in our polities necessary, but it does not commit us to any claim that said theories are necessarily warranted.

Taken individually, these errors (and we have mentioned but one minor, and three major) are troubling. Taken together, they indicate that our interlocutors have, to paraphrase the words of Sherlock Holmes, ‘seen, but not observed’ (Conan Doyle 1891). It is errors like these which make us think ‘they’ wrote their response in haste: quick to anger; faster to reply. Rather than searching the corpus of seven scholars for evidence of apparently inconsistent views, they might look at what we have written in context. A few isolated or partial quotes might make us look inconsistent, or even foolish, but we trust readers of the reply at hand to be more careful.

A Naive Empiricism

Misrepresenting our work is one thing, but a bigger worry is the thread that runs throughout their reply piece: they are scientists, and our armchair theorising is no match for their experimental method. However, we think our social scientist friends might want to reconsider their scientific model.

The tenor of their reply reminds us of Bill Murray’s line from Ghostbusters: ‘Back off man, I’m a scientist!’ (Murray et al. 1984). Leaving to one side doctrinal disputes about the role of the social sciences in the grand schema of the sciences, the failure of these social scientists to engage with the conceptual analysis of conspiracy theories by philosophers, sociologists, and the like is a marker of science done badly.

They, seemingly, do not want to dirty their work with the kind of theoretical concerns we are interested in. Rather, as scientists they see their job as going out to collect data, and then, perhaps, theorising about said data later. But they are seemingly unaware of work from the middle of the last century which showed that such naive empiricism is untenable. As W. V. Quine argued persuasively, evidence does not determine the truth of theories, because there is a potentially infinite number of theories consistent with any limited set of data points. Rather, our pre-existing theories (whether held explicitly or implicitly) end up being part of what determines what gets counted as evidence for said theories (Quine 1951). As social scientists, they are likely more familiar with the work of C. Wright Mills, who might suggest that ‘only within the curiously self-imposed limitations of their arbitrary epistemology have they stated their questions and answers…. [They] are possessed by … methodological inhibition’ (Mills 1959, 55).

The issue here is that our social scientists are taking the spectre of conspiracism and conspiracists seriously, without either doing the conceptual work to first identify what counts as conspiracist ideation before going off to find people who might suffer from it, or acknowledging that much of this work has already been done. The work of other scholars is ignored, and the difficult preliminary work of clarifying concepts and their relationships avoided. (That this work can often be most comfortably performed in an armchair is beside the point.)

Their whole project depends on taking the ‘conspiracist mindset’ as established empirical fact. Maybe the whole enterprise is scientific per se, but, if so, it is poorly conceptualised and operationalised. What we bring to this debate is a conceptual rigour that they, too, seem to want. Throughout their piece our colleagues ask for more time to work out definitions, or answer fundamental questions. Yet even a cursory look at the literature in philosophy, sociology, or anthropology shows that many of these questions are—if not outright answered—carefully considered (as we will show in the final section). But rather than engage with that work, they opt for special pleading: we need more time to work out the answers for ourselves!

A Case of Special Pleading

This brings us to our final set of worries: the fact that the reply piece penned by our colleagues ultimately rests upon special pleading.

Our social scientist friends present their project in the best possible light. They write:

So, what were “they” up to? Quite simply, “they” advocated for more research. “They” figured that, before “fighting” against, or “curing”, conspiracy theories, it would be good to know exactly what one is talking about (Dieguez et al. 2016, 21).

Specifically, they ask:

Are conspiracy theories bad? Are they good? Are they always bad, are they always good? Who endorses them, who produces them, and why? Are there different types of conspiracy theories, conspiracy theorists, and conspiracy consumers (Dieguez et al. 2016, 21)?

These questions have been addressed by scholars such as ourselves. Indeed, for a full accounting of the problems of defining what counts as a conspiracy theory, and of how our chosen definitions often presuppose answers to the research questions we are asking, they could do worse than look at the first three chapters of Dentith’s book, The Philosophy of Conspiracy Theories (Dentith 2014).

The idea that we can research a topic without defining the terms of that topic seems rather backwards. If we do not define what counts as a ‘conspiracy theory’, how do we begin to measure when someone believes in such a theory, let alone whether that belief is rational or irrational? It is clear ‘they’ think they know what a conspiracy theory is, because they research belief in them. So why the reluctance to settle on a definition? Is it because settling on a definition would lead to problems in making their work fit together as the product of a coherent research programme?

Indeed, for researchers in search of a definition, they seem to have an awful lot to say about the definition they claim to have not yet settled upon.

For example, they claim:

[A]sserting that a conspiracy theory is any kind of thinking or explanation that involves a conspiracy—real, possible or imaginary—and that’s all there is to it, seems like a premature attempt to settle the issue, as if the topic itself was a non-topic and anyone—and that’s a lot of people—who thinks there is something there of interest is simply misguided, or manipulated (Dieguez et al. 2016, 22).

That is to say, they are at least aware that scholars have presented definitions of what counts as a ‘conspiracy theory’, and they have found said definitions wanting. That—at the very least—means they are operating with some definition of the subject-at-hand.[2] (And we would be the last to suggest that conspiracy theories are not of interest.)

So, what is their definition?

For the time being, thus, a “conspiracy theory” is what the conspiracist mindset tends to produce and be attracted to, an apparently circular definition that rests on ongoing work but is firmly grounded in relevant research fields such as cognitive epidemiology, niche construction and cognitively driven cultural studies, and could be refined or refuted depending on future results (Dieguez et al. 2016, 30).

Where do we start? They define conspiracy theories as irrational to believe, despite admitting earlier in their piece that some conspiracy theories have turned out to be warranted. Either they think those warranted theories somehow only became rational to believe over time (at which point we can say they are ignorant of the history of certain prominent examples) or they are being inconsistent with their terminology. Both issues have long been addressed in the wider academic literature.

It follows, then, from their definition that a conspiracy theorist is simply a believer in some irrational theory about a conspiracy. It is telling that they defend their scientific endeavour by pointing only towards weird and wacky conspiracy theories. They ask why alien shape-shifting reptile theories persist, and, yes, that is a good question. Yet they do not talk about the alleged conspiracy theories which turned out to be warranted nonetheless, like the Moscow Show Trials, the Gulf of Tonkin Incident, or Watergate. It is as if these examples of people theorising about actual conspiracies (yet being accused at the time of being irrational conspiracy theorists) are not of interest to them. Could it be because the theoretical basis of their scientific endeavour is entirely predicated on the idea that conspiracy theorists are not only gullible or subject to confirmation bias, but pathologically so—to the point that scientifically-informed state intervention is desirable? They ask us to explain why unwarranted conspiracy theories persist. We could ask them to explain how they would have reacted to John Dewey’s claim, back in the 1930s, that the Moscow Trials were rigged, or to the claim that U.S. intelligence agencies were sweeping up intercontinental communications (subsequently documented by Edward Snowden).

What makes this all the worse is that they acknowledge they start with a circular definition: a conspiracy theory is the sort of thing that attracts a deficient type of person, one plagued by a conspiracy mindset (which is assumed to be a problem from the get-go, rather than, say, a manifestation of the more widespread problems of confirmation bias or premature closure of inquiry). Yes, people believing things that are not true is a problem, so why not start there? That they proceed from a circular definition of the core concept, and then expect empirical research to fix fundamental conceptual problems, is just bad research design.

The Crux of the Matter

We stand, then, by our earlier claim that these social scientists seem to be committed to shutting down talk of conspiracy theories (Basham and Dentith 2016). After all, why would they not? They believe them to be, in all cases, bad beliefs. This, then, is the heart of our disagreement. We (both the authors of this article, and the undersigned of the piece the social scientists replied to) have done the conceptual work the social scientists claim they want to uncover in their empirical work. Now, they could embrace that fact, and consider the work of their academic peers seriously, using it to look at the cases where beliefs in conspiracy go awry (and also at those wonderful examples where it turned out the conspiracy theory was not just true, but well-evidenced and warranted to believe from the outset).

That is to say, before you decide something needs fixing, you need to come up with something other than a circular definition, one that rests on the existence of something demonstrated only by research premised upon that same circular definition. What you do not do is assume the beneficence of those concerned about ‘the kids targeted by the programs’ (Dieguez et al. 2016, 30). That governments might discourage children from thinking critically about their governments (and the corporations they often serve), despite the very real history of the criminal abuse of power, seems to concern them only because they had not been consulted.

Apparently, though, ‘armchair philosophising’ (or, better put, careful conceptualisation of research problems) might interfere. This tendency to ignore the work of philosophers, sociologists, anthropologists and the like shows a stunning lack of insight into the role such theorists have played in the development of the scientific method over the twentieth century. Our conceptual work provides the underpinnings of good, rigorous science. We clarify the theoretical definitions upon which quality research is grounded. However, scientists who work without definitions (or try to hand-wave away their need for them) ultimately produce results which can be easily questioned. After all, if we do not define what a ‘conspiracy theory’ is, how can we possibly measure belief in one? And if we do not know what a conspiracy theory is, how can we identify who the conspiracy theorists are? Yet, while they have a (circular) definition, they are not willing to engage in conceptual analysis of it. It would, it seems, just get in the way of their ‘science’.

References

Basham, Lee and Matthew R. X. Dentith. “Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 12-19.

Conan Doyle, Arthur. “A Scandal in Bohemia.” The Strand Magazine June 25, 1891.

Dentith, Matthew R. X. The Philosophy of Conspiracy Theories. Palgrave Macmillan, 2014.

Dentith, Matthew R. X. “The Problem of Conspiracism.” Argumenta (forthcoming).

Dieguez, Sebastian, Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Nicolas Gauvrit, Anthony Lantian, and Pascal Wagner-Egger. “‘They’ Respond: Comments on Basham et Al.’s ‘Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone’.” Social Epistemology Review and Reply Collective 5, no. 12 (2016): 20-39.

Husting, Ginna and Martin Orr. “Dangerous Machinery: ‘Conspiracy Theorist’ as a Transpersonal Strategy of Exclusion.” Symbolic Interaction 30, no. 2 (2007): 127-50.

Mills, C. Wright. The Sociological Imagination. New York: Oxford University Press, 1959.

Murray, Bill, Dan Aykroyd, Sigourney Weaver, Harold Ramis, and Rick Moranis. Ghostbusters. Burbank, CA: RCA/Columbia Pictures Home Video, 1984.

Quine, W. V. O. “Two Dogmas of Empiricism.” Philosophical Review 60 (1951): 20-43.

[1] As for the second clause: we do not know what they are trying to say, and have to assume that, as the authors are French, it is a bad translation of some otherwise pithy point.

[2] We leave to the side that, once again, our social scientist friends have failed to capture or present this work accurately. These definitions they claim make the topic a non-starter are, in fact, aimed at looking at the broad class of theories covered by such a general definition, such that we can get to the heart of the question of how we judge and appraise such theories.

Justin Cruickshank at the University of Birmingham was kind enough to alert me to Steve Fuller’s talk “Transhumanism and the Future of Capitalism”—held by The Philosophy of Technology Research Group—on 11 January 2017.

Author Information: Clarissa Ai Ling Lee, National University of Malaysia, call@ukm.edu.my

Lee, Clarissa Ai Ling. “Review of Making Medical Knowledge by Miriam Solomon.” Social Epistemology Review and Reply Collective 6, no. 1 (2017): 1-8.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3oA


Image credit: Oxford University Press

Making Medical Knowledge
Miriam Solomon
Oxford University Press, 2015
224 pp.

Miriam Solomon’s Making Medical Knowledge is an important contribution to the philosophy of medical practice, interrogating medical practices while highlighting many of the ethical and scientific gray areas in the social-epistemological character of medical intervention. The findings of basic science are resituated into a messy chain of causal mechanisms represented by the various phases of clinical trials. The production of evidence can be parlayed into the negotiation of guidelines for governing medical situations where consensus over intervention is lacking, and also into the development of a personalized approach in place of the ‘cookbook’ approach to medical treatment. She highlights some contradictions and miscomprehensions regarding the interpretation of quantitative and qualitative medical evidence, while arguing for the need to make a clear distinction between the practice of basic science and that of medicine.

“Disrespecting” Disciplinary Boundaries

Solomon keeps her philosophical intervention closely situated to the not-always-predictable and non-linear movements of medical discoveries, and their potential for heralding new treatments, as they go from the lab to clinical trials to the stage of treatment interventions; her critical descriptions continuously remind us of the importance of not overemphasizing theoretical rigor at the expense of practical applications, while also reminding us of the need to encourage strong science if successful interventions are to happen. She refuses to play to the dichotomy of art and science in medicine, not because she rejects medicine as bearing both characteristics, but because she operates from the perspective that both are equally crucial without being the limits within which medicine operates. She also seems to subscribe to the opinion that reducing medical knowledge to a dualist representation is merely a submission to a traditional logical-positivist empiricist philosophy of science, and that to eschew such dualism is to transcend the binaries of “soft” versus “hard”, “precise” versus “messy”, and “reductionistic” versus “holistic”.

Beyond claiming that she intends to go beyond the art and science divide in her analysis of medical epistemology, which is also part of her mission to “disrespect” disciplinary boundaries, she declares in the first chapter that she will provide a pluralistic account of methodologies that brings together “naturalistic, normative, applied, pluralist, social epistemology” in an integrated manner. She does so especially through a selection of case studies, such as those involving cystic fibrosis, the treatment of heart disease, and the use of mammography for women between ages 40 and 49 as part of early screening for breast cancer, which she uses to highlight a methodologically plural approach in the penultimate (ninth) chapter of the book, where the methodologies discussed throughout are integrated. She is upfront about the STS and HPS traditions that inform her work, though, as a philosopher lacking formal training in social-scientific methods, she acknowledges that her allegiance is to philosophy and, to a lesser degree, the history of medicine.

That said, the methodologies discussed in the book are: consensus conferences for group deliberation among experts (although non-expert stakeholders are also sometimes involved); evidence-based medicine (which some equate with “cookbook medicine”, meaning medicine that prescribes a general intervention for every patient facing the same ailment), along with the problem of bias (or the assumed absence of bias) in randomized controlled trials, hierarchies of evidence, scientific inadequacy, and unreliability; translational medicine, which involves the translation of basic biomedical research into application to patients; and medical-humanities exploration in the form of narrative medicine, which explores professional empathy and the phenomenological aspect of the relation between physicians and their patients.

On Consensus Conferences

The methodology she spends the most time exploring is that of the consensus conference, followed by evidence-based medicine, with only a single chapter each dedicated to translational and narrative medicine. Her reason is that medical consensus conferences are under-considered. The details she provides are useful for presenting the mechanics underlying a movement as heavy in politics as it is in the juggling of multiple epistemic commitments and priorities, and which, to my mind, contains narratives that should be of interest even to those in the medical humanities. Solomon claims that there has not been much interest among historians and sociologists of medicine in investigating this area, which is puzzling, given that the question of authority and expertise, together with the political history involved in the global development of the consensus conference program, should be of interest to them.

Solomon limits her case studies to the development of the program and movement in the UK, North America (particularly the US), and Scandinavia, although every country that practices modern medicine will obviously have its own form of consensus conference; the philosophical generalizations derived from the study of these cases could therefore be applied, with some caveats and modifications, to other local settings.

The case studies that inform her story present the kind of binary tensions that she does not want embodied in her analyses, but which she acknowledges to be an unavoidable consequence of the competing epistemic and social priorities involved in generating consensus, in addition to meeting the expectations of the medical community and of non-medical stakeholders, including administrators, policy-makers, insurance companies (or whoever pays the medical bills), technologists, and perhaps even some members of the public (including patient advocates and patients). Each of these stakeholders will have their own opinion about what counts as epistemically authoritative, credible, and objective, even if the point of consensus seeking is to break away from over-reliance on the arbitrary designation of authority. Using the example of the US-based NIH Consensus Development Conferences, she argues that the justification for the conferences is the production of credibility in translating research findings to general health care providers and members of the public. Solomon considers that knowledge dissemination, and the changing of medical professionals’ practices, transmit trust as much as they do results: trust in the researchers, in the review processes, and in the incentives that will change practices for the better.

The rhetoric of consensus, according to Solomon, aims to dissuade other interested groups from manufacturing doubt about decisions made through the generation of authoritative sources of knowledge. However, the NIH Consensus Development Conference Program has since been retired, and evidence-based medicine has apparently taken over, although Solomon regrets that this potentially leads to over-reliance on evidence-based medicine’s more formalized knowledge-assessment techniques rather than on group knowledge assessment, giving evidence-based medicine a dominant position. At the same time, she acknowledges that the process of achieving consensus is black-boxed so as to hide the more ‘doubtful’ practices from public view, thereby maintaining a credible appearance. However, the Institute of Medicine (IOM) runs a modified version of the NIH program, which it claims will produce an “arguably” attainable and useful form of objectivity not by trying to eliminate intellectual bias but by seeking a balance of expertise. She also cites other US bodies that are variations on the NIH model, such as the Medicare Coverage Advisory Committee (MedCAC) and the US Preventive Services Task Force (USPSTF). The difference with the USPSTF model is that attention is also given to interface (socio-economic level) consensus, committing as many different organizations and experts as possible.

Public Deliberation

As a counter to the NIH consensus program, she presents the ‘Danish model’, which takes a public deliberative approach in contrast to the group expert judgment of the NIH model. She also looks at countries where health care is universal and centralized, as in Canada and Europe; the differences in the system also change how consensus conferences operate in these countries. The Canadian Task Force on Preventive Health Care has a permanent panel, rather than different panels for different topics, mainly because its evaluation is centered on primary care rather than on the NIH model’s evaluation of specialty care.

The European consensus model focuses more on developing interface rather than technical consensus, and therefore more on the social, ethical, and financial consequences of reaching a medical decision, even though it also preserves certain features of the NIH model, such as a neutral (rather than ‘balance of biases’) panel and a half-day of expert presentation and questions, with the production of a public statement the following day. Solomon points to the difficulties of producing mutually agreed-upon guidelines, since different panels can arrive at different guidelines within the same country and with regard to the same medical technology; moreover, the expert composition of the panel (whether disciplinarily homogeneous or interdisciplinary) skews support for particular forms of intervention and, therefore, the guidelines for those interventions.

Solomon follows through on this controversy with a more detailed philosophical discussion, in chapter four, of consensus practices, including the difference between achieving consensus in science and in medicine. As far as Solomon is concerned, consensus in science is merely aimed at achieving the semblance of a united front on an issue that members of the scientific community agree on, even if they might have different reasoning and interpretive processes for arriving at the same conclusion, as in the case of climate change and its representation by the Intergovernmental Panel on Climate Change; consensus negotiation should not be used to prevent controversy from developing out of conflicting views. However, I am not persuaded by Solomon’s claim that the Strong Programme, as advanced by the Edinburgh Science Studies unit, subscribes to a simplistic acceptance of negotiation as a way of getting to truths; if one were to read chapter seven of David Bloor’s Knowledge and Social Imagery, the argument appears to be that what is being negotiated is not the syllogistic rule of formal reasoning, but rather the application of that syllogistic rule; the negotiation takes place when the informal process of reasoning is applied to problematic cases that do not fit the mold at the level of content, even if the logical form appears to prevail. Bloor uses examples from mathematics, particularly arithmetic, to demonstrate his arguments. Therefore, the form of scientific (or mathematical) consensus Solomon attributes to the Strong Programme is not so much a case of scientific consensus in the literal sense of attaining agreement through discussion as a matter of finding ways to explain seemingly illogical contradictions.

Group Deliberation and Judgment

Solomon ventures into discussions of different philosophical views on group judgment, such as the usual assumptions about how group deliberations could be sufficiently robust and rigorous to withstand individual bias and error while including different points of view through relevant data and considerations. Group deliberation is seen as a way of uncovering presuppositions, and of transmitting further evidence from some members of the group to the rest, considered from both internalist and externalist perspectives. The core belief of the internalist approach is that the justification of a belief is internal to each individual and available for reflection, whereas the externalist approach allows one to have knowledge without being able to explain how one is justified or in possession of that knowledge. Her core thesis in this chapter concerns groupthink as a point of fallibility in group deliberation, on the premise that cognitive and motivational biases will always be present regardless of attempts to eliminate them.

The problem of groupthink persists due to factors such as the tyranny of the majority (which leads to the ignoring of dissenting views) and the omission of relevant information. However groupthink operates, Solomon is right to suggest that factors such as peer pressure, pressure from authorities, pressure to reach consensus, time pressure, and, obviously, the presence of particularly domineering members of the group (who are also its more privileged members) can skew decision-making, given that the fallibility of the parts (individual members) that constitute the whole (the group) would hold under such conditions. Moreover, the desire for standardization through consensus also brings the problem of a loss of autonomy for individual professionals, who may not even have a say in the process. Even if the consensus method is not eschewed, Solomon suggests that the process involved remains a work in progress on which no final judgment can be reached.

Evidence-based medicine has a much larger literature contributing to an exploration of its discourse, especially as it is a continuation of the discourse of empirical medicine, as in the context of clinical application. When different trials of the same intervention produce different results, an overall evidence assessment is made through consensus conferences. Without rehearsing the evaluation of the practice of evidence-based medicine that Solomon provides, I venture that her most interesting contribution to the discourse is in removing mechanistic evidence (stemming from mechanistic reasoning), which she discusses in detail in chapter five, from the evidence hierarchy, on the grounds that high-quality mechanistic reasoning is deemed logical and deductive (though she does not quite say why, other than that mechanistic reasoning presupposes complete knowledge of the mechanism in question). Even if complete knowledge were available, it would not follow that an intervention based on knowledge of that mechanism will work; hence, mechanistic reasoning provides weak evidence while remaining useful as an instrument of discovery.

Regarding Cystic Fibrosis

Solomon uses the case of cystic fibrosis to illuminate her argument concerning how the application of mechanistic/causal knowledge and experimental heuristics come together. She argues that the treatment of cystic fibrosis is an example of evidence-based medicine deployed through a multidisciplinary medical team, with therapies discovered at the distal or proximate ends of a causal chain, as in the case of the CFTR (cystic fibrosis transmembrane conductance regulator) gene. She notes that a treatment might not have been created specifically to address cystic fibrosis but rather conditions similar to it; nevertheless, regardless of the original intention behind the treatments, they can still be put to the test to see which are effective in dealing with cystic fibrosis. However, even with evidence of effectiveness (by the standards of evidence-based medicine), the question of why certain therapies work while others do not goes unanswered.

The trials deployed are a mix of randomized double-masked, observational, and other methods. In addition, even where the identification of the genetic mechanisms underlying cystic fibrosis succeeded in the early stages of genetic testing, the improved information produces more uncertainty because more variables are now in play. Although she acknowledges that a more complex consideration is needed, I would suggest that the nested problem of mechanisms could be laid out through a cybernetic systems approach, one which has been deployed in other forms of psycho-social therapy, such as family therapy and counseling.

Chapter six sees Solomon returning to the problem of biases and confounding factors (factors that affect experimental and control groups in a sufficiently major way as to skew results). I agree with her argument that a hierarchy of absences of bias does not translate into a corresponding hierarchy of the reliability of evidence, because any presence of, or potential for, bias indicates a possibility of error. Moreover, even a double-masked randomized controlled trial can fall apart when large differences are being measured, since this allows trial subjects to guess which intervention arm they have been assigned to.

Solomon argues that external validity can serve as a check ameliorating the problem of bias and weak evidence even in double-masked randomized trials, through the deployment of background knowledge and judgment based on the context of those targeted for intervention, and by ensuring common traits between trial participants and the larger population. Further, there is difficulty in generalizing findings from trials due to biological and cultural variations and complexities (which inform much of the discourse in the anthropology of medicine). In addition, a trial may be designed to demonstrate a particular effect and is therefore controlled to the point of excluding the possibility of illuminating another important, even if seemingly irrelevant, condition. An important point for Solomon is that evidence-based medicine does not constitute an algorithmic or infallible scientific methodology; her recommendation for dealing with this epistemic fallibility is to apply social interventions such as the setting up of trial registries, with the aim of preventing manipulation of data, publication bias, bias resulting from time to publication, and conflicts of interest, among other issues that could potentially arise.

Translational Medicine

Translational medicine, considered as the translation of pure scientific knowledge into effective healthcare application, is divided by Solomon into T1, “applied research from bench to bedside (and back),” and T2, “moving successful new therapies from research to clinical context” (159). It contains many historical cases that demonstrate the messiness of the process: the timely arrival of a technology that could facilitate the operationalization of basic research after years of being stymied, and, of course, the right team of people coming together to carry the work from the lab bench to the clinical stages. Given the affinities and overlaps one can find between evidence-based medicine and translational medicine, evidence-based medicine requires a complementary methodology that can take on risk through the problem solving, careful observation, and tinkering needed to move the implementation of clinical interventions to the next level, even with success not assured and failure a strong possibility.

For Solomon, translational medicine occupies the level before evidence-based medicine, situated in the context of discovery, while the more prestigious evidence-based medicine is located within the context of justification. Given the underdevelopment of translational medicine at this point, however, Solomon does not have much to suggest from a philosophical viewpoint. She nevertheless offers possibilities for how the methodology of translational medicine could be strengthened within the causal chain of the various therapies to be applied, such as in the case of the CFTR gene. Her most interesting contribution in this regard is to show how this is put into practice in chapter nine, on the pluralism of methods.

In her appraisal of narrative medicine in chapter eight, Solomon does not offer as much philosophical insight into the contents of the narratives themselves. Instead, she focuses her evaluation on the causality and effects of the deployment of narrative medicine, which provides a useful reflection on the intent behind deploying narrative medicine in the first place: forms of listening that need not be confined to the verbal, the creation of empathy at the experiential level (through an invocation of the aforementioned phenomenological method of eliciting the full spectrum of patient experiences), making the right diagnosis (through a close reading of the narrative between physicians and their patients), and making meaning out of the information compiled through the three aforementioned techniques. She also brings up historical precedents to narrative medicine that had fallibilities of their own, such as the biopsychosocial model, considered lacking in intimacy, empathy, democracy, and attention to the dyadic relation between individual physicians and their patients, while deconstructing certain assumptions regarding the efficacy of narrative medicine in entering zones untouched by the biomedical-scientific method.

Obviously, the philosophical insight that comes out of this investigation could go even further when paired with the techniques of narrative deconstruction found in literary and other areas of inquiry that attend more closely to the problem of narrative contexts and non-evident subjectivities. Philosophy may identify such possibilities, but its current deployment through the philosophy of science and medicine lacks the capacity for deeper penetration. Philosophically, it would appear that narrative medicine has only a limited role to play in medicine, being confined to deployment in primary care and psychiatry; but from a cultural angle, there is much more that could be offered even to specialized medicine.

The Importance of Dissent

What I find most compelling about the book is its advocacy of a pluralism that is not merely a composition of multidisciplinary or multi-modal methodologies, but an account of how they could be integrated most effectively. Such pluralistic considerations have not been given as much attention in the literature of the medical humanities or the social-cultural studies of medicine. This book, perhaps, could open up more possibilities for such developments, especially for the examination of the development of medical infrastructures and practices outside the Western world. The penultimate chapter offers a philosophical reconsideration of how certain traits deemed crucial in the practice of science, such as dissent, might work in reverse in medicine. However, I argue that the avoidance of dissent in medical practice will produce its own form of dogmatism, such as when less orthodox approaches of as yet unknown value are put forward, which happens more frequently than is acknowledged. While I believe that the arguments she offers are important considerations for medical practitioners and not only for philosophers, the density of the material and the lack of clear prescriptions for practice will require patience from practitioner-readers as they contemplate how best to profit from the book’s discussions.

References

Bloor, David. Knowledge and Social Imagery. 2nd ed. Chicago and London: University of Chicago Press, 1991.

Solomon, Miriam. Making Medical Knowledge. Oxford: Oxford University Press, 2015.