
Author Information: Frank Scalambrino, University of Akron

Scalambrino, Frank. “Employees as Sims? The Conflict Between Dignity and Efficiency.” Social Epistemology Review and Reply Collective 6, no. 2 (2016): 35-47.

The PDF of the article gives specific page numbers.



Image credit: Aaron Parecki, via flickr

“… that which mediates my life for me, also mediates the existence of other people for me.” —Karl Marx[1]

Today’s technological mediation allows for unprecedented amounts and depths of surveillance. Those who advocate for such surveillance tend to invoke a notion of public safety as justification. On the one hand, if acceptance of being surveilled follows a philosophy, it would seem to be a kind of “greatest good for the greatest number” philosophy. However, it may be the case that the philosophy functions as an after-the-fact excuse, and people are simply willing to accept surveillance so long as they are able to use their technological devices. On the other hand, it is interesting to note that with context shifts in which such a philosophy could no longer justify surveillance, a philosophy of ownership may be the only viable justification for such surveillance. Yet, insofar as we are discussing the freedom of individuals, e.g. “employees,” we should be critical regarding surveillance justified by a philosophy of ownership.

This article seeks to provide a critique of surveillance in situations where surveillance thrives despite the tension between freedom and ownership. Specifically, this article examines the development of workplace surveillance—through technological mediation—from “loss prevention” to “profit protection.” The tension between freedom and ownership in this context may be philosophically characterized as the tension between dignity and efficiency. After describing an actual workplace situation in which a retailer uses technological mediation to surveil employees for the sake of “profit protection,” a critique of surveillance will emerge from a discussion of the notions of efficiency and dignity in relation to freedom. Rather than determine the justification of surveillance through technological mediation in terms of the “justified true belief” of “profit protection,” this article—from the perspective of social epistemology—takes for its point of departure a conception of knowledge in terms of the “social justification of belief” (Rorty, 1979: 170). Hence, the policy recommendations regarding technological mediation with which this article concludes may be understood as developed through social epistemology and a concern for freedom most often associated with existential philosophy.

Employees as Sims?

It is already the case that business owners may use their smartphones to access “real time” audio and video surveillance of their employees. This article considers a retail business with stores in more than one U.S. state; speaking with individuals who have worked under such profit-driven surveillance is illuminating. The retail space in question was small enough to have audio and video surveillance covering the entire premises where employees and customers could interact. One employee described how “the boss” was “on a beach somewhere having a drink” watching the employee in question work. The “boss” would then periodically call the business to have “middle management” ask this employee why he was doing whatever it was he was doing. The employee described the experience as “stressful.” Further, he described feeling “paranoid,” at times, not knowing for certain how closely he was being surveilled from moment to moment.

The idea of using technology to surveil a workplace is not new. However, the kinds of technology available today allow for unprecedented levels of surveillance. Whereas less technologically-mediated work environments could have justified surveillance in terms of employee safety and loss prevention, e.g. theft and accidental destruction, today’s technologically-mediated workplace allows for greater depths of “micro-managing” through surveillance. What we will see is that despite any negative connotation associated with the notion of “micro-managing,” when understood along a spectrum of “loss prevention” and in conjunction with the technological mediation which allows for it, the use of surveillance for the purpose of micro-managing employees can seem as justifiable as locking the door when you close shop for the night.

Originally the idea of “loss prevention” centered on monitoring for theft. If setting up video surveillance will deter theft or help you recover lost property after theft, then the calculation seems straightforward enough that the video surveillance of your business is a good investment. Further, if video surveillance helps defend business owners against unwarranted worker compensation claims by employees who were hurt on the job through no fault of the business, then again the calculation seems straightforward enough. In fact, retail businesses often employ an entire “loss prevention” department tasked not only with monitoring video surveillance of the business’s premises but also, often, with appearing as customers among the customers to ensure shoplifters are quickly captured and restrained. From the perspective of a philosophy of ownership, the idea is that you own property which you are offering to sell to others, and if others attempt to take your property without compensating you as you deem appropriate, then it seems straightforward enough that your rights regarding your property have been violated.

Now, the idea of “profit protection” may be understood as an extension of “loss prevention.” Moreover, it should be kept in mind that such “profit protection” would not be possible without today’s technological mediation. “Profit protection” is supposed to refer to the reduction of the preventable loss of profit, and “the preventable loss of profit” refers to loss resulting from actions performed inadvertently or deliberately. Thus, notice how surveillance for the sake of “profit protection” may technically extend beyond theft and accidental destruction of property. In other words, if employees are not performing their job duties in a way that allows for the sale of your property, then the profit which you could have reasonably earned through their labor is lost.

There are a number of ways technological mediation allows for “profit protecting” surveillance. First, just like the popular smartphone applications which allow individuals to monitor their property while away from their homes or apartments, business owners may monitor not only their property but also the individuals tasked with facilitating the sale of their property. Second, a business owner could easily isolate which employees are not performing as efficiently as they should by simply tracking sales. Given a reasonable expectation of sales, whether determined by season and time of day or by the ratio of sales to customer traffic, business owners can determine when their property is not being sold as efficiently as it should be. Lastly, then, business owners may use technology to surveil those particular employees who are working during the times when business operations are not as efficient as they should be. In doing so, business owners could learn what these employees are doing “wrong.”

Notice, if such surveillance is framed as a “teaching opportunity,” then an employer could construe the whole surveillance operation as benevolent and caring, without even needing to mention “profit protection.” However, to whatever extent a calculation is involved to justify the use of management time to surveil such employees, the notion of “profit protection” could be easily revealed as operable, despite denial on the part of the business. In either case, notice how the surveillance of such employees seems to justify such “micro-managing” as questioning sales techniques, and such a technologically-mediated relation to the employee would extend all the way to monitoring what employees say and how they say it. After all, even an employee’s relation to customers, if understood in terms of cybernetics[2] (cf. Scalambrino, 2014 & 2015b), may be quantified in terms of variables which correlate with successful sales. Thus, a business owner may be seen as protecting profit by micro-managing the facial expressions, tone of voice, and suggestions made by their employees.

On the one hand, if all this is beginning to sound as if technologically-mediated business may make employee management and relations into a kind of video game (such as “The Sims”), then you are following the argument of this article.[3] On the other hand, there are three points to keep in mind. First, it would be too cumbersome to conduct such management and relations to employees, as if they were Sims, without technological mediation. Second, notice how framing the micro-management associated with such surveillance in terms of “profit protection” makes the enterprise sound like good (cybernetic) science and a wise business investment. Third, we will consider the question: How does such surveillance and micro-managing affect employees and relate to the constitution of their employee-identity? As we will see, whereas the second point may be rightfully characterized in terms of the efficiency of an employee in regard to the performance of assigned tasks, the third, which we will characterize in terms of the “dignity of the person” who is the employee, is not a simple question to answer. Moreover, as we shall see, the efficiency made possible by technological mediation seems to have tipped the balance in favor of efficiency over dignity.

The Conflict Between Efficiency and Dignity

There are a number of ways to articulate the conflict between efficiency[4] and dignity, and in doing so a distinction may be made between the rationale and the value[5] of such micro-managing and surveillance of employees through technological mediation. Privileging efficiency, it may be argued that the feelings and self-identity of an employee need not be included in the concerns of a reasonable business owner. In this way, it may be said that business owners need not include concerns for employee feelings and self-identity in their rationale for implementing various surveillance and management practices. Yet, insofar as employee feelings and self-identity have value which can be correlated with profit, then it becomes an issue of efficiency to control these variables as much as possible. That is to say, a cost/benefit analysis may be called for in which the impact of such variables on profit could be determined.

Considering profit necessary to sustain a business, a cost/benefit analysis of the appropriate relation to employee dignity can be quite complicated. For the purposes of this article, consider the following possibilities. The value of privileging dignity may run directly counter to “profit protection.” That is to say, venturing into the dimension of surveilling employees to promote various dignity-related psychological features may seem counter-intuitive, not only because a certain amount of disgruntlement may be constitutionally the norm for some individuals but also because it may be difficult to control the cost of sustaining such a workplace environment. Further, it is not immediately clear that surveilling, micro-managing, and subsequently firing an employee for an inability to sustain a profit margin is necessarily contrary to the dignity of the employee. Whereas it may be more consistent with “profit protection” to screen potential employees for job aptitude, rather than hire individuals and subsequently surveil them for aptitude, determining for an individual that they are not good at performing a task may be seen as providing helpful guidance consistent with respecting their dignity.

The “helpful guidance” framing of firing an employee is reminiscent of the “teaching opportunity” framing of surveillance and micro-management. In other words, though it may seem intuitively beneficial for an employer to appear to its employees as concerned with employee dignity in its various rationales for investing in surveillance and micro-managing, again it seems concern for profit would be the ultimate determining factor in whether the costs associated with maintaining such an appearance constitute a good investment for the business. Moreover, on the one hand, such an appearance of concern could be construed as a kind of alternative compensation, so business owners could justify keeping larger amounts of profit, e.g. “At our workplace managers will work with you to ensure you love your job.” On the other hand, establishing a workplace in which it is a requirement of employment that employees appear happy at all times may be considered unreasonably oppressive.

Hence, it seems even if a business were to remain neutral in expressing rationale for its actions regarding dignity, there may be a spectrum along which businesses cannot help but be placed regarding how they value employee dignity. On the end of the spectrum privileging efficiency would be located automatons, resulting from analyses and established through an investment in future profit; on the end of the spectrum privileging dignity would be autonomous persons, perhaps involved in a “profit-sharing” business.

Autonomy and Self-Awareness: The Scope of Simulation

There are three distinctions, now classic in the history of Western philosophy, which will help articulate the conflict between efficiency and dignity. These distinctions come from Immanuel Kant’s (1724-1804) ethics. The three distinctions are: the “three natural pre-dispositions to the good,” the “principle of ends” (as the second formulation of Kant’s famous Categorical Imperative), and the difference between “a person of good morals” and “a morally good person.”[6]

Building on Aristotle’s divisions of the soul, Kant distinguishes between the “animal,” “human,” and “personal” dimensions. Each of these dimensions has a corresponding type of “self-love,” which individuals use to determine self-worth. At the level of animality, self-love is “mechanical” and determined by physical pleasure. Individuals centered on this level determine the value of their existence by how much physical pleasure they experience in life. At the level of humanity, self-love is “comparative.” This is due to the fact that rationality cannot help but determine ratios. Individuals centered on this level determine the value of their existence by comparing aspects of their lives to the lives of others.

Finally, at the level of personality, according to Kant, the “predisposition to personality is the capacity for respect for the moral law as in itself a sufficient incentive of the will” (Kant, 1960: 34). Thus, fully actualized individuals determine their self-worth as “a rational and at the same time an accountable being” (Ibid), and the difference most relevant for our discussion is the sense in which a person has self-respect beyond the natural human tendency to compare oneself with others. In other words, though someone may have more money or better possessions than you (cf. Epictetus, 1998: §6), you may value yourself in terms of your disciplined harmony with right living. Insofar as “right living” is meaningful, its truth and reality precede an individual’s acceptance of it. That is to say, it is true that touching the hot stovetop will hurt you, prior to your touching it and independent of your beliefs regarding it.

Hence, there are two conclusions to be drawn here. First, “dignity of the person” is meaningful, whether the self-respect associated with it is actualized by individuals or not. Second, “dignity” refers to the self-actualization which corresponds (as we will see more completely in a moment) with the highest natural capacity for living in humans. That is to say, individuals who have not actualized the personal dimension, and thereby self-respect, are individuals who are not living the most excellent life available to humans.

Two brief references to other philosophers may be helpful here for clarification. In regard to the second point, Friedrich Nietzsche’s (1844-1900) statement, “the seal of liberty” is “no longer being ashamed in front of yourself” (1974: 220) need not be understood as a philosophy of “anything goes,” but rather may be understood as indicating liberation from a life of self-shaming in regard to a comparison with the rest of humanity. Further, the first point, above, invokes a classic passage in Plato’s Republic where Socrates notes that rulers (i.e. employers and bosses) “in the precise sense” are people who “care for others” (Plato, 1997: 340d). This is, of course, juxtaposed with the definition of justice offered by Thrasymachus, namely, that “Rulers make laws to their own advantage.” (Ibid: 338c).

The next distinction from Kant is his “principle of ends.” This is the second formulation of his famous “Categorical Imperative,” and it suggests you should act in such a way “that you use humanity, whether in your own person or in the person of another, always at the same time as an end, never merely as a means.” (Kant, 2002: 38). On the one hand, notice how this suggests we should not use others as a means to determine our own self-worth.  On the other hand, it also points to the dignity of persons as ends in themselves. That is to say, the principle of ends suggests a person should not use others in such a way that it is merely for utility. As we will see, for Kant this goes beyond J.S. Mill’s “principle of liberty”[7] in that to treat another person—even a consenting person—merely as a means, and thereby not as a self-respecting person, may be construed as a kind of harm to their person insofar as their ability to self-actualize their personhood is conditioned by their capacity for self-respect.

The final distinction from Kant, then, is the one between “a person of good morals” and “a morally good person” (cf. Scalambrino, 2016c). What is fascinating about this distinction is that it is not in terms of the actual action that the different types of individuals perform. Both persons may perform the same action; however, the latter type of person is motivated in terms of the self-respect of personhood, and the former is motivated in terms of a different pre-disposition to goodness. Notice that because all of the pre-dispositions are “to the good,” it is not in terms of the goodness of the action that its performance should be evaluated. Rather, it is the motivation that determines which performance of the action is better. This will be important for the thesis of this article, as there is no attempt being made to suggest that profit is “not good.”

To synthesize these distinctions from Kant, notice he believes the “morally good person” is freer and better existentially-situated than the “person of good morals.” Further, he thinks the “morally good person” is living a more excellent life than the “person of good morals,” and all of this despite the fact that both individuals may be performing the same actions. How is this the case?

Because the three pre-dispositions to the good constitute a hierarchy, in order for an individual to actualize the highest capacity, i.e. for personhood, the existentially-prior capacities must first be actualized.[8] This means “personhood” is a higher excellence than mere “humanity,” and personhood is existentially-situated in a better way, therefore, since the person has a wider horizon of evaluation available to it than in terms of mere humanity. For example, even someone merely at the level of humanity, hoping for the best means to manipulate others, would find that a wider horizon of evaluation provides a wider range of potential justifications; this may be seen in the attempt to suggest that profit-driven surveillance is somehow for the benefit of the surveilled, even when the motivation determining the performance of the action is clearly “profit protection.”

In order to understand how the “morally good person” also lives the better life, a brief reference to Aristotle’s Nicomachean Ethics may be helpful. As Aristotle goes through the various types of life in his search to discover the best life for humans, he notes, “The life of money-making is one undertaken under compulsion, and wealth is evidently not the good we are seeking; for it is merely useful and for the sake of something else” (2009: 1096a5). The idea here is that to ask regarding the natural purpose of human life is to ask what human life is in itself, i.e. as an end for itself and not as a means to be expended for something else. This points directly to the synthesis of Kant’s distinctions as justifying how the “morally good person” lives the better, i.e. the most excellent, life available to humans, in that the natural presence and hierarchical order of the dispositions suggests that life was made to fully actualize itself.[9] To be fully-actualized means to actualize the highest pre-disposition, which is the predisposition in which life treats itself as an end in itself, whether in its own person or in that of another, and thereby constitutes the dignity of personhood through its self-respect.[10]

Lastly, notice how the above explication of Kant’s ethics regarding the dignity of personhood may be characterized in terms of “self-awareness” and “autonomy.” The individual who has actualized the capacity for personhood may relate to itself in terms of a greater number of dimensions than the “person of good morals,” who is not performing actions with the full[11] actualization of their self. In this way, the “morally good person,” in expressing the self-respect associated with the dignity of personhood, is more self-aware. Were this a matter of content, then it would be as if age should determine the greatest amount of self-awareness; however, this is a matter of capacity, not content. In a similar way, Kant characterizes the autonomy of an individual, not in terms of content but rather, in terms of relation (cf. Scalambrino, 2016b).

Thus, it is the “autonomy” of the fully actualized person which makes them freer. According to Kant, the “principle of autonomy” is “The principle of every human will as a will giving universal law through all its maxims [i.e. its code of conduct]” (Kant, 2002: 40). Notice, because both the “person of good morals” and the “morally good person” perform the same action, it may be said that they are following the same “law.” However, it is not the following of the law but the relation to the law when following it that differentiates these two types of individuals. In other words, because the “morally good person” understands its self-worth in terms of its accountability to the Natural Moral Law, it is motivated in terms of self-respect exemplary of the dignity of personhood. In this way, this type of person is freely choosing to follow the law. Because other types of individuals have motivations other than the accountability determining personal dignity, their decisions to follow the law are compelled by other motivations. The motivation to follow the law for its own sake is not a motive additional to the motive made possible through the actualization of personhood.

Efficiency and Dignity

In what way does the above section illustrate “the limits of simulation,” and how do the limits of simulation relate to the conflict between efficiency and dignity? Again, it is, of course, technological mediation that conditions the whole problem under discussion. In other words, it is the amount and depth of surveillance made possible today by technological mediation which has allowed for the shift from “loss prevention” to “profit protection.”

On the one hand, the above section helps illustrate that though loss prevention and profit protection may be good, the surveillance of employees for their sake is founded upon a relation in terms of “humanity,” at best, and not “persons.” In other words, it seems to neither treat employees with dignity nor to provide an environment which may help them fully actualize self-respect as an employee. Like “persons of good morals” in Kant, employees under surveillance may perform the right action and the same action that an employee with dignity and self-respect may perform; however, also like “persons of good morals,” employees under surveillance may lack the best motivation to perform their work “duties.”

On the other hand, it is autonomy and self-awareness that limit the scope of possible simulation. What this ultimately means is that if the goal is efficiency, then approaching it through technological mediation, as if to make employees simulations of the desires and knowledge of their employers, may only lead to short-term capped-amounts of efficiency. In other words, it seems consistent with the above Kantian discussion of self-actualization to note that employees who respect themselves as persons who do the kind of work they are employed to do should make for the best employees. That is, long-term efficiency seems predicated upon autonomous employees who are self-aware for their own sake. Simulation is ultimately limited by the lack of autonomy and self-awareness associated with employees motivated at Kant’s level of “humanity,” and even when performing the correct actions, it is as if they do so like “persons of good morals,” not “morally good persons.”

For those who advocate for efficiency, even at the cost of dignity, the above discussion suggests promoting dignity might be a better way to promote efficiency. One, it is inefficient to “micro-manage” employees. Two, even with the use of cybernetics and technological mediation to help indicate where such “micro-management” may increase efficiency, such practices may work against efficiency to the extent that they undermine employee dignity. As the above discussion suggests, employee dignity indicates more self-actualization, i.e. a freer and better existentially-situated employee. In this way, it may be true that if an employee will not submit to conditions of technological mediation, a replacement who will may be easy to find. However, the ease with which individuals with less self-respect and dignity, or with greater compelling conditions, may be found neither resolves the conflict between efficiency and dignity nor ensures efficiency.

Excursus: Control & Inauthenticity: Simulation, “Legacy Protection,” and Despair

Some readers of our edited volume Social Epistemology & Technology: Toward Public Self-Awareness Regarding Technological Mediation have recognized, at least, an analogy between society and families in regard to the control for which technological mediation allows. Though we cannot work out every detail here, we can provide a sufficient sketch of the analogy to, if nothing else, provoke deeper thinking and self-awareness regarding the potential effects of technological mediation. In general, this question relates to the chapters located in the second half of Social Epistemology & Technology, and specifically in regard to my chapter “The Vanishing Subject: Becoming Who You Cybernetically Are.” Of particular interest regarding this topic may be the section of that chapter titled “Pro-Techno-Creation: Stepford Children of a Brave New Society (?),” though if read in isolation from the rest of the chapter, that section may seem obscure. Since my second article in this SERRC Special Issue will be devoted to discussing the theme to which the second part of Social Epistemology & Technology was devoted, i.e. the theme of “changing conceptions of humans and humanity,” we will not engage such a discussion in this excursus (cf. Scalambrino, 2015b & 2015c).

In regard to the analogy, “profit protection” is to the use of technological mediation in business as “legacy protection” is to the use of technological mediation in the family. The basic idea is that, just as technological mediation may be used to control employee actions, technological mediation may be used to constitute select attributes of a child (e.g. IVF, PGD, CRISPR-Cas9, etc.) and to promote and sustain a select identity for the child. The motivation may be characterized as “legacy protection,” since the ends afforded by technological mediation constitute a kind of investment made by parents. In this way, the dynamics of the problem we uncovered above concerning employees, employer desires, and technological mediation manifest analogously in regard to the family. That is to say, the question of the employee’s existential-freedom becomes the question of the child’s existential-freedom, and the dilemma regarding whether to risk losing profit to allow for the individual’s autonomy and increased self-awareness becomes the risk of losing one’s legacy and “investment” in one’s children.

Given the large cost associated with what amounts to genetically engineering one’s children, it is clear that parents have some goal(s) in mind when selecting various attributes for a child (cf. Marcel, 1962). Whether this initial investment is made or not, some see it as the technologically-mediated equivalent of mate selection; however, notice, whether equivalent or not, the level of control increases significantly through technological mediation. Beyond the birth of the child, then, there is the question of how to sustain the initial investment made—whether through mate selection or genetic engineering—to ensure “legacy protection.” The idea here is that whatever goal(s) parents have in mind when selecting, perhaps as best they can, various attributes for a child, those goals point to the legacy the parents are attempting to protect.

As the technological mediation of a child’s life increases, so too does the potential to surveil and control the child. Since the ways of increasing surveillance should be obvious (e.g. checking to see what websites they view, what they text to friends, GPS tracking of where they go, and so on), we will focus only on control here. Control is understood here in the sense of limiting the full self-actualization associated with personhood above and discussed through the philosophy of Immanuel Kant. That is to say, if you are able to limit an individual’s self-actualization to the level of “humanity,” then they will continually constitute their identity through comparison with others. Just as I indicated in my second chapter of Social Epistemology & Technology, the way to “lock down” such self-awareness is by “misunderstanding nothing.” What this means is that if you can provide an individual with a worldview that seems to provide an account for everything in terms of that individual’s comparative self-worth to others, then you control that individual’s ability to interpret their own existence.

When this can be anchored through a talent in which the individual excels, then the comparative model may be all the more effective, since the individual sees themselves as “winning” or a “winner” based on an identity which takes itself as able to account for whatever happens in life. The problem, Kant would say, is that the individual is not fully autonomous. The “law” given to them is not of their own choosing. There are a number of ways to use technological mediation to control individuals, and thereby to ensure “legacy protection.” On the one hand, a discussion of inauthenticity and memes would be appropriate here, since it becomes possible to understand the whole enterprise of “legacy protection” as founded upon the comparative understanding. Thus, the agency more commonly attributed to the parental desire to ensure legacy protection may be attributed to the transmission of the comparative worldview itself from generation to generation—like the transmission of thought memes—in that the parent evidently operates with the same worldview which, successfully engineered into the child, should likewise promote that child’s desire to pass on the worldview that values “legacy protection” to their own children, and so on.

In this way, cybernetic theories of human existence function as a kind of support for holding individuals at the human level, in which self-worth is determined through comparison and self-awareness and autonomy are thereby diminished. The phrase “cybernetic theories of human existence” refers precisely to any theory of existence which holds that all of existence can be explained. The sense in which such “epistemic closure” misunderstands nothing suggests to the individuals inhabited by it that it is a worldview that can provide them with the truth in regard to everything (cf. Scalambrino, 2012). “Existentialists” resist such systematization because it treats life like “a problem to be solved,” rather than (as Kierkegaard phrased it) “a mystery to be lived.” It is worth noting that Kierkegaard characterized such an inauthentic relation to life as “despair” (cf. Scalambrino, 2016b).

Some of the memes that are easy to notice are phrases such as “a gap year.” When an individual looks at the time of existence as though it is merely fulfilling a pre-established form, like a “cookie cutter,” then we should ask: How did that form get there? Notice how the perfect example here would be to invoke the self-understanding of individuals in “third world” locations, and ask what a “gap year” is for them. The idea is not that “gap year” has no reference. Rather, the idea is that individuals who truly believe that their lives are, and should be, following a pre-established pattern are individuals who are neither fully autonomous nor fully self-aware (cf. Marcuse, 1991). Of course, proponents of “legacy protection” may suggest that insofar as the individual in question is not from a “third world” location, understanding the time of one’s existence in terms of “gap years, etc.” is a privilege to be coveted. Why is it a privilege to be coveted? Perhaps because such a self-understanding makes it more efficient for the individual to live (and pass on) the privileged existence which is their legacy.

Beyond any technological mediation used to genetically engineer a child, technological mediation helps hold individuals at the human level in which self-worth is determined through comparison by helping to sustain an identity, however explicit it may be to the individual, anchored in a cybernetic worldview. Technological mediation does this in all the ways philosophers have been saying it does since at least Plato’s discussion of the technē of “writing” and its effects on human self-understanding. Yet, more to the point, when Heidegger and Jünger discuss the “form” in which humans understand themselves as “standing reserve” or as “workers,” then we can see the insidious influence of technological mediation as twofold. First, the efficiency allowed for by technology becomes an expectation. For example, the expectation is common today that we should have all our email accounts consolidated in an app on a smartphone, so that we can receive emails with the efficiency of text messages. Second, the idea that one may have some self-understanding other than legacy “protector” or germ-line “curator” is really just the folly of an inefficient employee or the noise of malfunction in a cybernetic human machine.


References

Aristotle. Nicomachean Ethics. Translated by Roger Crisp. Oxford: Oxford University Press, 2009.

Ashby, William Ross. An Introduction to Cybernetics. London: Filiquarian Legacy Publishing, 2012.

Ellul, Jacques. The Technological Society. Translated by J. Wilkinson. New York: Vintage Books, 1964.

Epictetus. Encheiridion. Translated by Wallace I. Matson. In Classics of Philosophy, Vol I, edited by L. P. Pojman. Oxford: Oxford University Press, 1988.

Fuller, Steve. “The Place of Value in a World of Information: Prolegomena to Any Marx 2.0.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 15-26. London: Rowman & Littlefield International, 2015.

Heidegger, Martin. Being and Time. Translated by John Macquarrie and Edward Robinson. New York: Harper & Row, 1962.

Heidegger, Martin. “The Question Concerning Technology.” In Basic Writings, edited by David F. Krell, 307-343. London: Harper & Row Perennials, 2008.

Johnston, John. The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI. London: The MIT Press. 2008.

Jünger, Ernst. “Technology as the Mobilization of the World Through the Gestalt of the Worker.” Translated by J. M. Vincent, revised by R. J. Kundell. In Philosophy and Technology: Readings in the Philosophical Problems of Technology, edited by Carl Mitcham and Robert Mackey, 269-89. New York: The Free Press, 1963/1983.

Kant, Immanuel. Groundwork of the Metaphysics of Morals. Translated by Mary J. Gregor and Jens Timmermann. Cambridge: Cambridge University Press, 2002.

Kant, Immanuel. Religion Within the Limits of Reason Alone. Translated by T.M. Greene and H.H. Hudson. New York: Harper & Row, 1960.

Lyotard, Jean-Francois. The Postmodern Condition: A Report on Knowledge. Translated by Brian Massumi. Minneapolis, MN: The University of Minnesota, 1984.

Marcel, Gabriel. “The Sacred in the Technological Age.” Theology Today 19 (1962): 27-38.

Marcuse, Herbert. One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society. Boston: Beacon Press, 1991.

Marx, Karl. “The Power of Money.” In Economic and Philosophic Manuscripts of 1844. Translated by M. Milligan, 136-141. New York: Dover Publications, 2007.

Nietzsche, Friedrich. The Gay Science. Translated by Walter Kaufmann. New York: Vintage Books, 1974.

Plato. Republic. Translated by G. M. A. Grube, revised by C. D. C. Reeve. In Plato: Complete Works, edited by John M. Cooper. Indianapolis, IN: Hackett Publishing, 1997.

Rorty, Richard. Philosophy and the Mirror of Nature. Princeton, NJ: Princeton University Press, 1979.

Scalambrino, Frank. Full Throttle Heart: Nietzsche, Beyond Either/Or. New Philadelphia, OH: The Eleusinian Press, 2015a.

Scalambrino, Frank. Introduction to Ethics: A Primer for the Western Tradition. Dubuque, IA: Kendall Hunt Publishing Company, 2016a.

Scalambrino, Frank. “The Shadow of the Sickness Unto Death.” In Breaking Bad and Philosophy, edited by Kevin S. Decker, David R. Koepsell and Robert Arp, 47-62. New York: Palgrave, 2016b.

Scalambrino, Frank. “Social Media and the Cybernetic Mediation of Interpersonal Relations.” In Philosophy of Technology: A Reader, edited by Frank Scalambrino, 123-133. San Diego, CA: Cognella, 2014.

Scalambrino, Frank. “Tales of the Mighty Tautologists?” Social Epistemology Review and Reply Collective 2, no. 1 (2012): 83-97.

Scalambrino, Frank. “Toward Fluid Epistemic Agency: Differentiating the Terms ‘Being,’ ‘Subject,’ ‘Agent,’ ‘Person,’ and ‘Self’.” In Social Epistemology and Epistemic Agency, edited by Patrick Reider, 127-144. London: Rowman & Littlefield International, 2016c.

Scalambrino, Frank. “The Vanishing Subject: Becoming Who You Cybernetically Are.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 197-206. London: Rowman & Littlefield International, 2015b.

Scalambrino, Frank. “What Control? Life at the Limits of Power Expression.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 101-111. London: Rowman & Littlefield International, 2015c.

Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. London: MIT Press, 1965.

[1] From Economic and Philosophical Manuscripts of 1844, translated by M. Milligan (1964).

[2] Cybernetics may be understood as a kind of science of life. For our purposes, it refers to a relation to life such that events in life are understood as capable of being fully quantified and subjected to calculations which would render the eventual outcomes predictable. Thus, proponents of such a relation to life tend to hold that the only limitation on the total cybernetic revelation of life is processing power in regard to the requisite quantification and calculation. Its continued relevance for conversations regarding technology and freedom is that if cybernetics is correct, then human freedom is a kind of illusion which results from the inability to calculate (what cybernetics considers to be) the fully deterministic nature of events. In short, according to cybernetics, it would be as if life were a machine with completely calculable motions (cf. Ashby, 2012; cf. Johnston, 2008; cf. Heidegger, 2008; cf. Wiener, 1965).

[3] For those unaware of the “Sims” reference, The Sims is a video game series in which players “simulate life” by controlling various features of automatons and surveilling their activity. The video game was developed by EA Maxis and published by Electronic Arts.

[4] For a discussion of “efficiency” as indicative of the “Postmodern Condition,” see Lyotard, 1984.

[5] Cf. Fuller, 2015.

[6] I present the distinctions in this way for the sake of brevity and clarity; however, it should not escape Kant scholars that these three distinctions in essence represent a movement along Kant’s three different formulations of the Categorical Imperative, respectively, i.e. the principle of the law of nature, the principle of ends, and the principle of autonomy.

[7] Mill’s “Liberty Principle” suggests you are at liberty to act as you please so long as you are not harming others, i.e. so long as others consent to the treatment to which your actions subject them.

[8] Before even considering other reasons to justify this claim, notice the word “rational” in Kant’s articulation of the pre-disposition to personality.

[9] In Nietzsche’s language it is “to overcome itself.”

[10] This is, of course, why Kant thinks we naturally have a “duty” to be excellent.

[11] Cf. Scalambrino, 2015a.

Author Information: Rebecca Lowery, University of Texas at Dallas,

Lowery, Rebecca. “Our Filtered Lives: The Tension Between Citizenship and Instru-mentality.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 21-34.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:


Image credit: Daniela Munoz-Santos, via flickr

The central problem to be examined here is that the loss of the private self is a threat to the theory of citizenship, which rests upon the idea that a citizen is a person with both a private life and a public life, a distinction inherent in many traditional theories of citizenship. Without the restoration of a potent private sphere in the individual life, citizenship becomes thin and shallow, an unnecessary and antiquated theory, useful only as a convenient tool for organizing the masses.

The private life of the individual in today’s society is now intricately linked with technology. Thus it is impossible to explore the loss of the private self without also looking at the role of technology in the life of the citizen, specifically the sense in which a citizen’s relation to their own existence is technologically-mediated. To such an end, I will have recourse to Martin Heidegger as a thinker who explicates how technology transforms our relation with existence, or to use his term “being.”

Technology gives us an opportunity to relate to the environment, others, and ourselves differently. Rather than experience being as present to us, we have the opportunity for a mediated experience with being because of the power of technology. In itself, technology is a tool; it is a means to an end, not an end in itself, a mediator between person and reality. By allowing technology to mediate our experiences, we are succumbing to what Heidegger will call “ge-stell,”[1] or enframing, with the result that we see everything as instrumental (a sunset is no longer a sunset, but something to be captured by technology in the form of a photograph for the sake of posting). Today, relating to the world instru-mentally is more pervasive and more difficult to resist because of social media, a new phenomenon particular to postmodernity.

In order to see how technology influences citizenship, I am dependent on Hannah Arendt’s characterization of the social, private, public and political realms. One consequence of social media is that the sharing of one’s private life (be it sentiments, activities, or opinions) is acceptable and expected in the public sphere; indeed, it seems that more and more, the public sphere is constituted by private stories. Further, because technology operates through enframing, both the private and public spheres have become spaces of utility. This may be opposed to how Arendt characterizes these spheres, or to how Heidegger juxtaposes enframing with the more primordial poiesis as a mode of relation. It would seem, then, that the private sphere is receding into the public. Regaining a thriving theory of citizenship, one in which participation as a citizen is an honor both for the state and for the private self, means a move away from the functionalism that encapsulates us today.

And finally, the enframing that results in the loss of the public and private boundary is harmful not only to the theory of citizenship, but to our own beings as well. If we can return to a state of relating to the revealing of nature as non-mediated, and privilege the poetic over the technological, then our own beings will return to a more natural state that nurtures and values the private life. If such a change is made, then the new, substantial private life will be prepared to contribute to an equally substantial public sphere.

Instru-mentality, Technological Mediation and Enframing

Heidegger, in “The Question Concerning Technology,” provides the philosophical underpinnings that illuminate the core of the postmodern problem with regard to technology.[2] Taking some of his contributions, the links between the technology of social media, the public and private spheres, and citizenship become clearer.

According to Heidegger, as humans, our natural and primary way of relating to being is through poiesis, which is a “bringing into appearance” or a “bringing-forth”[3] of the essence of a thing, where essence is understood in terms of presencing. For example, an oak tree brings forth its essence, gives its presence to us by revealing itself to us as what it is. Yet, another way to relate to beings is through technology, and when beings are revealed to us through technological mediation, they are revealed in terms of instru-mentality. When beings are revealed in terms of instru-mentality, they are revealed as instruments to be used for some end. When beings are revealed in terms of presencing their essence, they are revealed as ends in themselves, that is, not for some other—instrumental—purpose.

The concept that Heidegger introduces in order to relate modern technology with being is that of “ge-stell,” perhaps best translated as “enframing.” Just as poiesis is a revealing of essence, so too is technology a way of revealing. Heidegger suggests that technology reveals beings as “standing reserve,” meaning that “everywhere, everything is ordered to stand by, to be immediately at hand, indeed to stand there just so that it may be on call for a further ordering.”[4] Consider the oak tree. What are the thoughts that cross the mind in the presence of the tree? If the thought is something along the lines of “I could use that tree to build a table” or “I should take a picture of that and publicly share it on social media so that my friends will know, for sure, that I appreciate nature,” then the tree is being experienced as standing reserve. Rather than appreciating the tree for itself, it is subjected to order based on how useful it is, and what it can be used for.

For Heidegger, enframing “challenges [man] forth, to reveal the real, in the mode of ordering, as standing reserve.”[5] Enframing itself is a summons, and the summons expresses itself every time we relate to reality with technology in mind. That which “challenges forth” is the summons, but it is not an external factor. Rather, the summons is purely internal, somewhat similar in experience to what we refer to as the “call of conscience,” an instinct, a desire, or a need to behave and act in a certain way. Just as an instinct is first present in the thought, and then brought either to action or non-action, so too does enframing involve two steps: the summons is heard to relate to an experience through the medium of technology; the response is either to act on the summons, or to turn away from the summons.

In fact, to act on the summons does not require the physical apparatus of technology. For example, my experience of my life becomes enframed when I think, “I am going to update my Facebook status.” When I hear the internal summons to share my current situation or disposition via social media it shows that I already have a relation to my existence as if it were standing reserve. Becoming habituated to the summons is like exchanging my own mentality for the instru-mentality of technological mediation. I relate to my existence as if it were something to be “posted,” and as a means to whatever ends may come from such “posting.” In other words, as my existence is revealed to me, being filtered through technological mediation, social media orders my understanding to see itself in terms of instru-mentality (cf. Scalambrino, 2015). In this way, acting on the summons completes the presencing of existence in terms of enframing and, done repeatedly, becomes habitual.

One way social media, such as Facebook, establishes control, that is, orders our existence into standing reserve, is through the “inter-face” mechanisms users must learn to successfully navigate the technology (cf. Scalambrino, 2014). A subtle example of this, through enframing, is in terms of the media’s attractive terminology. Status updates are an opportunity for “sharing” with friends and family. Under the guise of human relations, Facebook becomes the mediator. Enframing is always present whenever the technological tools of social media are present. If I am enjoying the company of friends, and yet I have my iPhone always at hand ready to take pictures, respond to the people not really there (cf. Engelland, 2015), etc., then I have opened myself up to answer the summons immediately, and to express the summons—to allow technological mediation to order me—by actually taking the picture, and actually texting someone back. The idea of a social situation that is not mediated by technology is a rare find now. Even if I am physically present with a friend, my phone is still mediating my experience. In fact, the pervasiveness of social media and smartphones coincides with enframing as usual and customary in regard to social interactions.

The habit of living life ordered through technological mediation, and therefore as standing reserve, is what we are up against. With the habituality and the instru-mentality sustained through technological mediation, especially social media, the issue of standing reserve appears even more pressing. Enframing does not mean that a person looks at life as though through a picture frame fit with technological lenses. Rather, enframing is a summons that calls to us, internally, to relate to the world and to each other in terms of standing reserve, or, as instruments, that is, objects to be used in some fashion (cf. Scalambrino, 2015). When the relation of presencing (or “essencing”) occurs through enframing rather than poiesis, then we relate to the being of another person in terms of their utility. Social media allows us to create a representation of ourselves for others to encounter.

Through a cyber dimension, we become distanced from others, but the great guise is that we think we are becoming closer to them. In our private lives, we live with a fear of always imposing on others because we are so used to the non-imposition that is associated with media communication (cf. de Mul, 2015). We sacrifice presence for absence in that we are merely present in terms of instru-mentality, when our relations are technologically-mediated.

Enframed identities compete for us to be them, like Ernst Jünger’s insights regarding the identity of “worker” or the Hollywood-fashioned identity of “celebrity,” and as if possessed by the efficiency of instru-mentality, we work to be our own paparazzi. Of course, there are a multitude of examples that can be drawn from social media that illustrate just how easy it is to live a filtered life, where relation to being becomes mediated. Postmodern technology looks like the publication of the private; it is “the manipulation of man by his own planning.”[6] For example, LinkedIn invites you to “personalize your profile page” with an image that describes you, your interests, etc. Physically present personalities are readily substituted for the chance to control which aspects of your personality you want projected for others to see.

Yet, Facebook is perhaps the prime example of people giving the public updates on their private lives, updates which can then be liked, shared, and commented on. The original meanings of words such as “sharing” and “liking” receive a second definition based on the instru-mentality of social media. In other cases, people submit themselves to technology, and thus lose themselves. The technology is too powerful. One popular hashtag on social media is #besomebody. The idea behind the trend is that you are told you are being somebody for yourself, but really it is directed outward, trying to tell others that you are somebody. Thus, you allow your own being to be hijacked into pure judgment, the judgment of others, and their enframed judgment at that.

The role of enframing has been present for as long as technology has been present. Today, social media is one particular instance of technology, but it is one that makes enframing more and more difficult to escape because we are constantly and physically around the tools that make social media possible: the computer at home and at work, the phone in the pocket or hand, the tablet always within reach. One of the reasons social media deserves consideration is because for the first time in history technology is in the hands of the everyman. We no longer just have the technology of big machines. Now it is big machines in addition to the technology of the masses, the technology of social media.

Gianni Vattimo, in “Postmodernity, Technology, Ontology,” comments on how “Heidegger … remained stuck in a vision of technology dominated by the image of the motor and of mechanical energy.”[7] Nevertheless, though Heidegger wrote about technology in his own historical situation and relates enframing with modern technology (machines powered by motors directed at the control of nature) his ideas are still highly relevant in today’s culture (and perhaps more so than ever before considering that technology permeates all sectors of society).

Thus, there is also a historical motivation behind this paper. To fully appreciate the state we are in today, it is helpful to look at how technology, in our postmodern condition, is one of the reasons why the issues here deserve (perhaps urgent) consideration. The historical evaluation will not be a lengthy one: it is not necessary to trace technology beyond the historical transition from modernity to postmodernity to gain an understanding of why and how technology today has become a (seemingly) essential part of everyday life, and a factor of everyday-ness that is not without consequences.

While Heidegger’s account of how technology alters our relationship with being can be traced back to the origin of technology, in more recent history the shift from modernity to postmodernity provides an explanation for how and why the concept of enframing deserves particular attention today, in our postmodern world. Richard Merelman’s article “Technological Cultures and the Liberal Democracy in the United States”[8] highlights the shift from modern technology to postmodern technology in order to suggest a reason for the change in how citizens view American government and liberal democracy. His distinction between the directions of technology (which serves as the groundwork for his entire essay) is important here, because it reinforces the urgency of the social media and enframing issue.

Merelman points to the modern era, when technology was directed outwards towards the control of nature. The entire culture of technology during that era was translucent; the average citizen was able to understand how technology operated. With the transition to postmodern technology, however, the emphasis of invention became directed at the human person, rather than nature. New technologies geared towards human development and health allowed the former focus on nature to be redirected.

In the modern era, as Merelman writes, “the self acted, technology responded, and nature yielded to the civilized control of society.”[9] Thus Bacon was justified and Descartes was fulfilled. In his New Organon, Bacon’s third axiom reads, “Human knowledge and human power come to the same thing, because ignorance of cause frustrates effect. For Nature is conquered only by obedience; and that which in thought is a cause, is like a rule in practice.”[10] Bacon was the first to introduce the idea of controlling nature, and with it he inaugurated the era of modernity. Extending this transition, Descartes succinctly writes in his Discourse on Method that we must “render ourselves, as it were, masters and possessors of nature. This is desirable not only for the invention of an infinity of devices that would enable one to enjoy trouble-free the fruits of the earth and all the goods found there, but also principally for the maintenance of health …”[11] As Descartes points out, such mastery of nature is made possible by physics. The important point about modern technology is that it was directed outwards.

Furthermore, because the technology was directed outwards, its effects, as Merelman writes, were immediately observable and calculable. We do not see the same possibility for calculation in postmodern technology, because enframing is an internal summons. What is internal to the person is much more complicated than the control of nature. The results of enframing are much more subtle, less clear, less comprehensible, and ultimately less scientific.

Modern technology lasted through World War II, and indeed it continues today. Much of our technology is meant to master nature. However, it has receded. The transition to postmodernism began in post-World War II American culture, and was in full force by the 1960s. Why did modernism end? Perhaps our control of nature, as Merelman suggests, went too far. Why else would the rise of environmentalism occur simultaneously with the shift to postmodernism? We controlled too much of nature, and we drew back. This is one interpretation. But perhaps it is more likely that environmentalism is also the control of nature; it is just cleverly disguised. By focusing less attention on the control of nature, it became possible for technology to be redirected towards the human person. The technology is still external to us, but its effects are now seen in the workings of the person, not just in nature. Soon, we may realize that this too must be reined in. The other cause of the transition to postmodern technology is more natural and obvious: technology and science strive on. Man is not content with the domination of nature; he must also dominate the two extremes sandwiching our earth: the solar system on one hand and the human person on the other.

Now, in the postmodern condition, one of the main purposes of technology is to understand the self. In some ways this has been successful, for example in research regarding the human genome and mental illness. These are two examples that aid in understanding the self (though in no way is this meant to suggest that human persons can be reduced to their mental faculties and their inherited genetic traits). But what does technological enframing look like today? We will see that rather than aiding in understanding the self, we are compromising and sacrificing the self. This is done under the great guise of technology. Postmodern technology promises self-fulfillment, life improvement, self-betterment … but it is, for the most part, a deceit, and the repercussions extend into many areas of life, including that of citizenship.

While I am focused on the so-called communication technology of social media as representative of postmodern technology, I do not think it can be separated from the technology directed towards understanding man’s biology, in other words, medical technology. All of these separations still fall within the technology of information; it is merely expressed differently in specific areas. For example, medical technology allows the illusion of facial reconstruction; communication technology allows for the illusion of the media persona, a not-there identity, entirely fabricated (not only by the fabricator, but also by others who can say what they want about others within this technology). It is interesting that, with regard to medical technology, Descartes was in a way foreshadowing the evolution of postmodernism when he speaks of the “maintenance of health” as one of the benefits of mastering nature.

So far, we have seen that technology, as a source of revealing, reveals being to us as standing reserve. Also examined was the historical perspective: that the transition from modernity to postmodernity, culminating in the social media that permeates our world today, brings the concept of enframing to the forefront due to the extreme accessibility and habitual use of social media. Now, with the previous progress in mind, we will begin to turn our attention to the effects of enframing in the realm of citizenship, which will necessarily mean the effects on our own beings as well. To the extent that enframing is a part of our everyday life, I will argue that enframing is contributing greatly to the loss of the sense of the private self, without which the theory of citizenship cannot remain meaningful to the citizen.

From Enframing to the Efficiency of Postmodern Technology

For Arendt, society, and thus the social realm, is where “private interests assume public significance”[12] which takes the form “of mutual dependence for the sake of life and nothing else…and where the activities connected with sheer survival are permitted to appear in public.”[13] What is necessary for survival? Eating, shelter, and the education of the young become some of the constituents of the social realm. It seems that social media should not be called social media. There is nothing about social media that makes it necessary for survival.

The private on the other hand is a “sphere of intimacy”[14] where the happenings of the private life need not extend into the social realm. It is closed off from the eyes of others, except those personally involved in the sphere. Furthermore, it ought to revolve around real presencing. However, Arendt points out that in the modern era “modern privacy in its most relevant function, to shelter the intimate, was discovered as the opposite not of the political sphere but of the social, to which it is therefore more closely and authentically related.”[15]

For Arendt, it is clear that the private sphere is closely linked with the social (and not the public) sphere. Does this then mean that the social and the private have nothing to do with citizenship since they are thus severed from the political realm? By no means. We shall see that Arendt is drawing a chain, and connects the social sphere with the public sphere. For Arendt, the public and private do not co-exist snugly side-by-side. Rather, the social realm falls between them and knits them together, while at the same time allowing the two spheres to remain distinct. Some private issues (such as education) appear in the social realm, and then the social realm contributes to the public sphere.

Arendt has a specific definition of the private sphere. Shiraz Dossa summarizes Arendt’s conception of the private as such: “that privacy is the natural condition of men is a truism for Arendt: the needs and wants of the human body and the natural functions to which the body is subject are inherently private.”[16] Further, Arendt contrasts the category of the private with that of the public. The public realm is fascinating because it can be either social or political.[17] Traditionally, the public was aligned with the political. However, the larger the community, the more social the public will be. We are therefore losing our sense of the political and the private to the social and the public.

Arendt constitutes the public realm in two ways. The first is “that everything that appears in public can be seen and heard by everybody and has the widest possible publicity.”[18] However, Arendt’s public was not infiltrated with social media as ours is today; thus our public realm has become a filtered reality. In another sense, for Arendt the public “signifies the world itself, in so far as it is common to all of us and distinguished from our privately owned place in it.”[19] It is remarkable that we have private ownership in this world, and equally remarkable that there is a public sphere that balances the private. However, it is not necessary for social media to publicize that the world is common to all; the commonness should be enough in itself and has no need to be enframed.

The other point Arendt makes is that the public realm is receding. Already in her time, the public realm was no longer permanent. The permanency of the public sphere is highly important in Arendt’s philosophy because it means that what we create today is not only for our generation; the public today ought to take the future into consideration as well: “It is the publicity of the public realm which can absorb and make shine through the centuries whatever men may want to save from the natural ruin of time.”[20] The idea is to live in a world, and to create a world, that is strong enough to withstand time.

To overcome time suggests the worthiness of the pursuits engaged in creating something in the public sphere, for then the works escape the condemnation of mortal decay. They participate in and gain access to an eternal realm (though an eternal realm still confined to the physical world). Perhaps Arendt is right: how much of our public world will withstand time? But in another sense, the opposite is happening: all is falling into the public. The private is being subsumed under the public, and the public now has its identity as social, not political. If all that is left is the public sphere, then without the opposition of another sphere there can be no loss of the public: its permanency is parallel to a dictator, ruling with no contestants. Rather than being like a dictator, the public should retain a healthy tension with the private sphere, each of the two acting as a balance for the other.

Presented above are Arendt’s definitions of the social, private, public and political realms, and how each relates to the others. The most significant for present purposes is the distinction between the public and the private. It is clear that Arendt elevates the public realm, whereas I elevate the private realm. She speaks of rising from the private to the public. But I would not describe the move from the private to the public as an ascent; I would rather say that the two are horizontally related, not vertically ordered.

From Postmodern Technology to Boundary Blurring Between the Public and the Private

The enframing that occurs with social media mediates our relation to real presences, and thus it necessarily and directly affects our private and public lives. When our private lives bleed into the public sphere via social media, the public sphere itself becomes a mirror image of mediated personalities. For Arendt, the public sphere means “something that is being seen and heard by others as well as by ourselves.” Granted, social media is seen and heard via technological devices; however, the relation to what appears via technology is once removed from reality: it is a copy, and it is also an illusion.

As social media makes a stronger and more permanent presence in the world, the private realm becomes less and less significant because what used to be strictly present in the private realm can now easily be projected into the public realm. While social media exacerbates enframing, the issue at hand is nothing new. Arendt notes how in modernity “functionalization makes it impossible to perceive any serious gulf between the two realms.”[21] Thus it is function, enframing, and usefulness that blur the boundary between the public and private.

In addition to Arendt, Vattimo argues that “what concerns us in the postmodern age is a transformation of (the notion of) Being as such—and technology, properly conceived, is the key to that transformation.”[22] Indeed, our notion of being is transformed, or at least filtered, by technology because of enframing. Vattimo characterizes enframing as “the totality of the modern rationalization of the world on the basis of science and technology.”[23] Under enframing, it thus becomes impossible to conceive of being as extending beyond it. As we have already seen, the rationalization Vattimo speaks of is the utilitarian nature of enframing, an aspect that coincides with the pragmatism that rose to prominence in the twentieth century.

The very utility that is necessarily attached to pragmatism continues to presence itself today through enframing, made easy by social media. Vattimo clearly states: “I don’t believe that Pragmatist and Neopragmatist arguments are strong enough to support a choice for democracy, nonviolence, and tolerance.” Therefore, he supports an ontological rather than a pragmatic point of view, which, as a philosophical position, prefers “a democratic, tolerant, liberal society to an authoritarian and totalitarian one.”[24] To have a life not dominated by the enframing of technology is more conducive to democratic ideals. While the private and public spheres are necessary in any political system, democracy is our own current situation, which adds a definite relevance to the experience of enframing as opposed to other ways of relating to reality.

Before discussing how the lack of a boundary between the public and private influences the individual life of the citizen, there is a final point to be made about the republic, one that speaks to the very lifeblood of citizenship as a theory. Wilson Carey McWilliams, drawing on Tocqueville, states: “freedom is not the mastery of persons and things; it is being what we are, subject to truth’s authority. No teaching is more necessary if the technological republic is to rediscover its soul.”[25] What we are sure to lose on our current trajectory is the soul of our nation. In an illusory manner, social media is about mastery and the sense of feeling in control. It is the delusion that we can control a relationship in a text message. It is becoming evident that time is a decisive factor with social media: how quickly can an image go viral? How quick is the response to messages?

Because we can control this factor of time while participating in social media, we allow ourselves to fall prey to the illusion of power. In social media, there is no subjection of the self; there is only self-proclamation. When the citizens of our republic have no soul, the soul of the republic suffers. The soul of the republic is only as great as the people who make up the republic. Nietzsche, drawing on Aristotle, asks if greatness of soul is possible.[26] If it is, social media is not helping to nurture greatness, since a soul that relates to being as not exceeding standing reserve loses all sense of mystery. When the souls of a nation suffer, infected with a continually enframed view of being, then the very soul of the nation suffers as well, as its lifeblood is slowly shut off.

Some encourage the publication of the private as a signal of the advancement of mankind in the social realm. If the social realm were the highest, then such would be the case. But there are reasons why I hold the private to be of great significance: people begin their role as citizens in the private realm. Remedying this problem is necessary if we are to remain citizens, if citizenship itself is going to survive. All of this can be traced back to what is going on in the private realm, which determines our identities, which we then carry into the public realm.

A healthy citizen is a citizen who is able to distinguish the private from the public, and to retain a balance between the two. To lose this is to lose the capacity to be a citizen, and thus we face the collapse of the theory of citizenship. This theory exists only insofar as we as individuals uphold it through our own existences as public and private beings. Thus, as we continue to sacrifice our private selves, we slowly chip away at the theory of citizenship. Arendt approaches the same problem, but subordinates the private to the public. For her, a well-lived public sphere trickles down to the private sphere and improves it. Her ordering is necessary if the public sphere is where man truly fulfills his nature (the guiding principle of Civic Republicanism). The conclusion is the same for both of us: an identity as a citizen that involves both the public and the private spheres. We merely diverge on which sphere to privilege.

Furthermore, the boundary between the public and private self is a condition for citizenship in that a strong identity of the private self serves as preparation for a well-constituted public sphere. The enframing by technology that is weakening the boundary between the private and the public thus has implications for the theory of citizenship. If a citizen lacks a foundation in their private life, then that citizen may as well be a foreigner to the system of citizenship in which they are attempting to participate. Just as a foreigner will lack the disposition to give credibility and care to a style of citizenship that is either not their own or that they have no intention of participating in, so too will the citizen who attempts to participate in the public sphere while lacking a hidden and private life. Since the public sphere is made of citizens, the only way to have a thriving citizenship is for citizens to possess a strong personal identity bound to the state in which they reside. That personal identity is established in the private sphere, where the soul learns to relate to reality, and then brings itself to help constitute the reality of the public. A citizen with no private life is like an apple with no core: all façade, with nothing substantial to contribute to permanency and foundation.

Finally, the private realm ought to remain unpublicized for the sake of retaining a unified self, and for the sake of self-reverence and mystery. Once publicized, reverence and mystery become obsolete. Paul A. Cantor and Cardinal Ratzinger offer ideas on what it means for the human person to exist without reverence and without mystery, two aspects of humanity that technology helps make disappear. When we lose our sense of private identity, we lose a part of ourselves. Though we are incomplete beings, we accentuate and magnify our incompleteness through technology. This is entirely voluntary, and entirely unnecessary.

Paul A. Cantor writes: “when man chooses to revere nothing higher than himself, he will indeed find it difficult to control the power of his own technology.”[27] Social media is followed with an attentive reverence, but since social media is a platform for the self, reverencing social media is essentially reverencing one’s media self, and nothing higher. When the media holds us in such a vise grip, it is difficult to remember to revere anything else. Reverence need not pertain to religion or belief systems. It can mean honoring the internal difference of the human person, recognizing out of humility that no representation ever captures the greatness of man. Why would we choose to honor media personas that strive so hard for coherence over the contemplation of actuality?

The reverence Cantor is talking about is similar to what Cardinal Ratzinger evokes when, in Introduction to Christianity, he asks,

But if man, in his origin and at his very roots, is only an object to himself, if he is ‘produced’ and comes off the production line with selected features and accessories, what on earth is man then supposed to think of man? How should he act toward him? What will be man’s attitude toward man when he can no longer find anything of the divine mystery in the other, but only his own know-how?[28]

Our publication of the private dehumanizes us, reduces us, and secludes us. I argue that it is not part of the fabric of reality. We see in the face of the other not their inherent mystery but a shell of their opinions. Our participation further reduces the mystery that we hold within ourselves. If we are truly to have a public sphere that lasts more than a generation, then a “production line” creation is far too weak and fallible, since it is so easily changed and manipulated to match the trends and styles of the day. The weakness of such a system is then compounded when it applies not just to the manufacturing of things, but to the manufacturing of people as well. Not only is the result a loss of beauty in the creation of the public sphere; man is also demoted to robotic expectations, devoid of all “divine mystery.”

Ratzinger’s characterization of the manufactured person, and its implications, parallels Heidegger’s exploration of standing reserve, since standing reserve fully embraces utility and leaves no room for mystery. As previously illustrated, enframing harms the private life and destroys hiddenness. Thus, the experience of reality (including the human person) as standing reserve that occurs through enframing is detrimental to the mystery of the person. Though the mystery of the person is explored in the public sphere, it finds its root and primary expression in the private sphere. But what is the point of divinity, or eternity, when there is no birth of such things in the private sphere, and no sustenance for them in the public sphere?

A Public, Shallow Life

Arendt provides a succinct summation of the problem: “A life spent entirely in public, in the presence of others, becomes, as we would say, shallow.”[29] Our life is constituted by physical presences, both in the public and the private spheres. However, added to the real flesh of the physical world is the prominence of media presences (which are immaterial) that allow the individual to have a constant presence in the public sphere. When these media presences become the main way in which we relate our lives to the world around us, we are looking at a great private loss. Along with the loss of the private self comes the loss of a profound and real theory of citizenship. Thus, if the overarching idea to be preserved is citizenship, then we must search for a way to preserve the hidden life, the private life. It is possible that such a reversal will loosen our entanglement in the apathy that currently constitutes the general perception of citizenship.

If enframing occurs because we respond to the summons that results in standing reserve, then a change in perception, an internal change, will radically derail enframing. An internal change toward external reality means escaping from enframing and (perhaps) returning to what Heidegger calls a more “primordial” relation to the world, one that was possible prior to the power of technology that allowed for enframing in the first place. Ideally, it means seeking the inherent value present in the world, rather than living by standing reserve alone. It means returning to reverence, to soul, and to mystery, as opposed to total revealing in utility and a life that does not extend beyond what is manufactured and functional. Though utility cannot (and need not) be totally eradicated, it need not be privileged above other paths of relation.

Once enframing is held in check, the private realm will not sink so quickly into the public, and the two realms will once again become distinct. The internal opposition to enframing will put a hold on the constant filtration of reality, and thus allow for a wellspring of endurance, a new revealing of truth not based in usefulness, and a return to the hiddenness of the private sphere. The re-established privacy then re-draws the boundary between the public and the private, such that a newly well-established private sphere provides for a stronger sense of self, a better preparation for entering the public sphere. A strength of self that is not lost in the public sphere infuses the soul of citizenship, and thus saves citizenship.


Arendt, Hannah. The Human Condition. 2nd ed. Chicago: The University of Chicago Press, 1958.

Bacon, Francis. The New Organon. Eds. Lisa Jardine and Michael Silverthorne. Cambridge: Cambridge University Press, 2000.

Bambach, Charles. “Heidegger on The Question Concerning Technology and Gelassenheit.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 115-127. London: Rowman & Littlefield International, 2015.

Brenner, Leslie. “Goodbye, avatar.” Dallas Morning News: October 30, 2014.

Cantor, Paul A. “Romanticism and Technology: Satanic Verses and Satanic Mills.” In Technology in the Western Political Tradition, edited by Arthur M. Melzer, Jerry Weinberger, and M. Richard Zinman, 214-28. Ithaca, New York: Cornell University Press, 1993.

de Mul, Elize. “Existential Privacy and the Technological Situation of Boundary Regulation.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 69-79. London: Rowman & Littlefield International, 2015.

Descartes, Rene. Discourse on Method. 3rd ed. Trans. Donald A. Cress. Indianapolis: Hackett Publishing Company, 1998.

Dossa, Shiraz. The Public Realm and the Public Self: The Political Theory of Hannah Arendt. Waterloo: Wilfrid Laurier University Press, 1989.

Eliot, T.S. “Burnt Norton.” In The Complete Poems and Plays, 117-22. New York: Harcourt, Brace & World, 1971.

Engelland, Chad. “Absent to Those Present: The Conflict between Connectivity and Communion.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 167-177. London: Rowman & Littlefield International, 2015.

Heidegger, Martin. The Question Concerning Technology and Other Essays. Trans. William Lovitt. New York: Harper & Row, 1977.

McWilliams, Wilson Carey. “Science and Freedom: America as the Technological Republic.” In Technology in the Western Political Tradition, edited by Arthur M. Melzer, Jerry Weinberger, and M. Richard Zinman, 214-228. Ithaca, New York: Cornell University Press, 1993.

Merelman, Richard M. “Technological Cultures and Liberal Democracy in the United States.” Science, Technology, & Human Values 25, no. 2 (Spring 2000): 167-94.

Ratzinger, Joseph Cardinal. Introduction to Christianity. Trans. J.R. Foster and Michael J. Miller. San Francisco: Ignatius Press, 2000.

Scalambrino, Frank. “Social Media and the Cybernetic Mediation of Interpersonal Relations.” In Philosophy of Technology: A Reader, edited by Frank Scalambrino, 123-133. San Diego, CA: Cognella, 2014.

Scalambrino, Frank. “What Control? Life at the Limits of Power Expression.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 101-111. London: Rowman & Littlefield International, 2015.

Vattimo, Gianni. “Postmodernity, Technology, Ontology.” In Technology in the Western Political Tradition, edited by Arthur M. Melzer, Jerry Weinberger, and M. Richard Zinman, 214-28. Ithaca, New York: Cornell University Press, 1993.

[1] Heidegger, The Question Concerning Technology and Other Essays, 19.

[2] Cf. Bambach, “Heidegger on The Question Concerning Technology and Gelassenheit.”

[3] Heidegger, The Question Concerning Technology and Other Essays, 10.

[4] Ibid., 17.

[5] Ibid., 20.

[6] Ratzinger, Introduction to Christianity, 66.

[7] Vattimo, “Postmodernity, Technology, Ontology,” 223.

[8] Merelman, “Technological Cultures and Liberal Democracy in the United States.”

[9] Merelman, “Technological Cultures and Liberal Democracy in the United States,” 168.

[10] Bacon, The New Organon, 33.

[11] Descartes, Discourse on Method, 35.

[12] Arendt, The Human Condition, 35.

[13] Ibid., 46.

[14] Ibid., 38.

[15] Ibid., 38.

[16] Dossa, The Public Realm and the Public Self: The Political Theory of Hannah Arendt, 59.

[17] Arendt, The Human Condition, 43.

[18] Ibid., 50.

[19] Ibid.

[20] Arendt, The Human Condition, 55.

[21] Ibid., 33.

[22] Vattimo, “Postmodernity, Technology, Ontology,” 214.

[23] Ibid., 222.

[24] Ibid., 226.

[25] McWilliams, “Science and Freedom: America as the Technological Republic,” 108.

[26] Nietzsche, Beyond Good and Evil, 139.

[27] Cantor, “Romanticism and Technology: Satanic Verses and Satanic Mills,” 127.

[28] Ratzinger, Introduction to Christianity, 18.

[29] Arendt, The Human Condition, 71.

Author Information: Zachary Willcutt, Boston College,

Willcutt, Zachary. “The Enframing of the Self as a Problem: Heidegger and Marcel on Modern Technology’s Relation to the Person.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 11-20.


Discourse today often includes phrases such as “my neurons made me do it,” or “my brain does this or that.” Popular opinion increasingly maintains that the mind is identical to the brain. That is, social consciousness views the person as nothing more than a collection of chemicals and cells, resulting in the phenomenon, or perhaps the epiphenomenon, of consciousness, which has nothing incorporeal or interior about it. It is, following the pattern of the things in the world, supposed to be another physical thing. The contemporary collective consciousness knows that the human being is just another wholly material object, subjected to the same laws of causal determination as plants, atoms, and stars.

Following Heidegger, such social knowledge is shown to be the product of the present scientific and technological understanding of the self, which subsumes consciousness, thought, emotions, passions, and choices as objects of empirical, scientific study and uses various instruments to purportedly show that the person is her brain, converting the self into what Marcel calls a problem. This, however, conflicts with the traditional perspective that the human is an immaterial soul. To defend the latter position, this article will deconstruct the claims of modern neuroscience to prevent the de-humanization of individuals that now occurs as a result of these claims.[1]

Enframing the Person as Brain

This understanding of the human person is consequent upon modern science, in particular neuroscience and psychology, which depend wholly on modern technology. There is thus a mutual relation between technology and science, leading to a process of the en-framing of the person as the brain. Martin Heidegger in The Question Concerning Technology sets forth the particulars of this process. He notes that there is a social awareness that modern technology “is based on modern physics as an exact science” (QCT, 14). In general, individuals are aware that their computers, cars, electricity, and other modern items depend on scientific activity. Technology requires as its condition the development of scientific knowledge, in particular physics, without which microwaves and electricity would not be possible. However, just as technology depends on science, science depends on technology, since “modern physics, as experimental, is dependent upon technical apparatus and upon progress in the building of apparatus” (QCT, 14). The work of scientists in general is rooted in technology, which provides the cyclotrons, electron tunneling microscopes, and spaceships that further scientific cognition.

Thus, the relation between science and technology is reciprocal; neither can exist without the other. Such reciprocity is becoming more clearly understood (QCT, 14). Modern technology and modern science mandate one another, each aiding the other, while each stands on the ground set forth by the other. In the context of mind-brain identity debates, this reciprocity appears in the socially shared view of the mind and person as nothing more than the brain, a view commonly held to be scientific knowledge and yet dependent on modern technology such as brain scans. Without technology, contemporary neuroscience and cognitive science could never have developed. These fields require technology, which serves as their condition and has thus led to the furthering of mind-brain identity theories as social cognition.

The public predominance of such theories was described by Heidegger, in what he calls Gestell, usually translated into English as enframing, “the challenging claim which gathers man thither to order the self-revealing as standing-reserve” (QCT, 19). Standing-reserve occurs when “[e]verywhere everything is ordered to stand by, to be immediately at hand, indeed to stand there just so that it may be on call for a further ordering” (QCT, 17). The things of the world, and humans subsumed as part of the world, are arranged solely with respect to their use insofar as they may be employed for continued utilization. Entities and, more importantly, persons are reduced to their mere possibility of being used for some end. That persons have become standing-reserve is demonstrated by “the current talk about human resources” (QCT, 18). Individuals are integrated as parts into the whole of the technological systems dominating life, where individuals have value only insofar as they can be incorporated into the technological whole. Thus, enframing indicates the gathering and ordering of persons and things so that they are revealed as available for use.

In this context, Kisiel interprets enframing as ‘synthetic compositioning,’ indicating “artificiality to the system of positions and posits” (Kisiel 2014, 138). This translation for Gestell fully encompasses the meaning that Heidegger is trying to indicate, that the world and persons are brought together to be used for further instrumentality. That is, Gestell signifies the functionalization of persons and things into the disposability of the standing-reserve, which is ordering for the end of more ordering, with no end beyond that of such ordering. All that is, is reduced to its functionality. To synthetically compose the person as an instrument, he must be understood in terms of his instrumentality, his submission and application to technology, for he has become “a commodity to be stored, shipped, handled, delivered, and disposed of” (Bambach 2015, 10). In this state, humans “become the functionaries of technological positioning, we put ourselves in position to be stockpiled and surveyed” (Ibid.). Technological functionalization de-humanizes the person, whose individuality disappears within the system.

In this way, the person is a mere component of a machine, a machine that in the framework of mind-brain identity debates turns the self into a brain. Humans are synthetically composited as brains, destroying their uniqueness as persons, as they have become only material. The self is eclipsed by the impersonality of matter (cf. Scalambrino, 2015). Having no characteristics particular to a person, the brain belongs to no one, and vanishes into the nothingness of pure matter. For every brain is as exchangeable as any other brain. As has been established, technology requires modern science; thus, humans become objects of science in social consciousness, that they too might be ordered according to the orders of the ordering. This ordering, itself in its essence technological, necessitates that the person be considered as nothing more than his brain. For if he were not just a brain, he would have some aspect that escaped instrumentality; having been reduced to the order of instrumentality, he must therefore be thought of only in his physicality. Human beings are problematized as objects of natural scientific study, the socially common view among many scientists and in much of western society today.

However, the claim that the mind is identical to, or emerges from, the brain should be confronted with great scrutiny, despite its apparent scientific support. For historically, most philosophers, religions, and cultures have maintained a soul or spirit, rather than mere matter, as the ground of personality. That such a view was so widely held dictates that it should be considered seriously.

On Reducing the Human Person

Augustine remarks in the narrative of the Confessions that his mother Monica brought him “to birth, both in her flesh, so that [he] was born into this temporal light, and in her heart, so that [he] might be born into eternal light” (C 9.8.17). Here, Augustine has distinguished between the flesh and the heart, exteriority and interiority, that is, matter and spirit, respectively. For Monica gave birth to him in body, but also by her prayers for his soul gave birth to him in her heart as well. Her heart is in no way physical, being contrasted with the flesh; it is spiritual, which indicates it is not composed of anything material. The affective center of emotions of the human person is of the soul, indicating that the fundamental being of the self cannot be located in the material order.

During the Medieval period, Bonaventure writes, with respect to the journey toward God, that “[w]e must also enter into our soul, which is God’s image, everlasting, spiritual, and within us” (Journey of the Soul into God, 60). The soul, the self, is clearly considered as spiritual and interior, preventing it from being observed as if it were a material object. Humans are spiritual rather than physical beings. To be spiritual means that one thinks, desires, loves, cares, intends, and feels emotions and affects such as anger or joy.

Though in the Critique of Pure Reason he does not involve himself explicitly in debates on the nature of the person, Kant makes clear that the self is not physical. For “although all our cognition commences with experience, yet it does not on that account all arise from experience” (CPR B1). Experience is merely the stimulus for knowledge, rather than the ground; Kant observes that aspects of cognition do not find their source in the experience of the empirical world. Thus, there must be a transcendental, a priori root of knowledge, which indicates the person is not restricted to a mere body.

Turning now to The Mystery of Being, Marcel argues that modernity has reduced the human person to a problem as opposed to a mystery. A problem is that which “I find complete before me, but which I can therefore lay siege to and reduce … A genuine problem is subject to an appropriate technique by the exercise of which it is defined” (MB, 211). Problems are objective, and can be answered by a definite, adequate formula that will yield the requisite result. The human mind is capable of grasping problems as a whole, so that all aspects become visible, enabling the problem to be analyzed into its components. This is the process of the natural sciences. But when what is not genuinely a problem is treated as one, the result is a broken world. The latter consists in the reduction of personal identity to a “few sheets of an official dossier,” which is how “I am forced to grasp myself” (MB, 29).

Persons are compelled to understand themselves as mere instruments in the system set forth by the utilization of technology, where technocrats use science to justify their policies. As such, the human must be reduced to the brain, for if he had a mind, he would not be wholly subservient to the synthetic compositioning. To subject a person to technology mandates that he consider himself nothing more than a collection of neurons. Social consciousness leads individuals to submit to the control of those who produce scientific knowledge that furthers the ordering of society through technology under the reign of science. However, "there is within the human…something that protests against the sort of rape or violation of which he is the victim; and this torn, protesting state of the human creature is enough to justify us in asserting that the world in which we live is a broken world" (MB, 33).

The realm of technology destroys love, emotion, and care. The person is losing himself to his functionalization, becoming a functionalized self that operates according to the deterministic laws of science; questions about his being are to be answered by examining him as if he were merely another object in the world: a tree, a planet, or a mineral. He dwells, or more accurately fails to dwell, "in a mechanized world, a world deprived of passion" (MB, 24). Through rigorous scientific analysis, all that is valuable in the person is detected and employed. The world of humanity is converted into a set of functionalized selves in a techno-scientific system that has as its purpose only its own furtherance; the world is broken, life is extinguished.

By the interposition into social consciousness of a cybernetic or techno-scientific self-understanding, such as the mind-brain identity thesis, "the will is re-directed toward a virtual dimension" (Scalambrino 2015, 5). Taken radically, moving beyond the dangers of virtual reality, Scalambrino is pointing to the general threat posed by the technologically-conditioned reduction of the person to the brain. When humans believe that they are nothing more than piles of chemicals, their wills are oriented toward the possibilities appropriate to a pile of chemicals. They live for, and deliberate in terms of, a pile of chemicals, rather than for themselves qua persons. For them, to be is to be a brain, with no meaning or purpose greater than that of a toad, a snake, or any other animal with a brain.

However, as daily experience testifies, as persons, individuals have a feeling of their being-beyond-the-world. The person is not his body, and so requires a different approach, that of mystery. In this way, Marcel understands the person as mysterious, as that which "transcends every conceivable technique," and "is itself like a river, which flows into the Eternal, as into a sea" (MB, 211, 219). A mystery is infinite; it is a vast depth that cannot be sounded. There is no method to a mystery; it cannot be represented or known as such, for it exceeds the capacity of the mind to represent it (MB, 69). An individual can only move about, may only live, in the mystery, a reservoir of inexhaustible richness. Unlike the problem, the mystery draws the person out of himself, and he is himself a mystery, as exemplified by characteristics of his own being. Marcel notes that "the act of thought itself must be acknowledged a mystery; for it is the function of thought to show that any objective representation, abstract schema, or symbolical process, is inadequate" (MB, 69). Thus, humans shatter the boundaries of the physical, even in their thinking, and so cannot be reduced to the brain.[2]

The Libet Experiment

Among the most famous experiments invoked to reduce the mind to the brain by interpreting free will out of existence is the Libet experiment, which, as interpreted by Benjamin Libet himself, purports to show that human behavior can be accurately predicted from brain events prior to such behavior actually occurring. Specifically, the test asked persons watching a dot moving along a circle to flick their wrists when they "freely wanted to do so" (Libet 2002, 553).[3] After doing this, they reported W, "the clock-time associated with the first awareness of the wish [or urge] to move" (Libet 2002, 553). An increase in readiness potential (RP) began 550 msec before muscle movement. For Libet, "an appearance of conscious will 550 msec…before the act seemed intuitively unlikely" (Libet 2002, 553). Two types of tests were performed on the subjects, one of which yielded two sets of results. In one test, subjects were asked to move spontaneously, in which case they would at times report a "general intention…to act within the next second or so," or have no such planning; in the other, subjects responded to a randomly given stimulus, of whose timing they were not aware (Libet 2002, 554).

When the subject freely acted without planning, there was a buildup of RP, termed RP II; when the subject acted with prior intention, there was also a buildup of RP, identified as RP I. In the trial with the stimulus, there was no buildup of RP. With prior intention, RP I accumulated 1000 msec before muscle movement; in the absence of pre-planning, RP II built up 550 msec prior to muscle movement and 350 msec prior to the wish to act, which itself occurred 200 msec before the act (Libet 2002, 557). As a result of the buildup of RP, in particular RP II, Libet states that the "volitional process is…initiated unconsciously" (Libet 2002, 551).
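The reported figures can be set on a single timeline. The following sketch simply restates, in milliseconds, the values Libet reports above; no new data are introduced:

```python
# Timeline of the events Libet reports, in milliseconds relative to
# muscle movement at t = 0 (negative values precede the act).
RP_I = -1000   # onset of readiness potential with prior planning
RP_II = -550   # onset of readiness potential without pre-planning
W = -200       # first reported awareness of the wish to move
ACT = 0        # the muscle movement itself

# The interval on which Libet's claim of unconscious initiation rests:
# RP II precedes the reported wish W by 350 msec.
print(W - RP_II)  # 350
```

It is this 350 msec gap between RP II and W, together with the 200 msec between W and the act, that the interpretive dispute below concerns.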

A superior perspective on the Libet experiment indicates instead that the brain is subsequent to the mind, such that mental states precede brain states; this is the case for several reasons. First, every instance of the buildup of RP in the brain and of the wrist movement of the body was correlated in some way with the mental state of the desire to move. Readiness potential and wrist movement occurred only in relation to the desire to move, indicating an intrinsic relation between conscious willing and physical (both brain and kinesthetic) action. The buildup of RP was always temporally determined by its relation to the desire to move, so that brain states correspond to mental states. Given that the hands can only be moved by the person through commands sent from the brain, the hands being corporeal, some modification of the brain would be required to move the hand. That this modification exists is not perplexing, and provides nothing against free will.

RP I, the RP observed with respect to prior intention, showed a significant increase only at the time of initial planning reported by subjects, who were aiming to move about a second before muscle movement. The significant increase in RP occurred at the same time plans were reported to be developing, at 1000 msec, indicating that the muscles were being primed for motion by the intentions of the subject. With respect to RP II, that the increase in RP came 350 msec before the urge to move is no indicator of the absence of free volition. For a methodological and substantive issue with the Libet experiment is the definition of the conscious urge to move, which carries a variety of significations, especially in the word 'urge'; Libet also conflated the urge with the will or active wish to move. This suggests that a person is contemplating whether she has an urge to move, a process that could itself lead to a buildup of RP in the brain. She is deliberating whether she has such an urge at this particular instant. For an individual may have an urge to do something, urge understood as the feeling of desire, and yet hesitate to act on that desire. The decision to act on a desire is distinct from the presence of that desire. What the Libet experiment shows most clearly is that humans can feel impulses, upon which they then decide to act. Often, a person eats when his stomach feels empty, an emptiness that can be registered by monitoring the brain. But to say that the person is determined by such emptiness is absurd, as demonstrated by those who are gluttons or who go on hunger strikes. Further, the self might not be hungry at all and yet still indulge in food. Such is the significance of the delay between RP II and W.

As the result of his lack of philosophical comprehension, Libet could not distinguish between the wish to move and the urge or impulse to move. If the urge is understood as 'wish,' the appearance of such a wish is arbitrary, a decision of the will; the mind contemplates enacting this will and wishing to accomplish such a deed. The determination of this wish, only after which one would be conscious of the wish, would of course result in some type of brain activity in order to prepare the body. But this brain activity occurs as the result of the spiritual deliberation requisite for the determination of the will. Thus, the buildup of RP may indicate either that the person is determining his will with respect to the sensation of physical need or impulse, or that he is merely anticipating the becoming of his wish, expecting that he will soon wish it. That is, in order to move at an instant, the body must be primed, causing a buildup of RP, which on this count is not an argument against free will. For apart from the instant of the conscious wish itself, a person is, even without pre-planning, still in a certain sense mentally planning his action. Since one must make the arbitrary choice of suddenly flicking his wrist, an action he knows he will soon perform, his body is able to respond to the consciousness of the impending deliberate mental choice by being primed, in what is observed as readiness potential.

Analyzing Soon and Libet’s Work

Another scientific experiment, conducted by Soon et al., tested the ability to predict the decisions of a subject before the decisions were consciously made, by having the person press a button with either the left or right index finger upon feeling the urge to do so (Soon et al. 2008, 543). The researchers claimed to have predicted subjects' choices 10 seconds prior to those choices; however, that the accuracy of the predictions was a mere sixty percent should give pause before leaping to the conclusion that this experiment is evidence against free will. Sixty percent is a mere ten percentage points better than the fifty percent one would obtain by guessing at random. A random game of probability would provide results not significantly different from those of the Soon experiment. Thus, that the experimenters were successful in sixty percent of cases is only evidence that they are but halfway decent at guessing games.
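To see the force of this comparison, consider a minimal simulation. The 20-trial session length here is hypothetical, chosen only for illustration and not taken from Soon et al.; the point is simply that over short sessions a predictor guessing left or right at random clears the sixty-percent mark quite often:

```python
import random

random.seed(0)

def coin_flip_accuracy(n_trials):
    """Accuracy of a predictor that guesses the button press at random."""
    correct = sum(random.random() < 0.5 for _ in range(n_trials))
    return correct / n_trials

# Over many hypothetical 20-trial sessions, count how often pure
# guessing reaches the sixty-percent figure reported by Soon et al.
runs = 10_000
hits = sum(coin_flip_accuracy(20) >= 0.6 for _ in range(runs))
print(hits / runs)  # roughly 0.25: chance alone clears 60% in about a quarter of such short sessions
```

Whether sixty percent is meaningfully above chance depends, of course, on the total number of trials; the sketch only illustrates why the bare percentage, taken by itself, warrants the hesitation urged above.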

In nearly half of all instances, Soon was unable to predict physical movement on the basis of the buildup of readiness potential. In nearly half of all instances, brain states gave no evidence of future movement. In nearly half of all instances, brain states, examined at the scientific, technological, empirical level, among the most basic physical functions of the human person, were unable to yield a causal account of behavior. To say that brain states caused the movement, that they caused the mental urge, is wholly unwarranted. Causality is necessary and universal, yet here it is neither: the buildup of readiness potential was not necessary for the conscious wish to move, nor was it universally present prior to mental states. No causal link whatsoever has been demonstrated by Soon.

Both Libet and Soon have produced paradigmatic examples of the argument from ignorance: because they can see no other potential cause for the actions performed by their subjects, they conclude that the brain is the source of those actions; unknown brain events, they say, are the source of human actions. But they cannot point to these brain events, for no such events exist that are the causes of action. Having committed themselves to materialism, they cannot think in terms of a spiritual cause that alters matter. Yet such a spiritual cause, readily experienced as the conscious choice of a mind, is the obvious genesis of action and behavior.

This inability to account for even the simplest of motor functions, among the most basic thoughts or commands that a person can issue, implies that more complex choices are all the more impossible to study in this way. Since the command to issue motor controls has its genesis outside of the brain, all other, more complex mental activities must similarly find their ground beyond a mere physical organ.

As previously noted, readiness potential is always related to conscious deliberation, anticipation, and choice; the former is in the brain, while the latter three are mental events. Qua mental events, they are subjective, and never in themselves come under the observation of technological instruments. They are interior, not exterior, in contrast with brain events, which are observable. That brain events follow mental events seems to be shown by the Libet experiment, as no readiness potential occurs without mental events being reported by the subject. The suggestion, then, is that brain events, such as the buildup of readiness potential, are causally dependent on mental states, as there is in fact both a necessary and universal connection.

If anything, the Libet experiment indicates, as the result of the difference between mental and brain events, that mental events cannot be identified with brain events, which is simply further evidence for the traditional theory that interiority precedes exteriority; the spiritual precedes the corporeal. More evidence for this is available from one of the most well-documented medical occurrences, the placebo effect, and its lesser-known twin, the nocebo effect. These in particular show that mental states are in no way reducible to, or causally contingent on, brain states, while brain states do depend on mental states.

The placebo and nocebo effects “are treatment effects, unrelated to the treatment mechanism, which are induced by patients’ expectations of improvement or worsening respectively” (Bartels et al. 2014, 1). That is, the placebo and nocebo effects are fundamentally cognitive, determined by the expectations of individuals. These expectations are mental, not physical, and wholly subjective; yet, despite their subjectivity and existence in the mind rather than the brain, they have an established effect on the outcomes of treatments. Thus, mental states have a direct causal role on the physical world. The mind influences the brain.

For the brain, through which pain is felt, does not know that the person is taking a pharmacologically inactive drug, while the mind does know that the individual is using this drug, causing the placebo or nocebo effect as the result of the anticipation of success, or the absence thereof. Were there not a mind independent of the brain, the placebo and nocebo effects could never occur, since intentionality is characteristic not of the brain but only of the conscious mind. All intentional states are mental, and the mental must therefore be assigned real existence as the result of its causal power. Knowledge and expectation exist in the subject, in the mind alone. The brain does not think, and no brain has ever thought.

On the Status of Mental States

All contemporary neuroscience rests on a fundamental assumption: that mental states do not exist; they are mere figments of the brain, which, qua matter, is reality. All that is, is corporeal matter, and consciousness is an illusion. Thus, to study the person, the scientist should study the physical world. The subjective states of the individual are ultimately nothing, and should not be trusted in determining the scientific view of the self. Modern science, with its emphasis on the empirical and observable, must as a methodological matter employ this assumption, for were it not to, it would be compelled to admit that there exists that which is beyond its capacity to know.

Yet this assumption, that all is matter and mental states are an illusion, terminates in a reductio ad absurdum. Beginning with the proposition that mental states are an illusion, let this be applied to mental states regarding external, physical objects. Take, for instance, the physical object Saturn; Saturn is known qua physical object. It is seen, a process that according to neuroscience occurs by various neurons firing together in the brain. Thus, Saturn only exists on the basis of its existing in the brain, because we only know of Saturn by the seeing that occurs in the brain, which by analogy holds true for all physical objects, including brains. The physical does not need to exist outside of the mind; it could very well be a mere construct of the brain.

The fact that Saturn is seen by multiple persons is irrelevant, for this only means that persons opine that they share the seeing of the same Saturn. The possibility still exists that each individual sees a different Saturn, with humans merely being stimulated with the sensation of apparently the same object. The physical world is known only through mental states; thus, on this assumption, the physical world is an illusion. If neuroscientists want to say that emotions are not real merely because they occur in the brain, they must likewise say that Saturn is not real, as it too exists only in the brain. All that is, whether mental or physical, becomes an illusion, including the brain itself, as brains are known only by technological observation of them. Brains are known by brains. But Saturn does exist apart from the brain; therefore, mental states also have real existence not reducible to the brain.

All subjective mental states must consequently be granted actual reality, and must be considered to have the same level of reality as the corporeal world. This necessitates with the force of law that the brain not be hypothesized as the source of mental activity. Any attempt to reduce mental states to brain states results in the absurdity that the whole of existence, including the spatial, becomes an illusion, for matter exists only as a representation to the conscious subject.

Following from these problems associated with the synthetic compositioning of the self as the brain, the person is not reducible thereto, even by modern technology. Humans should not be taken as objects of technological and scientific study, but rather approached in accord with their own unique way of being, one that respects their unique status as humans. Man must not be reduced to a material brain by instrumentality, but rather acknowledged as the center of the world of his own first-person subjectivity. The reductionism of neuroscience must be overcome to keep humanity human. Marcel in Creative Fidelity reflects this task of rejecting de-humanization: to "strengthen the fierce resolution of those who reject the consummation by themselves or others of man's denial of man, or…the denial of the more than human by the less than human" (CF, 10).


References

Augustine. Confessions. Translated by John K. Ryan. New York: Image Classics, 2014.

Bambach, Charles. “Heidegger on The Question Concerning Technology and Gelassenheit.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 115-126. Lanham, MD: Rowman and Littlefield, 2015.

Bartels, Danielle et al. “Role of Conditioning and Verbal Suggestion in Placebo and Nocebo Effects on Itch.” PLoS ONE 9 (2014): 1-9.

Bonaventure. Bonaventure: The Soul’s Journey into God, The Tree of Life, The Life of Saint Francis. Translated by Ewert Cousins. New York: Paulist Press, 1978.

Heidegger, Martin. The Question Concerning Technology and Other Essays. Translated by William Lovitt. London: Harper Perennial, 2013.

Kant, Immanuel. The Critique of Pure Reason. Translated by Paul Guyer and Allen Wood. Cambridge: Cambridge University Press, 1999.

Kisiel, Theodore. “Heidegger and Our Twenty-first Century Experience of Ge-Stell.” Research Resources Paper 35 (2014): 137-151.

Libet, Benjamin. “Do We Have Free Will?” In The Oxford Handbook of Free Will, edited by Robert Kane, 551-564. Oxford: Oxford University Press, 2002.

Marcel, Gabriel. Creative Fidelity. Translated by Robert Rosthal. New York: Fordham University Press, 2002.

Marcel, Gabriel. The Mystery of Being, Vol. I: Reflection and Mystery. Translated by G.S. Fraser. South Bend: St. Augustine’s Press, 1950.

Scalambrino, Frank. “The Vanishing Subject: Becoming who You Cybernetically Are.” In Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation, edited by Frank Scalambrino, 197-206. Lanham, MD: Rowman and Littlefield, 2015.

Soon, Chun et al. “Unconscious Determinants of Free Decisions in the Human Brain.” Nature Neuroscience 11, no. 5 (2008): 543-545.

[1]. This is neuroscience in the reductionist sense that seeks to state that the mind is an illusion; it is true that there are neuroscientists who reject reductionism, and they are not those against whom this essay is articulated, insofar as they recognize the independence of the mind from the brain.

[2]. This first requires the deconstruction of the functionalized and de-humanized self to restore the mystery about the person.

[3]. Libet, “Do We Have Free Will?”

Author Information: Nick Bostrom, University of Oxford,

Bostrom, Nick. “In Defense of Posthuman Dignity.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 1-10.

The PDF of the article gives specific page numbers. Shortlink:

Reprint from Bostrom, Nick. “In Defence of Posthuman Dignity.” Bioethics 19, no. 3 (2005): 202-214.[1]

Please refer to:


Image credit: RHiNO NEAL, via flickr


Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.

Transhumanists vs. Bioconservatives

Transhumanism is a loosely defined movement that has developed gradually over the past two decades, and can be viewed as an outgrowth of secular humanism and the Enlightenment. It holds that current human nature is improvable through the use of applied science and other rational methods, which may make it possible to increase human health-span, extend our intellectual and physical capacities, and give us increased control over our own mental states and moods.[2] Technologies of concern include not only current ones, like genetic engineering and information technology, but also anticipated future developments such as fully immersive virtual reality, machine-phase nanotechnology, and artificial intelligence.

Transhumanists promote the view that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves (morphological freedom), and that parents should normally get to decide which reproductive technologies to use when having children (reproductive freedom).[3] Transhumanists believe that, while there are hazards that need to be identified and avoided, human enhancement technologies will offer enormous potential for deeply valuable and humanly beneficial uses. Ultimately, it is possible that such enhancements may make us, or our descendants, “posthuman,” beings who may have indefinite health-spans, much greater intellectual faculties than any current human being—and perhaps entirely new sensibilities or modalities—as well as the ability to control their own emotions. The wisest approach vis-à-vis these prospects, argue transhumanists, is to embrace technological progress, while strongly defending human rights and individual choice, and taking action specifically against concrete threats, such as military or terrorist abuse of bioweapons, and against unwanted environmental or social side-effects.

In opposition to this transhumanist view stands a bioconservative camp that argues against the use of technology to modify human nature. Prominent bioconservative writers include Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben. One of the central concerns of the bioconservatives is that human enhancement technologies might be “dehumanizing.” The worry, which has been variously expressed, is that these technologies might undermine our human dignity or inadvertently erode something that is deeply valuable about being human but that is difficult to put into words or to factor into a cost-benefit analysis. In some cases (e.g. Leon Kass) the unease seems to derive from religious or crypto-religious sentiments whereas for others (e.g. Francis Fukuyama) it stems from secular grounds. The best approach, these bioconservatives argue, is to implement global bans on swathes of promising human enhancement technologies to forestall a slide down a slippery slope towards an ultimately debased posthuman state.

While any brief description necessarily skirts significant nuances that differentiate writers within the two camps, I believe the above characterization nevertheless highlights a principal fault line in one of the great debates of our times: how we should look at the future of humankind and whether we should attempt to use technology to make ourselves “more than human.” This paper will distinguish two common fears about the posthuman and argue that they are partly unfounded and that, to the extent that they correspond to real risks, there are better responses than trying to implement broad bans on technology. I will make some remarks on the concept of dignity, which bioconservatives believe to be imperiled by coming human enhancement technologies, and suggest that we need to recognize that not only humans in their current form, but posthumans too could have dignity.

Two Fears About the Posthuman

The prospect of posthumanity is feared for at least two reasons. One is that the state of being posthuman might in itself be degrading, so that by becoming posthuman we might be harming ourselves. Another is that posthumans might pose a threat to “ordinary” humans. (I shall set aside a third possible reason, that the development of posthumans might offend some supernatural being.)

The most prominent bioethicist to focus on the first fear is Leon Kass:

Most of the given bestowals of nature have their given species-specified natures: they are each and all of a given sort. Cockroaches and humans are equally bestowed but differently natured. To turn a man into a cockroach—as we don’t need Kafka to show us—would be dehumanizing. To try to turn a man into more than a man might be so as well. We need more than generalized appreciation for nature’s gifts. We need a particular regard and respect for the special gift that is our own given nature.[4]

Transhumanists counter that nature’s gifts are sometimes poisoned and should not always be accepted. Cancer, malaria, dementia, aging, starvation, unnecessary suffering, cognitive shortcomings are all among the presents that we wisely refuse. Our own species-specified natures are a rich source of much of the thoroughly unrespectable and unacceptable—susceptibility for disease, murder, rape, genocide, cheating, torture, racism. The horrors of nature in general and of our own nature in particular are so well documented[5] that it is astonishing that somebody as distinguished as Leon Kass should still in this day and age be tempted to rely on the natural as a guide to what is desirable or normatively right. We should be grateful that our ancestors were not swept away by the Kassian sentiment, or we would still be picking lice off each other’s backs. Rather than deferring to the natural order, transhumanists maintain that we can legitimately reform ourselves and our natures in accordance with humane values and personal aspirations.

If one rejects nature as a general criterion of the good, as most thoughtful people nowadays do, one can of course still acknowledge that particular ways of modifying human nature would be debasing. Not all change is progress. Not even all well-intended technological intervention in human nature would be on balance beneficial. Kass, however, goes far beyond these truisms when he declares that utter dehumanization lies in store for us as the inevitable result of our obtaining technical mastery over our own nature:

The final technical conquest of his own nature would almost certainly leave mankind utterly enfeebled. This form of mastery would be identical with utter dehumanization. Read Huxley’s Brave New World, read C. S. Lewis’s Abolition of Man, read Nietzsche’s account of the last man, and then read the newspapers. Homogenization, mediocrity, pacification, drug-induced contentment, debasement of taste, souls without loves and longings—these are the inevitable results of making the essence of human nature the last project of technical mastery. In his moment of triumph, Promethean man will become a contented cow.[6]

The fictional inhabitants of Brave New World, to pick the best-known of Kass’s examples, are admittedly short on dignity (in at least one sense of the word). But the claim that this is the inevitable consequence of our obtaining technological mastery over human nature is exceedingly pessimistic—and unsupported—if understood as a futuristic prediction, and false if construed as a claim about metaphysical necessity.

There are many things wrong with the fictional society that Huxley described. It is static, totalitarian, caste-bound; its culture is a wasteland. The brave new worlders themselves are a dehumanized and undignified lot. Yet posthumans they are not. Their capacities are not super-human but in many respects substantially inferior to our own. Their life expectancy and physique are quite normal, but their intellectual, emotional, moral, and spiritual faculties are stunted. The majority of the brave new worlders have various degrees of engineered mental retardation. And everyone, save the ten world controllers (along with a miscellany of primitives and social outcasts who are confined to fenced preservations or isolated islands), is barred or discouraged from developing individuality, independent thinking, and initiative, and is conditioned not to desire these traits in the first place. Brave New World is not a tale of human enhancement gone amok but a tragedy of technology and social engineering being used to deliberately cripple moral and intellectual capacities—the exact antithesis of the transhumanist proposal.

Transhumanists argue that the best way to avoid a Brave New World is by vigorously defending morphological and reproductive freedoms against any would-be world controllers. History has shown the dangers in letting governments curtail these freedoms. The last century’s government-sponsored coercive eugenics programs, once favored by both the left and the right, have been thoroughly discredited. Because people are likely to differ profoundly in their attitudes towards human enhancement technologies, it is crucial that no one solution be imposed on everyone from above but that individuals get to consult their own consciences as to what is right for themselves and their families. Information, public debate, and education are the appropriate means by which to encourage others to make wise choices, not a global ban on a broad range of potentially beneficial medical and other enhancement options.

The second fear is that there might be an eruption of violence between unaugmented humans and posthumans. George Annas, Lori Andrews, and Rosario Isasi have argued that we should view human cloning and all inheritable genetic modifications as “crimes against humanity” in order to reduce the probability that posthuman species will arise, on grounds that such a species would pose an existential threat to the old human species:

The new species, or “posthuman,” will likely view the old “normal” humans as inferior, even savages, and fit for slavery or slaughter. The normals, on the other hand, may see the posthumans as a threat and if they can, may engage in a preemptive strike by killing the posthumans before they themselves are killed or enslaved by them. It is ultimately this predictable potential for genocide that makes species-altering experiments potential weapons of mass destruction, and makes the unaccountable genetic engineer a potential bioterrorist.[7]

There is no denying that bioterrorism and unaccountable genetic engineers developing increasingly potent weapons of mass destruction pose a serious threat to our civilization. But using the rhetoric of bioterrorism and weapons of mass destruction to cast aspersions on therapeutic uses of biotechnology to improve health, longevity and other human capacities is unhelpful. The issues are quite distinct. Reasonable people can be in favor of strict regulation of bioweapons while promoting beneficial medical uses of genetics and other human enhancement technologies, including inheritable and “species-altering” modifications.

Human society is always at risk of some group deciding to view another group of humans as fit for slavery or slaughter. To counteract such tendencies, modern societies have created laws and institutions, and endowed them with powers of enforcement, that act to prevent groups of citizens from enslaving or slaughtering one another. The efficacy of these institutions does not depend on all citizens having equal capacities. Modern, peaceful societies can have large numbers of people with diminished physical or mental capacities along with many other people who may be exceptionally physically strong or healthy or intellectually talented in various ways. Adding people with technologically enhanced capacities to this already broad distribution of ability would not need to rip society apart or trigger genocide or enslavement.

The assumption that inheritable genetic modifications or other human enhancement technologies would lead to two distinct and separate species should also be questioned. It seems much more likely that there would be a continuum of differently modified or enhanced individuals, which would overlap with the continuum of as-yet unenhanced humans. The scenario in which “the enhanced” form a pact and then attack “the naturals” makes for exciting science fiction but is not necessarily the most plausible outcome. Even today, the segment containing the tallest ninety percent of the population could, in principle, get together and kill or enslave the shorter decile. That this does not happen suggests that a well-organized society can hold together even if it contains many possible coalitions of people sharing some attribute such that, if they ganged up, they would be capable of exterminating the rest.

To note that the extreme case of a war between humans and posthumans is not the most likely scenario is not to say that there are no legitimate social concerns about the steps that may take us closer to posthumanity. Inequity, discrimination, and stigmatization—against, or on behalf of, modified people—could become serious issues. Transhumanists would argue that these (potential) social problems call for social remedies. One example of how contemporary technology can change important aspects of someone’s identity is sex reassignment. The experiences of transsexuals show that Western culture still has work to do in becoming more accepting of diversity. This is a task that we can begin to tackle today by fostering a climate of tolerance and acceptance towards those who are different from ourselves. Painting alarmist pictures of the threat from future technologically modified people, or hurling preemptive condemnations of their necessarily debased nature, is not the best way to go about it.

What about the hypothetical case in which someone intends to create, or turn themselves into, a being of so radically enhanced capacities that a single one or a small group of such individuals would be capable of taking over the planet? This is clearly not a situation that is likely to arise in the imminent future, but one can imagine that, perhaps in a few decades, the prospective creation of superintelligent machines could raise this kind of concern. The would-be creator of a new life form with such surpassing capabilities would have an obligation to ensure that the proposed being is free from psychopathic tendencies and, more generally, that it has humane inclinations. For example, a future artificial intelligence programmer should be required to make a strong case that launching a purportedly human-friendly superintelligence would be safer than the alternative. Again, however, this (currently) science-fiction scenario must be clearly distinguished from our present situation and our more immediate concern with taking effective steps towards incrementally improving human capacities and health-span.

Is Human Dignity Incompatible with Posthuman Dignity?

Human dignity is sometimes invoked as a polemical substitute for clear ideas. This is not to say that there are no important moral issues relating to dignity, but it does mean that there is a need to define what one has in mind when one uses the term. Here, we shall consider two different senses of dignity:

  1. Dignity as moral status, in particular the inalienable right to be treated with a basic level of respect.
  2. Dignity as the quality of being worthy or honorable; worthiness, worth, nobleness, excellence (The Oxford English Dictionary[8]).

On both these definitions, dignity is something that a posthuman could possess. Francis Fukuyama, however, seems to deny this and warns that giving up on the idea that dignity is unique to human beings—defined as those possessing a mysterious essential human quality he calls “Factor X” [9]—would invite disaster:

Denial of the concept of human dignity—that is, of the idea that there is something unique about the human race that entitles every member of the species to a higher moral status than the rest of the natural world—leads us down a very perilous path. We may be compelled ultimately to take this path, but we should do so only with our eyes open. Nietzsche is a much better guide to what lies down that road than the legions of bioethicists and casual academic Darwinians that today are prone to give us moral advice on this subject.[10]

What appears to worry Fukuyama is that introducing new kinds of enhanced person into the world might cause some individuals (perhaps infants, or the mentally handicapped, or unenhanced humans in general) to lose some of the moral status that they currently possess, and that a fundamental precondition of liberal democracy, the principle of equal dignity for all, would be destroyed.

The underlying intuition seems to be that instead of the famed “expanding moral circle,” what we have is more like an oval, whose shape we can change but whose area must remain constant. Thankfully, this purported conservation law of moral recognition lacks empirical support. The set of individuals accorded full moral status by Western societies has actually increased, to include men without property or noble descent, women, and non-white peoples. It would seem feasible to extend this set further to include future posthumans, or, for that matter, some of the higher primates or human-animal chimaeras, should such be created—and to do so without causing any compensating shrinkage in another direction. (The moral status of problematic borderline cases, such as fetuses or late-stage Alzheimer patients, or the brain dead, should perhaps be decided separately from the issue of technologically modified humans or novel artificial life forms.) Our own role in this process need not be that of passive bystanders. We can work to create more inclusive social structures that accord appropriate moral recognition and legal rights to all who need them, be they male or female, black or white, flesh or silicon.

Dignity in the second sense, as referring to a special excellence or moral worthiness, is something that current human beings possess to widely differing degrees. Some excel far more than others do. Some are morally admirable; others are base and vicious. There is no reason for supposing that posthuman beings could not also have dignity in this second sense. They may even be able to attain higher levels of moral and other excellence than any of us humans. The fictional brave new worlders, who were subhuman rather than posthuman, would have scored low on this kind of dignity, and partly for that reason they would be awful role models for us to emulate. But surely we can create more uplifting and appealing visions of what we may aspire to become. There may be some who would transform themselves into degraded posthumans—but then some people today do not live very worthy human lives. This is regrettable, but the fact that some people make bad choices is not generally a sufficient ground for rescinding people’s right to choose. And legitimate countermeasures are available: education, encouragement, persuasion, social and cultural reform. These, not a blanket prohibition of all posthuman ways of being, are the measures to which those bothered by the prospect of debased posthumans should resort. A liberal democracy should normally permit incursions into morphological and reproductive freedoms only in cases where somebody is abusing these freedoms to harm another person.

The principle that parents should have broad discretion to decide on genetic enhancements for their children has been attacked on grounds that this form of reproductive freedom would constitute a kind of parental tyranny that would undermine the child’s dignity and capacity for autonomous choice; for instance, by Hans Jonas:

Technologically mastered nature now again includes man who (up to now) had, in technology, set himself against it as its master… But whose power is this—and over whom or over what? Obviously the power of those living today over those coming after them, who will be the defenseless other side of prior choices made by the planners of today. The other side of the power of today is the future bondage of the living to the dead.[11]

Jonas is relying on the assumption that our descendants, who will presumably be far more technologically advanced than we are, would nevertheless be defenseless against our machinations to expand their capacities. This is almost certainly incorrect. If, for some inscrutable reason, they decided that they would prefer to be less intelligent, less healthy, and lead shorter lives, they would not lack the means to achieve these objectives and frustrate our designs.

In any case, if the alternative to parental choice in determining the basic capacities of new people is entrusting the child’s welfare to nature, that is blind chance, then the decision should be easy. Had Mother Nature been a real parent, she would have been in jail for child abuse and murder. And transhumanists can accept, of course, that just as society may in exceptional circumstances override parental autonomy, such as in cases of neglect or abuse, so too may society impose regulations to protect the child-to-be from genuinely harmful genetic interventions—but not because they represent choice rather than chance.

Jürgen Habermas, in a recent work, echoes Jonas’ concern and worries that even the mere knowledge of having been intentionally made by another could have ruinous consequences:

We cannot rule out that knowledge of one’s own hereditary features as programmed may prove to restrict the choice of an individual’s life, and to undermine the essentially symmetrical relations between free and equal human beings.[12]

A transhumanist could reply that it would be a mistake for an individual to believe that she has no choice over her own life just because some (or all) of her genes were selected by her parents. She would, in fact, have as much choice as if her genetic constitution had been selected by chance. It could even be that she would enjoy significantly more choice and autonomy in her life, if the modifications were such as to expand her basic capability set. Being healthy, smarter, having a wide range of talents, or possessing greater powers of self-control are blessings that tend to open more life paths than they block.

Even if there were a possibility that some genetically modified individuals might fail to grasp these points and thus might feel oppressed by their knowledge of their origin, that would be a risk to be weighed against the risks incurred by having an unmodified genome, risks that can be extremely grave. If safe and effective alternatives were available, it would be irresponsible to risk starting someone off in life with the misfortune of congenitally diminished basic capacities or an elevated susceptibility to disease.

Why We Need Posthuman Dignity

Similarly ominous forecasts were made in the seventies about the severe psychological damage that children conceived through in vitro fertilization would suffer upon learning that they originated from a test tube—a prediction that turned out to be entirely false. It is hard to avoid the impression that some bias or philosophical prejudice is responsible for the readiness with which many bioconservatives seize on even the flimsiest of empirical justifications for banning human enhancement technologies of certain types but not others. Suppose it turned out that playing Mozart to pregnant mothers improved the child’s subsequent musical talent. Nobody would argue for a ban on Mozart-in-the-womb on grounds that we cannot rule out that some psychological woe might befall the child once she discovers that her facility with the violin had been prenatally “programmed” by her parents. Yet when it comes to e.g. genetic enhancements, arguments that are not so very different from this parody are often put forward as weighty if not conclusive objections by eminent bioconservative writers. To transhumanists, this looks like doublethink. How can it be that to bioconservatives almost any anticipated downside, predicted perhaps on the basis of the shakiest pop-psychological theory, so readily achieves the status of deep philosophical insight and knockdown objection against the transhumanist project?

Perhaps a part of the answer can be found in the different attitudes that transhumanists and bioconservatives have towards posthuman dignity. Bioconservatives tend to deny posthuman dignity and view posthumanity as a threat to human dignity. They are therefore tempted to look for ways to denigrate interventions that are thought to be pointing in the direction of more radical future modifications that may eventually lead to the emergence of those detestable posthumans. But unless this fundamental opposition to the posthuman is openly declared as a premiss of their argument, this then forces them to use a double standard of assessment whenever particular cases are considered in isolation: for example, one standard for germ-line genetic interventions and another for improvements in maternal nutrition (an intervention presumably not seen as heralding a posthuman era).

Transhumanists, by contrast, see human and posthuman dignity as compatible and complementary. They insist that dignity, in its modern sense, consists in what we are and what we have the potential to become, not in our pedigree or our causal origin. What we are is not a function solely of our DNA but also of our technological and social context. Human nature in this broader sense is dynamic, partially human-made, and improvable. Our current extended phenotypes (and the lives that we lead) are markedly different from those of our hunter-gatherer ancestors. We read and write; we wear clothes; we live in cities; we earn money and buy food from the supermarket; we call people on the telephone, watch television, read newspapers, drive cars, file taxes, vote in national elections; women give birth in hospitals; life-expectancy is three times longer than in the Pleistocene; we know that the Earth is round and that stars are large gas clouds lit from inside by nuclear fusion, and that the universe is approximately 13.7 billion years old and enormously big. In the eyes of a hunter-gatherer, we might already appear “posthuman.” Yet these radical extensions of human capabilities—some of them biological, others external—have not divested us of moral status or dehumanized us in the sense of making us generally unworthy and base. Similarly, should we or our descendants one day succeed in becoming what relative to current standards we may refer to as posthuman, this need not entail a loss of dignity either.

From the transhumanist standpoint, there is no need to behave as if there were a deep moral difference between technological and other means of enhancing human lives. By defending posthuman dignity we promote a more inclusive and humane ethics, one that will embrace future technologically modified people as well as humans of the contemporary kind. We also remove a distortive double standard from the field of our moral vision, allowing us to perceive more clearly the opportunities that exist for further human progress.[13]


Annas, George J., Lori B. Andrews and Rosario M. Isasi. “Protecting the Endangered Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations.” American Journal of Law and Medicine 28, no. 2&3 (2002): 162.

Bostrom, Nick. “Human Genetic Enhancements: A Transhumanist Perspective.” Journal of Value Inquiry 37, no. 4 (2003): 493-506.

Bostrom, Nick, et al. “The Transhumanist FAQ, v. 2.1.” World Transhumanist Association, 2003.

Fukuyama, Francis. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus and Giroux, 2002.

Glover, Jonathan. Humanity: A Moral History of the Twentieth Century. New Haven: Yale University Press, 2001.

Habermas, Jürgen. The Future of Human Nature. Oxford: Blackwell, 2003.

Jonas, Hans. Technik, Medizin und Ethik: Zur Praxis des Prinzips Verantwortung. Frankfurt am Main: Suhrkamp, 1985.

Kass, Leon R. Life, Liberty, and Defense of Dignity: The Challenge for Bioethics. San Francisco: Encounter Books, 2002.

Kass, Leon R. “Ageless Bodies, Happy Souls: Biotechnology and the Pursuit of Perfection.” The New Atlantis (2003): 1.

Simpson, John and Edmund Weiner, eds. The Oxford English Dictionary, 2nd ed. Oxford: Oxford University Press, 1989.

[1]. Our thanks to Professor Bostrom for his permission to re-print this article in our Special Issue of the SERRC.

[2]. Bostrom et al., “The Transhumanist FAQ, v. 2.1.”

[3]. Bostrom, “Human Genetic Enhancements: A Transhumanist Perspective.”

[4]. Kass, “Ageless Bodies, Happy Souls: Biotechnology and the Pursuit of Perfection,” 1.

[5]. See e.g. Glover, Humanity: A Moral History of the Twentieth Century.

[6]. Kass, Life, Liberty, and Defense of Dignity: The Challenge for Bioethics, 48.

[7]. Annas, Andrews and Isasi, “Protecting the Endangered Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations,” 162.

[8]. Simpson, John and Edmund Weiner, eds., The Oxford English Dictionary, 2nd ed.

[9]. Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution, 149.

[10]. Fukuyama, op. cit. note 9, 160.

[11]. Jonas, Hans, Technik, Medizin und Ethik: Zur Praxis des Prinzips Verantwortung.

[12]. Habermas, The Future of Human Nature, 23.

[13]. For their comments I am grateful to Heather Bradshaw, John Brooke, Aubrey de Grey, Robin Hanson, Matthew Liao, Julian Savulescu, Eliezer Yudkowsky, Nick Zangwill, and to the audiences at the Ian Ramsey Center seminar of June 6th in Oxford, the Transvision 2003 conference at Yale, and the 2003 European Science Foundation Workshop on Science and Human Values, where earlier versions of this paper were presented, and to two anonymous referees.

Author Information: Sebastian Dieguez,i Gérald Bronner,ii Véronique Campion-Vincent,iii Sylvain Delouvée,iv Nicolas Gauvrit,v Anthony Lantian,vi Pascal Wagner-Eggervii

i Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Switzerland
ii Laboratoire Interdisciplinaire des Énergies de Demain, Paris Diderot, France
iii Retired sociologist, formerly Maison des Sciences de l’Homme, Paris, France
iv Psychology Laboratory: Cognition, Behaviour, Communication, Rennes 2 University, France
v Human and Artificial Cognition Laboratory, University of Paris-Saint-Denis, France
vi Laboratoire Parisien de Psychologie Sociale, Université Paris Nanterre, France
vii Department of Psychology, University of Fribourg, Switzerland

Dieguez, Sebastian, Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Nicolas Gauvrit, Anthony Lantian & Pascal Wagner-Egger. “’They’ Respond: Comments on Basham et al.’s ‘Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone’.” [1] Social Epistemology Review and Reply Collective 5, no. 12 (2016): 20-39.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:


Image credit: MoxyJane@Spiral Bound Images, via flickr


Basham et al. (2016) fear that “they want to cure everyone” of conspiracy theories. Here, “they” respond and try to put this concern to rest. The commentary “they” published in the French newspaper Le Monde, with which Basham et al. take issue, cautioned against governmental initiatives to counter conspiracy theories among youths and advocated for more research on the topic. Basham et al. call instead for more conspiracy theories and less “conspiracy theory panic”. “They” attempt to explain this fundamental disagreement by “just asking” some questions, first about the definition of conspiracy theories and the conspiracist mindset, second about the possibility that Basham et al. hold an inflated view of the investigative and reasoning talents of conspiracy theorists, third about whether “they” are conspirators or Basham et al. conspiracy theorists, and fourth about the notion that conspiracy theorists should be “cured”. In the process, “they” reiterate the importance of empirical sociopsychological research to resolve issues related to the conspiracist mindset and highlight a number of problems in Basham et al.’s approach to conspiracy theories. “They” conclude on a bright note: careful and rigorous research, in the end, might help everybody become better conspiracy theorists, perhaps even Basham et al.

“How can we account for our present situation unless we believe that men high in this government are concerting to deliver us to disaster?”—Joe McCarthy, U.S. Senate, 1951.

“Who sits in the taller chairs? Do They have names?”—Thomas Pynchon, Gravity’s Rainbow, 1973, 202.

As the authors of Le Monde’s article[2] targeted in Basham et al.’s (2016) comment, we are delighted to be referred to by the collective “they” in their title. Who is “they”? Now we finally know, it’s us! In the spirit of this Pynchonesque homage, we thus henceforth proudly adopt this qualifier. Indeed, what could be more appropriate, in a scholarly discussion of conspiracy theories, than labelling a disagreeing party as “they”? It makes things much clearer. So “they” thank Basham et al. for permitting “them” to clarify some points of interest and address a few potential misunderstandings.

The crux of the matter is an open commentary “they” wrote for Le Monde (Bronner et al. 2016), in which “they” took issue with French governmental and local initiatives designed to tackle the apparent proliferation of conspiracy theories among youths, a trend many deem worrying in the wake of several terrorist attacks on French soil, and in a context of ethnic and religious tensions, increased ideological polarization and ready online access to hate-speech and misinformation. “They” thought, too, that something should be done. But “they”, as scientists working on reasoning, belief formation, belief epidemiology, social influence, and related fields, figured that early and hasty endeavours had the potential to misfire or simply be ineffective. Indeed, as Basham et al.’s comment rather fittingly illustrates, suspicion towards authorities will be poorly amenable to advice and educational programs devised and offered by those very authorities.

The Need for Questions, More Research

So, what were “they” up to? Quite simply, “they” advocated for more research. “They” figured that, before “fighting” against, or “curing”, conspiracy theories, it would be good to know exactly what one is talking about. Are conspiracy theories bad? Are they good? Are they always bad, are they always good? Who endorses them, who produces them, and why? Are there different types of conspiracy theories, conspiracy theorists, and conspiracy consumers? What is the difference between people who believe in conspiracy theories and those who reject them, or between hardcore believers and those who vaguely endorse conspiracy theories or parts of them, those who acquiesce to them but don’t really believe them (Risen 2016), or those who playfully engage in conspiracy theories, share them, discuss them, but never give them any credence?

“They”, in fact, are “just asking” some questions (Aaronovitch 2010), which Basham et al. surely agree is always a good thing. Perhaps, by clarifying such and related issues, some pertaining to the conceptual and others to the empirical domains, one could get a better sense of how to address conspiracy theories, and even ascertain whether there is a problem at all. At the very least, “they” suggested in their letter, one should keep track of the already launched initiatives and obtain some measure of how they fare, if only as a matter of accountability, or even mere curiosity. Doing things right would benefit everybody: the authorities, social scientists, the kids targeted by the programs, and yes, society at large, including taxpayers.

That’s, at any rate, what “they” wrote and what “they” think. But perhaps that was naive. Perhaps “they” should just have turned to the writings of a handful of philosophers and social epistemologists, and they would have discovered that everything there is to know on the matter is in fact already known, and that any further attempt to investigate the topic would be a “grave intellectual, ethical and prudential error” (15),[3] or worse, a genocidal crime against the masses, destroying lives “by the thousands, even millions” (16). Indeed, while reading Basham et al.’s piece, “they” shuddered at the thought that “they” could be allies to really evil people, such as, say, warmongers, Nazis, Stalinists, assassins, and the like. And pondering Basham et al.’s rhetoric, “they” also almost wondered whether “they” themselves needed to be cured from their “unethical and foolish” (15) penchants, and hopefully be saved from their “faux intellectual sophistication” (13). Even better, maybe “they” should just abandon their research and shut up about the whole thing. But, of course, that would be intimidation and censorship, and also needlessly incendiary, so that can’t possibly be what Basham et al., as the self-avowed defenders of an open society that they are, had in mind.

Basham et al.’s Hypothesis and the Conspiracist Mindset

Well, let “they” see. The whole issue, it seems to “them”, is best summarized in a single point, a point which, it turns out, “they” are quite familiar with from numerous online discussions and the type of hate mail “they” regularly receive from “historically or politically literate” (12) defenders of the truth, sometimes also called “conspiracy theorists”: “But what about the real conspiracies?” Indeed, what about them? History books can be read as an endless list of failed and successful conspiracies—which everybody, including “they”, acknowledges—so, what could be wrong about conspiracy theorizing? Running parallel to this classic objection against the preemptive dismissal of conspiracy theories, is the idea that “conspiracy theory” is a rhetorical weapon used to discredit and silence any opponent holding inconvenient beliefs or exposing dangerous truths. That’s “anti-conspiracy panic” (14), or “conspiracy denialism” (15). Basham et al.’s point, it seems to “them”, can thus be framed as the following two-fold hypothesis: because real conspiracies have happened and still happen, conspiracy theories are not only warranted but necessary; the only reason this is not obvious to everyone is that “conspiracy theories” have been made to reflect badly on those who assert them by the very people they purport to unmask, and their enablers.

The heuristic value, as well as the truth or falsity, of this hypothesis would of course depend on how the concept of “conspiracy theory” is defined. The outcome of such a conceptual groundwork could then determine whether “they” are enablers of criminal conspirators with blood on their hands (or perhaps conspirators “themselves”), and also whether Basham et al. are just posturing in a desperate attempt to rationalize their own conspiracist tendencies. Now, it is true that “they” are no social epistemologists, merely a “cadre” (12) of social and psychological scientists. Fortunately, “they” believe that the issue is at its core an empirical one, to be addressed with appropriate data collection and experimental designs, not only armchair venting.

But yes, “they” agree that simply claiming that “conspiracy theories” only refer to those conspiracies that are “unwarranted” will not do (Brotherton 2013). Also, pointing out that a conspiracy theory which turns out to be true is no longer a conspiracy theory is ad hoc and obviously unsatisfying. Adjusting the concept of conspiracy theory to its use in common parlance, whatever that is, is not convenient and somewhat tautological. Yet claiming that “conspiracy theory” is simply a derogatory term invented by those in power to discredit inquiring minds, pace Basham et al. (2016), seems oddly conspiratorial and self-serving, or at the very least a rather partial approach to the issue. Likewise, asserting that a conspiracy theory is any kind of thinking or explanation that involves a conspiracy—real, possible or imaginary—and that’s all there is to it, seems like a premature attempt to settle the issue, as if the topic itself was a non-topic and anyone—and that’s a lot of people—who thinks there is something there of interest is simply misguided, or manipulated.

What do “they” make of all this? Well, “they” have another approach. As Basham et al. (2016) aptly noticed, “they” did not argue at all about specific conspiracy theories or the nature of the evidence available in favour or against them[4]. Rather, “they” think that there is such a thing as a conspiracist mindset, also diversely called the political-paranoid style (Hofstadter 1966), conspiracist ideation (Brotherton, French, & Pickering 2013; Brotherton & French 2015; Swami et al. 2011), or conspiracist mentality (Moscovici 1987; see also Bruder, Haffke, Neave, Nouripanah, & Imhoff 2013). This is to say that, regardless of the facts available in the outside world, the mind of some people attracts and is attracted by conspiracist cognitions, which come to form a monological belief system involving conspiracies (Goertzel 1994).

Take the newly elected president of the USA, Mr. Donald J. Trump. Trump has been said to believe in, or at some point have believed or endorsed, about 60 different conspiracy theories.[5] According to Basham et al.’s approach, this means that Trump is a relentless investigator. And if it is the case that Trump is indeed “theorizing” about conspiracies, wouldn’t we all be in great trouble if we chose to ignore such a bold truth-seeker and disinterested whistle-blower? On the other hand, perhaps Mr. Trump has just been mindlessly making stuff up and irresponsibly spreading nonsense all along, because of some peculiar worldview he holds and/or some cognitive and personality propensities that favour the perception and endorsement of conspiracy theories in general (in other words a conspiracist mindset). This last approach is actually, at the moment, the most robust finding in the rather recent field of social-psychological conspiracy theory research: people who believe in one conspiracy theory tend to believe in other, unrelated, conspiracy theories.[6]

Why could that be? If conspiracy believers are avid followers of Basham et al.’s (2016) advice, to wit that “we should focus, always, on the facts” (15), then they are indeed remarkable citizens, specializing, for the sake of preserving our liberties, in fields as diverse as climate science, structural architecture, geopolitics, economics, photography, history, forensics, and so on.[7] The alternative, of course, is that many, perhaps most, conspiracy theorists simply tend to endorse conspiracy theories qua conspiracy theories, and are not, in fact, really “theorizing” about conspiracies at all, but rather driven to a specific type of propositions and explanations because of a conspiracist mindset. After all, if Basham et al. deplore that so many people mindlessly reject conspiracy theories qua conspiracy theories, they could at least entertain the possibility that many people also endorse them for the same “reason”, especially as the conspiracist mindset hypothesis seems to be in line with other observations, which “they” think point to very promising and valuable areas of current and future research (by which “they” mean actual scientific research).

First there is the remarkable rapidity of conspiracy “theorizing”. If “evidence is key” (14), as Basham et al. (2016) insist, then it is all the more incomprehensible that conspiracy theories would increasingly flourish before any evidence is available, or indeed even during the unfolding of the events being “theorized” about. What is more, some school teachers in France complain that conspiracy theories are endemic in their classes. So either there is a truly remarkable generation of 14- to 17-year-olds already well versed in the geopolitical and historical assessment of multiple lines of evidence, or there is some kind of a problem. Second, speaking of evidence, it would be interesting to investigate just what kind of evidence is assessed, and how, by conspiracy theorizers.

In their letter to Le Monde, “they” pointed out that the so-called confirmation bias could be a central feature of conspiracy theorizing, leading to the odd and frequently noted tendency of conspiracy theorists to favour “errant data” and otherwise weak sources, while ignoring or dismissing (for instance, as fabricated, planted or irrelevant) “official” reports and resources. McHoskey (1995) presented pioneering results in this direction, by showing that the immutability of personal theories about the real perpetrator(s) of John F. Kennedy’s assassination (both for and against conspiracy theories) was associated with processes of biased assimilation and attitude polarization. Of course, this needs to be replicated and further investigated, but it suggests that at least some theories about conspiracies are the result of a biased assessment of the evidence. But only rigorous empirical research will tell, not armchair digressions or self-pity.

In the meanwhile, research has found other points of interest as well. For instance, why should it be the case that people merely interested in uncovering the lies of would-be tyrants by carefully gathering, evaluating and presenting the best evidence, would also turn out to be believers in the paranormal (Brotherton et al., 2013),[8] outright reject well-established scientific findings (Lewandowsky, Cook, & Lloyd 2016; Lewandowsky, Oberauer, & Gignac 2013), simultaneously endorse flatly contradictory conspiracy theories (Wood et al. 2012), readily accept experimentally made-up conspiracy theories (Swami et al. 2011), and display a strong need for control in their lives (van Prooijen, 2016; van Prooijen & Acker 2015)? Wouldn’t such findings, and many others of the sort, start to make sense if it turns out that people’s interest and belief in conspiracy theories is, at least in part, the result of a conspiracist mindset? “They” think this is a plausible hypothesis.[9]

Bad Conspiracy Theorists and Evidence

“They” also think that’s all rather fascinating. What Basham et al. think, however, is up for grabs. Perhaps this is all bad science to them. Perhaps these results were forged, or simply the outcome of “a false, research-distorting assumption” (Basham 2016, 10). Perhaps “they” are all in the pay of the military-industrial complex, or part of some secret society. Or perhaps the results are all an artefact of the stereotype threat induced by the negative connotation of “conspiracy theory”. Better yet: it might be the case that having a conspiracist mentality, assuming that such a construct exists, is actually an excellent predictor of the capacity to successfully expose actual conspiracies! This would need to be empirically tested, but for the time being, somehow, “they” are not holding their breath.[10]

Now, in the spirit of academic collegiality, “they” would like to suggest a much more plausible, and practical, way out to Basham et al.: why not simply claim that, if it is true that some people hold conspiracy theories simply out of a conspiracist mindset, then those are not real conspiracy theorists? Such people would not be theorizing at all. Rather, for whatever reason, they would be dismissing out of hand the “official narrative” being offered, or they would have decided that whatever happens in the world is never what it’s made to look like. After all, Basham et al. (2016) admit that there are, or at least there have been, “absurd conspiracy theories” (15), some perhaps merely involving “racist babbling” (13).

Other co-signatories of Basham et al. (2016) also seem to be aware of this issue. Husting & Orr (2007), for instance, delegitimize the concerns of alien believers as “truly misguided” (140) and those who believe Elvis is (still) alive as “extreme” (141). Hagen (2011) does the same with Roswell conspiracy believers, whose theorizing, he laments, does nothing but discredit the good theorizing of 9/11 “inside job” theorists. Dentith (2016) seems likewise to deplore the association of conspiracy theorists with figures such as David Icke and Alex Jones. In fact, Dentith seems very worried by those he calls “conspiracists”, such as the “archetypal conspiracy theorist” built by Cassam (2016), to wit “Oliver”, who believes that 9/11 was an “inside job” based on his reading (or “research”) in his spare time.

Dentith (2016) thinks that this construal might not be “the best” and indeed “suggests [that] Cassam simply shares with social psychologists the same views on those pernicious conspiracy theorist”, by which he probably means a “pathologizing” one. So, all the Olivers out there turn out to be bad, or suboptimal, or not “typical” (Dentith 2016, 24), or “dumbed down” (Dentith 2016, fn. 14) conspiracy theorists, or simply, “first and foremost—people who are gullible who—secondly—just happen to be conspiracy theorists” (Dentith 2016, 24). Oliver, in Dentith’s most recent development, is thus a “conspiracist”, in other words, a bad conspiracy theorist, just like David Icke, Alex Jones, alien believers, Elvis enthusiasts, in short all those who believe in “weird and wacky” (Dentith 2016, 3) conspiracies.

Of course, it is possible that gullibility, bad thinking, stupidity, mental pathology, and extremism sometimes “just happen” to be associated with belief in conspiracy theories, just as all these features can be associated with being a philosopher or a scientist. But what if certain cognitive biases, personality features and ideological worldviews “happen” to be correlated with belief in conspiracy theories?[11] Well, then that would point to conspiracy theories as a specific attractor for certain social, personality and cognitive features, regardless of whether they are good or bad conspiracy theories, and that would require some explaining, not mere hand-waving.

But in the meanwhile, it is clear that Basham et al. (2016) do sometimes act as normative prescribers, deciding from their scholarly authority that some conspiracy theorists are wrong, “weird and wacky”, and even could be undermining the cause of “healthy conspiracy theorizing” (Dentith 2016, 32).[12]

Yet, somehow, Basham et al. (2016) still feel confident that “[p]oorly evidenced conspiracy theories will be quickly set aside” (14). While that would certainly be reassuring, it doesn’t say what one should make of all those “weird and wacky” (i.e., false) conspiracy theories out there. If indeed “some claims characterized as conspiracy theories are false” (Husting & Orr 2007, 131), then how do these conspiracy theories even come to life, and what can possibly explain that some of them seem not to be “quickly set aside”? Are the people who propose or endorse them crazy? Are they sick? Are they a bunch of losers (Uscinski, Parent, & Torres 2011)? Or simply stupid, gullible (Cassam 2016) or somehow epistemologically “crippled” (Sunstein & Vermeule 2009)? Are they a “caricature” of real conspiracy theorists (Hagen 2011, 15)? Perhaps they are just bad conspiracy theorists? Or are they actual conspirators, consciously striving to manipulate the masses with their forged “absurd” conspiracy theories, in order to discredit the good conspiracy theorists? Even more disturbing: it stands to reason that there are not only false conspiracy theories, but also missing conspiracy theories. Just as not all conspiracy theories have their counterpart conspiracy, not all actual conspiracies have their conspiracy theories. In those cases, what were conspiracy theorists doing? Were they distracted? Did their theorizing skills somehow falter? Were they, once again, bad conspiracy theorists?

But let “they” stick to the first case: why are there false conspiracy theories around? Hard to say, when, according to Basham et al. (2016), “an evaluation of the evidence for or against [a conspiracy theory] really should be the end of the story” (13). “They” agree it would indeed be worthwhile, at least now and then, to reach “the end of the story”. But then “they” would also love to have a rational explanation of why this doesn’t always happen. Indeed, in real life, that is, outside of the cosy echo chamber of radical-chic social epistemologists, the mantra that “We should always, without exception, adopt a case-by-case, evidential evaluation of all allegations of politically momentous conspiracy” (15) yields some surprising results. For one thing, one can ask: where does the buck stop? If no evidence whatsoever is found to substantiate the occurrence of a conspiracy, is that “the end of the story”, or rather evidence in favour of a really good conspiracy?

Maybe Basham et al. would agree that such perfect conspiracies are unlikely, or by definition outside of the scope of rational inquiry, and therefore they would admit that at least some kind of conspiracy theorizing, the kind involving the “preternatural” conspiracies Hofstadter (1966) had in mind, is unwarranted after all. But are there other, more mundane, claims of conspiracy that could suggest “evidence” is not really what is at stake in some (or most?) conspiracy theories? While some might think it would be helpful to obtain X-Rays of politicians to ascertain that they are truly humans and not Reptilians (or anything else), “they” remain somewhat sceptical about the value of such an inquiry. Obviously, one should first trust those who would provide and assess the X-rays, and who can trust anyone these days, especially people working with scientific equipment? More importantly, however, there is no evidence that Reptilians even exist, yet, somehow, “the end of the story” has not been reached for everybody on that issue. Why? Well, perhaps some social-psychological scientific research could help understand this mystery.

It would also be fascinating to make absolutely sure that the police officer who was shot point-blank by the Charlie Hebdo assailants is actually dead. After all, we are told by some valiant vigilantes of governmental deviousness that a clean head-shot is supposed to produce more blood than was observed in the video-recording of that murder. Perhaps, if pressed, that man’s family would confess to some false-flag operation? Again, “they” are just asking questions (Aaronovitch 2010). And what about that moon landing thing? Why can’t that most important debate be settled once and for all? The public wants to know the truth (or some of the public, at any rate). One can only hope that after 47 years of intense “moral watch” (14), the “end of the story” is just around the corner.

Perhaps Basham et al. have not been following the intense evidence checking of conspiracy theorizers during the Charlie Hebdo attacks, the Bataclan massacre and the Brussels airport and subway bombings. Well, “they” have, and still are. Here’s a short selection of the type of “evidence” pursued “without censor” (15) about these events by those that practice that “essential” (13), “crucial” and “necessary” “gift of watchfulness” (16): it smells fishy; this can’t be true; real Muslims would never do that (so it must be the Jews); who really benefits from all of this?; it’s all a big lie; it’s all fake; this detail is not clearly explained; this seems to be connected to that; this thing happened in close spatial or temporal proximity with that other thing; such and such says that he is not convinced and I agree with him, and so forth, ad nauseam.

If this does not sound like the kind of democratic and epistemological hygiene that Basham et al. would prefer to see, then maybe they can contribute by helping online investigators to uncover the real truth when the next school shooting or terror attack unfolds, and show these people how a false-flag operation is properly and correctly exposed. Likewise, real democratic heroes could benefit from Basham et al.’s lights on the fronts of the anti-vaccination movement, climate-change denialism, and, well, that very old conspiracy theory that somehow has never been “quickly set aside”, namely, that of a global Jewish conspiracy. Or perhaps these are not so “poorly evidenced” after all, and Basham et al. would like to suggest that the jury is still out? “They” are all ears.

So What are Conspiracy Theories?

True, none of this still amounts to a working definition of “conspiracy theory”. This is because “they” think that any such definition should entail a clear understanding of why some people are prone, quick and enthusiastic when it comes to endorsing, producing or spreading ideas about conspiracies, while others are not. “They” have nevertheless already provided some evidence that there seems to be such a thing as a conspiracist mindset that is quite unrelated to the available (or unavailable) “evidence” pertaining to specific claims of conspiracy. This concept of a conspiracist mindset is problematic for Basham et al.’s hypothesis, because it introduces a large set of individuals who “theorize” about conspiracies with a complete disregard for the evidence, as they are rather interested in conspiracy theories qua conspiracy theories, as long as they allow free-riding on a general suspicion of the authorities or intuitively pinpointing some faceless, or not so faceless, enemy.

Here is a proposal. If this conspiracist mindset exists, then it will be especially attuned to still ongoing, unverified, vague, controversial, sensationalistic, and sometimes ultimately unverifiable theories about conspiracies. Why? Because contrary to the unveiling of real and verified conspiracies, which most often requires careful work from scholars, investigators, official agents, journalists, and people generally working on one specific issue at a time with a clear goal in their mind, that’s where the conspiracist mindset can best and endlessly display its talents. Therefore, “they” posit, a “conspiracy theory” is a powerful attractor for the conspiracist mindset, which involves features such as errant data, unfalsifiability, disregard for, and asymmetrical care in the evaluation of, counter-evidence, the perception of malevolent intentions, the ascription of preternatural omnipotence and omniscience to the conspirators, a taste for plain bullshit (Pennycook, Cheyne, Barr, Koehler & Fugelsang 2015), and certainly other features, which all still need to be robustly tested, researched or replicated. In that respect, the content of conspiracy theories is certainly a valuable field of inquiry, but “they” believe that such research needs to work hand in hand with a more thorough understanding of the features, the mechanisms and the development of the conspiracist mindset itself, which “they” already know involves some perfectly normal and expected aspects of the human cognitive architecture, but which seem to be biased in a particular direction or excessively sensitive to a specific type of information available on the cognitive market (Bronner 2011; 2013).

At any rate, those are the kinds of things “they” are working on, and “they” must say it’s all pretty interesting. Surely Marius Raab, one of the co-signatories of Basham et al. (2016), would agree, as he and his colleagues found that a conspiracist mindset predisposes individuals to endorse specific types of fictional conspiracies (Gebauer, Raab, & Carbon 2016), that conspiracy theories are “a means of constructing and communicating a set of personal values” (thus not a disinterested quest for the truth only based on the evidence), which could help “understand why some people cling to immunized, racist and off-wall stories—and others do not” (Raab, Ortlieb, Auer, Guthmann, & Carbon 2013), and that the presence of “blatant” and “extreme statements indicating an all-encompassing cover-up” increased the persuasive power of conspiracy theories (Raab, Auer, Ortlieb, & Carbon 2013). Very interesting stuff indeed.[13]

For the time being, thus, a “conspiracy theory” is what the conspiracist mindset tends to produce and be attracted to, an apparently circular definition that rests on ongoing work but is firmly grounded in relevant research fields such as cognitive epidemiology, niche construction and cognitively driven cultural studies, and could be refined or refuted depending on future results.

“Conspiracy Theorists” vs “Conspirators”

Note that, already, this approach can be profitably applied to Basham et al. (2016). Indeed, if “they” are correct, then Basham et al.’s article (and hypothesis) not only severely misses the point, but simply is a conspiracy theory as “they” define it for the time being. Notwithstanding the meek disclaimer that “they” are motivated by “the best of intentions” (14), the rest of the comment’s rhetoric is rather candidly unmistakable. Consider: even with “the best of intentions”, “they” are portrayed as so misguided as to be allies and analogues of the most malevolent forces that have plagued and continue to precipitate humanity into suffering, desolation and death, including Nazis, the Bush government, covert assassins, liars, and Big Brother. “They” utterly (and willingly?) fail to “impugn our hierarchies of power, but only defend them” (15). “They” have “blood on [their] hands” (16). And not only are “they” devising devious techno-social-engineering methods to silence the masses, but in “their” cynical outlook, “they” plan to do so with the very money of those to be censored and brainwashed. “They” are thus (at best) guilty by association, dangerous, and possibly malevolent, orchestrating something that will restrain freedom and democracy all the while protecting the interests of those in power. Also, “they” seem to be rather powerful, otherwise “they” wouldn’t be the targets of such worrying accusations. “They” must be really evil indeed, and up to no good.

Or are “they”? To make sure, one would need to be very careful in one’s appreciation of the facts. Of course, it is true that conspiracies exist, but that’s all the more reason to give some pause before accusing fellow academics of being part of one such awful conspiracy. Conversely, one should not use the phrase “conspiracy theorist” lightly if one holds a theory about “conspiracy theories” that might not please one’s opponents in the debate. It could betray some “direct association with pejorative phrases, caricature/exaggeration of claims, and the creation of equivalencies between very different claims”, as Husting and Orr (2007, 138-139), two co-signatories of Basham et al. (2016), so perceptively wrote about “the epithet conspiracy theorist”. And that would indeed be undignified.

Now, if any of this happens in this exchange, then “they” would claim that “they” are the unlucky targets of individuals driven by a conspiracist mindset, and then Basham et al. would scream in horror that they are the targets of that stigmatizing “phrase of social manipulation” (15). As a result, we all would be running in unproductive circles, and no one likes that; presumably, not even social epistemologists.

So, with that caveat in mind, “they” scrupulously examined the method Basham et al. used to reach their damning conclusions about “they”. Did Basham et al. carefully examine the available evidence, as they insist proper conspiracy theorizing needs to be done? Did Basham et al. factor into their assessment the current socio-political context in France? Did Basham et al. stick to the text they criticize without drawing overblown inferences? Did Basham et al. consider that differences in scholarly, theoretical, and methodological approaches do not necessarily make disagreeing parties enemies or criminals? Did Basham et al. painstakingly seek to avoid “direct association with pejorative phrases, caricature/exaggeration of claims, and the creation of equivalencies between very different claims”? How exactly did Basham et al. proceed in writing their article, or the co-signatories before approvingly signing it?

Thankfully, an indication of how to respond to all these questions is directly available in Basham et al. (2016), clearly spelled out on page 15. There, Basham et al. make the claim that the “explanatory method” of what they call “official conspiracy theories”, referring to the type of conspiracies that are denounced and sanctioned by authorities or experts legitimated by the powers in place (such as the “official” version of what happened on 9/11, namely a conspiracy by Islamist terrorists), is in fact “indistinguishable” from the nonofficial, run-of-the-mill type of conspiracy theories held by ordinary people and that are systematically discredited by the same authorities and experts. “They” think this is a remarkable admission.

Quite aside from the point that “conspiracy theory” is thus shown to be “a phrase of social manipulation” (a gentle euphemism, Husting and Orr (2007) call it a piece of “dangerous machinery” enforcing a “transpersonal strategy of exclusion” and “discursive violence”), it more importantly suggests that, according to Basham et al., the liars and criminals in place in fact use the same “explanatory method” as the minorities fighting for the truth, and conversely that these minorities in fact use the same “explanatory methods” as the conspiring malefactors they are seeking to expose. So, what gives? Simple: apparently, any “explanatory method” will do, and thanks to Basham et al., epistemology, social or not, has just become a much more accessible field.

No doubt “conspiracy theorist” can be a derogatory term (see Klein, van der Linden, Pantazi, & Kissine 2015; Kumareswaran 2014; Wood & Douglas 2013). But then so is “conspirator”, and anyone developing or holding a conspiracy theory must have a group of conspirators in sight. Surely Basham et al. have considered the potential harm done when a false accusation of “conspirator” is lightly made, but “they” wonder whether they have pondered the effectiveness of labelling someone a “conspirator”, even, or especially, when the person so labelled is an actual conspirator. By their reasoning, the negative value attached to “conspiracy theorist” increases the possibility that conspirators will get away with their conspiracies: “in an environment in which people take a dim view of conspiracy theories, conspiracies may multiply and prosper” (13). But does the idea that a stigma attached to the label “conspiracy theorist” increases the risk of conspiracies, and thus that conspiracy theory skeptics enable real conspirators, even make sense in the first place?

How is one to evaluate this claim without taking into account: 1) the potential cost-limiting effect of successfully discarding false conspiracies; 2) potential cases of conspiracies that unfold in such secrecy that there is not even a conspiracy theory about them to be skeptical of; 3) potential cases of conspiracies that unfold unproblematically even in the face of conspiracy theories about them that nobody or almost nobody is skeptical of, or that are endorsed by a substantial share of the population; and 4) potential cases where derogatorily labelling something a conspiracy theory is ineffective in deflecting suspicions of actual conspiracy? Further, if journalists and academics with a prejudiced view of conspiracy theorists had such power in blinding entire populations, one wonders why many of them still deem conspiracy theories such a worrying issue, or even would want to “cure” them.

In light of such complications, it is probably premature, not to mention rather distasteful and slightly delusional, to start drawing the respective body counts of conspiracy theorists and those who take a dim view of them, merely based on the idea that “conspiracy theorists” is a disabling and stigmatizing epithet, which is an empirical question anyway. For all “they” know, because conspiracy theorists are presumably quite often aware that the “alternative information” they follow and sometimes endorse is routinely classified as “conspiracy theories” by those they deem untrustworthy, the label “conspiracy theorist” might in fact reinforce their belief that they are “on to something” and have positive effects on in-group cohesion and self-esteem[14].

A Cure?

Basham et al. (2016) fear that “they” want to curtail the free speech of conspiracist opinions, asking, after making the point that, whoever poisoned Alexander Litvinenko, his death had to be the result of some conspiracy: “Should we pay for a science that teaches us not to understand this?” (15). Indeed, it would be ironic that innocent people would end up paying, with their hard-earned money, for a scientific conspiracy meant to make sure that no one will ever even dare to think there could be any type of conspiracy in this world. In fact, that would not only be ironic, that would be genius, a conspiracy of the “preternatural” kind if there ever was one. “They” only wish “they” had such power and influence, but thankfully, at least for the time being, that is not what “they” had in mind. What “they” had in mind, as must be clear by now, was to study how people, on their own or under some external influence, think and come to endorse some beliefs about such things. That, “they” think, would need some data, rather than wishful thinking, ideological clamours or armchair reasoning.

Now, it is true that “they” used a medical analogy to make “their” point. Before flooding the market with a new remedy, it is good medical and scientific practice to carefully and rigorously test said remedy. Perhaps it doesn’t work, perhaps it makes things worse, perhaps it has unforeseen side-effects. At any rate, surely it would be desirable to know more about the disease: what it is, what its mechanisms, etiology and symptoms are, and so forth. Who knows, maybe it would turn out that the remedy is not needed after all, as the disease might be transitory, or even not a disease at all. Scientific research turns out to be the best currently available tool to answer such questions, and that’s where the analogy lies with programs devised to counter conspiracy theories.

The issue is indeed pressing, at least in France, not so much because conspiracy theories are a danger (although they could be), but because many uninformed people are jumping on the bandwagon and adding confusion to the issue. “They” advocate some patience, lest things get worse in the long run. Note that “they” are not even promoting “cognitive infiltration” (Sunstein & Vermeule 2009)—although in some way this is what “they” are attempting to do here, namely introducing some cognitive diversity in a spiraling and self-congratulating clique of insulated theorizers—whose effectiveness would need to be carefully evaluated anyway. Presumably in our day and age, more technologically advanced solutions could be devised. How about a neural micro-chip disrupting the cortical networks responsible for conspiracy beliefs and subversive thinking in general? Indeed, why settle for a “micropolitical power” (Husting & Orr 2007, 140) when “they” could go full bio-psycho-political? Well, again, perhaps “they” don’t have so much power (and time, and resources) after all, having to resort to open letters in the mainstream press to gently chastise the government and local initiatives for taking hasty and unpredictable measures.

To repeat, “they” merely urged for more research. Many people, and this includes philosophers, seem to think the topic of “conspiracy theories” is merely a matter of opinion or of conceptual clarification. “They” think it is foremost a matter of empirical research and careful hypothesis testing, and that any action designed to decrease belief in conspiracy theories, especially in the classroom, should be based on evidence and empirically assessed at the same time.

Even Dentith acknowledges that there might be a problem with “certain conspiracy theorist”, perhaps even “some seeming cases of irrational, or even pathological belief in conspiracy theories” (Dentith 2016, 36).[15] He doesn’t say, however, to what extent these people are a problem, how widespread “conspiracism” is, and what to do with such people and beliefs. This is unfortunate, because had Basham et al. the answers to these questions, they could actually help people become the good conspiracy theorists they wish everybody were, so that no one would endorse conspiracy theories qua conspiracy theories. “They” suggest that when this happens, the problem of those people who dismiss conspiracy theories qua conspiracy theories would become largely irrelevant. In fact, such behaviour would be ineffective and bizarre: the “completely sensible questions about government conduct” (13) would simply be seen as such.

Indeed, what would be best for democracy and the open society? An army of bad conspiracy theorists, jumping on every semi-cooked counter-explanation available on the cognitive market out of a conspiracist mindset, wasting precious time and resources on worthless and discrediting ventures, or good and thoughtful conspiracy theorists, asking tough and relevant questions on the basis of a careful and unbiased examination of the facts? Now, if some kind of education and basic principles, based on scientific evidence, could help obtain the latter and reduce the former, wouldn’t that be fantastic news? Or maybe this idea is an unacceptable infringement on one’s basic freedom to be systematically misguided, wrong and isolated, and deserves an angry and self-righteous response.

Obviously, Basham et al. are entirely free to perceive and construe the proposal to pursue research on the topic of conspiracy beliefs as, say, a “transpersonal strategy” for “othering” “certain voices” (Husting & Orr 2007), a slippery-slope towards dehumanization and abuse (Hagen 2011), or, more positively, an encouraging sign that the ongoing “conspiracy panic” demonstrates that political tyrannies will no longer be tolerated by the increasingly enlightened public. Whatever “cure” “they” end up devising, if any, “they” are not so gullible as to think it will work for everybody. But let “they” propose a deal.

“They” will continue to focus on the psychology and sociology of conspiracy theories, conceived as the outcome of situational factors, personality, and cognitive traits largely unrelated to the truth or falsity of said “conspiracies”. In the meantime, Basham et al. will carry on their defense of those who, driven by the “gift of watchfulness” and their idiosyncratic “explanatory method”, aim at closely monitoring and exposing the criminal tyrants and liars that would like to rule the world and mislead the public. Hopefully, together, “they” and Basham, Dentith, Coady, Husting, Orr, Hagen, and Raab will help the world become a better place. Already, or so “they” are told by undisclosed sources of information, it appears that the powerful are shaking in their boots.


Aaronovitch, David. Voodoo Histories: The Role of the Conspiracy Theory in Shaping Modern History. London: Jonathan Cape, 2010.

Abalakina-Paap, Marina, Walter G. Stephan, Traci Craig, and W. Larry Gregory. “Beliefs in Conspiracies.” Political Psychology 20, no. 3 (1999): 637–647.

Barkun, Michael. A Culture of Conspiracy: Apocalyptic Visions in Contemporary America. Berkeley, CA: University of California Press, 2003.

Barron, David, Kevin Morgan, Tony Towell, Boris Altemeyer, and Viren Swami. “Associations Between Schizotypy and Belief in Conspiracist Ideation.” Personality and Individual Differences 70 (2014): 156–159.

Basham, Lee. “The Need for Accountable Witnesses: A Reply to Dentith.” Social Epistemology Review and Reply Collective 5, no. 7 (2016): 6–13.

Basham, Lee and Matthew R. X. Dentith. “Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 12–19.

Bronner, Gerald. The Future of Collective Beliefs. Oxford: Bardwell Press, 2011.

Bronner, Gerald. La Démocratie des Crédules. Paris: Presses Universitaires de France, 2013.

Bronner, Gerald, Véronique Campion-Vincent, Sylvain Delouvée, Sebastian Dieguez, Karen Douglas, Nicolas Gauvrit, Anthony Lantian, Pascal Wagner-Egger. “Luttons Efficacement Contre les Théories du Complot.” Le Monde 5-6, 29 juin 2016.

Brotherton, Robert. “Towards a Definition of ‘Conspiracy Theory’.” PsyPAG Quarterly 88, no. 3 (2013): 9-14.

Brotherton, Robert & Silan Eser. “Bored to Fears: Boredom Proneness, Paranoia, and Conspiracy Theories.” Personality and Individual Differences 80 (2015): 1–5.

Brotherton, Robert & Christopher C. French. “Intention Seekers: Conspiracist Ideation and Biased Attributions of Intentionality.” PLOS ONE 10, no. 5 (2015): e0124125. doi: 10.1371/journal.pone.0124125.

Brotherton, Robert, Christopher C. French & Alan D. Pickering. “Measuring Belief in Conspiracy Theories: The Generic Conspiracist Beliefs Scale.” Frontiers in Psychology 4 (2013): 279. doi:10.3389/fpsyg.2013.00279.

Bruder, Martin, Peter Haffke, Nick Neave, Nina Nouripanah & Roland Imhoff. “Measuring Individual Differences in Generic Beliefs in Conspiracy Theories Across Cultures: Conspiracy Mentality Questionnaire.” Frontiers in Psychology 4, article 225 (2013): 1-14.

Buenting, Joel & Jason Taylor. “Conspiracy Theories and Fortuitous Data.” Philosophy of the Social Sciences 40, no. 4 (2010): 567-578.

Campion-Vincent, Veronique. “From Evil Others to Evil Elites: A Dominant Pattern in Conspiracy Theories Today.” In Rumor Mills: The Social Impact of Rumor and Legend, edited by Chip Heath, Veronique Campion-Vincent, Gary A. Fine, 103–122. New Brunswick: Aldine Transaction, 2005.

Cassam, Quassim. “Vice Epistemology.” The Monist 99, no. 2 (2016): 159–180.

Crocker, Jennifer, Riia Luhtanen, Stephanie Broadnax & Evan Blaine. “Belief in U.S. Government Conspiracies Against Blacks Among Black and White College Students: Powerlessness or System Blame?” Personality and Social Psychology Bulletin 25, no. 8 (1999): 941–953.

Dagnall, Neil, Kenneth Drinkwater, Andrew Parker, Andrew Denovan & Megan Parton. “Conspiracy Theory and Cognitive Style: A Worldview.” Frontiers in Psychology 6, 206 (2015). doi:10.3389/fpsyg.2015.00206.

Darwin, Hannah, Nick Neave & Joni Holmes. “Belief in Conspiracy Theories: The Role of Paranormal Belief, Paranoid Ideation and Schizotypy.” Personality and Individual Differences 50, no. 8 (2011): 1289–1293.

Dentith, Matthew R. X. “The Problem of Conspiracism.” Argumenta: “The Ethics and Epistemology of Conspiracy Theories” (forthcoming special issue): 1-42.

Dieguez, Sebastian, Pascal Wagner-Egger & Nicolas Gauvrit. “Nothing Happens by Accident, or Does It? A Low Prior for Randomness Does Not Explain Belief in Conspiracy Theories.” Psychological Science 26, no. 11 (2015): 1762–1770.

Douglas, Karen & Robbie Sutton. “Does It Take One to Know One? Endorsement of Conspiracy Theories is Influenced by Personal Willingness to Conspire.” British Journal of Social Psychology 50, no. 3 (2011): 544–552.

Douglas, Karen, Robbie M. Sutton, Mitch J. Callan, Rael J. Dawtry & Annelie J. Harvey. “Someone is Pulling the Strings: Hypersensitive Agency Detection and Belief in Conspiracy Theories.” Thinking & Reasoning 22, no. 1 (2016): 57–77.

Furnham, Adrian. “Commercial Conspiracy Theories: A Pilot Study.” Frontiers in Psychology 4 (2013): 379. doi: 10.3389/fpsyg.2013.00379.

Gebauer, Fabian, Marius H. Raab & Claus-Christian Carbon. “Conspiracy Formation is in the Detail: On the Interaction of Conspiratorial Predispositions and Semantic Cues.” Applied Cognitive Psychology 30, no. 6 (2016): 917–924.

Goertzel, Ted. “Belief in Conspiracy Theories.” Political Psychology 15, no. 4 (1994): 731–742.

Grzesiak-Feldman, Monika. “The Effect of High-Anxiety Situations on Conspiracy Thinking.” Current Psychology 32, no. 1 (2013): 100–118.

Grzesiak-Feldman, Monika & Anna Ejsmont. “Paranoia and Conspiracy Thinking of Jews, Arabs, Germans, and Russians in a Polish Sample.” Psychological Reports 102, no. 3 (2008): 884–886.

Grzesiak-Feldman, Monika & Monika Irzycka. “Right-Wing Authoritarianism and Conspiracy Thinking in a Polish Sample.” Psychological Reports 105, no. 2 (2009): 389–393.

Grzesiak-Feldman, Monika & Hubert Suszek. “Conspiracy Stereotyping and Perceptions of Group Entitativity of Jews, Germans, Arabs and Homosexuals by Polish Students.” Psychological Reports 102, no. 3 (2008): 755–758.

Hagen, Kurtis. “Conspiracy Theories and Stylized Facts.” Journal for Peace and Justice Studies 21, no. 2 (2011): 3–22.

Hofstadter, Richard. “The Paranoid Style in American Politics.” In The Paranoid Style in American Politics and Other Essays, edited by Richard Hofstadter, 3-40. New York, NY: Knopf, 1966.

Husting, Ginna & Martin Orr. “Dangerous Machinery: ‘Conspiracy Theorist’ as a Transpersonal Strategy of Exclusion.” Symbolic Interaction 30, no. 2 (2007): 127–150.

Imhoff, Roland & Martin Bruder. “Speaking (Un-)Truth to Power: Conspiracy Mentality as a Generalised Political Attitude.” European Journal of Personality, 28, no. 1 (2014): 25-43.

Jolley, Daniel & Karen M. Douglas. “The Social Consequences of Conspiracism: Exposure to Conspiracy Theories Decreases Intentions to Engage in Politics and to Reduce One’s Carbon Footprint.” British Journal of Psychology 105, no. 1 (2014a): 35–56.

Jolley, Daniel & Karen M. Douglas. “The Effects of Anti-Vaccine Conspiracy Theories on Vaccination Intentions.” PLOS ONE 9, no. 2 (2014b): e89177. doi: 10.1371/journal.pone.0089177.

Klein, Olivier, Nicolas Van der Linden, Myrto Pantazi & Mikhail Kissine. “Behind the Screen Conspirators: Paranoid Social Cognition in an Online Age.” In The Psychology of Conspiracy, edited by Michal Bilewicz, Aleksandra Cichocka & Wiktor Soral, 162-182. London: Routledge, 2015.

Kumareswaran, Darshani Jai. The Psychopathological Foundations of Conspiracy Theorists. Unpublished doctoral dissertation, 2014. Retrieved from

Lantian, Anthony. Rôle Fonctionnel de L’adhésion aux Théories du Complot : Un Moyen de Distinction? Unpublished doctoral dissertation, 2015. Retrieved from

Lantian, Anthony, Dominique Muller, Cecile Nurra, and Karen M. Douglas. “Measuring Belief in Conspiracy Theories: Validation of a French and English Single-Item Scale.” International Review of Social Psychology 29, no. 1 (2016): 1–14.

Leman, Patrick J. & Marco Cinnirella. “A Major Event Has a Major Cause: Evidence for the Role of Heuristics in Reasoning about Conspiracy Theories.” Social Psychology Review 9, no. 2 (2007): 18–28.

Lewandowsky, Stephan, John Cook & Elisabeth Anne Lloyd. “The ‘Alice in Wonderland’ Mechanics of the Rejection of (Climate) Science: Simulating Coherence by Conspiracism.” Synthese (2016): 1–22. doi: 10.1007/s11229-016-1198-6.

Lewandowsky, Stephan, John Cook, Klaus Oberauerd, Scott Brophy, Elisabeth A. Lloyd, Michael Marriott. “Recurrent Fury: Conspiratorial Discourse in the Blogosphere Triggered by Research on the Role of Conspiracist Ideation in Climate Denial.” Journal of Social and Political Psychology 3, no. 1 (2015): 161–197.

Lewandowsky, Stephan, Klaus Oberauer & Gilles E. Gignac. “NASA Faked the Moon Landing—Therefore, (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection of Science.” Psychological Science 24, no. 5 (2013): 622–633.

Lobato, Emilio, Jorge Mendoza, Valerie Sims & Matthew Chin. “Examining the Relationship Between Conspiracy Theories, Paranormal Beliefs, and Pseudoscience Acceptance Among a University Population.” Applied Cognitive Psychology 28, no. 5 (2014): 617–625.

McCauley, Clark and Susan Jacques. “The Popularity of Conspiracy Theories of Presidential Assassination: A Bayesian Analysis.” Journal of Personality and Social Psychology 37, no. 5 (1979): 637–644.

McHoskey, John W. “Case Closed? On the John F. Kennedy Assassination: Biased Assimilation of Evidence and Attitude Polarization.” Basic and Applied Social Psychology 17, no. 3 (1995): 395–409.

Moscovici, Serge. “The Conspiracy Mentality.” In Changing Conceptions of Conspiracy, edited by Carl Friedrich Graumann and Serge Moscovici, 151-169. New York: Springer Verlag, 1987.

Moulding, Richard, Simon Nix-Carnell, Alexandra Schnabel, Maja Nedeljkovic, Emma E. Burnside, Aaron F. Lentini, and Nazia Mehzabin. “Better the Devil You Know Than a World You Don’t? Intolerance of Uncertainty and Worldview Explanations for Belief in Conspiracy Theories.” Personality and Individual Differences 98 (2016): 345–354.

Newheiser, Anna-Kaisa, Miguel Farias, and Nicole Tausch. “The Functional Nature of Conspiracy Beliefs: Examining the Underpinnings of Belief in the Da Vinci Code Conspiracy.” Personality and Individual Differences 51, no. 8 (2011): 1007–1011.

Oliver, J. Eric and Thomas J. Wood. “Conspiracy Theories and the Paranoid Style(s) of Mass Opinion.” American Journal of Political Science 58, no. 4 (2014): 952–966.

Pennycook, Gordon, James Allan Cheyne, Nathaniel Barr, Derek J. Koehler, and Jonathan A. Fugelsang. “On the Reception and Detection of Pseudo-Profound Bullshit.” Judgment and Decision Making 10, no. 6 (2015): 549-563.

Raab, Marius Hans, Nikolas Auer, Stefan A. Ortlieb, and Claus-Christian Carbon. “The Sarrazin Effect: The Presence of Absurd Statements in Conspiracy Theories Makes Canonical Information Less Plausible.” Frontiers in Psychology 4 (2013): 453.

Raab, Marius Hans, Stefan A. Ortlieb, Nikolas Auer, Klara Guthmann, & Claus-Christian Carbon. “Thirty Shades of Truth: Conspiracy Theories as Stories of Individuation, Not of Pathological Delusion.” Frontiers in Psychology 4 (2013): 406.

Radnitz, Scott and Patrick Underwood. “Is Belief in Conspiracy Theories Pathological? A Survey Experiment on the Cognitive Roots of Extreme Suspicion.” British Journal of Political Science 47, no. 1 (2015): 1–17.

Risen, Jane L. “Believing What We Do Not Believe: Acquiescence to Superstitious Beliefs and Other Powerful Intuitions.” Psychological Review 123, no. 2 (2016): 183–207.

Stieger, Stefan, Nora Gumhalter, Ulrich S. Tran, Martin Voracek, and Viren Swami. “Girl in the Cellar: A Repeated Cross-Sectional Investigation of Belief in Conspiracy Theories about the Kidnapping of Natascha Kampusch.” Frontiers in Psychology 4 (2013): 297.

Sunstein, Cass R. and Adrian Vermeule. “Conspiracy Theories: Causes and Cures.” Journal of Political Philosophy 17, no. 2 (2009): 202–227.

Sutton, Robbie M. and Karen M. Douglas. “Examining the Monological Nature of Conspiracy Theories.” In Power, Politics, and Paranoia: Why People are Suspicious of their Leaders, edited by Jan-Willem van Prooijen and Paul A. M. van Lange, 254–272. Cambridge, UK: Cambridge University Press, 2014.

Swami, Viren. “Social Psychological Origins of Conspiracy Theories: The Case of the Jewish Conspiracy Theory in Malaysia.” Frontiers in Psychology 3 (2012): 280.

Swami, Viren, Tomas Chamorro-Premuzic, and Adrian Furnham. “Unanswered Questions: A Preliminary Investigation of Personality and Individual Difference Predictors of 9/11 Conspiracist Beliefs.” Applied Cognitive Psychology 24, no. 6 (2010): 749–761.

Swami, Viren, Rebecca Coles, Stefan Stieger, Jakob Pietschnig, Adrian Furnham, Sherry Rehim, and Martin Voracek. “Conspiracist Ideation in Britain and Austria: Evidence of a Monological Belief System and Associations Between Individual Psychological Differences and Real-World and Fictitious Conspiracy Theories.” British Journal of Psychology 102 (2011): 443–463.

Swami, Viren and Adrian Furnham. “Examining Conspiracist Beliefs About the Disappearance of Amelia Earhart.” The Journal of General Psychology 139, no. 4 (2012): 244–259.

Swami, Viren, Adrian Furnham, Nina Smyth, Laura Weis, Alixe Lay, and Angela Clow. “Putting the Stress on Conspiracy Theories: Examining Associations Between Psychological Stress, Anxiety, and Belief in Conspiracy Theories.” Personality and Individual Differences 99 (2016): 72–76.

Swami, Viren, Jakob Pietschnig, Ulrich S. Tran, Ingo Nader, Stefan Stieger, and Martin Voracek. “Lunar Lies: The Impact of Informational Framing and Individual Differences in Shaping Conspiracist Beliefs about the Moon Landings.” Applied Cognitive Psychology 27, no. 1 (2013): 71–80.

Swami, Viren, Martin Voracek, Stefan Stieger, Ulrich S. Tran, and Adrian Furnham. “Analytic Thinking Reduces Belief in Conspiracy Theories.” Cognition 133, no. 3 (2014): 572–585.

Taguieff, Pierre-André. Court Traité de Complotologie. Paris: Fayard/Mille et une nuit, 2013.

Uscinski, Joseph E., Joseph M. Parent, and Bethany Torres. “Conspiracy Theories are for Losers.” Paper Presented at the 2011 American Political Science Association Annual Conference, Seattle, WA. September 2011. Retrieved from:

Uscinski, Joseph E., Casey Klofstad, and Matthew D. Atkinson. “What Drives Conspiratorial Beliefs? The Role of Informational Cues and Predispositions.” Political Research Quarterly 69, no. 1 (2016): 57–71.

van Der Tempel, Jan and James E. Alcock. “Relationships Between Conspiracy Mentality, Hyperactive Agency Detection, and Schizotypy: Supernatural Forces at Work?” Personality and Individual Differences 82 (2015): 136–141.

van Elk, Michiel. “Perceptual Biases in Relation to Paranormal and Conspiracy Beliefs.” PLOS ONE 10 (2015): e0130422.

van Prooijen, Jan-Willem. “Sometimes Inclusion Breeds Suspicion: Self-Uncertainty and Belongingness Predict Belief in Conspiracy Theories.” European Journal of Social Psychology 46, no. 3 (2016): 267–279.

van Prooijen, Jan-Willem and Michele Acker. “The Influence of Control on Belief in Conspiracy Theories: Conceptual and Applied Extensions.” Applied Cognitive Psychology 29, no. 5 (2014): 753–761.

van Prooijen, Jan-Willem, Andre P. Krouwel, and Thomas V. Pollet. “Political Extremism Predicts Belief in Conspiracy Theories.” Social Psychological and Personality Science 6, no. 5 (2015): 570–578.

van Prooijen, Jan-Willem and Eric van Dijk. “When Consequence Size Predicts Belief in Conspiracy Theories: The Moderating Role of Perspective Taking.” Journal of Experimental Social Psychology 55 (2014): 63–73.

Wagner-Egger, Pascal and Adrian Bangerter. “La Vérité est Ailleurs: Corrélats de L’adhésion aux Théories du Complot.” [“The Truth Lies Elsewhere: Correlates of Belief in Conspiracy Theories.”]. Revue Internationale de Psychologie Sociale 20, no. 4 (2007): 31–61.

Wood, Michael J. “Some Dare Call It Conspiracy: Labeling Something a Conspiracy Theory Does Not Reduce Belief in It.” Political Psychology 37, no. 5 (2015): 695-705.

Wood, Michael J. and Karen M. Douglas. “What About Building 7? A Social Psychological Study of Online Discussion of 9/11 Conspiracy Theories.” Frontiers in Psychology 4 (2013): 409.

[1] Although the article is referenced with Lee Basham as the sole author, it is in fact signed collectively, in the following order, by Matthew R. X. Dentith, Lee Basham, David Coady, Ginna Husting, Martin Orr, Kurtis Hagen and Marius Raab. We see no reason, and none is mentioned in the paper, to single out Lee Basham as the author of that piece, when all co-signatories have approved it and agreed to be associated with its contents. We will thus henceforth refer to this article as Basham et al. (2016).

[2] “Luttons efficacement contre les théories du complot” [Let’s fight conspiracy theories effectively], written and signed by Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Sebastian Dieguez, Karen Douglas, Nicolas Gauvrit, Anthony Lantian and Pascal Wagner-Egger, published on June 5th, 2016, in the print edition of Le Monde. As this text was widely read and shared (more than 2,000 hits on ResearchGate), the authors produced an English translation for non-French readers.

[3] Unless otherwise specified, paginations refer to the pdf version of Basham et al. (2016).

[4] In their writings, Basham and Dentith, a co-signatory of Basham et al., make much of a distinction between “particularists” and “generalists”. Briefly, the former term refers to an approach to conspiracy theories based on the examination of each specific claim of conspiracy and its respective argumentative and evidential merits (or shortcomings). The latter approach broadly treats conspiracy theories as a class of their own. The distinction was put forward by Buenting and Taylor (2010), who defend the particularist against the generalist view on the grounds that the generalist view a priori, and unfairly, assumes that conspiracy theories are irrational and thus need not be assessed on their own merits. Yet on closer inspection, this partition turns out to be meaningless, self-serving and self-refuting. First, Basham et al. (2016) essentially claim that conspiracy theorizing is generally warranted because there are conspiracies: that is a generalist view. Moreover, the evidence shows that conspiracy theorists are generalists, in that they tend to endorse several and varied conspiracy theories (see below). Yes, “generalism” might lead to the “flippant rejection” (14) of conspiracy theories, but it might just as well lead to their uncritical acceptance, which is also a generalist stance. On the other hand, “they” never said that all propositions regarding the possibility of some nefarious intent or actions from some group of colluding individuals must be immediately rejected. In fact, “they” could just as well be the “particularists”, because “they” are interested in individual differences between believers and non-believers, thus assessing cognitive processes and personality profiles on a “case-by-case” (15) basis.
“They” are even open to the possibility that there might be different kinds of conspiracy theories—say, minority conspiracies (Campion-Vincent 2005; Moscovici 1987; Wagner-Egger & Bangerter 2007), system conspiracies (Campion-Vincent 2005; Moscovici 1987; Wagner-Egger & Bangerter 2007), supernatural conspiracies (van der Tempel & Alcock 2015), in-group vs out-group conspiracies (Grzesiak-Feldman & Suszek 2008; Uscinski & Parent 2014) and so forth—a particularist approach. But of course, such convolutions are only needed if one absolutely wants to build artificial rivalries and point to imaginary enemies. As it turns out, “they” see the “particularist vs generalist” distinction as orthogonal to “their” interests anyway. Bad conspiracy theorists could still come up with, or endorse, good theories about actual conspiracies, but for bad reasons. By the same token, good conspiracy theorists could come up with false conspiracy theories, although for good reasons. While this might be fascinating for political score-keeping, “they” are merely interested in the psychology of all this, and thus the “evidential” content of specific conspiracy claims can indeed be safely put aside unless what “they” study calls for such details.

[5] Speaking of President Trump, there are two interesting insights to be gained from his election. First, it now seems clear that the epithet “conspiracy theorist” is not such a powerful engine of delegitimization. Trump has been derided as a conspiracy theorist over and over again, and yet he still managed to get elected. Second, similar to what happened with the “Brexit” vote in Britain and the recent referendum held in Italy, many conspiracy theories made the rounds about a “rigged” polling or electoral process. Pro-Trump, pro-Brexit and anti-Renzi outlets repeatedly claimed and feared that those in power would never allow their desired outcome to come to life. Yet it happened in all three cases, and it would be interesting to see what became of these “theories of conspiracy” when they were directly contradicted by the facts. Basham et al., in particular, could learn something about the operations of the mind when conspiracist thinking demonstrably fails: a Festinger 2.0 manifestation of cognitive dissonance, as it were, except that nowadays there does not even seem to be any such “dissonance” anymore.

[6] Please refer to: Dagnall, Drinkwater, Parker, Denovan, & Parton 2015; Goertzel 1994; Imhoff & Bruder 2014; Lantian, Muller, Nurra, & Douglas 2016; Sutton & Douglas 2014; Swami, Chamorro-Premuzic, & Furnham 2010; Swami et al. 2011; Swami & Furnham 2012; Wagner-Egger & Bangerter 2007; Wood, Douglas, & Sutton 2012.

[7] Come to think of it, because low ratings of endorsement are also correlated across conspiracy theories (surprise!), it might be the case that other individuals have also scrupulously examined all the evidence and come to the conclusion that the “official story” is, after all, okay in all or most cases. Why should these people be any less careful and heroic investigators than their conspiracist counterparts?

[8] See also: Bruder et al. 2013; Darwin, Neave, & Holmes 2011; Lantian et al., 2016; Lobato, Mendoza, Sims, & Chin 2014; Newheiser, Farias, & Tausch, 2011; Stieger, Gumhalter, Tran, Voracek, & Swami 2013; Swami et al., 2011; van Elk, 2015; Wagner-Egger & Bangerter 2007.

[9] Perhaps it is worth mentioning that some of “they”, in their quest to better—scientifically—understand such a mindset, sought to investigate the widely made claim that conspiracy theorists have a miscalibrated appreciation of chance events, in other words, that they tend to think that “nothing happens by accident”. This was and still is claimed to be a central feature of the psychology of conspiracy theory believers (e.g. Barkun 2003; Campion-Vincent 2005; Lewandowsky, Cook, Oberauer, Brophy, Lloyd, & Marriott 2015; Taguieff 2013). Well, “they” found that this is not the case, and “they” even reported this negative finding in an actual scientific journal (Dieguez et al. 2015). This goes to show that intuitive feelings and conceptual reasoning should, in the end, always be subjected to empirical testing. Then again, “they” also found, in the same study, the now familiar strong correlations among endorsements of various conspiracy theories, which “they” now know, thanks to research, are not associated with a low prior for randomness.

[10] Or perhaps Basham et al. think that regardless of the existence of a conspiracist mindset as described above, it would still be advantageous to entertain conspiracy theories rather than dismiss or avoid them. Yet the idea that, because real conspiracies happen, it would be worse to dismiss conspiracy theories than to consider them carefully is simplistic at best (more about this below). To be sure, large-scale conspiracies would be an awful thing. Yet what is missing from the calculus is a clear and fair assessment of the dangers of endorsing and spreading theories about conspiracies that are nonexistent. Indeed, perhaps real conspiracies would be better fought if conspiracy theories in general were not so widespread. Fighting for democracy and transparency might actually involve some hard work, rather than mere knee-jerk anti-authoritarian reflexes and random annotations on a screenshot, and known conspiracies could be precisely those that were not the subject of armchair “theorizing”, but were uncovered by real investigative work from, say, government officials, members of the “mainstream” media or even the state police. Thinking and acting on the basis of misinformation, especially if it turns out that no large-scale malevolent conspiracies are actually ongoing in parallel, could very much endanger the open society and erode democracy. Of course, the cost-benefit analysis is hard to perform, in particular because conspiracy theories involve, by definition, so many, or even mostly, unknowns. As a result, it is “their” opinion that the conspiracist mind is fruitless, whereas the uncovering of actual conspiracies is never, or almost never, the outcome of the conspiracist mind as “they” define it. “They” could be wrong, of course, but “they” think a democracy needs investigators, journalists, whistle-blowers, strong and democratic institutions and safeguards, as well as trust, rather than merely conspiracy theorists.

[11] For instance, as reported in Abalakina-Paap, Stephan, Craig, & Gregory 1999; Barron, Morgan, Towell, Altemeyer, & Swami 2014; Brotherton & Eser 2015; Brotherton & French 2015; Crocker, Luhtanen, Broadnax, & Blaine 1999; Dagnall et al. 2015; Darwin et al. 2011; Dieguez, Wagner-Egger, & Gauvrit 2015; Douglas & Sutton 2011; Douglas, Sutton, Callan, Dawtry, & Harvey 2016; Furnham 2013; Goertzel 1994; Grzesiak-Feldman 2013; Grzesiak-Feldman & Ejsmont 2008; Grzesiak-Feldman & Irzycka 2009; Grzesiak-Feldman & Suszek 2008; Imhoff & Bruder 2014; Jolley & Douglas 2014a; Jolley & Douglas 2014b; Lantian 2015; Lantian et al. 2016; Leman & Cinnirella 2007; Lobato et al. 2014; McCauley & Jacques 1979; McHoskey 1995; Moulding et al. 2016; Oliver & Wood 2014; Newheiser et al. 2011; Radnitz & Underwood 2015; Stieger, et al. 2013; Swami, Voracek, Stieger, Tran, & Furnham 2014; Swami 2012; Swami et al. 2010, 2011, 2013, 2016; Swami & Furnham 2012; Uscinski, Klofstad, & Atkinson 2016; van Elk 2015; Wood et al. 2012; Wagner-Egger & Bangerter 2007; van Prooijen, Krouwel, & Pollet 2015; van Prooijen & van Dijk 2014.

[12] Dentith even acknowledges that “it may still be useful to study Conspiracism and putative conspiracists, given that such a study may well explain particular cases of weird belief in conspiracy theories” (2016, 35). Well, “they” look forward to reading his research in this area, and can only lament that such an epiphany came too late for Dentith to sign our letter in Le Monde.

[13] As we say in French, one wonders what Raab, when signing Basham et al. (2016), est allé faire dans cette galère [roughly: what he got himself into].

[14] If anything, the evidence already suggests that labelling a conspiracist claim a “conspiracy theory” does not decrease its endorsement (Wood 2015). Surely Basham et al. (2016) should be delighted by such good news (at least Basham (2016, fn. 5) seems to be, somewhat).

[15] In a previous version of Dentith (2016), the pathologizing term “suffering from conspiracism” was even used (emphasis added). Dentith, however, still writes about “healthy conspiracy theorising” (emphasis added, Dentith 2016, 32), thus pointing to the existence of unhealthy conspiracy theorising, and Basham sees no problem with introducing the concept of “conspiracy theory phobia”, a term borrowed from clinical psychiatry (emphasis added, Basham 2016, 8). So although, thankfully, Basham and Dentith propose no “cure” whatsoever for these problems, they have at least collectively managed to medically diagnose pretty much everybody but themselves.


Image credit: Mike Licht, via flickr

Editor’s Note: The following is a slightly abridged version of Steve Fuller’s article “Science has always been a bit ‘post-truth’” that appeared in The Guardian on 15 December 2016.

Even today, more than fifty years after its first edition, Thomas Kuhn’s The Structure of Scientific Revolutions remains the first port of call to learn about the history, philosophy or sociology of science. This is the book famous for talking about science as governed by ‘paradigms’ until overtaken by ‘revolutions’.

Kuhn argued that the way that both scientists and the general public need to understand the history of science is ‘Orwellian’. He is alluding to 1984, in which the protagonist’s job is to rewrite newspapers from the past to make it seem as though the government’s current policy is where it had been heading all along. In this perpetually airbrushed version of history, the public never sees the U-turns, switches of allegiance and errors of judgement that might cause them to question the state’s progressive narrative. Confidence in the status quo is maintained and new recruits are inspired to follow in its lead. Kuhn claimed that what applies to totalitarian 1984 also applies to science united under the spell of a paradigm.

What makes Kuhn’s account of science ‘post-truth’ is that truth is no longer the arbiter of legitimate power but rather the mask of legitimacy that is worn by everyone in pursuit of power. Truth is just one more – albeit perhaps the most important – resource in a power game without end. In this respect, science differs from politics only in that the masks of its players rarely drop.

The explanation for what happens behind the masks lies in the work of the Italian political economist Vilfredo Pareto (1848-1923), a devotee of Machiavelli, admired by Mussolini and one of sociology’s forgotten founders. Kuhn spent his formative years at Harvard in the late 1930s when the local kingmaker, biochemist Lawrence Henderson, not only taught the first history of science courses but also convened an interdisciplinary ‘Pareto Circle’ to get the university’s rising stars acquainted with the person he regarded as Marx’s only true rival.

For Pareto, what passes for social order is the result of the interplay of two sorts of elites, which he called, following Machiavelli, ‘lions’ and ‘foxes’. The lions acquire legitimacy from tradition, which in science is based on expertise rather than lineage or custom. Yet, like these earlier forms of legitimacy, expertise derives its authority from the cumulative weight of intergenerational experience. This is exactly what Kuhn meant by a ‘paradigm’ in science – a set of conventions by which knowledge builds in an orderly fashion to complete a certain world-view established by a founding figure – say, Newton or Darwin. Each new piece of knowledge is anointed by a process of ‘peer review’.

As in 1984, the lions normally dictate the historical narrative. But on the cutting room floor lie the activities of the other set of elites, the foxes. In today’s politics of science, they are known by a variety of names, ranging from ‘mavericks’ to ‘social constructivists’ to ‘pseudoscientists’. Foxes are characterised by dissent and unrest, thriving in a world of openness and opportunity. (Read more …)

Author Information: Gregory Sandstrom, European Humanities University and Mykolas Romeris University,

Sandstrom, Gregory. “Trans-Evolutionary Change Even Darwin Would Accept.” Social Epistemology Review and Reply Collective 5, no. 11 (2016): 18-26.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:


Image credit: Lasso Tyrifjord, via flickr

“[T]he grandest narrative of western culture, the modern story of evolution.” — Betty Smocovitis (1996)

“[E]volutionary change occurs over timeframes that transcend virtually all the interesting contexts that call for sociological explanations. Specifically, genetic change occurs either over too large a temporal expanse to interest professional sociologists or at a level too far below the humanly perceptible to interest the social agents that sociologists usually study.”— Steve Fuller (2005)

The theory of evolution is “one of the most ideological of sciences.”— Eduard Kolchinsky (2015)

The controversy over Darwin’s evolutionary legacy in biology, philosophy and social science, re-examined at the recent Royal Society ‘new trends’ meeting, reinforces the belief within SSH that Darwin’s contribution to knowledge, whatever it may have been politically (cf. Patrick Matthew and the Arago Effect) or natural-scientifically, was incomplete and in many ways destructive when applied to human beings. The danger of Darwinian evolution being applied to society is something that even the arch-Darwinist Richard Dawkins admits. Some scholars, however, don’t seem willing to heed such a warning or even to acknowledge that it has merit.

Scholarly disagreement surrounding the concept of ‘evolution’ (read: history, change-over-time, development, etc.) isn’t only about biology, but also about the social sciences and humanities (SSH). Thankfully, practitioners in SSH have not often felt obliged to prostrate their fields before the promised hand-me-down evolutionistic ‘contribution’ of the natural sciences, including biology. Yet there has also been a fruitful mixture of concepts between biology and SSH, one that from time to time needs to be untangled or re-catalogued to restore a better proportion during a temporal disharmony.

One can see a modest level of internet buzz surrounding this Royal Society event from a variety of exotic quarters: from mainstream Nature, the British Academy, and philosopher Nancy Cartwright, to fringe journalism and outright philosophistry that is basically neo-creationism, in USAmerican style, shouted loud and proud by the Intelligent Design Movement, and likewise aggressively resisted by the Darwinistas and members of the humanities Evolutionariat. And of course there is the ‘orthodox’ scientistic right-wing conservative Kabala in pop USA culture, which seems to know surprisingly little about the philosophy of science. One almost needs a guide to navigate one’s way through all of this noise and pretence to defence of territories and ideologies, which oftentimes comes at too high an intellectual cost.

The gap between the ‘two cultures’ in this sense is as fresh as ever, which the Discovery Institute and their ‘new atheist’ opponents both exacerbate, together and taken separately. In our ‘multiversities’ today there are many more than just ‘two cultures’ or a ‘third culture.’ We try to make sense of many of these ‘cultures’, that they may pollinate our understandings and identities both in the digital internet universe and in the actual physical university structures that institutionally support most of the people reading this message. The gap in understanding now evident in the N. American landscape is simply that natural science has come to wear the mantle of a ‘culture apart’ from all others. In this view, natural scientists have now run into a wall in trying to dictate their particular discipline’s ‘evolutionary principles’ to all other ‘knowledge cultures,’ including SSH. And now philosophy and social science have been given a platform to fight for their intellectual rights not to be imperialised by a frenzied horde of Darwinists.

In addition to naturalistic evolution, the ‘humanistic’ SSH discourse surrounding the term ‘evolution’ is rich and varied, with many open disagreements (e.g. R. Lewontin and J. Fracchia vs. W. Runciman 2000s, Fuller 2005-2010s). If one is to respect the cultural diversity of practices that R. Dawkins would attribute to ‘extended phenotypes’ in his gene-centric view of the world, then one needs to include the voices of philosophers and social scientists. The typical biologistic generalisations and the mere condescending (pretending) to understand cultural fields have become tired reminders of anti-intellectualism within the Evolutionariat. The Royal Society gathering generally addressed the task of raising awareness about SSH on Day 3 – November 9, though the overall agenda was dominated by a kind of ‘biologism’ of the modern and extended evolutionary syntheses (MEES).

Nevertheless, the event’s mission was no less than to reposition ‘Darwinism,’ as well as clarify how 21st century evolutionary theories can effectively be(come) post-Darwinian. Thus, we come to a historical moment when the option of discarding much of the ‘crude Darwinism’ of the degenerate late-modern period, infused with biologistic imperialism in SSH, may now be propositioned further. By now, with annual Darwin Day celebrations in the Anglo-American world, this debacle of Darwin-idolisation has turned into the “Lysenko Affair of the ‘West’.” Given the opportunity for evolutionary ideas in SSH to be tried by a jury of representative scholars with the prospect that they be found largely empty of many of their promises, the prospect of trans-evolutionary change would indeed be seen as a direct threat to both the coherence and any claim to significance of the MEES. Darwinian evolution either needs to be significantly repositioned and shrunk in SSH usage or it needs to be thrown out altogether.

To achieve a way forward beyond the constraints and false pathways left over from the old Darwinian corpus, we introduce the notion of ‘trans-evolutionary change’ as a feature particularly of SSH (humanistic) rather than naturalistic fields. This is a trans-evolutionary change even Darwin would accept, as it acknowledges humanity ‘in tension,’ but not necessarily always ‘at war’. The Russian scientific tradition made a major contribution even to the ‘western’ canon on ‘evolution,’ in the names of Karl F. Kessler and Piotr A. Kropotkin, by highlighting ‘mutual aid’ (vzaimopomoshch), ‘cooperation’ and later ‘symbiosis’ and ‘symbiogenesis.’ By ‘trans-evolutionary change’ the author thus identifies human tension in contrast with the struggle motif of the increasingly discredited Darwin-Malthus-Hobbes school.

This topic has been raised several times already at SERRC, though with less of the flair that comes from Steve Fuller’s own writings. The long paper “Darwinian Social Epistemology” by William Lynch, a student of Fuller, was responded to adequately by Peter Taylor with a short critique. Lynch’s longer reply to Taylor includes this gem: “I accept that simple, biological explanations of complex human behaviors are unlikely to be effective.” O.k., then maybe it’s time he matured intellectually, moved beyond 19th century ‘Darwinism’ dressed in pragmatic USAmericano culturological garb and considered dropping the reductionistic evolutionistic ideology in SSH? Taylor replied to Lynch convincingly in April 2016. This message reconnects with that one and takes it a stage further.

Taylor defines ‘artificial selection’ as “deliberate selection based on some explicit criterion”, which he calls “a restrictive form of explanation of evolutionary change” (2016). In both of these notions I agree with Taylor and disagree with Lynch. The larger issue involves the kinds of non-evolutionary change that are legitimately available for considered scholarly discussion, instead of hand-waving and dismissal by a throng of backwards-looking, Darwin-outdated biologists and self-styled ‘public understanding of science’ or STS gurus. While I agree with Taylor that it appears Lynch’s “view of Darwinism is what drives his taking on of Fuller and so it would be difficult for him to satisfy a reader like me,” I disagree that banning any and all talk of design or Design in the Academy, particularly in SSH, e.g. social epistemology, serves a constructive purpose.

It is all too obvious to everyone involved that the Discovery Institute winks with little (secret) giggles when speaking about human design, i.e. design by intelligent agents, the effects of intelligent agency, etc. Such talk is all standard fare and nothing spectacular, since it can be seen in any SSH field. Human beings are involved in ‘designing’ processes, just as we are in many other processes besides ‘designing.’ It is by now both sad and tired that the ID people still seem to think they’ve reinvented the wheel, while making a major innovation on sliced bread (ReVoluTion!), in the concept duo of ‘intelligent’ + ‘design.’ Perhaps Taylor’s view is simply that Steve Fuller’s representation of ID isn’t one he can personally, confessionally or professionally endorse, as it overlaps necessarily with Fuller’s worldview, which has apparently undergone (if by no more than label alone) a shift in recent years.

To achieve a way forward by dropping the tired chains of the old and new Darwinian corpus, we introduce the notion of ‘trans-evolutionary change’ as a particular feature of SSH, rather than biological or natural scientific fields. Trans-evolutionary change acknowledges humanity in tension and on smaller space-time scales than Big History naturalistic evolutionary theories. As well, it highlights the peculiar interest in the Extended Mind Thesis (Clark and Chalmers 1998), which is pushing envelopes in philosophy of mind, group cognition and dynamic systems theory. This is done to show there are burgeoning fields of study in philosophy and social sciences, e.g. such studies involving the ‘extensions’ of humanity in a non-evolutionary way, that are ready to take off once the proverbial Darwinian monkey is removed from SSH’s back. Focus on these studies may help make more coherent the Royal Society’s “philosophical and social sciences” agenda moving forward.

Trans-Evolutionary Change Can be Observed in Five Things

1) A category of change by human beings (i.e. in the anthropocene period) that occurs across, above, under, <, >, beyond or through the temporal and spatial scales found in biological and other naturalistic evolutionary theories.

What’s the minimum allowable time that it would take for something to ‘evolve’? If there is no minimum, then there is no quantifiable scientific theory based on time. If you allow a minimum time scale, even across a range of applications, then you open the possibility of studying ‘trans-evolutionary’ change, because there must then be ‘actions/processes/origins’ that cross the relevant time scale. In such cases, it must be left open for alternative ways to discover an answer using a non-evolutionary toolkit.

Darwin’s defenders often avoid the importance of exploring and explaining this ‘scale and identity controversy’ in public. Darwin had studied geology with his mentor Charles Lyell, and noted: “if we make the same allowances as before for our ignorance, and remember that some forms of life change most slowly, enormous periods of time being thus granted for their migration, I do not think that the difficulties are insuperable.” The large time scales involved in Darwin’s evolutionary narrative are quite clearly not the same scales involved when decisions are made, artefacts made and actions taken on the level of institutions, communities, groups, etc. that SSH studies.

The question logically then arises: what happens when we are not dealing with ‘enormous periods of time’ but rather with much shorter, non-evolutionary time scales? The particular focus of interest that SSH has taken as its rightful province from the beginning until now has found a new name, one which suits our purpose of signifying trans-evolutionary change. More than simply a new geological period, the epoch of trans-evolutionary change is now called: the Anthropocene.

2) Not only (reducible to) the externalist ‘Darwinian’ version of ‘natural selection’ acting upon an object from ‘outside,’ but rather also invokes the internalist (e.g. extended mind) notion of ‘human selection’ (Wallace 1890) from ‘inside’ a person.

This requires a kind of social epistemology that Fuller acknowledges as “a distinctive counter-biological sense of ‘social selection’: religious, academic, and political.” (2005: 6) Once people see that deterministic Darwinian models of social change are ‘not even wrong,’ the desire for an alternative that focuses on ‘selection’ on the human level will become more tangible.

Perhaps the most heinous result of so-called Darwinian logic has been that it handicapped a whole realm of knowledge with expectations that it could not meet. How was it ever thought possible that a naturalistic, externalist view of human society and culture could take priority over a humanistic view of society? One ideology explores not only Kant’s physical notion of “the starry heavens above”, but also the personal notion of a “moral universe within,” which is the anthropic dimension.

3) Investigable on both the individual (person) and population (society) levels (i.e. multiple levels) simultaneously, interactively and proportionally.

There is no avoiding the fact that the single discipline that has put most of its attention and resources into the study of “individuals and groups” is sociology. When biologists borrow language from SSH and weave it into their disciplinary language, with variations, adaptations and neologisms (e.g. ‘memetics’) inserted alongside it, they often distort or mangle its key message(s). One example of this is the notion of ‘group selection’ vs. ‘individual selection.’ Sociologists have been studying both, but with a concentration on the ‘agency’ of ‘selection’ that is far more developed than evolutionistic musings. We already have what biologists later decided to call “multilevel selection,” which is typical language already in SSH, where there are often multiple competing (or cooperating) hypotheses.

4) Dedicated to intentional, mindful, wilful, planned and directed changes (i.e. teleological) that are temporally and spatially lived and enacted by human beings within their (read: our) social, cultural, natural and other environments.

Nothing much really needs to be added about this feature of trans-evolutionary change. Enough people know about it and have written about it already. It’s a simple question of conversational proportionality and of the ideological control over journal publications and ‘associations’ that keeps ideological anti-evolutionism (as if it simply must by definition come from USAmerican fundamentalists and biblical literalists) from gaining a ready audience. Trans-evolutionary change serves to crush the materialistic aspirations of old-guard Darwinists and evolutionists because it shows quite simply, plainly and clearly how varieties of non-evolutionary change can be studied in SSH.

5) Inclusive of theories about sources and formal/final causes of ethics and morality (in addition to efficient and material causes) that transcend adaptationist evolutionary accounts based on naturalist reductionism.

This is a macro-feature of the trans-evolutionary discourse: by beginning in SSH, we forego the dilemma of whether or not to focus solely on efficient and material causes. The alternative, which is required for investigation on the more holistic level of SSH than NPS, allows the proper study of formal and final causes (Aristotelian causality) in ethics and morality. Naturalist reductionism is then seen as an (only efficiency/materialist) ideology with limited purposeful applicability in fields where elevation to mind-also and heart, rather than reduction to body-alone, is required.

The above is just a brief point-form introduction to trans-evolutionary change, which is one of the main topics of my upcoming book on Human Tension. These 5 indicators provide a basic outline of the new concept of trans-evolutionary change. They are not meant to be exhaustive, but rather indicative that this topic is ripe and ready for exploration and application across a range of scientific and scholarly fields. Particularly for those with a philosophical interest in the communication and sharing of knowledge, the notion that knowledge ‘extends’ and that our minds also can be perceived as ‘extending’ into society, while society also applies ‘intensions’ on our lives, has many opportunities for both scholarly and everyday application beyond the boundaries of evolutionary thinking.

If a person does not wish to acknowledge the notion of ‘trans-evolutionary’ as legitimate, as having a proper semantic meaning or as worthy of conversational inclusion, nothing can stop that person from holding such an attitude. One may then need to be very careful in speaking with them, looking more closely at their particular meaning of ‘evolutionary,’ because it might be tricky or unclear. With some people, evolutionary theories turn into an evolutionistic worldview, a Darwin-idolising anti-theism apologetics based on aggressive ‘new atheist’ rhetoric, rather than simply an arrangement of more or less clear and important scholarly ideas about change, motion, chance, intention, purpose, etc.

Yet with the conundrum of convoluted definitions, evolution is also used by others with sometimes too narrow a range of explanations, e.g. ‘only biology.’ This cohort of unknown size has an over-inflated view of biology as “the science of Life” and therefore as Queen of the Academy following the former Science Queen – physics. The importance therefore of having enabled a flanking move to evolutionary theory with trans-evolutionary change, by accumulating arguments in sovereign, independent, autonomous (but integral), developing SSH fields of knowledge, has many potential consequences. Do biologists really wish to restrict ‘evolution’ to being ‘strictly a biological’ idea and if not, then which new ‘map of knowledge’ would they suggest so that ideological biologism (which they likely won’t openly name) does not continue to plague the academic landscape? I see nothing coherent coming from biologists, even the non-exaggerators, to visualise a more realistic ‘map of knowledge’ than the grossly disproportionate view that many of them currently hold, uneducated in the sociology of science as most of them are.

My appeal then is to people first, not to abstract ‘post-evolutionary’ ideas. I’m not interested in those who feel they categorically must refuse even to consider the notion of trans-evolutionary change. It is rather to those who may be curious to depart from the biological status quo into a post-Darwinian reality, to metaphorically ‘follow the white rabbit’ away from Darwin’s dehumanising determinist hole into a more fulfilling exploration of human society, that I appeal. A trans-evolutionary thinker may and often does know the ‘evolutionary canon’ rather well, but also moves beyond it to embrace a more dynamic, realistic model of choice, change and human development in 21st century SSH. Such thinkers therefore need no longer embrace the mainstream ‘strictly neo-Darwinian’ or ‘Modern Synthesis’ version of evolutionary theories in the natural sciences (or in economics, sociology, anthropology, psychology, etc.), because we are right now in the midst of significant changes to the ‘paradigm’: an (over-)extension, amendment, revision or even ‘replacement.’

The Intelligent Design Movement has turned into such a circus that even one of its ringleaders, William Dembski, recently had to publicly ‘retire’ from it. He simply cannot be defended as a ‘revolutionary’ IDist anymore. One of the mainstays of the Discovery Institute for over a decade, Casey Luskin, also recently left the DI to pursue ‘further studies.’ Yet the so-called Darwinists display radical tendencies just as do their IDist ‘debate and publish’ partner foes. In one of the most absurd dead-ends in late-modern intellectual life, D.S. Wilson’s biologistic ideologising at the Evolution Institute, with ‘Evolution for Everyone’ and most recently the misguided Robin Hoodism at ‘Evonomics’, has led him now even into the promotion of ‘social Darwinism’. While the scientific ethos to reject hubris with humility generally holds, there do seem to be cases within the party-atmosphere of the Evolutionariat, in some psychology of science sense, where scholars believe they have achieved a kind of ‘god’s eye view’ and conceptual monopoly over change. However, in this case, by returning to a 19th century naturalist icon in Darwin, Wilson isn’t exactly blazing new territory. He is rather waving a smudged, outdated flag of Evolutionary Naturalism towards SSH as he rides off towards a detoured, naturalised/under-humanised destination for humanity. And already he has attracted a small mob to his journey of fuzzy evolutionistic logic.

Yet when leaders of the Evolutionariat, people like D.S. Wilson, are caught actually saying things like, “The biggest victim of the stigmatized view of Social Darwinism has been all of us,” most sane people, most normal people, basically just most people realise that something has gone very wrong. Can this type of ideologically evolutionistic mess be avoided, or perhaps just somehow cleaned up and fixed, following this recent Royal Society meeting? While the option of ‘replace,’ ‘amend’ or ‘extend’ was on the table, speakers of course could easily escape facing the ‘over-extension’ of the modern evolutionary synthesis by huddling into the safe status quo backwardness of Darwinian thinking. Or, perhaps the good ol’ English paddle is what Darwin’s theory of ‘evolution by natural selection in the struggle for life’ needs.

It is a unique moment in the landscape of history, philosophy and sociology of science that there is now forged such a strong post-Darwinian evolutionary biology position (L. Margulis and the Third Way), which is what led to this important and timely Royal Society meeting. Steve Fuller has raised this issue in multiple venues and on many occasions, at least since 2005, and it seems to be only a question of time before the public conversation finally catches up to his unique cybernetic design intelligence contribution. This may be yet another timely opportunity to re-explore his views on this topic, as several people at SERRC have recently found air to voice their concerns and criticisms of Fuller’s evolutionism, creationism and IDism, science and religion work. And well, if Peter Thiel can promote (lowercase) ‘intelligent design’ (not to be confused with the theistic ‘design argument,’ right?), then why can’t most other people in the 21st century at least acknowledge that it exists and isn’t really that big a deal?

The most meaningful aspects of this conversation, in my view, have very little to do with the actual person or ideas of Charles Darwin. What an amazingly convenient distraction the recluse from Downe, England has become! It’s time to close that chapter and read on further than Darwin in the Book of Nature. The key factors of interest here in SSH have been more about the ideological movement of the so-called ‘Darwinists’ and the illogical inversion of processes for origins (cf. Whitehead) from the start. And now, with the Royal Society, the rest of society has also caught up with the ‘Darwinists,’ who can now largely be rejected in society, just as R. Dawkins has now been publicly revealed to be widely disliked and disapproved of by scientists (even when his name is not mentioned in the survey question!) for his aggressive agnosticism/atheism and distortions of scientific knowledge. This is something that social epistemology can help us uncover and better understand … in case any SERRC members are interested in proactivating studies of trans-evolutionary change across a range of SSH fields, studies which, when broadly and specifically applied, leave Dawkins’ ‘memetics’ far behind.

Sociobiology was tried and failed. Memetics failed. Evolutionary psychology is trying and failing miserably, because its governing principles are self-contradictory and it has ideological self-blinders on. Why do they keep desperately looking back to Darwin for answers? It is time to change the music program from the dissonant Darwinist hymn sheets that some scientists have been using to experiment with their humanistic fantasies upon the world. As the times change, we are now no longer willing to accept the characterisation of ‘species egalitarian’ when speaking above the merely biological, physiological or zoological levels. Uplift from homo to human is a vertical cultural process, in which we’re best either to forget completely or, if necessary, simply to put ‘in its proper limited place’ the horizontal naturalism of the Beagle Enlightenment story in SSH.

Trans-evolutionary change helps to overcome Darwin’s cultural regret with a less scientistic, naturalistic and generally pessimistic approach to human existence on Earth. Trans-evolutionary change ushers in the potential for global-social reconciliation of science, philosophy and theology/worldview discourse through magnetism by rotation. Let us see those post-Darwinian ideas that are being blocked en masse by defensive biologists and naturalists. It does no good whatsoever first to call a people, community or society ‘under-evolved’ or even ‘un-evolved’ and then to claim that some ambiguous cultural evolutionary theory of human development ‘scientifically’ proves this on a scale of your choosing. That is simply civilisational racism.

In contrast, with trans-evolutionary change, multiple levels of selection mean multiple interpretations of development are possible and even encouraged, based on the resources available to the community rather than demanding internal compliance to some external evolutionary civilisational Standard. The User instead has to supply the content for the magnetism, which takes discussions of human-social change away from Darwin’s outdated evolutionary framework towards more contemporary advanced discussions about emergence, agency, design, planning, and indeed, human extension, though this latter language is still not widely familiar in SSH.

The way forward is to begin applying trans-evolutionary thinking in SSH as a way to cleanse many humanistic fields of the naturalistic plague that was part of the 20th and early 21st century science wars. It will immediately become obvious who actually wishes to ‘try’ and use TEC and who clearly does not. Those who do not wish to try trans-evolutionary thinking will become the laggards in 21st century science, philosophy and theology/worldview discourse, stuck perhaps by a fear of the future as much as a love of the past.

It’s time to send Darwin down the scholarly river into history, away from SSH land where he is no longer welcome. And it’s not only about treating women as 2nd class citizens and marrying his cousin. Yes, it means there will be a cohort of angry evacuees from Darwin; those who wish to remain Darwinists to the end, astonishingly even in SSH, who ultimately must demand rescue from the absurdity of the intellectual territorial flooding that they now occupy; turned out into a land of SSH giants that pushed their heroic scientist idol away.

Darwin’s theory of the struggle for existence and the selectivity connected with it has by many people been cited as authorization of the encouragement of the spirit of competition. Some people also in such a way have tried to prove pseudo-scientifically the necessity of the destructive economic struggle of competition between individuals. But this is wrong, because man owes his strength in the struggle for existence to the fact that he is a socially living animal. – Albert Einstein (1931)

This is so much closer to an ‘eastern’ worldview than a ‘western’ one. A neutral onlooker might wonder if there is more going on with Darwin-Malthus-Hobbes western ‘struggle’ proponents and practitioners than meets the eye on global humanity scales.

To close, a peroration: it would do many, but not all, of us the honour (that’s a non-scientific principle of ‘democracy’ in action, to which I’m confident a significant ‘WE’ in global societies is ready to say together: ‘cheerio Charles!’) if England would please take Darwin’s pigeons, barnacles and worms back to Downe, U.K. and provide Darwin with a proper civilisational retirement from public attention. Patrick Matthew and the Arago Effect offer a preferable diversion, courtesy of the trans-evolutionary stream.

Smocovitis writes of “the grandest narrative of western culture, the modern story of evolution” (1996), perhaps only up to the limits of her natural(istic) science. A more inspiring humanistic ‘narrative’ of SSH than the one constructed in Victorian England is made possible once a person passes beyond naturalist ideology in the name of ‘evolution.’ Indeed, the grandest narrative of global human culture may eventually come to be seen as that of ‘human extension’ (services) and thus with it also our lives in human tension beyond biology alone.

Author Information: Gregory Sandstrom, European Humanities University and Mykolas Romeris University,

Sandstrom, Gregory. “No Fuller than Complete: Darwin’s Age Comes to an End.” Social Epistemology Review and Reply Collective 5, no. 11 (2016): 12-17.

The PDF of the article gives specific page numbers. Shortlink:


Image credit: Marc Brüneke, via flickr

The bagpipes are playing the funeral oration for Darwin’s evolutionary theories as they have been chronically misapplied and ill-championed in the social sciences and humanities (SSH), the true home of the Darwin wars. The feverish century-long pitch of the drum, drum, drumming of evolutionary war: the war in nature, struggle for life, survival of the fittest, man vs. nature, man vs. each other motif, has finally moved past its zenith. No fuller than complete, the Age of Darwinian evolution now comes to an end, with a sign to mark its place at the Royal Society.

The Scottish originator of the phrase “natural process of selection” (1831) might be put out by all the notoriety that C. Darwin has received over the past 158 years since publication of ‘The Origin.’ But the fall from grace that Darwin is set up for once again in London, this time in front of a jury of world-class intellectual peers that will include philosophers and social scientists, may be such that the gracious Scot Patrick Matthew would never have wished Darwin’s eventual fate upon him.

At an upcoming meeting at the Royal Society on ‘new trends in evolutionary biology,’ the prospect of finally over-turning ideological Darwinism in biology, with global leading evolutionists in attendance, is on our doorstep. Will Darwin’s Age finally come to an end? Darwin’s theory now comes across to the educated eye as ‘developed but incomplete,’ in stark contrast with how things looked in the mid-19th century.

When Darwin wrote privately to his mentor C. Lyell in 1860 about “a complete but not developed anticipation!” of his theory (of the origin of species by means of) natural selection, he obviously hadn’t yet heard of the so-called ‘Arago Effect’ of scientific priority. Otherwise, he wouldn’t have written it. Darwin’s letter symbolically gives official priority over the discovery of ‘natural selection’ to Matthew; ‘complete’ signals that Darwin didn’t add anything new and that his theory was ‘anticipated.’ A serious argument can thus be made that we are hanging onto the name ‘Charles Robert Darwin of Down, England,’ etc. more than we are any longer confident that the ‘evolutionary’ ideas coming from Darwin’s 19th century ‘canon’ of hand-me-down texts are still fuel for the scientific imagination and research programs of today.

As Matthew wrote to the Gardener’s Chronicle in making his claim to having pioneered the idea of “nature’s law of selection,” others were not ready to receive what he wrote at the time, and there was a “spirit of resistance to scientific doctrine” in positing nature’s ‘selection,’ “that caused my work to be voted unfit for the public library of the fair city itself. The age was not ripe for such ideas.” This was said in 1860 (less than two years after publication of OoS), when Matthew responded in print to a review of Darwin’s ‘Origin’ that suggested Darwin was original and held priority over ‘natural selection.’ Publicly, however, Darwin would only suggest that nobody had read Matthew’s work and that he took nothing from Matthew’s ideas, even through word-of-mouth from others who had read Matthew, as a kind of ‘knowledge contamination’ (Sutton 2014).

What would happen if someone found something like an English acronym N.L.O.S. or even the directly stated Matthew phrase “nature’s law of selection” in any of the personal correspondence between Darwin and someone before 1858? If any such thing exists, with it the priority game for Darwin would surely be up, with disgrace to his legendary name. But the so-called ‘smoking gun,’ much like those pesky transitional fossils in the geological record on Earth, remains yet to be found, if it exists at all.

Shift to 2016 and the ‘culture war’ in the Anglo-American English world surrounding the term ‘evolution’ (leave aside ‘creationism’ for the time being) is about to get a facelift with the upcoming Royal Society ‘new trends’ meeting. The scholarly discourse of change-over-time in SSH today has little to nothing to gain from Darwin’s corpus any more, but it may still lose much by not dropping him and his unruly ideological followers now.

Perhaps one of the biggest problems in the Anglo-American discourse is that many people there seemingly “don’t know what they don’t know” regarding evolutionism in SSH. In this case, in not knowing, they continue to abuse evolutionary language, under the spell of Darwinism. This happens both on the side of atheists who try to argue that evolution offers a scientific argument to bolster their atheism, and on the side of theists who employ the term ‘evolution’ even in the most absurd of cases in trying to linguistically woo their opponents.

At the USA’s evangelical Christian-based BioLogos, where ‘science and faith’ are supposed to co-exist peacefully (D. Falk), except when they don’t (e.g. cloning, contraception, pharmaceuticals, nano-technology, neuro-linguistic programming, etc.), or be ‘integrated’ into each other (J. Swamidass), except when they aren’t (welcome to 21st century fracked philosophy!), and evolutionary biology is not considered as problematic to religious belief, except when it comes to the mystical genomics of Adam & Eve, there is a glaring problem of equivocation by the Management regarding the meaning of ‘evolution.’ Yes, folks, all good intentions aside, they really don’t know what they don’t know and furthermore don’t want to know. They want to be stubborn ‘creationists’ at their local churches instead.

The reason for this is that BioLogos holds an ideologically ‘scientistic’ epistemology, where scientisation runs rampant over knowledge with implications for secular human nature, character and theology (cf. A. McGrath’s ‘scientific theology’). Thus, BioLogos has demonstrated that it actively supports the over-extension of ‘evolution’ into evolutionism and uses metaphor transfer from natural to artificial ‘designs.’ We also see this in the over-extension of ‘creation’ into ‘creationism,’ which BioLogos not so subtly endorses. Sadly, they offer no excuse or explanation for their simple and obvious grammatical error in displaying their confused ideologies.

Here’s one example. A commenter named Rafael Galvão wrote on their site:

I have a degree in economics and my object of study is the history of economic thought. Biological evolution and economic evolution are always used interchangeably, like the models of Samuel Bowles and Herbert Gintis are drawn from the evolutionary theory. I think it’s interesting that there are lots of discussions in the history of economic thought about Malthus and in the history of theology he’s basically forgotten.

This comment was ‘liked’ by BioLogos Managing Editor Brad Kramer, Joshua Swamidass & @Caspar_Hesp (Forum Moderator).

We can therefore conclude, aside from not recognising a simple falsehood in economics – evolution is not “always used interchangeably” – that BioLogos thus even promotes interchangeable usage of ‘evolution’ in biology and economics. This is significant by itself because they “don’t know what they don’t know” on this topic. They display no public recognition regarding ideological evolutionism and its underside, even welcoming a Christian evolutionary psychology project (which was not well received) into their Templeton-funded grants program.

Yet BioLogos is, unfortunately, not alone here, and its conflation of NPS with SSH joins a considerably large group of economists who, if they don’t call themselves ‘evolutionists,’ at least openly apply what they consider loosely (because there isn’t much more than that) ‘evolutionary principles’ in their economics work. Whether the so-called principles themselves are worthless and of minimal theoretical contribution doesn’t seem to matter to them, as long as the work is labelled ‘evolutionary’ and thanks be given to Darwin in the genre of scientific origins mythology.

With many fields in play, you might be wondering where this is going and why it’s important. Economics is a clear and blatant example of a field in confusion as a result of evolutionism in SSH. When the notion of what exactly does and what doesn’t evolve is not even raised, and a discussion not had to clarify borders or boundaries, or at least evolutionary ‘aspirations,’ then little can be done to stop what Dennett called “Darwin’s universal acid.” Darwin is upheld by some as one of the greatest developers of SSH fields; he has been called the founder of psychology, of sociology and of modern political economy, etc. The notion that Darwin’s ‘principles’ may apply equally to human beings as to other creatures and even plants, rocks, the solar system and universe, etc. symbolizes an existential threat to human freedom and sovereignty, while some also see it as some kind of liberation.

One need only bring up one example among hundreds to throw a cold bucket of water on the notion that BioLogos actually supports ‘evolutionary economics’ or even knows much about what it means. They seem unaware of the potentially deadly social consequences that a misunderstanding of economic development might cause. With a law of competition based on “survival of the fittest in every department” between people, “[w]e accept and welcome great inequality (and) the concentration of business,” said Andrew Carnegie, “in the hands of a few.” Is this the kind of Darwinian economics BioLogos supports? It sadly remains a problem that BioLogos “doesn’t know what it doesn’t know” and therefore thinks that evolutionism applies everywhere without limits. Perhaps someday we will receive some clarity from BioLogos regarding abuses, and also under-sights, like why they never discuss cutting-edge biology and genetics involved with the Third Way. BioLogos shows ‘No Results’ regarding this “New Trends” meeting on its website although it has many biologists among its commentators. The USAmerican discourse surrounding ‘evolution,’ from this global village Canadian’s perspective, is, given such intentional avoidance of crucial issues as at BioLogos, indeed largely a side-note to more interesting and important things.

Of key import at the Royal Society meeting is the notion of an ‘extended evolutionary synthesis’ and also the meaning of evolutionary ‘over-extension,’ since the notion of ‘replacement’ or major correction (amendment) for (neo-)Darwinian evolutionary theory is now realistically in play. R. Dawkins had already warned us in 2004 about getting “not too extended,” regarding the so-called ‘extended phenotype.’ In the McLuhan tongue, there is a distinction to make between a ‘speed-up’ and being ‘flipped.’ Thus, if evolutionary theory is ‘extended’ too far, sooner or later it ‘flips’ and becomes something other than itself at the core.

One of the most difficult puzzles nowadays seems to be finding opportunities for non-evolutionary thinking. Are there any replacement-like ‘non-evolutionary’ options for studying human character ready and available to consider that Darwin could never have imagined? If so, let us see some of them presented publically at the Royal Society.

In the present Wikipedia example, Objections to evolution is “part of a series on Evolutionary Biology.” This may seem unimportant, but it is a simple example, repeated rampantly, wherein objecting to evolution can only happen ‘legitimately’ in biology, yet at the same time the concept is widely used outside of biology, even in SSH. This raises the question of whether objections to evolution outside of biology can be legitimated, and on what grounds one would decide if they are legitimate. If one listens only to the status quo of ‘normal evolutionary science’ voices in the Academy nowadays, one could quite easily block this questioning out. Yet this Royal Society meeting makes the ‘universal Darwinism’ (Dawkins 1983) position very difficult to defend anymore and indeed much easier to leave aside for more progressive models.

Evolutionary ideas borrowed from biology are caught in the natural-physical scientific methodology of requiring that the ‘interpreter’ of nature (scientist) be entirely ‘un-reflexive’ in their scientific practice. Such an approach takes aim at a kind of ‘positive’ science or ‘objective’ knowledge which is thought to liberate the individual researcher from his or her typical human reflexivity into ‘objective scientific neutrality.’ But this is not the kind of ‘knowledge’ that is produced and shared in SSH, no matter how much easier it would make things if we could find ‘natural science-like’ looking data collection techniques.

Just as SSH scholars cannot escape their (our) reflexivity in our various research topics, neither can we impose our own worldview upon others as if the scientific theories and methods we use and advocate supposedly require that. As Dawkins once cautioned, however, there are ‘Neville Chamberlain evolutionists,’ i.e. atheist-appeasers who argue that science and religion are somehow mutually compatible. The compatibility argument for science, philosophy and theology/worldview discourse runs contrary to what Dawkins and many of the ‘new atheists’ believe, which is that science and religion are fundamentally incompatible.

Theistic evolutionists (TEs) or evolutionary creationists (ECs), on the other hand, believe that science and religion are compatible, even while there are oftentimes disagreements and even open ideological conflicts. TEs constitute the majority and current default position among Abrahamic theists. Yet the protestant evangelicals who swarm to this topic of conversation, turning it into a large in-market, often come across as simply confused and under-educated, whether they self-identify as ‘creationist’ (against Darwin’s view that “it becomes highly improbable that they [species] have been separately created by individual acts of the will of a Creator”) or not.

One problematic feature of this development, which has arisen only in the past 5-10 years, is that ideological TEs (which means all of them, by definition of the term ‘evolutionists’) often won’t stand alongside their fellow theists who haven’t given up Orthodox teachings for evolutionistic ideology. Yet for TEs who are otherwise orthodox and mainstream even without carrying the label, the continual embrace of evolutionism may come to be seen as an unnecessary linguistic act that can be corrected simply by will of words and nothing else.

In short, there certainly are people who need to hear the message: “Please stop trying to ‘evolutionise’ everything. We see through this ruse with trans-evolutionary change.” The spirit of the difference between ‘evolving’ and other types of change and the discernment of evolution’s limitations is something that TEs still seem unable to experience or perceive. This condition may change with the inclusion of trans-evolutionary change into SSH discourse.

One problem in the sub-field of social epistemology (i.e. not just individualistic analytic ‘western’ epistemology, or even Goldmanian social epistemology) is that Fuller himself seems to draw no clear distinction between what ‘evolves’ and what doesn’t. I can find nothing in my Fuller notes where he defines or even acknowledges ‘non-evolutionary’ in any meaningful way. On the one hand, Fuller puts risk and reward mechanisms in front of people in public, in the way he contends that “we are now entering a new era in the understanding of minds and machines.” It may sound somehow empowering when Fuller uses such language, that of enhancement, uplift and higher projection than homo sapiens sapiens. This is provocative ‘social epistemology’ that engages many people and in my opinion could do so in a more effective way, were Fuller to clarify himself about what specifically does and doesn’t evolve.

Fuller recently displayed language surprisingly backwards by at least a century and was uncharacteristically ‘precautionary’ on the topic of ‘social evolution.’ He still actually seems to believe in that old myth! Fuller says that “military and police drones may evolve” (into ‘android companions’). Yet this is a primarily externalistic notion of ‘evolve’ with no internal ‘human guidance’ involved. Obviously that scenario is quite contrary to actual social reality. If Fuller wishes to conceptually disavow ‘social evolution,’ the academic world will no more vilify him for this than they have already for his endorsement of ‘intelligent design.’

Mere gradualism and step-by-step thinking likewise shouldn’t be defended by Fuller here as ‘evolutionary’ based on loosely defined views of change-over-time in society. Proactionary thinking, in contrast with evolutionistic SSH, is much more (if not entirely) internalistic in character; with the individual (or group) choosing to intentionally act based on inner reasons, instincts or principles. Fuller thus seems to be stuck on the right side, yet still the downside of Darwin’s legacy, not yet having moved past evolutionism in his linguistic strategy and offering little clarity through his linguistic embrace of social evolution. In this confusing message regarding evolution and evolutionism, Fuller thus seems to want to have things as many ways as possible at the same time and all at once in his unity-oriented social epistemology.

“‘Wouldn’t ‘Nature,’ understood in its totality,” Fuller a self-described ‘naturalist’ asks, “suffice as the name of God?’ The authors of this book [Fuller and Lipinska], on the other hand, stand with those who locate the ‘best explanation’ for nature in the workings of the sort of anthropocentric yet transcendent deity favoured by the Abrahamic religions.” This was the public(ation) moment of Fuller’s conversion from secular humanism to Unitarian (proto-Christian) science, philosophy & theology discourse. Without this piece to the puzzle, without reference to a “transcendent deity,” Fuller’s defence of neo-creationist Intelligent Design would make no sense. So, with this understanding, Fuller’s social epistemology now no longer looks as ‘naturalistic’ as it once may have.

At least we note that Fuller has come around (2014) to reluctantly acknowledging the new geological Anthropocene period of human impact on Earth, what one might call ‘little history’ in contrast to ‘big history’ or ‘macrohistory’ (Christian 2005). With Bill Gates’ educational missionary help, ‘big history’ is effectively knocking young earth creationism out of textbooks and public school classrooms as simply undereducated USAmerican provincialism. A proper ‘anthropic’ (not necessarily anthropocentric) scale thus seems required to beat back the imperialist manoeuvres of misanthropic biologism (& economism). With that we can explore specifically human activities including origins and processes, design and manufacture, etc.

At the end of the day we can still hope for improved proportionality in the SSH–NPS relationship as the voices of SSH against evolutionism and Darwinism are heard, respected and listened to in terms of what escaping from the ideological evolutionistic prison might entail. What we don’t want on the way out is to turn human extensions into a kind of technological self-manipulation that echoes what McLuhan predicted with electric (psycho-somatic) engineering of more and more social environments.

Author Information: Brian Martin, University of Wollongong,

Martin, Brian. “An Experience with Vaccination Gatekeepers.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 27-33.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:


Image credit: Jennifer Moo, via flickr

For those promoting vaccination, one option is censoring critics, but this could be counterproductive. The response of editors of two journals suggests that even raising this possibility is unwelcome.


For several years, I have been writing about the Australian vaccination debate. My primary concern is to support free expression of views. Personally, I do not take a stand on vaccination.

The trigger for my interest was the activities of a citizens’ group named Stop the Australian Vaccination Network (SAVN), formed in 2009. SAVN’s explicit purpose was to silence and shut down a long-standing citizens’ group critical of vaccination, the Australian Vaccination Network (AVN). In several articles, I have described the methods used by SAVN including verbal abuse, harassment (especially via numerous complaints) and censorship.[1] Over decades of studying several public scientific controversies, I had never seen or heard about a campaign like SAVN’s using such sustained and diverse methods aimed at silencing a citizens’ group that was doing no more than expressing its viewpoint in public. (Campaigners on issues such as forestry who use direct action techniques such as blockades are sometimes met with violent repression.)

Prior to this, in 2007, I started supervising a PhD student, Judy Wilyman, who undertook a critical analysis of the Australian government’s vaccination policy. Judy was also active in making public comment. After the formation of SAVN, Judy came under attack. SAVNers criticised her thesis before it was finished and before they had seen it, and made complaints to the university. After Judy graduated and her thesis was posted online, a massive attack was mounted on her, her thesis, me as her supervisor and the University of Wollongong.[2] This involved prominent stories in the daily newspaper The Australian, hostile tweets and blogs, a petition and complaints, among other things.[3]

Here I report on a small spinoff experience that provides insight into thinking about the vaccination issue. Two senior Australian public health academics, David Durrheim from the University of Newcastle and Alison Jones from the University of Wollongong, wrote a commentary published in the journal Vaccine.[4] They argued that academic freedom might need to be curtailed in cases in which public health is imperilled by academic work. Their specific concern was criticism of vaccination, and they mentioned two particular cases: Judy’s PhD thesis and a course taught at the University of Toronto by Beth Landau-Halpern.

Durrheim and Jones are established scholars with long publication records in their usual areas of research. However, in writing their commentary in Vaccine they ventured into social science. As I wrote in a previous article in the SERRC, Durrheim and Jones’ commentary was based on an inadequate sample, just two cases.[5] Furthermore, in both cases they appeared to rely on newspaper articles without obtaining independent assessments of the reliability of the information. Moreover, they provided no evidence supporting the effectiveness of the measures they proposed to prevent unsound academic research and teaching on public health, nor did they examine the potential negative consequences of these measures, in particular for open inquiry. Ironically, in criticising allegedly unsound social-science teaching and research, they produced an unsound piece of social science writing.

I wrote a reply to Durrheim and Jones’ commentary and then contacted Vaccine about whether it would be suitable for submission. However, the editor-in-chief ruled that the journal would not publish replies to its published commentaries. This led me to publish my reply, along with an explanation of its context, in SERRC.[6] Beth Landau-Halpern wrote her own response.[7]

I then proposed to Vaccine to submit a commentary about the vaccination debate. The editor-in-chief asked for a summary of what I proposed. After receiving my summary, I was informed that the editor-in-chief (EiC) “has advised you can proceed to submission, however the EiC has requested a fresh viewpoint in the commentary which would add something new to the literature.” I prepared a short piece, “Should vaccination critics be silenced?,” making the case that censoring critics could be counterproductive. I submitted it through the usual online system, listing four potential referees. The managing editor told me that my submission would be handled by the editor-in-chief. Not long after, I received a form-letter rejection, a “desk reject,” including the following text:

We regret to inform you of our decision to decline your manuscript without offer of peer-review.

Vaccine receives a large number of submissions for which space constraints limit acceptance only to those with the highest potential impact within our vast readership. […]

If any specific comments on your paper are available, they are provided at the bottom of this message.

There were no comments on my submission. Normally, after submitting a proposed outline of points to be covered, I would have expected that my submission would be sent to referees, or at the very least that the editor would offer a justification for rejection without refereeing. My submission is reproduced below so that readers can judge its quality. Vaccine accepted Durrheim and Jones’ commentary three days after receipt, implying very rapid refereeing.

I next sent my commentary to the Journal of Public Health Policy. The co-editors soon wrote back declining my submission, saying “Perhaps you can find a journal with an audience for whom this material is new. If you submit it elsewhere, I suggest that you look at the attached article.” The attached article in one page argued that safety is important in vaccines, concluding “It will be far easier to achieve herd immunity when risks associated with vaccines are known to be so small that public confidence in the safety of vaccines is secure.”[8]

The co-editors’ reply perplexed me. I wrote back as follows:

I am not arguing for or against vaccination. Nor am I arguing about the benefits of herd immunity or measures taken to improve vaccination rates, the topics covered in the article you kindly sent.

My concern is about the wisdom of silencing critics, for example trying to block public talks, prevent speaking tours, shut down websites, force organisations to close and verbally attacking individuals to discourage them from making public comment. Possibly I did not spell this out clearly enough. Whether silencing critics using such methods is a good way to promote vaccination has seldom been addressed.

The co-editors responded:

We have followed vaccination policy and the problem with your comments about critics is that because the critics focus on decisions by parents and patients they strengthen the perception a person takes a vaccine to protect him or herself, rather than to protect the whole community.  You do not challenge that. Although not the focus of your submission, it gives some comfort to those who focus on protecting themselves or their children. Perhaps you can work around that problem, but your otherwise find [sic] submission does not do it.

The implication of this response is that any comment about vaccination that “gives some comfort to those who focus on protecting themselves or their children” is unwelcome. This sort of perspective, with herd immunity being an overriding concern, helps to explain the resistance of vaccination proponents to any analysis of attacks on vaccination critics.

My experience with just two journals is an inadequate basis for passing judgement about peer review and editorial decision-making concerning vaccination. However, it is compatible with there being a view that publishing anything that might be used by vaccine critics is to be avoided.

The vaccination controversy, like many other public scientific controversies, is highly polarised. Partisans on either side look for weaknesses in the positions of their opponents. It seems that even if censoring vaccination critics is counterproductive, raising this possibility is unwelcome among proponents. After all, it might give comfort to the critics.

Should Vaccination Critics Be Silenced? (submission to Vaccine and Journal of Public Health Policy)


If vaccine critics seem to threaten public confidence in vaccination, one option is to censor them. However, given the decline in public trust in authorities, in health and elsewhere, a more viable long-term strategy is to accept open debate and build the capacity of citizens to make informed decisions.

Keywords: vaccination; critics; free speech; censorship

Ever since the earliest days of vaccination, there have been disputes about its effectiveness and safety. Today, although medical authorities almost universally endorse vaccination, opposition continues (Hobson-West, 2007). From the point of view of vaccination supporters, the question arises: what should be done about vaccine critics?

Proponents fear that if members of the public take vaccine critics too seriously, this may undermine confidence in vaccination and lead to a decline in vaccination rates and an increase in infectious disease. How to counter critics, though, is not clear, given that there are no studies systematically comparing different strategies.

One approach is simply to ignore critics, hoping that they will not have a significant impact. Another is to respectfully address concerns raised by parents and others on a case-by-case basis, depending on their level of opposition to vaccination, countering vaccine criticisms with relevant information (Danchin and Nolan, 2014; Leask et al., 2012). Then there is the option of trying to discredit and censor public vaccine critics, an approach used systematically in Australia for some years (Martin, 2015).

It may seem obvious that silencing critics is beneficial for maintaining high levels of vaccination. However, setting aside the ethics of censorship, there are several pragmatic reasons to question this strategy.

An initial problem is the lack of evidence that organized vaccine-critical groups are significant drivers of public attitudes towards vaccination. Although it seems plausible that efforts by these groups will induce more parents to decline vaccination, a different dynamic may be involved. It is possible that organized opposition is a reflection, rather than a major cause, of parental concerns that may be triggered by other reasons, for example awareness of apparent adverse reactions to vaccines or arrogant attitudes by doctors (Blume, 2006). There is some evidence for this view: a survey of members of the Australian Vaccination-skeptics Network showed that most had developed concerns about vaccination before becoming involved (Wilson, 2013).

Another problem is that trying to discredit vaccine critics can seem heavy-handed and trigger greater support for them in what is called the Streisand effect or censorship backfire (Jansen and Martin, 2015). The targets of censorship are likely to feel disgruntled, and suppression of their views provides ammunition for their claims that a cover-up is involved. When critics are attacked or silenced, some observers may conclude there is something being hidden.

Underlying the drive to censor criticism of vaccination can be a fear that members of the public cannot be relied upon to make sensible judgments based on the evidence and arguments. Instead, they must be protected from dangerous ideas and repeatedly told to trust authorities.

However, reliance on authority is a precarious basis for maintaining policy goals given evidence—though complex and contested—for a decline in respect for authorities over the past several decades in health (Shore, 2007) and other arenas (Gauchat, 2012; Inglehart, 1999). When education levels were lower and dominant institutions seldom questioned, it could be sufficient to assert authority and most people would follow. However, many authorities have been discredited in the public eye, for example politicians for lying about war-making, companies for lying about product hazards, and churches for covering up paedophilia among clergy. Although scientists and doctors remain among the more trusted groups in society, they are increasingly questioned too, with various scandals having tarnished their reputations.

In addition, the greater availability of information means far more people are educating themselves and challenging experts. This is not simply an Internet phenomenon. In the early years of the AIDS crisis in the US, activists studied research and organized to challenge officials over HIV drug policy (Epstein, 1996). Similarly, the women’s health movement challenged patriarchal orientations in the medical profession (Boston Women’s Health Book Collective, 1971). The questioning of dominant views has spread to a wide range of issues, including for example the health effects of genetically modified organisms and electromagnetic radiation.

Therefore, it is only to be expected that there will be increasing questioning of vaccination policies, especially when they are presented as a one-size-fits-all application brooking no dissent. In this context, attempts to suppress criticisms appear to be pushing against a social trend towards greater independent thinking.

Rather than continuing to rely on authority, a different approach is to encourage open discussion and to help parents and citizens to develop a more nuanced understanding of vaccination. If the evidence for vaccination is overwhelming, there should be little risk in assisting more people to understand it. The strategy behind this approach is to democratize expert knowledge about vaccines, so that uptake depends less on the authority of credentialed experts and more on the informed investigations of well-read members of the public.

Possible consequences of this approach are highlighting shortcomings in the vaccination paradigm, for example the possibility that adverse effects are more common than normally acknowledged, and considering the possibility that childhood vaccination schedules could be modified according to individual risk factors. By being open to weaknesses in the standard recommendations and making changes in the light of concerns raised, the more important recommendations may be protected in the longer term. This would be in accord with the general argument for free speech that it enables weak ideas to be challenged and a stronger case to be formulated (Barendt, 2005).

However, such openness to constructive debate will remain elusive so long as vaccine critics are stigmatized and marginalized. While the vaccination debate remains highly polarized, it is difficult for either side to make what seem to be concessions and almost impossible for there to be an open and honest engagement with those on the other side. If this remains the case, it is easy to predict that critics will persist despite (or perhaps because of) attempts to silence them, and people’s increasing expectation for educating themselves rather than automatically deferring to authorities will continue to confound vaccination proponents.


Barendt, Eric. Freedom of Speech. 2nd ed. Oxford: Oxford University Press, 2005.

Blume, Stuart. “Anti-Vaccination Movements and their Interpretations.” Social Science and Medicine 62, no. 3 (2006): 628–642.

Boston Women’s Health Book Collective. Our Bodies, Ourselves. Boston: New England Free Press, 1971.

Danchin, Margie and Terry Nolan. “A Positive Approach to Parents with Concerns about Vaccination for the Family Physician.” Australian Family Physician 43, no. 10 (2014): 690–694.

Durrheim, D. N., and A. L. Jones. “Public Health and the Necessary Limits of Academic Freedom?” Vaccine 34 (2016): 2467–2468.

Epstein, Steven. Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley, CA: University of California Press, 1996.

Freeman, Phyllis. “Commentary on Vaccines.” Public Health Reports 112 (January/February 1997): 21.

Gauchat, Gordon. “Politicization of Science in the Public Sphere: A Study of Public Trust in the United States, 1974 to 2010.” American Sociological Review 77, no. 2 (2012): 167–187.

Hobson-West, Pru. “‘Trusting Blindly Can Be the Biggest Risk of All’: Organised Resistance to Childhood Vaccination in the UK.” Sociology of Health & Illness 29 (2007): 198–215.

Inglehart, Ronald. “Postmodernization Erodes Respect for Authority, but Increases Support for Democracy.” In Critical Citizens: Global Support for Democratic Government, edited by Pippa Norris, 236-256. Oxford: Oxford University Press, 1999.

Jansen, Sue Curry and Brian Martin. “The Streisand Effect and Censorship Backfire.” International Journal of Communication 9 (2015): 656–671.

Landau-Halpern, Beth. “The Costs and Consequences of Teaching and Analyzing Alternative Medicine.” Social Epistemology Review and Reply Collective 5, no. 9 (2016): 42–45.

Leask, Julie, Paul Kinnersley, Cath Jackson, Francine Cheater, Helen Bedford, and Greg Rowles. “Communicating with Parents about Vaccination: A Framework for Health Professionals.” BMC Pediatrics 12, no. 154 (2012).

Martin, Brian. “Censorship and Free Speech in Scientific Controversies.” Science and Public Policy 42, no. 3 (2015): 377–386.

Martin, Brian. “An Orchestrated Attack on a PhD Thesis.” 1 February 2016a,

Martin, Brian. “Public Health and Academic Freedom.” Social Epistemology Review and Reply Collective 5, no. 6 (2016b): 44–49.

Shore, David A., ed. The Trust Crisis in Healthcare: Causes, Consequences, and Cures. Oxford: Oxford University Press, 2007.

Wilson, Trevor. A Profile of the Australian Vaccination Network 2012. Bangalow, NSW: Australian Vaccination Network, 2013.

Wilyman, Judy. “A Critical Analysis of the Australian Government’s Rationale for its Vaccination Policy.” PhD thesis, University of Wollongong, 2015.

[1] See for my publications and commentary on the vaccination controversy.

[2] Wilyman, “A Critical Analysis of the Australian Government’s Rationale for its Vaccination Policy.”

[3] Martin, “An Orchestrated Attack on a PhD Thesis.”

[4] Durrheim and Jones, “Public Health and the Necessary Limits of Academic Freedom?”

[5] Martin, “Public Health and Academic Freedom.”

[6] Ibid.

[7] Landau-Halpern, “The Costs and Consequences of Teaching and Analyzing Alternative Medicine.”

[8] Freeman, “Commentary on Vaccines.”

Author Information: Hans Radder, Vrije Universiteit Amsterdam,

Radder, Hans. “Everything of Value is Useful: How Philosophy Can be Socially Relevant.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 20-26.

The PDF of the article gives specific page numbers. Shortlink:


Image credit: blog100days, via flickr

An article on the usefulness of philosophy requires some explanation.[1] It calls for an analysis and evaluation of the divergent interpretations of the concepts “philosophy” and “usefulness,” as well as their possible connections. This article concerns the relationship of professional philosophy with personal, social, or academic life. I will outline this relationship and, based on this, explain what is, or could be, the usefulness of philosophy.


First, there is the question of what should be understood by philosophy. There is no straightforward, unambiguous answer. The simple fact is that there is no agreement among those who call themselves philosophers by profession about what philosophy is and how it should be pursued. There are, however, a number of approaches that substantial groups of philosophers follow; I mention three here. The first assumes that the pursuit of philosophy consists more or less in the explanation of, and commentary on, the texts of the “great philosophers.” One could call this commentarism. This group will, for instance, write about “Aristotle and Kant’s concept of experience.” A comprehensive example of this approach can be found in my “Socrates of the Commentarists.”[2] A second approach primarily regards philosophy as conceptual analysis: one analyzes the meaning of various concepts and their mutual relationships. In this tradition, “Causality and Causation: A Conceptual Analysis” could be the title of an article. A third approach, naturalism, positions philosophy within, or as a continuation of, science. Epistemology should base itself on the results of psychology and the neurosciences; ethics becomes part of evolutionary biology or sociobiology; and a naturalistic philosophy of science can be reduced to the historical or social-scientific study of the practice of science.

According to these three approaches, philosophy only has a limited usefulness for social, scientific, or personal practices. This specifically applies to commentarism. Here, philosophers voluntarily join a secluded world of inaccessible, sometimes even unfathomable texts. Nobody can deny that we can still learn from Aristotle and Immanuel Kant, but in the twenty-first century philosophers face new problems and challenges of which Aristotle and Kant had no notion at all.

Conceptual analysis could perhaps provide useful results. For example, in the current political situation, there is a strong tendency to characterize all forms of resistance in terms of terror and terrorism. The Paris attacks, the Ukrainian conflict, the occupation of the Maagdenhuis (the administrative center of the University of Amsterdam, occupied in early 2015), and sometimes even all forms of fundamental critique are all generalized as acts of terrorism. In this context, a critical conceptual analysis of the current use and abuse of the concept “terrorism” and the differences with notions such as “civil war,” “independence struggle,” “civil disobedience,” and “fundamental social critique” could provide a valuable contribution to the political debate. Something similar holds true for the term “populism,” which is equally, without any additional reasoning, used far too often to disqualify views deviating from those of the reigning political elite.

Unfortunately, much philosophical analysis of concepts suffers from a number of self-imposed limitations, thereby reducing its practical relevance. In the Wittgensteinian variant, it generally suffices to provide a description of the existing use of concepts: a critical intervention would be unacceptable. Thus, the British sociologist H.M. Collins argues that his Wittgensteinian approach should hand over science in the same state as it found it.[3] The more essentialist variants of conceptual analysis (which attempt to find the one and only true essence of concepts) have the disadvantage that too little attention is paid to the question of whose concepts they have in mind. Consequently, the historical, socio-cultural, or linguistic variability of concepts rapidly disappears beyond the horizon. One cannot, for instance, expect too much from a conceptual analysis that (not only inside but also outside professional philosophy) uncritically equates the Dutch concept of wetenschap (or the German Wissenschaft and the French science) with the Anglo-American “science.”[4]

Finally, there is naturalism. This approach views philosophy as a “type of general science.” For this reason, it can, in principle, make a contribution to the practical development of science. Naturalism, however, also struggles with a fundamental problem. Which “science” should the naturalistic philosopher pursue: biology, psychology, sociology, or historiography? Different choices lead to completely different “naturalistic” philosophies. The result is that much energy is spent on all sorts of debates that are of no use for the practicing scientist. With regard to its social or personal usefulness, the significance of a strictly naturalistic philosophy is dubious as well. The reason for this is that supporters of this approach too often advocate a scientistic vision of science as a neutral means for solving social and/or personal problems.[5]

Philosophy in Practice

Thus, the usefulness of commentarism, conceptual analysis, and naturalism for actual social, scientific, or personal practices is rather limited. The first two approaches, especially, view professional philosophy as a more or less autonomous activity. From this viewpoint, it is indeed problematic to then have anything to say about the usefulness of philosophy outside the context of the professional philosopher. At best, the result is a one-way conversation: the philosopher has discovered or learnt something that could benefit people in their everyday lives.

However, the relationship between professional philosophy and our social, academic or personal life can also be regarded as a mutual interaction. The reason for this is that philosophizing is not limited to professionals, but takes place everywhere. There is no essential tension between a theoretical philosophy and the practical world. Besides being agents, people are also reflective beings. More precisely: reflection, including philosophical reflection, is a normal element of all kinds of personal, social, and scientific practices.

I will illustrate this with two brief examples. The first concerns the reductionism issue and, more specifically, the question whether human beings can be, in an ontological sense, reduced to their physical-chemical constituents. Are humans no more than their material components? This question arises in various guises. One of these is the reduction of mental phenomena to brain processes. Another concerns the reduction of biological, psychological or social processes to their genetic “basis.”

I would like to expand on the latter. Genetic reductionism is discussed extensively in philosophy. It is, however, not solely an academic-philosophical issue.[6] A topical development in which genetic reductionism plays a direct practical role is the debate on what is called “surrogate motherhood.”[7] Surrogate motherhood can take various forms. Consider the case in which a woman wants a child in this way: fertilized genetic material (hers or that of another woman) is implanted into a surrogate mother, who then gives birth to the child after nine months. Who is the real mother of this child? The debate has generally provided three types of answer: the woman who contractually “ordered” the child and paid for the implantation costs, pregnancy, and childbirth; the woman who donated the ovum; and the woman who carried the child and gave birth to it. From a legal and economic perspective the first answer is the obvious one; on the basis of biological and socio-psychological principles, one could argue for the third; while the geneticist approach could serve as support for the second.

Here, we do not merely have a fundamental philosophical issue and debate “in practice,” but it is also a case where professional philosophy can learn from this practice. In such cases, we face real dilemmas, revealing the inadequacy of the philosophical doctrine of reductionism. The answer to the question “who is the real mother” requires a comparative assessment of the legal and economic arguments, of the (genetic and non-genetic) biological arguments, and of the social and psychological arguments. Which argument will be the deciding factor in practice will depend on the specific context.[8] This means that, in theory, all three answers have some validity and that none of the three can, or should be, elevated to an “-ism” by philosophers.

A second example concerns the issue of patenting. This is currently a very topical issue. First of all, there is the debate on the judicial legitimacy and moral justice of patenting parts of plants, animals, and humans. Do the patentability criteria for technological artifacts also apply to living or natural organisms?[9] A second issue concerns the role of public science. Recently, strong pressure has been exerted on university researchers to capitalize on the results of their work through patenting. In this respect, an important question is whether such privatization is compatible with the public nature of (university) science. The general point is that the people involved in such judicial, moral and political debates take (implicit or explicit) positions on philosophical questions. For instance, regarding the ontological question of the difference between natural and artificial things; or the scientific-philosophical question of the criteria required for good science.[10]

In the light of these examples—which can easily be supplemented by many others—professional philosophy emerges as a thematization of questions that already play a role in “non-philosophical” contexts. Socially relevant academic philosophy is not a reflection on a pre-reflective life, or a theory about a theory-free practice. Because humans are both active and reflective beings, there is and has always been reflection in our life-worlds and theory in our practices. Even if the approach by professional philosophers is, or should be, more systematic and scholarly, this still does not grant it autonomy regarding personal, social or scientific practices. Viewed in this way, there is no conflict between “basic” and “applied” philosophical research. Both are required if philosophy wants to make a contribution to social debates. For example, the ontological theory of the abstract meaning of concepts can be shown to be directly relevant for debates on the patentability of scientific research results, because the relationship between the abstract-conceptual and the concrete-material is itself at issue in this patenting practice.[11]

The conception of the relationship between philosophy and practice outlined here differs distinctly from a recent new approach, curiously called “experimental philosophy.” In fact, experimental philosophy is an empirical social-scientific study of the philosophical intuitions of ordinary people, often with the aim of testing whether the intuitions that academic philosophers employ are also shared by non-philosophers.[12] Viewed as such, the provision of critical and reflexive contributions to philosophy in social practices falls outside the scope of this experimental philosophy.

Philosophy and Usefulness

“Usefulness” (and the related “utility”) are rather controversial terms for many philosophers. They are associated with down-to-earth issues, such as money and material gain, and are contrasted with “everything of value that is vulnerable.”[13] This would particularly apply to the “useless” philosophy. According to its Greek roots, philosophy is then literally interpreted as the desire for wisdom. This seeking of wisdom does not focus on particular things and their instrumental value, but rather on the “higher” questions about the totality, the essence or the intrinsic meaning of reality.

This view is problematic for various reasons. The first concerns the idea of philosophy as the desire for wisdom. Unfortunately, in discussions on the practical meaning of philosophy one hears, time and again, this association of philosophy with wisdom, which is as arrogant as it is erroneous. That philosophers are, in a psychological sense, driven by this desire is already a far-fetched notion. But even if this were the case, they are not particularly successful in the endeavor. Consequently, there is no reason why philosophers, as a group, should be deemed wiser than any other profession in society, be it plumbers, actresses, or professional tennis players.

The above-mentioned implications of the examples of philosophy in practice (surrogate motherhood, patenting) are, however, more important. They show the inadequacy of the dichotomy between the higher and the useless, on which philosophers are supposed to focus, as opposed to the lower and instrumental, by which ordinary people, experts, and politicians would be driven or even obsessed.

A second view of the issue of the usefulness of philosophy emphasizes the existence of two scientific cultures: the culture of the exact sciences, focused on prediction and control, versus the culture of the humanities, including philosophy, which deals with the preservation and disclosure of cultural meanings. According to this view, however, the tension between the two cultures is not irreconcilable. Philosophy might not have been that useful to date, but this need not be so in the future. This view, which plays a major role in many current debates on the “valorization” (the social impact) of the humanities, implies that philosophy is, or could be, “actually quite useful.” An example of such an engaged and actually-quite-useful humanities scholar is a philosopher of language who focuses on developing “better” storylines in computer games.[14]

Both views of the usefulness of philosophy assume that there is a gap between philosophy and usefulness. The seekers of wisdom see the gap as essential, while the actually-quite-useful philosophers argue that the gap can be bridged and that even philosophy can contribute to “the necessary innovation of the Dutch knowledge economy.” Despite this difference, both views employ the notion of usefulness in much the same way.

I suggest that these two approaches should not be allowed to monopolize this notion, but that it should be interpreted far more broadly. Something is useful if it contributes to accomplishing a particular goal, which is deemed worthwhile. This does not predetermine the nature of the goal. It may involve the aforementioned necessary innovation of the Dutch knowledge economy, achieving mutual understanding through undistorted communication, or the seeking and finding of wisdom.

For these reasons, we can truly describe a professional philosophy that contributes to a quality increase in non-academic philosophical reflection as really useful. I deliberately refer to professional philosophy and not to a professional philosopher. As a discipline, philosophy is useful if it makes a valuable contribution to social, scientific or personal reflection. In this view, it is entirely consistent that individual philosophers maintain a certain division of labor. Some will focus more on the description and analysis of relevant non-academic reflections, while others will be more engaged with the systematic, scholarly thematization of disputed issues.

Does this mean that such a professional philosophy is useful under all circumstances? To answer this question, we should not only consider what philosophy is not or should not be (commentarism, conceptual analysis or naturalism), but, above all, what it actually comprises. Elsewhere, I have provided an account of professional philosophy as theoretical, normative, and reflexive. This characterization shows that philosophizing entails a specific approach. It primarily comprises a theoretical and evaluative clarification of issues that is also reflexively related to the situation of the philosophers themselves.[15] This specificity implies that philosophy is not always or everywhere relevant. Philosophy is a discursive activity in which the linguistic and argumentative aspects are dominant. At certain times, however, action is needed, with theoretical or normative contemplation being unsuitable or even inappropriate. Some issues may be better tackled by means of a meal or a stroll enjoyed together than by means of a theoretical, normative and reflexive analysis. And sometimes, as in the recent occupations of university buildings in Amsterdam, civil disobedience is legitimate and necessary to subvert a situation of structural abuse of power. Nevertheless, I hope that I have succeeded in clarifying that many other present-day issues in our personal, social, and scientific world could definitely benefit from philosophical involvement.


Bijker, Wiebe and Ben Peperkamp, eds. Geëngageerde Geesteswetenschappen. Perspectieven op Cultuurveranderingen in een Digitaliserend Tijdperk. Den Haag: AWT-Achtergrondstudie nr. 27, 2002.

Collins, Harry M. “In Praise of Futile Gestures: How Scientific is the Sociology of Scientific Knowledge?” Social Studies of Science 26, no. 2 (1996): 229-244.

Goldman, Alvin and Matthew McGrath. Epistemology: A Contemporary Introduction. Oxford: Oxford University Press, 2015.

Radder, Hans. In and About the World: Philosophical Studies of Science and Technology. Albany, NY: State University of New York Press, 1996.

Radder, Hans. “Philosophy and History of Science: Beyond the Kuhnian Paradigm.” Studies in History and Philosophy of Science Part A 28, no. 4 (1997): 633-655.

Radder, Hans. “Exploiting Abstract Possibilities: A Critique of the Concept and Practice of Product Patenting.” Journal of Agricultural and Environmental Ethics 17, no. 3 (2004): 275-291.

Radder, Hans. “How Inclusive is European Philosophy of Science?” International Studies in the Philosophy of Science 29, no. 2 (2015): 149-165.

Radder, Hans. Er Middenin! Hoe Filosofie Maatschappelijk Relevant Kan Zijn. Amsterdam: Uitgeverij Vesuvius, 2016.

Schermer, Maartje and Jozef Keulartz. “How Pragmatic is Bioethics?” In Pragmatist Ethics for a Technological Culture, edited by Jozef Keulartz, Michiel Korthals, Maartje Schermer and Tsjalling E. Swierstra, 41-68. Dordrecht: Kluwer, 2002.

Spohn, Wolfgang. “On the Objectivity of Facts, Beliefs, and Values.” In Science, Values, and Objectivity, edited by Peter K. Machamer and Gereon Wolters, 172-189. Pittsburgh: University of Pittsburgh Press, 2004.

Sterckx, Sigrid, ed. Biotechnology, Patents and Morality. 2nd ed. Aldershot: Ashgate, 2000.

Van den Belt, Henk. “Enclosing the Genetic Commons: Biopatenting on a Global Scale.” In Patente am Leben? Ethische, Rechtliche und Politische Aspekte der Biopatentierung, edited by Christoph Baumgartner and Dietmar Mieth, 229-244. Paderborn: Mentis Verlag, 2003.

Van den Belt, Henk. “‘Mag ik uw Genen Even Patenteren?’ Een Nieuwe Enclosure-Beweging.” Krisis 5 no. 2 (2004): 22-37.

[1]. This article comprises the first chapter of a recently published Dutch book, a collection of essays written for a broad readership (Radder 2016). Its title is Er Middenin! Hoe Filosofie Maatschappelijk Relevant Kan Zijn (In the Midst of It! How Philosophy Can be Socially Relevant). I have edited it slightly in places to make it more accessible for English readers. I list the titles of the other chapters to give an impression of the entire book: 2. The social and moral assessment of science and technology; 3. Philosophy without heroes (but with vision); 4. The Free University Amsterdam and the us-them dichotomy; 5. Science as a commodity? A philosophical critique; 6. How do we regain “the soul of science and scholarship?” Academic research and the university’s knowledge economy; 7. Dutch science policy: two structural problems; 8. What is the good of science and scholarship? Their philosophical justification and social legitimacy; 9. In the midst of it! — Translation: Jan and Ilse Evertse.

[2]. Radder, Er Middenin!, Ch. 3.

[3]. Collins, “In Praise of Futile Gestures,” 230.

[4]. See Radder, “How Inclusive is European Philosophy of Science?”

[5]. For these critiques of naturalism, see Radder, In and About the World, 175-183.

[6]. The reason for including this example was this excerpt in an article by Wolfgang Spohn: “What is ontological independence? An object X ontologically depends on an object Y if X cannot exist without Y, if it is metaphysically impossible that X exists, but not Y. For instance, each human being ontologically depends on its mother, or has its mother essentially.” (Spohn, “On the Objectivity of Facts, Beliefs, and Values,” 173). In other words, Spohn assumes that each person has only one (natural) mother and views this as an essential ontological characteristic of motherhood and personhood.

[7]. Schermer and Keulartz, “How Pragmatic is Bioethics?” 45-56.

[8]. There is a notable difference between surrogate motherhood and embryo donation. While, in the latter case, the woman giving birth is regarded as the mother, in the former case she is generally not. It therefore seems quite plausible that who is regarded as the real mother also depends on the ethnicity and socio-economic position of those involved.

[9]. Sterckx, Biotechnology, Patents and Morality; Van den Belt, “Enclosing the Genetic Commons” and “Mag ik uw Genen Even Patenteren?”

[10]. In Radder, Er Middenin!, Ch. 5 and 6, I discuss these questions in some detail.

[11]. Radder, “Exploiting Abstract Possibilities.”

[12]. See Goldman and McGrath, Epistemology, 190-199.

[13]. The famous line from a 1974 poem by the Dutch poet Lucebert reads Alles van waarde is weerloos (“Everything of value is vulnerable”).

[14]. See Bijker and Peperkamp, Geëngageerde Geesteswetenschappen.

[15]. I cannot expand any further on this characterization here. For more on this subject, see Radder, In and About the World, Ch. 8; Radder, “Philosophy and History of Science,” 648-652.