Author Information: James Collier, Virginia Tech,


Editor’s Note: The publishers of Social Epistemology—Routledge and Taylor & Francis—have been kind enough to allow me to publish the full-text “Introduction” to issues on the SERRC and on the journal’s website.

At the beginning of August 2016, I received word from Greg Feist that Sofia Liberman had died. I was taken aback, having recently corresponded with Professor Liberman about the online publication of her article (coauthored with Roberto López Olmedo). Professor Liberman's work came to my attention through her association with Greg, Mike Gorman and scholars studying the psychology of science. We offer our sincere condolences to Sofia Liberman's family, friends and colleagues. With gratitude and great respect for her intellectual legacy, we share Sofia Liberman's scholarship with you in this issue of Social Epistemology.

Since the advent of publishing six issues a year, we have adopted the practice of printing the journal triannually, combining two issues in each print edition. The result makes for a panoply of fascinating topics and arguments. Still, we invite our readers to focus on the first four articles in this edition—articles addressing topics in the psychology of science, edited by Mike Gorman and Greg Feist—as a discrete, but linked, part of the whole. These articles signal Social Epistemology's wish to renew ties with the psychology of science community, ties established since at least the publication of William Shadish and Steve Fuller's edited book The Social Psychology of Science (Guilford Press) in 1993.

Beginning by reflexively tracing the trajectory of his own research, Mike Gorman and Nora Kashani ethnographically and archivally examine the work of A. Jean Ayres. Ayres, known for inventing Sensory Integration (SI) theory, sought to identify and treat children having difficulty interpreting sensations from the body and incorporating those sensations into academic and motor learning. To gain a more comprehensive account of the development and reception of SI, Gorman and Kashani integrated a cognitive historical analysis of Ayres' research—a sub specie historiae approach—with interactions and interviews with current practitioners—an in vivo approach. Through Gorman and Kashani's method, we map Ayres' ability to build a network of independent students and clients, leading both to the wide acceptance and the later fragmentation of SI.

We want scientific research that positively transforms an area of inquiry. Yet how do we know when we have achieved such changes, so that we may determine in advance the means by which to achieve further transformations? Barrett Anderson and Greg Feist investigate the funding of what became, after 2002, impactful articles in psychology. While assessing impact relies, in part, on citation counts, Anderson and Feist argue for "generativity" as a new index. Generative work leads to the growth of a new branch on the "tree of knowledge". Using the tree of knowledge as a metaphorical touchstone, we can trace and measure generative work to gain a fuller sense of which factors, such as funding, policy makers might consider in encouraging transformative research.

Sofia Liberman and Roberto López Olmedo question the meaning of coauthorship for scientists. Specifically, given the contentiousness—often found in the sciences—surrounding the assignation of primary authorship of articles and the priority of discovery, what might a better understanding of the social psychology of coauthorship yield? Liberman and López Olmedo find, for example, that fields emphasizing theoretical, in contrast to experimental, practices consider different semantic relations, such as "common interest" or "active participation", to be associated with coauthorship. More generally, since scientists do not hold universal values regarding collaboration, differing group dynamics and reward structures affect how one approaches and decides coauthorship. We need more research, Liberman and López Olmedo claim, to further understand scientific collaboration in order, perhaps, to encourage more, and more fruitful, collaborations across fields and disciplines.

Complex, or "wicked", problems require the resources of multiple disciplines. Moreover, addressing such problems calls for "T-shaped" practitioners—students educated to possess, and professionals possessing, both a singular expertise—the vertical part of the "T"—and a breadth of expert knowledge—the horizontal part of the "T". On examining the origin and development of the concept of the "T-shaped" practitioner, Conley et al. share case studies of students at James Madison University and the University of Virginia learning to make the connections that underwrite "T-shaped" expertise. Conley et al. analyze the students' use of concept maps to illustrate connections, and possible trading zones, among types of knowledge.

Are certain scientists uniquely positioned—given their youth or age, or their insider or outsider disciplinary status—to bring about scientific change? Do joint commitments to particular beliefs—and, so, an obligation to act in accord with, and not contrarily to, those beliefs—hinder one's ability to think differently and pose potential alternative solutions? Looking at these issues, Line Andersen describes Kenneth Appel and Wolfgang Haken's solution to the Four Color Problem—"any map can be colored with only four colors so that no two adjacent countries have the same color." From this case, and other examples, Andersen suggests that a scientist's outsider status may enable scientific change.

We generally, and often blithely, assume our knowledge is fallible. What can we learn if we take fallibility rather more seriously? Stephen Kemp argues for "transformational fallibilism." In order to improve our understanding, should we question, and be willing to revise or reconstruct, any aspect in our network of understanding? How should we extend our Popperian attitude, and what we learn accordingly, to knowledge claims and forms of inquiry in other fields? Kemp advocates that we not allow our easy agreement on knowledge's fallibility to make us passive regarding accepted knowledge claims. Rather, coming to grips with the "impermanence" of knowledge sharpens and maintains our working sense of fallible knowledge.

Derek Anderson introduces the idea of "conceptual competence injustice". Such an injustice arises when "a member of a marginalized group is unjustly regarded as lacking conceptual or linguistic competence as a consequence of structural oppression". Anderson details three conditions one might find in a graduate philosophy classroom. For example, a student judges a member of a marginalized group who makes a conceptual claim, according that claim less credibility than it merits. That judgment leads to a subsequent assessment that the marginalized person has a lower degree of competence with a relevant word or concept than they in fact have. By depicting conceptual competence injustice, Anderson gives us important matters to consider in deriving a more complete accounting of Miranda Fricker's forms of epistemic injustice.

William Lynch gauges Steve Fuller's views in support of intelligent design theory. Lynch challenges Fuller's psychological assumptions and the corresponding questions as to what motivates human beings to do science in the first place. In creating and pursuing the means and ends of science, do humans—seen as the image and likeness of God—seek to render nature intelligible and thereby know the mind of God? If we take God out of the equation—as does Darwin's theory—how do we understand the pursuit of science in both historical and future terms? Still, as Lynch explains, Fuller desires a broader normative landscape in which human beings might rewardingly follow unachieved, unconventional, or forgotten paths to science that could yield epistemic benefits. Lynch concludes that the pursuit of parascience likely leads both to opportunism and to dangerous forms of doubt in traditional science.

Exchanges on many of the articles that appear in this issue of Social Epistemology—and in recent past issues—can be found on the Social Epistemology Review and Reply Collective. Please join us. We realise knowledge together.

Author Information: Kurtis Hagen, Independent Scholar,

Hagen, Kurtis. “What Are They Really Up To? Activist Social Scientists Backpedal on Conspiracy Theory Agenda.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 89-95.

The PDF of the article gives specific page numbers. Shortlink:

Editor’s Note:

    Given the extent of the exchange to which Hagen’s reply belongs, please refer to the section after the endnotes for related articles. [a]

Image credit: Rool Paap, via flickr

In a joint statement published in Le Monde, a group of social scientists called for more research on conspiracy theorists in order to more effectively “fight” the “disease” of conspiracy theorizing (see Basham and Dentith 2016, 17).[1] In response, a number of scholars, including myself, signed an open letter criticizing this agenda (Basham and Dentith 2016).[2] In response to us, the authors of the Le Monde statement (minus Karen Douglass) published a sprawling rebuttal entitled, “‘They’ Respond” (Dieguez et al. 2016). Matthew Dentith and Martin Orr have already offered their response in turn (2017), as has Basham (2017). I will here add my own. I will often refer to the scholars who authored the Le Monde statement, and the response to our objection to it, simply as “they.” I find this to be quite ordinary English, and, frankly, I think they have overreacted to our previous usage of this innocuous pronoun.

To keep this short, I will focus on just three issues: (1) They misrepresent their own previously stated intentions. (2) They misrepresent our critique of those intentions. (3) They fail completely in their attempt to show that, regarding the inappropriate pathologizing of conspiracy theorists, we are as guilty as they are. In restricting myself to these three issues I by no means wish to imply that the rest of their response was unproblematic.

The Misrepresentation of Their Own Original Position

So, what were they up to? The very title of the Le Monde statement makes it clear, “Let’s fight conspiracy theories effectively.” They worry that the “wrong cure might only serve to spread the disease” (see Basham and Dentith 2016, 17). The “disease,” of course, is conspiracy theorizing, which they conflate with “conspiracism,” expressing their desire to help “fight against this particular form of contemporary misinformation known as ‘conspiracism’” (17). In putting it this way, they reveal their bias: the presupposition that conspiracy theories are a form of misinformation. They believe that “the growth of conspiracy theories” is “a major problem” (17). And so, they aim to provide research that will help “remedy the problem” of “adherence to conspiracy theories” (18). This research is necessary, they reason, because “Conspiracism is indeed a problem that must be taken seriously” (17)—again conflating conspiracy theories with conspiracism.

It was this objective with which we took issue. But now, in response to our criticism, they have recast their position. Although they had originally characterized the intentions of governmental initiatives to undermine conspiracy theories as “laudable” (17), they now characterize their original Le Monde letter in the following ways:

[Our commentary] cautioned against governmental initiatives to counter conspiracy theories among youths and advocated for more research on the topic (Dieguez et al. 2016, 20).

[We] took issue with French governmental and local initiatives designed to tackle the apparent proliferation of conspiracy theories among youths (20-21).

Both of these statements are technically true, but quite misleading. These ways of putting it make it sound as though they are against governmental initiatives to counter conspiracy theories. Reinforcing this impression, they go so far as to suggest that they are, in part, trying to "ascertain whether there is a problem [with conspiracy theories] at all" (21), and that they want to "help everybody become better conspiracy theorists" (20). Oh really? That is not at all the impression one gets from the Le Monde statement, as indicated above.

In reality, the original Le Monde statement was not cautioning against governmental initiatives to counter conspiracy theories. They expressed full support for that objective. They were merely cautioning against doing it without first funding more research[3] (to be done by themselves),[4] so that, armed with this research, the government could counter conspiracy theories more effectively. In our response we took issue with that objective. But now I am taking issue with something different. I’m taking issue with the way they, in their response to us, have misleadingly characterized their own previously expressed purpose.

Though they have attempted to recast their intentions, they have not fully retreated from activism. They say that they “thought…that something should be done” (21). About what? Why, about “ideological polarization… hate-speech and misinformation” (21). But who said anything about those things? It seems that a number of questions have suddenly been begged. Then, almost admitting what their original position had been all along, they worry that “early and hasty endeavours had the potential to misfire or simply be ineffective” (21). Endeavors to do what? Now they seem to be suggesting that they are for efforts to reduce hate speech and misinformation. But their original statement was about being ineffective in undermining conspiracy theories. Rather than straightforwardly defend that position, they equivocate between conspiracy theories and “ideological polarization… hate-speech and misinformation.”

Finally, after a ten-page exercise in distraction, they return to the central issue, under the heading, "A Cure?" Here, once again, they reframe their purpose in neutral terms. They write, "What 'they' had in mind, as must be clear by now, was to study how people, on their own or under some external influence, think and come to endorse some beliefs about such things" (Dieguez et al. 2016, 32, emphasis in original). They maintain that they just want to use objective science to answer questions such as whether a new "remedy is not needed after all, as the disease might be transitory, or even not a disease at all" (33). They continue, "Scientific research turns out to be the best currently available tool to answer such questions, and that's where the analogy lies with programs devised to counter conspiracy theories." It's a curious position, if we are to take it seriously. They support "programs devised to counter conspiracy theories," wanting to try to make such programs more effective, because, they seem to suggest, "Who knows? We might end up finding that there was no problem to begin with!" But how likely is it that biased researchers, funded by grants directed for a purpose that aligns with that bias, are going to produce findings that run directly counter to that purpose and so support the conclusion that no more such funding is warranted? No conspiracy theory is needed to recognize this as a flawed, if not intellectually dishonest, approach.

The Misrepresentation of Our Critique

Naturally, since they misrepresented their original position, they needed to misrepresent our critique of it as well. And so they did. They did not focus directly on the substance of our actual critique, namely, that seeking to use what passes for "science" to assist the state in undermining belief in conspiracy theories (without concern for whether or not there is justification for those theories) is a bad idea. Instead, they attributed to us a number of positions that we never asserted. Then they produced a wide variety of points in response to these positions, some of which are unobjectionable, others quite problematic, but none directly germane to our central complaint.

For example, they suggest that our objection to their project involved the idea that “everything there is to know on the matter is in fact already known, and that any further attempt to investigate the topic would be a ‘grave intellectual, ethical and prudential error,’ or worse, a genocidal crime against the masses, destroying lives ‘by the thousands, even millions’” (21). Wow! Did we write anything as crazy as that? Or, more likely, is this a rather egregious misrepresentation of our critique? Let’s find out. While it is true that much of the social science research on conspiracy theorists is deeply flawed (as forthcoming articles will show in detail,[5] and Basham 2017 explains more briefly), we did not even mention this in our objection to their proposal. We certainly did not claim that “any further attempt to investigate the topic” would be necessarily problematic. After all, we ourselves, in our own ways, investigate the topic. No. That was not the problem we were pointing out. Neither did we suggest, needless to say, that merely investigating the topic would destroy lives by the thousands or millions. So, what exactly did we write? We wrote this:

Political conspiracy theorizing in Western-style democracies should not be restricted, because to do so is a grave intellectual, ethical, and prudential error. As such, the declaration by respected scholars like these is likewise a grave intellectual, ethical and prudential error (Basham and Dentith 2016, 15).

So, quite plainly, we were not saying that any investigation would be inappropriate. We were saying that there should not be an effort to restrict (it would have been better to have said “undermine”) political conspiracy theories. That is what would be the “intellectual, ethical, and prudential error.” And, remember, that is precisely the goal that the Le Monde authors were originally supporting, though they are now, in their response, not straightforwardly admitting.

We continued, writing:

Conspiracy theory saves lives, by the thousands, even millions, if we would let it. Its automatic dismissal leaves blood on our hands (16).

What were we talking about? Certainly not that merely investigating the topic would result in untold carnage. Perhaps our explanation bears repeating:

High-placed political conspiracies of lesser ambition often lie behind the political catastrophes of recent history. Very recent. For example, the catastrophe of the invasion of Iraq comes to mind. There is little doubt in the public or scholars that NATO, and many other governments, were intentionally misled and manipulated into this war, particularly by the U.S. government. This truth, well-evidenced at the time of grave decision, was silenced as an “outrageous conspiracy theory” by heads of state, mainstream media and yes, certain members of academia. Thus, a war that ultimately led to the death of hundreds of thousands, and a desperate global refugee crisis, was powerfully enabled by an anti-conspiracy theory panic. One that these scholars would seem to like to embrace and nurture as general policy (14).

We gave other examples as well. So, quite plainly, we were saying that it is engaging in an effort to disable a mechanism for thwarting potentially disastrous conspiracies that “leaves blood on our hands,” not merely investigating the topic. Further, let me be emphatically clear about this: they were not originally advocating investigating the topic in a fair and neutral way. They have a clear bias (they assume that conspiracy theories are a disease that needs to be cured), and they have an explicit agenda, namely, to “fight conspiracy theories effectively.”

Now, I am not opposed to activism, and there is nothing inherently wrong with having an agenda. Indeed, I have an agenda in writing this. I am making a case for what I believe to be true, and defending what I think is important. But here is the crucial difference: I am not pretending to be a neutral scientist, objectively collecting the data and letting it speak for itself. These scholars, on the other hand, do claim to be in precisely that business. Perhaps that is why they have a hard time admitting their agenda. And so, having been called out for their agenda, they are now trying to claim that all they wanted to do was to dispassionately and scientifically investigate the topic. They are “just asking questions” (28; cf., 20, 21) and gathering data, they claim.[6] But they are not convincing. As shown above, that position is refuted by their own words in their original statement.

They also claim that we “call… for more conspiracy theories and less ‘conspiracy theory panic’” (20). Here they are half right. It seems fair to say that we are against “conspiracy theory panic,” but it is silly to say we want “more conspiracy theories.” For my part, I would say that I want fairness toward conspiracy theories (a desire also expressed by Basham 2017). I do not want to see the state allied with biased social scientists for the purpose of producing research designed to help the state undermine legitimate conspiracy theorizing. But that is not the same as calling “for more conspiracy theories,” as if we think that the more conspiracy theories in circulation the better, regardless of their merits. No. We were calling out those who would use “science” to try to undermine a legitimate and important activity.

In addition, they also suggest that we accused them of being part of a conspiracy (30). But we did not maintain that they were secretly up to something morally dubious. Their morally dubious agenda was openly articulated in a public forum. However, given their bizarre response, it now seems that they are retrospectively trying to pretend that they were up to something different from what they clearly and repeatedly stated originally. But I, speaking just for myself, do not maintain that they plotted any of this. No, in this case, I favor a cock-up theory.

Pathologizing Conspiracy Theorists

Another central concern that we raised was their pathologizing of conspiracy theorizing, suggesting that conspiracy theories are a “disease” (Basham and Dentith 2016, 17). Basham 2017 addresses this issue more broadly. I’ve chosen here to focus narrowly on reasoning errors in their attempt to vindicate themselves by suggesting that we are equally guilty of the same offence. They accused us of inconsistency since we oppose the generic pathologizing of conspiracy theories and yet some of us had, on their reading, pathologized certain particular conspiracy theories. Hmmm. Actually, even if they had read us correctly (which in at least one case they have not), there is nothing inconsistent about that.

Since I was one of those accused of this supposed inconsistency, and since they have indeed misread me, I’ll use their critique of my work to set both matters straight. Specifically, they accuse me of “delegitimiz[ing]” Roswell conspiracy believers (Dieguez et al. 2016, 26). Neither did I intend to do that nor would it have been in any way significant if I had. Here is what I wrote:

[Sunstein and Vermeule’s] deliberate intent to be dismissive becomes unambiguously apparent. Immediately after the mention of Operation Northwoods they write: “In 1947, space aliens did, in fact, land in Roswell, New Mexico, and the government covered it all up. (Well, maybe not).” This trivializes a whole list of significant conspiracies that they could not but admit were real, though the list could have been much longer (Hagen 2011, 13).

I was objecting to an obvious appeal to ridicule and an inappropriate trivialization of agreed-upon facts by throwing in a widely disbelieved example, accompanied by a snarky comment. As for my own position on the issue of alien visitations in general, and the Roswell incident in particular, I have no firm opinion, as I have not studied these issues in any depth (interesting though they are).

The point of the claim that I delegitimized Roswell conspiracy believers is that I had thereby, presumably, engaged in the pathologizing of a particular group of conspiracy theorists, as others in our group are likewise accused. This is a problem, they think, because we were critical of their attempt to pathologize conspiracy theories in general.

There are multiple layers of problems with their analysis. To begin with, as I have just explained, I had not even claimed that Roswell conspiracy believers were wrong, or that their belief is poorly evidenced. I did not take a position on that, and I have none. But even if I had, it would not follow that I pathologized them. Asserting that someone’s position is wrong, or is not well evidenced, does not suggest that the person is defective. But that is what the Le Monde scholars seek to do. They aim to describe a presumed-to-be-defective conspiracist “mindset” (Basham and Dentith 2016, 18; Dieguez et al. 2016, 20, 23-25, 29-30, 34). And they advertise that their studies will help make efforts to undermine conspiracy theories more effective.

Their project is a delegitimizing one. Ours is not. And further, even if I had pathologized a particular group of conspiracy theorists, that would not mean I had acted hypocritically in criticizing the Le Monde scholars for pathologizing conspiracy theorists in general. (After all, while it is wrong to generically pathologize Atheists, Republicans, or Norwegians, that does not mean there are no individuals in those groups who may legitimately be regarded as, in some sense, pathological.) At minimum, pathologizing conspiracy theorists in general is an instance of inappropriate pathologizing, since believing in conspiracy theories is not necessarily, or even typically, pathological—even if there are particular instances that are (about which I have taken no position). In sum, their argument goes wrong at every turn. No wonder they value “data” and disparage reason.[7]


If these scholars want to help move the dialog forward, they must respond in a way that neither mischaracterizes what they had originally said nor mischaracterizes the critique of what they said. (It would be nice if they did not get so much else wrong besides, but perhaps that cannot be helped.) Indeed, their response so far further undermines confidence in their ability to conduct fair and reasonable studies of conspiracy theorists, or on any subject for that matter. And thus their response calls into question the wisdom of their original proposal, even if its objective had been defensible, which even they seem unwilling to defend. Mere incantations of the holy words "science" and "data" will not turn invalid arguments into valid ones, nor remove the stain of flagrant misrepresentation.[8]


Basham, Lee. “Pathologizing Open Societies: A Reply to the Le Monde Social Scientists.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 59-68.

Basham, Lee and Matthew R. X. Dentith. “Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 12-19. (An English version of the Le Monde publication, “Let’s fight conspiracy theories effectively,” is also contained herein.)

Dentith, Matthew R. X. and Martin Orr. "Clearing Up Some Conceptual Confusions About Conspiracy Theory Theorising." Social Epistemology Review and Reply Collective 6, no. 1 (2017): 9-16.

Dieguez, Sebastian, Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Nicolas Gauvrit, Anthony Lantian, and Pascal Wagner-Egger. “‘They’ Respond: Comments on Basham et al.’s ‘Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone’.” Social Epistemology Review and Reply Collective 5, no. 12 (2016): 20-39.

Hagen, Kurtis. “Conspiracy Theories and Stylized Facts.” Journal for Peace and Justice Studies 21, no. 2 (2011): 3–22.

[1] References are to an English translation of the Le Monde statement affixed to the end of Basham and Dentith 2016. (Page numbers to Social Epistemology Review and Reply Collective articles refer to the PDF versions.)

[2] Regarding the original critique of the Le Monde statement (namely, Basham and Dentith 2016), it should be noted that while eight scholars, including myself, endorsed the critique, only two of us, Basham and Dentith, actually did the writing. Just to be perfectly clear, while I am proud to be associated with the critique, and refer to it as “our” response, I did not substantially contribute to it, other than offering some comments on a couple drafts. So, it seems to me perfectly sensible for it to be published as, and referenced as, “Basham and Dentith 2016,” giving credit where it is due.

[3] Here is how they pitch it: “[The current] more or less random campaigns [to combat belief in conspiracy theories] are expensive, and this investment is automatically taken from more methodical studies of the phenomenon. It is therefore urgent that we launch widespread research programmes aimed at evaluating present educational initiatives rather than continuing to promote them” (Basham and Dentith 2016, 18, emphasis added).

[4] Further, is it not a tad hypocritical of them to charge us with a “self-serving” (Dieguez et al. 2016, 22) interpretation while they are calling for more funding for research in which they would like to engage? But for critics of conspiracy theories, double standards are par for the course.

[5] For example, a special issue of Argumenta on the ethics and epistemology of conspiracy theory will include articles by Matthew Dentith (“The Problem of Conspiracism”), Lee Basham (“Joining the Conspiracy”), and myself (“Conspiracy Theories and Monological Belief Systems”). In addition, “Conspiracy Theory Phobia,” by Juha Räikkä and Lee Basham, is forthcoming in Conspiracy Theories and the People Who Believe Them (Oxford University Press), edited by Joseph Uscinski and Joseph Parent.

[6] They write, “So, what were ‘they’ up to? Quite simply, ‘they’ advocated for more research. ‘They’ figured that, before ‘fighting’ against, or ‘curing’, conspiracy theories, it would be good to know exactly what one is talking about. Are conspiracy theories bad? Are they good? Are they always bad, are they always good? … ‘They’, in fact, are ‘just asking’ some questions” (Dieguez et al. 2016, 21). Once again, this is a clearly misleading representation of what they were up to. They now ask in a neutral voice, “Are conspiracy theories bad?” Yet they had already answered this when they described belief in conspiracy theories as a disease and conflated it with “contemporary misinformation known as ‘conspiracism’” (Basham and Dentith 2016, 17). Have they truly turned over a new leaf? If so, why not be honest about what they had originally said?

[7] They contrast data, data collection, experimental designs, and empirical research with “armchair” reasoning and various derogatory versions of the same (Dieguez et al. 2016, 22, 25, and 32).

[8] I would like to thank Lee Basham and Matthew Dentith for their helpful comments on an earlier draft of this response.

[a] For articles in this exchange, from least recent to most recent, please refer to:

Call for Papers: “Charting trans and posthumanist imaginaries in future-making”
(see Panel 53:

Science in Public Conference, University of Sheffield, 10th-12th July 2017
Emilie Whitaker, University of Salford,

The call for papers closes April 18, 2017.

Posthumanism and transhumanism are two emerging cultural movements that use recent developments in science and technology to challenge, in rather different ways, conventional conceptions of the human condition. Originally seen as more aligned with science fiction than science fact, they now straddle the divide, helped along by increasing media attention and capital investment. Whilst posthumanism continues to be theoretically explored within the social sciences and humanities, transhumanism remains an outlier to the academy. This is despite developments in science and technology which decouple traditional understandings of human/non-human action, agency, labour and capital. In this respect, both trans and posthumanism come very well adapted to our 'post-truth' times. We welcome submissions on this general theme, including the following topics:

  • Post- vs trans- humanist projections of the future of humanity, both utopic and dystopic
  • The appeal to post- and trans- humanist ideas and images in the general culture
  • Scientific bases – or not – for post- and trans- humanist knowledge claims
  •  The influence – or not – of post- and trans- humanist views on public policy
  • The place of capitalism in post- and trans- humanist imaginaries
  • The place of post- and trans- humanism in the academy: Do they bridge the ‘two cultures’?
  • How trans and post humanism conceive of the place of democracy in guiding the future
  • Exploration of how science communication invokes, borrows or rejects trans and posthumanist tropes.

We welcome ‘alternative’ contributions – for example, short pieces of prose or extracts of speculative near-future fiction – as well as empirically-based findings papers. We are also particularly keen to support early career researchers.

Author Information: Jamie Shaw, Western University,

Shaw, Jamie. “Feyerabend and the Cranks: On Demarcation, Epistemic Virtues, and Astrology.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 74-88.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:

Image credit: Jonathan Khoo, via flickr

In a well-known paper, Larry Laudan announces the demise of any attempt to provide criteria distinguishing science from non-science or pseudoscience.[1] He writes, “the [demarcation] question is both uninteresting and, judging by its checkered past, intractable.”[2] While many philosophers contributed to this “checkered past,” one of the most noteworthy critics of demarcation was Paul Feyerabend, who argued that no meaningful demarcation criterion can be provided that does not simultaneously deny the scientific status of many of the most important transitions in the history of science. The primary aim of this paper is to reconstruct Feyerabend’s arguments for pluralism and their implications for the very idea of a demarcation criterion, and to show how Pigliucci’s revival of the demarcation problem fails to address these arguments. I then evaluate Kidd’s attempt to reintroduce Feyerabend into this discourse via his defense of purported pseudosciences. I conclude by highlighting Feyerabend’s numerous remarks about “the cranks,” which show his intellectual allegiance to some of Pigliucci’s and Kidd’s goals.

The structure of this paper is as follows. In the first section, I reconstruct Feyerabend’s views on pluralism.[3] Specifically, I focus on his principles of proliferation and tenacity and the consequences they have for the demarcation problem. In the second section, I show how Pigliucci’s demarcation criteria fail in light of this reconstruction. In the third section, I consider Kidd’s analysis of Feyerabend’s defense of astrology and its reformulation in light of Pigliucci’s criticisms, and I defend a revised formulation of Kidd’s original position. The final section highlights Feyerabend’s disdain for “the cranks,” which appears to line up with Pigliucci’s and Kidd’s concerns.

A Tale of Two Principles: Feyerabend on Proliferation and Tenacity

Though Popper inspired Pigliucci’s revival of the demarcation problem, his own criterion differs from falsificationism. Since Feyerabend was one of the most vociferous and important critics of Popper’s philosophy and of demarcation criteria in general, his arguments must be circumvented for a revival of the demarcation problem to be successful. Indeed, in Pigliucci and Boudry’s 2013 collection, Feyerabend is only referenced once en passant. The burden of proof, therefore, lies on Pigliucci to show how Feyerabend’s arguments against demarcation have been mistaken. This section will rehearse Feyerabend’s arguments against any demarcation criterion, via his defense of pluralism, which can serve as a standard for evaluating Pigliucci’s own model.

Pluralism is the most dominant theme throughout Feyerabend’s career. As Robert Farrell rightly notes:

The most long-lived, ubiquitous and deepest theme of Feyerabend’s philosophy is pluralism. The changes in Feyerabend’s philosophy, over the decades is best interpreted as the gradual drawing out of the consequences of a pluralistic philosophy: pluralism is the hard-core of the Feyerabendian philosophical program and it came to permeate all aspects of his thought.[4]

Similarly, Oberheim writes that “almost all of [Feyerabend’s] major publications, and even most of the minor ones, contain some form of a methodological argument for pluralism.”[5] As such, I cannot hope to capture the details of Feyerabend’s views, their development, and their motivation.[6] In this section, I outline two principles that comprise Feyerabend’s pluralism: the principles of proliferation and tenacity.

In the late 1950s and early 60s, Feyerabend argued that all observation statements (“facts”) rely on theoretical assumptions. For any observation statement to be true, we must make certain theoretical assumptions about the nature of observation. This may include theories about observation itself (e.g., perception, physiology, etc.) or about what Feyerabend calls “mediating terms” which are not immediately present in observation (e.g., the laws of optics, relative motion, Coriolis forces, etc.). Furthermore, the meaning of observation terms is, at least partially, dependent on theories. Demon possessions used to be (and, for some, still are) observational facts in the same way we “observe” seizures.

If we deny medieval demon psychology, then observation statements such as “I see a demon possession” are false.[7] Facts, therefore, can be tested. Feyerabend’s favorite example of this is Brownian motion which, he argues, would never have refuted the second law of phenomenological thermodynamics if it weren’t for the kinetic theory of heat. This forms the basis of the principle of proliferation: we should “[i]nvent, and elaborate theories which are inconsistent with the accepted point of view, even if the latter should happen to be highly confirmed and generally accepted.”[8] As Feyerabend’s thought develops, his notion of a “test” becomes multifarious. For example, we may also:

  1. Compare the structures of infinite sets of elements and see whether there is an isomorphism or not.
  1. Compare theories via their “local grammars,” defined as “that part of a [statement’s] rules of usage which is connected with such direct operations as looking, uttering a sentence in accordance with extensively taught (not defined) rules.”[9]
  1. Construct a model of a theory “T” within its… alternative “T′” and “consider its fate.”

Additionally, alternatives change the importance of facts. Even though the discrepancies between Newton’s celestial mechanics and the orbit of Mercury at its perihelion were known since Le Verrier’s observations and calculations in 1859, it wasn’t until general relativity offered an alternative explanation that this minor problem became a major one. As Feyerabend puts it, theories “on the basis of new principles will lift them out of the background and deviational noise and then turn them into an effect that is capable of refuting the [alternative] scheme.”[10] Finally, alternatives have psychological benefits; “a mind which is immersed in the contemplation of a single theory may not even notice its most striking weaknesses.”[11] This means that even if the alternatives are not true (or empirically successful), they should still be welcomed into scientific discourses for their heuristic import.[12] This, in a nutshell, is the basis of the principle of proliferation.

The principle of proliferation, on its own, is empty. It would merely result in half-baked theories rather than sophisticated theories making interesting criticisms. This is why proliferation must be complemented by the principle of tenacity, which states that we should “select from a number of theories the one that promises to lead to the most fruitful results, and stick to this theory even if the actual difficulties it encounters are considerable.”[13] In other words, we must develop theories from their infantile stages, with internal contradictions, apparent paradoxes, and recalcitrant evidence, into more sophisticated theories that can reconcile at least some of their initial problems. One could easily claim that the slogan of Feyerabend’s pluralism is “Proliferation without Tenacity is empty and Tenacity without Proliferation is blind” or, as Feyerabend puts it, “[t]he interplay between tenacity and proliferation which we described in our little methodological fairy tale is also an essential feature of the actual development of science.”[14]

Kuhn was the first to recognize the insight behind the principle of tenacity: all theories are constantly beset by anomalies. As Lakatos puts it, all theories are “born refuted.”[15] If we were to abandon theories the moment they ran into difficulties, we would have abandoned many of the most successful theories in the history of science. The justification of some kind of tenacity is, therefore, quite reasonable. However, Feyerabend’s mature view of tenacity is exceptionally radical in two ways. Firstly, it has no conditions for acceptance; any theory can be held tenaciously. This is because only research can determine what theories are useful and in what ways.[16] Even theories that have blatant internal contradictions or seem to conflict with facts can be, and often are, developed into useful research programs; all that is needed is “[a] brilliant school of scholars (backed by a rich society to finance a few well-planned tests).”[17] Secondly, and more importantly, for Feyerabend tenacity has no “expiry date.” There are three primary arguments for this. First, any expiry date will be arbitrary. “If not now why not wait a bit longer?”[18] Second, the reason for granting a theory “breathing space” in the first place remains true; the theory may make a comeback. This is not a mere “logical possibility,” as Achinstein suggests,[19] but one that has been substantiated many times throughout the history of science.[20] Finally, any view that theories cannot make comebacks must make various metaphysical assumptions about the simplicity of nature.[21] The principle of tenacity does not, of course, commit us to pursuing indefinitely every line of research; it holds simply that it is always perfectly rational to continue developing ideas despite their extant problems.
Furthermore, tenacity must be complemented by proliferation; it is not the case that the entire scientific community should tenaciously develop one theory, as Kuhn thought, but rather that multiple theories should be developed, competing with and complementing each other in a variety of ways. While this provides only a cursory glance at Feyerabend’s pluralism, it gives us a starting point for evaluating demarcation criteria.

The principle of proliferation applies equally to many features of science: methods, theories, experimental designs, and so forth. Furthermore, what is proliferated need not be consistent with the features already at play in a given research context, since “[a]lternatives will be more efficient the more radically they differ from the point of view to be investigated.”[22] Because of this, any theory of demarcation will rule out some features that have played, or could play, important roles in advancing knowledge. Furthermore, the principle of tenacity has important consequences for theories of scientific rationality. If, at any given time t1, a theory does not meet the requirements of a theory of rationality (e.g., that theories conform to the facts, are made as simple as possible, etc.), it cannot be rejected, since it could come to meet those requirements at a later time t2 given sufficient attention to these issues. Because of this, what is “non-scientific” one day is “scientific” the next, and the transition between the two must take place within scientific debates. While much more could be said about the details of these principles and their justification, this should be sufficient for evaluating Pigliucci’s proposal for a model of demarcation.

A Feyerabendian Criticism of Pigliucci’s Demarcation Criterion

If science is as diverse as Feyerabend claims, and cannot be understood as a single entity, then any demarcation criterion that provides necessary conditions that theories or methods must meet to be scientific will inevitably exclude valuable scientific endeavors. Pigliucci is sensitive to this point and does not wish to return to the “old-fashioned” ways of distinguishing science from pseudoscience via some set of necessary and sufficient conditions.[23] Instead, Pigliucci suggests that demarcation must be understood as a family resemblance concept “characterized by a number of threads connecting instantiations of the concept, with some threads more relevant than others to specific instantiation.”[24] He immediately follows up by stating that “[a]t a very minimum, two ‘threads’ run throughout any meaningful treatment of the differences between science and pseudoscience: what I label ‘theoretical understanding’ and ‘empirical knowledge.’”[25] This definition, admittedly preliminary,[26] provides necessary conditions for what constitutes science.[27] He then states that theoretical understanding and empirical knowledge come in degrees, with pseudoscience possessing little to none of either virtue. While Pigliucci does not define what he means by “empirical knowledge,” he appears to mean “confirmed predictions,” while “theoretical understanding” involves “internal coherence and logic.”[28] I have no clue what it means for a theory to “have logic,” but internal coherence is cashed out as a lack of internal contradictions or of contradictions with other well-established scientific theories. Pigliucci concludes by providing three meta-criteria for any demarcation criteria:

  1. A viable demarcation criterion should recover much (though not necessarily all) of the intuitive classification of sciences and pseudosciences generally accepted by practicing scientists and many philosophers of science…
  1. Demarcation should not be attempted on the basis of a small set of individually necessary and jointly sufficient conditions…A better approach is to understand them via a multidimensional continuous classification based on degrees of theoretical soundness and empirical support…
  1. Philosophers ought to get into the political and social fray raised by discussion about the value (or lack thereof) of both science and pseudoscience.[29]

Let us now consider these statements in light of what we learned in the first section. First, theories that enjoy low degrees of empirical support (or even conflict with known facts), or that are theoretically confused, are perfectly pursuit-worthy on Feyerabend’s account. This is because such theories can gain empirical support, can “correct” evidence, and can become more coherent. Furthermore, even if theories are not pursued as potentially true descriptions of the world, they can be pursued for a variety of heuristic purposes (e.g., as instruments of criticism, as points of contrast, or as serving a number of psychological functions necessary for more general critical attitudes). Therefore, Pigliucci’s criteria fail to provide reasonable grounds for preventing the consideration of “pseudosciences.”[30]

Furthermore, not only does the principle of tenacity allow us to pursue theories with internal contradictions, it also allows us to pursue theories that contradict previously well-established theories. Pigliucci wrongfully states that “[f]ollowing a Quinean conception of the web of knowledge, one would then be forced to either throw out astrology (and, for similar reasons, creationism) or reject close to the entirety of the established sciences…The choice is obvious.”[31] We don’t need to “throw out” anything! We can retain both theories, develop them, and see what happens.[32] As for the meta-criteria, (1) seems suspicious for two main reasons.[33] The first concerns virtue epistemology. Pigliucci concedes to Kidd that it is a virtue for scholars not to make declarations about fields alien to their expertise.[34] However, demarcation criteria affect people with different intellectual backgrounds. They affect funding distribution policies, taxation policies, those who benefit or are harmed by the creation (or lack thereof) of particular pieces of scientific knowledge, and so on. This reaches far beyond the domain of scientists or philosophers of science, who provide, at best, one perspective on demarcation. Additionally, the intuitions of scientists and philosophers may have been shaped by social forces which are themselves problematic. If scientists are forced to conform to certain views because their education does not provide viable alternatives, if peer review is so conservative that it causes long-term conformity, and so on, then those intuitions aren’t worth taking seriously.[35] They are products of sociological forces which are themselves open to criticism. On this view, scientists and philosophers of science may have the wrong intuitions, intuitions that need to be corrected. I have no immediate complaints about (2),[36] and (3) is completely Feyerabendian. If we are to have a theory of demarcation, it should be of practical relevance.

I welcome a response from Pigliucci and his sympathizers reformulating their views in light of these problems. In the meantime, there appears to be little reason to find this view appealing in light of the many criticisms from Feyerabend and others.[37] I will leave this issue aside for now and move on to Kidd’s arguments concerning Feyerabend’s defense of astrology.

On Feyerabend’s Defense of Astrology and Virtue Epistemology

Kidd’s paper does not directly target Pigliucci’s claims on demarcation. However, as evidenced by their dialogue, their arguments overlap. In his paper, Kidd makes two primary claims. First, that Feyerabend defended the epistemic integrity of some practitioners of astrology because he was practicing the pluralism he preached and decided to defend views that were dismissed or ostracized from the philosophy of science. In other words, Feyerabend was proliferating.[38] Secondly, these actions can be understood using the resources of contemporary virtue epistemology. In this section, I outline Kidd’s original claims, show his concessions in light of Pigliucci’s criticisms, and argue that Kidd’s original claims are correct. I then point out a few potential pitfalls for the subsequent development of a Feyerabendian account of virtue epistemology.

Kidd’s paper attempts to “identify the epistemic rationale for Paul Feyerabend’s defences of astrology, voodoo, witchcraft, Chinese traditional medicine, and other ‘non-scientific’ beliefs, practices, and traditions.”[39] His thesis is that the epistemic rationale motivating Feyerabend’s defense of purported pseudosciences is not that he is committed to them (i.e., believes them to be true) but that he is practicing his own brand of pluralism which derives from Mill.[40] Feyerabend lays out his interpretation of Mill’s pluralism as the conjunction of four claims:

  1. Because a view one may have reason to reject may still be true. “To deny this is to assume our own infallibility.”
  1. Because a problematic view “may and very commonly does, contain a portion of truth; and since the general or prevailing opinion on any subject is rarely or never the whole truth, it is only by the collision of adverse opinions that the remainder of the truth has any chance of being supplied.”
  1. Even a point of view that is wholly true, but is not contested, “will…be handled in the manner of a prejudice, with little comprehension or feeling of its rational grounds.”
  1. One will not understand its meaning, subscribing to it will become “a mere formal confession” unless a contrast with other opinions shows wherein this meaning consists.[41]

Or, in Kidd’s words:

Central to [Feyerabend’s] pluralism is the epistemological conviction that the use of “radical alternatives” to prevailing theories and methods enables “immanent critique” of entrenched systems of thought and practice. The use of radical alternatives can afford new and otherwise unavailable forms of empirical and theoretical critique and so provides an essential strategy for countering…a tendency for enquirers to drift into a state of unreflective reliance upon a fixed set of epistemic resources.[42]

There are plenty of empirical reasons to think that pluralism of this kind can deliver on its promises, so we can reasonably expect pluralism to achieve its desired results.[43] Feyerabend’s defense of astrology, according to Kidd, can be seen as an attempt to combat the epistemic vice of arrogance (or, conversely, to promote the epistemic virtue of humility).[44] To support this interpretation, Kidd considers Feyerabend’s “The Strange Case of Astrology,” which was written in response to a statement published in The Humanist, signed by 186 prominent scientists, condemning astrology as contributing to the “growth of irrationalism and superstition.”[45] Without going into the details of Feyerabend’s article, he essentially argues that the writers of the Humanist statement are often historically inaccurate, make confused conceptual statements about astrology, and, more generally, do not know anything about astrology. Astonishingly, Feyerabend writes:

This [that the writers of the statement “certainly do not know what they are talking about”] is quite literally true. When a representative of the BBC wanted to interview some of the Nobel Prize Winners they declined with the remarks that they had never studied astrology and had no idea of its details.[46]

Feyerabend admits that there are genuine problems with modern astrology (which are not the same problems as those of the astrology of, say, Kepler); modern astrology is “not used for research; there is no attempt to proceed into new domains and to enlarge our knowledge…. they simply serve as a reservoir of naïve rules suited to impress the ignorant.”[47] However, “this is not the objection that is raised by our scientists.”[48] By revealing the ignorance behind the statement, Feyerabend defends modern astrology not because he thinks it’s true (or even valuable) but because its critics are being arrogant, and defending a “pro-astrology” perspective is necessary to combat this vice. For scientists to enjoy any epistemic authority, they must display the proper epistemic virtues, virtues that were not demonstrated in The Humanist response.

We can see how Pigliucci’s demarcation conflicts with Feyerabend’s pluralistic defense of astrology. Astrology in its modern form is not an empirically successful science and thereby fails to meet Pigliucci’s demarcation criterion.[49] Remember, however, that alternatives have many different functions, and Kidd has highlighted one of them in Feyerabend’s defense of astrology: combating arrogance and ignorance. Pigliucci makes a few criticisms in his reply to Kidd, to which Kidd concedes. Pigliucci admits that the Humanist statement is indeed problematic. Specifically, it is a form of scientism, which Pigliucci defines as “scientific claims overstepping the epistemic authority of science…largely directed at delegitimizing the humanities and establishing a sort of scientific imperialism on all human knowledge and understanding.”[50]

Scientism, Pigliucci claims, is the common enemy; he, Kidd, and Feyerabend merely “disagree on how most effectively to deal with the menace.”[51] These disagreements take two primary forms:

  1. That astrology is a particularly bad choice for proliferation,
  1. That Feyerabend displayed the vice of “epistemic recklessness” in defending astrology.

For the former, Pigliucci argues that “astrology has never been a research program” and, even more strongly, that “both astrology and voodoo have no epistemic value whatsoever.”[52]

Pigliucci then generalizes this claim to other purported pseudosciences, stating that “radical alternatives are fine if they are credible and constructive, but astrology, voodoo, homeopathy and the like are light-years away from being either.”[53] For the latter, Pigliucci states that the results of Feyerabend’s “attitude” are deeply troublesome: “rampant denial of climate change, the anti-vaccination movement, AIDS denialism, and so forth. All of which is costing us in the hard currency of actual pain, suffering, and death.”[54]

Kidd then backs off from a few of his claims. He writes that Pigliucci is “quite right” that “Feyerabend is wrong to say that astrology is a good example of the limits of scientific explanation” and that he is “happy to concede” that astrology was not a research program though he does not respond to the stronger claim that pseudosciences are completely worthless.[55] Kidd also concedes that Feyerabend himself had “epistemically vicious positions at certain times of his life [and] joins the rest of us in having a dappled character.”[56]

I argue that Pigliucci hasn’t offered any good reasons for Kidd to back down on any of these claims. First, Pigliucci never addresses the pluralist motivation behind Feyerabend’s defense of astrology. Remember tenet (3) of Feyerabend’s Millian justification of pluralism: we do not understand the rational basis for, say, rejecting astrology and preferring modern astronomy without knowing what astrology was, what the arguments for and against it were, and so forth. In other words, it must be taught and discussed. The lack of pluralism is a partial cause for the ignorance of the writers of the Humanist manifesto and, therefore, astrology doesn’t need to be true to be a part of some kinds of scientific discussions. Second, astrology most certainly was a research program in a loose sense.[57] Feyerabend even supplies some of the preliminary arguments for this in his article.

Depending on how loosely one interprets the astrological tenet that celestial events influence human affairs, there was research in the early 70s suggesting many causal links between certain celestial events and non-reproducible physico-chemical processes. This research spawned a number of further studies, the citations of which Feyerabend provides, which even filled a (then) lacuna in environmental studies.[58] Feyerabend also discusses Kepler’s arguments and evidence for retaining a constrained version of sidereal astrology (though not tropical astrology), and there is much more that could be discussed about the development of astrology over centuries of overlapping research programs.[59] This is part of Feyerabend’s complaint: these expansive explorations, with varying degrees of success, all become subsumed under the single heading of “astrology” with the assumption that the entire research program possesses no more rigor than newspaper horoscopes.

Finally, Pigliucci has not given any reason to think that Feyerabend’s defense of astrology was an instance of “epistemic recklessness.” While Kidd has argued elsewhere that Feyerabend deplored many intellectually dishonest endeavors that traded on his arguments,[60] Feyerabend never, to my knowledge, discusses climate change, anti-vaccination movements, or AIDS denialism; these (mostly) became issues after Feyerabend’s death. Furthermore, there is no legitimate inference from Feyerabend’s pluralism to a direct defense of these positions. Feyerabend repeatedly states that each case must be analyzed on its own and not lumped into more general categories.[61] Since Feyerabend made no specific comments about these issues, he has no commitment to any of the peculiarities of these subjects (which are themselves multifaceted and disunified).[62] Therefore, Pigliucci cannot ascribe any of these particular consequences to Feyerabend. For these reasons, I urge Kidd to retain his initial argument that Feyerabend’s defense of astrology, as a challenge to the epistemic authority of scientists, was a perfectly fine choice, both in terms of virtue epistemology and its scientific credentials.

I’d like to finish this section by remarking that if Kidd wishes to elaborate his virtue-epistemology reading of Feyerabend, which I would certainly encourage, there are pitfalls that he (and those similarly inclined) should be careful of. Many epistemic vices serve functions that may be of value to the scientific community as a whole. Feyerabend points out how vices like stubbornness (e.g., Boltzmann’s defense of atomism) or deceptiveness (e.g., his case study of Galileo) can be important for the growth of knowledge. This argument is most prominent in Feyerabend’s defense of propaganda: contingent idiosyncrasies of particular communities may require overcoming by unorthodox and potentially “vice-like” behaviour.[63] Unless Kidd wants to suggest that vices are inherently problematic, he must allow for a flexible notion of what counts as a “vice” or a “virtue.” I think this accommodation can easily be made, but it does require attention in the subsequent development of a Feyerabendian virtue epistemology. Regardless, it would be interesting to see what virtue epistemology Feyerabend might have endorsed, given his recognition of the diverse kinds of mindsets needed for a flourishing community and his radical cultural pluralism.

Feyerabend and the Cranks

Throughout Feyerabend’s career, he complains about what he calls “the cranks.” While Feyerabend did not, and would not, provide a definition of who counts as a “crank,” his general description of cranks should sound familiar to those worried about intellectual honesty in science. Early in Feyerabend’s career, he writes the following:

The distinction between the crank and the respectable thinker lies in the research that is done once a certain point of view is adopted. The crank usually is content with defending the point of view in its original, undeveloped, metaphysical form, and he is not prepared to test its usefulness in all those cases which seem to favor the opponent, or even admit that there exists a problem. It is this further investigation, the details of it, the knowledge of the difficulties, of the general state of knowledge, the recognition of objections, which distinguishes the “respectable thinker” from the crank. The original content of his theory does not.[64]

Indeed, Feyerabend’s aforementioned complaints about modern astrology fall under this category. Those who do not wish to assess astrology critically, attempt to apply it in new ways, test it, and so forth are, simply put, cranks. One can infer that Feyerabend is not supporting the proliferation of cranks, but of serious researchers who get lumped together with them. This is evidenced by whom Feyerabend cites. In his defense of voodoo, he doesn’t defend con artists on Bourbon Street, but the sophisticated and extensive work of C.P. Richter and W.B. Cannon,[65] which is scientific by any reasonable standard![66] Similarly, in Against Method, Feyerabend complains about “intellectual pollution” where “illiterate and incompetent books flood the market, empty verbiage full of strange and esoteric terms claims to express profound insights, ‘experts’ without brains, without character, and without even a modicum of intellectual, stylistic, emotional temperament tell us about our ‘condition’ and the means of improving it.”[67] It is clear that there is a commonality between Pigliucci, Kidd, and Feyerabend: their disdain for the cranks! Feyerabend’s refusal to defend the cranks[68] clarifies what kind of proliferation he is interested in and what attitudes he thinks belong in scientific communities.

Concluding Remarks

Pigliucci is right to stress the social, political, and epistemic importance of the demarcation problem. For decades, the preoccupation with uncovering what is unique and praiseworthy about science dominated the philosophy of science. But times have changed. Increasing investigation into various scientific practices throughout history and across the globe has made it seemingly impossible to resuscitate the universal standards that philosophers once sought. I hope to have contributed to this discussion by helping to ensure that our revitalization of the demarcation debate does not repeat the mistakes of the past and that we begin thinking of demarcation in terms of its conditions of application and its relationship to pluralism.


Many thanks to Ian James Kidd for his helpful comments. I tried to address as many of them as I could. Marie Gueguen, Erlantz Etxeberria, and Adam Koberinski also provided superb feedback while workshopping an earlier draft of this paper.


Achinstein, Peter. “Proliferation: Is It a Good Thing?” In The Worst Enemy of Science?: Essays in Memory of Paul Feyerabend, edited by John Preston, Gonzalo Munévar, and David Lamb. Oxford: Oxford University Press, 2000.

Bigo, Vinca and Ioana Negru. “From Fragmentation to Ontologically Reflexive Pluralism.” Journal of Philosophical Economics 1, no. 2 (2008): 127-150.

Bschir, Karim. “Feyerabend and Popper on Theory Proliferation and Anomaly Import: On the Compatibility of Theoretical Pluralism and Critical Rationalism.” HOPOS: The Journal of the International Society for the History of Philosophy of Science 5, no. 1 (2015): 24-55.

Desjardins, E., J. Shaw, G. Barker, and J. Bzovy. “Geofunctions, Pluralism, and Environmental Management.” In From the North of 49: New Perspectives in Canadian Environmental Philosophy. Montreal: McGill-Queen’s University Press, forthcoming.

Epstein, Steven. “The Construction of Lay Expertise: AIDS Activism and the Forging of Credibility in the Reform of Clinical Trials.” Science, Technology & Human Values 20, no. 4 (1995): 408-437.

Farrell, Robert. Feyerabend and Scientific Values: Tightrope-Walking Rationality. Dordrecht: Kluwer, 2003.

Feyerabend, Paul. “Explanation, Reduction and Empiricism.” In Minnesota Studies in the Philosophy of Science, Volume III: Scientific Explanation, Space and Time, edited by Herbert Feigl and Grover Maxwell, 28-97. Minneapolis: University of Minnesota Press, 1962.

Feyerabend, Paul. “Realism and Instrumentalism: Comments in the Logic of Factual Support.” In Critical Approaches to Science and Philosophy, edited by Mario Bunge, 260-308. Princeton: The Free Press, 1964.

Feyerabend, Paul. “Reply to Criticism: Comments on Smart, Sellars and Putnam.” Proceedings of the Boston Colloquium for the Philosophy of Science (1965a): 223-61.

Feyerabend, Paul. “Problems of Empiricism.” In Beyond the Edge of Certainty: Essays in Contemporary Science and Philosophy, edited by Robert G. Colodny, 145-260. University of Pittsburgh Series in the Philosophy of Science, Vol. 2. Englewood Cliffs, NJ: Prentice-Hall, 1965b.

Feyerabend, Paul. “Against Method: Outline of an Anarchistic Theory of Knowledge.” In Minnesota Studies in the Philosophy of Science, Volume 4: Analysis of Theories and Methods of Physics and Psychology, edited by Michael Radner and Stephen Winokur, 17-130. Minneapolis: University of Minnesota Press, 1970a.

Feyerabend, Paul. “Consolations for the Specialist.” In Criticism and the Growth of Knowledge, edited by Imre Lakatos and Alan Musgrave, 197-231. Cambridge: Cambridge University Press, 1970b.

Feyerabend, Paul. “In Defence of Classical Physics.” Studies in History and Philosophy of Science (1970c): 59-85.

Feyerabend, Paul. Against Method. London: Verso Books, 1975.

Feyerabend, Paul. Science in a Free Society. London: Verso Books, 1978.

Feyerabend, Paul. “Proliferation and Realism as Methodological Principles.” In Rationalism, Realism, and Scientific Method: Philosophical Papers Volume 1, 139-145. Cambridge: Cambridge University Press, 1981.

Kassell, Lauren. “Stars, Spirits, Signs: Towards a History of Astrology 1100–1800.” Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 41, no. 2 (2010): 67–69.

Kitcher, Philip. Science in a Democratic Society. New York: Prometheus Books, 2011.

Kidd, Ian James. “Why Did Feyerabend Defend Astrology? Integrity, Virtue, and the Authority of Science.” Social Epistemology 30, no. 4 (2016a): 464-482.

Kidd, Ian James. “How Should Feyerabend Have Defended Astrology? A Reply to Pigliucci.” Social Epistemology Review and Reply Collective 5, no. 6 (2016b): 11-17.

Kidd, Ian James. “Was Feyerabend a Postmodernist?” International Studies in the Philosophy of Science 30, no. 1 (2016c): 55-68.

Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: The University of Chicago Press, 1962.

Lakatos, Imre. “Falsification and the Methodology of Scientific Research Programmes.” In Criticism and the Growth of Knowledge, edited by Imre Lakatos and Alan Musgrave, 91-197. Cambridge: Cambridge University Press, 1970.

Lakatos, Imre. “On Popperian Historiography.” In Philosophical Papers Volume 2: Mathematics, Science and Epistemology, edited by John Worrall and Gregory Currie, 201-210. Cambridge: Cambridge University Press, 1978.

Laudan, Larry. “The Demise of the Demarcation Problem.” In Physics, Philosophy and Psychoanalysis, 111-127. Springer Netherlands, 1983.

Lloyd, Elisabeth A. “Feyerabend, Mill, and Pluralism.” Philosophy of Science 64 (1997): S396-S407.

Oberheim, Eric. Feyerabend’s Philosophy. Berlin: Walter de Gruyter, 2006.

Pigliucci, Massimo. “The Demarcation Problem: A (Belated) Response to Laudan.” In Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, edited by Massimo Pigliucci and Maarten Boudry, 9-28. Chicago: University of Chicago Press, 2013.

Pigliucci, Massimo. “Was Feyerabend Right in Defending Astrology? A Commentary on Kidd.” Social Epistemology Review and Reply Collective 5, no. 5 (2016): 1-6.

Post, H.R. “Correspondence, Invariance and Heuristics: In Praise of Conservative Induction.” Studies in History and Philosophy of Science Part A 2, no. 3 (1971): 213-255.

Preston, Christopher J. “Pluralism and Naturalism: Why the Proliferation of Theories is Good for the Mind.” Philosophical Psychology 18, no. 6 (2005): 715-735.

Preston, John. Feyerabend: Philosophy, Science and Society. Hoboken: John Wiley & Sons, 1997.

Roberts, Royston M. Serendipity: Accidental Discoveries in Science. Hoboken: John Wiley & Sons, 1989.

Stanford, P. Kyle. “Unconceived Alternatives and Conservatism in Science: The Impact of Professionalization, Peer-Review, and Big Science.” Synthese (2015): 1-18. DOI: 10.1007/s11229-015-0856-4

Tsui, Anne S. “From Homogenization to Pluralism: International Management Research in the Academy and Beyond.” Academy of Management Journal 50, no. 6 (2007): 1353-1364.

[1] Laudan, “The Demise of the Demarcation Problem.”

[2] Ibid., 125.

[3] I acknowledge that the act of treating Feyerabend’s pluralism as a unified doctrine conflicts with Oberheim’s reading of Feyerabend as having no unified view (Oberheim, Feyerabend’s Philosophy, 12). I disagree with this reading, since there is substantial theoretical continuity across Feyerabend’s published works up to (and including) Against Method, but I will not make this argument here.

[4] Farrell, Feyerabend and Scientific Values, 135.

[5] Oberheim, Feyerabend’s Philosophy, fn. 338, 246.

[6] The most detailed discussions of Feyerabend’s pluralism can be found in chapters 7-9 in Oberheim, Feyerabend’s Philosophy; chapter 7 of Preston, Feyerabend; Lloyd, “Feyerabend, Mill, and Pluralism”; and chapters 5 and 6 in Farrell, Feyerabend and Scientific Values; though these accounts differ in various ways. I do not think any of these accounts is completely accurate for reasons I will not go into here. However, they should provide the reader with a starting point for understanding Feyerabend’s pluralism.

[7] The same point is true for less complicated observation terms since any term licenses particular inferences and, therefore, makes theoretical assumptions about the entity observed. The sentence “I see a tree” is false if what is seen does not, say, absorb carbon dioxide or engage in photosynthesis.

[8] Feyerabend, “Reply to Criticism,” 105. For a more detailed description of this process of “anomaly import,” see Bschir, “Feyerabend and Popper on Theory Proliferation and Anomaly Import”; for a reconstruction of Feyerabend’s account of Brownian motion, see Couvalis, “Feyerabend, Ionesco, and the Philosophy of the Drama.”

[9] Ibid., fn. 32, 116.

[10] Ibid., fn. 7, 106.

[11] Ibid. See Preston, “Pluralism and Naturalism” for an empirically updated defense of this view.

[12] Feyerabend cites many empirical studies to support this intuition and a few which show its limits (cf. Feyerabend, “Against Method,” fn. 42, 107). Contemporary empirical literature also supports a Feyerabendian view (Preston, “Pluralism and Naturalism”).

[13] Feyerabend, “Consolations for the Specialist,” 203.

[14] Ibid., 209.

[15] See chapters 6 and 7 of Kuhn, The Structure of Scientific Revolutions; Lakatos, “Falsification and the Methodology of Scientific Research Programmes,” 3(c) and (d); and Feyerabend, “Against Method,” 37-40 for examples and discussions.

[16] While Feyerabend does not mention this explicitly, many theories are fruitful in unexpected ways. See Roberts, Serendipity and the subsequent literature on serendipity in scientific discovery for examples.

[17] Lakatos, “Falsification and the Methodology of Scientific Research Programmes,” 100.

[18] Feyerabend, “Against Method,” 77.

[19] Achinstein, “Proliferation.”

[20] Feyerabend’s favorite example of this is Boltzmann’s defense of atomism (see Feyerabend, “Problems of Empiricism,” 108). Furthermore, while Feyerabend never makes this connection, comebacks can include theories that were pursued without a gap and theories that were abandoned at one point and resurfaced later on (see chapter 4 of Against Method and his “In Defence of Classical Physics” (especially fn. 20, 66) for his defense of the revival of classical physics in the 1960s, and recent literature on Kuhn-loss (cf. Post, “Correspondence, Invariance and Heuristics”) for several examples).

[21] Feyerabend, Against Method, fn. 12, 185. Since Hume, defending the simplicity of nature thesis has been remarkably difficult to do in a non-circular fashion. However, one could conceivably have other metaphysical theses that entail that theories that fail will continue to fail.

[22] Feyerabend, “Problems of Empiricism,” 214.

[23] Pigliucci, “The Demarcation Problem,” 19.

[24] Ibid., 21.

[25] Ibid., 22.

[26] “I am certainly not suggesting that these are the only criteria by which to evaluate the soundness of a science (or pseudoscience), but we need to start somewhere” (Pigliucci, “The Demarcation Problem,” 22).

[27] Pigliucci states that theoretical understanding and empirical knowledge can both be made rigorous with fuzzy logic with no clearly defined borders, and that this is what he means by a “family resemblance concept.” But these are completely separate issues. A family resemblance concept would allow that a concept can be missing some conditions entirely, which is different from saying these conditions have fuzzy boundaries. I will leave this ambiguity alone for the moment, as it does not affect his primary claims.

[28] Pigliucci, “The Demarcation Problem,” 22.

[29] Ibid., 25-26.

[30] See Desjardins et al., “Geofunctions, Pluralism, and Environmental Management” for a defense of the use of non-testable theories to ground policy decisions.

[31] Pigliucci, “The Demarcation Problem,” 24.

[32] This, of course, is a practical impossibility since we must make choices about what to fund and what to abandon. However, how the hypothetically unconstrained nature of tenacity and proliferation must be adapted to meet these practical demands is a separate question.

[33] Pigliucci, “The Demarcation Problem,” 1.

[34] Pigliucci, “Was Feyerabend Right in Defending Astrology?,” 1.

[35] This often seems to be the case (cf. Stanford, “Unconceived Alternatives and Conservatism in Science”).

[36] Kidd has pointed out to me that Feyerabend himself may have been sympathetic to this notion (see the introduction to the Chinese edition of Against Method).

[37] There are similar, but importantly distinct, justifications of tenacity from Lakatos, “Falsification and the Methodology of Scientific Research Programmes” and Kuhn, The Structure of Scientific Revolutions. Feyerabend’s criticisms are not the only ones that need to be overcome to advance our knowledge on demarcation.

[38] “The principle of proliferation not only recommends invention of new alternatives, it also prevents the elimination of older theories which have been refuted” (Feyerabend, “Problems of Empiricism,” 107).

[39] Kidd, “Why Did Feyerabend Defend Astrology?,” 464.

[40] Kidd credits Oberheim’s Feyerabend’s Philosophy for the arguments that Feyerabend was not committed to his defense of pseudosciences and Lloyd’s “Feyerabend, Mill, and Pluralism” for the argument that Feyerabend’s polemics can be seen as his pluralism in action.

[41] Feyerabend, “Proliferation and Realism as Methodological Principles,” 139.

[42] Kidd, “Why Did Feyerabend Defend Astrology?,” 468. This Millian defense of pluralism extends the account roughly sketched out in section 1, though I will not go into the fine-grained details of how Feyerabend’s understanding of pluralism evolved from the early ’60s to the early ’80s.

[43] Cf. Preston, “Pluralism and Naturalism”; Tsui, “From Homogenization to Pluralism”; Bigo and Negru, “From Fragmentation to Ontologically Reflexive Pluralism.”

[44] Kidd, “Why Did Feyerabend Defend Astrology?,” 473.

[45] Quoted in Kidd, “Why Did Feyerabend Defend Astrology?,” 470.

[46] Feyerabend, Science in a Free Society, fn. 13, 91.

[47] Ibid., 96.

[48] Ibid.

[49] It is unclear what the practical applications of Pigliucci’s demarcation criterion are supposed to be. Should pseudoscience not appear in journals? Textbooks? University curricula? Should it be subjected to further research? All of the above? The answer to this question is crucial if we are to understand what exact functions pseudosciences should or should not play within science.

[50] Pigliucci, “Was Feyerabend Right in Defending Astrology?,” 1. Kidd, “How Should Feyerabend Have Defended Astrology?,” 11 reaffirms his and Feyerabend’s commitment to combating scientism.

[51] Ibid.

[52] Ibid., 2.

[53] Ibid., 3.

[54] Ibid.

[55] Kidd, “How Should Feyerabend Have Defended Astrology?,” 11-12.

[56] Ibid., 15. Kidd states that this is “affirmed in [Feyerabend’s] autobiography” but does not offer any quotations or hints as to what these epistemic vices are or how they are relevant to Feyerabend’s defense of astrology. I certainly would not argue that Feyerabend, or anyone else, was an epistemic saint, but these ambiguities should be addressed.

[57] Pigliucci cites Lakatos, suggesting that he means “research program” in Lakatos’s sense (though nowhere in that volume does Lakatos make that argument). Showing that this is the case would require an exceptionally complicated historical analysis. For now, I will merely argue that astrology was a research program in the more casual sense that Pigliucci seems to use.

[58] See Feyerabend, Science in a Free Society, fn. 16, 93.

[59] For a fraction of the expansive literature on the history of astrology and its applications in medicine, meteorology, astrobiology, and many other disciplines, see the references contained in Kassell, “Stars, Spirits, Signs.”

[60] Kidd, “Was Feyerabend a Postmodernist?”

[61] As a side note, both Pigliucci and Kidd often lump many distinct research programs together and discuss them as if they could be treated uniformly. It is important to note that astrology, voodoo, homeopathy, climate change skepticism, and so on are distinct disciplines with their own histories, successes and problems, methods, and so forth, and should not be treated under a single heading.

[62] Pigliucci also argues that Feyerabend’s support for the democratization of science has had “horrible results,” citing the decisions of parents not to vaccinate their children (Pigliucci, “Was Feyerabend Right in Defending Astrology?,” 4). First, the decision to vaccinate or not is partially a value decision and, therefore, certainly one that should be discussed in a democratic fashion. Second, there is a wealth of literature on the positive effects of the democratization of science, such as racial inclusivity in AIDS clinical trials (Epstein, “The Construction of Lay Expertise”), increasing safety standards for nuclear waste transportation, and many other important social issues. See Kitcher, Science in a Democratic Society for a brief overview of some of these discussions.

[63] “Even the most puritanical rationalist will then be forced to stop reasoning and to use propaganda and coercion, not because some of his reasons have ceased to be valid, but because the psychological conditions which make them effective, and capable of influencing others, have disappeared. And what is the use of an argument that leaves people unmoved?” (italics in original, Feyerabend, Against Method, 16).

[64] Feyerabend, “Realism and Instrumentalism,” 305.

[65] Feyerabend, Against Method, fn. 7, 30.

[66] The case is more difficult with witchcraft and ancient Chinese medicine since his references are more oblique and sporadic. See chapter 4 of Against Method for a somewhat sustained discussion of ancient Chinese medicine and witchcraft.

[67] Feyerabend, Against Method, 219.

[68] He does, however, explicitly defend the use of the cranks’ ideas (Feyerabend, Against Method, 26). This can also be seen in the “Realism and Instrumentalism” quote where he states that the content does not distinguish the respectable thinker from the crank.

Author Information: Adam Riggio, New Democratic Party of Canada,

Riggio, Adam. “Subverting Reality: We Are Not ‘Post-Truth,’ But in a Battle for Public Trust.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 66-73.

The PDF of the article gives specific page numbers. Shortlink:

Image credit: Cornerhouse, via flickr

Note: Several of the links in this article are to websites featuring alt-right news and commentary. This note serves both as a warning about offensive content and as an indication of precisely how offensive the content we are dealing with actually is.

An important purpose of philosophical writing for public service is to prevent important ideas from slipping into empty buzzwords. You can give a superficial answer to the meaning of living in a “post-truth” world or discourse, but the most useful way to engage this question is to make it a starting point for a larger investigation into the major political and philosophical currents of our time. Post-truth was one of the many ideas American letters haemorrhaged in the maelstrom of Trumpism’s wake, the one seemingly most relevant to the concerns of social epistemology.

It is not enough simply to say that the American government’s communications have become propagandistic, or that the Trump Administration justifies its policies with lies. This is true, but trivial. We can learn much more from philosophical analysis. In public discourse, the stability of what information, facts, and principles are generally understood to be true has been eroding. General agreement on which sources of information are genuinely reliable in their truthfulness and trustworthiness has destabilized and diverged. This essay explores one philosophical hypothesis as to how that happened: through a sustained popular movement of subversion – subversion of consensus values, of reliability norms about information sources, and of who can legitimately claim the virtues of subversion itself. The drive to speak truth to power is today co-opted to punch down at the relatively powerless. This essay is a philosophical examination of how that happens.

Subversion as a Value and an Act

A central virtue in contemporary democracy is subversion. To be a subversive is to push society forward against conservative, oppressive forces. It is to commit acts that transgress popular morality while providing a simultaneous critique of it. As new communities form in a society, or as previously oppressed communities push for equal status and rights, subversion calls attention to the inadequacy of currently mainstream morality to the new demands of this social development. Subversive acts can be publications, artistic works, protests, or even the slow process of conducting your own life publicly in a manner that transgresses mainstream social norms and preconceptions about what it is right to do.

Values of subversiveness are, therefore, politically progressive in their essence. The goal of subversion values is to destabilize an oppressive culture and its institutions of authority, in the name of greater inclusiveness and freedom. This is clear when we consider the popular paradigm case of subversive values: punk rock and punk culture. In the original punk and new wave scenes of 1970s New York and Britain, we can see subversion values in action. Punk’s embrace of BDSM and drag aesthetics subverts the niceties of respectable fashion. British punk’s embrace of reggae music promotes solidarity with people oppressed by racist and colonialist norms. Most obviously, punk enshrined a morality of musical composition through simplicity, jamming, and enthusiasm. All these acts and styles subverted popular values that suppressed all but vanilla heterosexualities, marginalized immigrant groups and ethnic minorities, denigrated the poor, and esteemed an erudite musical aesthetic.

American nationalist conservatism today has adopted the form and rhetoric of subversion values, if not the content. The decadent, oppressive mainstream the modern alt-right opposes and subverts is a general consensus of liberal values – equal rights regardless of race or gender, an imperative to build a fair economy for all citizens, an end to police oppression of marginalized communities, and so on. Alt-right activists push for the return of segregation and even ethnic cleansing of Hispanics from the United States. Curtis Yarvin, the intellectual centre of America’s alt-right, openly calls for an end to democratic institutions and their replacement with government by a neo-cameralist state structure that replaces citizenship with shareholds and reduces all public administration and foreign policy to the aim of profit. Yet because these ideas are a radical front opposing a broadly liberal democratic mainstream culture, alt-right activists declare themselves punk. They claim subversiveness in their appropriation of punk fashion in apparel and hair, and their gleeful offensiveness to liberal sensibilities with their embrace of public bigotry.

Subversion Logics: The Vicious Paradox and Trolling

Alt-right discourse and aesthetic claim to have inherited subversion values because their activists oppose a liberal democratic mainstream whose presumptions include the existence of universal human rights and the encouragement of cultural, ethnic, and gender diversity throughout society. If subversion values are defined entirely according to the act of subverting any mainstream, then this is true. But this would decouple subversion values from democratic political thought. At issue in this essay – and at this moment in human democratic civilization – is whether such decoupling is truly possible.

If subversion as an act is decoupled from democratic values, then we can understand it as the act of forcing an opponent into a vicious paradox. One counters an opponent by interpreting their position as implying a hypocritical or self-contradictory logic. The most general such paradox is Karl Popper’s paradox of tolerance. Alt-right discourse frames their most bigoted communications as subversive acts of total free speech – an absolutism of freedom that decries as censorship any critique or opposition to what they say. This is true whether they write on a comment thread, through an anonymous Twitter feed, or on a stage at UC Berkeley. We are left with the apparent paradox that a democratic society must, if we are to respect our democratic values without being hypocrites ourselves, accept the rights of the most vile bigots to spread racism, misogyny, anti-trans and heterosexist ideas, Holocaust denial, and even the public release of their opponents’ private information. As Popper himself wrote, the only response to such an argument is to deny its validity – a democratic society cannot survive if it allows its citizens to argue and advocate for the end of democracy. The actual hypocritical stance is free speech absolutism: permitting assaults on democratic society and values in the name of democracy itself.

Trolling, the chief rhetorical weapon of the alt-right, is another method of subversion, turning an opponent’s actions against herself. To troll is to communicate with statements so dripping in irony that an opponent’s own opposition can be turned against itself. In a simple sense, this is the subversion of insults into badges of honour and vice versa. Witness how alt-right trolls refer to themselves as shitlords, or denounce ‘social justice warriors’ as true fascists. But trolling also includes a more complex rhetorical strategy. For example, one posts a violent, sexist, or racist meme – say, Barack Obama as a witch doctor giving Brianna Wu a lethal injection. If you criticize the post, they respond that they were merely trying to bait you, and mock you as a fragile fool who takes people seriously when they are not – a snowflake. You are now ashamed, having fallen into their trap of baiting earnest liberals into believing in the sincerity of their racism, so you encourage people to dismiss such posts as ‘mere trolling.’ This allows for a massive proliferation of racist, misogynist, anti-democratic ideas under the cover of being ‘mere trolling’ or just ‘for the lulz.’

No matter the content of the ideology that informs a subversive act, any subversive rhetoric challenges truth. Straightforwardly, subversion challenges what a preponderant majority of a society takes to be true. It is an attack on common sense, on a society’s truisms, on that which is taken for granted. In such a subversive social movement, the agents of subversion attack common sense truisms because of their conviction that the popular truisms are, in fact, false, and their own perspective is true, or at least acknowledges more profound and important truths than what they attack. As we tell ourselves the stories of our democratic history, the content of those subversions was actually true. Now that the loudest voices in American politics claiming to be virtuous subversives support nationalist, racist, anti-democratic ideologies, we must confront the possibility that those who speak truth to power have a much more complicated relationship with facts than we often believe.

Fake News as Simply Lies

Fake news is the central signpost of what is popularly called the ‘post-truth’ era, but it quickly became a catch-all term that refers to too many disparate phenomena to be useful. When preparing for this series of articles, we at the Reply Collective discussed the influence of post-modern thinkers on contemporary politics, particularly regarding climate change denialism. But I do not consider contemporary fake news to have roots in these philosophies. That tradition is regarded in popular culture (and definitely in self-identified analytic philosophy communities) as destabilizing the possibility of truth, knowledge, and even factuality.

This conception is mistaken, as any attentive reading of Jacques Derrida, Michel Foucault, Gilles Deleuze, Jean-Francois Lyotard, or Jean Baudrillard will reveal that they were concerned – at least on the question of knowledge and truth – with demonstrating that there were many more ways to understand how we justify our knowledge and the nature of facticity than any simple propositional definition in a Tarskian tradition can include. There are more ways to understand knowledge and truth than seeing whether and how a given state of affairs grounds the truth and truth-value of a description. A recent article by Steve Fuller at the Institute of Art and Ideas considers many concepts of truth throughout the history of philosophy more complicated than the popular idea of simple correspondence. So when we ask whether Trumpism has pushed us into a post-truth era, we must ask which concept of truth has become obsolete. Understanding what fake news is and can be, is one productive probe of this question.

So what are the major conceptions of ‘fake news’ that exist in Western media today? I ask this question with the knowledge that, given the rapid pace of political developments in the Trump era, my answers will probably be obsolete, or at least incomplete, by publication. The proliferation of meanings that I now describe happened in popular Western discourse in a mere two months from Election Day to Inauguration Day. My account of these conceptual shifts in popular discourse shows how these shifts of meaning have acquired such speed.

Fake news, as a political phenomenon, exists as one facet of a broad global political culture where the destabilization of what gets to count as a fact and how or why a proposition may be considered factual has become fully mainstream. As Bruno Latour has said, the destabilization of facticity’s foundation is rooted in the politics and epistemology of climate change denialism, the root of wider denialism of any real value for scientific knowledge. The centrepiece of petroleum industry public relations and global government lobbying efforts, climate change denialism was designed to undercut the legitimacy of international efforts to shift global industry away from petroleum reliance. Climate change denial conveniently aligns with the nationalist goals of Trump’s administration, since a denialist agenda requires attacking American loyalty to international emissions reduction treaties and United Nations environmental efforts. Denialism undercuts the legitimacy of scientific evidence for climate change by countering the efficacy of its practical epistemic truth-making function. It is denial and opposition all the way down. Ontologically, the truth-making functions of actual states of affairs on climatological statements remain as fine as they always were. What’s disappeared is the popular belief in the validity of those truth-makers.

So the function of ‘fake news’ as an accusation is to sever the truth-making powers of the targeted information source for as many people who hear the accusation as possible. The accusation is an attempt to deny and destroy a channel’s credibility as a source of true information. To achieve this, the accusation itself requires its own credibility for listeners. The term ‘fake news’ first applied to the flood of stories and memes flowing from a variety of dubious websites, consisting of uncorroborated and outright fabricated reports. The articles and images originated on websites based largely in Russia and Macedonia, then disseminated on Facebook pages like Occupy Democrats, Eagle Rising, and Freedom Daily, which make money using clickthrough-generating headlines and links. Much of the extreme white nationalist content of these pages came, in addition to the content mills of eastern Europe, from radical think tanks and lobby groups like the National Policy Institute. These feeds are a very literal definition of fake news: content written in the form of actual journalism so that their statements appear credible, but communicating blatant lies and falsehoods.

The feeds and pages disseminating these nonsensical stories were successful because the infrastructure of Facebook as a medium incentivizes comforting falsehoods over inconvenient truths. Its News Feed algorithm is largely a similarity-sorting process, pointing a user to sources that resemble what has been engaged before. Pages and websites that depend on clickthrough advertising revenue will therefore cater to already-existing user opinions to boost such engagement. A challenging idea that unsettles a user’s presumptions about the world will receive fewer clickthroughs because people tend to prefer hearing what they already agree with. The continuing aggregation of similarity after similarity reinforces your perspective and makes changing your mind even harder than it usually is.

Trolling Truth Itself

Donald Trump is an epically oversignified cultural figure. But for my purposes here, I want to approach him as the most successful troll in contemporary culture. In his 11 January 2017 press conference, Trump angrily accused CNN and Buzzfeed of themselves being “fake news.” This proposition seems transparent, at first, as a clear act of trolling, a President’s subversive action against critical media outlets. Here, the insulting meaning of the term is retained, but its reference has shifted to cover the Trump-critical media organizations that first brought the term to ubiquity shortly after the 8 November 2016 election. The intention and meaning of the term have been turned against those who coined it.

In this context, the nature of the ‘post-truth’ era of politics appears simple. We are faced with two duelling conceptions of American politics and global social purpose. One is the Trump Administration, with its propositions about the danger of Islamist terror and the size of this year’s live Inauguration audience. The other is the usual collection of news outlets referred to as the mainstream media. Each gives a presentation of what is happening regarding a variety of topics; the two presentations are incompatible, though each may be accurate to a greater or lesser degree in any given instance. The simple issue is that the Trump Administration pushes easily falsified, transparent propaganda, such as the lie about an Islamist-led mass murder in Bowling Green, Kentucky. This simple issue becomes an intractable problem because significantly large spaces in the contemporary media economy constitute a hardening of popular viewpoints into bubbles of self-reinforcing extremism. Thanks to Facebook’s sorting algorithms, there will likely always be a large group of Trumpists who will consider all his administration’s blatant lies to be truth.

At first glance, this appears to be a problem not for philosophy but for public relations: we could solve the problem of the intractable audience for propaganda by finding or creating new paths to reach people sealed in comforting information bubbles. But there is a philosophical problem, and it is far more profound than even this practically difficult issue of outreach. The possibility conditions for the character of human society itself are the fundamental battlefield of the Trumpist era.

The accusation “You are fake news!” of Trump’s January press conference delivered a tactical subversion, rendering the original use of the term impossible. The moral aspects of this act of subversion appeared a few weeks later, in a 7 February interview Trump Administration communications official Sebastian Gorka did with Michael Medved. Gorka’s words first appear to be a straightforward instance of authoritarian delegitimizing of opposition, as he equates ‘fake news’ with opposition to President Trump. But Gorka goes beyond this simple gesture to contribute to a re-valuation of the values of subversion and opposition in our cultural discourse. He accuses Trump-critical news organizations of such a deep bias and hatred of President Trump and Trumpism that they themselves have failed to understand and perceive the world correctly. The mainstream media have become untrustworthy, says Gorka, not merely because many of their leaders and workers oppose President Trump, but because those people no longer understand the world as it is. That, as Breitbart’s messaging would tell us, is the reason to trust the mainstream media no longer: their genuine ignorance. And because theirs was a genuine mistake about the facts of the world, the accusation of ignorance and untrustworthiness is actually legitimate.

Real Failures of Knowledge

Donald Trump, as well as the political movements that backed his Presidential campaign and the anti-EU side of the Brexit referendum, knew something about the wider culture that many mainstream analysts and journalists did not: they knew that their victory was possible. This is not a matter of ideology, but a fact about the world. It is not a matter of interpretive understanding, like the symbolic meanings of a text, object, or gesture, but a matter of empirical knowledge. Nor is it a straightforward fact like the surface area of my apartment building’s front lawn or the number of Boeing aircraft owned by KLM. Discovering such a fact as the possibility and likelihood of an election or referendum victory, involving thousands of workers, billions of dollars of infrastructure and communications, and millions of people deliberating over their vote or refusal to vote, is a massively complicated process. But it is still an empirical process and can be achieved to varying levels of success and failure. In the two most radical reversals of the West’s (neo)liberal democratic political programs in decades, the press as an institution failed to understand what is and is not possible.

Not only that, these organizations know they have failed, and know that their failure harms their reputation as sources of trustworthy knowledge about the world. Their knowledge of their real inadequacy can be seen in their steps to repair their knowledge production processes. These efforts are not a submission to the propagandistic demands of the Trump Presidency, but an attempt to rebuild real research capacities after the internet era’s disastrous collapse of the traditional newspaper industry. Through most of the 20th century, the news media ecology of the United States consisted of a hierarchy of local, regional, and inter/national newspapers. Community papers reported on local matters, these reports were among the sources for content at regional papers, and those regional papers in turn provided source material for America’s internationally-known newsrooms in the country’s major urban centres. This information ecology was the primary route not only for content, but for general knowledge of cultural developments beyond those few urban centres.

With the 21st century, it became customary to read local and national news online for free, causing sales and advertising revenue at those smaller newspapers to collapse. The ensuing decades saw most entry-level journalism work become casual and precarious, cutting off entry to the profession for those without the inherited wealth to subsidize their first money-losing working years. Most poor and middle-class people were thus cut off from work in journalism, removing their perspectives and positionality from the field’s knowledge production. The dominant newspaper culture that centred all content production in and around a local newsroom persisted into the internet era, forcing journalists to base themselves in major cities. Investigation outside major cities therefore rarely took place beyond parachute journalism: visits by reporters with little to no cultural familiarity with the region. This is a real failure of empirical knowledge-gathering processes. Facing this failure, major metropolitan news organizations like the New York Times and Mic have begun building networks of regional bureaus throughout the now-neglected regions of America, where local independent journalists are hired as contract workers to bring their lived experiences to national audiences.

America’s Democratic Party suffered a similar failure of knowledge, having been certain that the Trump campaign could never have breached the midwestern states – Michigan, Wisconsin, Pennsylvania – that for decades have been strongholds of their support in Presidential elections. I leave aside the critical issue of voter suppression in these states to concentrate on a more epistemic aspect of Trump’s victory. This was the campaign’s unprecedented ability to craft messages with nuanced detail. Cambridge Analytica, the data analysis firm that worked for the Trump campaign, provided the power to understand and target voter outreach with almost individual specificity. This firm derives incredibly complex and nuanced data sets from the Facebook behaviour of hundreds of millions of people, and is the most advanced microtargeting analytics company operating today. They were able to craft messages intricately tailored to individual viewers and deliver them through Facebook advertising. So the Trump campaign has a legitimate claim to have won based on superior knowledge of the details of the electorate and how best to reach and influence them.

Battles Over the Right to Truth

With this essay, I have attempted an investigation that blends philosophy and journalism: an examination of the epistemological aspects of dangerous and important contemporary political and social phenomena and trends. After such a meditation, I feel confident in proposing the following conclusions.

1) Trumpist propaganda justifies itself with an exclusive and correct claim to reliability as a source of knowledge: the Trump campaign was the only major information source covering the American election that was always certain that victory was possible. That all other media institutions at some point failed to understand or accept that Trump’s victory was possible makes them less reliable than the Trump team and Trump personally.

2) The denial of a claim’s legitimacy as truth, and of an institution’s fidelity to informing people of truths, has become such a powerful weapon of political rhetoric that it has ended all cross-partisan agreement on what sources of information about the wider world are reliable.

3) Because of the second conclusion, journalism has become an unreliable set of knowledge production techniques. The most reliable source of knowledge about the election was the analysis produced by mass data mining of Facebook profiles, the ground of all Trump’s public outreach communications. Donald Trump became President of the United States with the most powerful quantitative sociology research program in human history.

4) This is Trumpism’s most powerful claim to the mantle of the true subversives of society, the virtuous rebels overthrowing a corrupt mainstream. Trumpism’s victory, which no one but Trumpists themselves thought possible, is the greatest achievement of any troll: Trumpism argued its opponents into submission, humiliated them for having lost, and then turned out to be right anyway.

The statistical analysis and mass data mining of Cambridge Analytica made Trump’s knowledge superior to that of the entire journalistic profession. So the best contribution that social epistemology as a field can make to understanding our moment is bringing all its cognitive and conceptual resources to an intense analysis of statistical knowledge production itself. We must understand its strengths and weaknesses – what statistical knowledge production emphasizes in the world and what escapes its ability to comprehend. Social epistemologists must ask themselves and each other: What does qualitative knowledge discover and allow us to do, that quantitative knowledge cannot? How can the qualitative form of knowledge uncover a truth of the same profundity and power to popularly shock an entire population as Trump’s election itself?

Author Information: Elena Trufanova, Institute of Philosophy, Russian Academy of Sciences,

Trufanova, Elena. “A Reply to ‘The Destiny of Atomism in the Modern Science and the Structural Realism’.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 62-65.

The PDF of the article gives specific page numbers. Shortlink:


Image credit: Cezary Borysiuk, via flickr

The idea of atoms as “basic” elements of matter is one of the classical “thematic structures,” as Holton (1988) put it—an ever-recurring idea, present in human thought since the ancient Greeks and Indians, disappearing for a while and then coming back again. I have always seen Democritus’ atomism as a genius insight ahead of its time. It shows that science does not always need to be empirical to be productive in explaining the world. Professor Mamchur, following Heisenberg, warns against interpreting Greek atomism as the origin of modern scientific atomism; yet the fact remains that even when modern scientists formulate a new notion of the atom, they still reach back to the ancient Greeks for the name itself.

Nowadays atoms, as well as their parts—the particles we now call elementary (and which may yet turn out to be not so elementary)—are often referred to as examples of “socially constructed” elements of modern science. Their very existence is questioned, mostly due to their “unobservable” nature. The main problem, however, is not the question of atoms in particular, but the question of the reality of scientific objects in general. The unobservables are just a very good example, because it is very easy to suggest that such things do not exist. This is in fact a typical naïve realist stance: I do not see or feel them, hence they are not real.

However, when I teach my students what the objective existence of matter means, I always give them the following example: we do not feel the radioactive particles “piercing” our bodies, but our bodies start to break apart nonetheless if the radiation level is high enough. That is, if something is not given or accessible to our perception, it does not mean we have no proof of its existence. We may try to convince ourselves that radiation is socially constructed, that what we call our scientific knowledge about radiation is totally wrong and has nothing to do with reality, but there is still some real phenomenon behind the word “radioactivity” and the yellow-and-black warning sign—and this reality can hit us really hard.

Atoms, Quarks and Social Construction

Professor Mamchur starts her paper (2017) by considering the idea of a linguistic origin of atomism, which suggests that the idea of atoms comes from the alphabetic structure of Indo-European languages. She is not much convinced by this idea, though it could have added fuel to the fire of the so-called “science wars”. Since the ancient Greeks, atoms have lost the status of indivisible and basic elements, of the last bricks of matter, but quarks, gluons and elementary particles have taken their place. Do we have them for real? Pickering (1999) says nay: we have just “constructed” them for our purposes. Pickering’s work became an easy target because common sense cries out to us—come on, for Heaven’s sake!—we know for sure that physics describes matter, that matter is composed of atoms, and that their nuclei are ultimately built of quarks. What is more real than physical matter? How, then, can quarks be socially constructed?

According to Pickering, we have constructed a theory of matter that presupposes the idea of quarks. We could have constructed the theory of matter differently, says Pickering, and our science could thus have developed along a non-quark path.

It is very easy to criticize this point of view without giving it a second thought: the very idea that the structural elements of matter can be socially constructed seems ridiculous. But that is not exactly what Pickering meant. What he meant is that we do not know the physical reality behind our theories. This is the main challenge for scientific realism in all its varieties. This is what the “science wars” were about: when we speak about social processes or mental states, most of us will agree to consider them socially constructed, but “solid” physical reality, many would say, should be spared the constructionist blow.

I am sympathetic to Hacking’s “experimental” realism, even if, as Professor Mamchur suggests, it can easily be criticized. However, I see other criteria that can be used when we talk about the reality of scientific objects and that suggest they are not just social constructions. Take quarks: if we say they are purely theoretical, why do we classify them using the terms “flavours”, “colours” and “generations”? There must be some characteristics that make us do this kind of sorting. The idea is that there are unexplained bits of information about quarks that we can study and explore. If quarks were purely theoretical constructions, if they were born in our minds, why don’t we hold complete information about them? Why is it possible for them to surprise us as we do further research? Thus, scientific objects are real if they are able to provide us with new, unexplained data about themselves.

As Professor Lektorsky (2015, 21) puts it:

Theoretical knowledge often uses so-called ideal objects: the material point, the perfectly rigid body, the incompressible gas … The scientists who propose these objects are completely aware that they cannot be real. For example, a body that has mass cannot really have the volume of a point. We should distinguish these objects from theoretical objects that refer to real referents: the atom, the electron, the quark. It is useless to suppose that we can discover new qualities of ideal objects: their qualities are determined by the very means by which these objects are constructed. But when we take real objects like the atom, we can discover their new characteristics, build new theories about them, specify these theories, change them, etc.

I would call this criterion “the criterion of limited knowledge”: if we have limited knowledge about a certain object, then it is probably real, because if it were our own construction we would have complete knowledge about it.

Realism and Mutual Understanding

What the supporters of the social constructionist approach usually neglect is the fact that we do not really invent scientific theories out of thin air; we are trying to explain certain natural phenomena. Our explanations may be faulty, but the phenomena are real anyway—they are “out there” in the world, maybe even beyond our reach, but they are still there.

Another very important point is made by Agazzi (2017, 18) when he speaks about the necessity of a clear distinction between

[T]he “things” of ordinary experience and the “objects” of the different sciences, though recognizing that precise links exist between them. Now, while it would be wrong to say that every science specifically deals with a particular domain of “things” (because any “thing” can become the “object” of several sciences) one can say that every science deals with whatever thing “from its own point of view” and it is owing to this particular point of view that it makes this thing one of its proper “objects”. Therefore, one could say that the objects of a science are the “clippings” obtained in things by considering them from the point of view of that science.

That is to say, scientific objects are not identical to the “things” that constitute our reality, but they reflect certain characteristics that exist in those things; they refer to reality.

Nor do I see the argument from pessimistic induction as a substantial threat to realism. The natural phenomena we encounter are still there, still real; a change of ontological framework is like a change of language: the table does not cease to exist if we start calling it “tavola” or “Tisch”. This may be a crude analogy, but I hope it makes its point.

What I would like to underline in concluding these fleeting remarks is that the question of scientific realism is not purely academic. I see it as a question of the possibility of mutual understanding. If we cannot agree that the physical world around us is real, how can we agree on anything else? How can we understand each other? We have to start from some basic foundations, and the physical world is a good place to start: it shows us that, whatever our cultural or social differences, we still live in the same world (Trufanova 2017).


Agazzi, Evandro. “The Truth of Theories and Scientific Realism.” In Varieties of Scientific Realism, edited by Evandro Agazzi, 49-68. Springer International Publishing, 2017.

Holton, Gerald. Thematic Origins of Scientific Thought: Kepler to Einstein. Cambridge, MA: Harvard University Press, 1988.

Lektorsky, Vladislav. “Konstruktivizm vs Realism” (“Constructivism vs Realism”). Epistemology & Philosophy of Science 43, no. 1 (2015): 20-26. (In Russian.)

Mamchur, Elena. “The Destiny of Atomism in the Modern Science and the Structural Realism.” Social Epistemology 31, no. 1 (2017): 93-104.

Pickering, Andrew. Constructing Quarks: A Sociological History of Particle Physics. Chicago: University of Chicago Press, 1999.

Trufanova, Elena. “Uskol’zayushchaya Real’nost’ i Sotsial’nye Konstruktsii” (“Elusive Reality and Social Constructions”). Philosophy of Science and Technology 22, no. 1 (2017). (In Russian.)

Author Information: Lyudmila A. Markova, Russian Academy of Sciences,



It is difficult to find a place for the concept of truth in social epistemology. Current philosophers disagree on the status of “truth” and “objectivity” as the basis of thinking about science. Meanwhile, the very name ‘social epistemology’ speaks to a serious, inevitable turn in our attitude toward scientific knowledge. Once epistemology becomes social, scientific knowledge is oriented not to nature, but to human beings. Epistemology, then, addresses not the laws of nature, but the process of their production by scientists. In classical epistemology we have, as the result of scientific research, laws regarding the material reality of the world, laws with which we create an artificial world. Experimental results obtained in classical science must be objective and true, or they become useless.

In social epistemology, scientific results represent the social communication among scientists (and not just among scientists), their ability to produce new knowledge, and their professionalism. In this case, knowledge helps us to create not a material artificial world, but a virtual world which is able to think. For such knowledge, notions like “truth” and “objectivity” do not play a serious role. Other concepts, such as “dialogue”, “communication”, “interaction”, “difference” and “diversity”, come to the fore. In these concepts we can see a turn in the development of epistemological thinking.

However, social epistemology does not destroy its predecessor. Let us recall the definition of social epistemology that Steve Fuller gave in 1988:

How should the pursuit of knowledge be organized, given that under normal circumstances knowledge is pursued by many human beings, each working on a more or less well-defined body of knowledge and each equipped with roughly the same imperfect cognitive capacities, albeit with varying degree of access to one another’s activities?

It is not difficult to see that Fuller does not consider the aim of social epistemology to be obtaining objective knowledge about the external world. He remains concerned with the diversity of social conditions in which scientists work. Changes in these conditions, and features of the individual scientist such as professional competence, should be taken into consideration. It is exactly these characteristics of thinking, now coming to the fore, that allow us to speak of a turn in the development of thinking. The problems that now exist in science and society require a new type of thinking for their solution. Still, we can find in empirical reality the foundation for both classical (modern) logic and non-classical logic (based on social epistemology).

Let us take an example. You bathe every day in the river Volga. You bathe today, and you come to bathe tomorrow in the same river Volga. You cannot deny that the river is still the Volga. Yet, at the same time, you see numerous changes from one day to the next—ripples appearing in, and new leaves appearing on, the water’s surface, the water turning slightly colder, and so on. It is possible to conclude that the river, after all, is not as it was yesterday. As Heraclitus famously observed: “You cannot enter the same river twice.”

Both conclusions are right. However, notions such as truth and objectivity did not lose their logical and historical significance; rather, they became marginal. Proponents of social epistemology should establish communication with classical logic and not try to destroy it.

Author Information: Søren Harnow Klausen, University of Southern Denmark,

Klausen, Søren Harnow. “No Cause for Epistemic Alarm: Radically Collaborative Science, Knowledge and Authorship.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 38-61.

The PDF of the article gives specific page numbers. Shortlink:

Image credit: stop that pigeon!, via flickr


New forms of radical collaboration—notably “big science,” multi-authorship and academic ghostwriting—have brought renewed attention to the social nature of science. They have been thought to raise new and pressing epistemological problems, especially because they appear to have put in jeopardy the transparency, accountability and responsibility associated with traditional scientific practice. Against this worried stance, I argue that the new practices can be adequately accounted for within a standard epistemological framework. While radical collaboration may carry serious practical problems and risks, and requires critical attention to the way science is organized and communicated, it raises no fundamentally new epistemological problems. It may even serve as an example of a less restrained and more fruitful, albeit calculatedly risky, mode of conduct that could enhance scientific creativity.

Science is a collaborative enterprise. It is arguably becoming ever more collaborative. A number of contemporary trends seem to support such a diagnosis. There is, first, the rise of big science, that is, of large-scale, infrastructure-dependent research, epitomized by high-energy physics or the Human Genome Project. Even in fields still dominated by smaller-scale science, research has become increasingly collective. The research group has long been recognized as the fundamental unit of scientific knowledge production,[1] and global and regional research networks are gaining importance.[2]

A further significant manifestation of the trend towards increased collaboration is multi-authorship. The average number of authors per paper is growing steadily, with some fields now turning out papers with several hundred or even thousands of names on the author list. Many of the persons named may not have done any authoring in the traditional sense, but appear on the byline due to their contributions as fundraisers, managers, project partners or engineers, and some may have been granted so-called honorary or gratuitous authorship. And although bylines are getting crowded, not all authors may actually be listed as such, as academic ghostwriting is also becoming widespread.

The recent trends towards free or forced collectivization of science have prompted a new wave of critical inquiry. It has been argued that radically collaborative research (henceforth RC) raises a new kind of epistemic problem, because it has put in jeopardy the transparency, accountability and responsibility associated with traditional scientific practice. When there is no centre of command, when epistemic labour is distributed widely over a seemingly uncoordinated mass of people, it not only gives rise to moral and political concerns or engineering challenges, but calls into doubt whether the activities in question can count as scientific knowledge production at all.

Worries like these have been raised by a group of philosophers of science and cognition whom I shall refer to as the Georgetown Alarmists: Bryce Huebner, Rebecca Kukla and Eric Winsberg.[3] In a series of papers written jointly or individually, the Georgetown Alarmists argue that what we are facing are not only problems of scale, but a whole new quality of problems.[4]

Against this I will argue that although multi-authorship and collectivization are obviously trends that call for critical attention, they do not give rise to any significantly new problems. In particular, they should not be a cause for epistemic alarm. There is plenty of reason for ethical and political concerns about how big science is conducted—but that is a different issue (and, again, it can be doubted whether there is anything inherently problematic about big science, even when measured by these non-epistemic standards). More traditional forms of science and scientific authorship exhibit the same basic features. So if there is a problem, it is neither new nor special to large-scale collaborative research. Moreover, I will argue that traditional mainstream epistemology has all the resources needed to handle the new cases of collaborative science.

The Case for Epistemic Alarm

The reasoning of the Georgetown Alarmists (hereafter abbreviated GA) can be summarized as follows (it should not be understood as a single argument, but rather a set of more or less interrelated theses).[5]

i) Genuine authorship requires accountability (being able to justify and vouch for the truth of the claims made in one’s publication)

ii) Genuine group authorship is possible, but requires a unified and coherent group. It requires that each author is accountable for all the claims made in the publication, or that each author knows which collaborator is responsible for which claims, or that at least one member of the group retains centralized control over the research process[6]

iii) Genuine authorship is more than a purely institutional status; it must represent a “specific form” of epistemic labor[7]

iv) Authorship in RC does not meet the criteria for genuine authorship (since neither i) nor ii) is fulfilled). Radical collaboration leads to authorless publications[8]

v) Epistemic responsibility requires accountability[9]

vi) Knowledge requires epistemic responsibility[10] or accountability[11]

vii) Radical collaborations yield a fundamental epistemic problem rather than a mere engineering problem[12]; they lead to a lack or loss of scientific knowledge

To put it very briefly: Radical collaboration leads to authorless publications, which in turn lead to a loss of knowledge. One way of construing the position of GA on authorship is to say that they accept the poststructuralist “death of the author” view, famously expounded by Barthes[13] and Foucault,[14] as an account of radically collaborative authorship, but reject it as an account of traditional authorship—and that they take traditional authorship to be the normatively superior notion of authorship, the notion of genuine authorship. Contrary to the poststructuralists, they do not welcome the death of the author.

It must be said that although the GA appear to be conservative or “traditionalists” in some of their attitudes toward science, they do have an accurate and realistic understanding of contemporary scientific practice, and they can hardly be accused of being luddites. They take the recent trends to be far from surprising.[15] And not only do they recognize that a return to smaller-scale formats is practically impossible; they agree that it would hardly be desirable.[16] Instead they seem to call for a new framework for assessing and regulating collaborative research processes. Still, their sketchy suggestions for what has to be done do point towards a relatively tight system of control and more rigorous demands for transparency and accountability, which I fear could hamper scientific progress.

I find it difficult to render GA’s reasoning in a balanced way, since it strikes me as relying on a series of questionable assumptions. But I think at least the following can be said in favour of their conclusions: Collaborative science is an extremely messy affair. It exhibits little transparency or personal accountability. It can surely cause some initial worry to see how bits of evidence and interpretation are tossed around, and how little individual researchers understand, at least in some cases, of what their collaborators are doing or even of the overall process of which they are part.

Moreover, it is a widespread assumption in epistemology that knowledge requires a subject, and that a subject needs to be both sufficiently unified—i.e. have an integrated and coherent mental architecture—and have some kind of reflective access to its own mental states and processes. It is, furthermore, common to expect the process of scientific knowledge-production to exhibit an extraordinarily high degree of transparency, reflectivity, unity and systematic coherence. In order to rebut GA’s claims, I have to show these assumptions, which cannot be denied a certain naturalness or initial appeal, to be either wrong or irrelevant.

It may be objected that my attribution of a distinctive, and joint, “alarmist” position to Kukla, Huebner and Winsberg is an untenable construction. Thus it could be noted that only Huebner has been directly concerned with group knowledge.[17] Now my main interest is of course not exegesis or contemporary intellectual history. It suffices that the views discussed are typical, influential and have been voiced or suggested by at least part of GA. I do, however, find ample evidence in the writings of GA that they do hold a distinctive joint position. They do make clearly located epistemic agency a central condition for scientific knowledge.[18] While the term “knowledge” may not surface in all of their writings, they see multi-authorship as the source of an epistemic problem—and since they all adopt a fairly narrow and orthodox conception of the epistemic, it seems fair to assume that this must mean a problem concerning knowledge. Moreover, while they do distinguish authorship from knowing (as one should; an author can of course be wrong!), they come very close to claiming, and clearly do suggest, that authorship is a necessary condition for knowledge in the cases of radical collaboration they consider.

For example, Kukla writes, following up immediately on her claim that the traditional author in collaborative research is dead, that “[i]n radically distributed, collaborative research, there is no one who has a cognitive state instantiating a full justification for the claims that make it to print.”[19] This sounds very much like a claim that inasmuch as there is no author in the traditional sense, the standard conditions for knowledge are not fulfilled. Moreover, the connection between accountability and scientific knowledge production is posited by Kukla, Huebner and Winsberg alike; and they analyse the alleged lack of epistemic accountability in radically collaborative research in terms of a failure to meet the conditions for group authorship.[20] At any rate, should it turn out, contrary to these strong indications, that GA do not posit any necessary connection between authorship and knowledge, then they owe us an explanation of the sort of pressing epistemic problem they do, quite persistently, claim has been raised by multi-authorship.

Dismantling the Case for Epistemic Alarm

The structure of GA’s argument makes several different lines of response possible. One might (A) accept that publications in radically collaborative science are authorless, but reject the connection between authoring and knowledge (v-vii). Or one might (B) accept this connection, but insist that the requirements for authorship can be met. This in turn can be done either by trying (B1) to show that collaborative science actually meets the requirements laid down by GA (i-iii), or by arguing (B2) that these requirements are too strong. In fact I think that both (A) and (B2) can be developed quite convincingly. (B1) appears less promising, since GA are obviously right about the empirical facts, i.e. the messiness of collaborative research (though even here there is room for debate, as we shall see).

There is also the possibility of (C) accepting the whole reasoning up until vi)—agreeing that new forms of collaboration lead to a loss of knowledge, but denying that this makes for an epistemic crisis. One might hold that knowledge is not the most relevant epistemic desideratum, arguing that the production of reliable information may be valuable enough, or that such information feeds into a larger societal process that is likely to lead to a gain in significant knowledge in the long run. I am less attracted to this line of reply, which would leave intact GA’s spectacular and apparently alarming claim that large parts of contemporary science are unable to directly produce knowledge. But it provides a relevant fall-back position, because some might want to follow GA in upholding some relatively strong internalist requirements on knowledge.[21]

Now to the arguments. I will proceed in two steps, first considering the requirements for authorship and then the requirements for knowledge. I shall argue that the production of publications in radically collaborative research may still qualify as authorship, if this is understood in a less demanding and more realistic way. I shall then further argue that at least the kind of authorship favoured by GA is not a necessary condition for knowledge.

Forms and Conditions of Authorship

GA reject the possibility that the publication practices associated with radical collaboration can qualify as group authorship. This appears to fit well with the received view of such authorship. It is common to require of a group that there must be some relation of mutual recognition among its members. Group membership has also been taken to entail reflexivity—i.e. each and every member of a group must view herself as a member of the group in question.[22] Last, but not least, it has been assumed that for a group to function as an epistemic agent, it must exhibit joint attention, i.e. all the members must attend to—and take a stand on—a common target proposition or set of propositions,[23] or a common body of evidence.[24] In line with these views, Livingston has proposed that genuine joint authorship requires a significant degree of “mutual knowledge” and “reciprocal monitoring and assistance.”[25]

Cases of RC do not meet these criteria. But it should be noted that in spite of their almost axiomatic status among philosophers of collective agency, the strict conditions for group membership just outlined appear rather idiosyncratic. They seem to limit the domain of collective epistemic agency quite substantially. Many groups have a much looser structure; and it is debatable whether even the paradigmatic cases of small and tightly knit groups really meet the proposed criteria. Moreover, even the otherwise strict criteria imposed by theories of collective agency do not necessarily add up to an accountability requirement of the sort espoused by GA. With the possible exception of Mathiesen’s[26] account of groups with explicitly epistemic goals,[27] such theories do not demand that the group members should be able to justify the beliefs to which they commit themselves collectively.

Outside the narrow field of the philosophy of social agency, groups have been defined less demandingly. In organization theory, groups are individuated with reference to their tasks.[28] I have myself suggested that we delimit an epistemic collective by taking it to consist of all and only those members who contribute significantly to an epistemic task—a task which does not need to be recognized as such by all, or even any, of them.[29] I am aware that such an inclusive notion of an epistemic collective is controversial. It makes it difficult to draw a clear boundary between the members of the group and those with whom the group is merely interacting; a problem that becomes especially pressing in the absence of a clear notion of what should count as a significant epistemic contribution. In any case, GA would no doubt insist that an epistemic collective in this more inclusive sense is unable to function as a genuine author.

Still, the appropriateness of the strict requirements on group authorship is put in serious doubt by the fact that even individual subjects are hardly able to meet them. It seems unlikely that even so-called individual authors retain a high degree of “centralized control” over the research processes documented in their publications.

For one thing, researchers have to depend extensively on the work of other researchers, often without knowing very much about its epistemic merits. A famous example has been given by Hardwig,[30] as part of his case for the claim that scientists generally have to rely on what he describes as blind trust. Hardwig pointed out that even though the Bieberbach conjecture is considered to have been proven by de Branges in 1985, no single mathematician, including de Branges himself, has ever had sufficient justification for each step in the proof. De Branges relied on computer verification by Gautschi, and especially on the work of Askey, who had the specialized knowledge of hypergeometric functions which he himself lacked. Askey, on the other hand, did not know enough complex analysis to complete or verify the proof himself. Though de Branges’ original 1985 paper seems to be a typical case of classical authorship, and even the result of an individual research project, the knowledge formation process behind it turns out to have been highly collective, complex, and far from completely transparent.

It may be said that the broad accountability requirement laid down by GA[31] is still met by this example, since de Branges at least knew which contributor was responsible for which claims. He did retain some kind of centralized control, and probably also had a fairly clear idea of the sort of work Askey was good at, since he had some knowledge of hypergeometric functions himself. But the example shows that even paradigm examples of centralized control are often indirect and partly blind, based on a more or less superficial identification of collaborators’ competences and the relevance of their contributions. Other examples of scientific knowledge production can be expected to lie further out on the continuum between completely controlled and completely uncontrolled processes. The celebrated modern evolutionary synthesis is a complex network of results and hypotheses from, inter alia, selectionism, genetics, statistics, paleontology, botany, cytology, ecology and geology. While it is again likely that the key proponents of this view have been able to gauge the overall significance and reliability of the different contributions, it is also clear that they have not been able to epistemically penetrate the whole system of interdependent assumptions. To take one of GA’s favourite themes, they have not, for example, been able to assess the underlying inductive risk-taking decisions.[32]

It is an open question to what extent centralized control and monitoring is aimed at as a regulative ideal, and to what extent it is actually achieved. Authors of literary fiction frequently employ deliberate strategies for reducing the impact of authorial metacognition, allowing for a freer play of ideas.[33] Scientific authors seldom do anything like that. Standard formats for scientific papers function as a means of forcing the author to say all and only what ought to be said, to lay all her cards on the table and address the key issues directly. There may also be a relevant difference between papers in the natural sciences and the humanities, inasmuch as the former usually report the results of independent research processes, whereas in the case of at least some of the latter, the writing itself is an integral part of the research process, making it more open-ended and less subject to complete authorial control.

Yet even in the case of standard natural science papers, complete transparency and metacognitive control is at most a regulative ideal. Idioms are borrowed uncritically; references and quotations are made with incomplete knowledge and understanding of the work cited; formulas and rules of inference are applied blindly, and so on. Textbook accounts of the history of science give an impression of a smooth process of almost perfect dissemination; the original discovery was the hard bit, but afterwards it was easy for other scientists to pick up. Yet closer scrutiny reveals that in many cases, scientists assented to and built on theories they did not yet fully understand. A famous example is the reception of Newton’s Principia, which was very quickly recognized as immensely important, but appeared inaccessible and almost incomprehensible to his contemporaries, even the most capable of whom were only able to achieve a partial understanding. Locke is said to have got the gist of Principia from Huygens, who reassured him that the mathematics and mechanics were sound; and many physicists of his time likewise had to rely on authority and their overall estimation of the credibility of the work.[34]

I am not saying that we should give up the very idea of authorial control and authority, only that we should understand these notions more modestly and realistically. By doing so, we may avoid buying into the death-of-the-author thesis even in the case of RC. Like other texts, multi-authored papers do have real authors, who are, to a certain degree, responsible for their content. In most cases, pains have been taken to establish and follow procedures that regulate the writing process. Rules are laid down as to who should and who should not be included among the authors. Approval is required from the leaders of different subgroups. Not least, measures are taken to impose coherence on the process and to ensure that a unified result is arrived at and communicated unambiguously.[35] The individual contributors can all be assumed to know at least the general kind and rough outline of the project in which they are involved. This seems similar to cases of non-scientific text-production in which authors willingly and knowingly engage in processes of interaction that are likely to lead to results that they may not have specifically intended as such.[36]

Hence there is plenty of unifying metacognition present in cases of RC, even if it can be questioned how far it is relevant. In the case of so-called individual authorship, it apparently suffices that a text can be seen as depending on, and informed by, the intentions of its authors. There is no need to require the presence of overarching, strategic intentions—though in most actual cases, some such intentions have clearly been at work. Even among conservative defenders of authorial authority, it is widely acknowledged that meaning-constitutive intentions can be unconscious, in the sense that the author need not herself be aware of having them.[37] Traditional intentionalists about literary meaning may have emphasized first-order intentions too strongly and given too little emphasis to the role of authorial metacognition.[38] Nevertheless, their view seems plausible enough to serve as a clear indication that authorship in general does not require any particularly comprehensive or effective strategic intentions. What matters is that what the author meant was successfully communicated, not whether it was her intention to communicate it thus.[39]

It may still be argued that collective authorship requires more in terms of metacognition, because in this case the unity of the author subject and the writing process is much more precarious. Individual authors are individuated as particular subjects independently of the writing process, whereas group authors have no such “natural” unity.

There are at least three things to say in reply. First, it is an open question how much “natural unity” there is to an individual subject qua author. Though she may be a distinct human being, the mental subsystems responsible for her production of meaningful text need not be closely integrated; and she may function mostly as a transmitter for external influences. Secondly, for all the undeniable messiness of RC, it is still not impossible that at least the sensible, more or less intuitive metacognition requirements are actually met. The rules and procedures followed may not satisfy the strong transparency required by GA; but they may suffice for ensuring something like the “meshing of plans” and “monitoring” required by Livingston’s analysis of collective authorship.[40] Since joint commitments are generally allowed to be merely implicit, the regulatory policies adopted may also suffice for turning even large and heterogeneous groups of collaborators into something like a plural subject (even if it must then be admitted that such a subject can, in other respects, be disintegrated, and its doings messy and opaque).

Thirdly, it may be admitted that authorship in radically collaborative science falls short of a certain traditional notion of authorship, but denied that this notion is the relevant one. When it comes to scientific publications, there is a long tradition, much older than the trend towards large-scale collaboration, of using the notion of authorship for other purposes than to indicate the intentional creation of bits of meaningful text. Authorship is used to lay claim to scientific results and to apportion credit for ideas and discoveries. This does not mean that scientific authorship is a “purely institutional status.” It functions as a means for declaring who has been predominantly responsible for generating the knowledge presented in a publication. In fact, it meets GA’s requirement (iii) that authorship should represent a “specific form of epistemic labor”; it is just that it is not merely, or primarily, writing labor. It is an open question to what extent new forms of radical collaboration still follow this practice in an epistemically innocuous way. But at least there need not be any problem merely because it departs from the traditional “literary” notion of authorship. It should be added that the “credit-apportioning” notion is not special to scientific publication practice, but is also widely applied even to literature of the more artistic kind. It is always a partly pragmatic decision whether a collaborator and/or strong source of inspiration (be it a muse, an editor or an assistant) should be listed as a co-author or merely mentioned in the acknowledgments.[41]

Loss of Knowledge?

Now to the more fundamental question of the requirements for knowledge. Again, several lines of reply are available. First, consider the assumption that knowledge requires a subject, which is implicit in vii) and connects the worries about authorship with the concern for knowledge. How exactly is this to be understood? I take it to be quite intuitive (though not obviously correct) that knowledge must be realized by a conscious being, or at least a being capable of forming mental states like beliefs.[42] But this does not by itself require any specific degree of mental integration or the presence of metacognitive states.

When thinking about the requirements for group knowledge, we should pay close attention to the kind of psychological requirements that are usually made for individual knowledge, the paradigm case of traditional epistemology. Philosophers do not generally require any general reflective awareness on the part of the subject (and in this they are clearly in line with ordinary ways of thinking about and ascribing knowledge). Nor is it required that a knowing subject have any knowledge of epistemological principles or of the justificatory power of her evidence.[43] It is also regarded as unproblematic that the ingredients of knowledge are distributed among different mental states of the subject, which do not generally embody representations of each other. We should not use a double standard and impose stricter conditions on group knowledge.[44] Hence it seems that a weak group subject could qualify as a bearer of knowledge. If a group of individuals are sufficiently well connected, if they contribute to a common epistemic task, and if they possess the necessary cognitive capacities, so that all the necessary epistemic factors are present among them, then they can be said to know. This is in line with ordinary usage, as we often say things like “biologists knew that traits were passed down from generation to generation” or “the CIA knew that the terrorist group was planning an attack.” We ascribe knowledge to groups that are loosely delineated and within which the relevant epistemic factors are likely to be distributed.[45]

There is nothing inherently externalist about the idea of a weak group subject functioning as a bearer of knowledge. We can attribute knowledge to such a subject in virtue of the reliability of the belief-forming processes it employs, but also by observing that it possesses sufficient evidence and/or acts according to certain principles, e.g. rules of inference. The only kind of internalism that is ruled out is one that requires a high degree of metacognitive access to the evidence and rule-following in question, and central control of the sub-processes. But this is a version of the theory that has little to be said for it and is rejected by leading contemporary internalists. There may be good reason to prefer higher-order or centralized control in specific cases, but only inasmuch as it enhances the reliability of the first-order processes.

A sensible internalism may even be compatible with the possibility of knowledge that is not realized by a subject at all. This may sound outlandish, but once it is accepted that the subject as such contributes little to the epistemic status of its mental states, it becomes hard to say why it should be considered necessary for knowledge. Internalist criteria could be fulfilled simply by the presence of sufficient evidence. Of course this evidence would probably have to consist in, or be necessarily related to, states of conscious awareness, which arguably presuppose some minimal form of self-consciousness or subjectivity. But this is different from the notion of a cognitive centre of control that monitors and unifies the individual mental states.

GA defend the need for positing a subject in the stronger sense by arguing that epistemic responsibility is a necessary condition (vi). But this again buys into an implausibly extreme form of internalism, viz. a strongly deontological theory, according to which epistemic justification requires the fulfilment of certain epistemic duties. The implausibility of such a theory is highlighted by the fact that individuals are able to acquire knowledge in very passive ways, as when I come to know that the sun has set by seeing the light fade. This does not seem to depend on any kind of norm-fulfilment or responsibility on my part; I have come to know, regardless of whether I am prepared to defend any claims or act in any particular way. This does not render epistemic responsibility unimportant. Even though I reject group responsibility as a necessary condition for group knowledge, it is likely that well-functioning epistemic collectives do exhibit a significant degree of distributed responsibility.[46] That is, their members will be alert to risks and errors in their sub-domain, and committed to controlling and improving the sub-procedures they employ. As with authorship and subjectivity, there is also room for alternative interpretations of responsibility.

We have thus explored two ways of countering the claims of GA. One can accept the general subject requirement for knowledge, but insist that a subject does not need to meet the strong criteria for group authorship. The authors, or a significant subset of the authors, of papers in high energy physics might be said to know the results presented, even if they fail to constitute a group author. Or one can agree that there is hardly any subject in a significant sense to whom the knowledge can be attributed, but insist that there may be sufficient knowledge around anyhow.

In spite of this, I reckon that many will want to uphold the subject requirement in a relatively strong form. Fortunately, my case against GA does not hang on any controversial thesis on authorship or group knowledge. There is a much less controversial line of reply. It may be admitted that the author, whoever that might be, cannot be the subject of the putative knowledge. Instead it can be argued that knowledge is produced by the collective[47]—and that it is either possessed by some or one of the authors or will be produced in competent readers of the paper. It is natural to describe the cases of radical collaboration as typical social processes of knowledge creation, in which testimony plays a crucial part.

GA also reject this suggestion. Kukla argues that individual collaborators cannot be attributed testimony-based knowledge. For this would have to be based on an assessment of the reliability, in context, of the source of the testimony.[48] And it is precisely such an assessment that, according to GA, cannot be performed in cases of radical collaboration.

Again, the reasoning is based on controversial assumptions. First, GA only consider the—admittedly unrealistic, but also less relevant—possibility of all the authors having testimony-based confidence in all parts of the paper.[49] In contrast, I am suggesting that a network of local testimonial relationships may be sufficient to bind together the whole process and ensure that someone ends up forming the relevant item of knowledge. Secondly, it appears that GA are committed to a strongly reductionist view of testimony. They require that the recipient should have positive reasons for taking the giver of testimony to be a reliable source. Such a view has been rejected by a large number of philosophers, who have instead taken testimony to be a fundamental source of justification and so advocated a non-reductionist or “direct” view.[50] But even though non-reductionism provides an easy, and not altogether implausible, way out, I think that GA are right in requiring some sort of vindication of the testimonial practices in question. This seems especially pertinent in the case of scientific collaboration. The thought of scientists just passing on and taking up bits of putative evidence does seem discomforting. So the question is rather what, and how much, it takes for an individual collaborator to acquire knowledge through testimony, and how far it is exemplified in cases of radical collaboration.

As mentioned above, Hardwig argued that scientists have to rely on blind trust. But this may be a somewhat exaggerated description of the actual practice.[51] Trustworthiness is not assigned randomly. At least in scientific collaboration, it is indeed based on some kind of assessment, even if the assessment is often done quickly and almost instinctively. Assignments of trustworthiness are made on the basis of institutional status, known track record, indications of field of expertise, meta-knowledge about the state and potential of the research domain and approach in question, etc. They are semi-blind: Blind as to the internal epistemic merits of the data or piece of theorizing in question, inasmuch as the recipient would not generally be able to generate the knowledge herself (otherwise the testimony would be more or less redundant)—but not completely blind, as they are sensitive to external features of the contributions, the subject matter, general methodology employed and the qualifications of the contributor.

Moreover, instead of requiring of individual collaborators that they themselves assess the reliability of the source, it might be sufficient that the sources are sufficiently reliable. This could be the case in a scientific community whose members can generally be expected to make contributions that are competent and relevant (what Hardwig calls a “climate of trust”). GA are sceptical of such a view, because it substitutes mere reliability for accountability.[52] But even if they were right that such accountability is necessary for the collaborators to qualify as a group author or subject of collective knowledge, it is hard to see why it should be necessary for individual collaborators to acquire knowledge by testimony. Besides, a wide range of intermediate positions are available, which retain smaller or larger internalist elements. Hardwig, for one, did not opt for mere reliability. By taking trust to be a necessary requirement for testimonial knowledge, he demanded that an individual scientist must hold certain warranted beliefs about the testifier.

GA may be right that there is a special problem with interdisciplinarity.[53] It might be feared that even though a source is reliable in its own domain, its reliability when combined with, or applied to, another domain is not sufficient, or at least cannot be taken for granted. I think the best answer to this worry is simply to admit that interdisciplinary science is generally more challenging and risky, but that it is justified by its potential gains.[54] It should be added, however, that much the same risk pertains to so-called mono-disciplinary science as well, since it too regularly involves the transfer and application of theories or findings from other domains, be it other parts of the same science or neighbouring sciences (e.g. optics in astronomy, computer or information science in cell biology, or sociology in the study of religion). It is one thing to know a theory or a set of data and another to know it to be applicable or significant to some other field. Moreover, despite the lack of transparency and centralized expertise, we should not underestimate the capacity of individual collaborators to understand and even assess contributions from other fields. Goldman argues that even laymen can be in a position to evaluate the trustworthiness of experts.[55]

Collins and Evans[56] point out that there is a kind of expertise that falls short of “full-blown practical immersion” in a field, but still involves mastery of its language—what they call “interactional expertise” and regard as especially important for forging collaborative relationships. Theoretical physicists do usually have some, and sometimes considerable, understanding of the contributions of experimental physicists, and vice versa. Even research managers, though they may be managers first and researchers second, still know considerably more than the layman about the different fields involved in radical collaboration, their compatibility and potential contributions. And fields like high-energy physics or genetics are, in spite of the huge scale of the research activities and the sophisticated technology involved, probably not the most extreme and challenging forms of interdisciplinarity, as they draw on a common pool of compatible and already partly integrated theories and results from the natural sciences. In sum, it seems likely that less is needed for producing knowledge by radical collaboration than GA require, but also that more than they assume is actually at hand.

Risks, Costs and Benefits

Let us now finally consider the costs and benefits of attempts to restore transparency and accountability in RC. I think there is good reason to believe that the introduction of stricter control procedures would be counter-productive, though it is, admittedly, an open empirical question to what extent this is actually the case.[57]

First, we can ask if there is evidence that lack of transparency and responsibility has led to significant epistemic loss. This does not seem to be the case. There are some spectacular cases of scientific fraud that might have been prevented by stricter control regimes. But even if still more cases have gone undetected, there is no need to assume that they have led to any general loss of quality or reliability of science.[58]

There are, of course, more subtle forms of negative influence that should also be taken into account. Even relatively few cases of outright fraud may suffice to give the public an unfavourable impression of science, which could in turn lead to waning support. Lack of accountability and control may create false images of state-of-the-art research, making some fields or paradigms appear more important or robust than they actually are; this could in turn distort research priorities and even skew subsequent research, if approaches or theoretical paradigms that are actually less well founded or relevant come to lead the way for others. There is also a risk that the influence of commercial interests on research might go undetected.

It is, however, debatable how much of this should actually be considered an epistemic loss. There is a fairly well-documented, suspiciously strong correlation between private funding of research and industry-supportive conclusions.[59] Yet it is doubtful how far this indicates that there is something fundamentally wrong when seen from a narrow epistemic standpoint. It is most likely that commercial interests cause researchers to ask certain questions and refrain from asking others that might have led to less positive results. I am myself in favour of a highly inclusive notion of the epistemic, and so quite prepared to admit that a negative influence on research priorities, problem selection, relevance or uptake of research results, or even the future of science in general, could be considered an epistemic problem. But according to the more exclusive notion favoured by GA, these must be seen as problems of another, more external, kind. A researcher could be perfectly accountable (and act epistemically responsibly) while also acting badly in a wider sense, by e.g. deliberately avoiding carrying out experiments that might detect negative side-effects of drugs or food products.

In any case, I think the problems just outlined are best addressed by counter-measures on a very general level, rather than by meddling with the internal features of collaborative research processes. For example, decisions about research priorities may be subjected to democratic control (Kitcher 2001; 2011).[60] Privately funded research may be checked and counter-balanced by publicly funded research, some of which could be directed specifically at issues of societal concern, so as to serve the wider interests of citizens. Moreover, there seem to be significant risks of the same sort even if one keeps to traditional forms of knowledge production. Quite generally, it seems plausible to assume that the drive towards collaborative research makes it more likely, all things considered, that interests, biases and fraud will be detected. Surely it is very far from certain that they will actually be detected in any specific case. Surely the messy, distributed character of large-scale research may even provide researchers with special opportunities to hide less laudable aspects of their conduct. But this should be compared with traditional individual or small-scale research, where there is hardly any external control prior to publication.

Now to the risks of imposing stricter control on collaborative research processes. It goes without saying that efficient control procedures are costly in terms of time, manpower and other resources. There is a special risk that collaborative science could become exaggeratedly slow and tedious, because the control, which has otherwise been “outsourced,” i.e. left to subsequent debate and testing by other scientists, now has to be done prior to publication. Traditionally, a single scientist has been able to come up with a bold conjecture, or publish apparent findings, with relatively little critical resistance to be surmounted in the first instance, thus quickly bringing it up for consideration and making it into a publicly available source of inspiration. I do not think that a completely free and extremely diverse market of scientific ideas is epistemically superior. Gatekeeping has its merits, as false or insufficiently justified beliefs can be difficult to weed out once they have become socially transmitted.[61] Noise is also a serious problem, and a reason for not allowing too much diversity, especially in an age that is already marked by an explosion in publication output and lack of perspicuity. On the other hand, there is hardly any doubt that a relatively diversified science, and a relatively quick process of scientific communication, are important goods that might justify loosening, or at least not strengthening, some of the prior control. The scientific community will not benefit from a situation in which significant findings are not published until a tedious negotiation process has been completed, nor from an extensive mainstreaming of research output and approaches. Peer-review has developed into a substantially delaying factor, and it should come as no surprise that quicker—and dirtier—publication formats are emerging (as they have in fact existed in some fields for a long time, cf. the “letters”-genre), e.g. 
journals like PLOS ONE, which eliminate subjective assessments of significance or scope from the review process, open or “dynamic” peer review, etc.

In fact, it seems that actual large-scale collaboration is already marked by a tendency to introduce internal control mechanisms that slow down publication and make the output less diverse. The authorship protocols in high energy physics serve not just to ensure the quality of research, but also to streamline and unify publications stemming from a particular research project.[62] Hence it seems again that actual practice may come closer to meeting the requirements of GA; but it is questionable whether this is really desirable, at least when one is concerned with epistemic efficacy. It is certainly possible that internal streamlining can be epistemically beneficial, inasmuch as it brings out the main significance of findings and confers upon them an authority that ensures ready uptake by the scientific community. There is a real risk that allowing for premature publication or ambiguous statements can hinder recognition of really important findings. Still, there is surely a limit to the amount of streamlining and prior control that can be beneficial, all things considered. In any case, it is interesting to notice that the requirements of GA, though allegedly motivated by a concern for transparency, could reduce general transparency: in order to achieve mutual consent and understanding among scientific collaborators, moving towards the ideal of group authorship (in the strong sense), it can be necessary to restrict and delay communication with the outside world, including scientific peers.

Of course it is debatable whether considerations of speed of scientific progress, fecundity or optimal use of resources are epistemically relevant. On a consequentialist understanding of epistemic normativity they obviously are.[63] On a narrower deontological understanding, such as appears to be favoured by GA, they likely are not. But everyone must be somehow concerned with the balance between epistemic gains and other aspects of scientific practices. Feasibility conditions for science policies matter, regardless of whether they are seen as internal or external to scientific knowledge production.

The above discussion shows that the relationship between openness, transparency and responsibility is highly complex. Some of the measures that could enhance responsibility are likely to decrease openness (because results and their significance have to be negotiated internally among the collaborating scientists before being revealed to the larger scientific community). Full openness may not further transparency with regard to the main thrust and significance of joint research; it can also obscure it, causing it to drown in a multitude of statements, interpretations and less relevant details.

Openness (and accountability) is thus not always conducive to our epistemic goals. Communication on the internet is an instructive parallel case. Frost-Arnold has argued persuasively that internet anonymity can enhance both dissemination of true beliefs and error-detection, as it serves to remove social inhibition and so to ensure that relevant knowledge is disseminated quickly.[64] Research on computer-mediated group discussion likewise indicates that anonymity in discussion increases both the quantity and novelty of ideas shared.[65] The potential value of anonymity is acknowledged by the current practice of double-blind peer review, which is partly justified by the assumption that anonymous reviewers are less likely to be overly polite and will feel free to voice all sorts of potentially relevant criticism, and that reviewers would be even harder to recruit if they knew that their identity could be revealed. This is not to say that the practice of double-blind review, in its contemporary form, is generally superior.[66] There is also evidence of less beneficial effects of anonymity,[67] e.g. loafing.[68] As noted above, the function and value of openness is a complex issue, which probably does not admit of any general answer.

More generally, the role of tacit knowledge in science has long been acknowledged. There is often good reason to make such knowledge explicit as far as it goes. But some knowledge, or parts of it, is best left tacit. Explication is costly and may impair performance.[69] It is not just that codification and communication procedures take time; they can even reduce the competence of individual, and possibly also collective, agents.[70]

Both openness and explication may be defended by an appeal to the importance of reproducibility of studies and results in science. It can be argued that this epistemically important virtue is compromised if there are any “black boxes” or instances of blind trust in a scientific process. For example, probabilistic proofs in mathematics have been criticized for not being transferable. They can be performed over and over again, and so they are, strictly speaking, reproducible—but since they rely on e.g. randomization devices, they cannot be expected to provide exactly the same justification in each instance, as the evidence, such as the numbers picked, will differ from case to case.[71] But again, transferability is not an absolute value, only a desideratum that must be balanced against other concerns, and whose own value is arguably conditional on its contribution to more fundamental goals of truth and error-avoidance. Probabilistic proofs may thus promote the epistemic goals of mathematicians, even if they fall short of being transferable.[72] Moreover, at least from a reliabilist point of view, reproducibility in principle is hardly better than non-reproducibility if it is so rare and difficult to accomplish that actual testing will seldom or never be carried out.
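The epistemic structure of a probabilistic proof can be made concrete with a small example of my own (not drawn from the sources discussed here): a Fermat primality test. Each run draws fresh random witnesses, so repeated runs reproduce the verdict while resting on different evidence.

```python
import random

def fermat_probable_prime(n, k=20):
    """Probabilistic primality check: if n is prime, then a^(n-1) ≡ 1 (mod n)
    for every witness a with 2 <= a < n - 1. Passing k random witnesses makes
    compositeness very unlikely, but each run rests on freshly drawn evidence."""
    witnesses = [random.randrange(2, n - 1) for _ in range(k)]
    return all(pow(a, n - 1, n) == 1 for a in witnesses), witnesses

ok1, ev1 = fermat_probable_prime(104729)  # 104729 is prime
ok2, ev2 = fermat_probable_prime(104729)
# Both runs support the same conclusion, but on (almost surely) different
# evidence: the two witness lists will in practice never coincide.
print(ok1, ok2, ev1 == ev2)
```

The Fermat test can be fooled by certain composite numbers (Carmichael numbers), but the point here is only the epistemic structure: the conclusion is reproducible, while the particular evidence is not transferable from one run to the next.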

In sum, there is no reason to assume that a reduction in the degree of openness or accountability must necessarily constitute an epistemic problem. Besides, it is even questionable whether radically collaborative science does represent an overall reduction in the degree of openness or accountability as compared to traditional, smaller-scale research.

Scientific Creativity

Another general concern that may go against regimenting scientific collaboration is the undisputed importance of scientific creativity. Reliably identifying conditions of such creativity, which are complex and elusive, has proven highly difficult.[73] But there is ample evidence that interdisciplinarity, and, more generally, diversity and combination of methodological and theoretical approaches, are among the most pervasive features of the processes that are known to have led to significant discoveries.[74] This is further supported by studies in group psychology likewise indicating that diversity is conducive to creativity,[75] though some studies also point to its possible drawbacks, as too many different—or too widely differing—standpoints tend to make mutual understanding and cooperation difficult.[76] Kitcher has argued more abstractly that a diversity of research programs is epistemically beneficial.[77] In a similar vein, Weisberg and Muldoon have demonstrated that scientific “mavericks” are epistemically more productive than “followers.”[78] Zollman has used computer modelling to show that disconnected research teams are more likely to converge upon the right hypothesis than strongly connected networks of scientists, who are more prone to accept initial results that favour the wrong hypothesis.[79]
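Zollman's mechanism can be conveyed with a toy simulation of my own devising, a loose sketch rather than his actual model; the network structures, payoff probabilities and priors below are all illustrative assumptions. Agents repeatedly choose whichever of two actions they currently believe is better, observe a random binary payoff, and pool results with their network neighbours; a densely connected group can collectively lock in on the worse action after an unlucky early streak.

```python
import random

def simulate(neighbors, rounds=500, p_bad=0.5, p_good=0.55, seed=0):
    """Toy bandit model of a scientific community. neighbors[i] lists the
    agents whose results agent i sees. Returns True if, after the given
    number of rounds, every agent strictly prefers the objectively better
    action (arm 1)."""
    rng = random.Random(seed)
    n = len(neighbors)
    # Per agent, per arm: [successes, trials], seeded with a weak uniform
    # prior (1 success in 2 trials) so both arms start out looking equal.
    stats = [[[1, 2], [1, 2]] for _ in range(n)]
    probs = (p_bad, p_good)
    for _ in range(rounds):
        # Each agent myopically plays the arm with the higher estimated mean.
        results = []
        for agent in range(n):
            est = [s / t for s, t in stats[agent]]
            arm = 1 if est[1] >= est[0] else 0
            results.append((arm, 1 if rng.random() < probs[arm] else 0))
        # Agents update on their own result and on their neighbours' results.
        for agent in range(n):
            for src in [agent] + list(neighbors[agent]):
                arm, payoff = results[src]
                stats[agent][arm][0] += payoff
                stats[agent][arm][1] += 1
    return all(st[1][0] / st[1][1] > st[0][0] / st[0][1] for st in stats)

# Two stylized communication structures for six agents: a complete network
# (everyone sees everyone) and a sparse cycle (each agent sees two peers).
complete = [[j for j in range(6) if j != i] for i in range(6)]
cycle = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]

share_complete = sum(simulate(complete, seed=s) for s in range(50)) / 50
share_cycle = sum(simulate(cycle, seed=s) for s in range(50)) / 50
print(share_complete, share_cycle)
```

In Zollman's own models the sparse network converges to the better hypothesis more often; whether this toy version reproduces the effect depends on the (here arbitrary) parameters, so it should be read as an illustration of the mechanism, not as evidence.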

In spite of all these a priori and a posteriori reasons for assuming diversity, of a certain qualified kind, to be conducive to scientific truth and error avoidance, it would be too much to say that interdisciplinarity or diversity as such is epistemically beneficial tout court. There is reason to assume that more often than not interdisciplinary research, at least of the more radical type, leads to dead ends or at least very meagre results. But the relatively few cases of great success achieved by interdisciplinarity may still suffice to justify it, considering that mono-disciplinary research also yields rapidly diminishing returns and generally has a poor success rate, especially if the goal is defined as the production of significant truths. We have here a case where a process—engaging in interdisciplinary collaboration—may have a reliability significantly below 0.5, yet still count as sufficiently effective, because of its capacity for producing new and relevant truths and the poor record of the alternative processes available.

The concern for creativity might also justify a relaxed stance when it comes to compliance with established standards and methods of particular subfields and their compatibility. It is characteristic of even canonical examples of individual scientific creativity, like Maxwell’s development of the theory of the electromagnetic field, that standards, including theoretical assumptions, have been combined and altered in the process.[80] It is doubtful that typical cases of large-scale collaboration involve such creative twisting of standards. It is more likely that researchers will keep to the usual repertoire of established theories from the natural sciences (and hence the fear that big science could hamper scientific creativity, because of its conformity-enforcing role). But it is still important to remember that there is a trade-off between the conservative and the creative aspects of the scientific process. As Kuhn urged, the tendency to convergent thought must be counterbalanced by a tendency to divergent thought.[81] Additional regulation or insistence that established standards and methods be followed very closely could seriously hamper creativity.

GA may object that the above considerations miss the central thrust of their criticism. They do not wish to erect barriers to creativity, or limit the free play or smooth propagation of ideas. They are worried about industry-like conditions being imposed on scientific collaborators, with the risk of minimizing relevant dissent and evading responsibility, and wish to ensure a certain level of critical awareness and dialogue within the scientific collective.

There should be no doubt about GA’s good intentions. And I do see how it can seem inappropriate to associate their view with an almost reactionary attitude, or to defend big science with a concern for creativity and free flow of ideas. But I am afraid that for all the good intentions, almost any practicable solution to the problem posed by GA is likely to have conservative implications. It is hard to see how one could ensure that channels and procedures will be used for supporting divergent, rather than convergent thinking. In the absence of any clear idea about how one might regulate more discriminately, in a way that promotes only epistemically beneficial practices, and does not e.g. lead to group-think, holding back of important results or a general slowing down of scientific progress, we are left with the choice between a generally permissive and a more restrictive policy. I am not averse to regulatory measures in general, and even open to the suggestion that policies could be justified that are not just formal but qualitative and content-related, i.e. a kind of “epistemic affirmative action” aimed at boosting particular processes and suppressing others.[82] But I am worried that the costs of imposing general requirements will outweigh the benefits.

A final worry[83] to be considered is this. I have repeatedly implied that there may be problems similar to those highlighted by GA, only they are not genuinely epistemic. To this it could be objected that we ought not care about the label “epistemic.” But I am not the one obsessed with epistemic purity. Quite to the contrary, I have contended that if GA are right in their view of knowledge, considered as a conceptual analysis, then we should conclude that knowledge matters less than we have assumed, and not necessarily change our practice. I do, however, see a point in distinguishing between the epistemic aspects of collaborative science in a broad sense of the word and distinctively ethical or political issues. Some of the questions I have considered may be said to be matters of stipulation; but again, the burden is on GA to show that theirs are the relevant concepts to bring to the table—and I have provided reasons for thinking that they are not.

Conclusion
I have argued that the reasoning of GA relies on a whole series of implausibly, or at least controversially, strong assumptions about the nature of authorship, group knowledge, collective subjectivity, knowledge, responsibility and testimony. I have argued tentatively in favour of alternatives, some of which may admittedly be perceived as too radical, or at least equally controversial. But I have also tried to show that more mainstream, or even conservative, epistemological positions allow for a less pessimistic diagnosis of the trend towards radical collaboration, as they do not have any alarmist implications. One can stick to epistemological internalism, but adopt a distributed view of justification and responsibility, and/or acknowledge the possibility that radical collaboration terminates in the production of significant knowledge through testimonial transmission.

The upshot of my discussion is that there is nothing fundamentally or inherently problematic about large-scale collaborative research. In fact it may be seen as merely an (admittedly large and otherwise spectacular) institutional rearrangement; a new way of organizing and delimiting the same types of knowledge creation and dissemination processes that have always been characteristic of science. Experts in different fields and subfields communicate and contribute, more or less (un)knowingly, to the solution of scientific problems, trusting each other to various degrees, depending on their beliefs about the credentials of their peers, on processes of certification etc. If anything, multi-authorship and related practices have contributed to making these messy and decentralised processes more regulated and transparent, for better and for worse.

I have given some reasons for believing that the benefits of more loosely regulated, radically collaborative science may trump the inevitable risks and losses. They have, admittedly, been somewhat speculative (although, I contend, much less so than are the alarmist arguments). This is inevitable. We have very little empirical evidence for the superiority or inferiority of specific ways of organizing and conducting research; and unfortunately, such evidence is extremely hard to obtain, as we cannot carry out large-scale experiments, and too little can be gained from consulting the historical record. Many of those who lament the way science is currently organized or conducted do present their views as being based on historical evidence. Their arguments often come down to the colloquial wisdom that you should “never change a winning team.” But who knows how much the team has been winning, after all? Surely science has done much better than soothsaying or witchcraft. But we have very little basis for comparison with alternative paths of development that could still be considered developments of science. Hence we still have to rely on a priori reasoning, albeit informed by selective evidence, case-studies and the like. And there is simply no a priori reason to assume that RC should be epistemically inferior.

I must once again stress that I have not been arguing that big science is unproblematic. I have hinted at some ways in which it may, indirectly, have negative epistemic consequences, though these may be outweighed by other and more positive effects—while I have also noted that big science may inhibit creativity and knowledge production not by failing to meet, but rather by conforming too closely to the requirements laid down by GA. More serious, perhaps, are the ethical and political issues.[84] Practices like gratuitous authorship may not matter much for gain or loss of knowledge; but they may be bad for the distribution of credit, wear on scientists’ motivation and reduce mutual trust.[85] This could also have epistemic consequences in the long run, if it threatens the meritocratic system or makes scientists more suspicious and less keen on taking part in collaborative work or even working in specific fields. It is, however, an open question whether something like this is likely to happen, and whether the negative side-effects will be balanced by the positive, e.g. the heightened visibility and impact of important research that may come from its being associated with certain persons, regardless of their actual contributions.

Let me finally note that the very notion of RC, though certainly suggestive, is actually too indiscriminate to be of much use for theoretical or empirical studies of contemporary science. It combines features like scale, distribution, decentralization and interdisciplinarity, which are in reality more loosely associated and may not be best exemplified by the favourite examples of GA. One way to move beyond pure speculation would be to carry out more detailed case studies of collaboration in specific fields and of specific types and to compare the results, which could in turn serve as a basis for the construction of more adequate concepts. In the meantime, I will allow myself to assume that for all the spectacular, indeed mind-blowing news about large-scale collaboration and multi-authorship, there is, from a philosophical point of view, really nothing new under the sun.

References
Adams, Jonathan. “Collaborations: The Rise of Research Networks.” Nature 490 (17 October 2012): 335–336.

Ahlstrom-Vij, Kristoffer and Jeffrey Dunn. “A Defence of Epistemic Consequentialism.” Philosophical Quarterly 64, no. 257 (2014): 541–551.

Andersen, Hanne. “The Second Essential Tension: On Tradition and Innovation in Interdisciplinary Research.” Topoi 32, no. 1 (2013): 3-8.

Barthes, Roland. “The Death of the Author.” In Image-Music-Text, trans. Stephen Heath, 142-148. London: Fontana Press, 1977.

Bird, Alexander. “Social Knowing: The Social Sense of ‘Scientific Knowledge.’” Philosophical Perspectives 24, no. 1 (2010): 23-56.

BonJour, Laurence. “A Version of Internalist Foundationalism.” In Epistemic Justification, eds. Laurence BonJour and Ernest Sosa, 3-93. Oxford: Blackwell, 2003.

Boisot, Max H. Knowledge Assets: Securing Competitive Advantage in the Information Economy. Oxford: Oxford University Press, 1998.

Chamorro-Premuzic, Tomas. “Why Brainstorming Works Better Online.” Harvard Business Review. April 02, 2015.

Christopherson, Kimberly. “The Positive and Negative Social Implications of Anonymity in Internet Interactions: ‘On the Internet, Nobody Knows You’re a Dog.’” Computers in Human Behavior 23 (2007): 3038-3056.

Collins, Harry. Tacit and Explicit Knowledge. Chicago: University of Chicago Press, 2010.

Collins, Harry and Robert Evans. Rethinking Expertise. Chicago: Chicago University Press, 2007.

Connolly, Terry, Leonard M. Jessup and Joseph S. Valacich. “Effects of Anonymity and Evaluative Tone on Idea Generation in Computer-Mediated Groups.” Management Science 36, no. 6 (1990): 689-703.

Conee, Earl and Richard Feldman. “Internalism Defended.” In Evidentialism, eds. Earl Conee and Richard Feldman, 53-82. Oxford: Oxford University Press, 2004.

Donaldson, Lex. American Anti-Management Theories of Organization: A Critique of Paradigm Proliferation. Cambridge: Cambridge University Press, 1995.

Easwaran, Kenny. “Probabilistic Proofs and Transferability.” Philosophia Mathematica (III) 17 (2009): 341-62.

Etzkowitz, Henry. MIT and The Rise of Entrepreneurial Science. London: Routledge, 2002.

Fallis, Don. “What Do Mathematicians Want? Probabilistic Proofs and the Epistemic Goals of Mathematicians.” Logique et Analyse 45 (2002): 373-88.

Fallis, Don. “Probabilistic Proofs and the Collective Epistemic Goals of Mathematicians.” In Collective Epistemology, eds. Hans Bernard Schmid, Marcel Weber, Daniel Sirtes, 157-175. Ontos Verlag, 2011.

Foucault, Michel. “What is an Author?” In Aesthetics, Method and Epistemology, edited by J. D. Faubion. Translated by R. Hurley et al., 205-222. New York: The New Press, 1998.

Frost-Arnold, Karen. “Trustworthiness and Truth: The Epistemic Pitfalls of Internet Accountability.” Episteme 11, no. 1 (2014): 63-81.

Galison, Peter. “The Collective Author.” In Scientific Authorship: Credit and Intellectual Property in Science, edited by Peter Galison and Mario Biagioli, 325-353. New York and Oxford: Routledge, 2003.

Galison, Peter and Mario Biagioli. Scientific Authorship: Credit and Intellectual Property in Science. New York and Oxford: Routledge, 2003.

Gilbert, Margaret. On Social Facts. London: Routledge, 1989.

Gilbert, Margaret. Joint Commitment: How We Make the Social World. Oxford: Oxford University Press, 2014.

Goldman, Alvin I. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63, no. 1 (2001): 85-110.

Graham, Peter. “Liberal Fundamentalism and Its Rivals.” In The Epistemology of Testimony, eds. Jennifer Lackey and Ernest Sosa, 93-115. Oxford: Oxford University Press, 2006.

Hardwig, John. “Epistemic Dependence.” Journal of Philosophy 82, no. 7 (1985): 335-349.

Hardwig, John. “The Role of Trust in Knowledge.” Journal of Philosophy 88, no. 12 (1991): 693-708.

Hirsch, Eric D. Validity in Interpretation. New Haven, CT: Yale University Press, 1967.

Hong, Lu and Scott Page. “Groups of Diverse Problem Solvers Can Outperform Groups of High-Ability Problem Solvers.” Proceedings of the National Academy of Sciences of the United States, 101, no. 46 (2004): 16385–16389.

Huebner, Bryce. Macrocognition: A Theory of Distributed Minds and Collective Intentionality. Oxford: Oxford University Press, 2014.

Huebner, Bryce, Rebecca Kukla, and Eric Winsberg. “Making an Author in Radically Collaborative Research.” In Scientific Collaboration and Collective Knowledge, edited by Thomas Boyer, Connor Mayo-Wilson, and Michael Weisberg. Oxford: Oxford University Press, forthcoming.

Iliffe, Robert. “Butter for Parsnips: Authorship, Audience, and the Incomprehensibility of the Principia.” In Scientific Authorship: Credit and Intellectual Property in Science, edited by Peter Galison and Mario Biagioli, 33-66. New York and Oxford: Routledge, 2003.

Juhl, Peter D. Interpretation: An Essay in the Philosophy of Literary Criticism. Princeton: Princeton University Press, 1980.

Kieser, Alfred and Peter Walgenbach. Organisation. Stuttgart: Schäffer-Poesel, 2010.

Kitcher, Philip. “The Division of Cognitive Labor.” Journal of Philosophy 87, no. 1 (1990): 5–22.

Kitcher, Philip. The Advancement of Science. New York: Oxford University Press, 1993.

Kitcher, Philip. Science, Truth and Democracy. Oxford: Oxford University Press, 2001.

Kitcher, Philip. Science in a Democratic Society. Amherst, NY: Prometheus, 2011.

Klausen, Søren H. “Two Notions of Epistemic Normativity.” Theoria 75 (2009): 161-178.

Klausen, Søren H. “Sources and Conditions of Scientific Creativity.” In Handbook of Research on Creativity, edited by Janet Chan and Kerry Thomas, 33-47. Cheltenham: Elgar, 2013.

Klausen, Søren H. “Group Knowledge: A Real-World Approach.” Synthese 192, no. 3 (2015): 813-839.

Klausen, Søren H. “Levels of Literary Meaning.” Philosophy and Literature 41, no. 1 (2017; in press).

Koestler, Arthur. The Act of Creation. London: Penguin, 1964.

Kornblith, Hilary. On Reflection. Oxford: Oxford University Press, 2012.

Krimsky, Sheldon. “Do Financial Conflicts of Interest Bias Research? An Inquiry into the ‘Funding Effect’ Hypothesis.” Science, Technology & Human Values 38, no. 4 (2013).

Kukla, Rebecca. “‘Author TBD’: Radical Collaboration in Contemporary Biomedical Research.” Philosophy of Science 79, no. 5 (2012): 845-858.

Kuhn, Thomas. The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: University of Chicago Press, 1977.

Lesser, Lenard I., Cara B. Ebbeling, Merrill Goozner, David Wypij, and David S. Ludwig. “Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles.” PLOS Medicine 4, no. 1 e5 (2007): 0041-0046. doi:10.1371/journal.pmed.0040005.

Livingston, Paisley. Art and Intention. Oxford: Oxford University Press, 2005.

Levine, John M. et al. “Newcomer Innovation in Work Teams.” In Group Creativity: Innovation Through Collaboration, edited by Paul B. Paulus and Bernard A. Nijstad, 202-224. Oxford: Oxford University Press, 2003.

List, Christian and Philip Pettit. Group Agency. Oxford: Oxford University Press, 2011.

Mathiesen, Kay. “The Epistemic Features of Group Beliefs.” Episteme 2 (2006): 161-175.

Marušić, Ana, Lana Bošnjak and Ana Jerončić. “A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines.” PLOS ONE 6, no. 9, e23477 (2011): 1-17. doi:10.1371/journal.pone.0023477.

Milliken, Frances J. et al. “Diversity and Creativity in Work Groups.” In Group Creativity: Innovation Through Collaboration, edited by Paul B. Paulus and Bernard A. Nijstad, 32-62. Oxford: Oxford University Press, 2003.

Myers-Schulz, Blake and Eric Schwitzgebel. “Knowing that P Without Believing that P.” Nous 47, no. 2 (2013): 371-384.

Nauenberg, Michael. “The Reception of Newton’s Principia.” arXiv:1503.06861 (2015).

Nersessian, Nancy. Creating Scientific Concepts. Cambridge, MA: MIT Press, 2008.

Nestle, Marion. “Food Company Sponsorship of Nutrition Research and Professional Activities: A Conflict of Interest?” Public Health Nutrition 4, no. 5 (2001): 1015-1022.

Owens, David. Reason Without Freedom: The Problem of Epistemic Normativity. London: Routledge, 2000.

Paulus, Paul B. and Bernard A. Nijstad, eds. Group Creativity. Oxford: Oxford University Press, 2003.

Petersen, E. N. and C. S. D. Schaffalitzky. “Why Not Open the Black Box of Journal Editing in Philosophy? Make Peer Reviews of Published Papers Available.” Forthcoming.

Rowbottom, Darrell. “N-Rays and the Semantic View of Scientific Progress.” Studies in History and Philosophy of Science 39 (2008): 277–278.

Sarewitz, Daniel. Frontiers Of Illusion: Science, Technology, and the Politics of Progress. Philadelphia: Temple University Press, 1996.

Scott, W. Richard. Organizations: Rational, Natural, and Open Systems, 3rd ed. Englewood Cliffs, N.J.: Prentice Hall, 1992.

Schmitt, Frederick F. “Transindividual Reasons.” In The Epistemology of Testimony, edited by Jennifer Lackey and Ernest Sosa, 193-224. Oxford: Oxford University Press, 2006.

Smolin, Lee. “Why No New Einstein?” Physics Today 58, no. 6 (June 2005): 56-57.

Stanley, Jason. Know How. Oxford: Oxford University Press, 2011.

Tuomela, Raimo. The Philosophy of Sociality: The Shared Point of View. Oxford: Oxford University Press, 2007.

Winsberg, Eric, Bryce Huebner, and Rebecca Kukla. “Accountability and Values in Radically Collaborative Research.” Studies in History and Philosophy of Science Part A 46 (2014): 16-23.

Weisberg, Michael and Ryan Muldoon. “Epistemic Landscapes and the Division of Cognitive Labor.” Philosophy of Science 76, no. 2 (2009): 225–252.

Zollman, Kevin J. S. “The Epistemic Benefit of Transient Diversity.” Erkenntnis 72, no. 1 (2010): 17-35.

[1] Etzkowitz, MIT and The Rise of Entrepreneurial Science.

[2] Adams, “Collaborations.”

[3] Winsberg is not from Georgetown, but I count him among the Georgetown Alarmists because of his association with Huebner and Kukla. It is likely that not all of GA subscribe to all of the claims attributed to them in this paper, at least not with equal confidence or emphasis. Nevertheless, a common “alarmist” attitude is clearly detectable in their writings.

[4] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 13; 23.

[5] Thus, iv) follows from i) and ii) together with the empirical facts about RC. vii) follows from vi), v) and iv) (though vii is also inferred directly from the lack of accountability in RC; it may be said that GA do not generally claim that authorship is a condition for scientific knowledge, only that the conditions they lay down for collective scientific knowledge overlap those they lay down for authorship).

[6] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 2ff.

[7] Kukla, “‘Author TBD,’” 848.

[8] Ibid., 857; Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 1.

[9] Winsberg, Huebner and Kukla, “Accountability and Values in Radically Collaborative Research,” 1.

[10] Huebner, Macrocognition, 213f.

[11] Winsberg, Huebner and Kukla, “Accountability and Values in Radically Collaborative Research,” 1.

[12] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 13.

[13] Barthes, “The Death of the Author.”

[14] Foucault, “What is an Author?”

[15] Kukla, “‘Author TBD,’” 846.

[16] Ibid., 852.

[17] As one reviewer kindly did.

[18] E.g. right at the beginning of Winsberg, Huebner, and Kukla, “Accountability and Values in Radically Collaborative Research.”

[19] Kukla, “‘Author TBD,’” 849.

[20] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research.”

[21] It is, moreover, debatable whether scientific progress should be measured in terms of knowledge- or merely truth-production. See Rowbottom 2008 for a defence of the latter view.

[22] Tuomela, The Philosophy of Sociality; List and Pettit speak of a common awareness (Group Agency, 33).

[23] Gilbert, On Social Facts; Joint Commitment.

[24] Mathiesen, “The Epistemic Features of Group Beliefs.”

[25] Livingston, Art and Intention, Ch. 3.

[26] Mathiesen, “The Epistemic Features of Group Beliefs.”

[27] Mathiesen requires that the members of a group with an epistemic goal commit themselves to follow certain practices, viz. those seen as appropriately regulating epistemic endeavours. That still seems weaker than GA’s accountability requirement, which involves an actual ability to justify the claims in question (though it is weakened in some subsequent formulations—cf. 2) above.

[28] Scott, Organizations, 10; Donaldson, American Anti-Management Theories of Organization, 135; Kieser and Walgenbach, Organisation, 6.

[29] Klausen, “Group Knowledge.”

[30] Hardwig, “The Role of Trust in Knowledge.”

[31] Cf. 2.

[32] Nor is it likely that they have been able to point to a neighbouring scientist who could (contrary to what was suggested by a reviewer). The emergence of the evolutionary synthesis was structurally more similar to RC (as described by GA) than to a simple chain of ordered, cumulative epistemic tasks.

[33] Klausen, “Levels of Literary Meaning.”

[34] Iliffe, “Butter for Parsnips”; Nauenberg, “The Reception of Newton’s Principia.”

[35] Galison, “The Collective Author.”

[36] Cf. Livingston, Art and Intention.

[37] Hirsch, Validity in Interpretation; Juhl, Interpretation.

[38] Klausen, “Levels of Literary Meaning.”

[39] The ordinary notion of authorship may be ambiguous—or disjunctive—inasmuch as both the intentional production of first-order meaningful language (of certain specific kinds) and the selection, organization and communication of such language may suffice for authorship.

[40] Livingston, Art and Intention, Ch. 3.

[41] Ezra Pound was not recognized as co-author of Eliot’s The Waste Land, even if he appears to have acted as a kind of editor, or even metacognitive assistant, for Eliot, helping to select and arrange the vast and heterogeneous material that Eliot had compiled. There are numerous cases of works, by e.g. Wolfe, Yeats and Brecht, which appear to have been produced in a genuinely collective manner, without their co-authors having been explicitly recognized as such. Hence the scientific practice of authorship attribution can be said to be, at least in certain respects, more in line with the commonsense notion of authorship than the traditional “literary” one, which has often generated a wrong impression of solitary work.

[42] Though it has recently been suggested, not quite implausibly, that knowledge does not even require belief. See e.g. Myers-Schulz and Schwitzgebel, “Knowing that P Without Believing that P.”

[43] This is conceded even by leading internalists, e.g. evidentialists like Conee and Feldman, “Internalism Defended” or BonJour, “A Version of Internalist Foundationalism.”

[44] Klausen, “Group Knowledge.”

[45] See Bird, “Social Knowing” for a defense of a similar view.

[46] A socialized version of responsibilism has been proposed by Owens, Reason Without Freedom: “A belief is justified if every rational agent to whom responsibility for the belief applies or can pass acts responsibly with regard to the belief” (cf. Schmitt 2006, 215).

[47] Of course, GA have not claimed that no knowledge is produced in RC, as one reviewer pointed out. But they obviously worry that the relevant kind of knowledge—the putative scientific contribution, the end result—will not really be produced.

[48] Kukla, “‘Author TBD,’” 850.

[49] Ibid., 849. In fairness to GA, it should be said that they are, perhaps, merely concerned with the possibility that testimony could tie together the whole group of collaborators, in the way required for authorship, and not with the possibility of distributed testimony-based knowledge. The latter possibility should be taken seriously, however.

[50] See e.g. the overview given by Graham, “Liberal Fundamentalism and Its Rivals.”

[51] Although the trust in question is blind as defined by Hardwig himself: The recipient does not have the reasons that are necessary to (directly) justify the belief in question (Hardwig, “The Role of Trust in Knowledge,” 699). Goldman, “Experts” argues convincingly that the actual practice of acquiring knowledge by testimony is less blind (in the wider sense) than Hardwig and proponents of a direct, non-reductionist view usually assume.

[52] Winsberg, Huebner, and Kukla, “Accountability and Values in Radically Collaborative Research,” 1.

[53] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 13.

[54] Klausen, “Sources and Conditions of Scientific Creativity.”

[55] Goldman, “Experts.”

[56] Collins and Evans, Rethinking Expertise.

[57] From the theoretical perspective of GA, this may be somewhat different. On their view, improved accountability will automatically increase knowledge production, as it is built into their definition of knowledge.

[58] Cf. Kitcher, Science in a Democratic Society, 145.

[59] Nestle, “Food company sponsorship of nutrition research and professional activities”; Lesser et al., “Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles”; Krimsky, “Do Financial Conflicts of Interest Bias Research?”

[60] Kitcher, Science, Truth and Democracy; Science in a Democratic Society.

[61] Goldman, “Experts,” 205ff.

[62] Galison, “The Collective Author.”

[63] Klausen, “Two Notions of Epistemic Normativity”; Ahlstrom-Vij and Dunn, “A Defence of Epistemic Consequentialism.”

[64] Frost-Arnold, “Trustworthiness and Truth.”

[65] Connolly, Jessup, and Valacich, “Effects of Anonymity and Evaluative Tone on Idea Generation in Computer-Mediated Groups.”

[66] For arguments that point to the relative importance of accountability, see Petersen and Schaffalitzky, “Why Not Open the Black Box of Journal Editing in Philosophy?”

[67] As one reviewer kindly pointed out.

[68] Christopherson, “The Positive and Negative Social Implications of Anonymity in Internet Interactions”; but see Chamorro-Premuzic, “Why Brainstorming Works Better Online” for a more favourable assessment.

[69] Stanley, Know How, 173f.

[70] See Boisot 1998, 42ff., for an instructive summary and analysis of the findings from management and organization science.

[71] Easwaran, “Probabilistic Proofs and Transferability.”

[72] See Fallis, “What Do Mathematicians Want?” and “Probabilistic Proofs and the Collective Epistemic Goals of Mathematicians.” The latter also points out that replication of experiments in science generally proceeds in this way; they are not based on the exact same evidence, only on evidence of the same specific type.

[73] Klausen, “Sources and Conditions of Scientific Creativity.”

[74] Koestler, The Act of Creation; Nersessian, Creating Scientific Concepts; Klausen, “Sources and Conditions of Scientific Creativity.”

[75] Milliken et al., “Diversity and Creativity in Work Groups”; Hong and Page, “Groups of Diverse Problem Solvers Can Outperform Groups of High-Ability Problem Solvers.”

[76] Levine et al., “Newcomer Innovation in Work Teams.”

[77] Kitcher, “The Division of Cognitive Labor”; The Advancement of Science.

[78] Weisberg and Muldoon, “Epistemic Landscapes and the Division of Cognitive Labor.”

[79] Zollman, “The Epistemic Benefit of Transient Diversity.”

[80] Nersessian, Creating Scientific Concepts.

[81] Kuhn, The Essential Tension; see also Andersen, “The Second Essential Tension.”

[82] Cf. Goldman 1999, 210; 216.

[83] Kindly raised by a reviewer.

[84] In its present state, big science is entangled with tendencies and assumptions, e.g. about the relationship between science and society, that certainly deserve critical attention. For a critical analysis of the assumptions behind post-WW2 science policy, see Sarewitz 1996.

[85] See Marušić et al. 2011 for an instructive, but somewhat inconclusive, survey indicating that there is indeed a serious issue, but offering no clear evidence that the situation is problematic.

Author Information: Steve Fuller, University of Warwick,


Editor’s Note: Steve Fuller’s “A Man for All Seasons, Including Ours: Thomas More as the Patron Saint of Social Media” originally appeared in ABC Religion and Ethics on 23 February 2017.

Image credit: Carolien Coenen, via flickr

November 2016 marked the five hundredth anniversary of the publication of Utopia by Thomas More in Leuven through the efforts of his friend and fellow Humanist, Desiderius Erasmus.

More is primarily remembered today for this work, which sought to show how a better society might be built by learning from the experience of other societies.

It was published shortly before More entered into the service of King Henry VIII, who liked Utopia. As the monarch notoriously struggled to assert England’s sovereignty over the Pope, More proved to be a critical supporter, eventually rising to the rank of Lord Chancellor, the King’s legal advisor.

Nevertheless, within a few years More was condemned to death for refusing to acknowledge the King’s absolute authority over the Pope. According to the Oxford English Dictionary, More introduced “integrity”—in the sense of “moral integrity” or “personal integrity”—into English while awaiting execution. Specifically, he explained his refusal to sign the “Oath of Supremacy” of the King over the Pope by his desire to preserve the integrity of his reputation.

To today’s ears this justification sounds somewhat self-serving, as if More were mainly concerned with what others would think of him. However, More lived at least two centuries before the strong modern distinction between the public and the private person was in general use.

He was getting at something else, which is likely to be of increasing relevance in our “postmodern” world, which has thrown into doubt the very idea that we should think of personal identity as a matter of self-possession in the exclusionary sense which has animated the private-public distinction. It turns out that the pre-modern More is on the side of the postmodernists.

We tend to think of “modernization” as an irreversible process, and in some important respects it seems to be. Certainly our lives have come to be organized around technology and its attendant virtues: power, efficiency, speed. However, some features of modernity—partly as an unintended consequence of its technological trajectory—appear to be reversible. One such feature is any strong sense of what is private and public—something to which any avid user of social media can intuitively testify.

More proves to be an interesting witness here because while he had much to say about conscience, he did not presume the privacy of conscience. On the contrary, he judged someone to be a person of “good conscience” if he or she listened to the advice of trusted friends, as he had taken Henry VIII to have been prior to his issuing the Oath of Supremacy. This is quite different from the existentially isolated conception of conscience that comes into play during the Protestant Reformation, on which subsequent secular appeals to conscience in the modern era have been based.

For More, conscience is a publicly accessible decision-making site, the goodness of which is to be judged in terms of whether the right principles have been applied in the right way in a particular case. The platform for this activity is an individual human being who—perhaps by dint of fate—happens to be hosting the decision. However, it is presumed that the same decision would have been reached, regardless of the hosting individual. Thus, it makes sense for the host to consult trusted friends, who could easily imagine themselves as the host.

What is lacking from More’s analysis of conscience is a sense of its creative and self-authorizing character, a vulgarized version of which features in the old Frank Sinatra standard, “My Way.” This is the sense of self-legislation which Kant defined as central to the autonomous person in the modern era. It is a legacy of Protestantism, which took much more seriously than Catholicism the idea that humans are created “in the image and likeness of God.” In effect, we are created to be creators, which is just another way of saying that we are unique among the creatures in possessing “free will.”

To be sure, whether our deeds make us worthy of this freedom is for God alone to decide. Our fellows may well approve of our actions but we—and they—may be judged otherwise in light of God’s moral bookkeeping. The modern secular mind has inherited from this Protestant sensibility an anxiety—a “fear and trembling,” to recall Kierkegaard’s echo of St. Paul—about our fate once we are dead. This sense of anxiety is entirely lacking in More, who accepts his death serenely even though he has no greater insight into what lies in store for him than the Protestant Reformers or secular moderns.

Understanding the nature of More’s serenity provides a guide for coming to terms with the emerging postmodern sense of integrity in our data-intensive, computer-mediated world. More’s personal identity was strongly if not exclusively tied to his public persona—the totality of decisions and actions that he took in the presence of others, often in consultation with them. In effect, he engaged throughout his life in what we might call a “critical crowdsourcing” of his identity. The track record of this activity amounts to his reputation, which remains in open view even after his death.

The ancient Greeks and Romans would have grasped part of More’s modus operandi, which they would understand in terms of “fame” and “honour.” However, the ancients were concerned with how others would speak about them in the future, ideally to magnify their fame and honour to mythic proportions. They were not scrupulous about documenting their acts in the sense that More and we are. On the contrary, the ancients hoped that a sufficient number of word-of-mouth iterations over time might serve to launder their acts of whatever unsavoury character they may originally have had.

In contrast, More was interested in people knowing exactly what he decided on various occasions. On that basis they could pass judgement on his life, thereby—so he believed—vindicating his reputation. His “integrity” thus lay in his life being an open book that could be read by anyone as displaying some common narrative threads that add up to a conscientious person. This orientation accounts for the frequency with which More and his friends, especially Erasmus, testified to More’s standing as a man of good conscience in whatever he happened to say or do. They contributed to his desire to live “on the record.”

More’s sense of integrity survives on Facebook pages or Twitter feeds, whenever the account holders are sufficiently dedicated to constructing a coherent image of themselves, notwithstanding the intensity of their interaction with others. In this context, “privacy” is something quite different from how it has been understood in modernity. Moderns cherish privacy as an absolute right to refrain from declaration in order to protect their sphere of personal freedom, access to which no one—other than God, should he exist—is entitled. For their part, postmoderns interpret privacy more modestly as friendly counsel aimed at discouraging potentially self-harming declarations. This was also More’s world.

More believed that however God settled his fate, it would be based on his public track record. Unlike the Protestant Reformers, he also believed that this track record could be judged equally by humans and by God. Indeed, this is what made More a Humanist, notwithstanding his loyalty to the Pope unto death.

Yet More’s stance proved to be theologically controversial for four centuries, until the Catholic Church finally canonized him in 1935; in 2000 he was declared the patron saint of politicians. Perhaps More’s spiritual patronage should be extended to cover social media users.

In this Special Issue, our contributors share their perspectives on how technology has changed what it means to be human and to be a member of a human society. These articles speak to issues raised in Frank Scalambrino’s edited book Social Epistemology and Technology: Toward Public Self-Awareness Regarding Technological Mediation.


Special Issue 4: “Social Epistemology and Technology,” edited by Frank Scalambrino
