Author Information: Valerie Joly Chock & Jonathan Matheson, University of North Florida, n01051115@ospreys.unf.edu & j.matheson@unf.edu.

Matheson, Jonathan, and Valerie Joly Chock. “Science Communication and Epistemic Injustice.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 1-9.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44H

Epistemic injustice occurs when someone is wronged in their capacity as a knower.[1] More and more attention is being paid to the epistemic injustices that exist in our scientific practices. In a recent paper, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. In what follows we briefly explain his argument before raising several challenges to it.

Overview

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that science communication is fundamentally epistemically unjust. First, let’s get clear on the target. According to Medvecky, science communication is in the business of distributing knowledge – scientific knowledge.

As Medvecky uses the term, ‘science communication’ is an “umbrella term for the research into and the practice of increasing public understanding of and public engagement with science.” (1394) Science communication is thus both a field and a practice, and consists of:

institutionalized science communication; institutionalized in government policies on the public understanding of and public engagement with the sciences; in the growing numbers of academic journals and departments committed to further the enterprise through research and teaching; in requirements set by funding bodies; and in the growing numbers of associations clustering under the umbrella of science communication across the globe. (1395)

Science communication involves the distribution of scientific knowledge from experts to non-experts, so science communication is in the distribution game. As such, Medvecky claims that issues of fair and just distribution arise. According to Medvecky, these issues concern both what knowledge is dispersed, as well as who it is dispersed to.

In examining the fairness of science communication, Medvecky connects his discussion to the literature on epistemic injustice (Anderson, Fricker, Medina). While exploring epistemic injustices in science is not novel, Medvecky’s focus on science communication is. To argue that science communication is epistemically unjust, Medvecky relies on Medina’s (2011) claim that credibility excesses can result in epistemic injustice. Here is José Medina,

[b]y assigning a level of credibility that is not proportionate to the epistemic credentials shown by the speaker, the excessive attribution does a disservice to everybody involved: to the speaker by letting him get away with things; and to everybody else by leaving out of the interaction a crucial aspect of the process of knowledge acquisition: namely, opposing critical resistance and not giving credibility or epistemic authority that has not been earned. (18-19)

Since credibility is comparative, credibility excesses given to members of some group can create epistemic injustice, testimonial injustice in particular, toward members of other groups. Medvecky makes the connection to science communication as follows:

While there are many well-argued reasons for communicating, popularizing, and engaging with science, these are not necessarily reasons for communicating, popularizing, and engaging only with science. Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialized treatment.

This uniqueness creates a credibility excess for science as a field. And since science communication creates credibility excess by implying that concerted efforts to communicate non-science disciplines as fields of reliable knowledge is not needed, then science communication, as a practice and as a discipline, is epistemically unjust. (1400)

While the principal target here is the field of science communication, any credibility excesses enjoyed by the field will trickle down to the practitioners within it. If science is being given a credibility excess, then those engaged in scientific practice and communication are also receiving such a comparative advantage over non-scientists.

So, according to Medvecky, science communication is epistemically unjust to knowers – knowers in non-scientific fields. Since these non-scientific knowers are given a comparative credibility deficit (in contrast to scientific knowers), they are wronged in their capacity as knowers.

The Argument

Medvecky’s argument can be formally put as follows:

  1. Science is not a unique and privileged field.
  2. If (1), then science communication creates a credibility excess for science.
  3. Science communication creates a credibility excess for science.
  4. If (3), then science communication is epistemically unjust.
  5. Science communication is epistemically unjust.

Premise (1) is motivated by claiming that there are fields other than science that are equally important to communicate, to popularize, and to have non-specialists engage with. Medvecky claims that not only does non-scientific knowledge exist, but such knowledge can be just as reliable as scientific knowledge, just as important to our lives, and just as in need of translation into layman’s terms. So, while scientific knowledge is surely important, it is not alone in this claim.

Premise (2) is motivated by claiming that science communication falsely represents science as a unique and privileged field since the concerns of science communication lie solely within the domain of science. By only communicating scientific knowledge, and failing to note that there are other worthy domains of knowledge, science communication falsely presents itself as a privileged field.

As Medvecky puts it, “Focusing and funding only the communication of science as reliable knowledge represents science as a unique and privileged field; as the only reliable field whose knowledge requires such specialised treatment.” (1400) So, science communication falsely represents science as special. Falsely representing a field as special in contrast to other fields creates a comparative credibility excess for that field and the members of it.

So, science communication implies that other fields are not as worthy of such engagement by falsely treating science as a unique and privileged field. This gives science and scientists a comparative credibility excess to these other disciplines and their practitioners.

(3) follows validly from (1) and (2). If (1) and (2) are true, science communication creates a credibility excess for science.

Premise (4) is motivated by Medina’s (2011) work on epistemic injustice. Epistemic injustice occurs when someone is harmed in their capacity as a knower. While Fricker limited epistemic injustice (and testimonial injustice in particular) to cases where someone was given a credibility deficit, Medina has forcefully argued that credibility excesses are equally problematic since credibility assessments are often comparative.

Given the comparative nature of credibility assessments, parties can be epistemically harmed even if they are not given a credibility deficit. If other parties are given credibility excesses, a similar epistemic harm can be brought about due to comparative assessments of credibility. So, if science communication gives science a credibility excess, science communication will be epistemically unjust.

(5) follows validly from (3) and (4). If (3) and (4) are true, science communication is epistemically unjust.
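Since (3) and (5) are each obtained by modus ponens, the propositional validity of the whole argument can be confirmed mechanically. Here is a minimal truth-table sketch in Python (the proposition names are our own shorthand; the check concerns only the inference pattern, not the truth of Medvecky’s premises):

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# p1: science is not a unique and privileged field      (premise 1)
# p3: science communication creates a credibility
#     excess for science                                (claim 3)
# p5: science communication is epistemically unjust     (claim 5)
# Premise (2) is p1 -> p3; premise (4) is p3 -> p5.
valid = all(
    (p3 and p5)                                        # (3) and (5) follow
    for p1, p3, p5 in product([True, False], repeat=3)
    if p1 and implies(p1, p3) and implies(p3, p5)      # premises hold
)
print(valid)  # True
```

The check confirms only that the conclusions follow if the premises are true; the objections that follow target the premises themselves.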

The Problems

While Medvecky’s argument is provocative, we believe that it is also problematic. In what follows we motivate a series of objections to his argument. Our focus here will be on the premises that most directly relate to epistemic injustice. So, for our purposes, we are willing to grant premise (1). Even granting (1), there are significant problems with both (2) and (4). Highlighting these issues will be our focus.

We begin with our principal concerns regarding (2). These concerns are best seen by first granting that (1) is true – granting that science is not a unique and privileged field. Even granting that (1) is true, science communication would not create a credibility excess. First, it is important to try to locate the source of the alleged credibility excess. Science communicators do deserve a higher degree of credibility in distributing scientific knowledge than non-scientists. When it comes to scientific matters, we should trust the scientists more. So, the claim cannot be that non-scientists should be afforded the same amount of credibility on scientific matters as scientists.

The problem might be thought to be that scientists enjoy a credibility excess in virtue of their scientific credibility somehow carrying over to non-scientific fields where they are less credible. While Medvecky does briefly consider such an issue, this too is not his primary concern in this paper.[2] Medvecky’s fundamental concern is that science communication represents scientific questions and knowledge as more valuable than questions and knowledge in other domains. According to Medvecky, science communication does this by only distributing scientific knowledge when this is not unique and privileged (premise (1)).

But does one represent a domain as more important or valuable simply by not talking about other domains? Perhaps an individual who discussed only science in every context would imply that scientific information is the only information worth communicating, but such a situation is quite different from the one we are considering.

For one thing, science communication occurs within a given context, not across all contexts. Further, since that context is expressly about communicating science, it is hard to see how one could reasonably infer that knowledge in other domains is less valuable. Let’s consider an analogy.

Philosophy professors tend to only talk about philosophy during class (or at least let’s suppose). Should students in a philosophy class conclude that other domains of knowledge are less valuable since the philosophy professor hasn’t talked about developments in economics, history, biology, and so forth during class? Given that the professor is only talking about philosophy in one given context, and this context is expressly about communicating philosophy, such inferences would be unreasonable.

A Problem of Overreach

We can further see that there is an issue with (2) because it both overgeneralizes and is overly demanding. Let’s consider these in turn. If (2) is true, then the problem of creating credibility excesses is not unique to science communication. When it comes to knowledge distribution, science communication is far from the only practice/field to have a narrow and limited focus regarding which knowledge it distributes.

So, if there are multiple fields worthy of such engagement (granting (1)), any practice/field that is not concerned with distributing all such knowledge will be guilty of generating a similar credibility excess (or at least trying to). For instance, the American Philosophical Association (APA) is concerned with distributing philosophical knowledge and knowledge related to the discipline of philosophy. They exclusively fund endeavors related to philosophy and public initiatives with a philosophical focus. If doing so is sufficient for creating a credibility excess, given that other fields are equally worthy of such attention, then the APA is creating a credibility excess for the discipline of philosophy. This doesn’t seem right.

Alternatively, consider a local newspaper. This paper is focused on distributing knowledge about local issues. Suppose that it is also involved in the community, sponsoring both local events and initiatives that make the local news more engaging. Supposing that there is nothing unique or privileged about this town, Medvecky’s argument for (2) would have us believe that the paper is creating a credibility excess for the issues of this town. This too is the wrong result.

This overgeneralization problem can also be seen by considering a practical analogy. Suppose that a bakery only sells and distributes baked goods. If there is nothing unique and privileged about baked goods – if there are other equally important goods out there (the parallel of premise (1)) – then Medvecky’s reasoning would have it that the bakery is guilty of a kind of injustice by virtue of not being in the business of distributing those other (equally valuable) goods.

The problem is that omissions in distribution don’t have the implications that Medvecky supposes. The fact that an individual or group is not in the business of distributing some kind of good does not imply that those goods are less valuable.

There are numerous legitimate reasons why one may employ limitations regarding which goods one chooses to distribute, and these limitations do not imply that the other goods are somehow less valuable. Returning to the good of knowledge, focusing on distributing some knowledge (while not distributing other knowledge), does not imply that the other knowledge is less valuable.

This overgeneralization problem leads to an overdemanding problem with (2). The overdemanding problem concerns what would be required of distributors (whether of knowledge or more tangible goods) in order to avoid committing injustice. If omissions in distribution had the implications that Medvecky supposes, then distributors, in order to avoid injustice, would have to refrain from limiting the goods they distribute.

If (2) is true, then science communication must fairly and equally distribute all knowledge in order to avoid injustice. And, as the problem of creating credibility excesses is not unique to science communication, this would apply to all other fields that involve knowledge distribution as well. The problem here is that avoiding injustice requires far too much of distributors.

An Analogy to Understand Avoiding Injustice

Let’s consider the practical analogy again to see how avoiding injustice is overdemanding. To avoid injustice, the bakery must sell and distribute much more than just baked goods. It must sell and distribute all the other goods that are as equally important as the baked ones it offers. The bakery would, then, have to become a supermarket or perhaps even a superstore in order to avoid injustice.

Requiring the bakery to offer a lot more than baked goods is not only overly demanding but also unfair. The bakery does not have the other goods it would be required to offer in order to avoid injustice. It may not even have the means needed to acquire these goods, which may itself be part of its reason for limiting the goods it offers.

As it is overdemanding and unfair to require the bakery to sell and distribute all goods in order to avoid injustice, it is overdemanding and unfair to require knowledge distributors to distribute all knowledge. Just as the bakery does not have non-baked goods to offer, those involved in science communication likely do not have the relevant knowledge in the other fields.

Thus, if they are required to distribute that knowledge also, they are required to do a lot of homework. They would have to learn about everything in order to justly distribute all knowledge. This is an unreasonable expectation. Even if they were able to do so, they would not be able to distribute all knowledge in a timely manner. Requiring this much of distributors would slow down the distribution of knowledge.

Furthermore, just as the bakery may not have the means needed to distribute all the other goods, distributors may not have the time or other means to distribute all the knowledge that they are required to distribute in order to avoid injustice. It is reasonable to utilize an epistemic division of labor (including in knowledge distribution), much like there are divisions of labor more generally.

Credibility Excess

A final issue with Medvecky’s argument concerns premise (4). Premise (4) claims that the credibility excess in question results in epistemic injustice. While it is true that a credibility excess can result in epistemic injustice, it need not. So, we need reasons to believe that this particular kind of credibility excess results in epistemic injustice. One reason to think that it does not has to do with the meaning of the term ‘epistemic injustice’ itself.

As it was introduced to the literature by Fricker, and as it has been used since, ‘epistemic injustice’ does not simply refer to any harms to a knower but rather to a particular kind of harm that involves identity prejudice—i.e. prejudice related to one’s social identity. Fricker claims that, “the speaker sustains a testimonial injustice if and only if she receives a credibility deficit owing to identity prejudice in the hearer.” (28)

At the core of both Fricker’s and Medina’s account of epistemic injustice is the relation between unfair credibility assessments and prejudices that distort the hearer’s perception of the speaker’s credibility. Prejudices about particular groups are what unfairly affect (positively or negatively) the epistemic authority and credibility hearers grant to the members of such groups.

Mere epistemic errors in credibility assessments, however, do not create epistemic injustice. While a credibility excess may result in an epistemic harm, whether this is a case of epistemic injustice depends upon the reason why that credibility excess is given. Fricker and Medina both argue that in order for an epistemic harm to be an instance of epistemic injustice, it must be systematic. That is, the epistemic harm must be connected to an identity prejudice that renders the subject at the receiving end of the harm susceptible to other types of injustices besides testimonial.

Fricker argues that epistemic injustice is a product of prejudices that “track” the subject through different dimensions of social activity (e.g. economic, professional, political, religious, etc.). She calls these “tracker prejudices” (27). When tracker prejudices lead to epistemic injustice, this injustice is systematic because it is systematically connected to other kinds of injustice.

Thus, a prejudice is systematic when it persistently affects the subject’s credibility in various social directions. Medina accepts this and argues that credibility excess results in epistemic injustice when it is caused by a pattern of wrongful differential treatment that stems in part from mismatches between reality and the social imaginary, which he defines as the collectively shared pool of information that provides the social perceptions against which people assess each other’s credibility (Medina 2011).

He claims that a prejudiced social imaginary is what establishes and sustains epistemic injustices. As such, prejudices are crucial in determining whether credibility excesses result in epistemic injustice. If the credibility excess stems from a systematically prejudiced social imaginary, then this is the case. If systematic prejudices are absent, then, even if there is credibility excess, there is no epistemic injustice.

Systemic Prejudice

For there to be epistemic injustice, then, the credibility excess must carry over across contexts and must be produced and sustained by systematic identity prejudices. This does not happen in Medvecky’s account given that the kind of credibility excess that he is concerned with is limited to the context in which science communication occurs.

Thus, even if there were credibility excess, and this credibility excess led to epistemic harms, such harms would not amount to epistemic injustice given that the credibility excess does not extend across contexts. Further, the kind of credibility excess that Medvecky is concerned with is not linked to systematic identity prejudices.

In his argument, Medvecky does not consider prejudices. Rather than credibility excesses being granted due to a prejudiced social imaginary, Medvecky argues that the credibility excess attributed to science communicators stems from omission. According to him, science communication as a practice and as a discipline is epistemically unjust because it creates credibility excess by implying (through omission) that science is the only reliable field worthy of engagement.

On Medvecky’s account, the reason for the attribution of credibility excess is not prejudice but rather the limited focus of science communication. Thus, he argues that merely by not distributing knowledge from fields other than science, science communication creates a credibility excess for science that is worthy of the label of ‘epistemic injustice’. Medvecky acknowledges that Fricker would not agree that this credibility assessment results in injustice given that it is based on credibility excess rather than credibility deficits, which is itself why he bases his argument on Medina’s account of epistemic injustice.

However, given that Medvecky ignores the kind of systematic prejudice that is necessary for epistemic injustice under Medina’s account, it seems like Medina would not agree, either, that these cases are of the kind that result in epistemic injustice.[3] Even if omissions in the distribution of knowledge had the implications that Medvecky supposes, and it were the case that science communication indeed created a credibility excess for science in this way, this kind of credibility excess would still not be sufficient for epistemic injustice as it is understood in the literature.

Thus, science communication is not, as Medvecky argues, fundamentally epistemically unjust: the reasons why the credibility excess is attributed have nothing to do with prejudice, and the excess does not extend across contexts. While it is true that there may be epistemic harms that have nothing to do with prejudice, such harms would not amount to epistemic injustice, at least as it is traditionally understood.

Conclusion

In “Fairness in Knowing: Science Communication and Epistemic Justice”, Fabien Medvecky argues that epistemic injustice lies at the very foundation of science communication. While we agree that there are numerous ways that scientific practices are epistemically unjust, the fact that science communication involves only communicating science does not have the consequences that Medvecky maintains.

We have seen several reasons to deny that failing to distribute other kinds of knowledge implies that they are less valuable than the knowledge one does distribute, as well as reasons to believe that the term ‘epistemic injustice’ wouldn’t apply to such harms even if they did occur. So, while thought provoking and bold, Medvecky’s argument should be resisted.

Contact details: j.matheson@unf.edu, n01051115@ospreys.unf.edu

References

Dotson, K. (2011). Tracking epistemic violence, tracking patterns of silencing. Hypatia 26(2): 236–257.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Medina, J. (2011). The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology 25(1): 15–35.

Medvecky, F. (2018). Fairness in knowing: Science communication and epistemic justice. Science and Engineering Ethics 24: 1393–1408.

[1] This is Fricker’s description; see Fricker (2007, p. 1).

[2] Medvecky considers Richard Dawkins being given more credibility than he deserves on matters of religion due to his credibility as a scientist.

[3] A potential response to this point could be to consider scientism as a kind of prejudice akin to sexism or racism. Perhaps an argument can be made where an individual has the identity of ‘science communicator’ and receives a credibility excess in virtue of an identity prejudice that favors science communicators. Even still, to be an epistemic injustice this excess must track the individual across contexts, as the identities related to sexism and racism do. For it to do so, a successful argument must be given for there being a ‘pro science communicator’ prejudice that is similar in effect to ‘pro male’ and ‘pro white’ prejudices. If this is what Medvecky has in mind, then we need to hear much more about why we should buy the analogy here.

Author Information: Peter Baumann, Swarthmore College, pbauman1@swarthmore.edu.

Baumann, Peter. “Nearly Solving the Problem of Nearly Convergent Knowledge.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 16-21.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-41B

Contrastivism (see, e.g., Schaffer 2004) is the view that knowledge is not a binary relation between a subject and a proposition but a ternary relation between a subject S, a proposition p (the proposition attributed as known and thus entailed by the knowledge attribution; we can call it the “target proposition”) and an incompatible (cf. Rourke 2013, sec.2) and false contrast proposition q (a “contrast”).[1]

The form of a knowledge attribution is thus not S knows that p but S knows that p rather than q. According to contrastivism, it’s elliptical, at least, to say that Chris knows that that bird is a goldfinch. Rather, we should say something like the following: Chris knows that that bird is a goldfinch rather than a raven. Chris might not know that that bird is a goldfinch rather than a canary. There can, of course, be more than one contrasting proposition; in this case we can consider the disjunction of all the contrasting propositions to constitute the contrast proposition.

A Problem and Tweedt’s Proposed Solution

Chris Tweedt’s thought-provoking “Solving the Problem of Nearly Convergent Knowledge” discusses one kind of argument against the binary view and in favor of contrastivism. The argument (see Schaffer 2007) is based on the claim that knowing that p consists in knowing the answer to a question of the form Is p rather than q the case? (“Is this bird a goldfinch rather than a raven?”; “Is it a goldfinch rather than a canary?”). Put differently, knowing that p consists in knowing the correct answer to a multiple choice question (“What bird is this? A: A goldfinch; B: A raven”; “What bird is this? A: A goldfinch; B: A canary”).

The binary account faces a problem because it has to claim that if one knows the answer to one such question (“Is it a goldfinch rather than a raven?”) then one also knows the answer to the other question (“Is it a goldfinch rather than a canary?”). However, one might only be able to answer one question but not the other. This is the problem of convergent knowledge. This, argues Schaffer, speaks in favor of contrastivism.

Some defenders of the binary view (see Jespersen 2008; Kallestrup 2009) have proposed the following way out: One does not know the same proposition when one knows the answers to different contrastive (multiple choice) questions which share a “target” (a target proposition). Rather, the corresponding knowledge has the form S knows that (p and not q). Our subject might know that that bird is a goldfinch and not a raven while it might not know that it is a goldfinch and not a canary.

Schaffer (2009) has a response to this: Even though there is no convergence of knowledge here there is “near convergence” which is still bad enough. Using the principle of closure of knowledge under known entailment[2] one can easily acquire knowledge of the second proposition on the basis of knowledge of the first. If Chris knows that that bird is a goldfinch and not a raven, then Chris also knows or can easily come to know, according to the binary view, that that bird is a goldfinch.

Since Chris also knows that whatever is a goldfinch is not a canary, he also knows or can easily come to know that that bird is not a canary. So, he knows or can easily come to know that that bird is a goldfinch and not a canary. Given that this is implausible, the problem of convergent knowledge is reincarnated as the problem of “nearly convergent knowledge”.

Tweedt’s ingenious reply in favor of the binary account (see also van Woudenberg 2008) proposes to analyze the known answer to a contrastive (multiple choice) question as having conditional form:

(0) If p or q, then p.[3]

Question: Is that bird a goldfinch rather than a raven? Answer: If it is a goldfinch or a raven, then it is a goldfinch!

Tweedt claims that this solves the problem of convergent knowledge because the answer to the question “Is that bird a goldfinch rather than a raven?”, namely

(1) If that bird is a goldfinch or a raven, then it’s a goldfinch,

is not “a few quick closure steps away” (see Tweedt 2018, 220) from the answer to the question “Is that bird a goldfinch rather than a canary?”, namely

(2) If that bird is a goldfinch or a canary, then it’s a goldfinch.

A Problem with Tweedt’s Proposal

Tweedt does not add an explicit argument for his claim that (2) isn’t just a few easy closure steps away from (1). Here is an argument that (2) is indeed just a few easy closure steps away from (1). If that’s correct, then Tweedt’s proposal fails to solve the problem of nearly convergent knowledge.

Let “g”, “r” and “c” stand in for “That bird is a goldfinch”, “That bird is a raven”, and “That bird is a canary” respectively. We can then, following Tweedt, assume (about some subject S) that

(3) S knows that if g or r, then g.

The proposition g is the target proposition here, not r (in the latter case our subject would know that if g or r, then r, instead). Since targets and contrasts are mutually incompatible, we may also assume that

(4a) S knows that if g, then not r;

(4b) S knows that if g, then not c.

Finally, we may assume that

(5) S knows that g or r.

To be sure, one can ask contrastive questions where both propositions are false: Is Einstein rather than Fido the dog the inventor of the telephone? One might want to answer that Einstein rather than Fido invented the telephone (whether one also falsely believes, or does not believe, that Einstein invented the telephone).

However, this is a deviant case not relevant here because we are interested in cases where one of the contrasting propositions is true and particularly in knowledge that p (where p is the target). If that knowledge is construed in a binary way, then it involves knowledge of one of the contrasting propositions (p) that it is true; if it is construed as knowledge that p rather than q, then it still obeys the factivity principle for knowledge and thus entails that p. So, we can assume here that

g or r

is true.[4] We may also assume that in standard cases the subject can know this. Hence:

(5) S knows that g or r.[5]

A closure principle like (Closure) together with (3) and (5) entails

(6) S knows that g.[6]

(Closure) together with (4b) and (6) entails

(7) S knows that not c.

So, there are only a few quick and easy “closure steps” to the implausible (7).[7] And we can add that disjunction introduction will allow the subject to come to know (on the basis of (6)) that g or c:

(8) S knows that g or c.

(We could also argue for (8) along the lines of the argument above for (5)). Conjunction introduction together with (6) and (8) will allow the subject to know that (g or c) and g:

(9) S knows that (g or c) and g.

Since whenever a conjunction is true, a corresponding conditional is true, the subject can also come to know that

If g or c, then g.

In other words:

(10) S knows that if g or c, then g.[8]

There are then also quick and easy closure steps leading from Tweedt’s (1) to (2). So, the problem of nearly convergent knowledge remains unsolved.
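The epistemic transitions above rely on (Closure), but their propositional core can be checked mechanically. The following sketch in Python (the variable names follow the text) confirms only that the contents of (6), (7), (8), and (10) are classically entailed by the contents of (3), (4a), (4b), and (5); whether (Closure) licenses each epistemic step is a further question.

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# g: that bird is a goldfinch; r: it is a raven; c: it is a canary.
satisfying = []
for g, r, c in product([True, False], repeat=3):
    premises = (
        implies(g or r, g)      # content of (3): if g or r, then g
        and implies(g, not r)   # content of (4a): if g, then not r
        and implies(g, not c)   # content of (4b): if g, then not c
        and (g or r)            # content of (5): g or r
    )
    if premises:
        satisfying.append((g, r, c))
        assert g                   # content of (6): g
        assert not c               # content of (7): not c
        assert g or c              # content of (8): g or c
        assert implies(g or c, g)  # content of (10): if g or c, then g

# Only one assignment satisfies the premises: the bird is a goldfinch,
# not a raven, and not a canary.
print(satisfying)  # [(True, False, False)]
```

Since the premises force g true and c false, each step from (6) through (10) is propositionally safe; the contested question is only whether knowledge is closed under these entailments.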

Defending Tweedt?

There is more than one strategy Tweedt could use to defend his proposed solution to the problem of nearly convergent knowledge. One would be to modify the closure principle in such a way that certain steps are no longer allowed. For instance, one could try to argue that (4b) and (6) don’t lead to (7) because a valid closure principle doesn’t allow knowledge-producing inferences from easy-to-know propositions to hard-to-know propositions.

This kind of idea is well-known from discussions about skepticism: I might know that I have hands, and I might also know that if I have hands, then I am not merely hallucinating that I have hands, but I don’t know that I am not merely hallucinating that I have hands. Fred Dretske and Robert Nozick as well as some others have argued for such a view (see Dretske 2005 and Nozick 1981, ch.3). However, I am not sure whether Tweedt wants to choose this strategy. And it doesn’t seem easy to find a modification of the closure principle that is not ad hoc and has independent reasons in its favor.

Another strategy would be to identify other analyses of the answer to a contrastive (multiple choice) question. Perhaps one can improve on Tweedt’s response in a way similar to the one in which he attempts to improve on Kallestrup’s (and Jespersen’s) response to the original problem of convergent knowledge. I have to leave open here whether there is an analysis that does the trick, and what it could be (see, e.g., Steglich-Petersen 2015).

Could one take Tweedt’s conditional (0) not as a material conditional but rather as a subjunctive conditional? I am afraid that this would constitute a change of topic. Knowledge is factive and what would be the case (P) if something else (Q) were the case does not tell us anything about whether P or Q is the case, even if the corresponding subjunctive conditional is indeed true.

It might be more promising to explore the potential of a complaint about question begging: Isn’t Schaffer’s diagnosis that one can know (1) without knowing (2) already presupposing the truth of contrastivism? Why should one believe that there is a problem with knowing (2) but not with knowing (1) if not because one has already accepted contrastivism about knowledge?

One final side remark on an alleged advantage of binary accounts like Tweedt’s. He argues (see Tweedt 2018, 223) that contrastivism doesn’t take the skeptical problem seriously (enough) and rather deflates it; one might even want to say that contrastivism changes the topic. According to contrastivism I can know the Moorean proposition that I have hands rather than stumps even if I do not know the anti-skeptical proposition that I have hands rather than am merely hallucinating hands. Closure does not support any claim that if I know the one, then I know the other, too. Tweedt thinks this is a disadvantage of contrastivism. However, contrastivists like Schaffer would see this as an advantage. It seems to me that both ways of looking at the anti-skeptical potential of contrastivism have something going for them. In this context, it might be better to leave the question open whether skepticism can be deflated or not. (Similar points will apply to Tweedt’s remarks concerning the debate about expert disagreement; see Tweedt 2018, 223.)

Conclusion

Ingenuous as Tweedt’s proposal is, it does not, I think, solve the problem of nearly convergent knowledge. However, this does not mean that a ternary account of knowledge has to be preferred to a binary account. I think that there are serious problems for contrastivism that make the binary account the better options. But this is something for another occasion.

Contact details: pbauman1@swarthmore.edu

References

Dretske, Fred I. 2005, The Case against Closure, Matthias Steup and Ernest Sosa (eds.) Contemporary Debates in Epistemology, Malden, MA: Blackwell, 13-26.

Jespersen, Bjørn 2008, Knowing that p rather than q, Sorites 20, 125-134.

Kallestrup, Jesper 2009, Knowledge-wh and the Problem of Convergent Knowledge, Philosophy and Phenomenological Research 78, 468-476.

Nozick, Robert 1981, Philosophical Explanations, Cambridge, MA: Harvard University Press.

Rourke, Jason 2013, A Counterexample to the Contrastive Account of Knowledge, Philosophical Studies 162, 637-643.

Schaffer, Jonathan 2004, From Contextualism to Contrastivism, Philosophical Studies 119, 73-103.

Schaffer, Jonathan 2007, Knowing the Answer, Philosophy and Phenomenological Research 75, 383-403.

Steglich-Petersen, Asbjørn 2015, Knowing the Answer to a Loaded Question, Theoria 81, 97-125.

Tweedt, Chris 2018, Solving the Problem of Nearly Convergent Knowledge, Social Epistemology 32, 219-227.

van Woudenberg, René 2008, The Knowledge Relation: Binary or Ternary?, Social Epistemology 22, 281-288.

[1] We can allow for a different kind of contrastive knowledge relation where the contrast can also be true. Suppose I know of Jo, the president of the cheese club and also my dentist, that Jo is my dentist. Since I have no clue who might be the president of the cheese club, it could be appropriate to express all this by saying that I know that Jo is my dentist rather than the president of the cheese club. However, what speaks against this is that the latter is best understood as saying that I know that Jo is my dentist rather than that I know that Jo is the president of the cheese club. But even if there were such an alternative kind of contrastivity of knowledge, we can leave it aside here.

[2] Here is a basic version: (Closure) If S knows that p, and if S knows that p entails q, then S knows that q. Whistles and bells should be added but nothing depends on these here; we can use (Closure) or other simple variants of it here.

[3] Tweedt adds that not all knowledge or all answers to questions are conditional in form (see Tweedt 2018, 222).

[4] See also Tweedt 2018, 224, fn.11 and 225, fn.14. Given (4a) and therefore also given that if g, then not r, we can also rule out that both propositions are true. Could r be true and g be false? Sure, but then r would be the target proposition, not g. This would not constitute a different case.

[5] Even if one insists that knowledge of the answer to a contrastive question is compatible with the lack of truth of any of the contrasting propositions, one still has to accept that there are other cases where there is a true target. And for such cases one still needs a convincing solution of the problem of nearly convergent knowledge.

[6] A different route to (6) uses (5) and (8) below together with the claim that all contrasting propositions are mutually incompatible. However, one might have doubts about the latter assumption and allow for propositions in the contrast set to be mutually compatible (as long as they are incompatible with the target proposition). I want to leave this issue open here and will thus not put weight on this alternative route to (6). – Here is still another route to (6). If it is true (following (3)) that if g or r, then g, (thus ruling out the case in which g is false and r is true) and if it is also true (following (4a)) that if g, then not r (thus ruling out the case in which both g and r are true), and if, finally, r is the disjunction of all the propositions contrasting with g (thus ruling out the case in which both g and r are false), then we are left with only one case: the case in which g is true and r is false. Since this is a case where g is true, S can come to know g on the basis of the considerations just given. However, in many cases the contrast set does not contain all propositions except the target proposition. In all these cases, we need to use another route to (6).

[7] If one replaces c by some proposition about the obtaining of a skeptical scenario (like An evil demon makes me hallucinate goldfinches), then one gets to even more drastic cases and implications.

[8] Again, if one replaces c by some proposition about the obtaining of a skeptical scenario (like An evil demon makes me hallucinate goldfinches), then one gets to even more drastic conclusions like the following one: S knows that if he is looking at a goldfinch or is suffering from a demon-induced hallucination of a goldfinch, then he is looking at a goldfinch. Hence, given the above, S can also come to know he is looking at a goldfinch and not suffering from a demon-induced hallucination of a goldfinch.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsasswe@uccs.edu.

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40g

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons

 

If nothing else, Steve Fuller has his ear to the pulse of popular culture and the academics who engage in its twists and turns. Starting with Brexit and continuing into the Trump-era abyss, “post-truth” was dubbed by the OED as its word of the year in 2016. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (and some straggling philosophers and anthropologists and perhaps some poststructuralists) may seem to someone who isn’t reading our critiques carefully to be partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of topics (historically anchored) that explain how we got to Brexit and Trump—yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by the rebellion of mavericks and iconoclasts that include intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid. 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them, we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse with capitalism and power. This critical trajectory has been well rehearsed, so suffice it here to say that SSK (the sociology of scientific knowledge), SE (social epistemology), and the Edinburgh “Strong Programme” are part of a long and rich critical tradition (whose origins are Marxist). Adorno’s Frankfurt School is part of this tradition, and as we think about science, which had come to dominate Western culture by the 20th century (in the place of religion, whose power had by then waned as the arbiter of truth), it was its privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics “responsible” in Adorno’s political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller’s “baby.” It’s easy to overlook the slippage from the political to the scientific and vice versa: Popper’s claim that we never know the Truth doesn’t mean that his (and our) quest for discovering the Truth as such is given up, it’s only made more difficult as whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community in more particular ways—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (communalism, as per the Cold War terror), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset of putative acceptance of any hypothesis or theory that is articulated by any community member.

One detects the slippery slope that would move one from being on guard when engaged with any proposal to being so skeptical as to never accept any proposal no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from the collection of data in the North Pole to computer simulations—have not been sufficiently fulfilled (“questions remain”) to accept human responsibility for the increase of the earth’s temperature. Incidentally, here is Fuller’s explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic.] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation,” for the Vienna Circle, “falsification,” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now, various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt in the name of attenuated scientific data collection to ward off regulators, legislators, and the courts for decades. Instead of finding sufficient empirical evidence to attribute asbestos and nicotine to the failing health condition (and death) of workers and consumers, “organized skepticism” was weaponized to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough or has the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-blowing clowns ready to burn down the tent. The public remains confused, not realizing that just because the stakes have risen over the decades does not mean there are no standards that ever can be met. Despite lobbyists’ and lawyers’ best efforts of derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deathly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (admittedly, at the time, religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call “confirmation bias” (the view that we endorse what we already believe to be true regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as a critique but in reality is a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He definitely would be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on par with other discourses?  Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature’s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or whatever the president says is true. Adorno’s distinction of the responsible from the irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and scientific and pretend as if there is no distinction—methodologically and institutionally—between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep clothes! Don’t throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics, for example, unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims at the behest of corporate interests sanction a free for all where the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder-Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons

 

Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, it seems that Fuller reserves a special place in purgatory for Socrates (and Plato) for labeling the rhetorical expertise of the sophists—“the local post-truth merchants in fourth century BC Athens”—negatively. (21) It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only if Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting are true and evidence based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring to fruition the best results for the general wellbeing of a community? The point of suggesting the figure of a philosopher-king, to be sure, was not his rhetorical skills in this connection, but instead the deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato’s Republic, however criticized, was supposed to be about justice, not about expediency; it is an exploration of the rule of law and wisdom, not a manual about manipulation. If the recent presidential election in the US taught us anything, it’s that we should be wary of political gamesmanship and focus on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking has done in regard to Black Holes) and quite another to compare it to astrology (and give equal hearings to horoscope and Tarot card readers as to physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller’s hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: “After all, the people who tend to be demonized as ‘post-truth’ – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play.” (39) Fuller’s celebratory tone here may carry either a slight warning, in the use of “yet” before the success “in dominating the entire field of play,” or a prediction that this is indeed what is about to happen soon enough.

The neoliberal bottom-line surfaces in this assessment: he who wins must be right, the rich must be smart, and more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, as a way to distract from the details (and empirical bases) of one “knowledge game” (because it may not be to one’s ideological liking) in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the “right to be wrong,” but also for the legitimacy of the claim that “people who are free to speak their minds as individuals are most likely to reach the truth collectively.” The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher in a greater encouragement of speculation or conjecture (Popper again), then Fuller’s book would be well-placed in the pantheon of intellectual pluralism; but if this endorsement obliterates the distinction between the silly and the informed conjecture, then we are in trouble, and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004) who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (his TED talk). As folk wisdom, this is charming; as public policy, this is dangerous. Would you like a random group of people deciding how to store nuclear waste, and where? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies harken back to the fascist ones we recall from Europe prior to and during WWII. Few today would entrust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike the sanguine Fuller, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (when promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and the wholesale disruption of public policy procedures, from immigration regulations to the relaxation of emission controls (ignoring the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons

 

Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-“ does not mean “after,” but rather, “concurrently,” as another way of thinking all along: just because something is labeled “post-“, as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of “post-truth,” it seems that one must choose between one orientation or another, or at least for Fuller, who claims to prefer the “post-truth world” to the allegedly hierarchical and submissive world of “truth,” where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump’s claims were juxtaposed to those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn’t consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that “STS can be fairly credited with having both routinized in its own research practice and set loose on the general public–if not outright invented—at least four common post-truth tropes”:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished for the practice of the scientific community may have inadvertently found their way into a post-truth world that may abuse or exploit them in unintended ways. That is, STS challenges something like “consensus” because the scientific community pretends to reach it while knowing that no such thing can ever fully be achieved, and that when it is reached, it may have been reached for the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this can also go too far.

Just because consensus is difficult to reach (it doesn’t require unanimity) and is susceptible to corruption or bias doesn’t mean that anything goes. Some experimental results are more acceptable than others, and some data are more informative than others; the struggle for agreement may take its political toll on the scientific community, but this need not result in silly ideas about cigarettes being good for our health or obesity being something to encourage from early childhood.

It seems important to focus on Fuller’s conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only in the epistemological plight of humanity but as an elixir with which to cure humanity’s ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and it must be said, from) the academic privileged position, from whose heights all other positions are dismissed as anti-intellectual populism, but we are also entreated to consider the rantings of the know-nothings of the post-truth world as the “growth pains of a maturing democratic intelligence.” Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where’s the evidence? What would possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid.)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, the “conceptual” openness to previously “ignored people and their ideas”? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad or outright racist or misogynist (as we see with Trump). Perhaps they were ignored because there was hope that they would change for the better, become more enlightened, not act on their white supremacist prejudices. Should we have “encouraged” explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When power-plays by ignoramus leaders like Trump are obvious, they could shed light on less obvious cases of big pharma leaders or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail and the gamesmanship of post-truthers should be exposed for what motivates it.

Contact details: rsassowe@uccs.edu

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.

References

Theodor W. Adorno (1998/1963), Critical Models: Interventions and Catchwords. Translated by Henry W. Pickford. New York: Columbia University Press.

Kurt Andersen (2017), Fantasyland: How America Went Haywire: A 500-Year History. New York: Random House.

Monya Baker, “1,500 scientists lift the lid on reproducibility,” Nature Vol. 533, Issue 7604, 5/26/16 (corrected 7/28/16).

Michael Bowker (2003), Fatal Deception: The Untold Story of Asbestos. New York: Rodale.

Robert Darnton, “The Greatest Show on Earth,” New York Review of Books Vol. LXV, No. 11, 6/28/18, pp. 68-72.

Al Gore (2006), An Inconvenient Truth: The Planetary Emergency of Global Warming and What Can Be Done About It. New York: Rodale.

Richard Hofstadter (1962), Anti-Intellectualism in American Life. New York: Vintage Books.

Jean-François Lyotard (1984), The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.

Robert K. Merton (1973/1942), “The Normative Structure of Science,” in The Sociology of Science: Theoretical and Empirical Investigations. Chicago and London: The University of Chicago Press, pp. 267-278.

Hans E. Plesser, “Reproducibility vs. Replicability: A Brief History of a Confused Terminology,” Frontiers in Neuroinformatics 11: 76; online: 1/18/18.

Robert N. Proctor (1995), Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer. New York: Basic Books.

James Surowiecki (2004), The Wisdom of Crowds. New York: Anchor Books.

Author Information: Jim Collier, Virginia Tech, jim.collier@vt.edu.

Collier, James H. “Social Epistemology for the One and the Many: An Essay Review.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 15-40.

Jim Collier’s article “Social Epistemology for the One and the Many” will be published in four parts. The pdf of the article includes all four parts as a single essay, and gives specific page references. Shortlinks:

Introduction: https://wp.me/p1Bfg0-3ZN

Part One, Social Epistemology as Fullerism: https://wp.me/p1Bfg0-3ZY

Is it appropriate to call a public intellectual, a university-employed academic, a rock star?
Image by Ernesto de Quesada via Flickr / Creative Commons

 

Remedios and Dusek present social epistemology wholly as Fullerism; that is, current social epistemology amounts to glorifying Fuller’s supposed acumen and prolificacy.

Fullerism’s Narrow Scope

Fullerism oversimplifies the processes and aims of social epistemology. If Knowing Humanity in the Social World just extolled Fuller and explicated his corpus, Remedios and Dusek would have written a book within an established genre in academic publishing—a very crowded genre, to be sure, of titles about august individual thinkers. However, in Remedios and Dusek’s presentation, Fullerism becomes conflated with social epistemology. Ultimately, Fullerism requires one to wait briefly and then react to Fuller’s next publication or scholarly incursion.

Fullerism’s origin story takes root in Fuller’s extraordinary education at “… two of the best programs in the world in philosophy and history of science” (we get class ranking for good measure), which led to work “… socially and historically richer by far than that of most philosophers and far more philosophically sophisticated than that of other sociologists” (10, emphasis mine). One will not miss the point amid the clunky phrasing that Fuller’s “breadth of reading in the humanities and social sciences is extraordinarily broad” (10).

Remedios and Dusek catalogue Fuller’s great learning by listing multiple subjects and fields about which he either possesses knowledge or “extensive familiarity.” Too, Fuller’s “range is far wider than most philosophers of science, including medieval scholastic philosophy” (emphasis mine). Readers should not ignore Fuller’s philosophical mastery and uncanny ability to get to the root of a particular matter (11).[1]

Fuller deploys “great originality” (10) against the “many philosophers, historians, and sociologists of scientific knowledge [who] are simply failed scientists” (10). Remedios and Dusek’s unsubtle dig at the founders and early practitioners of STS tries to lend heft to Fuller’s broadsides against the field. Fullerism remains a game that Fuller wins by outsmarting any and all interlocutors. After all, Fuller “even if hyperbolic … has a point” (19).

Remedios and Dusek, and Remedios in his earlier book (2003), give notice that the reader will encounter “Steve Fuller’s Social Epistemology.” For the precious few scholars informed on such matters, the phrase gestures, in part, to an internecine scrum regarding the field’s proper origin and pursuit. Remedios and Dusek fortunately avoid the temptation to repot social epistemology’s history. Doing so would only rehearse a tired historiography that has hardened into a meme. Still, by not redressing this narrative, Remedios and Dusek reinforce the fiction that social epistemology is Fullerism.

Remedios and Dusek strike a deferential critical posture that also serves as a model for readers as they observe and assess Fuller’s performances. The reader should temper their judgments and entertain, say, a casual embrace of eugenics (116-117), or the past and future benefits of human experimentation (123), because Steve Fuller is a singular, prophetic thinker. Fuller sees the future—although the future, to be sure, looks suspiciously like Silicon Valley neoliberalism promulgated by entrepreneurs since the mid-1990s.

Double Movement: Expansion in Contraction

In Knowing Humanity in the Social World, Fuller gets to impose his ideological will not only because of his unique personal powers, but because of how Remedios and Dusek treat the “social” in social epistemology. The book proceeds in a manner found in much of academic philosophy (and, so, in a way antithetical to a social epistemology). Broadly, academic philosophers tend to present arguments against a frictionless background to focus on definitional clarity, logical structure, internal consistency and the like. On certain practical grounds, one can understand attending less to cultural factors than, say, fallacies in a philosophical account.

However, as a consequence, Remedios and Dusek render the world as a passive constraint to the active knower. On the odd occasion, then, when the world pushes back, as in Kitzmiller v. Dover Area School District, it is the judge that “largely misconstrued [a] major part of Fuller’s presentation” (72).

Remedios and Dusek forward a myopic view of social epistemology all the while extolling the grandiosity of Fuller’s corpus.[2] Owing, in part, to Fuller’s hyper-productivity, a tension arises immediately in Knowing Humanity in the Social World. While extolling his virtuosity (particularly in Chapter 1), the book fails to address adequately the majority of Fuller’s work.[3] Focusing on publications since the year 2000 and primarily on one, Humanity 2.0 (2011), of approximately two dozen total books, Remedios and Dusek pay little critical attention to Fuller’s collective body of work.[4]

A few articles play minor supporting roles. Moreover, Remedios and Dusek deal only with print media. As of this writing, 180 audio presentations, and dozens of videos, reside online.[5] Certainly, one can sympathize with the monumental effort in dealing with such inordinate output; yet, Remedios and Dusek set out such a task in the title of their book.

Remedios and Dusek trade a great deal on the virtue of knowledge making, and makers, and the power of association. (The maker-versus-taker ethos underwrites the epistemic agent’s risk taking.) Fuller’s prolificacy demonstrates superior knowledge making, if not knowledge, and thus confers greater agency on himself and agents acting in kind.

A social epistemologist pre-2000 would have considered how and why knowledge-makers deploy resources in support of a singular epistemic source. That social epistemologist would also have questioned if epistemic power should accrue to agents, and their claims, by virtue of associating with other powerful agents. The unaccounted-for influence of powerful epistemic agents, and their surrogates, looms in the book’s background.

More importantly, Remedios and Dusek practically ignore Fuller’s critical reception. Even when the authors take up reception, they misapprehend the state of affairs. For example, Remedios and Dusek assert: “Despite the existence of several schools of STS, the Paris School led by Bruno Latour is the main competitor of Fuller’s social epistemology” (11). The rest of the passage gives a cursory explanation of Latour’s views, and of Fuller’s opposition, but shares no evidence of responses to social epistemology by members of the Paris school, or by actor-network theorists and practitioners. Perhaps social epistemologists (read Fuller) view Latour as a “main competitor.”[6]

However, STS practitioners think little, or nothing, about social epistemology. One will not locate current social epistemology as a going concern in leading (or otherwise) STS journals, textbooks, or classrooms. I find no contrary evidence in Knowing Humanity in the Social World. Presenting social epistemology as Fullerism, Remedios and Dusek promote a narrative in which academic caricatures fight for supremacy on a dialectical battlefront. Ironically, the narrative evades how human knowledge amounts to a collective achievement (a central tenet of social epistemology itself).

Instead of taking up compelling questions that emerge from the contexts of reception, Remedios and Dusek conceive the social world much as the circumscribed space of a poorly taught philosophy course. In this class, a student tries explaining a commonplace or self-evident idea and, through the instructor’s haphazard application of the Socratic method, discovers greater uncertainty, more questions, and, more often than not, defaults to the instructor’s authority. Thus, in Fullerism, the student discovers the superiority of Fuller.

Where All Is Fuller

Pursuing Fullerism, we share our unrefined intuitions regarding human experimentation (113), or inspirations for doing science (67), or technological enhancement (94). Likely, we express our intuitions as absolutist declarations. Supplied with more information on, say, the efficacy of the Dachau hypothermia experiments, we are asked to revisit and refine our intuitions. To keep the lesson alive, the epistemic agent (Fuller being the model agent) can stir in other pieces of information, shift perspective, relay different social, historical and cultural frameworks, refer to controversies, supply voluminous references to the philosophical canon, or appeal to various philosophical schools of thought.

At each turn, we might further refine our ideas, retrench, grow bored—but in recognizing Fullerism’s true didactic aim we should rightly be impressed and supplicant. The performance of our epistemic agent should replace our certitude about obvious nonsense with gnawing doubt. Darwin was certainly a scientist, right (73)? Maybe eugenics (116-117) gets a bum rap—especially if we see human experiments “… in the cause of human progress and transcendence” (117). Sure … the overblown fear of humans “playing God” with technology just needs a little enlightened philosophical recalibration (87).

This philosophical dialectic depends on the active forms of agency attributed to Fuller. How epistemic agents learn, for example, remains captive to Fullerism’s dialectic. The “deep learning” of computers receives some attention (123-124), but the dialectical process appears an end in itself. Remedios and Dusek defer to displays of learning by Fuller and seem less interested in exploring how epistemic agents learn to make knowledge to act in the world.

Remedios and Dusek set out the distinctiveness of Fuller’s learning in the book’s opening:

Other than Steve Fuller’s work, there is no other discussion in current literature of sociology of scientific knowledge (SSK), science and technology studies (STS), sociology of science, philosophy of science, epistemology of science, and analytic social epistemology on the impact of scientific knowledge on humanity. (emphasis mine, 1)

The claim’s bold start, dissipated by an ending cluster of vague prepositional phrases, compels the reader to consider Remedios and Dusek’s credulity. How could half a dozen fields of academic inquiry investigating science (to varying degrees) successfully avoid a single discussion of the impact of scientific knowledge on people?

Knowledge Becomes a Means to Transcend

We find, reading further, the matter at hand is not scientific knowledge per se; rather, knowing how to perform the accounting necessary for best achieving a preordained human future. Remedios and Dusek, like Fuller, abide in the unquestioning faith that “nanotechnology, robotics, and biotechnology” (1) will develop and converge and, inevitably, humans will transcend their biology.[7] For the next thirty years until the Singularity, we can train ourselves to tamp down our agnosticism.

Lest we forget, we can rely on Fuller’s “very well informed and richly informed historical account with delineation of varieties of theodicy” (my emphasis, 72) that include discussions of Leibniz, Malebranche and Gassendi. For Remedios and Dusek, historical analysis frequently translates into Fuller’s citational range; thus, a good argument depends on the ability to bring numerous references, preferably unexpectedly related, to bear on an issue.

For example, Fuller wins a debate with A. C. Grayling (in 2008) on intelligent design because “the historical part of Fuller’s argument is very accurate concerning early modern science. Figures such as Boyle, Newton, Leibniz, and many other figures of seventeenth-century science saw their religion as tied with their science” (my emphasis, 72). A trivially true even if “very accurate” point.

In the same paragraph, Remedios and Dusek go on to list additional clever and apt observations made by Fuller. As the adjectival emphasis suggests, Remedios and Dusek direct the reader to let the perspicacity of Fuller’s insights suffice as an effective argument. Even as Remedios and Dusek lightly resist Fuller’s loose historical claims (particularly in Chapter 5), they give counter-arguments, from themselves and other scholars, short shrift. Fuller’s proactive encyclopedism assures us that we both reside in, and can actively reconstruct, the western intellectual tradition. In truth, Fullerism entails that we willingly suspend disbelief during Fuller’s ideational performance.

The social world of the book’s title remains largely unburdened by cultural complexities, and populated sparsely with one-dimensional interlocutors. Fullerism, then, is both plenum and void—space completely filled with the matter of Fuller’s creation, and void of external influences and meaning in collective effort.

Contact details: jim.collier@vt.edu

References

Barbrook, Richard and Andy Cameron. “The Californian Ideology.” Science as Culture 6, no. 1 (1996): 44-72.

Barlow, John Perry. “A Declaration of the Independence of Cyberspace.” 1996. https://bit.ly/1KavIVC.

Barron, Colin. “A Strong Distinction Between Humans and Non-humans Is No Longer Required for Research Purposes: A Debate Between Bruno Latour and Steve Fuller.” History of the Human Sciences 16, no. 2 (2003): 77–99.

Clark, William. Academic Charisma and the Origins of the Research University. University of Chicago Press, 2007.

Ellul, Jacques. The Technological Society. Alfred A. Knopf, 1964.

Frankfurt, Harry G. On Bullshit. Princeton University Press, 2005.

Fuller, Steve. Social Epistemology. Bloomington and Indianapolis: Indiana University Press, 1988.

Fuller, Steve. Philosophy, Rhetoric, and the End of Knowledge: The Coming of Science and Technology Studies. Madison, WI: University of Wisconsin Press, 1993.

Fuller, Steve. Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press, 2001.

Fuller, Steve. “The Normative Turn: Counterfactuals and a Philosophical Historiography of Science.” Isis 99, no. 3 (September 2008): 576-584.

Fuller, Steve. “A Response to Michael Crow.” Social Epistemology Review and Reply Collective 25 November 2015. https://goo.gl/WwxFmW.

Fuller, Steve and Luke Robert Mason. “Virtual Futures Podcast #3: Transhumanism and Risk, with Professor Steve Fuller.”  Virtual Futures 16 August 2017. https://bit.ly/2mE8vCs.

Grafton, Anthony. “The Nutty Professors: The History of Academic Charisma.” The New Yorker October 26, 2006. https://bit.ly/2mxOs8Q.

Hinchman, Edward S. “Review of “Patrick J. Reider (ed.), Social Epistemology and Epistemic Agency: Decentralizing Epistemic Agency.” Notre Dame Philosophical Reviews 2 July 2018. https://ntrda.me/2NzvPgt.

Horgan, John. “Steve Fuller and the Value of Intellectual Provocation.” Scientific American, Cross-Check 27 March 2015.  https://bit.ly/2f1UI5l.

Horner, Christine. “Humanity 2.0: The Unstoppability of Singularity.” Huffpost 8 June 2017. https://bit.ly/2zTXdn6.

Joosse, Paul. “Becoming a God: Max Weber and the Social Construction of Charisma.” Journal of Classical Sociology 14, no. 3 (2014): 266–283.

Kurzweil, Ray. “The Virtual Book Revisited.” The Library Journal, 1 February 1993. https://bit.ly/2AySoQx.

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Penguin Books, 2005.

Lynch, Michael. “From Ruse to Farce.” Social Studies of Science 36, no. 6 (2006): 819–826.

Lynch, William T. “Social Epistemology Transformed: Steve Fuller’s Account of Knowledge as a Divine Spark for Human Domination.” Symposion 3, no. 2 (2016): 191-205.

McShane, Sveta and Jason Dorrier. “Ray Kurzweil Predicts Three Technologies Will Define Our Future.” Singularity Hub 19 April 2016. https://bit.ly/2MaQRl4.

Pein, Corey. Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley. Henry Holt and Co. Kindle Edition, 2017.

Remedios, Francis. Legitimizing Scientific Knowledge: An Introduction to Steve Fuller’s Social Epistemology. Lexington Books, 2003.

Remedios, Francis X. and Val Dusek. Knowing Humanity in the Social World: The Path of Steve Fuller’s Social Epistemology. Palgrave Macmillan UK, 2018.

Rushkoff, Douglas. “Survival of the Richest: The wealthy are plotting to leave us behind.” Medium 5 July 2018. https://bit.ly/2MRgeIw.

Shera, J.H. Sociological Foundations of Librarianship. New York: Asia Publishing House, 1970.

Simonite, Tom. “Moore’s Law Is Dead. Now What?” MIT Technology Review, 13 May 2016. https://bit.ly/1VVn5CK.

Talbot, Margaret. “Darwin in the Dock.” The New Yorker December 5, 2005. 66-77. https://bit.ly/2LV0IPa.

Uebel, Thomas. Review of Francis Remedios, Legitimizing Scientific Knowledge: An Introduction to Steve Fuller’s Social Epistemology. Notre Dame Philosophical Reviews, 3 March 2005. https://ntrda.me/2uT2u92.

Weber, Max. Economy and Society, 2 vols. Edited by Guenther Roth and Claus Wittich. Berkeley, CA; London; Los Angeles, CA: University of California Press, 1922 (1978).

[1] In the book, getting to the root of the matter frequently amounts to the revelation that it isn’t what you think it is or thought it was.

[2] As of 13 May 2018, Fuller’s vita (https://warwick.ac.uk/fac/soc/sociology/staff/sfuller/vita1.docx ) comes in at 76 pages.

[3] Remedios can point to his first book Legitimizing Scientific Knowledge as wrestling with the first half of Fuller’s career. Thomas Uebel’s review, for Notre Dame Philosophical Reviews (https://ntrda.me/2uT2u92) notes a similar problem in not addressing the reception of Fuller’s work—the “paucity” of responses to counter arguments: “Calling notions contested does not absolve us from the task of providing defenses of the alternatives put forward.”

[4] Fuller’s “trilogy of transhumanism” all published by Palgrave Macmillan: Humanity 2.0: What It Means to Be Human Past, Present and Future (2011), Preparing for Life in Humanity 2.0 (2012), and The Proactionary Imperative: A Foundation for Transhumanism (co-authored with Veronika Lipinska, 2014).

[5] While writing this essay, I received notice of yet another book authored by Fuller Post-Truth: Knowledge As A Power Game (Anthem Press).

[6] Remedios and Dusek put Latour and Fuller into conversation predominantly in Chapter 2. As framed, Fuller “speaks at” views held by Latour (uncharitably summarized by Remedios and Dusek), but no direct exchange, or dialectic, occurs. Emblematic of this state of affairs is a “debate” between Latour and Fuller in 2002 (published in 2003), regarding what defines ‘human’ and ‘non-human’, that concludes with this editorial note: “[The debate] was least successful, perhaps, in making the issues clear to the audience, especially to those who were not familiar with the work of Bruno Latour and Steve Fuller” (98).

[7] Slightly different iterations of the trinity that will converge to give us the Singularity include Ray Kurzweil’s “nanotechnology, robotics, and biotechnology” (https://en.wikipedia.org/wiki/Ray_Kurzweil), and “genetics, nanotechnology, and robotics” (https://bit.ly/2LZ42ZB).

Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf


Image by Sergio Santos and http://nursingschoolsnearme.com, via Flickr / Creative Commons


Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three, stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s strategies. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest’s. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better-placed than lay-people to identify when science is flawed, we now create a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”; because in the latter case, we have some sense of the credentials and so on which signal experthood. Again, I am tempted to say, then, that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to communicate in ways which are truly neutral with respect to her audience’s values. In short, it may be hard to hand over our epistemic autonomy to experts without also handing over our moral autonomy.

This problem means that trustworthy research requires more than that the researchers’ claims are true; the claims must also be, at least, neutral with respect to, and, at best, aligned with, audiences’ values. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in a broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, then developing the kind of rigorous engagement which Moore wants may do as much to undermine, as promote, our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be even more complex again than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: a defence. Springer.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of science, 67(4), 559-579.

Goldman, A (2001) “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63(1), 85–110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Kristie Dotson, Michigan State University, dotsonk@msu.edu

Dotson, Kristie. “Abolishing Jane Crow.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 1-8.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3YJ


Image by Adley Haywood via Flickr / Creative Commons


It took me 8 years to publish “Theorizing Jane Crow.” I wrote it at the same time as I wrote my 2011 paper, “Tracking Epistemic Violence, Tracking Practices of Silencing.” The many reviews that advocated for rejecting “Theorizing Jane Crow” over the years made me refine it…and alter it….and refine it some more. This is not necessarily a gripe. But it will seem that way. Because there are two consistent critiques of this paper that have stuck with me for how utterly problematic they were and are. In this reply to Ayesha Hardison’s commentary, “Theorizing Jane Crow, Theorizing Literary Fragments,” I display and analyze those critiques because they link up in interesting ways to Ayesha Hardison’s commentary.

The two most common critiques of this paper include:  1) the judgement that my paper is not good intellectual history or not good literary criticism and 2) the conclusion that Black women’s literary production is so advanced that there is no way to make a claim of unknowability with respect to US Black women today (or yesterday).  In what follows, I will articulate and explore these critiques. The first critique brings attention to just how wonderful Hardison’s commentary actually is for how it sets up the rules of engagement between us. The second critique can be used to tease out convergences and a potential divergence between Hardison’s position and my own.

The First Critique: Does E’rybody Have to be Historians or Literary Studies Scholars?

Since I neither claim to be a literary scholar nor a historian, I found no reason to deny the first (and by far most consistent) critique of this paper. This paper is not good intellectual history. And, plainly speaking, it is terrible literary criticism. Let me say this, for the record, I am neither an intellectual historian, nor a literary critic. And, with all due respect to those people who do these things well, I have no desire to be.

Hardison detected that she and I are coming to the same sets of problems with different trainings, different habits of attention, and, quite frankly, different projects. Because, no, I am not a literary critic. Hardison acknowledges our different orientations when she writes:

Whereas Dotson theorizes Jane Crow by outlining social features facilitating black women’s ‘unknowability,’ in literary studies, we might say black women’s ‘unknowability’ is actually a matter of audience, and more importantly, a problem of reception. (2018, 57)

Another place where differences in our respective approaches are foreshadowed is in the very first line of Hardison’s reply when she writes, “To acknowledge Jane Crow…is not the same as understanding how black women’s subjugation works – or why it persists,” (2018, 56). From the very first line, I was put at ease with Hardison’s commentary. Because however much we might disagree or agree, at least she recognized my actual project. I treat Murray like a philosopher, in accordance with philosopher’s stone rules, e.g. as an element from which composite understandings can be derived. It was clear to me that even among Black feminist academics, potential audiences for this paper were simply unused to the kinds of flights of fancy that taking Black women as philosophers requires.[1]

Hardison didn’t have this problem at all. In other words, Hardison was, for me, a “brown girl’s heart” to receive what I was trying to articulate. For that I am so very grateful to her. I believe that Hardison understood what I was trying to do. I was treating Pauli Murray the way I would be allowed to treat any theoretical white dude. Like her work should be able to inspire more work with family resemblances. I treated Murray like there could and should be Murray-ians. And it was this move that I utterly refused to compromise on. It was also the move that inspired, in my estimation, the most resistance from anonymous reviewers. But Hardison got it. But, then, of course, she would get it. She does the same thing in her book, Writing Through Jane Crow (Hardison 2014). We treat Murray like a philosopher.

The performance of Hardison’s commentary accords very much with the existence of (and necessity of) “an empathetic black female audience” (Hardison 2018, 59). And what is uncovered between us is a great deal of agreement between her positions and my own and a potential disagreement. At this point, Hardison and I can talk to each other. But I want to draw attention to the fact it is Hardison’s commentary that sets the stage for this exchange in a way where our convergences and divergences can be fruitfully explored. And that is no easy feat. Hats off to Hardison. I am deeply grateful for her work here.

The Second Critique: Black Women’s Literary Production vs. Jane Crow Dynamics

The second most common critique of “Theorizing Jane Crow” concerned skepticism about whether US Black women could be understood as unknowable in the face of US Black women’s literary production. It was only in reading Hardison’s commentary that I realized I may have misunderstood part of the critiques being leveled at me from (again) anonymous reviewers that were most likely Black feminist academics themselves. One might have misread my essay to say that Black women never afford each other the kind of empathetic audiences that are needed to render them, broadly speaking, knowable in hegemonic and counterhegemonic spaces. That the Black community at large never extends such empathy.

Or, in Hardison’s words, some may have taken me as advocating for “the conceit that black women’s narratives about their multivalent oppression registers similarly in hegemonic and counterhegemonic spaces” (2018, 56). Now, I am not sure if Hardison is accusing me of this. There is reason to believe that she isn’t but is rather choosing this point as a way of empathetically extending my remarks. For example, Hardison writes:

An analysis of African American women writers’ engagement with Jane Crow is outside the scope of Dotson’s epistemological story in “Theorizing Jane Crow, Theorizing Unknowability,” but their texts illuminate the philosophical conundrum she identifies. (2018, 57)

This suggests, to me, that Hardison detects the problem of Jane Crow unknowability in Black women writer’s work, even as they work to navigate and counter such unknowability with some degree of success.

Now, to be clear, unknowability, on the terms I outline, can be relative. One might argue that the difficulty of receiving a fair peer-review for this paper in a particular domain rife with Black feminists with literary, historical, and/or sociological training means that hegemonic and counterhegemonic communities alike pose epistemological problems, even if they are not exactly the conditions of Jane Crow (and they aren’t). But those epistemological problems may have the same structure as the epistemological engine I afford to Jane Crow dynamics, e.g. disregard, disbelief, and disavowal. This is primarily because epistemologies in colonial landscapes are very difficult to render liberatory (see, for example, Dotson 2015).[2]

Limits of Unknowability, Limits of a Single Paper

Still, for me, the most egregious misreading of “Theorizing Jane Crow” is to interpret me as saying that Black women are equally as unknowable to other Black women as they are in “hegemonic spaces” (56) and according “hierarchical epistemologies” (58). Yeah, that’s absurd. Hardison’s commentary extends my article in exactly the ways it needs to be extended to cordon off this kind of ludicrous uptake, i.e. that Black womenkind are equally unknowable to ourselves as we might be in the face of hegemonic epistemological orientations.[3]

But, as Hardison notes, an extensive development of the point that Black womenkind offer empathetic audiences to Black womenkind that render them knowable, at least “to themselves and each other” (Hardison 2018, 57), both for the sake of their own lives and for the sake of the lives of other Black womenkind, is outside the scope of my paper. Rather, I am concerned with, as Hardison rightly notes, “understanding how black women’s [Jane Crow] subjugation works – or why it persists” (2018, 56). And though I don’t think my essay indicates that Black womenkind are equally “unknowable” to each other in all instances, if that is a possible reading of my essay, thank goodness for Ayesha Hardison’s generous extension of this project to make clear that the performance of this text belies that reading.

Perhaps Hardison says it best, my “grappling with and suture of Murray’s philosophical fragments challenges the hierarchical epistemologies that have characterized black women as unknowable and unknowing,” (2018, 58). This is why I love Black feminist literary studies folks. Because, yes! The performance of this piece belies the message that there is no way for us to be known, especially by ourselves. And, what’s more, such an inexhaustible unknowing has to be false for the successful performance of this text. But then I am aware of that. So what else might I be attempting to articulate in this paper?

It strikes me that a charitable reading of the second main criticism leveled at this paper might proceed as follows:

From where does the charge of unknowability come in the face of the existence and quantity of US Black women’s literary and cultural production? This is an especially important question when you need Black women’s production to write about their ‘unknowability.’ How can you claim that Black women are unknowable when the condition for the possibility of this account is that you take yourself to know something about them from their own production? This seems to be a contradiction.

Yes. It does seem like a contradiction or, if folks need a white male theorist to say something to make it real, it is a kind of differend (Lyotard 1988).[4] Radically disappeared peoples, circumstances, and populations are often subject to problems with respect to frames, evidence and modes of articulation. Being disappeared is different than being invisible simpliciter, but then I make this claim in “Theorizing Jane Crow.”

Problems of large scale disappearing that affect entire populations, events, and historical formations render unknowable unknowability. This problematic seems to be what this second critique falls prey to, i.e. the disappearing of unknowability behind sense making devices (Dotson 2017). As the critique goes, if Black women are unknowable at the scale I seem to propose, then how do I know about this unknowability?[5] How, indeed.

I still reject this rendition of the second criticism, i.e. the one that says with all the literary production of Black womenkind we are no longer unknowable or else I wouldn’t know about a condition of unknowability. Jane Crow unknowability, in my estimation, is not subject to brute impossibilities, i.e. either we are knowable or unknowable. This is because Jane Crow is domain specific in the same ways Jim Crow was (and is). Also, Jane Crow is made of epistemological and material compromises. Hardison gets this. She is very clear that “Black women continue to be ‘unknowable’ in dominant culture due to its investment in white supremacy and patriarchy,” (Hardison 2018, 57).

But, let’s get something clear, an “investment” is not only a set of attitudes. It is composed of sets of institutional norms (and institutions through which to enact those norms). Sets of norms of attention. Sets of historically derived “common sense” and “obvious truths” that routinely subject Black womenkind to Jane Crow dynamics. It is composed of social and material relations that make sense because of the investments that invest them with sense.

Jane Crow as a Dynamic of Complex Social Epistemology

Jane Crow dynamics, when they appear, are built into the functioning of institutions and communal, social relations. They are embedded in the “common sense” of many US publics, including counterhegemonic ones, because I am presuming we are assuming that some Black communities indulge in patriarchy, which is what led Murray to her observations (See, Hardison 2018). And though Black women can disrupt this in pockets, it does not change the epistemological and material conditions that are reinforcing and recreating Jane Crow dynamics for every generation. And it doesn’t change the reality that there is a limit to our capacity to change this from within Jane Crow dynamics. So, we write ourselves into existence again and again and again.

Hardison acknowledges this, as she astutely notes, “Although I engage Pauli Murray as a writer here to offer a complementary approach to Dotson’s theorizing of Jane Crow, I do not claim that black women’s writings irons out Jane Crow’s material paradoxes,” (2018, 62). And this is the heart of my disagreement with the second major critique of this essay. Are those critics claiming that epistemological possibilities brought by Black women’s literary production iron out material paradoxes that, in part, cause Jane Crow dynamics? Because, that would be absurd.

But here is where I appear to disagree with Hardison. Is Hardison claiming that epistemological possibilities have ironed out Jane Crow’s epistemological paradoxes? Because I sincerely doubt that. Schedules of disbelief, disregard, and disavowal are happening constantly and we don’t have great mechanisms for tracking who they harm, whether they harm, and why (on this point, see Dotson and Gilbert 2014).

This leads to a potential substantive disagreement between Hardison and me. And it can be found in the passage I cited earlier. She writes:

Whereas Dotson theorizes Jane Crow by outlining social features facilitating black women’s ‘unknowability,’ in literary studies, we might say black women’s ‘unknowability’ is actually a matter of audience, and more importantly, a problem of reception. (2018, 57)

There is a potential misreading of my text here that seems to center on different understandings of “epistemological” that may come from our different disciplinary foci. Specifically, I don’t necessarily focus on social features. I focus on epistemic features facilitating black women’s unknowability, when we encounter it. That is to say, disregard, disbelief, and disavowal are epistemic relations. They are also social ways of relating, but, importantly, in my analysis they are socio-epistemic. What that means is that they are social features that figure prominently in epistemological orientations and conduct. And these features are embedded in what makes audiences and uptake relevant for this discussion. That is to say, audiences matter, and problems of reception are central, because varying audiences indulge in disregard, disbelief, and disavowal differently.

So, the juxtaposition that might be assumed in Hardison’s statement of the focus in literary studies, which is indicated by the phrase “actually a matter of,” is not a difference in kind, but rather a difference in emphasis. I am tracking the kinds of things that makes audience and problems of reception important for rendering anything knowable in social worlds, e.g. disregard, disbelief, and disavowal. Because it is there, as a philosophy-trained academic, that I can mount an explanation of “how black women’s [Jane Crow] subjugation works – or why it persists” (Hardison 2018, 56).

The Great Obstacles of Abolishing Jane Crow

In the end, this may not be a disagreement at all. I tend to think of it as a change in focus. My story is one story that can be told. Hardison’s story is another. They need not be taken as incompatible. In fact, I would claim they are not incompatible but, as Hardison notes, complementary (2018, 62). They uncover different aspects of a complicated dynamic. One can focus on the problems of audience and reception. And I think that this is fruitful and important. But, and this is where Hardison and I might part company, focusing on these issues can lead one to believe that Jane Crow dynamics are easier to abolish than they are.

One might suspect, as some of the anonymous reviewers of this essay have, that all the literary production of US Black womenkind means that US Black womenkind don’t actually face Jane Crow dynamics. Because, and this seems to be the take-home point of the second critique, as Hardison explains, “Structural realities (and inequities) demand black women’s invisibility, but black women’s philosophical and literary efforts make them visible – first and foremost – to themselves” (2018, 57). And this is the crux of our potential disagreement.

What do we mean by “make them visible” and, more importantly, where? In the domains where they are experiencing Jane Crow dynamics, i.e. epistemological and material compromises, or in the domains where they, arguably, are not? Because the empathetic audiences of “brown girls” outside of institutions that operate to our detriment are not major catalysts for the problem of Jane Crow unknowability, on my account. This is where domain specificity becomes important and one must reject the conclusion (as I do in “Theorizing Jane Crow”) that Jane Crow unknowability is invisibility simpliciter.

As Hardison explains, Pauli Murray’s experiences with racial and gender subordination motivated her towards identifying and signifying Jane Crow oppression (along with constructing epistemological orientations with which to do so) (2018, 61). What the anonymous reviewers and Hardison insist on is that “These fragments of knowing identify black women’s autobiography as a vehicle for positive self-concept and social epistemology.”

Moreover, Hardison claims, and rightly so, that though “Black women writers do not ‘resolve our dilemmas,’…they do ‘name them.’ In a destructive culture of invisibility, for black women to call out Jane Crow and counter with their self-representation has substantive weight” (2018, 62). I agree with all of these conclusions about the importance of Black women countering Jane Crow dynamics, even as I wonder what it means to say it has “substantive weight.”

I question this not because I disagree that such countering has substantive weight. It does. But part of what has to be interrogated in the 21st century, as we continue to grow weary of living with centuries-old problematics, is this: what does the abolition of Jane Crow look like? Are there other forms of “substantive weight” to pursue in tandem with our historical efforts?

In asking this I am not attempting to belittle the efforts that have gotten us to this point – with resources and tools to “call out and counter” Jane Crow dynamics. My work in this paper is impossible without the efforts of previous and current generations of Black womenkind to “name” this problem. Their work has been (and is) important. And for many of us it is lifesaving. But – and yes, this is a ‘but’ – what next? I want a world other than this. And even if that world is impossible, which I half believe, I still want to work towards a world other than this today as part of what it means to live well right now. So, though this may be blasphemous in today’s Black feminist academy, I don’t think that Black women’s literary production is quite the panacea for Jane Crow dynamics that it is often assumed to be.[6] But then, from Hardison’s remarks, she doesn’t assume this either. How we come to this conclusion (and how we would extend it) may be quite different, however.

The Limits and Potential of Literary Production

And, yes, I think a focus on the socio-epistemic and material conditions of Jane Crow can help us detect the limits of relying on black women’s literary production for the abolition of Jane Crow dynamics, even if such production has an integral role to play in that abolition, e.g. producing knowledge that we use to form understandings about potential conditions of unknowability. And I would argue that black women’s cultural production is key to worlds other than (and better than) this, because, as Hardison explains, such work helps us “confront the epistemic affront intrinsic to black women’s Jane Crow subjection” (2018, 60).

I will still never argue that such production, by itself, can fix the problems we face. It cannot. But then, Hardison would not argue this either. As Hardison concludes, disruption of Jane Crow dynamics means “a complete end to its material and epistemological abuses” (2018, 62). Indeed, this is my position as well. In making this claim, we are not attempting to overshadow what has been (and continues to be) accomplished in US Black women’s literary production, but to continue to push our imaginations towards the abolition of Jane Crow.

Contact details: dotsonk@msu.edu

References

Dotson, Kristie. 2012. “A Cautionary Tale: On Limiting Epistemic Oppression.” Frontiers: A Journal of Women Studies 33 (1): 24-47.

Dotson, Kristie. 2013. “Radical Love: Black Philosophy as Deliberate Acts of Inheritance.”  The Black Scholar 43 (4):38-45.

Dotson, Kristie. 2014. “Conceptualizing Epistemic Oppression.”  Social Epistemology 28 (2).

Dotson, Kristie. 2015. “Inheriting Patricia Hill Collins’ Black Feminist Epistemology.”  Ethnic and Racial Studies 38 (13):2322-2328.

Dotson, Kristie. 2016. “Between Rocks and Hard Places.”  The Black Scholar 46 (2):46-56.

Dotson, Kristie. 2017. “Theorizing Jane Crow, Theorizing Unknowability.” Social Epistemology 31 (5): 417-430.

Dotson, Kristie, and Marita Gilbert. 2014. “Curious Disappearances: Affectability Imbalances and Process-Based Invisibility.”  Hypatia 29 (4):873-888.

Hardison, Ayesha. 2018. “Theorizing Jane Crow, Theorizing Literary Fragments.”  Social Epistemology Review and Reply Collective 7 (2):53-63.

Hardison, Ayesha K. 2014. Writing Through Jane Crow: Race and Gender Politics in African American Literature. Charlottesville: University of Virginia Press.

Lyotard, Jean-Francois. 1988. The Differend: Phrases in Dispute. Minneapolis: University of Minnesota Press.

[1] Nothing I am saying here is meant to indicate that literary critics are not (and can never be) philosophers. That is not a position I hold (Dotson 2016). Rather, the claim I am making is that treating people like philosophers can come with certain orientations. It takes extreme amounts of trust and belief that the person(s) whose thought one is exploring can act like a transformative element for the construction of composite understandings (Dotson 2013). It takes trust and belief to utilize someone else’s ideas to extend one’s own imagination, especially where those extensions are not written word for word. One way to treat a person’s work as philosophical work is to assume a form of authorship that allows one to use that work as a “home base” from which to explore and reconstruct the world that is implied in their abstractions. I call this activity, “theoretical archeology” (Dotson 2017, 418). And all I really meant to describe with that term was one way to take a writer as a philosopher. I had to become very detailed about my approach in this paper because of the propensity of anonymous reviewers to attempt to discipline me into literary studies or intellectual history.

[2] This is what I attempt to draw attention to in my work. The epistemological problems in Jane Crow, for example, are epistemological problems that might be able to exist without their corresponding material problems. The material problems in Jane Crow are material problems that might be able to exist without the epistemological problems. But in Jane Crow they are so linked up with each other that they reinforce and reproduce one another. So, one can address the epistemological problems and leave the material ones (which eventually reintroduce those epistemological problems again). One can address the material problems and still leave the epistemological ones (which will eventually reintroduce those material problems again). Epistemic relations impact material relations and material relations impact epistemic relations, on my account. But they are not the same and they are not subject to domino-effect solutions. Fixing one does not mean one has fixed the other. And it is unclear one can make a claim to have fixed one without having fixed both.

[3] If the reader needs more evidence that I have “figured this out,” see (Dotson 2012, 2016).

[4] There is a great deal about Lyotard’s account I would disagree with. But we are undoubtedly grappling with similar dynamics – though our subject populations and approaches differ significantly. Pauli Murray’s work pre-dates this formulation, however.

[5] I consider the appearance of this kind of seeming paradox to be a symptom of second order epistemic oppression. See (Dotson 2014).

[6] It may be my lower-socio-economic-class background that makes it hard to accept the position that writing is going to save us all. I acknowledge that Black womenkind in the places where I am from needed literature and other cultural products for our survival (especially music, social media, and film). The kind of emphasis on writing in this exchange has a tinge of classism. But we can’t do everything here, can we? There is much more dialogue to be had on these issues. Though, some might say, as Murray did, that we need a “brown girl’s heart to hear” our songs of hope. I will agree with this and still maintain that I needed far more than that. When child protective services were coming to attempt to take me from my very good, but not flawless, mother, I needed not only brown girls’ hearts. I also needed hierarchical epistemological orientations and oppressive, material conditions to lose hold.

Author Information: Claus-Christian Carbon, University of Bamberg, ccc@experimental-psychology.com

Carbon, Claus-Christian. “A Conspiracy Theory is Not a Theory About a Conspiracy.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 22-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yb

See also:

  • Dentith, Matthew R. X. “Expertise and Conspiracy Theories.” Social Epistemology 32, no. 3 (2018), 196-208.

The power, creation, imagery, and proliferation of conspiracy theories are fascinating avenues to explore in the construction of public knowledge and the manipulation of the public for nefarious purposes. Their role in constituting our pop-cultural imaginary and their use as central images in political propaganda are fertile ground for research.
Image by Neil Moralee via Flickr / Creative Commons

 

The simplest and most natural definition of a conspiracy theory is a theory about a conspiracy. Although this definition seems appealing due to its simplicity and straightforwardness, the problem is that most narratives about conspiracies do not fulfill the necessary requirements of being a theory. In everyday speech, mere descriptions, explanations, or even beliefs are often termed “theories”—such loose usage of this technical term is not useful in the context of scientific activities.

Here, a theory does not aim to explain one specific event in time, e.g. the moon landing of 1969 or the assassination of President Kennedy in 1963, but aims at explaining a phenomenon on a very general level; e.g. that things with mass as such gravitate toward one another—independently of the specific natures of such entities. Such an epistemological status is rarely achieved by conspiracy theories, especially the ones about specific events in time. Even the more general claim that so-called chemtrails (i.e. long-lasting condensation trails) are initiated by omnipotent organizations across the planet, across time zones and altitudes, is at most a hypothesis – a rather narrow one – that specifically addresses one phenomenon but lacks the capability to make predictions about other phenomena.

Narratives that Shape Our Minds

So-called conspiracy theories have had a great impact on human history, on social interaction between groups, on attitudes towards minorities, and on trust in state institutions. There is very good reason to include “conspiracy theories” in the canon of influential narratives, and so it is only logical to direct a lot of scientific effort into explaining and understanding how they operate, how people come to believe in them, and how humans pile up knowledge on the basis of these narratives.

A brief look at publications registered in Clarivate Analytics’ Web of Science turns up 605 records with “conspiracy theories” as the topic (as of 7 May 2018). These contributions come mostly from psychology (n=91) and political science (n=70), with a steep increase from about 2013 on, probably due to a special issue (“Research Topic”) in the journal Frontiers in Psychology organized in 2012 and 2013 by Viren Swami and Christopher Charles French.

As we have repeatedly argued (e.g., Raab, Carbon, & Muth, 2017), conspiracy theories are a very common phenomenon. Most people believe in at least some of them (Goertzel, 1994), which already indicates that believers in them do not belong to a minority group, but that it is more or less the conditio humana to include such narratives in the everyday belief system.

So first of all, we can state that most of such beliefs are neither pathological nor rare (see Raab, Ortlieb, Guthmann, Auer, & Carbon, 2013), but are largely caused by “good”[1] narratives triggered by context factors (Sapountzis & Condor, 2013) such as a distrusted society. The wide acceptance of many conspiracy theories can further be explained by adaptation effects that bias the standard beliefs (Raab, Auer, Ortlieb, & Carbon, 2013). This view is not undisputed, as many authors identify specific pathological personality traits such as paranoia (Grzesiak-Feldman & Ejsmont, 2008; Pipes, 1997) which cause, enable or at least proliferate the belief in conspiracy theories.

In fact, in science we mostly encounter the pathological and pejorative view of conspiracy theories and their believers. This negative connotation, and hence the prejudice toward conspiracy theories, makes it hard to solidly test the facts, ideas or relationships proposed by such explanatory structures (Rankin, 2017). Since this applies especially to conspiracy theories of so-called “type I”, in which authorities (“the system”) are blamed for conspiracies (Wagner-Egger & Bangerter, 2007), such a prejudice can potentially jeopardize the democratic system (Bale, 2007).

Some of the conspiracies described in conspiracy theories as taking place at top state levels could indeed threaten people’s freedom, democracy and even people’s lives, especially if they turned out to be “true” (e.g. the case of the whistleblower and previously alleged conspiracist Edward Snowden; see Van Puyvelde, Coulthart, & Hossain, 2017).

Understanding What a Theory Genuinely Is

In the present paper, I will focus on another, yet highly important, point which is hardly addressed at all: Is the term “conspiracy theories” an adequate term at all? In fact, the suggestion of a conspiracy theory being a “theory about a conspiracy” (Dentith, 2014, p.30) is indeed the simplest and seemingly most straightforward definition of “conspiracy theory”. Although appealing and allegedly logical, the term conspiracy theory as such is ill-defined. Actually, a “conspiracy theory” refers to a narrative which attributes an event to a group of conspirators. As such, it is justified to associate such a narrative with the term “conspiracy”, but does a conspiracy theory have the epistemological status of a theory?

The simplest definition of a “theory” is that it represents a bundle of hypotheses which can explain a wide range of phenomena. Theories have to integrate the contained hypotheses in a concise, coherent, and systematic way. They have to go beyond the mere piling up of several statements or unlinked hypotheses. The application of theories allows events or entities which are not explicitly described in the sum of the hypotheses to be generalized and hence to be predicted.

For instance, one of the most influential physical theories, the theory of special relativity (German original description “Zur Elektrodynamik bewegter Körper”), contains two hypotheses (Einstein, 1905) on whose basis in addition to already existing theories, we can predict important issues which are not explicitly stated in the theory. Most are well aware that mass and energy are equivalent. Whether we are analyzing the energy of a tossed ball or a static car, we can use the very same theory. Whether the ball is red or whether it is a blue ball thrown by Napoleon Bonaparte does not matter—we just need to refer to the mass of the ball, in fact we are only interested in the mass as such; the ball does not play a role anymore. Other theories show similar predictive power: for instance, they can predict (more or less precisely) events in the future, the location of various types of material in a magnetic field or the trajectory of objects of different speed due to gravitational power.

Most conspiracy theories, however, refer to one single historical event. Looking through the “most enduring conspiracy theories” compiled in 2009 by TIME magazine on the 40th anniversary of the moon landing, it is instantly clear that they have explanatory power for just the specific events on which they are based, e.g. the “JFK assassination” in 1963, the “9/11 cover-up” in 2001, the “moon landings were faked” idea from 1969 or the “Paul is dead” storyline about Paul McCartney’s alleged secret death in 1966. In fact, such theories are just singular explanations, mostly ignoring counter-facts, alternative explanations and already given replies (Votsis, 2004).

But what, then, is the epistemological status of such narratives? Clearly, they aim to explain – and sometimes the explanations are indeed compelling, even coherent. What they mostly cannot demonstrate, though, is the ability to predict other events in other contexts. If these narratives belong to this class of explanatory stories, we should be less liberal in calling them “theories”. Unfortunately, it was Karl Popper himself who coined the term “conspiracy theory” in the 1940s (Popper, 1949)—the same Popper who advocated very strict criteria for scientific theories and in doing so became one of the most influential philosophers of science (Suppe, 1977). This imprecise terminology diluted the genuine meaning of (scientific) theories.

Stay Rigorous

From a language pragmatics perspective, it seems odd to abandon the term conspiracy theory, as it is widely established and frequently used in everyday language around the globe. Substitutions like conspiracy narratives, conspiracy stories or conspiracy explanations would fit much better, but acceptance of such terms might be quite low. Nevertheless, we should at least bear in mind that most narratives of this kind cannot qualify as theories and so cannot lead to a wider research program; although their contents and implications are often far-reaching, potentially important for society and hence, in some cases, also worthy of checking.

Contact details: ccc@experimental-psychology.com

References

Bale, J. M. (2007). Political paranoia v. political realism: on distinguishing between bogus conspiracy theories and genuine conspiratorial politics. Patterns of Prejudice, 41(1), 45-60. doi:10.1080/00313220601118751

Dentith, M. R. X. (2014). The philosophy of conspiracy theories. New York: Palgrave.

Einstein, A. (1905). Zur Elektrodynamik bewegter Körper [On the electrodynamics of moving bodies]. Annalen der Physik und Chemie, 17, 891-921.

Goertzel, T. (1994). Belief in conspiracy theories. Political Psychology, 15(4), 731-742.

Grzesiak-Feldman, M., & Ejsmont, A. (2008). Paranoia and conspiracy thinking of Jews, Arabs, Germans and Russians in a Polish sample. Psychological Reports, 102(3), 884.

Pipes, D. (1997). Conspiracy: How the paranoid style flourishes and where it comes from. New York: Simon & Schuster.

Popper, K. R. (1949). Prediction and prophecy and their significance for social theory. Paper presented at the Proceedings of the Tenth International Congress of Philosophy, Amsterdam.

Raab, M. H., Auer, N., Ortlieb, S. A., & Carbon, C. C. (2013). The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Frontiers in Personality Science and Individual Differences, 4(453), 1-8.

Raab, M. H., Carbon, C. C., & Muth, C. (2017). Am Anfang war die Verschwörungstheorie [In the beginning, there was the conspiracy theory]. Berlin: Springer.

Raab, M. H., Ortlieb, S. A., Guthmann, K., Auer, N., & Carbon, C. C. (2013). Thirty shades of truth: conspiracy theories as stories of individuation, not of pathological delusion. Frontiers in Personality Science and Individual Differences, 4(406).

Rankin, J. E. (2017). The conspiracy theory meme as a tool of cultural hegemony: A critical discourse analysis. (PhD), Fielding Graduate University, Santa Barbara, CA.

Sapountzis, A., & Condor, S. (2013). Conspiracy accounts as intergroup theories: Challenging dominant understandings of social power and political legitimacy. Political Psychology. doi:10.1111/pops.12015

Suppe, F. (Ed.) (1977). The structure of scientific theories (2nd ed.). Urbana: University of Illinois Press.

Van Puyvelde, D., Coulthart, S., & Hossain, M. S. (2017). Beyond the buzzword: Big data and national security decision-making. International Affairs, 93(6), 1397-1416. doi:10.1093/ia/iix184

Votsis, I. (2004). The epistemological status of scientific theories: An investigation of the structural realist account. (PhD), London School of Economics and Political Science, London.

Wagner-Egger, P., & Bangerter, A. (2007). The truth lies elsewhere: Correlates of belief in conspiracy theories. Revue Internationale De Psychologie Sociale-International Review of Social Psychology, 20(4), 31-61.

[1] It is important to stress that a “good narrative” in this context means “an appealing story” in which people are interested; by no means does the author want to allow confusion by suggesting the meaning as being “positive”, “proper”, “adequate” or “true”.

Author Information: Francisco Collazo-Reyes, Centro de Investigación y de Estudios Avanzados del IPN,  fcollazo@fis.cinvestav.mx
Hugo García Compeán, Centro de Investigación y de Estudios Avanzados del IPN
Miguel Ángel Pérez-Angón, Centro de Investigación y de Estudios Avanzados del IPN
Jane Margaret-Russell, Universidad Nacional Autónoma de México

Collazo-Reyes, Francisco; Hugo García Compeán; Miguel Ángel Pérez-Angón; and Jane Margaret-Russell. “The Nature of the Eponym.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 12-15.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3XZ

See also:

Image by Mark Hogan via Flickr / Creative Commons

 

We agree in general with the comments made by G. Vélez-Cuartas (2018) on our paper published recently in Social Epistemology (Collazo-Reyes et al., 2018). He accepts the use of our methodology in the analysis of the eponym of Jerzy Plebanski and, at the same time, suggests applying this methodology to search for the formation of invisible colleges or scientific networks associated with the emergence of epistemic communities.

This was not a direct goal of our work but we included some related aspects in the revised version of our manuscript that may seem somewhat distant from the ambit of the eponym: namely, intertextuality, obliteration by incorporation, scientometrics networks, invisible colleges, epistemic communities, Jerzy Plebanski and “plebanski”. All these topics are keywords to access our paper in the indexes of scientific literature. These aspects distinguish our methodology from other approaches used in almost a thousand papers that addressed the issue of eponyms, according to a recent search for this topic in Web of Science database.

Within this framework, we appreciate the author’s suggestion to extend our analysis to other subject areas since “eponym as a scientometric tool sounds good as a promising methodology”. In particular, “to induce an analysis on other areas of sociology of science and social epistemology” in order “to reach a symbolic status in a semantic community that is organized in a network of meaning” and could show “a geographical penetration of scientific institutions and global dynamics of scientific systems” (Vélez-Cuartas, 2018).

Traditionally, published work on eponymy has studied the contribution or influence of certain authors in their respective scientific disciplines through biographies, tributes, eulogies or life histories and narratives. Some of these have been published as a series of studies like “Marathon of eponyms” (Scully et al., 2012) or “The man behind the eponym” (Steffen, 2004). The post-structuralism movement mentioned in our paper (Collazo-Reyes, et al, 2018) has criticized this approach.

In scientific texts, the use of the term “plebanski” as an eponym of the proper name of Jerzy Plebanski corroborates the recognition given by various authors to the work developed by the Polish scientist. This acknowledgement is apparent in cognitive texts on different aspects of Plebanski’s contributions, and in this context the “plebanski” term is cited as a cognitive entity macro-referenced in the framework of scientific communication (Pang, 2010).

We would like to mention two points related to future applications of our findings on the use of eponym in the Latin American scientific literature:

1) The process involved in the construction of an eponym inherently generates a macro-referential scheme that is not considered in the cognitive structure of the databases of the bibliographical indices. The operational strength of the intertextuality associated with the referential process helps to generate socio-cognitive relations and space-time flows of scientific information.

This scheme requires characterization through a relatively exhaustive search in the different variants of the bibliographical indices: references, abstracts, citations, keywords, views, tweets, blogs, Facebook, etc. (WoS, Scopus, arXiv, INSPIRE, ADS/NASA, Google citation, altmetric platforms). Most of these have arisen within the domain of the traditional bibliographical databases. Therefore, there is a clear possibility to generate an eponym index to characterize the intertextual structures not associated with the known bibliographical indices.

2) We agree with the author on the need to take a new approach to carrying out an exhaustive search of eponyms as related to the Latin American scientific community. We are interested in characterizing the geography of collaboration at different levels: local, national, regional, and international (Livingstone, 2003; Naylor, 2005). This approach has been followed in the study of the geographical origin of eponyms in relation to the dominant system of scientific communication (Shapin, 1998; Livingstone, 1995, 2003; Geographies of Science, 2010).

We made a first attempt in this direction in our study of the “plebanski” eponym in the area of mathematical physics. In this paper, we made use of the methodology involved in “geographies of science” (Livingstone, 2010; Geographies of Science, 2010; Knowledge and Space, 2016) with theoretical tools that enhance the projections made in the framework of the sociology of science, bibliometrics and science communication.

In particular, the “spatial turn” movement (Finnegan, 2008; Gunn, 2001; Frenken, 2009; Fa-ti, 2012) offers a new dimension in the development of information systems, maps and networks using an innovative methodology such as “spatial scientometrics” (Frenken et al., 2009; Flores-Vargas, et al, 2018).

The new proposal considers, in each application of an eponym, the original source of authors, institutions, journals and subject matters. Each source includes the position in the geographical distribution of scientific knowledge associated with a given discipline. This information is then referred to as “geo-reference” and the eponyms as “macro-georeferenced” entities.

In this scheme, the generation of eponyms involves the combination of the different sources for authors, institutions, journals and subject areas. The resulting network may develop new aspects of the distribution mechanism of the asymmetrical power associated with the geographies of knowledge (Geographies of Knowledge and Power, 2010).

Contact details: fcollazo@fis.cinvestav.mx

References

Collazo-Reyes, F., H. García-Compeán, M. A. Pérez-Angón, and J. M. Russell. 2018.  “Scientific Eponyms in Latin America: The Case of Jerzy Plebanski in the Area of Mathematical Physics.” Social Epistemology 32 (1): 63-74.

Fa-ti, F. 2012. “The global turn in the history of science.” East Asian Science, Technology and Society: An International Journal 6 (2): 249-258.

Finnegan, D. A. 2008. “The spatial turn: Geographical approaches in the history of science.” Journal of the History of Biology, 41 (2): 369-388.

Flores-Vargas, X., S. H. Vitar-Sandoval, J. I. Gutiérrez-Maya, P. Collazo-Rodríguez, and F. Collazo-Reyes. 2018. “Determinants of the emergence of modern scientific knowledge in mineralogy (Mexico, 1975-1849): a geohistoriometric approach.” Scientometrics, https://doi.org/10.1007/s11192-018-2646-5.

Frenken, K. 2009. Geography of scientific knowledge: A proximity approach. Eindhoven Centre for Innovation Studies (ECIS), working paper 10.01. http://cms.tm.tue.nl/Ecis/Files/papers/wp2010/ wp1001.pdf. Accessed 4 June 2016.

Frenken, K., S. Hardeman, and J. Hoekman. 2009. “Spatial scientometrics: Toward a cumulative research program.” Journal of Informetrics 3 (3): 222–232.

Geographies of Science. 2010. Peter Meusburger, David N. Livingstone, Heike Jöns, Editors. London, New York; Springer Dordrecht Heidelberg, ISBN 978-90-481-8610-5 DOI 10.1007/978-90-481-8611-2.

Geographies of Knowledge and Power. 2010. Peter Meusburger, David N. Livingstone, Heike Jöns, Editors. London, New York; Springer Dordrecht Heidelberg. 347 p.  DOI 10.1007/978-90-481-8611-2.

Gunn, S. 2001. “The spatial turn: Changing history of space and place”. In: S. Gunn & R. J. Morris (Eds.), Identities in space: On tested terrains in the Western city science 1850. Aldershot: Asghate.

Knowledge and Space. 2016. Peter Meusburger, David N. Livingstone, Heike Jöns, Editors. London, New York; Springer Dordrecht Heidelberg.

Livingstone, D. N. 2003. Putting Science in Its Place: Geographies of Scientific Knowledge. Chicago: University of Chicago Press.

Livingstone, D. N. 1995. “The Spaces of Knowledge: Contributions Towards a Historical Geography of Science.” Environment and Planning D: Society and Space 13 (1): 5–34.

Livingstone, D. N. 2010. “Landscapes of Knowledge.” In Geographies of Science, edited by Peter Meusburger, David N. Livingstone, and Heike Jöns. London, New York; Springer Dordrecht Heidelberg.

Naylor, S. 2005. “Introduction: Historical geographies of science—Places, contexts, cartographies.” British Journal for the History of Science, 38: 1–12.

Pang, Kam-yiu S. 2010. “Eponymy and life-narratives: The effect of foregrounding on proper names.” Journal of Pragmatics 42 (5): 1321-1349.

Scully, C., J. Langdon, and J. Evans. 2012. “Marathon of eponyms: 26 Zinsser-Engman-Cole syndrome (Dyskeratosis congenita).” Oral Diseases 18 (5): 522-523.

Shapin, S. 1998. “Placing the view from nowhere: Historical and sociological problems in the location of science.” Transactions of the Institute of British Geographers, New Series 23: 5–12.

Steffen, C. 2004. “The man behind the eponym – Lauren v. Ackerman and verrucous carcinoma of Ackerman.” American Journal of Dermatopathology 26 (4): 334-341. /10.1007/s11192-018-2646-5.

Veles-Cuartas, G. 2018. “Invisible Colleges 2.0: Eponymy as a Scientometric Tool.” Social Epistemology Review and Reply Collective 7 (3) 5-8.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart of mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing over complications, claiming more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as in the climate-gate case, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005, 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds) involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than monitoring by results alone, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002, 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather ‘denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency’ (Epstein 1996, 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have conducted open investigations and inquiries on my behalf. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring, developing competence but with interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: recordings seem to show you what is going on, but they are in fact likely to mislead you, because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review 111, no. 2 (2017): 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” American Economic Review 95, no. 3 (2005): 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Jensen Alex, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson, University of North Florida, jonathan.matheson@gmail.com

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “Conscientiousness and Other Problems: A Reply to Zagzebski.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 10-13.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3Sr

We’d first like to thank Dr. Zagzebski for engaging with our review of Epistemic Authority. We want to extend the dialogue by offering brief comments on several issues that she raised.

Conscientiousness

In our review we brought up the case of a grieving father who simply could not believe that his son had died despite conclusive evidence to the contrary. This case struck us as a problem case for Zagzebski’s account of rationality. For Zagzebski, rationality is a matter of conscientiousness, and conscientiousness is a matter of using your faculties as best you can to get to truth, where the best guide for a belief’s truth is its surviving conscientious reflection. The problem raised by the grieving father is that his belief that his son is still alive will continuously survive his conscientious reflection (since he is psychologically incapable of believing otherwise), yet it is clearly an irrational belief. In her response, Zagzebski makes the following claims:

(A) “To say he has reasons to believe his son is dead is just to say that a conscientiously self-reflective person would treat what he hears, reads, sees as indicators of the truth of his son’s death. So I say that a reason just is what a conscientiously self-reflective person sees as indicating the truth of some belief.” (57)

and,

(B) “a conscientious judgment can never go against the balance of one’s reasons since one’s reasons for p just are what one conscientiously judges indicate the truth of p.” (57)

These claims about the case lead to a dilemma. Either conscientiousness is to be understood subjectively or objectively, and either way we see some issues. First, if we understand conscientiousness subjectively, then the father seems to pass the test. We can suppose that he is doing the best he can to believe truths, but the psychological stability of this one belief causes the dissonance to be resolved in atypical ways. So, on a subjective construal of conscientiousness, he is conscientious and his belief about his son has survived conscientious reflection.

We can stipulate that the father is doing the best he can with what he has, yet his belief is irrational. Zagzebski’s (B) above seems to fit a subjective understanding of conscientiousness and leads to such a verdict. This is also how we read her in Epistemic Authority more generally. Second, if we understand conscientiousness objectively, then it follows that the father is not being conscientious. There are objectively better ways to resolve his psychic dissonance even if they are not psychologically open to him.

So, the objective understanding of conscientiousness does not give the verdict that the grieving father is rational. Zagzebski’s (A) above fits with an objective understanding of conscientiousness. The problem with the objective understanding of conscientiousness is that it is much harder to get a grasp on what it is. Doing the best you can with what you have has a clear meaning on the subjective level and gives a nice responsibilist account of conscientiousness. However, when we abstract away from the subject’s best efforts and the subject’s faculties, how should we understand conscientiousness? Is it to believe in accordance with what an ideal epistemic agent would conscientiously believe?

To us, while the objective understanding of conscientiousness avoids the problem, it comes with new problems, chief among them the need for a fleshed-out account of conscientiousness, so understood. In addition, the objective construal of conscientiousness does not appear suited to how Zagzebski deploys the concept in other areas of the book. For instance, regarding her treatment of peer disagreement, Zagzebski claims that each party should resolve the dissonance in a way that favors what they trust most when thinking conscientiously about the matter. The conscientiousness in play here sounds quite subjective, since rational resolution is simply a matter of sticking with what one trusts the most (even if an ideal rational agent wouldn’t place their trust in the same states, and even when presented with evidence to the contrary).

Reasons

Zagzebski distinguishes between 1st and 3rd person reasons, in part, to include things like emotions as reasons. For Zagzebski,

“1st person or deliberative reasons are states of mind that indicate to me that some belief is true. 3rd person, or theoretical reasons, are not states of mind, but are propositions that are logically or probabilistically connected to the truth of some proposition. (What we call evidence is typically in this category)” (57)

We are troubled by the way that Zagzebski employs this distinction. First, it is not clear how these two kinds of reasons are related. Does a subject have a 1st person reason for every 3rd person reason? After all, not every proposition that is logically or probabilistically connected to the truth of a proposition is part of an individual’s evidence or is one of their reasons. So, are the 3rd person reasons that one possesses reasons that one has access to by way of a 1st person reason? How could a 3rd person reason be a reason that I have if not by way of some subjective connection?

The relation between these two kinds of reasons deserves further development, since Zagzebski puts this distinction to a great deal of work in the book. The second issue results from Zagzebski’s claim that, “1st person and 3rd person reasons do not aggregate.” (57) If 1st and 3rd person reasons do not aggregate, then they do not combine to give a verdict as to what one has all-things-considered reason to believe. This poses a significant problem in cases where one’s 1st and 3rd person reasons point in different directions.

Zagzebski’s focus is on one’s 1st person reasons, but what then of one’s 3rd person reasons? 3rd person reasons are still reasons, yet if they do not aggregate with 1st person reasons, and 1st person reasons are determining what one should believe, it’s hard to see what work is left for 3rd person reasons. This is quite striking since these are the very reasons epistemologists have focused on for centuries.

Zagzebski’s embrace of 1st person reasons is ostensibly a movement to integrate the concepts of rationality and truth with resolutely human faculties (e.g. emotion, belief, and sense-perception) that have largely been ignored by the Western philosophical canon. Her critical attitude toward Western hyper-intellectualism and the rationalist worldview is understandable and, in certain ways, admirable. Perhaps the movement to engage emotion, belief, and sense-perception as epistemic features can be preserved, but only in the broader context of an evidence-centered epistemology. Further research should channel this movement toward an examination of how non-traditional epistemic faculties as 1st person reasons may be mapped to 3rd person reasons in a way that is cognizant of self-trust in personal experience—that is, an account of aggregation that is grounded fundamentally in evidence.

Biases

In the final part of her response, Zagzebski claims that the insight regarding prejudice within communities can bolster several of her points. She refers specifically to her argument that epistemic self-trust commits us to epistemic trust in others (and its expansion to communities), as well as her argument about communal epistemic egoism and the Rational Recognition Principle. She emphasizes the importance of communities regarding others as trustworthy and rational, which would lead to the recognition of biases within them—something that would not happen if communities relied on epistemic egoism.

However, biases have staying power beyond egoism. Even those who are interested in widening and deepening their perspective through engaging with others can nevertheless have deep biases that affect how they integrate this information. Although Zagzebski may be right in emphasizing the importance of communities acting in this way, it seems too idealistic to imply that such honest engagement would result in the recognition and correction of biases. While such engagement might highlight important disagreements, Zagzebski’s analysis of disagreement, where it is rational to stick with what you trust most, will far too often be an open invitation to maintain (if not reinforce) one’s own biases and prejudice.

It is also important to note that the worry concerning biases and prejudice cannot be resolved by emphasizing a move to communities, given that communities are subject to the same biases and prejudices as the individuals that compose them. Individuals, in trusting their own communities, will only reinforce the biases and prejudices of their members. So, this move can make things worse, even if sometimes it can make things better. Zagzebski’s expansion of self-trust to communities and her Rational Recognition Principle commit communities only to recognize others as (prima facie) trustworthy and rational by means of recognizing their own epistemic faculties in those others.

However, doing this does not do much in terms of the disclosure of biases, given that communities are not committed to trusting the beliefs of those they recognize as rational and trustworthy. On Zagzebski’s view, it is possible for a community to recognize another as rational and trustworthy without necessarily trusting its beliefs—all without succumbing to communal epistemic egoism. Communities are, then, able to treat disagreement in a way that resolves dissonance for them.

That is, they resolve it by trusting their own beliefs more than those of other communities. This is so even when they recognize those communities as being as rational and trustworthy as themselves because, on Zagzebski’s view, communities are justified in maintaining their beliefs over those of others not for egoistic reasons but because, by withstanding conscientious self-reflection, their own beliefs are what they trust most. Resolving dissonance from disagreement in this way is clearly more detrimental than beneficial, especially in the case of biased individuals and communities, since it would lead them to keep their biases.

Although, as Zagzebski claims, attention to cases of prejudice within communities may lend more importance to her argument about the extension of self-trust to the communal level, it does not do much in terms of disclosing biases insofar as dissonance from disagreement is resolved in the way she proposes. Her proposal leads not to the disclosure of biases, as she implies, but to their reinforcement, given that biases—however unaware of them communities and individuals may be—are what would be trusted most in these cases.

Contact details: jonathan.matheson@gmail.com

References

Alex, Jensen, Valerie Joly Chock, Kyle Mallard, and Jonathan Matheson. “A Review of Linda Zagzebski’s Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 9 (2017): 29-34.

Zagzebski, Linda T. Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford University Press, 2015.

Zagzebski, Linda T. “Trust in Others and Self-Trust: Regarding Epistemic Authority.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 56-59.