Archives For epistemic trust

Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The PDF of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf

Image by Sergio Santos and http://nursingschoolsnearme.com, via Flickr / Creative Commons

 

Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s strategies. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest’s. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better placed than lay people to identify when science is flawed, this creates a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”, because in the latter case we have some sense of the credentials and so on which signal experthood. Again, I am tempted to say, then, that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to communicate in ways which are truly sensitive to her audience’s values. In short, it may be hard to hand over our epistemic autonomy to experts without also handing over our moral autonomy.
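
One standard way to make this point vivid, borrowed from Bayesian decision theory rather than from John’s paper itself, is to note that the assertion threshold which minimises expected loss already encodes a ratio of error costs:

\[
\text{assert } h \iff \Pr(h \mid E) > \frac{c_{FP}}{c_{FP} + c_{FN}}
\]

Here \(c_{FP}\) is the cost of asserting a falsehood (a false positive), \(c_{FN}\) the cost of withholding a truth (a false negative), and the labels are my glosses rather than John’s. Any fixed confidence level for reporting thus presupposes a particular, non-epistemic weighting of the two kinds of error.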

This problem means that, for research to be trustworthy, it is not enough that the researchers’ claims are true; the claims must also be, at least, neutral with respect to, and at best aligned with, audiences’ values. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in a broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, developing the kind of rigorous engagement which Moore wants may do as much to undermine as to promote our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be more complex still than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. Palgrave Macmillan.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science, 67(4), 559-579.

Goldman, A. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63(1), 85–110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Seungbae Park, Ulsan National Institute of Science and Technology, nature@unist.ac.kr

Park, Seungbae. “Philosophers and Scientists are Social Epistemic Agents.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 31-40.

The PDF of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yo

The example is from the regime of Hosni Mubarak, but these were the best photos the Digital Editor could find in Creative Commons when he was uploading the piece.

The style of examples common to epistemology, whether social or not, is often innocuous and ordinary. But the most critical uses and misuses of knowledge and belief occur in all-too-ordinary situations. If scepticism about our powers to know and believe holds – or is at least held widely enough – then the most desperate political prisoner has lost her last glimmer of hope: truth.
Image by Hossam el-Hamalawy via Flickr / Creative Commons

 

In this paper, I reply to Markus Arnold’s comment and Amanda Bryant’s comment on my work “Can Kuhn’s Taxonomic Incommensurability be an Image of Science?” in Moti Mizrahi’s edited collection, The Kuhnian Image of Science: Time for a Decisive Transformation?.

Arnold argues that there is a gap between the editor’s expressed goal and the actual content of the book. Mizrahi states in the introduction that his book aims to increase “our understanding of science as a social, epistemic endeavor” (2018: 7). Arnold objects that it is “not obvious how the strong emphasis on discounting Kuhn’s incommensurability thesis in the first part of the book should lead to a better understanding of science as a social practice” (2018: 46). The first part of the volume includes my work. Admittedly, my work does not explicitly and directly state how it increases our understanding of science as a social enterprise.

Knowledge and Agreement

According to Arnold, an important meaning of incommensurability is “the decision after a long and futile debate to end any further communication as a waste of time since no agreement can be reached,” and it is this “meaning, describing a social phenomenon, which is very common in science” (Arnold, 2018: 46). Arnold has in mind Kuhn’s claim that a scientific revolution is completed not when opposing parties reach an agreement through rational argumentation but when the advocates of the old paradigm die of old age, which means that they do not give up on their paradigm until they die.

I previously argued that given that most recent past paradigms coincide with present paradigms, most present paradigms will also coincide with future paradigms, and hence “taxonomic incommensurability will rarely arise in the future, as it has rarely arisen in the recent past” (Park, 2018: 70). My argument entails that scientists’ decision to end further communications with their opponents has been and will be rare, i.e., such a social phenomenon has been and will be rare.

On my account, the opposite social phenomenon has been, and will remain, very common: scientists keep communicating with each other in order to reach an agreement. Thus, my previous contention about the frequency of scientific revolutions increases our understanding of science as a social enterprise.

Let me now turn to Bryant’s comment on my criticism against Thomas Kuhn’s philosophy of science. Kuhn (1962/1970, 172–173) draws an analogy between the development of science and the evolution of organisms. According to evolutionary theory, organisms do not evolve towards a goal. Similarly, Kuhn argues, science does not develop towards truths. The kinetic theory of heat, for example, is no closer to the truth than the caloric theory of heat is, just as we are no closer to some evolutionary goal than our ancestors were. He claims that this analogy is “very nearly perfect” (1962/1970, 172).

My objection (2018a: 64–66) was that it is self-defeating for Kuhn to use evolutionary theory to justify his philosophical claim about the development of science that present paradigms will be replaced by incommensurable future paradigms. His philosophical view entails that evolutionary theory will be superseded by an incommensurable alternative, and hence evolutionary theory is not trustworthy. Since his philosophical view relies on this untrustworthy theory, it is also untrustworthy, i.e., we ought to reject his philosophical view that present paradigms will be displaced by incommensurable future paradigms.

Bryant replies that “Kuhn could adopt the language of a paradigm (for the purposes of drawing an analogy, no less!) without committing to the literal truth of that paradigm” (2018: 3). On her account, Kuhn could have used the language of evolutionary theory without believing that evolutionary theory is true.

Can We Speak a Truth Without Having to Believe It True?

Bryant’s defense of Kuhn’s position is brilliant. Kuhn would have responded exactly as she has, if he had been exposed to my criticism above. In fact, it is a common view among many philosophers of science that we can adopt the language of a scientific theory without committing to the truth of it.

Bas van Fraassen, for example, states that “acceptance of a theory involves as belief only that it is empirically adequate” (1980: 12). He also states that if “the acceptance is at all strong, it is exhibited in the person’s assumption of the role of explainer” (1980: 12). These sentences indicate that according to van Fraassen, we can invoke a scientific theory for the purpose of explaining phenomena without committing to the truth of it. Rasmus Winther (2009: 376), Gregory Dawes (2013: 68), and Finnur Dellsén (2016: 11) agree with van Fraassen on this account.

I have been pondering this issue for the past several years. The more I reflect upon it, however, the more I am convinced that it is problematic to use the language of a scientific theory without committing to the truth of it. This thesis would be provocative and objectionable to many philosophers, especially to scientific antirealists. So I invite them to consider the following two thought experiments.

First, imagine that an atheist uses the language of Christianity without committing to the truth of it (Park, 2015: 227, 2017a: 60). He is a televangelist, saying on TV, “If you worship God, you’ll go to heaven.” He converts millions of TV viewers into Christianity. As a result, his church flourishes, and he makes millions of dollars a year. To his surprise, however, his followers discover that he is an atheist.

They request him to explain how he could speak as if he were a Christian when he is an atheist. He replies that he can use the language of Christianity without believing that it conveys truths, just as scientific antirealists can use the language of a scientific theory without believing that it conveys the truth.

Second, imagine that scientific realists, who believe that our best scientific theories are true, adopt Kuhn’s philosophical language without committing to Kuhn’s view of science. They say, as Kuhn does, “Successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” Kuhn requests them to explain how they could speak as if they were Kuhnians when they are not Kuhnians. They reply that they can adopt his philosophical language without committing to his view of science, just as scientific antirealists can adopt the language of a scientific theory without committing to the truth of it.

The foregoing two thought experiments are intended as reductio ad absurdum arguments. That is, my reasoning is that if it is reasonable for scientific antirealists to speak the language of a scientific theory without committing to the truth of it, it should also be reasonable for the atheist to speak the language of Christianity and for scientific realists to speak Kuhn’s philosophical language. It is, however, unreasonable for them to do so.

Let me now diagnose the problems with the atheist’s speech acts and the scientific realists’ speech acts. The atheist’s speech acts go contrary to his belief that God does not exist, and the scientific realists’ speech acts go contrary to their belief that our best scientific theories are true. As a result, the atheist’s speech acts mislead his followers into believing that he is a Christian. The scientific realists’ speech acts mislead their hearers into believing that they are Kuhnians.

Moore’s Paradox

Such speech acts raise an interesting philosophical issue. Imagine that someone says, “Snow is white, but I don’t believe snow is white.” The assertion of such a sentence involves Moore’s paradox. Moore’s paradox arises when we say a sentence of the form, “p, but I don’t believe p” (Moore, 1993: 207–212). We can push the atheist above into Moore’s paradox. Imagine that he says, “If you worship God, you’ll go to heaven.” We request him to declare whether or not he believes what he just said. He declares, “I don’t believe that if you worship God, you’ll go to heaven.” As a result, he is caught in Moore’s paradox, and he only puzzles his audience.
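
For readers who want the structure on the table, here is a minimal formalisation in doxastic logic; the belief operator \(B\) and the two standard labels are my glosses, not Moore’s or Park’s own notation:

\[
p \land \lnot Bp \qquad \text{(omissive: “p, but I don’t believe p”)}
\]
\[
p \land B\lnot p \qquad \text{(commissive: “p, but I believe that not-p”)}
\]

Neither formula is a logical contradiction, which is part of what makes the paradox interesting: the absurdity attaches to the act of asserting such a sentence, since asserting p standardly expresses Bp.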

The same is true of the scientific realists above. Imagine that they say, “Successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” We request them to declare whether or not they believe what they just said. They declare, “I don’t believe that successive paradigms are incommensurable, so present and future scientists would not be able to communicate with each other.” As a result, they are caught in Moore’s paradox, and they only puzzle their audience.

Kuhn would also be caught in Moore’s paradox if he drew the analogy between the development of science and the evolution of organisms without committing to the truth of evolutionary theory, pace Bryant. Imagine that Kuhn says, “Organisms don’t evolve towards a goal. Similarly, science doesn’t develop towards truths. I, however, don’t believe organisms don’t evolve towards a goal.” He says, “Organisms don’t evolve towards a goal. Similarly, science doesn’t develop towards truths,” in order to draw the analogy between the development of science and the evolution of organisms. He says, “I, however, don’t believe organisms don’t evolve towards a goal,” in order to express his refusal to believe that evolutionary theory is true. It is, however, a Moorean sentence: “Organisms don’t evolve towards a goal. I, however, don’t believe organisms don’t evolve towards a goal.” The assertion of such a sentence gives rise to Moore’s paradox.

Scientific antirealists would also be caught in Moore’s paradox if they explained phenomena in terms of a scientific theory without committing to the truth of it, pace van Fraassen. Imagine that scientific antirealists say, “The space between two galaxies expands because dark energy exists between them, but I don’t believe that dark energy exists between two galaxies.” They say, “The space between two galaxies expands because dark energy exists between them,” in order to explain why the space between galaxies expands.

They add, “I don’t believe that dark energy exists between two galaxies,” in order to express their refusal to commit to the truth of the theoretical claim that dark energy exists. It is, however, a Moorean sentence: “The space between two galaxies expands because dark energy exists between them, but I don’t believe that dark energy exists between two galaxies.” Asserting such a sentence will only puzzle their audience. Consequently, Moore’s paradox bars scientific antirealists from invoking scientific theories to explain phenomena (Park, 2017b: 383, 2018b: Section 4).

Researchers on Moore’s paradox believe that “contradiction is at the heart of the absurdity of saying a Moorean sentence, but it is not obvious wherein contradiction lies” (Park, 2014: 345). Park (2014: 345) argues that when you say, “Snow is white,” your audience believe that you believe that snow is white. Their belief that you believe that snow is white contradicts the second conjunct of your Moorean sentence that you do not believe that snow is white.

Thus, the contradiction lies between your audience’s belief and the second conjunct of your Moorean sentence. The present paper does not aim to flesh out and defend this view of where the contradiction lies. It rather aims to show that Moore’s paradox prevents us from using the language of a scientific theory without committing to the truth of it, pace Bryant and van Fraassen.

The Real Consequences of Speaking What You Don’t Believe

Set Moore’s paradox aside. Let me raise another objection to Bryant and van Fraassen. Imagine that Kuhn encounters a philosopher of mind. The philosopher of mind asserts, “A mental state is reducible to a brain state.” Kuhn realizes that the philosopher of mind espouses the identity theory of mind, but he knows that the identity theory of mind has already been refuted by the multiple realizability argument. So he brings up the multiple realizability argument to the philosopher of mind. The philosopher of mind is persuaded of the multiple realizability argument and admits that the identity theory is not tenable.

To Kuhn’s surprise, however, the philosopher of mind claims that when he said, “A mental state is reducible to a brain state,” he spoke the language of the identity theory without committing to the truth of it, so his position is not refuted by Kuhn. Note that the philosopher of mind escapes the refutation of his position by saying that he did not believe what he stated. It is also reasonable for the philosopher of mind to escape the refutation of his position by saying that he did not believe what he stated, if it is reasonable for Kuhn to escape the refutation of his position by saying that he did not believe what he stated. Kuhn would think that it is not reasonable for the philosopher of mind to do so.

Kuhn, however, might bite the bullet, saying that it is reasonable for the philosopher of mind to do so. The strategy to avoid the refutation, Kuhn might continue, only reveals that the identity theory was not his position after all. Evaluating arguments does not require that we identify the beliefs of the authors of arguments. In philosophy, we only need to care about whether arguments are valid or invalid, sound or unsound, strong or weak, and so on.

Speculating about what beliefs the authors of arguments hold as a way of evaluating arguments is to implicitly rely on an argument from authority, i.e., it is to think as though the authors’ beliefs determine the strength of arguments rather than the form and content of arguments do.

We, however, need to consider under what conditions we accept the conclusion of an argument in general. We accept it when the premises are plausible and when the conclusion follows from the premises. We can tell whether the conclusion follows from the premises or not without the author’s belief that it does. In many cases, however, we cannot tell whether the premises are plausible or not without the author’s belief that they are.

Imagine, for example, that a witness states in court that a defendant is guilty because the defendant was at the crime scene. The judge can tell whether the conclusion follows from the premise or not without the witness’s belief that it does. The judge, however, cannot tell whether the premise is plausible or not without the witness’s belief that it is. Imagine that the witness says that the defendant is guilty because the defendant was at the crime scene, but that the witness declares that he does not believe that the defendant was at the crime scene. Since the witness does not believe that the premise is true, the judge has no reason to believe that it is true. It is unreasonable for the judge to evaluate the witness’s argument independently of whether or not the witness believes that the premise is true.

In a nutshell, an argument loses its persuasive force if the author of the argument does not believe that its premises are true. Thus, if you aim to convince your audience that your argument is cogent, you should yourself believe that the premises are true. If you declare that you do not believe that the premises are true, your audience will ask you some disconcerting questions: “If you don’t, why should I believe what you don’t? How can you say to me what you don’t believe? Do you expect me to believe what you don’t?” (Park, 2018b: Section 4).

In case you still think that it is harmless and legitimate to speak what you do not believe, I invite you to imagine that your political rival commits murder to frame you. A false charge is brought against you, and you are tried in court. The prosecutor has a strong indictment against you. You state vehemently that you did not commit murder. You, however, have no physical evidence supporting your statement. Furthermore, you are well-known as a person who vehemently speaks what you do not believe. Not surprisingly, the judge passes a death sentence on you, thinking that you are merely speaking the language of the innocent. The point of this sad story is that speaking what you do not believe may result in tragedy in certain cases.

A Solution With a Prestigious Inspiration

Let me now turn to a slightly different, but related, issue. Under what condition can I refute your belief when you speak contrary to what you believe? I can do it only when I have direct access to your doxastic states, i.e., only when I can identify your beliefs without the mediation of your language. It is not enough for me to interpret your language correctly and present powerful evidence against what your language conveys.

After all, whenever I present such evidence to you, you will escape the refutation of what you stated simply by saying that you did not believe what you stated. Thus, Bryant’s defense of Kuhn’s position from my criticism above amounts to imposing an excessively high epistemic standard on Kuhn’s opponents. After all, his opponents do not have direct access to his doxastic states.

In this context, it is useful to be reminded of the epistemic imperative: “Act only on an epistemic maxim through which you can at the same time will that it should become a universal one” (Park, 2018c: 3). Consider the maxim “Escape the refutation of your position by saying you didn’t believe what you stated.” If you cannot will this maxim to become a universal one, you ought not to act on it yourself. It is immoral for you to act on the maxim despite the fact that you cannot will it to become a universal maxim. Thus, the epistemic imperative can be invoked to argue that Kuhn ought not to use the language of evolutionary theory without committing to the truth of it, pace Bryant.

Let me now raise a slightly different, although related, issue. Recall that according to Bryant, Kuhn could adopt the language of evolutionary theory without committing to the truth of it. Admittedly, there is an epistemic advantage of not committing to the truth of evolutionary theory on Kuhn’s part. The advantage is that he might avoid the risk of forming a false belief regarding evolutionary theory. Yet, he can stick to his philosophical account of science according to which science does not develop towards truths, and current scientific theories will be supplanted by incommensurable alternatives.

There is, however, an epistemic disadvantage of not committing to the truth of a scientific theory. Imagine that Kuhn is not only a philosopher and historian of science but also a scientist. He has worked hard for several decades to solve a scientific problem that has been plaguing an old scientific theory. Finally, he hits upon a great scientific theory that handles the recalcitrant problem. His scientific colleagues reject the old scientific theory and accept his new scientific theory, i.e., a scientific revolution occurs.

He becomes famous not only among scientists but also among the general public. He is so excited about his new scientific theory that he believes that it is true. Some philosophers, however, come along and dispirit him by saying that they do not believe that his new theory is true, and that they do not even believe that it is closer to the truth than its predecessor was. Kuhn protests that his new theory has theoretical virtues, such as accuracy, simplicity, and fruitfulness. Not impressed by these virtues, however, the philosophers reply that science does not develop towards truths, and that his theory will be displaced by an incommensurable alternative. They were exposed to Kuhn’s philosophical account of science!

Epistemic Reciprocation

They have adopted a philosophical position called epistemic reciprocalism according to which “we ought to treat our epistemic colleagues, as they treat their epistemic agents” (Park, 2017a: 57). Epistemic reciprocalists are scientific antirealists’ true adversaries. Scientific antirealists refuse to believe that their epistemic colleagues’ scientific theories are true for fear that they might form false beliefs.

In return, epistemic reciprocalists refuse to believe that scientific antirealists’ positive theories are true for fear that they might form false beliefs. We, as epistemic agents, are not only interested in avoiding false beliefs but also in propagating “to others our own theories which we are confident about” (Park, 2017a: 58). Scientific antirealists achieve the first epistemic goal at the cost of the second epistemic goal.

Epistemic reciprocalism is built upon the foundation of social epistemology, which claims that we are not asocial epistemic agents but social epistemic agents. Social epistemic agents are those who interact with each other over the matters of what to believe and what not to believe. So they take into account how their interlocutors treat their epistemic colleagues before taking epistemic attitudes towards their interlocutors’ positive theories.

Let me now turn to another of Bryant’s defenses of Kuhn’s position. She says that it is not clear that the analogy between the evolution of organisms and the development of science is integral to Kuhn’s account. Kuhn could “have ascribed the same characteristics to theory change without referring to evolutionary theory at all” (Bryant, 2018: 3). In other words, Kuhn’s contention that science does not develop towards truths rises or falls independently of the analogy between the development of science and the evolution of organisms. Again, this defense of Kuhn’s position is brilliant.

Consider, however, that the development of science is analogous to the evolution of organisms, regardless of whether Kuhn makes use of the analogy to defend his philosophical account of science or not, and that the fact that they are analogous is a strike against Kuhn’s philosophical account of science. Suppose that Kuhn believes that science does not develop towards truths, but that he does not believe that organisms do not evolve towards a goal, despite the fact that the development of science is analogous to the evolution of organisms.

An immediate objection to his position is that it is not clear on what grounds he embraces the philosophical claim about science, but not the scientific claim about organisms, when the two claims parallel each other. It is ad hoc merely to suggest that the scientific claim is untrustworthy, but that the philosophical claim is trustworthy. What is so untrustworthy about the scientific claim, but so trustworthy about the philosophical claim? It would be difficult to answer these questions because the development of science and the evolution of organisms are similar to each other.

A moral is that if philosophers reject our best scientific theories, they cannot make philosophical claims that are similar to what our best scientific theories assert. In general, the more philosophers reject scientific claims, the more impoverished their philosophical positions will be, and the heavier their burdens will be to prove that their philosophical claims are dissimilar to the scientific claims that they reject.

Moreover, it is not clear what Kuhn could say to scientists who take the opposite position in response to him. They believe that organisms do not evolve towards a goal, but refuse to believe that science does not develop towards truths. To go further, they trust scientific claims, but distrust philosophical claims. They protest that it is a manifestation of philosophical arrogance to suppose that philosophical claims are worthy of beliefs, but scientific claims are not.

This possible response to Kuhn reminds us of the Golden Rule: Treat others as you want to be treated. Philosophers ought to treat scientists as they want to be treated, concerning epistemic matters. Suppose that a scientific claim is similar to a philosophical claim. If philosophers do not want scientists to hold a double standard with respect to the scientific and philosophical claims, philosophers should not hold a double standard with respect to them.

There “is no reason for thinking that the Golden Rule ranges over moral matters, but not over epistemic matters” (Park, 2018d: 77–78). Again, we are not asocial epistemic agents but social epistemic agents. As such, we ought to behave in accordance with the epistemic norms governing the behavior of social epistemic agents.

Finally, the present paper is intended to be critical of Kuhn’s philosophy of science while enshrining his insight that science is a social enterprise, and that scientists are social epistemic agents. I appealed to Moore’s paradox, epistemic reciprocalism, the epistemic imperative, and the Golden Rule in order to undermine Bryant’s defenses of Kuhn’s position from my criticism. All these theoretical resources can be used to increase our understanding of science as a social endeavor. Let me add to Kuhn’s insight that philosophers are also social epistemic agents.

Contact details: nature@unist.ac.kr

References

Arnold, Markus. “Is There Anything Wrong with Thomas Kuhn?”, Social Epistemology Review and Reply Collective 7, no. 5 (2018): 42–47.

Bryant, Amanda. “Each Kuhn Mutually Incommensurable”, Social Epistemology Review and Reply Collective 7, no. 6 (2018): 1–7.

Dawes, Gregory. “Belief is Not the Issue: A Defence of Inference to the Best Explanation”, Ratio: An International Journal of Analytic Philosophy 26, no. 1 (2013): 62–78.

Dellsén, Finnur. “Understanding without Justification or Belief”, Ratio: An International Journal of Analytic Philosophy (2016). DOI: 10.1111/rati.12134.

Kuhn, Thomas. The Structure of Scientific Revolutions. 2nd ed. The University of Chicago Press, (1962/1970).

Mizrahi, Moti. “Introduction”, In The Kuhnian Image of Science: Time for a Decisive Transformation? Moti Mizrahi (ed.), London: Rowman & Littlefield, (2018): 1–22.

Moore, George. “Moore’s Paradox”, In G.E. Moore: Selected Writings. Baldwin, Thomas (ed.), London: Routledge, (1993).

Park, Seungbae. “On the Relationship between Speech Acts and Psychological States”, Pragmatics and Cognition 22, no. 3 (2014): 340–351.

Park, Seungbae. “Accepting Our Best Scientific Theories”, Filosofija. Sociologija 26, no. 3 (2015): 218–227.

Park, Seungbae. “Defense of Epistemic Reciprocalism”, Filosofija. Sociologija 28, no. 1 (2017a): 56–64.

Park, Seungbae. “Understanding without Justification and Belief?” Principia: An International Journal of Epistemology 21, no. 3 (2017b): 379–389.

Park, Seungbae. “Can Kuhn’s Taxonomic Incommensurability Be an Image of Science?” In The Kuhnian Image of Science: Time for a Decisive Transformation? Moti Mizrahi (ed.), London: Rowman & Littlefield, (2018a): 61–74.

Park, Seungbae. “Should Scientists Embrace Scientific Realism or Antirealism?”, Philosophical Forum (2018b): (to be assigned).

Park, Seungbae. “In Defense of the Epistemic Imperative”, Axiomathes (2018c). DOI: https://doi.org/10.1007/s10516-018-9371-9.

Park, Seungbae. “The Pessimistic Induction and the Golden Rule”, Problemos 93 (2018d): 70–80.

van Fraassen, Bas. The Scientific Image. Oxford: Oxford University Press, (1980).

Winther, Rasmus. “A Dialogue”, Metascience 18 (2009): 370–379.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The PDF of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8

A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2018, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than you know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2018, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005: 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds), involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than those monitored only by their results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather “denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency” (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

 

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring, developing competence while retaining interests closely aligned with those of the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to engage more deeply with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review, 111, no. 2 (2017) pp. 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2018): 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. The Wrong Kind of Transparency. The American Economic Review 95, no. 3 (2005), 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Benjamin W. McCraw, University of South Carolina Upstate, bmccraw@uscupstate.edu

McCraw, Benjamin W. “Combes on McCraw on the Nature of Epistemic Trust: A Rejoinder.” Social Epistemology Review and Reply Collective 5, no. 8 (2016): 28-31.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-39F

Please refer to:

Image credit: Marius Brede, via flickr

My genuine thanks to Richard Combes for continuing his thoughtful analysis of my views on epistemic trust. In this really short reply, let me offer a quick re-rejoinder to a few of his latest comments.

Combes on Trust-In and Trust-That

First, let’s get clear on Combes’ view. He claims that “one epistemically trusts S if and only if one has certain beliefs about S’s thick reliability” (2016, 8), where ‘thick reliability’ refers to the state in which “one has consciously tracked S’s past history, judged that S enjoys some perhaps unique expertise, and therefore should depend on S’s testimony…” (8). That is, H trusts S just in case H believes that:

(a) H has tracked S’s history with respect to the accuracy of S’s utterances,
(b) S’s track record is reliable, and
(c) H should depend on S’s future assertions.

Author Information: Richard Combes, University of South Carolina Upstate, rcombes@uscupstate.edu

Combes, Richard. “McCraw on the Nature of Epistemic Trust—Part II.” Social Epistemology Review and Reply Collective 5, no. 6 (2016): 7-10.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-30A

Please refer to:

Image credit: Arne Halvorsen, via flickr

In my original response (2015) to “The Nature of Epistemic Trust” by Benjamin McCraw (2015), I defended the view that epistemic trust reduces to one’s belief that another’s allegedly successful past truth-tracking underwrites confidence in that person’s present and future testimony. On the basis of the introspective data, I deny that any irreducibly distinct, non-propositional attitude of epistemic trust supervenes on such a belief. Epistemic trust is not presented to consciousness as an episodic quale. There is nothing that it is like to trust someone other than being convinced that the trustee’s history validates the truster’s continued support in him or her as a beacon of knowledge.

Author Information: Benjamin McCraw, University of South Carolina Upstate, bmccraw@uscupstate.edu

McCraw, Benjamin. “Thinking Through Social Epistemology: A Reply to Combes, Smolkin, and Simmons.” Social Epistemology Review and Reply Collective 5, no. 4 (2016): 1-12.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2NU

Please refer to:

Image credit: Steve Simmonds, via flickr

I want to thank Richard Combes, Doran Smolkin, and Aaron Simmons for their gracious, penetrating, and excellent commentaries on my paper. They’ve offered me outstanding points to consider, objections to ponder, and directions to pursue. In what follows, I’ll offer some thoughts of my own and respond to what I think are the truly insightful criticisms they raise for my model of epistemic trust (ET). Let me address Combes first.

Author Information: J. Aaron Simmons, Furman University, aaron.simmons@furman.edu

Simmons, J. Aaron. “Existence and Epistemic Trust.” Social Epistemology Review and Reply Collective 4, no. 12 (2015): 14-19.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2uR

Please refer to:

Image credit: Steve Rotman, via flickr

The history of philosophy repeatedly demonstrates that it is possible to read an author differently, and maybe even better, than she reads herself. For example, in many ways, Edmund Husserl quite sensibly considered his phenomenological project primarily to be a matter of epistemology. Yet, Martin Heidegger goes a long way toward showing the ontological stakes of Husserl’s epistemology such that phenomenology gets radically rethought not by going counter to Husserl, but, as Heidegger (1968) would put it in What is Called Thinking?, by going to Husserl’s encounter.[1] While reading Benjamin W. McCraw’s (2015) excellent essay “The Nature of Epistemic Trust,” I was struck by the way that, like Heidegger’s reading of Husserl, McCraw’s account of epistemic trust (ET) productively opens onto issues far beyond where McCraw himself goes. In this short response to McCraw’s essay, I will look to what I consider to be the existential stakes of McCraw’s proposal regarding epistemic trust. Crucially, I do not take my thoughts here to be a direct critique of McCraw, but instead an attempt to think with him by taking seriously the importance of epistemic trust and its implications for subjectivity and social life more broadly.

Author Information: Doran Smolkin, Kwantlen Polytechnic University, Doran.Smolkin@kpu.ca

Smolkin, Doran. “Clarifying the Dependence Condition: A Reply to Benjamin McCraw’s, ‘The Nature of Epistemic Trust’.” Social Epistemology Review and Reply Collective 4, no. 10 (2015): 10-13.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2lK

Please refer to:

Image credit: Pao, via flickr

Much of what we come to believe is based on trusting the communication of others. It would, therefore, be helpful to better understand the nature of this sort of trust. Benjamin McCraw offers one very clear and well-argued account in his, “The Nature of Epistemic Trust.” McCraw claims that a hearer or audience (H) places epistemic trust (ET) in a person or speaker (S) that some proposition (p) is true if and only if:

1. H believes that p;
2. H takes S to communicate that p;
3. H depends upon S’s (perceived) communication for H’s belief that p; and
4. H sees S as epistemically well-placed with respect to p. (McCraw, 13).
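
For readers who like a compact statement, McCraw’s four conditions can be collapsed into a single biconditional. The rendering below is an illustrative gloss of my own, not McCraw’s notation; the predicate letters simply abbreviate the numbered conditions above:

\[
\mathrm{ET}(H, S, p) \iff B_H(p) \,\wedge\, C_H(S, p) \,\wedge\, D_H(S, p) \,\wedge\, W_H(S, p)
\]

where \(B_H(p)\) abbreviates condition 1 (H believes that p), \(C_H(S, p)\) condition 2 (H takes S to communicate that p), \(D_H(S, p)\) condition 3 (H depends upon S’s perceived communication for the belief that p), and \(W_H(S, p)\) condition 4 (H sees S as epistemically well-placed with respect to p).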


Author Information: Fabien Medvecky, University of Otago, fabien.medvecky@otago.ac.nz

Medvecky, Fabien. “Knowing From Others: A Review of Knowledge on Trust and A Critical Introduction to Testimony.” Social Epistemology Review and Reply Collective 4, no. 9 (2015): 11-12.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2gD

Image Credit: Oxford University Press; Bloomsbury Academic

A Critical Introduction to Testimony
Axel Gelfert
Bloomsbury, 2014
264 pp.

Knowledge on Trust
Paul Faulkner
Oxford University Press, 2011
240 pp.

If you are hungry for some reading on testimonial epistemology—the study of knowledge created and gained through testimony—then Axel Gelfert’s introductory text, A Critical Introduction to Testimony (2014), sits as a perfect entrée to Paul Faulkner’s Knowledge on Trust (2011). Both are well written and both are aimed at philosophers, though they are very different in style. While Gelfert’s volume is clearly intended as an upper undergraduate or postgraduate philosophy course text, presenting the reader with a good overview of the field, Faulkner’s work delves into more specificity as it develops a rich theory of how we acquire new knowledge as a result of testimony. And while I am sympathetic to Faulkner’s views on the role of trust as the foundation for testimonial knowledge, I think his discussion of trust is a little quick.

Author Information: Richard E. Combes, University of South Carolina Upstate, rcombes@uscupstate.edu

Combes, Richard E. “McCraw on the Nature of Epistemic Trust.” Social Epistemology Review and Reply Collective 4, no. 8 (2015): 76-78.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2fr

Please refer to:

Image credit: purplejavatroll, via flickr

In “The Nature of Epistemic Trust”, Benjamin W. McCraw (2015) offers an appealing account of what it means to trust someone epistemically. More than merely the recognition that some state of affairs is the case, epistemic trust includes an affective, non-propositional attitude as well, namely, a strong conviction in the integrity of the one trusted. According to McCraw, if Jones places epistemic trust in Smith that some proposition is true, the following four conditions need to be satisfied: