
Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu.

Sassower, Raphael. “On Political Culpability: The Unconscious?” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 26-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-45p


 

In the post-truth age where Trump’s presidency looms large because of its irresponsible conduct, domestically and abroad, it’s refreshing to have another helping in the epistemic buffet of well-meaning philosophical texts. What can academics do? How can they help, if at all?

Anna Elisabetta Galeotti, in her Political Self-Deception (2018), is convinced that her (analytic) philosophical approach to political self-deception (SD) is crucial for three reasons. First, because of the importance of conceptual clarity about the topic, second, because of how one can attribute responsibility to those engaged in SD, and third, in order to identify circumstances that are conducive to SD. (6-7)

For her, “SD is the distortion of reality against the available evidence and according to one’s wishes.” (1) The distortion, according to Galeotti, is motivated by wishful thinking, the kind that licenses someone to ignore facts or distort them in a fashion suitable to one’s (political) needs and interests. The question of “one’s wishes,” be they conscious or not, remains open.

What Is Deception?

Galeotti surveys the different views of deception that “range from the realist position, holding that deception, secrecy, and manipulation are intrinsic to politics, to the ‘dirty hands’ position, justifying certain political lies under well-defined circumstances, to the deontological stance denouncing political deception as a serious pathology of democratic systems.” (2)

But she follows none of these views; instead, her contribution to the philosophical and psychological debates over deception, lies, self-deception, and mistakes is to argue that “political deception might partly be induced unintentionally by SD” and that it is also sometimes “the by-product of government officials’ (honest) mistakes.” (2) The consequences, though, of SD can be monumental since “the deception of the public goes hand in hand with faulty decision,” (3) and those eventually affect the country.

Her three examples are President Kennedy and Cuba (Ch. 4), President Johnson and Vietnam (Ch. 5), and President Bush and Iraq (Ch. 6). In all cases, the devastating consequences of “political deception” (and for Galeotti it is based on SD) were obviously due to “faulty” decision making processes. Why else would presidents end up in untenable political binds? Who would deliberately make mistakes whose political and human price is high?

Why Self-Deception?

So, why SD? What is it about self-deception, especially the unintended kind presented here, that differentiates it from garden-variety deceptions and mistakes? Galeotti’s preference for SD is explained in this way: SD “enables the analyst to account for (a) why the decision was bad, given that it was grounded on self-deceptive, hence false beliefs; (b) why the beliefs were not just false but self-serving, as in the result of the motivated processing of data; and (c) why the people were deceived, as the by-product of the leaders’ SD.” (4)

But how would one know that a “bad” decision is “grounded on self-decepti[on]” rather than on false information given by intelligence agents, for example, who were misled by local informants who in turn were misinformed by others, deliberately or innocently? With this question in mind, “false belief” can be based on false information, false interpretation of true information, wishful thinking, an unconscious self-destructive streak, or SD.

In short, one’s SD can be either externally or internally induced, and in each case, there are multiple explanations that could be deployed. Why stick with SD? What is the attraction it holds for analytical purposes?

Different answers are given to these questions at different times. In one case, Galeotti suggests the following:

“Only self-deceptive beliefs are, however, false by definition, being counterevidential [sic], prompted by an emotional reaction to data that contradicts one’s desires. If this is the specific nature of SD . . . then self-deceptive beliefs are distinctly dangerous, for no false belief can ground a wise decision.” (5)

In this answer, Galeotti claims that an “emotional reaction” to “one’s desires” is what characterizes SD and makes it “dangerous.” It is unclear why this is more dangerous a ground for false beliefs than a deliberate deceptive scheme that is self-serving; likewise, how does one truly know one’s true desires? Perhaps the logician is at a loss to counter emotive reaction with cold deduction, or perhaps there is a presumption here that logical and empirical arguments are by definition open to critiques but emotions are immune to such strategies, and therefore analytic philosophy is superior to other methods of analysis.

Defending Your Own Beliefs

If the first argument for seeing SD as an emotional “reaction” that conflicts with “one’s desires” is a form of self-defense, the second argument is more focused on the threat of the evidence one wishes to ignore or subvert. In Galeotti’s words, SD is:

“the unintended outcome of intentional steps of the agent. . . according to my invisible hand model, SD is the emotionally loaded response of a subject confronting threatening evidence relative to some crucial wish that P. . . Unable to counteract the threat, the subject . . . become prey to cognitive biases. . . unintentionally com[ing] to believe that P which is false.” (79; 234ff)

To be clear, the “invisible hand” model invoked here is related to the infamous one associated with Adam Smith and his unregulated markets where order is maintained, fairness upheld, and freedom of choice guaranteed. Just like Smith, Galeotti appeals to individual agents, in her case the political leaders, as if SD happens to them, as if their conduct leads to “unintended outcome.”

But the whole point of SD is to ward off the threat of unwelcome evidence, so that some intention is always afoot. Since agents undertake “intentional steps,” is it unreasonable for them to anticipate the consequences of their conduct? Are they still unconscious of their “cognitive biases” and their management of their reactions?

Galeotti confronts this question head-on when she says: “This work is confined to analyzing the working of SD in crucial instances of governmental decision making and to drawing the normative implications related both to responsibility ascription and to devising prophylactic measures.” (14) So, the moral dimension, the question of responsibility, does come into play here, unlike the neoliberal argument that pretends to follow Smith’s model of the invisible hand but ends with no one being responsible for any exogenous liabilities to the environment, for example.

Moreover, Galeotti’s most intriguing claim is that her approach is intertwined with a strategic hope for “prophylactic measures” to ensure dangerous consequences are not repeated. She believes this could be achieved by paying close attention to “(a) the typical circumstances in which SD may take place; (b) the ability of external observers to identify other people’s SD, a strategy of precommitment [sic] can be devised. Precommitment is a precautionary strategy, aimed at creating constraints to prevent people from falling prey to SD.” (5)

But this strategy, as promising as it sounds, has a weakness: if people could be prevented from “falling prey to SD,” then SD is preventable or at least it seems to be less of an emotional threat than earlier suggested. In other words, either humans cannot help themselves from falling prey to SD or they can; if they cannot, then highlighting SD’s danger is important; if they can, then the ubiquity of SD is no threat at all as simply pointing out their SD would make them realize how to overcome it.

A Limited Hypothesis

Perhaps one clue to Galeotti’s own self-doubt (or perhaps it is a form of self-deception as well) is in the following statement: “my interpretation is a purely speculative hypothesis, as I will never be in the position to prove that SD was the case.” (82) If this is the case, why bother with SD at all? For Galeotti, the advantage of using SD as the “analytic tool” with which to view political conduct and policy decisions is twofold: allowing “proper attribution of responsibility to self-deceivers” and “the possibility of preventive measures against SD.” (234)

In her concluding chapter, she offers a caveat, even a self-critique that undermines the very use of SD as an analytic tool (no self-doubt or self-deception here, after all): “Usually, the circumstances of political decision making, when momentous foreign policy choices are at issue, are blurred and confused both epistemically and motivationally. Sorting out simple miscalculations from genuine uncertainty, and dishonesty and duplicity from SD is often a difficult task, for, as I have shown when analyzing the cases, all these elements are present and entangled.” (240)

So, SD is one of many relevant variables, but being both emotional and in one’s subconscious, it remains opaque at best, and unidentifiable at worst.

In case you are confused about SD and one’s ability to isolate it as an explanatory model with which to approach post-hoc bad political choices with grave consequences, this statement might help clarify the usefulness of SD: “if SD is to play its role as a fundamental explanation, as I contend, it cannot be conceived of as deceiving oneself, but it must be understood as an unintended outcome of mental steps elsewhere directed.” (240)

So, logically speaking, SD (self-deception) is not “deceiving oneself.” So, what is it? What are “mental steps elsewhere directed”? Of course, it is quite true, as Galeotti says, that “if lessons are to be learned from past failures, the question of SD must in any case be raised. . . Political SD is a collective product,” which is even more difficult to analyze (given its “opacity”); so how would responsibility be attributed? (244-5)

Perhaps what is missing from this careful analysis is a cold calculation of who is responsible for what and under what circumstances, regardless of SD or any other kind of subconscious desires. Would a psychoanalyst help usher such an analysis?

Contact details: rsassowe@uccs.edu

References

Galeotti, Anna Elisabetta. Political Self-Deception. Cambridge: Cambridge University Press, 2018.

Author Information: Rik Peels, Vrije Universiteit Amsterdam, mail@rikpeels.nl.

Peels, Rik. “Exploring the Boundaries of Ignorance: Its Nature and Accidental Features.” Social Epistemology Review and Reply Collective 8, no. 1 (2019): 10-18.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-456


 

This article responds to El Kassar, Nadja (2018). “What Ignorance Really Is: Examining the Foundations of Epistemology of Ignorance.” Social Epistemology. DOI: 10.1080/02691728.2018.1518498.

As does Bondy, Patrick. “Knowledge and Ignorance, Theoretical and Practical.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 9-14.

Nadja El Kassar is right that different fields in philosophy use rather different conceptions of ignorance. I also agree with her that there seem to be three major conceptions of ignorance: (i) ignorance as propositional ignorance, which she calls the ‘propositional conception of ignorance’, (ii) ignorance as actively upheld false outlooks, which she names the ‘agential conception of ignorance’, and (iii) ignorance as an epistemic practice, which she dubs the ‘structural conception of ignorance’.

It is remarkable that nobody has previously addressed the question of how these three conceptions relate to each other. I consider it a great virtue of her lucid essay that she not only considers this question in detail, but also provides an account that is meant to do justice to all these different conceptions of ignorance. Let us call her account the El Kassar Synthesis. It reads as follows:

Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs – either she has no belief about p or a false belief – and her epistemic attitudes (doxastic attitudes, epistemic virtues, epistemic vices).[1]

My reply to her insightful paper is structured as follows. First, I argue that her synthesis needs revision on various important points (§2). After that, I show that, despite her ambition to capture the main varieties of ignorance in her account, there are important kinds of ignorance that the El Kassar Synthesis leaves out (§4).

I then consider the agential and structural conceptions of ignorance and suggest that we should distinguish between the nature of ignorance and its accidental features. I also argue that these two other conceptions of ignorance are best understood as accounts of important accidental features of ignorance (§5). I sketch and reply to four objections that one might level against my account of the nature and accidental features of ignorance (§6).

I conclude that ignorance should be understood as the absence of propositional knowledge or the absence of true belief, the absence of objectual knowledge, or the absence of procedural knowledge. I also conclude that epistemic vices, hermeneutical frameworks, intentional avoidance of evidence, and other important phenomena that the agential and structural conceptions of ignorance draw our attention to, are best understood as important accidental features of ignorance, not as properties that are essential to ignorance.

Preliminaries

Before I explore the tenability of the El Kassar Synthesis in more detail, I would like to make a few preliminary points about it that call for some fine-tuning on her part. Remember that on the El Kassar Synthesis, ignorance should be understood as follows:

El Kassar Synthesis version 1: Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs – either she has no belief about p or a false belief – and her epistemic attitudes (doxastic attitudes, epistemic virtues, epistemic vices).[2]

It seems to me that this synthesis needs revision on at least three points.

First, a false belief is an epistemic attitude and even a doxastic attitude. Moreover, if – as is widely thought among philosophers – there are exactly three doxastic attitudes, namely belief, disbelief, and suspension of judgment, then any case of ignorance that manifests itself in a doxastic attitude is one in which one lacks a belief about p or one has a false belief about p.

After all, if one holds a false belief and that is manifest in one’s doxastic attitude, it is because one holds a false belief (that is the manifestation). If one holds no belief and that is manifest in one’s doxastic attitudes, it is because one suspends judgment (that is the manifestation). Of course, it is also possible that one is deeply ignorant (e.g., one cannot even consider the proposition), but then it is simply not even manifest in one’s doxastic attitudes.

The reference to doxastic attitudes in the second conjunct is, therefore, redundant. The revised El Kassar Synthesis reads as follows:

El Kassar Synthesis version 2: Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs – either she has no belief about p or a false belief – and her epistemic attitudes (epistemic virtues, epistemic vices).

What is left in the second conjunct after the first revision is epistemic virtues and vices. There is a problem with this, though. Ignorance need not be manifested in any epistemic virtues or vices. True, it happens often enough. But it is not necessary; it does not belong to the essence of being ignorant.

If one is ignorant of the fact that Antarctica is the greatest desert on earth (which is actually a fact), then that may simply be a fairly cognitively isolated, single fact of which one is ignorant. Nothing follows about such substantial cognitive phenomena as intellectual virtues and vices (which are, after all, dispositions) like open-mindedness or dogmatism. A version that takes this point into account reads as follows:

El Kassar Synthesis version 3: Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs: either she has no belief about p or a false belief.

A third and final worry I would like to raise here is that on the El Kassar Synthesis, ignorance is a disposition of an epistemic agent that manifests itself in her beliefs—and, as we saw, on versions 1 and 2, in her intellectual character traits (epistemic virtues, epistemic vices). I find this worrisome, because it is widely accepted that virtues and vices are dispositions themselves, and many philosophers have argued this also holds for beliefs.[3]

If so, on the El Kassar Synthesis, ignorance is a disposition that manifests itself in a number of dispositions (beliefs, lack of beliefs, virtues, vices). What sort of thing is ignorance if it is a disposition to manifest certain dispositions? It seems if one is disposed to manifest certain dispositions, one simply has those dispositions and will, therefore, manifest them in the relevant circumstances.

Moreover, virtue or the manifestation of virtue does not seem to be an instance or exemplification of ignorance; at most, this seems to be the case for vices. Open-mindedness, thoroughness, and intellectual perseverance are clearly not manifestations of ignorance.[4] If anything, they are the opposite: manifestations of knowledge, insight, and understanding. An account that takes these points also into account would therefore look as follows:

El Kassar Synthesis version 4: Ignorance is an epistemic agent’s having no belief or a false belief about p.

It seems to me that version 4 is significantly more plausible than version 1. I realize, though, that it is also a significant revision of the original El Kassar Synthesis. My criticisms in what follows will, therefore, also be directed against version 1 of El Kassar’s synthesis.

Propositional, Objectual, and Procedural Ignorance

On the first conception of ignorance that El Kassar explores, the propositional one, ignorance is ignorance of the truth of a proposition. On the Standard View of ignorance, defended by Pierre Le Morvan and others,[5] ignorance is lack of propositional knowledge, whereas on the New View, championed by me and others,[6] ignorance is lack of true belief.

I would like to add that it may be more suitable to call these ‘conceptions of propositional ignorance’ rather than ‘propositional conceptions of ignorance’. After all, they are explicitly concerned with and limit themselves to situations in which one is ignorant of the truth of one or more propositions; they do not say that all ignorance is ignorance of a proposition.

More importantly, though, we should note that ever since Bertrand Russell, it has been quite common in epistemology to distinguish not only propositional knowledge (or knowledge-that), but also knowledge by acquaintance or objectual knowledge (knowledge-of) and procedural or technical knowledge (knowledge-how).[7]

Examples of knowledge by acquaintance are my knowledge of my fiancée’s lovely personality, my knowledge of the taste of the Scotch whisky Talisker Storm, my knowledge of Southern France, and my knowledge of the smell of fresh raspberries. Examples of technical or procedural knowledge are my knowledge of how to navigate through Amsterdam by bike, my knowledge of how to catch a North Sea cod, my knowledge of how to get the attention of a group of 150 students (the latter, incidentally, suggests that know-how comes in degrees…).

Since ignorance is often taken to be lack of knowledge, it is only natural to consider whether there can also be objectual and technical ignorance. Nikolaj Nottelmann, in a recent piece, has convincingly argued that there are such varieties of ignorance.[8]

The rub is that the El Kassar Synthesis, on all of its four versions, does not capture these two other varieties of ignorance. If one is ignorant of how to ride a bike, it is not so much that one lacks beliefs about p or that one has false beliefs about p (even if it is clear exactly which proposition p is). Also, not knowing how to ride a bike does not seem to come with certain intellectual virtues or vices.

The same is true for objectual ignorance: if I am not familiar with the smell of fresh raspberries, that does not imply any false beliefs or absence of beliefs, nor does it come with intellectual virtues or vices. Objectual and procedural ignorance seem to be sui generis kinds of ignorance.

The following definition does capture these three varieties of ignorance—one that, for obvious reasons, I will call the ‘threefold synthesis’:

Threefold Synthesis: Ignorance is an epistemic agent’s lack of propositional knowledge or lack of true belief, lack of objectual knowledge, or lack of procedural knowledge.[9]

Of course, each of the four versions of the El Kassar Synthesis could be revised so as to accommodate this. As we shall see below, though, we have good reason to formulate the Threefold Synthesis independently from the El Kassar Synthesis.

The Agential and Structural Conceptions of Ignorance

According to El Kassar, there is a second conception of ignorance, not captured in the conception of propositional ignorance but captured in the conception of agential ignorance, namely ignorance as an actively upheld false outlook. This conception has, understandably, been particularly influential in the epistemology of race. Charles Mills, whose contributions to this field have been seminal, defines such ignorance as the absence of beliefs, false belief, or a set of false beliefs, brought about by various factors, such as people’s whiteness in the case of white people, that leads to a variety of behavior, such as avoiding evidence.[10] El Kassar suggests that José Medina, who has also contributed much to this field, defends a conception along these lines as well.[11]

The way Charles Mills phrases things suggests a natural interpretation of such ignorance, though. It is this: ignorance is the lack of belief, false beliefs, or various false beliefs (all captured by the conception of propositional ignorance), brought about or caused by a variety of factors. What these factors are will differ from case to case: people’s whiteness, people’s social power and status, people’s being Western, people’s being male, and people’s being heterosexual.

But this means that the agential conception is not a conception of the nature of ignorance. It grants the nature of ignorance as conceived of by the conception of propositional ignorance spelled out above and then, for obvious reasons, goes on to focus on those cases in which such ignorance has particular causes, namely the kinds of factors I just mentioned.[12]

Remarkably, much of what El Kassar herself says supports this interpretation. For example, she says: “Medina picks out a kind of ignorance, active ignorance, that is fed by epistemic vices – in particular, arrogance, laziness and closed-mindedness.” (p. 3; italics are mine) This seems entirely right to me: the epistemology of race focuses on ignorance with specific, contingent features that are crucially relevant for the debate in that field: (i) it is actively upheld, (ii) it is often, but not always, disbelieving ignorance, (iii) it is fed by epistemic vices, etc.

This is of course all perfectly compatible with the Standard or New Views on Ignorance. Most people’s ignorance of the fact that Antarctica is the largest desert on earth is a clear case of ignorance, but one that is not at all relevant to the epistemology of race.

Unsurprisingly then, even though it clearly is a case of ignorance, it does not meet any of the other, contingent criteria that are so pivotal in critical race theory: (i) it is not actively upheld, (ii) it is deep ignorance rather than disbelieving ignorance (most people have never considered this statement about Antarctica), (iii) it is normally not in any way fed by epistemic vices, such as closed-mindedness, laziness, intellectual arrogance, or dogmatism.

That this is a more plausible way of understanding the nature of ignorance and its accidental features can be seen by considering what is widely regarded as the opposite of ignorance: knowledge. According to most philosophers, to know a particular proposition p is to believe a true proposition p on the basis of some kind of justification in a non-lucky (in some sense of the word) way. That is what it is to know something, that is the nature of knowledge.

But in various cases, knowledge can have all sorts of accidental properties: it can be sought and found or one can stumble upon it, it may be the result of the exercise of intellectual virtue or it may be pretty much automatic (such as in the case of my knowledge that I exist), it may be morally good to know that thing or it may be morally bad (as in the case of a privacy violation), it may be based primarily on the exercise of one’s own cognitive capacities or primarily on those of other people (in some cases of testimony), and so on. If this is the case, then it is only natural to think that the same applies to the opposite of knowledge, namely ignorance, and that we should, therefore, clearly distinguish between its nature and its accidental (sometimes crucially important) features:

The nature of ignorance

Ignorance is the lack of propositional knowledge / the lack of true belief, or the lack of objectual knowledge, or the lack of procedural knowledge.[13]

Accidental, context-dependent features of ignorance

Willful or unintentional;

Individual or collective;

Small-scale (individual propositions) or large-scale (whole themes, topics, areas of life);

Brought about by external factors, such as the government, institutions, or socially accepted frameworks, or internal factors, such as one’s own intellectual vices, background assumptions, or hermeneutic paradigms;

And so on.

According to El Kassar, an advantage of her position is that it tells us how one is ignorant (p. 7). However, an account of, say, knowledge, also need not tell us how a particular person in specific circumstances knows something.[14] Perceptual knowledge is crucially important in our lives, and so is knowledge based on memory, moral knowledge (if there is such a thing), and so on.

It is surely no defect in all the many accounts of knowledge, such as externalism, internalism, reliabilism, internalist externalism, proper functionalism, deontologism, or even knowledge-first epistemology, that they do not tell us how a particular person in specific circumstances knows something. They were never meant to do that.

Clearly, mutatis mutandis, the same point applies to the structural conception of ignorance that plays an important role in agnotology. Agnotology is the field that studies how various institutional structures and mechanisms can intentionally keep people ignorant or make them ignorant or create different kinds of doubt. The ignorance about the effects of smoking brought about and intentionally maintained by the tobacco industry is a well-known example.

Again, the natural interpretation is to say that people are ignorant because they lack propositional knowledge or true belief, they lack objectual knowledge, or they lack procedural knowledge. And they do so because – and this is what agnotology focuses on – it is intentionally brought about or maintained by various institutions, agencies, governments, mechanisms, and so on. Understandably, the field is more interested in studying those accidental features of ignorance than in studying its nature.

Objections and Replies

Before we draw a conclusion, let us consider El Kassar’s objections to a position along the lines I have suggested.[15] First, she suggests that we lose a lot if we reject the agential and structural conceptions of ignorance. We lose such things as: ignorance as a bad practice, the role of epistemic agency, the fact that much ignorance is strategic, and so on. I reply that, fortunately, we do not: those are highly important, but contingent features of ignorance: some cases of ignorance have them, others do not. This leaves plenty of room to study such contingent features of ignorance in critical race theory and agnotology.[16]

Second, she suggests that this account would exclude highly important kinds of ignorance, such as ignorance deliberately constructed by companies. I reply that it does not: it just says that its being deliberately constructed by, say, pharmaceutical companies, is an accidental or contingent feature and that it is not part of the nature of ignorance.

Third, Roget’s Thesaurus, for example, lists knowledge as only one of the antonyms of ignorance. Other options are cognizance, understanding, competence, cultivation, education, experience, intelligence, literacy, talent, and wisdom. I reply that we can make sense of this on my alternative, threefold synthesis: competence, cultivation, education, intelligence, and so on, all come with knowledge and true belief and remove certain kinds of ignorance. Thus, it makes perfect sense that these are mentioned as antonyms of ignorance.

Finally, one may wonder whether my alternative conception enables us to distinguish between Hannah and Kate, as described by El Kassar. Hannah is deeply and willingly ignorant about the high emissions of both carbon dioxide and sulfur dioxide from cruise ships (I recently found out that a single cruise trip produces roughly the same emissions as seven million cars in an average year combined). Kate is much more open-minded, but has simply never considered the issue in any detail.

She is in a state of suspending ignorance regarding the emission of cruise ships. I reply that they are both ignorant, at least propositionally ignorant, but that their ignorance has different, contingent features: Hannah’s ignorance is deep ignorance, Kate’s ignorance is suspending ignorance, Hannah’s ignorance is willing or intentional, Kate’s ignorance is not. These are among the contingent features of ignorance; both are ignorant and, therefore, meet the criteria that I laid out for the nature of ignorance.

The Nature and Accidental Features of Ignorance

I conclude that ignorance is the lack of propositional knowledge or true belief, the lack of objectual knowledge, or the lack of procedural knowledge. That is the nature of ignorance: each case meets this threefold disjunctive criterion. I also conclude that ignorance has a wide variety of accidental or contingent features. Various fields have drawn attention to these accidental or contingent features because they matter crucially in certain debates in those fields. It is not surprising then that the focus in mainstream epistemology is on the nature of ignorance, whereas the focus in agnotology, epistemology of race, feminist epistemology, and various other debates is on those context-dependent features of ignorance.

This is not at all to say that the nature of ignorance is more important than its accidental features. Contingent, context-dependent features of something may be significantly more important. For example, it may well be the case that we have the parents that we have essentially; that we would be someone else if we had different biological parents. If so, that is part of our nature or essence.

And yet, certain contingent and accidental features may matter more to us, such as whether or not our partner loves us. Let us not confuse the nature of something with the accidental features of it that we value or disvalue. If we get this distinction straight, there is no principled reason not to accept the threefold synthesis that I have suggested in this paper as a plausible alternative to El Kassar’s synthesis.[17]

Contact details: mail@rikpeels.nl

References

Driver, Julia. (1989). “The Virtues of Ignorance,” The Journal of Philosophy 86.7, 373-384.

El Kassar, Nadja. (2018). “What Ignorance Really Is: Examining the Foundations of Epistemology of Ignorance”, Social Epistemology, DOI: 10.1080/02691728.2018.1518498.

Le Morvan, Pierre. (2011). “On Ignorance: A Reply to Peels”, Philosophia 39.2, 335-344.

Medina, José. (2013). The Epistemology of Resistance (Oxford: Oxford University Press).

Mills, Charles. (2015). “Global White Ignorance”, in M. Gross and L. McGoey (eds.), Routledge International Handbook of Ignorance Studies (London: Routledge), 217-227.

Nottelmann, Nikolaj. (2015). “Ignorance”, in Robert Audi (ed.), Cambridge Dictionary of Philosophy, 3rd ed. (Cambridge: Cambridge University Press).

Peels, Rik. (2010). “What Is Ignorance?”, Philosophia 38, 57-67.

Peels, Rik. (2014). “What Kind of Ignorance Excuses? Two Neglected Issues”, The Philosophical Quarterly 64 (256), 478–496.

Peels, Rik, ed. (2017). Perspectives on Ignorance from Moral and Social Philosophy (New York: Routledge).

Peels, Rik. (2019). “Asserting Ignorance”, in Sanford C. Goldberg (ed.), Oxford Handbook of Assertion (Oxford: Oxford University Press), forthcoming.

Peels, Rik, and Martijn Blaauw, eds. (2016). The Epistemic Dimensions of Ignorance (Cambridge: Cambridge University Press).

Russell, Bertrand. (1980). The Problems of Philosophy (Oxford: Oxford University Press).

Schwitzgebel, Eric. (2002). “A Phenomenal, Dispositional Account of Belief”, Noûs 36.2, 249-275.

[1] El Kassar 2018, 7.

[2] El Kassar 2018, 7.

[3] E.g. Schwitzgebel 2002.

[4] Julia Driver (1989) has argued that certain moral virtues, such as modesty, imply some kind of ignorance. However, moral virtues are different from epistemic virtues, and the suggestion that something implies ignorance is different from the idea that something manifests ignorance.

[5] See Le Morvan 2011. See also various essays in Peels and Blaauw 2016; Peels 2017.

[6] See Peels 2010; 2014; 2019. See also various essays in Peels and Blaauw 2016; Peels 2017.

[7] See Russell 1980, 3.

[8] See Nottelmann 2015.

[9] If the Standard View on Ignorance is correct, then one could simply replace this with: Ignorance is a disposition of an epistemic agent that manifests itself in lack of (propositional, objectual, or procedural) knowledge.

[10] See Mills 2015, 217.

[11] See Medina 2013.

[12] El Kassar in her paper mentions Anne Meylan’s suggestion on this point. Anne Meylan has suggested – and confirmed to me in personal correspondence – that we ought to distinguish between the state of being ignorant (which is nicely captured by the Standard View or the New View) and the action or failure to act that induced that state of ignorance (that the agential and structural conceptions of ignorance refer to), such as absence of inquiry or a sloppy way of dealing with evidence. I fully agree with Anne Meylan’s distinction on this point and, as I argue in more detail below, taking this distinction into account can lead to a significantly improved account of ignorance.

[13] The disjunction is meant to be inclusive.

[15] See pp. 4-5 of her paper.

[16] As Anne Meylan has pointed out to me in correspondence, it is generally true that doxastic states are not as such morally bad; whether or not they are depends on their contingent, extrinsic features.

[17] For their helpful comments on earlier versions of this paper, I would like to thank Thirza Lagewaard, Anne Meylan, and Nadja El Kassar.

Author Information: Jeff Kochan, University of Konstanz, jwkochan@gmail.com.

Kochan, Jeff. “Suppressed Subjectivity and Truncated Tradition: A Reply to Pablo Schyfter.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 15-21.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44s

Image by Brandon Warren via Flickr / Creative Commons

 

This article responds to: Schyfter, Pablo. “Inaccurate Ambitions and Missing Methodologies: Thoughts on Jeff Kochan and the Sociology of Scientific Knowledge.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 8-14.

In his review of my book – Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge – Raphael Sassower objects that I do not address issues of market capitalism, democracy, and the ‘industrial-academic-military complex’ (Sassower 2018, 31). To this, I responded: ‘These are not what my book is about’ (Kochan 2018, 40).

In a more recent review, Pablo Schyfter tries to turn this response around, and use it against me. Turnabout is fair play, I agree. Rebuffing my friendly, constructive criticism of the Edinburgh School’s celebrated and also often maligned ‘Strong Programme’ in the Sociology of Scientific Knowledge (SSK), Schyfter argues that I have failed to address what the Edinburgh School is actually about (Schyfter 2018, 9).

Suppressing the Subject

More specifically, Schyfter argues that I expect things from the Edinburgh School that they never intended to provide. For example, he takes what I call the ‘glass bulb’ model of subjectivity, characterises it as a ‘form of realism,’ and then argues that I have, in criticising the School’s lingering adherence to this model, failed to address their ‘actual intents’ (Schyfter 2018, 8, 9). According to Schyfter, the Edinburgh School did not have among its intentions the sorts of things I represent in the glass-bulb model – these are not, he says, what the School is about.

This claim is clear enough. Yet, at the end of his review, Schyfter then muddies the waters. Rather than rejecting the efficacy of the glass-bulb model, as he had earlier, he now tries ‘expanding’ on it, suggesting that the Strong Programme is better seen as a ‘working light bulb’: ‘It may employ a glass-bulb, but cannot be reduced to it’ (Schyfter 2018, 14).

So is the glass-bulb model a legitimate resource for understanding the Edinburgh School, or is it not? Schyfter’s confused analysis leaves things uncertain. In any case, I agree with him that the Edinburgh School’s complete range of concerns cannot be reduced to those specific concerns I try to capture in the glass-bulb model.

The glass-bulb model is a model of subjectivity, and subjectivity is a central topic of Science as Social Existence. It is remarkable, then, that the word ‘subject’ and its cognates never appear in Schyfter’s review (apart from in one quote from me). One may furthermore wonder why Schyfter characterises the glass-bulb model as a ‘form of realism.’ No doubt, these two topics – subjectivity and realism – are importantly connected, but they are not the same. Schyfter has mixed them up, and, in doing so, he has suppressed subjectivity as a topic of discussion.

Different Kinds of Realism

Schyfter argues that I am ‘unfair’ in criticising the Edinburgh School for failing to properly address the issue of realism, because, he claims, ‘[t]heir work was not about ontology’ (Schyfter 2018, 9). As evidence for my unfairness, he quotes my reference to ‘the problem of how one can know that the external world exists’ (Schyfter 2018, 9; cf. Kochan 2017, 37). But the problem of how we can know something is not an ontological problem, it is an epistemological one, a problem of knowledge. Schyfter has mixed things up again.

Two paragraphs later, Schyfter then admits that the Edinburgh School ‘did not entirely ignore ontology’ (Schyfter 2018, 9). I agree. In fact, as I demonstrate in Chapter One, the Edinburgh School was keen to ontologically ground the belief that the ‘external world’ exists. Why? Because they see this as a fundamental premise of science, including their own social science.

I criticise this commitment to external-world realism, because it generates the epistemological problem of how one can know that the external world exists. And this epistemological problem, in turn, is vulnerable to sceptical attack. If the world is ‘external,’ the question will arise: external to what? The answer is: to the subject who seeks to know it.

The glass-bulb model reflects this ontological schema. The subject is sealed inside the bulb; the world is external to the bulb. The epistemological problem then arises of how the subject penetrates the glass barrier, makes contact with – knows – the world. This problem is invariably vulnerable to sceptical attack. One can avoid the problem, and the attack, by fully jettisoning the glass-bulb model. Crucially, this is not a rejection of realism per se, but only of a particular form of realism, namely, external-world realism.

Schyfter argues that the Edinburgh School accepts a basic premise, ‘held implicitly by people as they live their lives, that the world with which they interact exists’ (Schyfter 2018, 9). I agree; I accept it too. Yet he continues: ‘Kochan chastises this form of realism because it does not “establish the existence of the external world”’ (Schyfter 2018, 9).

That is not quite right. I agree that people, as they live their lives, accept that the world exists. But this is not external-world realism, and it is the latter view that I oppose. I ‘chastise’ the Edinburgh School for attempting to defend the latter view, when all they need to defend is the former. The everyday realist belief that the world exists is not vulnerable to sceptical attack, because it does not presuppose the glass-bulb model of subjectivity.

On this point, then, my criticism of the Edinburgh School is both friendly and constructive. It assuages their worries about sceptical attack – which I carefully document in Chapter One – without requiring them to give up their realism. But the transaction entails that they abandon their lingering commitment to the glass-bulb model, including their belief in an ‘external’ world, and instead adopt a phenomenological model of the subject as being-in-the-world.

Failed Diversionary Tactics

It is important to note that the Edinburgh School does not reject scepticism outright. As long as the sceptic attacks absolutist knowledge of the external world, they are happy to go along. But once the sceptic argues that knowledge of the external world, as such, is impossible, they demur, for this threatens their realism. Instead, they combine realism with relativism. Yet, as I argue, as long as they also combine their relativism with the glass-bulb model, that is, as long as theirs is an external-world realism, they will remain vulnerable to sceptical attack.

Hence, I wrote that, in the context of their response to the external-world sceptic, the Edinburgh School’s distinction between absolute and relative knowledge ‘is somewhat beside the point’ (Kochan 2017, 48). In response, Schyfter criticises me for neglecting the importance of the Edinburgh School’s relativism (Schyfter 2018, 10). But I have done no such thing. In fact, I wholly endorse their relativism. I do suggest, however, that it be completely divorced from the troublesome vestiges of the glass-bulb model of subjectivity.

Schyfter uses the same tactic in response to this further claim of mine: ‘For the purposes of the present analysis, whether [conceptual] content is best explained in collectivist or individualist terms is beside the point’ (Kochan 2017, 79). For this, I am accused of failing to recognise the importance of the Edinburgh School’s commitment to a collectivist or social conception of knowledge (Schyfter 2018, 11).

The reader should not be deceived into thinking that the phrase ‘the present analysis’ refers to the book as a whole. In fact, it refers to that particular passage of Science as Social Existence wherein I discuss David Bloor’s claim that the subject can make ‘genuine reference to an external reality’ (Kochan 2017, 79; cf. Bloor 2001, 149). Bloor’s statement relies on the glass-bulb model. Whether the subjectivity in the bulb is construed in individualist terms or in collectivist terms, the troubles caused by the model will remain.

Hence, I cannot reasonably be charged with ignoring the importance of social knowledge for the Edinburgh School. Indeed, the previous but one sentence to the sentence on which Schyfter rests his case reads: ‘This sociological theory of the normativity and objectivity of conceptual content is a central pillar of SSK’ (Kochan 2017, 79). It is a central pillar of Science as Social Existence as well.

Existential Grounds for Scientific Experience

Let me shift now to Heidegger. Like previous critics of Heidegger, Schyfter is unhappy with Heidegger’s concept of the ‘mathematical projection of nature.’ Although I offer an extended defense and development of this concept, Schyfter nevertheless insists that it does ‘not offer a clear explanation of what occurs in the lived world of scientific work’ (Schyfter 2018, 11).

For Heidegger, ‘projection’ structures the subject’s understanding at an existential level. It thus serves as a condition of possibility for both practical and theoretical experience. Within the scope of this projection, practical understanding may ‘change over’ to theoretical understanding. This change-over in experience occurs when a subject holds back from immersed, practical involvement with things, and instead comes to experience those things at a distance, as observed objects to which propositional statements may then be referred.

The kind of existential projection specific to modern science, Heidegger called ‘mathematical.’ Within this mathematical projection, scientific understanding may likewise change over from practical immersion in a work-world (e.g., at a lab bench) to a theoretical, propositionally structured conception of that same world (e.g., in a lab report).

What critics like Schyfter fail to recognise is that the mathematical projection explicitly envelops ‘the lived world of scientific work’ and tries to explain it (necessarily but not sufficiently) in terms of the existential conditions structuring that experience. This is different from – but compatible with – an ethnographic description of scientific life, which need not attend to the subjective structures that enable that life.

When such inattention is elevated to a methodological virtue, however, scientific subjectivity will be excluded from analysis. As we will see in a moment, this exclusion is manifest, on the sociology side, in the rejection of the Edinburgh School’s core principle of underdetermination.

In the mid-1930s, Heidegger expanded on his existential conception of science, introducing the term mathēsis in a discussion of the Scientific Revolution. Mathēsis has two features: metaphysical projection and work experiences. These are reciprocally related, always occurring together in scientific activity. I view this as a reciprocal relation between the empirical and the metaphysical, between the practical and the theoretical, a reciprocal relation enabled, in necessary part, by the existential conditions of scientific subjectivity.

Schyfter criticises my claim that, for Heidegger, the Scientific Revolution was not about a sudden interest in facts, measurement, or experiment, where no such interest had previously existed. For him, this is ‘excessively broad,’ ‘does not reflect the workings of scientific practice,’ and is ‘belittling of empirical study’ (Schyfter 2018, 12). This might be true if Heidegger had offered a theory-centred account of science. But he did not. Heidegger argued that what was decisive in the Scientific Revolution was, as I put it, ‘not that facts, experiments, calculation and measurement are deployed, but how and to what end they are deployed’ (Kochan 2017, 233).

According to Heidegger, in the 17th c. the reciprocal relation between metaphysical projection and work experience was mathematicised. As the projection became more narrowly specified – i.e., axiomatised – the manner in which things were experienced and worked with also became narrower. In turn, the more accustomed subjects became to experiencing and working with things within this mathematical frame, the more resolutely mathematical the projection became. Mathēsis is a kind of positive feedback loop at the existential level.

Giving Heidegger Empirical Feet

This is all very abstract. That is why I suggested that ‘[a]dditional material from the history of science will allow us to develop and refine Heidegger’s account of modern science in a way which he did not’ (Kochan 2017, 235). This empirical refinement and development takes up almost all of Chapters 5 and 6, wherein I consider: studies of diagnostic method by Renaissance physician-professors at the University of Padua, up until their appointment of Galileo in 1591; the influence of artisanal and mercantile culture on the development of early-modern scientific methods, with a focus on metallurgy; and the dispute between Robert Boyle and Francis Line in the mid-17th c. over the experimentally based explanation of suction.

As Paolo Palladino recognises in his review of Science as Social Existence, this last empirical case study offers a different account of events than was given by Steven Shapin and Simon Schaffer in their classic 1985 book Leviathan and the Air-Pump, which influentially applied Edinburgh School methods to the history of science (Palladino 2018, 42). I demonstrate that Heidegger’s account is compatible with this sociological account, and that it also offers different concepts leading to a new interpretation.

Finally, at the end of Chapter 6, I demonstrate the compatibility of Heidegger’s account of modern science with Bloor’s concept of ‘social imagery,’ not just further developing and refining Heidegger’s account of modern science, but also helping to more precisely define the scope of application of Bloor’s valuable methodological concept. Perhaps this does not amount to very much in the big picture, but it is surely more than a mere ‘semantic reformulation of Heidegger’s ideas,’ as Schyfter suggests (Schyfter 2018, 13).

Given all of this, I am left a bit baffled by Schyfter’s claims that I ‘belittle’ empirical methods, that I ‘do[] not present any analysis of SSK methodologies,’ and that I am guilty of ‘a general disregard for scientific practice’ (Schyfter 2018, 12, 11).

Saving an Edinburgh School Method

Let me pursue the point with another example. A key methodological claim of the Edinburgh School is that scientific theory is underdetermined by empirical data. In order to properly explain theory, one must recognise that empirical observation is an interpretative act, necessarily (but not sufficiently) guided by social norms.

I discuss this in Chapter 3, in the context of Bloor’s and Bruno Latour’s debate over another empirical case study from the history of science: the contradictory interpretations given by Robert Millikan and Felix Ehrenhaft of the natural phenomena we now call ‘electrons.’

According to Bloor, because Millikan and Ehrenhaft both observed the same natural phenomena, the divergence between their respective claims – that electrons do and do not exist – must be explained by reference to something more than those phenomena. This ‘something more’ is the divergence in the respective social conditions guiding Millikan and Ehrenhaft’s interpretations of the data (Kochan 2017, 124-5; see also Kochan 2010, 130-33). Electron theory is underdetermined by the raw data of experience. Social phenomena, or ‘social imagery,’ must also play a role in any explanation of how the controversy was settled.

Latour rejects underdetermination as ‘absurd’ (Kochan 2017, 126). This is part of his more general dismissal of the Edinburgh School, based on his exploitation of vulnerabilities in their lingering adherence to the glass-bulb model of subjectivity. I suggest that the Edinburgh School, by fully replacing the glass-bulb model with Heidegger’s model of the subject as being-in-the-world, can deflect Latour’s challenge, thus saving underdetermination as a methodological tool.

This would also allow the Edinburgh School to preserve subjectivity as a methodological resource for sociological explanation. Like Heidegger’s metaphysical projection, the Edinburgh School’s social imagery plays a necessary (but not a sufficient) role in guiding the subject’s interpretation of natural phenomena.

The ‘Tradition’ of SSK – Open or Closed?

Earlier, I mentioned the curious fact that Schyfter never uses the word ‘subject’ or its cognates. It is also curious that he neglects my discussion of the Bloor-Latour debate and never mentions underdetermination. In Chapter 7 of Science as Social Existence, I argue that Latour, in his attack on the Edinburgh School, seeks to suppress subjectivity as a topic for sociological analysis (Kochan 2017, 353-54, and, for methodological implications, 379-80; see also Kochan 2015).

More recently, in my response to Sassower, I noted the ongoing neglect of the history of disciplinary contestation within the field of science studies (Kochan 2018, 40). I believe that the present exchange with Schyfter nicely exemplifies that internal contestation, and I thank him for helping me to more fully demonstrate the point.

Let me tally up. Schyfter is silent on the topic of subjectivity. He is silent on the Bloor-Latour debate. He is silent on the methodological importance of underdetermination. And he tries to divert attention from his silence with specious accusations that, in Science as Social Existence, I belittle empirical research, that I disregard scientific practice, that I fail to recognise the importance of social accounts of knowledge, and that I generally do not take seriously Edinburgh School methodology.

Schyfter is eager to exclude me from what he calls the ‘tradition’ of SSK (Schyfter 2018, 13). He seems to view tradition as a cleanly bounded and internally cohesive set of ideas and doings. By contrast, in Science as Social Existence, I treat tradition as a historically fluid range of intersubjectively sustained existential possibilities, some inevitably vying against others for a place of cultural prominence (Kochan 2017, 156, 204f, 223, 370f). Within this ambiguously bounded and inherently fricative picture, I can count Schyfter as a member of my tradition.

Acknowledgement

My thanks to David Bloor and Martin Kusch for sharing with me their thoughts on Schyfter’s review. The views expressed here are my own.

Contact details: jwkochan@gmail.com

References

Bloor, David (2001). ‘What Is a Social Construct?’ Facta Philosophica 3: 141-56.

Kochan, Jeff (2018). ‘On the Sociology of Subjectivity: A Reply to Raphael Sassower.’ Social Epistemology Review and Reply Collective 7(5): 39-41. https://wp.me/p1Bfg0-3Xm

Kochan, Jeff (2017). Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge (Cambridge: Open Book Publishers). http://dx.doi.org/10.11647/OBP.0129

Kochan, Jeff (2015). ‘Putting a Spin on Circulating Reference, or How to Rediscover the Scientific Subject.’ Studies in History and Philosophy of Science 49:103-107. https://doi.org/10.1016/j.shpsa.2014.10.004

Kochan, Jeff (2010). ‘Contrastive Explanation and the “Strong Programme” in the Sociology of Scientific Knowledge.’ Social Studies of Science 40(1): 127-44. https://doi.org/10.1177/0306312709104780

Palladino, Paolo (2018). ‘Heidegger Today: On Jeff Kochan’s Science and Social Existence.’ Social Epistemology Review and Reply Collective 7(8): 41-46.

Sassower, Raphael (2018). ‘Heidegger and the Sociologists: A Forced Marriage?’ Social Epistemology Review and Reply Collective 7(5): 30-32.

Schyfter, Pablo (2018). ‘Inaccurate Ambitions and Missing Methodologies: Thoughts on Jeff Kochan and the Sociology of Scientific Knowledge.’ Social Epistemology Review and Reply Collective 7(8): 8-14.

Shapin, Steven and Simon Schaffer (1985). Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (Princeton: Princeton University Press).

Author Information: Luca Tateo, Aalborg University & Federal University of Bahia, luca@hum.aau.dk.

Tateo, Luca. “Ethics, Cogenetic Logic, and the Foundation of Meaning.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 1-8.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44i

Mural entitled “Paseo de Humanidad” on the Mexican side of the US border wall in the city of Heroica Nogales, in Sonora. Art by Alberto Morackis, Alfred Quiróz and Guadalupe Serrano.
Image by Jonathan McIntosh, via Flickr / Creative Commons

 

This essay is in reply to: Miika Vähämaa (2018) Challenges to Groups as Epistemic Communities: Liminality of Common Sense and Increasing Variability of Word Meanings, Social Epistemology, 32:3, 164-174, DOI: 10.1080/02691728.2018.1458352

In his interesting essay, Vähämaa (2018) discusses two issues that I find particularly relevant. The first concerns the foundation of meaning in language, which becomes problematic in the era of connectivism (Siemens, 2005) and post-truth (Keyes, 2004). The second is the appreciation of epistemic virtues in a collective context: how can the group enhance the epistemic skills of the individual?

I will try to explain why these problems are relevant and why it is worth developing Vähämaa’s (2018) reflection in the specific direction of group and person as complementary epistemic and ethical agents (Fricker, 2007). First, I will discuss the foundations of meaning in different theories of language. Then, I will discuss the problems related to the stability and liminality of meaning in the society of “popularity”. Finally, I will propose that the range of contemporary epistemic virtues should be supplemented with an ethical grounding and a cogenetic foundation of meaning.

The Foundation of Meaning in Language

The theories about the origins of human language can be grouped into four main categories, based on the elements they take to characterize ontogenesis and glottogenesis.

Sociogenesis Hypothesis (SH): the idea that language is a conventional product that historically originates in coordinated social activities and is ontogenetically internalized through individual participation in social interactions. The characteristic authors of SH are Wundt, Wittgenstein and Vygotsky (2012).

Praxogenesis Hypothesis (PH): the idea that language historically originates from praxis and coordinated actions. Ontogenetically, language emerges from sensorimotor coordination (e.g. gaze coordination). This is, for instance, the position of Mead, the idea of linguistic primes in Smedslund (Vähämaa, 2018) and the language-as-action theory of Austin (1975).

Phylogenesis Hypothesis (PhH): the idea that evolution has provided humans with an innate “language device”, emerging from the evolutionary preference for forming social groups of hunters and for collective, long-duration offspring care (Bouchard, 2013). Ontogenetically, the predisposition for language is wired in the brain and develops through maturation in social groups. This position is represented by evolutionary psychology and by innatism such as Chomsky’s linguistics.

Structure Hypothesis (StH): the idea that human language is a more or less logical system in which the elements are determined by reciprocal systemic relationships, partly conventional and partly ontic (Thao, 2012). This hypothesis is not really concerned with ontogenesis, but rather with the formal features of symbolic systems of distinctions. It is, for instance, the classical idea of Saussure and of structuralists like Derrida.

According to Vähämaa (2018), every theory of meaning must today confront a dramatic change in the way common sense knowledge is produced, circulated and modified in collective activities. Meaning needs some stability in order to be of collective utility. Moreover, meaning needs some validation to become stable.

The PhH solves this problem with a simple idea: if humans have survived and evolved, their evolutionary strategy regarding meaning has been successful. In a “hostile” natural environment, our ancestors must have found a way to communicate such that a danger would be understood in the same way by all group members and under different conditions, including when the danger is not actually present, as in bonfire tales or myths.

The PhH becomes problematic when we consider the post-truth era. What would be the evolutionary advantage of deconstructing the environmental foundations of meaning, even in a virtual environment? For instance, what would be the evolutionary advantage of the common sense belief that global warming is not real, considering that this false belief could bring mankind to extinction?

StH leads to the view of meaning as a configuration of formal conditions. Thus, stability is guaranteed by the structural relations of the linguistic system, rather than by the contribution of groups or individuals as epistemic agents. StH cannot account for the rapidity and liminality of meaning that Vähämaa (2018) attributes to common sense nowadays. SH and PH share the idea that meaning emerges from what people do together, and that stability is both the condition and the product of the fact that we establish contexts of meaningful action, habitual ways of doing things.

The problem today is that our accelerated Western capitalist societies have multiplied the ways of doing things and the number of groups in society, decoupling the habitual from common sense meaning. New habits, new words, personal actions and meanings are built, disseminated and destroyed in a short time. So, if “Our lives, with regard to language and knowledge, are fundamentally bound to social groups” (Vähämaa, 2018, p. 169), what happens to language and to knowledge when social groups multiply, segregate and disappear in a short time?

From Common Sense to the Bubble

The grounding of meaning in the group as epistemic agent has received a serious blow in the era of connectivism and post-truth. The idea of connectivism is that knowledge is distributed among the different agents of a collective network (Siemens, 2005). Knowledge does not reside in the “mind” or in a “memory”, but is rather produced in bits and pieces that the epistemic agent is required to search for and assemble through the collective effort of the group’s members.

Thus, depending on the configuration of the network, different information will be connected, and different pictures of the world will emerge. The meaning of words will differ if, for instance, the network of information is aggregated by different groups in combination with specific algorithms. The configuration of groups, mediated by social media, as in contemporary politics (Lewandowsky, Ecker & Cook, 2017), leads to the reproduction of “bubbles” of people who share the very same views and are exposed to the very same opinions, selected by an algorithm that shows only content compliant with their previous content preferences.
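The filtering mechanism just described can be made concrete with a minimal toy sketch. All names here are hypothetical illustrations, and no real platform’s recommender algorithm is implied; the point is only that a filter keyed to past preferences never surfaces dissenting content:

```python
# Toy illustration of preference-compliant filtering (hypothetical, not any
# real recommender system): items on topics outside the user's history never
# surface, so exposure narrows into a "bubble".

def filter_feed(items, user_history):
    """Keep only items on topics the user has already engaged with."""
    preferred = {item["topic"] for item in user_history}
    return [item for item in items if item["topic"] in preferred]

history = [{"topic": "climate-scepticism"}]
feed = [
    {"topic": "climate-scepticism", "title": "Warming is a myth"},
    {"topic": "climate-science", "title": "IPCC report summary"},
]

visible = filter_feed(feed, history)
# Only the preference-compliant item survives; the dissenting item is
# filtered out before the user ever encounters it.
```

Iterating such a filter over time can only shrink the range of topics a user encounters, which is the convergence toward a “common bubble” at issue here.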

The result is that the group loses a great deal of the epistemic capability that Vähämaa (2018) suggests as a foundation of meaning. The meanings preferred in this kind of epistemic bubble are the result of two operations of selection, both based on popularity. First, meaning will be aggregated by consensual agents, rather than dialectic ones. Meaning will always be convergent rather than controversial.

Second, among alternative meanings, the most “popular” will be chosen, rather than the most reliable. The epistemic bubble of connectivism originates in a misunderstanding. The idea is that a collectivity has more epistemic force than the individual alone, to the extent that every belief is scrutinized democratically and that, if every agent can contribute its own bit, the resulting knowledge will be more reliable, because it is the product of constant and massive peer review. Unfortunately, events show us a different picture.

Post-truth is actually a massive act of epistemic injustice (Fricker, 2007), to the extent that the reliability of the other as an epistemic agent is based on criteria of similarity rather than on dialectic. One is reliable as long as one is located within my own bubble; everything outside is “fake news”. The algorithmic selection of information reinforces the polarization. Thus, no hybridization becomes possible, and common sense (Vähämaa, 2018) is reduced to the common bubble. How can the epistemic community still be a source of meaning in the connectivist era?

Meaning and Common Sense

SH and PH about language point to a very important historical source: the philosopher Giambattista Vico (Danesi, 1993; Tateo, 2015). Vico can be considered the scholar of common sense and imagination (Tateo, 2015). Knowledge is built as a product of human experience and crystallized into the language of a given civilization. A civilization is the set of interpretations and solutions that different groups have found in response to common existential events, such as birth, death, mating, and natural phenomena.

According to Vico, all human beings share the fate of mortal existence and rely on each other to get along. This is the notion of common sense: the profound sense of humanity that we all share and that constitutes the ground for human ethical choices, wisdom, and collective living. Humans rely on imagination, before reason, to project themselves into others and into the world, in order to understand both. Imagination is the first step towards the understanding of Otherness.

When humans lose contact with this sensus communis, the shared sense of humanity, and start building their meanings on egoism or on pure rationality, civilizations slip into barbarism. Imagination thus gives access to intersubjectivity, the capability of feeling the other, while common sense constitutes the wisdom of developing ethical beliefs that will not harm the other. Vico’s ideas are echoed and made present in critical theory:

“We have no doubt (…) that freedom in society is inseparable from enlightenment thinking. We believe we have perceived with equal clarity, however, that the very concept of that thinking (…) already contains the germ of the regression which is taking place everywhere today. If enlightenment does not [engage in] reflection on this regressive moment, it seals its own fate (…) In the mysterious willingness of the technologically educated masses to fall under the spell of any despotism, in its self-destructive affinity to nationalist paranoia (…) the weakness of contemporary theoretical understanding is evident.” (Horkheimer & Adorno, 2002, xvi)

Common sense is the basis for the wisdom that allows us to question the foundational nature of the bubble. It is the basis for understanding that every meaning is not only defined in a positive way, but is also defined by its complementary opposite (Tateo, 2016).

When one uses the semantic prime “we” (Vähämaa, 2018), one immediately produces a system of meaning that implies the existence of a “non-we”: one is producing otherness. In return, the meaning of “we” can only be clearly defined through the clarification of who is “non-we”. Meaning is always cogenetic (Tateo, 2015). Without the capability to understand that by saying “we” people construct a cogenetic complex of meaning, the group is reduced to a self-confirming, self-reinforcing collective, in which the sense of being a valid epistemic agent is actually fake, because it is nothing but an act of epistemic arrogance.

How can we solve the problem of the epistemic bubble and give the relationship between group and person real epistemic value? How can we overcome the dangerous overlap between the sense of being functional in the group and false beliefs based on popularity?

Complementarity Between Meaning and Sense

My idea is that we must look into the complex space between “meaning”, understood as a collectively shared complex of socially constructed significations, and “sense”, understood as the very personal elaboration of meaning based on the person’s uniqueness (Vygotsky, 2012; Wertsch, 2000). Meaning and sense feed into each other, like common sense and imagination. Imagination is the psychic function that enables the person to feel into the other, and thus to establish the ethical and affective ground for common sense wisdom. It is the empathic movement for which Kant would later seek a logical foundation.

“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” (Kant 1993, p. 36. 4:429)

I would further claim that the logical foundation is made possible by the synthetic power of empathic imagination. Meaning and sense feed into each other. On the one hand, the collective is the origin of internalized psychic activities (SH), and thus the basis for the sense elaborated about one’s own unique life experience. On the other hand, the personal sense constitutes the basis for the externalization of meaning into the arena of collective activities, constantly innovating the meanings of words.

So, personal sense can be a strong antidote to the prevailing force of the meaning produced, for instance, in the epistemic bubble. My sense of what is “ought”, “empathic”, “human” and “ethic”, in other words my wisdom, can help me develop a critical stance towards meanings that are built in a self-feeding, uncritical way.

Can the dialectic, complementary, and cogenetic relationship between sense and meaning become the ground for a better epistemic performance, and for an appreciation of the liminal meanings produced in contemporary societies? In the last section, I will try to provide arguments in favor of this idea.

Ethical Grounding of Meaning

If connectivist and post-truth societies produce meanings that are based on a popularity check, rather than on epistemic appreciation, we risk a situation in which any belief is the contingent result of a collective epistemic agent that replicates its patterns into bubbles. One will listen only to messages that confirm one’s own preferences and beliefs, and reject different ones as unreliable. Inside the bubble there is no way to check the meaning, because the meaning is not cogenetic; it is consensual.

For instance, if I read and share a social media post claiming that migrants are the main criminal population, then, whatever my initial position toward the news, there is the possibility that within my group I will start to see only posts confirming the initial claim. The claim can be proven wrong, for instance by the press, but the belief will be hard to change, as the meaning of “migrant” in my bubble is likely to remain that of “criminal”. The collectivity will share an epistemically unjust position, to the extent that it will attribute a lessened epistemic capability to those who are not part of the group itself. How can one prevent the group from scaffolding “bad” epistemic skills, rather than empowering the individual (Vähämaa, 2018)?

The solution I propose is to develop an epistemic virtue based on two main principles: the ethical grounding of meaning and cogenetic logic. The ethical grounding of meaning is directly related to the articulation between common sense and wisdom in Vico’s sense (Tateo, 2015). In a post-truth world in which we cannot appreciate the epistemic foundation of meaning, we must rely on a different epistemic virtue in order to become critical toward messages. Ethical grounding, based on the personal sense of humanity, is of course not an epistemic test of reliability, but it is an alarm bell that makes us legitimately suspicious toward meanings. The second element of the new epistemic virtue is cogenetic logic (Tateo, 2016).

Meaning is grounded in the building of every belief as a complementary system between “A” and “non-A”. This implies that any meaning is constructed through the relationship with its complementary opposite. Truth emerges in a double dialectic movement (Silva Filho, 2014): through Socratic dialogue and through cogenetic logic. In conclusion, let me try to provide a practical example of this epistemic virtue.

The way to start discriminating potentially fake news, or tendentious interpretations of facts, would be essentially based on an ethical foundation. As in Vico’s wisdom of common sense, I would base my epistemic scrutiny on the imaginative work that allows me to access the other, and on the cogenetic logic that assumes every meaning is defined by its relationship with its opposite.

Let us imagine that we are exposed to a social media post stating that a caravan of migrants, travelling from Honduras across Central America toward the USA border, is actually made up of criminals sent by hostile foreign governments to destabilize the country right before elections. The same post claims that the caravan is a conspiracy and that all the press coverage is fake news.

Finally, the post presents some “debunking” pictures showing athletic young Latino men with their faces covered by scarves, to demonstrate that the caravan is made up not of families with children but of “soldiers” in good shape, who do not look as poor and desperate as the “mainstream” media claim. I do not know whether such a post has ever actually been made; I have simply assembled elements of very common discourses circulating on social media.

The task is now to assess the nature of this message, its meaning, and its reliability. I could rely on the group as a ground for assessing statements, to scrutinize their truth and justification. However, due to the “bubble” effect, I may fall into a simple tautological confirmation, owing to the configuration of my network of relations. I would probably find only posts confirming the statements and delegitimizing the opposite positions. In this case, the fact that the group will empower my epistemic confidence is a very dangerous element.

I could extend my search to alternative positions in order to establish a dialogue. However, I might not be able, alone, to find information that helps me assess the degree of bias in the statement. How can I exert my skepticism in a context of post-truth? I propose some initial epistemic moves, based on a common sense approach to meaning-making.

1) I must be skeptical of every message that uses violent, aggressive, or discriminatory language; such a message is “fake” by default.

2) I must be skeptical of every message that criminalizes or opposes whole social groups, even on the basis of real but isolated events, because this interpretation is biased by default.

3) I must be skeptical of every message that attacks or targets persons for their characteristics rather than discussing ideas or behaviors.

Assessing the hypothetical post about the caravan by the three rules mentioned above, one will immediately see that it violates all of them. Thus, no matter what information is collected by my epistemic bubble, I have justified reasons to be skeptical toward it. The foundation of the meaning of the message will lie neither in the group nor in the person. It will be based on the ethical position of common sense’s wisdom.

Contact details: luca@hum.aau.dk

References

Austin, J. L. (1975). How to do things with words. Oxford: Oxford University Press.

Bouchard, D. (2013). The nature and origin of language. Oxford: Oxford University Press.

Danesi, M. (1993). Vico, metaphor, and the origin of language. Bloomington: Indiana University Press.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of Enlightenment. Trans. Edmund Jephcott. Stanford: Stanford University Press.

Kant, I. (1993) [1785]. Grounding for the Metaphysics of Morals. Translated by Ellington, James W. (3rd ed.). Indianapolis and Cambridge: Hackett.

Keyes, R. (2004). The Post-Truth Era: Dishonesty and Deception in Contemporary Life. New York: St. Martin’s.

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353-369.

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1) http://www.itdl.org/Journal/Jan_05/article01.htm

Silva Filho, W. J. (2014). Davidson: Dialog, dialectic, interpretation. Utopía y praxis latinoamericana, 7(19).

Tateo, L. (2015). Giambattista Vico and the psychological imagination. Culture & Psychology, 21(2), 145-161.

Tateo, L. (2016). Toward a cogenetic cultural psychology. Culture & Psychology, 22(3), 433-447.

Thao, T. D. (2012). Investigations into the origin of language and consciousness. New York: Springer.

Vähämaa, M. (2018). Challenges to groups as epistemic communities: Liminality of common sense and increasing variability of word meanings. Social Epistemology, 32(3), 164-174. doi: 10.1080/02691728.2018.1458352

Vygotsky, L. S. (2012). Thought and language. Cambridge, MA: MIT Press.

Wertsch, J. V. (2000). Vygotsky’s Two Minds on the Nature of Meaning. In C. D. Lee & P. Smagorinsky (Eds.), Vygotskian perspectives on literacy research: Constructing meaning through collaborative inquiry (pp. 19-30). Cambridge: Cambridge University Press.

Author Information: Jonathan Matheson & Valerie Joly Chock, University of North Florida, jonathan.matheson@gmail.com.

Matheson, Jonathan; Valerie Joly Chock. “Knowledge and Entailment: A Review of Jessica Brown’s Fallibilism: Evidence and Knowledge.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 55-58.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42k

Photo by JBColorado via Flickr / Creative Commons


Jessica Brown’s Fallibilism is an exemplary piece of analytic philosophy. In it, Brown engages a number of significant debates in contemporary epistemology with the aim of making a case for fallibilism about knowledge. The book is divided into two halves. In the first half (ch. 1-4), Brown raises a number of challenges to infallibilism. In the second half (ch. 5-8), Brown responds to challenges to fallibilism. Brown’s overall argument is that since fallibilism is more intuitively plausible than infallibilism, and since it fares no worse in terms of responding to the main objections, we should endorse fallibilism.

What Is Fallibilism?

In the introductory chapter, Brown distinguishes between fallibilism and infallibilism. According to her, infallibilism is the claim that one knows that p only if one’s evidence entails p, whereas fallibilism denies this. Brown settles on this definition after examining some motivations for, and objections to, other plausible definitions of infallibilism. With these definitions in hand, the chapter turns to examine some motivation for fallibilism and infallibilism.

Brown then argues that infallibilists face a trilemma: skepticism, shifty views of knowledge, or generous accounts of knowledge. Put differently, infallibilists must either reject that we know a great deal of what we think we know (since our evidence rarely seems to entail what we take ourselves to know), embrace a view about knowledge where the standards for knowledge, or knowledge ascriptions, vary with context, or include states of the world as part of our evidence. Brown notes that her focus is on non-skeptical infallibilist accounts, and explains why she restricts her attention in the remainder of the book to infallibilist views with a generous conception of evidence.

In chapter 2, Brown lays the groundwork for her argument against infallibilism by demonstrating some commitments of non-skeptical infallibilists. In order to avoid skepticism, infallibilists must show that we have evidence that entails what we know. In order to do so, they must commit to certain claims regarding the nature of evidence and evidential support.

Brown argues that non-factive accounts of evidence are not suitable for defending infallibilism, and that infallibilists must embrace an externalist, factive account of evidence on which knowing that p is sufficient for p to be part of one’s evidence. That is, infallibilists need to endorse Factivity (p is evidence only if p is true) and the Sufficiency of knowledge for evidence (if one knows that p, then p is part of one’s evidence).

However, Brown argues, this is insufficient for infallibilists to avoid skepticism in cases of knowledge by testimony, inference to the best explanation, and enumerative induction. In addition, infallibilists are committed to the claim that if one knows p, then p is part of one’s evidence for p (the Sufficiency of knowledge for self-support thesis).

Sufficiency of Knowledge to Support Itself

Chapter 3 examines the Sufficiency of knowledge for self-support in more detail. Brown begins by examining how the infallibilist may motivate this thesis by appealing to a probabilistic account of evidential support. If probability raisers are evidence, then there is some reason to think that every proposition is evidence for itself.

The main problem for the thesis surrounds the infelicity of citing p as evidence for p. In the bulk of the chapter, Brown examines how the infallibilist may account for this infelicity by appealing to pragmatic explanations, conversational norms, or an error theory. Finding each of these explanations insufficient to explain the infelicity here, Brown concludes that the infallibilist’s commitment to the Sufficiency of knowledge for self-support thesis is indeed problematic.

Brown takes on the infallibilists’ conception of evidence in Chapter 4. As mentioned above, the infallibilist is committed to a factive account of evidence, where knowledge suffices for evidence. The central problem here is that such an account has it that intuitively equally justified agents (one in a good case and one in a bad case) are not in fact equally justified.

Brown then examines the ‘excuse maneuver’, which claims that the subject in the bad case is unjustified yet blameless in their belief, and the original intuition confuses these assessments. The excuse maneuver relies on the claim that knowledge is the norm of belief. Brown argues that the knowledge norm fails to provide comparative evaluations of epistemic positions where subjects are intuitively more or less justified, and fails to give an adequate account of propositional justification when the target proposition is not believed. In addition, Brown argues that extant accounts of what would provide the subject in the bad case with an excuse are all insufficient.

In Chapter 5 the book turns to defending fallibilism. The first challenge to fallibilism that Brown examines concerns closure. Fallibilism presents a threat to multi-premise closure since one could meet the threshold for knowledge regarding each individual premise, yet fail to meet it regarding the conclusion. Brown argues that giving up on closure is no cost to fallibilists since closure ought to be rejected on independent grounds having to do with defeat.

A subject can know the premises and deduce the conclusion from them, yet have a defeater (undercutting or rebutting) that prevents the subject from knowing the conclusion. Brown then defends such defeat counterexamples to closure from a number of recent objections to the very notion of defeat.

Chapter 6 focuses on undermining defeat and recent challenges that come to it from ‘level-splitting’ views. According to level-splitting views, rational akrasia is possible—i.e., it is possible to be rational in simultaneously believing both p and that your evidence does not support p. Brown argues that level-splitting views face problems when applied to theoretical and practical reasoning. She then examines and rejects attempts to respond to these objections to level-splitting views.

Brown considers objections to fallibilism from practical reasoning and the infelicity of concessive knowledge attributions in Chapter 7. She argues that these challenges are not limited to fallibilism but that they also present a problem for infallibilism. In particular, Brown examines how (fallibilist or infallibilist) non-skeptical views have difficulty accommodating the knowledge norm for practical reasoning (KNPR) in high-stakes cases.

She considers two possible responses: to reject KNPR or to maintain KNPR by means of explain-away maneuvers. Brown claims that one’s response is related to the notion of probability one takes as relevant to practical reasoning. According to her, fallibilists and infallibilists tend to respond differently to the challenge from practical reasoning because they adopt different views of probability.

However, Brown argues, both responses to the challenge are in principle available to each because it is compatible with their positions to adopt the alternative view of probability. Thus, Brown concludes that practical reasoning and concessive knowledge attributions do not provide reasons to prefer infallibilism over fallibilism, or vice versa.

Keen Focus, Insightful Eyes

Fallibilism is an exemplary piece of analytic philosophy. Brown is characteristically clear and accessible throughout. This book will be very much enjoyed by anyone interested in epistemology. Brown makes significant contributions to contemporary debates, making this a must read for anyone engaged in these epistemological issues. It is difficult to find much to resist in this book.

The arguments do not overstep and the central thesis is both narrow and modest. It’s worth emphasizing here that Brown does not argue that fallibilism is preferable to infallibilism tout court, but only that it is preferable to a very particular kind of infallibilism: non-skeptical, non-shifty infallibilism. So, while the arguments are quite strong, the target is narrower.

One of the central arguments against fallibilism that Brown considers concerns closure. While she distinguishes multi-premise closure from single-premise closure, the problems for fallibilism concern only the former, which she formulates as follows:

Necessarily, if S knows p1-n, competently deduces, and thereby comes to believe q, while retaining her knowledge of p1-n throughout, then S knows q. (101)

The fallibilist threshold condition is that knowledge that p requires that the probability of p on one’s evidence be greater than some threshold less than 1. This threshold condition generates counterexamples to multiple-premise closure in which S fails to know a proposition entailed by other propositions she knows. Where S’s evidence for each premise gives them a probability that meets the threshold, S knows each of the premises.

If together these premises entail q, then S knows premises p1-n that jointly entail conclusion q. The problem is that S knowing the premises in this way is compatible with the probability of the conclusion on S’s evidence not meeting the threshold. Thus, this presents the possibility of counterexamples to closure, and a problem for fallibilism.

As the argument goes, fallibilists must deny closure and this is a significant cost. Brown’s reply is to soften the consequence of denying closure by arguing that it is implausible due to alternative (and independent) reasons concerning defeat. Brown’s idea is that closure gives no reason to reject fallibilism, or favor infallibilism, given that defeat rules out closure in a way that is independent of the fallibilism-infallibilism debate.

After laying out her response, Brown moves on to consider and reply to objections concerning the legitimacy of defeat itself. She ultimately focuses on defending defeat against such objections and ignores other responses that may be available to fallibilists when dealing with this problem. Brown, though, is perhaps a little too quick to give up on closure.

Consider the following alternative framing of closure:

If S knows [p and p entails q] and believes q as the result of a competent deduction from that knowledge, then S knows q.

So understood, when there are multiple premises, closure only applies when the subject knows the conjunction of the premises and that the premises entail the conclusion. Framing closure in this way avoids the threshold problem (since the conjunction must be known). If S knows the conjunction and believes q (as the result of competent deduction), then S’s belief that q cannot be false. This is the case because the truth of p entailing q, coupled with the truth of p itself, guarantees that q is true. This framing of closure, then, eliminates the considered counterexamples.

Framing closure in this way not only avoids the threshold problem, but plausibly avoids the defeat problem as well. Regarding undercutting defeat, it is at least much harder to see how S can know that p entails q while possessing such a defeater. Regarding rebutting defeat, it is implausible that S would retain knowledge of the conjunction if S possesses a rebutting defeater.

However, none of this is a real problem for Brown’s argument. It simply seems that she has ignored some possible lines of response that would allow the fallibilist to keep some principle in the neighborhood of closure, which is an intuitive advantage.

Contact details: jonathan.matheson@gmail.com

References

Brown, Jessica. Fallibilism: Evidence and Knowledge. Oxford: Oxford University Press, 2018.

Author Information: András Szigeti, Linköping University, andras.szigeti@liu.se

Szigeti, András. “Seumas Miller: Joint Epistemic Action and Collective Moral Responsibility—A Reply.” Social Epistemology Review and Reply Collective 4, no. 5 (2015): 14-19.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-23R



Image credit: Ken Douglas, via flickr

In a series of books and articles, Miller has developed a refreshingly original and complex account of joint action and collective responsibility. This approach constitutes an interesting alternative to the current orthodoxy that seeks to explain shared agency in terms of joint intentions. Miller also offers a novel, moderately individualist conception of group responsibility, steering clear both of robust collectivism, according to which group responsibility does not reduce to the responsibility of individual group members, and of more radical forms of individualism, according to which collective responsibility is always just the sum of the responsibility of individual group members.

His present paper extends this account to the area of collective epistemic action. [1] I believe the approach is promising overall and its application to epistemology fruitful. In what follows, I will explore how the general account and its application could be further strengthened by making some of the paper’s central conceptual distinctions clearer.

Author Information: Jonathan Matheson, University of North Florida, jonathan.matheson@gmail.com

Matheson, Jonathan. “Epistemic Norms and Self-Defeat: A Reply to Littlejohn.” Social Epistemology Review and Reply Collective 4, no. 2 (2015): 26-32.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1Uo



Image credit: myshko_, via flickr

In “Are Conciliatory Views of Disagreement Self-Defeating?” I argued that we should revise how we understand conciliatory views of disagreement. Conciliatory views of disagreement claim that discovering that an epistemic peer disagrees with you is epistemically significant. In particular, they have been understood as claiming that becoming aware that an epistemic peer disagrees with you about a proposition makes you less justified in adopting the doxastic attitude that you had toward that proposition. So, if you believed p and became aware that your epistemic peer disbelieves p, then you would become less justified in believing p, at least so long as you have no undefeated reason to discount your peer’s conclusion about p. More formally, conciliationism has been understood as claiming the following.