
Author Information: Jeff Kochan, University of Konstanz, jwkochan@gmail.com.

Kochan, Jeff. “Suppressed Subjectivity and Truncated Tradition: A Reply to Pablo Schyfter.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 15-21.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44s


This article responds to: Schyfter, Pablo. “Inaccurate Ambitions and Missing Methodologies: Thoughts on Jeff Kochan and the Sociology of Scientific Knowledge.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 8-14.

In his review of my book – Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge – Raphael Sassower objects that I do not address issues of market capitalism, democracy, and the ‘industrial-academic-military complex’ (Sassower 2018, 31). To this, I responded: ‘These are not what my book is about’ (Kochan 2018, 40).

In a more recent review, Pablo Schyfter tries to turn this response around, and use it against me. Turnabout is fair play, I agree. Rebuffing my friendly, constructive criticism of the Edinburgh School’s celebrated and also often maligned ‘Strong Programme’ in the Sociology of Scientific Knowledge (SSK), Schyfter argues that I have failed to address what the Edinburgh School is actually about (Schyfter 2018, 9).

Suppressing the Subject

More specifically, Schyfter argues that I expect things from the Edinburgh School that they never intended to provide. For example, he takes what I call the ‘glass bulb’ model of subjectivity, characterises it as a ‘form of realism,’ and then argues that I have, in criticising the School’s lingering adherence to this model, failed to address their ‘actual intents’ (Schyfter 2018, 8, 9). According to Schyfter, the Edinburgh School did not have among its intentions the sorts of things I represent in the glass-bulb model – these are not, he says, what the School is about.

This claim is clear enough. Yet, at the end of his review, Schyfter then muddies the waters. Rather than rejecting the efficacy of the glass-bulb model, as he had earlier, he now tries ‘expanding’ on it, suggesting that the Strong Programme is better seen as a ‘working light bulb’: ‘It may employ a glass-bulb, but cannot be reduced to it’ (Schyfter 2018, 14).

So is the glass-bulb model a legitimate resource for understanding the Edinburgh School, or is it not? Schyfter’s confused analysis leaves things uncertain. In any case, I agree with him that the Edinburgh School’s complete range of concerns cannot be reduced to those specific concerns I try to capture in the glass-bulb model.

The glass-bulb model is a model of subjectivity, and subjectivity is a central topic of Science as Social Existence. It is remarkable, then, that the word ‘subject’ and its cognates never appear in Schyfter’s review (apart from in one quote from me). One may furthermore wonder why Schyfter characterises the glass-bulb model as a ‘form of realism.’ No doubt, these two topics – subjectivity and realism – are importantly connected, but they are not the same. Schyfter has mixed them up, and, in doing so, he has suppressed subjectivity as a topic of discussion.

Different Kinds of Realism

Schyfter argues that I am ‘unfair’ in criticising the Edinburgh School for failing to properly address the issue of realism, because, he claims, ‘[t]heir work was not about ontology’ (Schyfter 2018, 9). As evidence for my unfairness, he quotes my reference to ‘the problem of how one can know that the external world exists’ (Schyfter 2018, 9; cf. Kochan 2017, 37). But the problem of how we can know something is not an ontological problem, it is an epistemological one, a problem of knowledge. Schyfter has mixed things up again.

Two paragraphs later, Schyfter then admits that the Edinburgh School ‘did not entirely ignore ontology’ (Schyfter 2018, 9). I agree. In fact, as I demonstrate in Chapter One, the Edinburgh School was keen to ontologically ground the belief that the ‘external world’ exists. Why? Because they see this as a fundamental premise of science, including their own social science.

I criticise this commitment to external-world realism, because it generates the epistemological problem of how one can know that the external world exists. And this epistemological problem, in turn, is vulnerable to sceptical attack. If the world is ‘external,’ the question will arise: external to what? The answer is: to the subject who seeks to know it.

The glass-bulb model reflects this ontological schema. The subject is sealed inside the bulb; the world is external to the bulb. The epistemological problem then arises of how the subject penetrates the glass barrier, makes contact with – knows – the world. This problem is invariably vulnerable to sceptical attack. One can avoid the problem, and the attack, by fully jettisoning the glass-bulb model. Crucially, this is not a rejection of realism per se, but only of a particular form of realism, namely, external-world realism.

Schyfter argues that the Edinburgh School accepts a basic premise, ‘held implicitly by people as they live their lives, that the world with which they interact exists’ (Schyfter 2018, 9). I agree; I accept it too. Yet he continues: ‘Kochan chastises this form of realism because it does not “establish the existence of the external world”’ (Schyfter 2018, 9).

That is not quite right. I agree that people, as they live their lives, accept that the world exists. But this is not external-world realism, and it is the latter view that I oppose. I ‘chastise’ the Edinburgh School for attempting to defend the latter view, when all they need to defend is the former. The everyday realist belief that the world exists is not vulnerable to sceptical attack, because it does not presuppose the glass-bulb model of subjectivity.

On this point, then, my criticism of the Edinburgh School is both friendly and constructive. It assuages their worries about sceptical attack – which I carefully document in Chapter One – without requiring them to give up their realism. But the transaction entails that they abandon their lingering commitment to the glass-bulb model, including their belief in an ‘external’ world, and instead adopt a phenomenological model of the subject as being-in-the-world.

Failed Diversionary Tactics

It is important to note that the Edinburgh School does not reject scepticism outright. As long as the sceptic attacks absolutist knowledge of the external world, they are happy to go along. But once the sceptic argues that knowledge of the external world, as such, is impossible, they demur, for this threatens their realism. Instead, they combine realism with relativism. Yet, as I argue, as long as they also combine their relativism with the glass-bulb model, that is, as long as theirs is an external-world realism, they will remain vulnerable to sceptical attack.

Hence, I wrote that, in the context of their response to the external-world sceptic, the Edinburgh School’s distinction between absolute and relative knowledge ‘is somewhat beside the point’ (Kochan 2017, 48). In response, Schyfter criticises me for neglecting the importance of the Edinburgh School’s relativism (Schyfter 2018, 10). But I have done no such thing. In fact, I wholly endorse their relativism. I do suggest, however, that it be completely divorced from the troublesome vestiges of the glass-bulb model of subjectivity.

Schyfter uses the same tactic in response to this further claim of mine: ‘For the purposes of the present analysis, whether [conceptual] content is best explained in collectivist or individualist terms is beside the point’ (Kochan 2017, 79). For this, I am accused of failing to recognise the importance of the Edinburgh School’s commitment to a collectivist or social conception of knowledge (Schyfter 2018, 11).

The reader should not be deceived into thinking that the phrase ‘the present analysis’ refers to the book as a whole. In fact, it refers to that particular passage of Science as Social Existence wherein I discuss David Bloor’s claim that the subject can make ‘genuine reference to an external reality’ (Kochan 2017, 79; cf. Bloor 2001, 149). Bloor’s statement relies on the glass-bulb model. Whether the subjectivity in the bulb is construed in individualist terms or in collectivist terms, the troubles caused by the model will remain.

Hence, I cannot reasonably be charged with ignoring the importance of social knowledge for the Edinburgh School. Indeed, just two sentences before the one on which Schyfter rests his case, I write: ‘This sociological theory of the normativity and objectivity of conceptual content is a central pillar of SSK’ (Kochan 2017, 79). It is a central pillar of Science as Social Existence as well.

Existential Grounds for Scientific Experience

Let me shift now to Heidegger. Like previous critics of Heidegger, Schyfter is unhappy with Heidegger’s concept of the ‘mathematical projection of nature.’ Although I offer an extended defence and development of this concept, Schyfter nevertheless insists that it does ‘not offer a clear explanation of what occurs in the lived world of scientific work’ (Schyfter 2018, 11).

For Heidegger, ‘projection’ structures the subject’s understanding at an existential level. It thus serves as a condition of possibility for both practical and theoretical experience. Within the scope of this projection, practical understanding may ‘change over’ to theoretical understanding. This change-over in experience occurs when a subject holds back from immersed, practical involvement with things, and instead comes to experience those things at a distance, as observed objects to which propositional statements may then be referred.

The kind of existential projection specific to modern science, Heidegger called ‘mathematical.’ Within this mathematical projection, scientific understanding may likewise change over from practical immersion in a work-world (e.g., at a lab bench) to a theoretical, propositionally structured conception of that same world (e.g., in a lab report).

What critics like Schyfter fail to recognise is that the mathematical projection explicitly envelops ‘the lived world of scientific work’ and tries to explain it (necessarily but not sufficiently) in terms of the existential conditions structuring that experience. This is different from – but compatible with – an ethnographic description of scientific life, which need not attend to the subjective structures that enable that life.

When such inattention is elevated to a methodological virtue, however, scientific subjectivity will be excluded from analysis. As we will see in a moment, this exclusion is manifest, on the sociology side, in the rejection of the Edinburgh School’s core principle of underdetermination.

In the mid-1930s, Heidegger expanded on his existential conception of science, introducing the term mathēsis in a discussion of the Scientific Revolution. Mathēsis has two features: metaphysical projection and work experience. These are reciprocally related, always occurring together in scientific activity. I view this as a reciprocal relation between the empirical and the metaphysical, between the practical and the theoretical, a reciprocal relation enabled, in necessary part, by the existential conditions of scientific subjectivity.

Schyfter criticises my claim that, for Heidegger, the Scientific Revolution was not about a sudden interest in facts, measurement, or experiment, where no such interest had previously existed. For him, this is ‘excessively broad,’ ‘does not reflect the workings of scientific practice,’ and is ‘belittling of empirical study’ (Schyfter 2018, 12). This might be true if Heidegger had offered a theory-centred account of science. But he did not. Heidegger argued that what was decisive in the Scientific Revolution was, as I put it, ‘not that facts, experiments, calculation and measurement are deployed, but how and to what end they are deployed’ (Kochan 2017, 233).

According to Heidegger, in the 17th century the reciprocal relation between metaphysical projection and work experience was mathematicised. As the projection became more narrowly specified – i.e., axiomatised – the manner in which things were experienced and worked with also became narrower. In turn, the more accustomed subjects became to experiencing and working with things within this mathematical frame, the more resolutely mathematical the projection became. Mathēsis is a kind of positive feedback loop at the existential level.

Giving Heidegger Empirical Feet

This is all very abstract. That is why I suggested that ‘[a]dditional material from the history of science will allow us to develop and refine Heidegger’s account of modern science in a way which he did not’ (Kochan 2017, 235). This empirical refinement and development takes up almost all of Chapters 5 and 6, wherein I consider: studies of diagnostic method by Renaissance physician-professors at the University of Padua, up until their appointment of Galileo in 1591; the influence of artisanal and mercantile culture on the development of early-modern scientific methods, with a focus on metallurgy; and the dispute between Robert Boyle and Francis Line in the mid-17th century over the experimentally based explanation of suction.

As Paolo Palladino recognises in his review of Science as Social Existence, this last empirical case study offers a different account of events than was given by Steven Shapin and Simon Schaffer in their classic 1985 book Leviathan and the Air-Pump, which influentially applied Edinburgh School methods to the history of science (Palladino 2018, 42). I demonstrate that Heidegger’s account is compatible with this sociological account, and that it also offers different concepts leading to a new interpretation.

Finally, at the end of Chapter 6, I demonstrate the compatibility of Heidegger’s account of modern science with Bloor’s concept of ‘social imagery,’ not just further developing and refining Heidegger’s account of modern science, but also helping to more precisely define the scope of application of Bloor’s valuable methodological concept. Perhaps this does not amount to very much in the big picture, but it is surely more than a mere ‘semantic reformulation of Heidegger’s ideas,’ as Schyfter suggests (Schyfter 2018, 13).

Given all of this, I am left a bit baffled by Schyfter’s claims that I ‘belittle’ empirical methods, that I ‘do[] not present any analysis of SSK methodologies,’ and that I am guilty of ‘a general disregard for scientific practice’ (Schyfter 2018, 12, 11).

Saving an Edinburgh School Method

Let me pursue the point with another example. A key methodological claim of the Edinburgh School is that scientific theory is underdetermined by empirical data. In order to properly explain theory, one must recognise that empirical observation is an interpretative act, necessarily (but not sufficiently) guided by social norms.

I discuss this in Chapter 3, in the context of Bloor’s and Bruno Latour’s debate over another empirical case study from the history of science, the contradictory interpretations given by Robert Millikan and Felix Ehrenhaft of the natural phenomena we now call ‘electrons.’

According to Bloor, because Millikan and Ehrenhaft both observed the same natural phenomena, the divergence between their respective claims – that electrons do and do not exist – must be explained by reference to something more than those phenomena. This ‘something more’ is the divergence in the respective social conditions guiding Millikan and Ehrenhaft’s interpretations of the data (Kochan 2017, 124-5; see also Kochan 2010, 130-33). Electron theory is underdetermined by the raw data of experience. Social phenomena, or ‘social imagery,’ must also play a role in any explanation of how the controversy was settled.

Latour rejects underdetermination as ‘absurd’ (Kochan 2017, 126). This is part of his more general dismissal of the Edinburgh School, based on his exploitation of vulnerabilities in their lingering adherence to the glass-bulb model of subjectivity. I suggest that the Edinburgh School, by fully replacing the glass-bulb model with Heidegger’s model of the subject as being-in-the-world, can deflect Latour’s challenge, thus saving underdetermination as a methodological tool.

This would also allow the Edinburgh School to preserve subjectivity as a methodological resource for sociological explanation. Like Heidegger’s metaphysical projection, the Edinburgh School’s social imagery plays a necessary (but not a sufficient) role in guiding the subject’s interpretation of natural phenomena.

The ‘Tradition’ of SSK – Open or Closed?

Earlier, I mentioned the curious fact that Schyfter never uses the word ‘subject’ or its cognates. It is also curious that he neglects my discussion of the Bloor-Latour debate and never mentions underdetermination. In Chapter 7 of Science as Social Existence, I argue that Latour, in his attack on the Edinburgh School, seeks to suppress subjectivity as a topic for sociological analysis (Kochan 2017, 353-54, and, for methodological implications, 379-80; see also Kochan 2015).

More recently, in my response to Sassower, I noted the ongoing neglect of the history of disciplinary contestation within the field of science studies (Kochan 2018, 40). I believe that the present exchange with Schyfter nicely exemplifies that internal contestation, and I thank him for helping me to more fully demonstrate the point.

Let me tally up. Schyfter is silent on the topic of subjectivity. He is silent on the Bloor-Latour debate. He is silent on the methodological importance of underdetermination. And he tries to divert attention from his silence with specious accusations that, in Science as Social Existence, I belittle empirical research, that I disregard scientific practice, that I fail to recognise the importance of social accounts of knowledge, and that I generally do not take seriously Edinburgh School methodology.

Schyfter is eager to exclude me from what he calls the ‘tradition’ of SSK (Schyfter 2018, 13). He seems to view tradition as a cleanly bounded and internally cohesive set of ideas and doings. By contrast, in Science as Social Existence, I treat tradition as a historically fluid range of intersubjectively sustained existential possibilities, some inevitably vying against others for a place of cultural prominence (Kochan 2017, 156, 204f, 223, 370f). Within this ambiguously bounded and inherently fricative picture, I can count Schyfter as a member of my tradition.

Acknowledgement

My thanks to David Bloor and Martin Kusch for sharing with me their thoughts on Schyfter’s review. The views expressed here are my own.

Contact details: jwkochan@gmail.com

References

Bloor, David (2001). ‘What Is a Social Construct?’ Facta Philosophica 3: 141-56.

Kochan, Jeff (2018). ‘On the Sociology of Subjectivity: A Reply to Raphael Sassower.’ Social Epistemology Review and Reply Collective 7(5): 39-41. https://wp.me/p1Bfg0-3Xm

Kochan, Jeff (2017). Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge (Cambridge: Open Book Publishers). http://dx.doi.org/10.11647/OBP.0129

Kochan, Jeff (2015). ‘Putting a Spin on Circulating Reference, or How to Rediscover the Scientific Subject.’ Studies in History and Philosophy of Science 49:103-107. https://doi.org/10.1016/j.shpsa.2014.10.004

Kochan, Jeff (2010). ‘Contrastive Explanation and the “Strong Programme” in the Sociology of Scientific Knowledge.’ Social Studies of Science 40(1): 127-44. https://doi.org/10.1177/0306312709104780

Palladino, Paolo (2018). ‘Heidegger Today: On Jeff Kochan’s Science and Social Existence.’ Social Epistemology Review and Reply Collective 7(8): 41-46.

Sassower, Raphael (2018). ‘Heidegger and the Sociologists: A Forced Marriage?’ Social Epistemology Review and Reply Collective 7(5): 30-32.

Schyfter, Pablo (2018). ‘Inaccurate Ambitions and Missing Methodologies: Thoughts on Jeff Kochan and the Sociology of Scientific Knowledge.’ Social Epistemology Review and Reply Collective 7(8): 8-14.

Shapin, Steven and Simon Schaffer (1985). Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (Princeton: Princeton University Press).

Author Information: Luca Tateo, Aalborg University & Federal University of Bahia, luca@hum.aau.dk.

Tateo, Luca. “Ethics, Cogenetic Logic, and the Foundation of Meaning.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 1-8.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44i

Mural entitled “Paseo de Humanidad” on the Mexican side of the US border wall in the city of Heroica Nogales, in Sonora. Art by Alberto Morackis, Alfred Quiróz and Guadalupe Serrano.

This essay is in reply to: Miika Vähämaa (2018) Challenges to Groups as Epistemic Communities: Liminality of Common Sense and Increasing Variability of Word Meanings, Social Epistemology, 32:3, 164-174, DOI: 10.1080/02691728.2018.1458352

In his interesting essay, Vähämaa (2018) discusses two issues that I find particularly relevant. The first concerns the foundation of meaning in language, which becomes problematic in the era of connectivism (Siemens, 2005) and post-truth (Keyes, 2004). The second is the appreciation of epistemic virtues in a collective context: how can the group enhance the epistemic skills of the individual?

I will try to explain why these problems are relevant and why it is worth developing Vähämaa’s (2018) reflection in the specific direction of group and person as complementary epistemic and ethical agents (Fricker, 2007). First, I will discuss the foundations of meaning in different theories of language. Then, I will discuss the problems related to the stability and liminality of meaning in the society of “popularity”. Finally, I will propose the idea that the range of contemporary epistemic virtues should be supplemented with an ethical grounding of meaning and a cogenetic foundation of meaning.

The Foundation of Meaning in Language

Theories about the origins of human language can be grouped into four main categories, based on the elements they take to characterize ontogenesis and glottogenesis.

Sociogenesis Hypothesis (SH): the idea that language is a conventional product that historically originates from coordinated social activities and is ontogenetically internalized through individual participation in social interactions. Characteristic authors of SH are Wundt, Wittgenstein and Vygotsky (2012).

Praxogenesis Hypothesis (PH): the idea that language historically originates from praxis and coordinated actions. Ontogenetically, language emerges from sensorimotor coordination (e.g. gaze coordination). This is, for instance, the position of Mead, the idea of linguistic primes in Smedslund (Vähämaa, 2018), and the language-as-action theory of Austin (1975).

Phylogenesis Hypothesis (PhH): the idea that humans have been provided by evolution with an innate “language device”, emerging from the evolutionary preference for forming social groups of hunters and for collective, long-duration offspring care (Bouchard, 2013). Ontogenetically, a predisposition for language is wired into the brain and develops through maturation in social groups. This position is represented by evolutionary psychology and by innatism such as Chomsky’s linguistics.

Structure Hypothesis (StH): the idea that human language is a more or less logical system, in which the elements are determined by reciprocal systemic relationships, partly conventional and partly ontic (Thao, 2012). This hypothesis is not primarily concerned with ontogenesis, but rather with the formal features of symbolic systems of distinctions. It is, for instance, the classical idea of Saussure and of structuralists like Derrida.

According to Vähämaa (2018), every theory of meaning today has to deal with the problem of a dramatic change in the way common sense knowledge is produced, circulated and modified in collective activities. Meaning needs some stability in order to be of collective utility. Moreover, meaning needs some validation to become stable.

The PhH solves this problem with a simple idea: if humans have survived and evolved, their evolutionary strategy concerning meaning must have been successful. In a “hostile” natural environment, our ancestors must have found a way to communicate such that a danger would be understood in the same way by all group members and under different conditions, including when the danger was not actually present, as in bonfire tales or myths.

The PhH becomes problematic when we consider the post-truth era. What would be the evolutionary advantage of deconstructing the environmental foundations of meaning, even in a virtual environment? For instance, what would be the evolutionary advantage of the common sense belief that global warming is not real, considering that this false belief could bring humankind to extinction?

StH leads to the view of meaning as a configuration of formal conditions. Thus, stability is guaranteed by structural relations of the linguistic system, rather than by the contribution of groups or individuals as epistemic agents. StH cannot account for the rapidity and liminality of meaning that Vähämaa (2018) attributes to common sense nowadays. SH and PH share the idea that meaning emerges from what people do together, and that stability is both the condition and the product of the fact that we establish contexts of meaningful actions, ways of doing things in a habitual way.

The problem today is that our accelerated Western capitalistic societies have multiplied the ways of doing things and the number of groups in society, decoupling the habitual from common sense meaning. New habits, new words, personal actions and meanings are built, disseminated and destroyed in a short time. So, if “Our lives, with regard to language and knowledge, are fundamentally bound to social groups” (Vähämaa, 2018, p. 169), what happens to language and knowledge when social groups multiply, segregate and disappear in a short time?

From Common Sense to the Bubble

The grounding of meaning in the group as epistemic agent has received a serious blow in the era of connectivism and post-truth. The idea of connectivism is that knowledge is distributed among the different agents of a collective network (Siemens, 2005). Knowledge does not reside in the “mind” or in a “memory”, but is rather produced in bits and pieces that the epistemic agent is required to search out and assemble through the collective effort of the group’s members.

Thus, depending on the configuration of the network, different information will be connected, and different pictures of the world will emerge. The meaning of words will differ if, for instance, the network of information is aggregated by different groups in combination with specific algorithms. The configuration of groups, mediated by social media, as in the case of contemporary politics (Lewandowsky, Ecker & Cook, 2017), leads to the reproduction of “bubbles” of people who share the very same views and are exposed to the very same opinions, selected by an algorithm that will show only the content compliant with their previous content preferences.

The result is that the group loses a great deal of the epistemic capability that Vähämaa (2018) suggests as a foundation of meaning. The meanings of words preferred in this kind of epistemic bubble are the result of two operations of selection, both based on popularity. First, meaning will be aggregated by consensual agents rather than dialectical ones. Meaning will always be convergent rather than controversial.

Second, between alternative meanings, the most “popular” will be chosen rather than the most reliable. The epistemic bubble of connectivism originates from a misunderstanding. The idea is that a collectivity has more epistemic force than the individual alone, to the extent that any belief is scrutinized democratically, and that if every agent can contribute his or her own bit, knowledge will be more reliable, because it is the result of constant and massive peer review. Unfortunately, events show us a different picture.

Post-truth is actually a massive act of epistemic injustice (Fricker, 2007), to the extent that the reliability of the other as epistemic agent is based on criteria of similarity rather than on dialectic. One is reliable as long as one is located within my own bubble. Everything outside is “fake news”. The algorithmic selection of information reinforces this polarization. Thus, no hybridization becomes possible, and common sense (Vähämaa, 2018) is reduced to the common bubble. How can the epistemic community still be a source of meaning in the connectivist era?

Meaning and Common Sense

The SH and PH about language point to a very important historical source: the philosopher Giambattista Vico (Danesi, 1993; Tateo, 2015). Vico can be considered the scholar of common sense and imagination (Tateo, 2015). Knowledge is built as a product of human experience and crystallized into the language of a given civilization. Civilization is the set of interpretations and solutions that different groups have found in response to the common existential events, such as birth, death, mating, natural phenomena, etc.

According to Vico, all human beings share a fate of mortal existence and rely on each other to get along. This is the notion of common sense: the profound sense of humanity that we all share and that constitutes the ground for human ethical choices, wisdom and collective living. Humans rely on imagination, before reason, to project themselves into others and into the world, in order to understand both. Imagination is the first step towards the understanding of Otherness.

When humans lose contact with this sensus communis, the shared sense of humanity, and start building their meaning on egoism or on pure rationality, civilizations slip into barbarism. Imagination thus gives access to intersubjectivity, the capability of feeling the other, while common sense constitutes the wisdom of developing ethical beliefs that will not harm the other. Vico’s ideas are echoed and made present by critical theory:

“We have no doubt (…) that freedom in society is inseparable from enlightenment thinking. We believe we have perceived with equal clarity, however, that the very concept of that thinking (…) already contains the germ of the regression which is taking place everywhere today. If enlightenment does not [engage in] reflection on this regressive moment, it seals its own fate (…) In the mysterious willingness of the technologically educated masses to fall under the spell of any despotism, in its self-destructive affinity to nationalist paranoia (…) the weakness of contemporary theoretical understanding is evident.” (Horkheimer & Adorno, 2002, xvi)

Common sense is the basis for the wisdom that allows one to question the foundational nature of the bubble. It is the basis for understanding that every meaning is not only defined in a positive way, but is also defined by its complementary opposite (Tateo, 2016).

When one uses the semantic prime “we” (Vähämaa, 2018), one immediately produces a system of meaning that implies the existence of a “non-we”: one is producing otherness. In turn, the meaning of “we” can only be clearly defined through the clarification of who is “non-we”. Meaning is always cogenetic (Tateo, 2015). Without the capability to understand that by saying “we” people construct a cogenetic complex of meaning, the group is reduced to a self-confirming, self-reinforcing collective, in which the sense of being a valid epistemic agent is actually faked, because it is nothing but an act of epistemic arrogance.

How can we solve the problem of the epistemic bubble and give the relationship between group and person real epistemic value? How can we overcome the dangerous overlap between the sense of being functional in the group and false beliefs based on popularity?

Complementarity Between Meaning and Sense

My idea is that we must look into that complex space between "meaning", understood as a collectively shared complex of socially constructed significations, and "sense", understood as the very personal elaboration of meaning, based on the person's uniqueness (Vygotsky, 2012; Wertsch, 2000). Meaning and sense feed into each other, like common sense and imagination. Imagination is the psychic function that enables the person to feel into the other, and thus to establish the ethical and affective ground for the wisdom of common sense. It is the empathic movement for which Kant would later seek a logical foundation.

“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.” (Kant 1993, p. 36. 4:429)

I would further claim that the two feed into each other: the logical foundation is made possible by the synthetic power of empathic imagination. Meaning and sense likewise feed into each other. On the one hand, the collective is the origin of internalized psychic activities, and thus the basis for the sense elaborated about one's own unique life experience. On the other hand, the personal sense constitutes the basis for the externalization of meaning into the arena of collective activities, constantly innovating the meanings of words.

So, personal sense can be a strong antidote to the prevailing force of meaning produced, for instance, in the epistemic bubble. My sense of the "ought", the "empathic", the "human" and the "ethical", in other words my wisdom, can help me develop a critical stance towards meanings that are built in a self-feeding, uncritical way.

Can the dialectic, complementary and cogenetic relationship between sense and meaning become the ground for a better epistemic performance, and for an appreciation of the liminal meaning produced in contemporary societies? In the last section, I will try to provide arguments in favor of this idea.

Ethical Grounding of Meaning

If connectivistic and post-truth societies produce meanings based on popularity checks rather than on epistemic appreciation, we risk a situation in which any belief is the contingent result of a collective epistemic agent that replicates its patterns into bubbles. One will listen only to messages that confirm one's own preferences and beliefs, and reject differing ones as unreliable. Inside the bubble there is no way to check the meaning, because the meaning is not cogenetic; it is consensual.

For instance, if I read and share a post on social media claiming that migrants are the main criminal population, then whatever my initial position toward the news, there is the possibility that within my group I will start to see only posts confirming the initial claim. The claim can be proven wrong, for instance by the press, but the belief will be hard to change, as the meaning of "migrant" in my bubble is likely to remain that of "criminal". The collectivity will share an epistemically unjust position, to the extent that it attributes a lessened epistemic capability to those who are not part of the group itself. How can one prevent the group from scaffolding "bad" epistemic skills, rather than empowering the individual (Vähämaa, 2018)?

The solution I propose is to develop an epistemic virtue based on two main principles: the ethical grounding of meaning and cogenetic logic. The ethical grounding of meaning is directly related to the articulation between common sense and wisdom in Vico's sense (Tateo, 2015). In a post-truth world in which we cannot appreciate the epistemic foundation of meaning, we must rely on a different epistemic virtue in order to become critical toward messages. Ethical grounding, based on the personal sense of humanity, is of course not an epistemic test of reliability, but it is an alarm bell that makes us legitimately suspicious toward meanings. The second element of the new epistemic virtue is cogenetic logic (Tateo, 2016).

Meaning is grounded in the building of every belief as a complementary system between "A" and "non-A". This implies that any meaning is constructed through the relationship with its complementary opposite. Truth emerges in a double dialectical movement (Silva Filho, 2014): through Socratic dialogue and through cogenetic logic. In conclusion, let me try to provide a practical example of this epistemic virtue.

A first way to discriminate potentially fake news, or tendentious interpretations of facts, would be essentially based on an ethical foundation. As in Vico's wisdom of common sense, I would base my epistemic scrutiny on the imaginative work that allows me to access the other, and on the cogenetic logic that assumes every meaning is defined by its relationship with its opposite.

Let's imagine that we are exposed to a post on social media in which someone states that a caravan of migrants, travelling from Honduras across Central America toward the US border, is actually made up of criminals sent by hostile foreign governments to destabilize the country right before the elections. The same post claims that this is a conspiracy and that all the press coverage is fake news.

Finally, the post presents some "debunking" pictures showing athletic young Latino men with their faces covered by scarves, to demonstrate that the caravan is not made up of families with children, but of "soldiers" in good shape who do not look as poor and desperate as the "mainstream" media claim. I do not know whether such a post has ever actually been made; I have simply assembled elements of very common discourses circulating on social media.

The task is now to assess the nature of this message, its meaning and its reliability. I could rely on the group as a ground for assessing the statements, to scrutinize their truth and justification. However, due to the "bubble" effect, I may fall into simple tautological confirmation, owing to the configuration of my network of relations. I would probably find only posts confirming the statements and delegitimizing the opposite positions. In this case, the fact that the group will empower my epistemic confidence is a very dangerous element.

I could instead search for alternative positions in order to establish a dialogue. However, I might not be able, alone, to find information that can help me assess the statement with respect to its degree of bias. How can I exert my skepticism in a context of post-truth? I propose some initial epistemic moves, based on a common sense approach to meaning-making.

1) I must be skeptical of every message that uses violent, aggressive or discriminatory language, and treat such messages as "fake" by default.

2) I must be skeptical of every message that criminalizes or attacks whole social groups, even on the basis of real isolated events, because such an interpretation is biased by default.

3) I must be skeptical of every message that attacks or targets persons for their characteristics rather than discussing ideas or behaviors.

Assessing the hypothetical post about the caravan by the three rules mentioned above, one will immediately see that it violates all of them. Thus, no matter what information is collected in my epistemic bubble, I have justified reasons to be skeptical toward it. The foundation of the meaning of the message will lie neither in the group nor in the person. It will be based on the ethical position of common sense's wisdom.

Contact details: luca@hum.aau.dk

References

Austin, J. L. (1975). How to do things with words. Oxford: Oxford University Press.

Bouchard, D. (2013). The nature and origin of language. Oxford: Oxford University Press.

Danesi, M. (1993). Vico, metaphor, and the origin of language. Bloomington: Indiana University Press.

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of Enlightenment. Trans. Edmund Jephcott. Stanford: Stanford University Press.

Kant, I. (1993) [1785]. Grounding for the Metaphysics of Morals. Translated by Ellington, James W. (3rd ed.). Indianapolis and Cambridge: Hackett.

Keyes, R. (2004). The Post-Truth Era: Dishonesty and Deception in Contemporary Life. New York: St. Martin’s.

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353-369.

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1) http://www.itdl.org/Journal/Jan_05/article01.htm

Silva Filho, W. J. (2014). Davidson: Dialog, dialectic, interpretation. Utopía y praxis latinoamericana, 7(19).

Tateo, L. (2015). Giambattista Vico and the psychological imagination. Culture & Psychology, 21(2), 145-161.

Tateo, L. (2016). Toward a cogenetic cultural psychology. Culture & Psychology, 22(3), 433-447.

Thao, T. D. (2012). Investigations into the origin of language and consciousness. New York: Springer.

Vähämaa, M. (2018). Challenges to groups as epistemic communities: Liminality of common sense and increasing variability of word meanings. Social Epistemology, 32(3), 164-174. DOI: 10.1080/02691728.2018.1458352

Vygotsky, L. S. (2012). Thought and language. Cambridge, MA: MIT Press.

Wertsch, J. V. (2000). Vygotsky's two minds on the nature of meaning. In C. D. Lee & P. Smagorinsky (Eds.), Vygotskian perspectives on literacy research: Constructing meaning through collaborative inquiry (pp. 19-30). Cambridge: Cambridge University Press.

Author Information: Jonathan Matheson & Valerie Joly Chock, University of North Florida, jonathan.matheson@gmail.com.

Matheson, Jonathan; Valerie Joly Chock. “Knowledge and Entailment: A Review of Jessica Brown’s Fallibilism: Evidence and Knowledge.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 55-58.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42k

Photo by JBColorado via Flickr / Creative Commons

 

Jessica Brown’s Fallibilism is an exemplary piece of analytic philosophy. In it, Brown engages a number of significant debates in contemporary epistemology with the aim of making a case for fallibilism about knowledge. The book is divided into two halves. In the first half (ch. 1-4), Brown raises a number of challenges to infallibilism. In the second half (ch. 5-8), Brown responds to challenges to fallibilism. Brown’s overall argument is that since fallibilism is more intuitively plausible than infallibilism, and since it fares no worse in terms of responding to the main objections, we should endorse fallibilism.

What Is Fallibilism?

In the introductory chapter, Brown distinguishes between fallibilism and infallibilism. According to her, infallibilism is the claim that one knows that p only if one's evidence entails p, whereas fallibilism denies this. Brown settles on this definition after examining motivations for, and objections to, other plausible definitions of infallibilism. With these definitions in hand, the chapter turns to examine some motivation for fallibilism and infallibilism.

Brown then argues that infallibilists face a trilemma: skepticism, shifty views of knowledge, or generous accounts of knowledge. Put differently, infallibilists must either reject that we know a great deal of what we think we know (since our evidence rarely seems to entail what we take ourselves to know), embrace a view about knowledge where the standards for knowledge, or knowledge ascriptions, vary with context, or include states of the world as part of our evidence. Brown notes that her focus is on non-skeptical infallibilist accounts, and explains why she restricts her attention in the remainder of the book to infallibilist views with a generous conception of evidence.

In chapter 2, Brown lays the groundwork for her argument against infallibilism by demonstrating some commitments of non-skeptical infallibilists. In order to avoid skepticism, infallibilists must show that we have evidence that entails what we know. In order to do so, they must commit to certain claims regarding the nature of evidence and evidential support.

Brown argues that non-factive accounts of evidence are not suitable for defending infallibilism, and that infallibilists must embrace an externalist, factive account of evidence on which knowing that p is sufficient for p to be part of one’s evidence. That is, infallibilists need to endorse Factivity (p is evidence only if p is true) and the Sufficiency of knowledge for evidence (if one knows that p, then p is part of one’s evidence).

However, Brown argues, this is insufficient for infallibilists to avoid skepticism in cases of knowledge by testimony, inference to the best explanation, and enumerative induction. In addition, infallibilists are committed to the claim that if one knows p, then p is part of one’s evidence for p (the Sufficiency of knowledge for self-support thesis).

Sufficiency of Knowledge to Support Itself

Chapter 3 examines the Sufficiency of knowledge for self-support in more detail. Brown begins by examining how the infallibilist may motivate this thesis by appealing to a probabilistic account of evidential support. If probability raisers are evidence, then there is some reason to think that every proposition is evidence for itself.

The main problem for the thesis surrounds the infelicity of citing p as evidence for p. In the bulk of the chapter, Brown examines how the infallibilist may account for this infelicity by appealing to pragmatic explanations, conversational norms, or an error theory. Finding each of these explanations insufficient to explain the infelicity here, Brown concludes that the infallibilist’s commitment to the Sufficiency of knowledge for self-support thesis is indeed problematic.

Brown takes on the infallibilists’ conception of evidence in Chapter 4. As mentioned above, the infallibilist is committed to a factive account of evidence, where knowledge suffices for evidence. The central problem here is that such an account has it that intuitively equally justified agents (one in a good case and one in a bad case) are not in fact equally justified.

Brown then examines the ‘excuse maneuver’, which claims that the subject in the bad case is unjustified yet blameless in their belief, and the original intuition confuses these assessments. The excuse maneuver relies on the claim that knowledge is the norm of belief. Brown argues that the knowledge norm fails to provide comparative evaluations of epistemic positions where subjects are intuitively more or less justified, and fails to give an adequate account of propositional justification when the target proposition is not believed. In addition, Brown argues that extant accounts of what would provide the subject in the bad case with an excuse are all insufficient.

In Chapter 5 the book turns to defending fallibilism. The first challenge to fallibilism that Brown examines concerns closure. Fallibilism presents a threat to multi-premise closure since one could meet the threshold for knowledge regarding each individual premise, yet fail to meet it regarding the conclusion. Brown argues that giving up on closure is no cost to fallibilists since closure ought to be rejected on independent grounds having to do with defeat.

A subject can know the premises and deduce the conclusion from them, yet have a defeater (undercutting or rebutting) that prevents the subject from knowing the conclusion. Brown then defends such defeat counterexamples to closure from a number of recent objections to the very notion of defeat.

Chapter 6 focuses on undermining defeat and recent challenges that come to it from ‘level-splitting’ views. According to level-splitting views, rational akrasia is possible—i.e., it is possible to be rational in simultaneously believing both p and that your evidence does not support p. Brown argues that level-splitting views face problems when applied to theoretical and practical reasoning. She then examines and rejects attempts to respond to these objections to level-splitting views.

Brown considers objections to fallibilism from practical reasoning and the infelicity of concessive knowledge attributions in Chapter 7. She argues that these challenges are not limited to fallibilism but that they also present a problem for infallibilism. In particular, Brown examines how (fallibilist or infallibilist) non-skeptical views have difficulty accommodating the knowledge norm for practical reasoning (KNPR) in high-stakes cases.

She considers two possible responses: to reject KNPR or to maintain KNPR by means of explain-away maneuvers. Brown claims that one’s response is related to the notion of probability one takes as relevant to practical reasoning. According to her, fallibilists and infallibilists tend to respond differently to the challenge from practical reasoning because they adopt different views of probability.

However, Brown argues, both responses to the challenge are in principle available to each because it is compatible with their positions to adopt the alternative view of probability. Thus, Brown concludes that practical reasoning and concessive knowledge attributions do not provide reasons to prefer infallibilism over fallibilism, or vice versa.

Keen Focus, Insightful Eyes

Brown is characteristically clear and accessible throughout, and this book will be very much enjoyed by anyone interested in epistemology. Brown makes significant contributions to contemporary debates, making this a must-read for anyone engaged in these epistemological issues. It is difficult to find much to resist in this book.

The arguments do not overstep, and the central thesis is both narrow and modest. It's worth emphasizing here that Brown does not argue that fallibilism is preferable to infallibilism tout court, but only that it is preferable to a very particular kind of infallibilism: non-skeptical, non-shifty infallibilism. So, while the arguments are quite strong, the target is narrower.

One of the central arguments against fallibilism that Brown considers concerns closure. While she distinguishes multi-premise closure from single-premise closure, the problems for fallibilism concern only the former, which she formulates as follows:

Necessarily, if S knows p1-n, competently deduces, and thereby comes to believe q, while retaining her knowledge of p1-n throughout, then S knows q. (101)

The fallibilist threshold condition is that knowledge that p requires that the probability of p on one's evidence be greater than some threshold less than 1. This threshold condition generates counterexamples to multi-premise closure in which S fails to know a proposition entailed by other propositions she knows. Where S's evidence for each premise gives them a probability that meets the threshold, S knows each of the premises.

If together these premises entail q, then S knows premises p1-n that jointly entail conclusion q. The problem is that S's knowing the premises in this way is compatible with the probability of the conclusion on S's evidence not meeting the threshold. This opens the possibility of counterexamples to closure, and a problem for fallibilism.

As the argument goes, fallibilists must deny closure, and this is a significant cost. Brown's reply is to soften the consequence of denying closure by arguing that closure is implausible for alternative (and independent) reasons concerning defeat. Brown's idea is that closure gives no reason to reject fallibilism, or favor infallibilism, given that defeat rules out closure in a way that is independent of the fallibilism-infallibilism debate.

After laying out her response, Brown moves on to consider and reply to objections concerning the legitimacy of defeat itself. She ultimately focuses on defending defeat against such objections and ignores other responses that may be available to fallibilists when dealing with this problem. Brown, though, is perhaps a little too quick to give up on closure.

Consider the following alternative framing of closure:

If S knows [p and p entails q] and believes q as the result of a competent deduction from that knowledge, then S knows q.

So understood, when there are multiple premises, closure only applies when the subject knows the conjunction of the premises and that the premises entail the conclusion. Framing closure in this way avoids the threshold problem (since the conjunction must be known). If S knows the conjunction and believes q (as the result of competent deduction), then S’s belief that q cannot be false. This is the case because the truth of p entailing q, coupled with the truth of p itself, guarantees that q is true. This framing of closure, then, eliminates the considered counterexamples.

Framing closure in this way not only avoids the threshold problem, but plausibly avoids the defeat problem as well. Regarding undercutting defeat, it is at least much harder to see how S can know that p entails q while possessing such a defeater. Regarding rebutting defeat, it is implausible that S would retain knowledge of the conjunction if S possesses a rebutting defeater.

However, none of this is a real problem for Brown's argument. It simply seems that she has ignored some possible lines of response open to the fallibilist, lines that would allow the fallibilist to keep some principle in the neighborhood of closure, which is an intuitive advantage.

Contact details: jonathan.matheson@gmail.com

References

Brown, Jessica. Fallibilism: Evidence and Knowledge. Oxford: Oxford University Press, 2018.

Author Information: András Szigeti, Linköping University, andras.szigeti@liu.se

Szigeti, András. "Seumas Miller: Joint Epistemic Action and Collective Moral Responsibility—A Reply." Social Epistemology Review and Reply Collective 4, no. 5 (2015): 14-19.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-23R

Please refer to:


Image credit: Ken Douglas, via flickr

In a series of books and articles, Miller has developed a refreshingly original and complex account of joint action and collective responsibility. This approach constitutes an interesting alternative to the current orthodoxy that seeks to explain shared agency in terms of joint intentions. Miller also offers a novel, moderately individualist conception of group responsibility, steering clear both of robust collectivism, according to which group responsibility does not reduce to the responsibility of individual group members, and of more radical forms of individualism, according to which collective responsibility is always just the sum of the responsibility of individual group members.

His present paper extends this account to the area of collective epistemic action. [1] I believe the approach is promising overall and its application to epistemology fruitful. In what follows, I will explore how the general account and its application could be further strengthened by making some of the central conceptual distinctions of the paper clearer. Continue Reading…

Author Information: Jonathan Matheson, University of North Florida, jonathan.matheson@gmail.com

Matheson, Jonathan. “Epistemic Norms and Self-Defeat: A Reply to Littlejohn.” Social Epistemology Review and Reply Collective 4, no. 2 (2015): 26-32.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1Uo

Please refer to:


Image credit: myshko_, via flickr

In “Are Conciliatory Views of Disagreement Self-Defeating?” I argued that we should revise how we understand conciliatory views of disagreement. Conciliatory views of disagreement claim that discovering that an epistemic peer disagrees with you is epistemically significant. In particular, they have been understood as claiming that becoming aware that an epistemic peer disagrees with you about a proposition makes you less justified in adopting the doxastic attitude that you had toward that proposition. So, if you believed p and became aware that your epistemic peer disbelieves p, then you would become less justified in believing p, at least so long as you have no undefeated reason to discount your peer’s conclusion about p. More formally, conciliationism has been understood as claiming the following:  Continue Reading…