
Author Information: Priyadarshini Vijaisri, Centre for the Study of Developing Societies, vijaisri@csds.in.

Vijaisri, Priyadarshini. “The Turn of Postscript Narratives.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 22-27.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-41H

Image by Ian D. Keating via Flickr / Creative Commons


Recalcitrant narratives are ever relegated to the status of dispensable appendages of dominant ideological and epistemic regimes. Vaditya’s paper captures the turn of such postscript narratives’ epistemic concerns that are gaining critical significance in African, Latin American and Asian countries, emerging from intellectual and sociopolitical movements within and outside the Western context.

The driving force is the inadequacy of Eurocentric philosophy and epistemology to engage with contra-Western cosmologies, and the critical recognition that epistemology is no pure science but is mediated by ideologies, shaped by historical factors, undergirded by institutionalized epistemic suppression, and entrenched in power. Such a turn fundamentally foregrounds fidelity to ‘fact’ and to the universe of study rather than acquiescence to epistemic mimesis, and it has immense potential to bring critical reflexivity into newer disciplines, such as exclusion and discrimination studies, created precisely because traditional disciplines failed to deal with issues concerning the marginalized.

Prior to making some very preliminary points about future directions in the exploration of these issues, it is necessary to recognize the problems that dominant epistemic practices pose, especially in thinking about marginality in the Indian context. Proposed here is a promising mode of enquiry for disentangling the over-determined idea of the oppressed: the aesthetic frame.

An Essence of Oppression

It is increasingly recognized that the predominance of western epistemology, based on dualism, certitude, and a mechanistic conception of the universe, is the culmination of the negation of contra-epistemes, worldviews and technologies. Its methodological and ideological epistemic filters occlude from its purview a range of ideas, experiences and processes that can barely pass through the scientific-rationalist sieve or appear within a specific form: power should appear in the political, reason must be untainted by emotion, fact must correspond to the principle of bivalence, and true belief can be certified as knowledge only if it arrived in a particular mode; any non-rational detour could consign it to false knowledge – deformed episteme, methodless technologies, the illogical mythical, the irrational sensorial, and so on.

Thus, the simmering discontent in non-western societies, especially among their marginalized collectivities, is directed against a soliloquy of the western rational self, which appoints itself as the arbiter of true knowledge and whose provenance of authority is expanded and reinforced by its apologists outside itself through the institutionalization of epistemic authority in the image of the western ‘form’. The West thus stands as the transcendental form and, replication being an impossibility, the rest are at best ‘copies’ or duplicitous entities whose trajectory is deeply bound to the center.

For the diverse ideologies grounded in positivism and Enlightenment philosophy, the non-Western subjects (especially the marginalized amongst them) are feral boys who have accidentally strayed into civilization and ought to step into universal history to reclaim their humanness. Such modernist discourses, riddled with a priori conceptions, have impoverished the oppressed and resulted in mystification; their entrenched impertinence towards other cognitive modes has caused damage both in representations of, and self-representations by, the non-West/marginalized concerning the validity and relevance of their forms of knowing and technologies.

The crisis in Marxist politics and ideological framework, despite its brief revolutionary spells and significant role in generating radical consciousness in a few regions, is all too evident despite its entrenchment in academia. While it has rendered native categories and the non-western world as regressive deviance, the crisis is reflected in politics too, with the exit of the oppressed from the Marxist ranks, paradoxically due to Marxism’s own convoluted caste bias and negative valuation of their worldviews.

Inversely, the Subaltern subject is a peculiar species whose appearance and consciousness, given the finite nature of appearances/traces, is at best mediated, its very essence or ephemeral ontology simply lost in the many layers of obfuscating consciousness: an ontology of the disembodied subject. Thus the Freirean pedagogic vision remained in India at best an inadvertent idyll, where the epistemic base for liberation could not take off, given the many ‘lacks’ in the subject/cognitive agent and the distorted worldview and materiality. It is against this history of many interstices in cartographies of repression that B. de Sousa Santos’s subversive stance resonates and foregrounds a break from the epistemic center as a necessary condition for emancipation.

Diversity and Homogeneity

Thus, standpoint perspectives’ critique of positivism marks a fundamental shift, making cognitive agency legible and accountable and diversifying and revitalizing discursive space. Positivist epistemology’s conception of scientism and universalism (unadulterated by particularities) is a consequence of homogenization, which allows for the transposition of a singular particularity (that of the West) as the universal. The scientific method, by implication, is premised on the presupposition that truths and representations are products of a cognitive process free from cultural and ideological bias.

Thus, the conception of the knower as standing outside the world of enquiry by implication reinforces a positivist common sense: that errors/distortions are solely a consequence of method, absolving the epistemic agency (complicity/accountability) of the knower and precluding recognition of the nature of the relation between epistemology and worldview. Yet epistemology originates in the need for the exposition and justification of ontological and metaphysical truth claims. As such it creates discursive space, both within a particular philosophical tradition and outside it, for debate and justification of its claims; epistemology is thus a collective dialogical process, open to critique and revision.

Thus, within the Indian philosophical tradition deeply antithetical ideas (e.g., the multiplicity of standpoints on truth, or ideas of self/selves/non-self) could be disputed or conceded as a consequence of epistemic plurality and debate (as exemplified in the theory of the sources of knowledge).

Worldviews/structures are founded on a cultural substratum with its own rendering of the ontology of ideas/mental artifacts, i.e., the cognitive, unconscious/conscious and experiential states by which axiomatic truths are arrived at through the seamless flows between intuition, reason, emotion and so on. Such an ontology is complexly interwoven with distinctive conceptions of self and affects the ways in which the knower is defined in relation to the objects of knowledge or the phenomenal world. Application of a mechanistic worldview or historical materialism is incapable of engaging with entirely different universalisms opposed to it.

Also, while dominant codified systems offer coherent theories for grasping the essence of ideas, understanding oral tradition is beset with problems over the form and validity of knowledge. In speech traditions, where knowledge and skills (of art, technology or knowledge practices) are transmitted orally by collectivities, textualization marks a crisis in a culture. The text is at best instrumental for purposes of legible affinity or entitlements, rarely a referent for practice or for the validation of epistemic claims.[1] Failure to appreciate such epistemic practices has resulted in the repression of the technologies and cognitive systems of the marginalized as invalid forms of knowledge.

Genuinely Overcoming Domination

This double bind of falsified traditional representations and positivist accounts has led to a creative explosion of other representative forms that enable more critical introspection, as in literature, fiction and the autobiographical. The dominant ‘disciplinary matrix’ overlooks such ‘crisis’ as a dissoluble diversion. Such politics of knowledge fetters the marginalized in a double bind: tradition has its own pernicious facets, while modernity (the presumed antidote to internal repression and non-recognition) and its evocation serve as a justification of the credibility of such episteme and politics.

Struggles of emancipation find legitimacy within a specific mode, i.e., through eliciting proof of their abomination (the prototypical ideal of the oppressed) and irreverence towards oppressive tradition. This entails a conscious repression of histories and traditional forms of cultural critique grounded in a logic and worldview that contradict modern values. It is within this contradictory pull between the modern (negation of tradition and pathos) and the pre-modern (positive self-affirmation) that the consciousness of the oppressed wrestles, given the distortion of these spaces by the privileging of textual and singular dominant historical and cultural representations. Abandoning such discourses constricts the routes by which to retrace the lost epistemic/metaphysical ground and its non-redundancy via folk cultures, and further obstructs the resources for a grounded critical subject.

It would be erroneous to assume that the domain of the marginalized is a distorted/disjointed part of the whole, incapable of unfolding universals or coherent systems. Claims to the validity of such cognitive systems and technologies rest on their firm anchoring within the whole. By the nature of inherence, constituent parts of a whole possess the potential to reveal the whole. Thus, the margins are a site of immense potentiality, a signifier of a space that has no fixed or categorical relation with any single institutionalized or hegemonic discourse. This potentiality rests in their refractory power, which thereby offers pathways to retrace the basic organizing principles of Indic systems of knowledge.

The evidence for such an epistemology is offered in the perceptible folk/marginalized non-androcentric worldview. Such a universe, as a play of elements with the distinctive ontology of the elemental body, transfigures the conception of and interrelatedness between spirit and matter, non-human entities, spatiality, and the many planes of existence and states of consciousness and their relevance for relating to realities beyond the conscious mind, as well as the value attributed to work untethered from profit, the meaning of and relation with land, difference/hierarchies, ethics, the cyclical nature of time, and so on.

This metaphysical substratum, mediated by and enlivened through enactments, myths, rituals and customs as part of a coherent system, is formative of Indic universalism, and it is this shared ground that is expressive of the inherence of the truth claims of the marginalized discourses. Undeniably, presentations and disputations against dominance and violations, as well as counterclaims, manifest within this form and experience. The material artifact, a product of collective labor, itself becomes a universal metaphor for positive self-affirmation and for the re-imagination of the universe, radically centering the collective self in cosmology. The modern conceptions of labor, materiality and individualism substitute a mechanistic and atomistic worldview for such an aesthetic.

The Validity of Validity

The hegemonic deontic texts and archives, with their purposive language, enunciate a desired ideal and a ‘fact’ isolated from the diffuse cognitive/cultural system, and can barely provide a clue to the aesthetic. What then are the sources of validity of such folk beliefs and experience? This question strikes at the core of any epistemology founded in orality, ‘uncodified’ technologies, cognitive systems and experience, and problematizes the naive idea of the detached knower and the distant object of knowledge. Such an enquiry necessitates understanding the general folk epistemic orientation and the identifiable connections between the folk and the classical, in order to grasp the continuities and disjunctions.

The folk is the proximate arche and constitutes the substratum of a culture. The pervasion of orality signifies its primal quality, in virtue of which it transcends the definitive value attributed to it in philosophical and epistemic practices. Thus, its validity lies as much in its locus within the general knowledge tradition as in its inherence to ontology and its synchrony with the essence of its cosmology. Given current limitations, some very basic links can be identified between folk modes of knowing and ‘formal’ epistemology.

Word or testimony/sabda is recognized, though not uncontested, among most schools of Indian epistemology as a valid source of knowledge, and has two broad conceptualizations: one in terms of the self-evident, infallible truth of the Vedic scriptures, and the other the truth claim of the statements of a reliable person, accompanied by necessary conditions (absence of deceit and a specific form of presentation). The uniqueness of orality is evidenced by the creative combination of various skills of narration, argumentation and presentation/artistic representation in highly stylized form, involving a sensibility and intimacy different from Mimamsa hermeneutics and Nyaya logic.

Another shared epistemic resource is analogy/upamana, with divergent conceptualizations as a source of knowledge, and it has been subject to intricate analysis. Generally it is a specific type of cognition that generates new knowledge through similarities or resemblances. For folk cultures analogy possesses a truth-bearing quality, as proof of an idea, as a wise dictum of deontic value that sheds light in times of moral dilemma, or as the exposition of a metaphysical truth.

Analogical reasoning has special significance for the folk as a didactic and literary device for eliciting truth, establishing common ground, and grounding disputes and subversion, and it allows for seamless flows of ideas and experiences. Within the repertoire of reliable knowers, analogical and logical reasoning is a skill cultivated optimally.

Thus, the self-evident truths of such beliefs are referents of ‘facts’ or of factive collective experience whose meaning and value are tied to and codified in custom, mythologies, collective rites, festivities, everyday life and the tales people tell about themselves and others. Orality thus has a very distinctive metaphysical and epistemic value in this context.

It thus cannot be strictly translated as mere orality, for in subsuming other epistemic forms it radically attains a quality of universalism. Sustained by specialized communities (genealogists/bards) as testifiers/transmitters of such primal truths untethered by external justification, its veridicality is intrinsic to its efficacious quality of producing culturally desired goals and a reconfiguration of the world. It gains legitimacy from the collectivities that participate in its recreation along with the knowers.

Subversive Aesthetic

Such being the overarching frame of reference, subversion and conflict are presented in specific cultural forms that resonate with the spirit of the whole. Such an aesthetic mode (continuous with the theory of emotions/rasa vada) is grounded in a positive valuation of emotions and sense experience, different from western aesthetics/formalism. Emotions in folk aesthetics have a positive value as catalytic states for the realization of higher states of being and the grasping of truth, of the heroic, and of refinement. If anything, it is the marginalized who have sustained this robust aesthetic tradition, for it is in this form that their representations of their self and the world are anchored.

Ironically, Nietzsche would have found an unlikely protagonist in the ‘Pariah’! Inevitably, any systematic exploration of the aesthetic and its cultural trajectories would mandate a return to its basic connotation as relating to the sense(s)/perception, in order to discern root categories foundational to epistemology and metaphysics. It then becomes possible to trace the broad trajectory of the primacy accorded to reason and its affinity with the sense of sight in western thought (from the Platonic allegories and the idea of panoptic vision to the concept of the gaze) to its deployment as a mechanism of power (as in racial differentiation, color being a secondary property of vision) and in technologies of surveillance. Any uncritical application of such concepts, originating within a particular historical context, to non-Western contexts obscures other realities, mechanisms of power and worldviews founded on contrary conceptualizations of the senses.

Thus, the sustainability of a critical ‘pluriversal’ epistemology demands an investment in comparative philosophy/epistemology. It would be a fallacy to assume that engaging with the oppressed is little more than working on the fringes, with the residue of dominant knowledge systems. These vital sites allow for looking at the whole from the peripheries in enriching ways and, paradoxically, serve as one of the solid anchors by which to retrace the credence and rootedness of culture-specific epistemological traditions in their critique of traditional forms of oppression.

Maximizing the progress made thus far entails identifying newer sources of knowledge, exploring knowledge practices, and generating root concepts that can enable a coherent understanding of the many universalisms in comparativist perspective. Fundamentally, such quests are about the restitution of the lost ground of the oppressed, undoing the immeasurable damage of epistemic stigmatization through the demystification of hegemonic myths, and the repositioning of, and meaningful dialogue across, alternative ethical cosmologies.

Contact details: vijaisri@csds.in

References

Freire, Paulo. Pedagogy of the Oppressed. New York: Herder and Herder, 1970.

Obeyesekere, Gananath. The Awakened Ones: Phenomenology of Visionary Experience. New York: Columbia University Press, 2012.

Matilal, B. K., and A. Chakrabarti. Knowing from Words: Western and Indian Philosophical Analysis of Understanding and Testimony. Dordrecht: Springer Science+Business Media, 1994.

Sarukkai, Sundar. What is Science? Delhi: National Book Trust India, 2012.

de Sousa Santos, Boaventura. Epistemologies of the South: Justice against Epistemicide. London: Routledge, 2014.

Vaditya, Venkatesh. “Social Domination and Epistemic Marginalisation: Towards Methodology of the Oppressed.” Social Epistemology (2018). DOI: 10.1080/02691728.2018.1444111.

[1] These observations are based on folk/marginalized communities of Southern India, wherein knowledge is hereditarily transmitted. Communities have cultural mechanisms for the transmission of particular types of knowledge within each community, for example among leather workers, potters, ironsmiths, masons, sculptors, stone cutters, artists, toddy tappers, rope makers, weavers, washermen, healers, acrobats, jugglers, nomads, and tribal groups.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsasswe@uccs.edu.

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40g

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons


If nothing else, Steve Fuller has his ear to the pulse of popular culture and the academics who engage in its twists and turns. Starting with Brexit and continuing into the Trump-era abyss, “post-truth” was dubbed by the OED as its word of the year in 2016. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (and some straggling philosophers and anthropologists and perhaps some poststructuralists) may seem to someone who isn’t reading our critiques carefully to be partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of topics (historically anchored) that explain how we got to Brexit and Trump—yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by the rebellion of mavericks and iconoclasts that include intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid. 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them, we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse with capitalism and power. This critical trajectory has been well rehearsed, so suffice it here to say, SSK, SE, and the Edinburgh “Strong Programme” are part of a long and rich critical tradition (whose origins are Marxist). Adorno’s Frankfurt School is part of this tradition, and as we think about science, which had come to dominate Western culture by the 20th century (in the place of religion, whose power had by then waned as the arbiter of truth), it was its privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics “responsible” in Adorno’s political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller’s “baby.” It’s easy to overlook the slippage from the political to the scientific and vice versa: Popper’s claim that we never know the Truth doesn’t mean that his (and our) quest for discovering the Truth as such is given up, it’s only made more difficult as whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community in more particular ways—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (communalism, as per the Cold War terror), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset of putative acceptance of any hypothesis or theory that is articulated by any community member.

One detects the slippery slope that would move one from being on guard when engaged with any proposal to being so skeptical as to never accept any proposal no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from the collection of data in the North Pole to computer simulations—have not been sufficiently fulfilled (“questions remain”) to accept human responsibility for the increase of the earth’s temperature. Incidentally, here is Fuller’s explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic.] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation,” for the Vienna Circle, “falsification,” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now, various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt, in the name of attenuated scientific data collection, to ward off regulators, legislators, and the courts for decades. Instead of finding sufficient empirical evidence to attribute the failing health (and deaths) of workers and consumers to asbestos and nicotine, “organized skepticism” was weaponized to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough or has the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-blowing clowns ready to burn down the tent. The public remains confused, not realizing that just because the stakes have risen over the decades does not mean there are no standards that ever can be met. Despite lobbyists’ and lawyers’ best efforts of derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deathly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (admittedly, at the time, religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call “confirmation bias” (the view that we endorse what we already believe to be true regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as a critique but in reality is a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He definitely would be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on par with other discourses?  Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature’s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or whatever the president says is true. Adorno’s distinction of the responsible from the irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and scientific and pretend as if there is no distinction—methodologically and institutionally—between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep’s clothing! Don’t throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to a radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics, for example, unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims at the behest of corporate interests sanction a free-for-all where the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder-Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons


Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, it seems that Fuller reserves a special place in purgatory for Socrates (and Plato) for labeling the rhetorical expertise of the sophists—“the local post-truth merchants in fourth century BC Athens”—negatively. (21) It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only if Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting are true and evidence based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring to fruition the best results for the general wellbeing of a community? The point of suggesting the figure of a philosopher-king, to be sure, was not his rhetorical skills in this connection, but instead the deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato’s Republic, however criticized, was supposed to be about justice, not about expediency; it is an exploration of the rule of law and wisdom, not a manual about manipulation. If the recent presidential election in the US taught us anything, it’s that we should be wary of political gamesmanship and focus on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking has done in regard to Black Holes) and quite another to compare it to astrology (and give equal hearings to horoscope and Tarot card readers as to physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller’s hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: “After all, the people who tend to be demonized as ‘post-truth’ – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play.” (39) Fuller’s celebratory tone here may either bring a slight warning in the use of “yet” before the success “in dominating the entire field of play” or a prediction that indeed this is what is about to happen soon enough.

The neoliberal bottom-line surfaces in this assessment: he who wins must be right, the rich must be smart, and more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, as a way to distract from the details (and empirical bases) of one “knowledge game”—because it may not be to one’s ideological liking–in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the “right to be wrong,” but also for the legitimacy of the claim that “people who are free to speak their minds as individuals are most likely to reach the truth collectively.” The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher a greater encouragement for speculation or conjectures (Popper again), then Fuller’s book would be well-placed in the pantheon of intellectual pluralism; but if this endorsement obliterates the silly from the informed conjecture, then we are in trouble and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004) who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (his TED talk). As folk wisdom, this is charming; as public policy, this is dangerous. Would you like a random group of people deciding how to store nuclear waste, and where? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies harken back to the fascist ones we recall from Europe prior to and during WWII. Few today would entrust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike Fuller’s sanguine posture, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (when promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and completely putting into disarray public policy procedures from immigration regulations to the relaxation of emission controls (that ignores the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons


Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-“ does not mean “after,” but rather, “concurrently,” as another way of thinking all along: just because something is labeled “post-“, as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of “post-truth,” it seems that one must choose between one orientation or another, or at least for Fuller, who claims to prefer the “post-truth world” to the allegedly hierarchical and submissive world of “truth,” where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump’s claims were juxtaposed to those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn’t consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that “STS can be fairly credited with having both routinized in its own research practice and set loose on the general public–if not outright invented—at least four common post-truth tropes”:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished for the practice of the scientific community may have inadvertently found their way into a post-truth world that may abuse or exploit them in unintended ways. That is, something like “consensus” is challenged by STS because of how the scientific community pretends to get there knowing as it does that no such thing can ever be reached and when reached it may have been reached for the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this can also go too far.

Just because consensus is difficult to reach (it doesn’t mean unanimity) and is susceptible to corruption or bias doesn’t mean that anything goes. Some experimental results are more acceptable than others and some data are more informative than others, and the struggle for agreement may take its political toll on the scientific community, but this need not result in silly ideas about cigarettes being good for our health or that obesity should be encouraged from early childhood.

It seems important to focus on Fuller’s conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only in the epistemological plight of humanity but as an elixir with which to cure humanity’s ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and it must be said, from) the privileged academic position, from whose heights all other positions are dismissed as anti-intellectual populism, but we are also entreated to consider the rantings of the know-nothings of the post-truth world as the “growth pains of a maturing democratic intelligence.” Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where’s the evidence? What would possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, the “conceptual” openness to previously “ignored people and their ideas”? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad or outright racist or misogynist (as we see with Trump). Perhaps they were ignored because there was hope that they would change for the better, become more enlightened, not act on their white supremacist prejudices. Should we have “encouraged” explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When power plays by ignorant leaders like Trump are obvious, they can shed light on less obvious cases, such as the leaders of big pharma or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail, and the gamesmanship of post-truthers should be exposed for the motives that drive it.

Contact details: rsassowe@uccs.edu

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.

References

Theodor W. Adorno (1998/1963), Critical Models: Interventions and Catchwords. Translated by Henry W. Pickford. New York: Columbia University Press.

Kurt Andersen (2017), Fantasyland: How America Went Haywire: A 500-Year History. New York: Random House.

Monya Baker, “1,500 scientists lift the lid on reproducibility,” Nature Vol. 533, Issue 7604, 5/26/16 (corrected 7/28/16).

Michael Bowker (2003), Fatal Deception: The Untold Story of Asbestos. New York: Rodale.

Robert Darnton, “The Greatest Show on Earth,” New York Review of Books Vol. LXV, No. 11, 6/28/18, pp. 68-72.

Al Gore (2006), An Inconvenient Truth: The Planetary Emergency of Global Warming and What Can Be Done About It. New York: Rodale.

Richard Hofstadter (1962), Anti-Intellectualism in American Life. New York: Vintage Books.

Jean-François Lyotard (1984), The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.

Robert K. Merton (1973/1942), “The Normative Structure of Science,” The Sociology of Science: Theoretical and Empirical Investigations. Chicago and London: The University of Chicago Press, pp. 267-278.

Hans E. Plesser, “Reproducibility vs. Replicability: A Brief History of Confused Terminology,” Frontiers in Neuroinformatics, 2017; 11: 76; online: 1/18/18.

Robert N. Proctor (1995), Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer. New York: Basic Books.

James Surowiecki (2004), The Wisdom of Crowds. New York: Anchor Books.

Author Information: Claus-Christian Carbon, University of Bamberg, ccc@experimental-psychology.com

Carbon, Claus-Christian. “A Conspiracy Theory is Not a Theory About a Conspiracy.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 22-25.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Yb

See also:

  • Dentith, Matthew R. X. “Expertise and Conspiracy Theories.” Social Epistemology 32, no. 3 (2018), 196-208.

The power, creation, imagery, and proliferation of conspiracy theories are fascinating avenues to explore in the construction of public knowledge and the manipulation of the public for nefarious purposes. Their role in constituting our pop cultural imaginary and their status as central images in political propaganda are fertile ground for research.
Image by Neil Moralee via Flickr / Creative Commons

 

The simplest and most natural definition of a conspiracy theory is a theory about a conspiracy. Although this definition seems appealing due to its simplicity and straightforwardness, the problem is that most narratives about conspiracies do not fulfill the necessary requirements of being a theory. In everyday speech, mere descriptions, explanations, or even beliefs are often termed “theories”—such loose usage of this technical term is not helpful in the context of scientific activity.

Here, a theory does not aim to explain one specific event in time, e.g. the moon landing of 1969 or the assassination of President Kennedy in 1963, but aims at explaining a phenomenon on a very general level; e.g. that things with mass gravitate toward one another—independently of the specific natures of such entities. Such an epistemological status is rarely achieved by conspiracy theories, especially those about specific events in time. Even the more general claim that so-called chemtrails (i.e. long-lasting condensation trails) are initiated by omnipotent organizations across the planet, across time zones and altitudes, is at most a hypothesis – a rather narrow one – that specifically addresses one phenomenon but lacks the capability to make predictions about other phenomena.

Narratives that Shape Our Minds

So-called conspiracy theories have had a great impact on human history, on the social interaction between groups, on attitudes towards minorities, and on trust in state institutions. There is very good reason to include “conspiracy theories” in the canon of influential narratives, and so it is only logical to direct substantial scientific effort towards explaining and understanding how they operate, how people come to believe in them, and how humans build up knowledge on the basis of these narratives.

A brief survey of publications registered by Clarivate Analytics’ Web of Science documents 605 records with “conspiracy theories” as the topic (effective date 7 May 2018). These contributions came mostly from psychology (n=91) and political science (n=70), with a steep increase from about 2013 onwards, probably due to a special issue (“Research Topic”) in the journal Frontiers in Psychology organized in 2012 and 2013 by Viren Swami and Christopher Charles French.

As we have repeatedly argued (e.g., Raab, Carbon, & Muth, 2017), conspiracy theories are a very common phenomenon. Most people believe in at least some of them (Goertzel, 1994), which already indicates that believers do not belong to a minority group; rather, it is more or less part of the conditio humana to include such narratives in one’s everyday belief system.

So first of all, we can state that most such beliefs are neither pathological nor rare (see Raab, Ortlieb, Guthmann, Auer, & Carbon, 2013), but are largely caused by “good”[1] narratives triggered by context factors (Sapountzis & Condor, 2013), such as pervasive distrust within a society. The wide acceptance of many conspiracy theories can further be explained by adaptation effects that bias standard beliefs (Raab, Auer, Ortlieb, & Carbon, 2013). This view is not undisputed, as many authors identify specific pathological personality traits such as paranoia (Grzesiak-Feldman & Ejsmont, 2008; Pipes, 1997) which cause, enable, or at least promote the belief in conspiracy theories.

In fact, in science we mostly encounter a pathological and pejorative view of conspiracy theories and their believers. This negative connotation, and hence the prejudice toward conspiracy theories, makes it hard to test rigorously the stated facts, ideas, or relationships proposed by such explanatory structures (Rankin, 2017). Especially in the case of so-called “type I” conspiracy theories, in which authorities (“the system”) are accused of conspiracies (Wagner-Egger & Bangerter, 2007), such a prejudice can potentially jeopardize the democratic system (Bale, 2007).

Some of the conspiracies described in conspiracy theories, namely those allegedly taking place at the highest levels of the state, could indeed threaten people’s freedom, democracy, and even people’s lives, especially if they turned out to be “true” (e.g. the case of the whistleblower and previously alleged conspiracist Edward Snowden; see Van Puyvelde, Coulthart, & Hossain, 2017).

Understanding What a Theory Genuinely Is

In the present paper, I will focus on another, highly important point which is hardly addressed at all: Is the term “conspiracy theories” an adequate term in the first place? In fact, the suggestion of a conspiracy theory being a “theory about a conspiracy” (Dentith, 2014, p. 30) is indeed the simplest and seemingly most straightforward definition of “conspiracy theory”. Although appealing and allegedly logical, the term conspiracy theory as such is ill-defined. Actually, a “conspiracy theory” refers to a narrative which attributes an event to a group of conspirators. As such, it is clearly justified to associate such a narrative with the term “conspiracy”, but does a conspiracy theory have the epistemological status of a theory?

The simplest definition of a “theory” is that it represents a bundle of hypotheses which can explain a wide range of phenomena. Theories have to integrate the contained hypotheses in a concise, coherent, and systematic way. They have to go beyond the mere piling up of several statements or unlinked hypotheses. The application of a theory allows events or entities which are not explicitly described by its constituent hypotheses to be generalized over and hence predicted.

For instance, one of the most influential physical theories, the theory of special relativity (German original title “Zur Elektrodynamik bewegter Körper”), contains two hypotheses (Einstein, 1905) on whose basis, together with already existing theories, we can predict important phenomena which are not explicitly stated in the theory itself. Most people are well aware that mass and energy are equivalent. Whether we are analyzing the energy of a tossed ball or of a static car, we can use the very same theory. Whether the ball is red or whether it is a blue ball thrown by Napoleon Bonaparte does not matter—we just need to refer to the mass of the ball; in fact, we are only interested in the mass as such, and the ball itself no longer plays a role. Other theories show similar predictive power: for instance, they can predict (more or less precisely) events in the future, the location of various types of material in a magnetic field, or the trajectories of objects of different speeds under gravity.

Most conspiracy theories, however, refer to one single historical event. Looking through the “most enduring conspiracy theories” compiled in 2009 by TIME magazine on the 40th anniversary of the moon landing, it is instantly clear that they have explanatory power for just the specific events on which they are based, e.g. the “JFK assassination” in 1963, the “9/11 cover-up” in 2001, the “moon landings were faked” idea from 1969 or the “Paul is dead” storyline about Paul McCartney’s alleged secret death in 1966. In fact, such theories are just singular explanations, mostly ignoring counter-facts, alternative explanations and already given replies (Votsis, 2004).

But what, then, is the epistemological status of such narratives? Clearly, they aim to explain – and sometimes the explanations are indeed compelling, even coherent. What they mostly cannot demonstrate, though, is the ability to predict other events in other contexts. If these narratives belong to this class of explanatory stories, we should be less liberal in calling them “theories”. Unfortunately, it was Karl Popper himself who coined the term “conspiracy theory” in the 1940s (Popper, 1949)—the same Popper who advocated very strict criteria for scientific theories and in so doing became one of the most influential philosophers of science (Suppe, 1977). This imprecise terminology diluted the genuine meaning of (scientific) theories.

Stay Rigorous

From a language-pragmatics perspective, it seems odd to abandon the term conspiracy theory, as it is a well-established and frequently used term in everyday language around the globe. Substitutes such as conspiracy narratives, conspiracy stories, or conspiracy explanations would fit much better, but acceptance of such terms might be quite low. Nevertheless, we should at least bear in mind that most narratives of this kind cannot qualify as theories and so cannot ground a wider research program, although their contents and implications are often far-reaching, potentially important for society and hence, in some cases, also worth checking.

Contact details: ccc@experimental-psychology.com

References

Bale, J. M. (2007). Political paranoia v. political realism: on distinguishing between bogus conspiracy theories and genuine conspiratorial politics. Patterns of Prejudice, 41(1), 45-60. doi:10.1080/00313220601118751

Dentith, M. R. X. (2014). The philosophy of conspiracy theories. New York: Palgrave.

Einstein, A. (1905). Zur Elektrodynamik bewegter Körper [On the electrodynamics of moving bodies]. Annalen der Physik, 17, 891-921.

Goertzel, T. (1994). Belief in conspiracy theories. Political Psychology, 15(4), 731-742.

Grzesiak-Feldman, M., & Ejsmont, A. (2008). Paranoia and conspiracy thinking of Jews, Arabs, Germans and Russians in a Polish sample. Psychological Reports, 102(3), 884.

Pipes, D. (1997). Conspiracy: How the paranoid style flourishes and where it comes from. New York: Simon & Schuster.

Popper, K. R. (1949). Prediction and prophecy and their significance for social theory. Paper presented at the Proceedings of the Tenth International Congress of Philosophy, Amsterdam.

Raab, M. H., Auer, N., Ortlieb, S. A., & Carbon, C. C. (2013). The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Frontiers in Personality Science and Individual Differences, 4(453), 1-8.

Raab, M. H., Carbon, C. C., & Muth, C. (2017). Am Anfang war die Verschwörungstheorie [In the beginning, there was the conspiracy theory]. Berlin: Springer.

Raab, M. H., Ortlieb, S. A., Guthmann, K., Auer, N., & Carbon, C. C. (2013). Thirty shades of truth: conspiracy theories as stories of individuation, not of pathological delusion. Frontiers in Personality Science and Individual Differences, 4(406).

Rankin, J. E. (2017). The conspiracy theory meme as a tool of cultural hegemony: A critical discourse analysis. (PhD), Fielding Graduate University, Santa Barbara, CA.

Sapountzis, A., & Condor, S. (2013). Conspiracy accounts as intergroup theories: Challenging dominant understandings of social power and political legitimacy. Political Psychology. doi:10.1111/pops.12015

Suppe, F. (Ed.) (1977). The structure of scientific theories (2nd ed.). Urbana: University of Illinois Press.

Van Puyvelde, D., Coulthart, S., & Hossain, M. S. (2017). Beyond the buzzword: Big data and national security decision-making. International Affairs, 93(6), 1397-1416. doi:10.1093/ia/iix184

Votsis, I. (2004). The epistemological status of scientific theories: An investigation of the structural realist account. (PhD), London School of Economics and Political Science, London.

Wagner-Egger, P., & Bangerter, A. (2007). The truth lies elsewhere: Correlates of belief in conspiracy theories. Revue Internationale De Psychologie Sociale-International Review of Social Psychology, 20(4), 31-61.

[1] It is important to stress that a “good narrative” in this context means “an appealing story” in which people are interested; by no means does the author want to allow confusion by suggesting the meaning as being “positive”, “proper”, “adequate” or “true”.

Author Information: Paul R. Smart, University of Southampton, ps02v@ecs.soton.ac.uk

Smart, Paul R. “(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 45-55.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uq

Please refer to:

Image by BTC Keychain via Flickr / Creative Commons

 

Richard Heersmink’s (2018) article, A virtue epistemology of the Internet: Search engines, intellectual virtues, and education, provides an important and timely analysis of the Internet from the standpoint of virtue epistemology.[1] According to Richard, the Internet is an important epistemic resource, but it is one that comes with a range of epistemic hazards. Such hazards, he suggests, motivate a consideration of the ways in which individuals should interact with the Internet.

In particular, Richard appeals to a specific branch of virtue epistemology, known as virtue responsibilism, arguing that certain kinds of cognitive trait (e.g., curiosity and open-mindedness) are useful in helping us extract maximal epistemic benefit from the Internet. Given the utility of such traits, coupled with the epistemic importance of the Internet, Richard suggests that educational policy should be adapted so as to equip would-be knowers with the cognitive wherewithal to cope with the epistemic challenges thrown up by the online environment.

There is, no doubt, something right about all this. Few would disagree with the claim that a certain level of discernment and discrimination is important when it comes to the evaluation of online content. Whether such ‘virtues’ are best understood from the perspective of virtue responsibilism or virtue reliabilism is, I think, a moot point, for I suspect that in the case of both virtue responsibilism and virtue reliabilism what matters is the way in which belief-forming informational circuits are subject to active configuration by processes that may be broadly construed as metacognitive in nature (Smart, in pressa). That, however, is a minor quibble, and it is one that is of little consequence to the issues raised in Richard’s paper.

For the most part, then, I find myself in agreement with many of the assumptions that motivate the target article. I agree that the Internet is an important epistemic resource that is unprecedented in terms of its scale, scope, and accessibility. I also agree that, at the present time, the Internet is far from an epistemically safe environment, and this raises issues regarding the epistemic standing of individual Internet users. In particular, it looks unlikely that the indiscriminate selection and endorsement of online information will do much to bolster one’s epistemic credentials.

We thus encounter something of a dilemma: As an epistemic resource, the Internet stands poised to elevate our epistemic standing, but as an open and public space the Internet provides ample opportunities for our doxastic systems to be led astray. The result is that we are obliged to divide the online informational cornucopia into a treasure trove of genuine facts and a ragbag collection of ‘false facts’ and ‘fake news.’ The information superhighway, it seems, promises to expand our epistemic power and potential, but the road ahead is one that is fraught with a dizzying array of epistemic perils, problems, and pitfalls. What ought we to do in response to such a situation?

It is at this point that I suspect my own views start to diverge with those of the target article. Richard’s response to the dilemma is to focus attention on the individual agent and consider the ways in which an agent’s cognitive character can be adapted to meet the challenges of the Internet. My own approach is somewhat different. It is borne out of three kinds of doubt: doubts about the feasibility (although not the value) of virtue-oriented educational policies, doubts about the basic validity of virtue theoretic conceptions of knowledge, and doubts about whether the aforementioned dilemma is best resolved by attempting to change the agent as opposed to the environment in which the agent is embedded. As always, space is limited and life is short, so I will restrict my discussion to issues that I deem to be of greatest interest to the epistemological community.

Reliable Technology

Inasmuch as intellectual virtues are required for online knowledge—i.e., knowledge that we possess as a result of our interactions and engagements with the Internet—they are surely only part of a much  broader (and richer) story that includes details about the environment in which our cognitive systems operate. In judging the role of intellectual virtue in shielding us from the epistemic hazards of the online environment, it therefore seems important to have some understanding of the actual technologies we interact with.

This is important because it helps us understand the kinds of intellectual virtue that might be required, as well as the efficacy of specific intellectual virtues in helping us believe the truth (and thus working as virtues in the first place). Internet technologies are, of course, many and varied, and it will not be possible to assess their general relevance to epistemological debates in the present commentary. For the sake of brevity, I will therefore restrict my attention to one particular technology: blockchain.

Blockchain is perhaps best known for its role in supporting the digital cryptocurrency, Bitcoin. It provides us with a means of storing data in a secure fashion, using a combination of data encryption and data linking techniques. For present purposes, we can think of a blockchain as a connected set of data records (or data blocks), each of which contains some body of encrypted data. In the case of Bitcoin, of course, the data blocks contain data of a particular kind, namely, data pertaining to financial transactions. But this is not the only kind of data that can be stored in a blockchain. In fact, blockchains can be used to store information about pretty much anything. This includes online voting records, news reports, sensor readings, personal health records, and so on.

Once data is recorded inside a blockchain, it is very difficult to modify. In essence, the data stored within a blockchain is immutable, in the sense that it cannot be changed without ‘breaking the chain’ of data blocks, and thereby invalidating the data contained within the blockchain. This property makes blockchains of considerable epistemic significance, because it speaks to some of the issues (e.g., concerns about data tampering and malign forms of information manipulation) that are likely to animate epistemological debates in this area.
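To make the idea of hash-linked, tamper-evident records concrete, here is a minimal illustrative sketch in Python. It is not any production blockchain implementation, and the block fields, helper names, and sensor-reading values are invented for the example: each block stores some data together with the previous block’s hash, plus a hash computed over both, so altering an earlier record invalidates every subsequent link.

import hashlib
import json

def block_hash(data, prev_hash):
    # Hash the block's data together with the previous block's hash.
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash, "hash": block_hash(data, prev_hash)}

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["data"], block["prev_hash"]):
            return False  # the block's own contents have been altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True

# A toy chain of (hypothetical) temperature readings.
genesis = make_block({"temp_c": 14.2}, prev_hash="0" * 64)
second = make_block({"temp_c": 14.6}, prev_hash=genesis["hash"])
third = make_block({"temp_c": 15.1}, prev_hash=second["hash"])
chain = [genesis, second, third]

print(chain_is_valid(chain))    # True
second["data"]["temp_c"] = 2.0  # tamper with a historical record
print(chain_is_valid(chain))    # False: the stored hash no longer matches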

This does not mean, of course, that the information stored within a blockchain is guaranteed to be factually correct, in the sense of being true and thus yielding improvements in epistemic standing. Nevertheless, there are, I think, reasons to regard blockchain as an important technology relative to efforts to make the online environment a somewhat safer place for would-be knowers. Consider, for example, the title of the present article. Suppose that we wanted to record the fact that a person known as Paul Smart—that’s me—wrote an article with the title:

(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!

We can incorporate this particular piece of information into a blockchain using something called a cryptographic hash function, which yields a unique identifier for the block and all of its contents. In the case of the aforementioned title, the cryptographic hash (as returned by the SHA256 algorithm[2]) is:

7147bd321e79a63041d9b00a937954976236289ee4de6f8c97533fb6083a8532

Now suppose that someone wants to alter the title, perhaps to garner support for an alternative argumentative position. In particular, let’s suppose they want to claim that the title of the article is:

Fake News Alert: Intellectual Virtues Required for Online Knowledge!

From an orthographic perspective, of course, not much has changed. But the subtlety of the alteration is not something that can be used to cause confusion about the actual wording of the original title—the title that I intended for the present article. (Neither can it be used to cast doubt about the provenance of the paper—the fact that the author of the paper was a person called Paul Smart.) To see this, note that the hash generated for the ‘fake’ title looks nothing like the original:

cc05baf2fa7a439674916fe56611eaacc55d31f25aa6458b255f8290a831ddc4

It is this property that, at least in part, makes blockchains useful for recording information that might otherwise be prone to epistemically malign forms of information manipulation. Imagine, for the sake of argument, that climatological data, as recorded by globally distributed sensors, was stored in a blockchain. The immutability of such data makes it extremely difficult for anyone to manipulate the data in such a way as to confirm or deny the reality of year-on-year changes in global temperature. Neither is it easy to alter information pertaining to the provenance of existing data records, i.e., information about when, where, and how such data was generated.
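For readers who want to reproduce this kind of check, digests like the two quoted above can be recomputed with any standard SHA-256 implementation; the following minimal Python sketch uses the standard library’s hashlib module. Note that the exact digest depends on the precise byte sequence being hashed (punctuation, spacing, and character encoding all matter), so any deviation from the original string yields a completely different value.

import hashlib

original = "(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!"
altered = "Fake News Alert: Intellectual Virtues Required for Online Knowledge!"

# Hash the UTF-8 bytes of each title; removing two characters from the
# input produces an unrelated 256-bit digest.
print(hashlib.sha256(original.encode("utf-8")).hexdigest())
print(hashlib.sha256(altered.encode("utf-8")).hexdigest())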

None of this should delude us into thinking that blockchain technology is a panacea for Internet-related epistemic problems—it isn’t! Neither does blockchain obviate the need for agents to exercise at least some degree of intellectual virtue when it comes to the selection and evaluation of competing data streams. Nevertheless, there is, I think, something that is of crucial epistemological interest and relevance here—something that makes blockchain and other cybersecurity technologies deserving of further epistemological attention. In particular, such technologies may be seen as enhancing the epistemic safety of the online environment, and thus perhaps reducing the need for intellectual virtue.

In this sense, the epistemological analysis of Internet technologies may be best approached from some variant of modal epistemology—e.g., epistemological approaches that emphasize the modal stability of true beliefs across close possible worlds (Pritchard, 2009, chap. 2). But even if we choose to countenance an approach that appeals to issues of intellectual virtue, there is still, I suggest, a need to broaden the analytic net to include technologies that (for the time being at least) lie beyond the bounds of the individual cognitive agent.

Safety in Numbers

“From an epistemic perspective,” Richard writes, “the most salient dimension of the Internet is that it is an information space” (Heersmink, 2018, p. 5). Somewhat surprisingly, I disagree. Although it is obviously true that the Internet is an information space, it is not clear that this is its most salient feature, at least from an epistemological standpoint. In particular, there is, I suggest, a sense in which the Internet is more than just an information space. As is clear from the explosive growth in all things social—social media, social networks, social bots, and so on—the Internet functions as a social technology, yielding all manner of opportunities for people to create, share and process information in a collaborative fashion. The result, I suggest, is that we should not simply think of the Internet as an information space (although it is surely that), we should also view it as a social space.

Viewing the Internet as a social space is important because it changes the way we think about the epistemic impact of the Internet, relative to the discovery, production, representation, acquisition, processing and utilization of knowledge. Smart (in pressb), for example, suggests that some online systems function as knowledge machines, which are systems in which some form of knowledge-relevant processing is realized by a socio-technical mechanism, i.e., a mechanism whose component elements are drawn from either the social (human) or the technological realm.

An interesting feature of many of these systems is the way in which the reliable (or truth-conducive) nature of the realized process is rooted in the socio-technical nature of the underlying (realizing) mechanism. When it comes to human computation or citizen science systems, for example, user contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs (Smart, in pressb; Smart and Shadbolt, in press; Watson and Floridi, 2018). Such insights highlight the socially-distributed character of at least some forms of online knowledge production, thereby moving us beyond the realms of individual, agent-centric analyses.

On a not altogether unrelated note, it is important to appreciate the way in which social participation can itself be used to safeguard online systems from various forms of malign intervention. One example is provided by the Google PageRank algorithm. In this case, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally-distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. This makes it difficult for any single agent to subvert the operation of the PageRank algorithm.
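As a rough illustration of this self-referential weighting, the sketch below implements the textbook power-iteration version of PageRank in Python; it is a simplification of the published algorithm rather than Google’s actual production system, and the toy link graph is invented for the example.

def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outgoing links spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # Each link passes on a share of the source page's current rank,
                # so links from low-ranked pages count for little.
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy graph: a page cannot inflate its own importance simply by adding links,
# because the weight of a link is set by the rank of the page it comes from.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(graph))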

Even ostensibly non-social technologies can be seen to rely on the distributed and decentralized nature of the Internet. In the case of blockchain, for example, multiple elements of a peer-to-peer network participate in the computational processes that make blockchain work. In this way, the integrity of the larger system is founded on the collaborative efforts of an array of otherwise independent computational elements. And it is this that (perhaps) allows us to think of blockchain’s epistemically-desirable features as being rooted in something of a ‘social’ substrate.

All of this, I suggest, speaks in favor of an approach that moves beyond a preoccupation with the properties of individual Internet users. In particular, there seems to be considerable merit in approaching the Internet from a more socially-oriented epistemological perspective. It is easy to see the social aspects of the Internet as lying at the root of a panoply of epistemic concerns, especially when it comes to the opportunities for misinformation, deception, and manipulation. But in light of the above discussion, perhaps an alternative, more positive, take on the Internet (qua social space) starts to come into sharper focus. This is a view that highlights the way in which certain kinds of online system can work to transform a ‘vice’ into a ‘virtue,’ exploiting the social properties of the Internet for the purposes of dealing with reliability-related concerns.

Image by Dariorug via Flickr / Creative Commons

 

Filter Bubblicious

Search engines form one of the focal points of Richard’s analysis, and, as with previous work in this area, Richard finds at least some aspects of their operation to be highly problematic. A particular issue surfaces in respect of personalized search. Here, Richard’s analysis echoes the sentiments expressed by other epistemologists who regard personalized search algorithms as of dubious epistemic value.

In fact, I suspect the consensus that has emerged in this area fails to tell the whole story about the epistemic consequences of personalized search. Indeed, from a virtue epistemological position, I worry that epistemologists are in danger of failing to heed their own advice—prematurely converging on a particular view without proper consideration of competing positions. In my new-found role as the virtue epistemologist’s guardian angel (or should that be devil’s advocate?), I will attempt to highlight a couple of reasons why I think more empirical research is required before we can say anything useful about the epistemological impact of personalized search algorithms.

My first worry is that our understanding of the extent to which search results and subsequent user behavior are affected by personalization is surprisingly poor. Consider, for example, the results of one study, which attempted to quantify the effect of personalization on search results (Hannak et al., 2013). Using an empirical approach, Hannak et al. (2013) report a demonstrable personalization effect, with 11.7% of search results exhibiting differences due to personalization. Interestingly, however, the effect of personalization appeared to be greater for search results with lower rankings; highly ranked results (i.e., those appearing at the top of a list of search results) appeared to be much less affected by personalization.

This result is interesting given the observation that college students “prefer to click on links in higher positions even when the abstracts are less relevant to the task at hand” (Heersmink, 2018, p. 6). From one perspective, of course, this tendency looks like a vice that jeopardizes the epistemic standing of the individual user. And yet, from another perspective, it looks like the preference for higher ranked search results is poised to negate (or at least reduce) the negative epistemological effects of personalized search. What we seem to have here, in essence, is a situation in which one kind of ‘intellectual vice’ (i.e., a tendency to select highly-ranked search results) is playing something of a more positive (virtuous?) role in mitigating the negative epistemological sequelae of a seemingly vicious technology (i.e., personalized search).

None of this means that the epistemic effects of personalized search are to the overall benefit of individual users; nevertheless, the aforementioned results do call for a more nuanced and empirically informed approach when considering the veritistic value of search engines, as well as other kinds of Internet-related technology.

A second worry relates to the scope of the epistemological analysis upon which judgements about the veritistic value of search engines are based. In this case, it is unclear whether analyses that focus their attention on individual agents are best placed to reveal the full gamut of epistemic costs and benefits associated with a particular technology, especially one that operates in the socio-technical ecology of the Internet. To help us understand this worry in a little more detail, it will be useful to introduce the notion of mandevillian intelligence (Smart, in pressc; Smart, in pressd).

Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, e.g., at the level of collective doxastic agents (see Palermos, 2015) or socio-epistemic systems (see Goldman, 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.

While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman, 2010) and assisting with the epistemically optimal division of cognitive labor (see Muldoon, 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary according to whether our epistemological gaze is directed towards individual epistemic agents or the collective ensembles in which those agents are situated.

The Necessity of Virtue

As Richard notes, virtue epistemology is characterized by a shift in emphasis, away from the traditional targets of epistemological analysis (e.g., truth, justification and belief) and towards the cognitive properties of would-be knowers. “Virtue epistemology,” Richard writes, “is less concerned with the nature of truth and more concerned with the cognitive character of agents” (Heersmink, 2018, p. 2). This is, no doubt, a refreshing change, relative to the intellectual orientation of traditional philosophical debates.

Nevertheless, I assume that virtue epistemologists still recognize the value and priority of truth when it comes to issues of epistemic evaluation. Someone who holds false beliefs is not the possessor of knowledge, and this remains the case irrespective of whatever vices and virtues the agent has. In other words, it does not matter how careful, attentive and assiduous an agent is in selecting and evaluating information, if what the agent believes is false, they simply do not know.

What seems to be important in the case of virtue epistemology is the role that intellectual virtue plays in securing the truth of an agent’s beliefs. In particular, the central feature of virtue epistemology (at least to my mind) is that the truth of an agent’s beliefs stems from the exercise of intellectual virtue. It is thus not the case that truth is unimportant (or less important) when it comes to issues of positive epistemic standing; rather, what matters is the role that intellectual virtue plays in establishing the truth of an agent’s beliefs. An agent is thus a bona fide knower when they believe the truth and the truth in question is attributable to some aspect of their cognitive character, specifically, a cognitive trait (virtue responsibilism) or cognitive faculty (virtue reliabilism).

What then makes something a vice or virtue seems to be tied to the reliability of token instantiations of processes that are consistent with an agent’s cognitive character. Intellectual virtues are thus “cognitive character traits that are truth-conducive and minimalise error” (Heersmink, 2018, p. 3), while intellectual vices are characterized as “cognitive character traits that are not truth-conducive and do not minimalise error” (Heersmink, 2018, p. 3). It is this feature of the intellectual virtues—the fact that they are, in general, reliable (or give rise to reliable belief-relevant processes)—that looks to be important when it comes to issues of epistemic evaluation.

So this is what I find problematic about virtue theoretic approaches to knowledge. (Note that I am not an epistemologist by training, so this will require a generous—and hopefully virtue-inspiring—swig of the ole intellectual courage.) Imagine a state-of-affairs in which the Internet was (contrary to the present state-of-affairs) a perfectly safe environment—one where the factive status of online information was guaranteed as a result of advances in cyber-security techniques and intelligent fact-checking services. Next, let us imagine that we have two individuals, Paul and Sophia, who differ with respect to their cognitive character. Paul is the less virtuous of the two, unreflectively and automatically accepting whatever the Internet tells him. Sophia is more circumspect, wary of being led astray by (the now non-existent) fake news.

Inasmuch as we see the exercise of intellectual virtue as necessary for online knowledge, it looks unlikely that poor old Paul can be said to know very much. This is because the truth of Paul’s beliefs is not the result of anything that warrants the label ‘intellectual virtue.’ Paul, of course, does have a lot of true beliefs, but the truth of these beliefs does not stem from the exercise of his intellectual virtues—if, indeed, he has any. In fact, inasmuch as there is any evidence of virtue in play here, it is probably best attributed to the technologies that work to ensure the safety of the online environment. The factive status of Paul’s beliefs thus has more to do with the reliability of the Internet than it does with the elements of his cognitive character.

But is it correct to say that Paul has no online knowledge in this situation? Personally, I do not have this intuition. In other words, in a perfectly safe environment, I can see no reason why we should restrict knowledge attributions to agents whose beliefs are true specifically as the result of intellectual virtue. My sense is that even the most unreflective of agents could be credited with knowledge in a situation where there was no possibility of them being wrong. And if that is indeed the case, then why insist that it is only the exercise of intellectual virtue that underwrites positive epistemic standing?

After all, it seems perfectly possible, to my mind, that Sophia’s epistemic caution contributes no more to the minimization of error in an epistemically benign (i.e., safe) environment than does Paul’s uncritical acceptance. (In fact, given the relative efficiency of their doxastic systems, it may very well be the case that Sophia ends up with fewer true beliefs than Paul.) It might be claimed that this case is invalidated by a failure to consider the modal stability of an agent’s beliefs relative to close possible worlds, as well as perhaps their sensitivity to counterfactual error possibilities. But given the way in which the case is characterized, I suggest that there are no close possible worlds that should worry us—the cybersecurity and fact checking technologies are, let us assume, sufficiently robust as to ensure the modal distance of those worrisome worlds.

One implication of all this is to raise doubts about the necessity of intellectual virtue, relative to our conceptual understanding of knowledge. If there are cases where intellectual virtue is not required for positive epistemic standing, then intellectual virtue cannot be a necessary condition for knowledge attribution. And if that is the case, then why should intellectual virtue form the basis of an approach that is intended to deal with the epistemic shortcomings of the (contemporary) Internet?

Part of the attraction of virtue epistemology, I suspect, is the way in which a suite of generally reliable processes are inextricably linked to the agent who is the ultimate target of epistemic evaluation. This linkage, which is established via the appeal to cognitive character, helps to ensure the portability of an agent’s truth-tracking capabilities—it helps to ensure, in other words, that wherever the agent goes their reliable truth-tracking capabilities are sure to follow.

However, in an era where our doxastic systems are more-or-less constantly plugged into a reliable and epistemically safe environment, it is not so clear that agential capabilities are relevant to epistemic standing. This, I suggest, raises doubts about the necessity of intellectual virtue in securing positive epistemic status, and it also (although this is perhaps less clear) encourages us to focus our attention on some of the engineering efforts (as opposed to agent-oriented educational programs) that might be required to make the online world an epistemically safer place.

Conclusion

What, then, should we make of the appeal to virtue epistemology in our attempt to deal with the epistemic hazards of the Internet? My main concern is that the appeal to virtue epistemology (and the emphasis placed on intellectual virtue) risks an unproductive focus on individual human agents at the expense of both the technological and social features of the online world. This certainly does not rule out the relevance of virtue theoretic approaches as part of our attempt to understand the epistemic significance of the Internet, but other approaches (e.g., modal reliabilism, process reliabilism, distributed reliabilism, and systems-oriented social epistemology) also look to be important.

Personally, I remain agnostic with regard to the relevance of different epistemological approaches, although I worry about the extent to which virtue epistemology is best placed to inform policy-related decisions (e.g., those relating to education). In particular, I fear that by focusing our attention on individual agents and issues of intellectual virtue, we risk overlooking some of the socio-epistemic benefits of the Internet, denigrating a particular technology (e.g., personalized search) on account of its failure to enhance individual knowledge, while ignoring the way a technology contributes to more collective forms of epistemic success.

In concluding his thought-provoking paper on virtue epistemology and the Internet, Richard suggests that “there is an important role for educators to teach and assess [intellectual] virtues as part of formal school and university curricula, perhaps as part of critical thinking courses” (Heersmink, 2018, p. 10). I have said relatively little about this particular issue in the present paper. For what it’s worth, however, I can see no reason to object to the general idea of Internet-oriented educational policies. The only caveat, perhaps, concerns the relative emphasis that might be placed on the instillation of intellectual virtue as opposed to the inculcation of technical skills, especially those that enable future generations to make the online world a safer place.

No doubt there is room for both kinds of pedagogical program (assuming they can even be dissociated). At the very least, it seems to me that the effort to resolve a problem (i.e., engineer a safer Internet) is just as important as the effort to merely cope with it (i.e., acquire a virtuous cognitive character). But, in any case, when it comes to education and learning, we should not lose sight of the fact that the Internet is itself something that is used for educational purposes. Perhaps, then, the more important point about education and the Internet is not so much the precise details of what gets taught, so much as the issue of whether the Internet (with all its epistemic foibles) is really the best place to learn.

Contact details: ps02v@ecs.soton.ac.uk

References

Goldman, A. I. (2011). A guide to social epistemology. In A. I. Goldman and D. Whitcomb (Eds.), Social Epistemology: Essential Readings, pp. 11–37. New York, New York, USA: Oxford University Press.

Hannak, A., P. Sapiezynski, A. Molavi Kakhki, B. Krishnamurthy, D. Lazer, A. Mislove, and C. Wilson (2013). Measuring personalization of Web search. In D. Schwabe, V. Almeida, H. Glaser, R. Baeza-Yates, and S. Moon (Eds.), Proceedings of the 22nd International Conference  on World Wide Web, Rio  de Janeiro, Brazil, pp. 527–538. ACM.

Heersmink, R. (2018). A virtue epistemology of the Internet: Search engines, intellectual virtues, and education. Social Epistemology 32 (1), 1–12.

Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass 8 (2), 117–125.

Palermos, S. O. (2015). Active externalism, virtue reliabilism and scientific knowledge. Synthese 192 (9), 2955–2986.

Pritchard, D. (2009). Knowledge. Basingstoke, England, UK: Palgrave Macmillan.

Smart, P. R. (in pressa). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. (in pressb). Knowledge machines. The Knowledge Engineering Review.

Smart, P. R. (in pressc). Mandevillian intelligence. Synthese.

Smart, P. R. (in pressd). Mandevillian intelligence: From individual vice to collective virtue. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Socially Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. and N. R. Shadbolt (in press). The World Wide Web. In J. Chase and D. Coady (Eds.), The Routledge Handbook of Applied Epistemology. New York, New York, USA: Routledge.

Watson, D. and L. Floridi (2018). Crowdsourced science: Sociotechnical epistemology in the e-research paradigm. Synthese 195 (2), 741–764.

Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17–35.

[1] This work is supported under SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 and comprises the Universities of Southampton, Oxford and Edinburgh.

[2] See http://www.xorbin.com/tools/sha256-hash-calculator [accessed: 30th  January 2018].

Author Information: Simone Chambers, University of California – Irvine, sechambe@uci.edu

Chambers, Simone. “The Epistemic Ideal of Reason-Giving in Deliberative Democracy.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 59-64.

The pdf of the article includes specific page numbers. Shortlink: https://wp.me/p1Bfg0-3PY

Please refer to:

Image by Thierry Ehrman, via Flickr

 

As interest in deliberative democracy continues to grow, the term is becoming more umbrella-like encompassing different strands, orientations, and methodological proclivities. Helen Landemore in her essay “Beyond the Fact of Disagreement? The Epistemic Turn in Deliberative Democracy” traces one such strand. This strand has been influenced by John Rawls and begins from the fact of pluralism.  Navigating deep disagreements in a way that recognizes each person’s freedom and equality becomes the prime motivation in developing theories of deliberative democracy.

This in turn has led some to embrace a Rawlsian epistemological position of agnosticism with respect to the truth-value of moral and political claims. Epistemic abstinence then characterizes this strand of deliberative democracy. I have no quarrel with Landemore’s description and criticism of epistemic abstinence. I have a quarrel with her claim that epistemic abstinence describes “most deliberative democrats” and that such abstinence is part of the “mainstream version deliberative democracy.”[1] In my essay I would like to trace a different strand of deliberative democracy, one influenced by Jürgen Habermas’s work, which has in many ways been much more influential than Rawls’s.

The Heft of Deliberation

In this alternative origin story, deliberative democracy grows as a response to and criticism of two interconnected strands of democratic theory that had come to dominate not only the empirical study of democracy but also many basic theories of democracy. The first strand sees democracy as being about fair competition between fixed interests. Here voting is the central mechanism through which competing interests are mediated in a fair way that recognizes the equality of all citizens.

Deliberative democrats found this picture deeply inadequate on the twin grounds that, first, it rested on an impoverished and indeed implausible view of legitimacy and, second, it presupposed a black box of fixed interests. Deliberative democracy shifts the focus from preference aggregation to the processes of opinion formation that precede the vote. Aggregation, often in the form of voting (and sometimes governed by majority rule), does not necessarily disappear. Whether we are talking about a citizens’ assembly, an informal public debate, a jury, everyday talk about a public issue, or a process of participatory budgeting, decision procedures are required in order to have an outcome.

But the question that motivated many deliberative democrats was how deliberation could carry the burden of democratic legitimacy in a way that aggregation could not.[2] For many theorists inspired by this alternative vision of democracy, Habermas offered a rich source of helpful theoretical insights with which to develop this talk-centered view of democracy. Theorists drew primarily from two sources: first, from his work on the public sphere, which suggested that the essential relationship between the public and the state should be understood in terms of rational justification and discursive accountability; and second, from his work on communicative action and discourse ethics, which offered a procedural ideal for the conditions of justification.

Landemore suggests that until recently this endorsement of deliberative democracy did not rely on an epistemic claim about outcomes but rather on a procedural claim about the equality, respect, and freedom of participants. Because deliberation (ideally anyway) requires that participants take on attitudes of respect and open-mindedness towards each other’s claims and grievances, it is seen as a more satisfying instantiation of equal citizenship than one person one vote. This argument, according to Landemore, has dominated deliberative democracy (I agree) and it has no epistemic dimension (I disagree).

Here I come to the second strand in democratic theory and research against which deliberative democracy grew: the elitist view of democracy that is deeply suspicious of ordinary citizens’ competency. Deliberative democracy has for the most part been a champion of the epistemic competency of citizens, if given the chance to deliberate with each other under positive conditions. Even deliberative democrats who focus on middle democracy rather than mass democracy claim that deliberation is valued because it enhances the epistemic competency of the participants. This epistemic competency is often discussed under the ubiquitous phrase (i.e. every deliberative democrat uses this formulation almost without exception) reason-giving. Before I elaborate the epistemic import of reason-giving, I want to take a small detour into epistemic democracy and what it has come to mean in contemporary debates.

What Is Epistemic Democracy?

Epistemic democracy is the view that democracy is to be valued not simply for normative reasons but also (or sometimes exclusively) because it arrives at or can arrive at the right answers. Some (but not all) theories of epistemic democracy are also theories of deliberative democracy because they identify deliberation as the mechanism through which right outcomes are produced (Estlund 2008; Landemore 2013; Mizak 2008). Are all theories of deliberative democracy also theories of epistemic democracy? I want to say that almost all theories of deliberative democracy recognize and value an epistemic function of deliberation but not all of them have focused on developing a clear procedurally-independent standard of correct outcome.

If, as Landemore appears to imply, such a procedurally-independent standard is the defining feature of an epistemic theory of democracy, then perhaps it is true that David Estlund has really inaugurated a new era of deliberative theory. But if, as I would like to argue, deliberative democracy has always had a robust procedurally-dependent epistemological view of deliberation, then deliberative democracy has always been a form of epistemic democracy; or, if one insists that that term must be reserved for theories that have procedurally-independent standards of right outcomes, then I would say that deliberative democracy has always valued democracy on epistemic as well as normative grounds.

The sine qua non of deliberation is reason-giving. Now Landemore suggests that in early deliberative democracy reason-giving was tied to treating co-citizens with respect: we each deserve justification for coercion or for claims we make on each other. But there was always more to reason-giving than equal respect. Certainly its agonistic and postmodern critics have always thought of deliberative democracy as containing overblown claims to rationality and reason. But what is the epistemic status of the ubiquitous reason-giving?

Landemore suggests that there are two minimum requirements in order to be able to talk about epistemic democracy. The first is to accept that political and normative questions are open to rational adjudication. She does not insist on a very high bar here. Accepting that there are “better or worse answers to at least some political questions” (290) is enough. The second is to accept that democratic procedures are a good way to generate that adjudication.

Both of these dimensions are explicitly present in Habermas’s work, and both are explicitly or implicitly present in deliberative democracy theories that have been influenced by Habermas’s ideal of discourse. But, as I argue below, these dimensions can be present without a procedurally-independent standard of correct outcome. How do we know if the answers are better or worse? For public disputes that are predominantly normative, the Habermasian answer is to look at the extent to which the conditions of justification have been met in democratic procedures.

Its Roots in Habermas

People often note that Habermas’s discourse theory insists on a strong analogy between truth claims and normative claims of rightness. When reading Habermas as containing strong epistemic claims (as Landemore does), it is common to mistake analogy for identity. Landemore, for example, says that the distinction between truth and rightness is “purely semantic” (285). But this is not entirely correct. With respect to truth, Habermas makes a distinction between what truth is (say, correspondence) and how we arrive at the truth (discourse). No such distinction is possible with regard to normative questions. Therefore, for Habermas, there are procedurally-independent standards of truth but no procedurally-independent standards of rightness, even though both truth and rightness are ‘established’ through discourse and as such are open to rational adjudication. Decisions facing a democratic polity will contain both elements.

The more a policy dispute centers on facts or established knowledge, the more it is truth-apt. Thus when Habermas talks about democracy’s truth-tracking potential, he has procedurally-independent standards in mind, which is why he appeals to empirical evidence to assess whether citizens have basic knowledge, are likely to get the science right, or make basic errors in judgement (2006, 414). But here too Rawls is not agnostic, nor does he advocate abstinence. Rawls suggests, for example, that public reason rely on the established truths of science and common sense (2005, 223-227). Rawls’s abstinence enters when we turn to deep moral and normative questions. Here Rawls insists that, on the one hand, participants in debate exercise self-restraint and refrain from claiming truth or objective status for their normative positions and, on the other hand, that no strong truth claim should be made regarding the outcomes of public justification.

Habermas makes neither of these claims. Citizens can certainly bring up any and all claims in the broad public sphere.[3] Furthermore, Habermas thinks that properly structured democracies will be able to claim a rational status, fallible and corrigible to be sure, for outcomes. But the epistemic claim tied to normative rightness is a procedural claim, not a procedurally-independent one. Now Landemore might say that any claim that outcomes are better is a procedurally-independent standard, even if the assessment is exclusively tied to whether the procedures are good: outcomes are not procedures; to value outcomes is to value something more than mere procedure.

But if this is what she means, then she is operating with a very narrow view of procedural theory in which democracy is valued for entirely outcome-independent reasons. On this reading of proceduralism, democracy might very well fare poorly on all epistemic measures, but that is not why one endorses democracy; one endorses it because it treats people equally, not because it arrives at better outcomes. But a “procedurally-independent standard of correct outcome” implies that there is a standard that is independent of the procedure, not just that we value outcome and not just procedure. It is this idea of an independent standard that Habermas has explicitly questioned in his procedural theory of democracy and which, as a result, has never been a focus of the epistemic claims inherent in deliberative democracy from the beginning.

There are two reasons why Habermas’s strong epistemic claims about discourse are procedural claims. The first is that Habermas’s epistemology is parasitic on his linguistic theory that reconstructs the conditions of justification. Rational outcomes are the outcomes that would be justified in a discourse of all those affected.  But no thought experiment can produce this outcome independently. So we are thrown back on trying to approximate the procedural conditions of justification: “The unity of a completely proceduralized reason then retreats into the discursive structure of public communication” (1996, 186). But there is a second and I think more interesting reason why procedures and not outcomes are the focus of epistemic assessment.

Habermas is fully committed to the idea that democracy, if properly structured, results in better, more epistemically sound, and more rational outcomes than other regime types. But he often sees epistemic gains in the long term, using the language of “learning,” for example. Whereas Rawls thought we should exercise self-restraint for the sake of getting along under conditions of pluralism, Habermas sees unrestrained pluralism under democratic rules as the condition for epistemic advancement. The wild and anarchic nature of the informal public sphere allows new claims to emerge, hidden injustices to be unmasked, received truths to be questioned, and new forms of political participation to be tested. This lack of Rawlsian restraint plays an important discursive and epistemic function by holding out the possibility of learning, revision, correction, and change through criticism of, and opposition to, stands taken and claims made, especially by those who rule. This is a Kantian/pragmatist view of the way reason works: epistemic gains come through public criticism and demands for rational justification.

A Tradition of Reason

Few deliberative democrats have adopted the full Habermasian discourse-theoretic edifice of his political theory. But many have embraced the Habermasian intuition that giving each other reasons, listening to each other’s claims, and being committed to collective problem solving lead to improved epistemic competence and better outcomes. Not just better in the sense that the outcomes are more legitimate, but also better because the outcomes are more in line with reason.

Here we see the Kantian idea that reason arrives at the right answers through criticism, argument and persuasion. In a very famous passage from the Critique of Pure Reason, Kant says: “Reason must subject itself to critique in all its undertakings, and cannot restrict the freedom of critique through any prohibition…The very existence of reason depends upon this freedom, which has no dictatorial authority, but whose claim is never anything more than the agreement of free citizens, each of whom must be able to express his reservations, indeed even veto, without holding back” (Kant 1998, p. 643 [A738/B767]).

This view is echoed in Mill as well as Dewey and has been, I would argue, a central aspirational ideal of deliberative democracy from its inception. But it is an ideal that resides in the process of reason-giving and in maintaining the procedural conditions conducive to that reason-giving. Thus the evaluation of the epistemic value of the outcome will be invested in how well the procedures approximate good procedures.

I have tried to make two arguments. The first is that, because reason-giving has been at the center of deliberative democracy from the start and because many theorists of deliberative democracy have been influenced by Habermas in thinking about how reason-giving works, there has always been an epistemic dimension to deliberative democracy. The second is that the criterion of a procedurally-independent standard of correct outcome is not the best way to conceptualize the epistemic dimension of much of deliberative democracy, because so much of that dimension is invested in good procedures.

Contact details: sechambe@uci.edu

References

Bohman, J. “Survey Article: The Coming of Age of Deliberative Democracy.” The Journal of Political Philosophy 6, no. 4 (1998): 400-425.

Chambers, S. “Deliberative Democratic Theory.” Annual Review of Political Science 6 (2003): 307-326.

Estlund, D. Democratic Authority: A Philosophical Framework. Princeton, NJ: Princeton University Press, 2008.

Elstub, Stephen and Peter McLaverty, eds. Deliberative Democracy: Issues and Cases. Edinburgh: Edinburgh University Press, 2014.

Habermas, J. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, translated by William Rehg. Cambridge: MIT Press, 1996.

Habermas, J. “Political Communication in Media Society: Does Democracy Still Enjoy an Epistemic Dimension? The Impact of Normative Theory on Empirical Research.” Communication Theory 16 (2006): 411-426.

Habermas, J. Between Naturalism and Religion, translated by Ciaran Cronin, Cambridge: MIT Press, 2008.

Kant, I. Critique of Pure Reason. The Cambridge Edition of the Works of Immanuel Kant, eds. Paul Guyer and Allen W. Wood. Cambridge, UK: Cambridge University Press, 1998.

Landemore, H. Democratic Reason: Politics, Collective Intelligence and the Rule of the Many. Princeton, NJ: Princeton University Press, 2013.

Manin, B. “On Legitimacy and Political Deliberation.” Political Theory 15, no. 3 (1987): 338-368.

Misak, C.J. “A Culture of Justification: The Pragmatist’s Epistemic Argument for Democracy.” Episteme (2008): 94-105.

Rawls, J. Political Liberalism. Expanded Edition. New York, NY: Columbia University Press, 2005.

 

[1] Gutmann and Thompson, a central exemplar for Landemore, are indeed very mainstream and do follow Rawls in some form of abstinence. But I would not call Jeremy Waldron, a second example appealed to by Landemore, a deliberative democrat at all, precisely because he objects to some epistemic assumptions of most deliberative democracy theory and also because he has more faith in majority voting than in deliberation.

[2] See Bernard Manin (1987) for an early articulation of this view. But the contrast with aggregative pictures of democracy can be found in almost all the “narratives” of deliberative democracy. (Bohman 1998; Chambers 2003; Elstub and McLaverty 2014).

[3] While it is true that Habermas insists that religious claims should be excluded from debates in Parliament, this is not a form of epistemic abstinence. On the contrary, religious claims are excluded because Habermas wants to make a strong epistemic claim about the discursive process, and such a claim would be undermined, he thinks, if religious claims were allowed to enter the debate (Habermas 2008).

Author Information: Richard W. Moodey, Gannon University, moodey001@gannon.edu

Moodey, Richard W. “Models of Face-to-Face Interaction and the Epistemic Significance of Other Minds.” Social Epistemology Review and Reply Collective 3, no. 7 (2014): 19-28.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-1wo


Steve Fuller attacked ‘analytic social epistemology’ in 2012, and in 2013 Sanford Goldberg counter-attacked. Goldberg also prescribes a way of moving beyond the kind of conflict exemplified by his exchange with Fuller: he says that social epistemologists should study the epistemic significance of other minds. I argue that constructing models of face-to-face interaction, specifically models of cooperation, competition, and conflict, can be useful in implementing Goldberg’s prescription. Such models can help generate the propositions that must result from the systematic study of a topic. I modify Goldberg’s image of epistemic communities as a result of including competition and conflict, as well as cooperation, among the members.
