
Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsasswe@uccs.edu.

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-40g

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons

 

If nothing else, Steve Fuller has his finger on the pulse of popular culture and the academics who engage in its twists and turns. Starting with Brexit and continuing into the Trump-era abyss, "post-truth" was named the OED's word of the year for 2016. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (and some straggling philosophers and anthropologists and perhaps some poststructuralists) may seem to someone who isn’t reading our critiques carefully to be partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of topics (historically anchored) that explain how we got to Brexit and Trump—yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by the rebellion of mavericks and iconoclasts that include intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid. 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse together with capitalism and power. This critical trajectory has been well rehearsed, so suffice it to say here that SSK, SE, and the Edinburgh "Strong Programme" are part of a long and rich critical tradition whose origins are Marxist. Adorno's Frankfurt School is part of this tradition; and as science came to dominate Western culture by the 20th century (in the place of religion, whose power as the arbiter of truth had by then waned), it was science's privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics "responsible" in Adorno's political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller's "baby." It's easy to overlook the slippage from the political to the scientific and vice versa: Popper's claim that we never know the Truth doesn't mean that his (and our) quest to discover the Truth as such has been given up; it has only been made more difficult, since whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community more particularly—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (or communalism, as it was rebranded amid Cold War anxieties), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset that grants only putative acceptance to any hypothesis or theory articulated by any community member.

One detects the slippery slope that would move one from being on guard when engaged with any proposal to being so skeptical as to never accept any proposal, no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from the collection of data at the North Pole to computer simulations—have not been sufficiently met ("questions remain") to accept human responsibility for the increase in the earth's temperature. Incidentally, here is Fuller's explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic.] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation,” for the Vienna Circle, “falsification,” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now, various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt, in the name of supposedly attenuated scientific data collection, to ward off regulators, legislators, and the courts for decades. Instead of sufficient empirical evidence being accepted to attribute the failing health (and death) of workers and consumers to asbestos and nicotine, "organized skepticism" was weaponized to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough or has the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-blowing clowns ready to burn down the tent. The public remains confused, not realizing that the rise in stakes over the decades does not mean that no standards can ever be met. Despite lobbyists' and lawyers' best efforts at derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deadly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (admittedly, at the time, religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call "confirmation bias" (the tendency to endorse what we already believe to be true regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as a critique but in reality is a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He definitely would be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on par with other discourses?  Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature‘s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or whatever the president says is true. Adorno’s distinction of the responsible from the irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and scientific and pretend as if there is no distinction—methodologically and institutionally—between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep's clothing! Don't throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to a radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics, for example, unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims made at the behest of corporate interests sanction a free-for-all where the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder-Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons

 

Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, it seems that Fuller reserves a special place in purgatory for Socrates (and Plato) for labeling the rhetorical expertise of the sophists—“the local post-truth merchants in fourth century BC Athens”—negatively. (21) It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only if Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting are true and evidence based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring to fruition the best results for the general wellbeing of a community? The point of suggesting the figure of a philosopher-king, to be sure, was not his rhetorical skill in this connection, but his deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato's Republic, however criticized, was supposed to be about justice, not about expediency; it is an exploration of the rule of law and wisdom, not a manual on manipulation. If the recent presidential election in the US taught us anything, it's that we should be wary of political gamesmanship and focus on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking has done in regard to Black Holes) and quite another to compare it to astrology (and give equal hearings to horoscope and Tarot card readers as to physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller's hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: "After all, the people who tend to be demonized as 'post-truth' – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play." (39) Fuller's celebratory tone here may carry either a slight warning, in the use of "yet" before the success "in dominating the entire field of play," or a prediction that this is indeed what is about to happen soon enough.

The neoliberal bottom-line surfaces in this assessment: he who wins must be right, the rich must be smart, and more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, as a way to distract from the details (and empirical bases) of one "knowledge game" (because it may not be to one's ideological liking) in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the "right to be wrong," but also for the legitimacy of the claim that "people who are free to speak their minds as individuals are most likely to reach the truth collectively." The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher in greater encouragement for speculation or conjectures (Popper again), then Fuller's book would be well-placed in the pantheon of intellectual pluralism; but if this endorsement obliterates the distinction between silly and informed conjectures, then we are in trouble and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004) who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (his TED talk). As folk wisdom, this is charming; as public policy, this is dangerous. Would you like a random group of people deciding how to store nuclear waste, and where? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies harken back to the fascist ones we recall from Europe prior to and during WWII. Few today would entrust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike Fuller’s sanguine posture, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (while promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and the throwing of public policy procedures into complete disarray, from immigration regulations to the relaxation of emission controls (ignoring the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons

 

Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-“ does not mean “after,” but rather, “concurrently,” as another way of thinking all along: just because something is labeled “post-“, as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of "post-truth," it seems that one must choose one orientation or the other, at least for Fuller, who claims to prefer the "post-truth world" to the allegedly hierarchical and submissive world of "truth," where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump's claims were juxtaposed with those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn't consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that “STS can be fairly credited with having both routinized in its own research practice and set loose on the general public–if not outright invented—at least four common post-truth tropes”:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished for the practice of the scientific community may have inadvertently found their way into a post-truth world that abuses or exploits them in unintended ways. That is, something like "consensus" is challenged by STS because of how the scientific community pretends to reach it, knowing as it does that no such thing can ever fully be reached and that, when reached, it may have been reached for the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this can also go too far.

Just because consensus is difficult to reach (and does not mean unanimity) and is susceptible to corruption or bias doesn't mean that anything goes. Some experimental results are more acceptable than others, some data are more informative than others, and the struggle for agreement may take its political toll on the scientific community; but this need not result in silly ideas about cigarettes being good for our health or obesity being something to encourage from early childhood.

It seems important to focus on Fuller's conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only as the epistemological plight of humanity but as an elixir with which to cure humanity's ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and it must be said, from) the academic privileged position, from whose heights all other positions are dismissed as anti-intellectual populism, but we are also entreated to consider the rantings of the know-nothings of the post-truth world as the "growth pains of a maturing democratic intelligence." Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where's the evidence? What would possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, the “conceptual” openness to previously “ignored people and their ideas”? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad or outright racist or misogynist (as we see with Trump). Perhaps they were ignored because there was hope that they would change for the better, become more enlightened, not act on their white supremacist prejudices. Should we have “encouraged” explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When power-plays by ignoramus leaders like Trump are obvious, they could shed light on less obvious cases of big pharma leaders or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail and the gamesmanship of post-truthers should be exposed for what motivates it.

Contact details: rsassowe@uccs.edu

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.

References

Theodor W. Adorno (1998/1963), Critical Models: Interventions and Catchwords. Translated by Henry W. Pickford. New York: Columbia University Press.

Kurt Andersen (2017), Fantasyland: How America Went Haywire: A 500-Year History. New York: Random House.

Monya Baker, “1,500 scientists lift the lid on reproducibility,” Nature Vol. 533, Issue 7604, 5/26/16 (corrected 7/28/16)

Michael Bowker (2003), Fatal Deception: The Untold Story of Asbestos. New York: Rodale.

Robert Darnton, "The Greatest Show on Earth," New York Review of Books Vol. LXV, No. 11, 6/28/18, pp. 68-72.

Al Gore (2006), An Inconvenient Truth: The Planetary Emergency of Global Warming and What Can Be Done About It. New York: Rodale.

Richard Hofstadter (1962), Anti-Intellectualism in American Life. New York: Vintage Books.

Jean-François Lyotard (1984/1979), The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.

Robert K. Merton (1973/1942), “The Normative Structure of Science,” The Sociology of Science: Theoretical and Empirical Investigations. Chicago and London: The University of Chicago Press, pp. 267-278.

Hans E. Plesser, “Reproducibility vs. Replicability: A Brief History of Confused Terminology,” Frontiers in Neuroinformatics, 2017; 11: 76; online: 1/18/18.

Robert N. Proctor (1995), Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer. New York: Basic Books.

James Surowiecki (2004), The Wisdom of Crowds. New York: Anchor Books.

Author Information: Kristie Dotson, Michigan State University, dotsonk@msu.edu

Dotson, Kristie. “Abolishing Jane Crow.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 1-8.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3YJ


Image by Adley Haywood via Flickr / Creative Commons

 

It took me 8 years to publish "Theorizing Jane Crow." I wrote it at the same time as I wrote my 2011 paper, "Tracking Epistemic Violence, Tracking Practices of Silencing." The many reviews that advocated for rejecting "Theorizing Jane Crow" over the years made me refine it…and alter it…and refine it some more. This is not necessarily a gripe. But it will seem that way. Because there are two consistent critiques of this paper that have stuck with me for how utterly problematic they were and are. In this reply to Ayesha Hardison's commentary, "Theorizing Jane Crow, Theorizing Literary Fragments," I display and analyze those critiques because they link up in interesting ways to Ayesha Hardison's commentary.

The two most common critiques of this paper include:  1) the judgement that my paper is not good intellectual history or not good literary criticism and 2) the conclusion that Black women’s literary production is so advanced that there is no way to make a claim of unknowability with respect to US Black women today (or yesterday).  In what follows, I will articulate and explore these critiques. The first critique brings attention to just how wonderful Hardison’s commentary actually is for how it sets up the rules of engagement between us. The second critique can be used to tease out convergences and a potential divergence between Hardison’s position and my own.

The First Critique: Does E’rybody Have to be Historians or Literary Studies Scholars?

Since I neither claim to be a literary scholar nor a historian, I found no reason to deny the first (and by far most consistent) critique of this paper. This paper is not good intellectual history. And, plainly speaking, it is terrible literary criticism. Let me say this, for the record, I am neither an intellectual historian, nor a literary critic. And, with all due respect to those people who do these things well, I have no desire to be.

Hardison detected that she and I are coming to the same sets of problems with different trainings, different habits of attention, and, quite frankly, different projects. Because, no, I am not a literary critic. Hardison acknowledges our different orientations when she writes:

Whereas Dotson theorizes Jane Crow by outlining social features facilitating black women’s ‘unknowability,’ in literary studies, we might say black women’s ‘unknowability’ is actually a matter of audience, and more importantly, a problem of reception. (2018, 57)

Another place where the difference in our respective approaches is foreshadowed is in the very first line of Hardison's reply, when she writes, "To acknowledge Jane Crow…is not the same as understanding how black women's subjugation works – or why it persists" (2018, 56). From the very first line, I was put at ease with Hardison's commentary. Because however much we might disagree or agree, at least she recognized my actual project. I treat Murray like a philosopher. In accordance with philosopher's stone rules, e.g. like an element from which composite understandings can be derived. It was clear to me that even among Black feminist academics, potential audiences for this paper were simply unused to the kinds of flights of fancy that taking Black women as philosophers requires.[1]

Hardison didn’t have this problem at all. In other words, Hardison was, for me, a “brown girl’s heart” to receive what I was trying to articulate. For that I am so very grateful to her. I believe that Hardison understood what I was trying to do. I was treating Pauli Murray the way I would be allowed to treat any theoretical white dude. Like her work should be able to inspire more work with family resemblances. I treated Murray like there could and should be Murray-ians. And it was this move that I utterly refused to compromise on. It was also the move that inspired, in my estimation, the most resistance from anonymous reviewers. But Hardison got it. But, then, of course, she would get it. She does the same thing in her book, Writing Through Jane Crow (Hardison 2014). We treat Murray like a philosopher.

The performance of Hardison’s commentary accords very much with the existence of (and necessity of) “an empathetic black female audience” (Hardison 2018, 59). And what is uncovered between us is a great deal of agreement between her positions and my own and a potential disagreement. At this point, Hardison and I can talk to each other. But I want to draw attention to the fact it is Hardison’s commentary that sets the stage for this exchange in a way where our convergences and divergences can be fruitfully explored. And that is no easy feat. Hats off to Hardison. I am deeply grateful for her work here.

The Second Critique: Black Women’s Literary Production vs. Jane Crow Dynamics

The second most common critique of "Theorizing Jane Crow" concerned skepticism about whether US Black women could be understood as unknowable in the face of US Black women's literary production. It was only in reading Hardison's commentary that I realized I may have misunderstood part of the critiques being leveled at me from (again) anonymous reviewers who were most likely Black feminist academics themselves. One might have misread my essay to say that Black women never afford each other the kind of empathetic audiences that are needed to render them, broadly speaking, knowable in hegemonic and counterhegemonic spaces. That the Black community at large never extends such empathy.

Or, in Hardison’s words, some may have taken me as advocating for “the conceit that black women’s narratives about their multivalent oppression registers similarly in hegemonic and counterhegemonic spaces” (2018, 56). Now, I am not sure if Hardison is accusing me of this. There is reason to believe that she isn’t but is rather choosing this point as a way of empathetically extending my remarks. For example, Hardison writes:

An analysis of African American women writers’ engagement with Jane Crow is outside the scope of Dotson’s epistemological story in “Theorizing Jane Crow, Theorizing Unknowability,” but their texts illuminate the philosophical conundrum she identifies. (2018, 57)

This suggests, to me, that Hardison detects the problem of Jane Crow unknowability in Black women writer’s work, even as they work to navigate and counter such unknowability with some degree of success.

Now, to be clear, unknowability, on the terms I outline, can be relative. One might argue that the difficulty of receiving a fair peer review for this paper in a particular domain rife with Black feminists with literary, historical, and/or sociological training means that hegemonic and counterhegemonic communities alike pose epistemological problems, even if they are not exactly the conditions of Jane Crow (and they aren't). But those epistemological problems may have the same structure as the epistemological engine I afford to Jane Crow dynamics, e.g. disregard, disbelief, and disavowal. This is primarily because epistemologies in colonial landscapes are very difficult to render liberatory (see, for example, Dotson 2015).[2]

Limits of Unknowability, Limits of a Single Paper

Still, for me, the most egregious misreading of “Theorizing Jane Crow” is to interpret me as saying that Black women are equally as unknowable to other Black women as they are in “hegemonic spaces” (56) and according “hierarchical epistemologies” (58). Yeah, that’s absurd. Hardison’s commentary extends my article in exactly the ways it needs to be extended to cordon off this kind of ludicrous uptake, i.e. that Black womenkind are equally unknowable to ourselves as we might be in the face of hegemonic epistemological orientations.[3]

But, as Hardison notes, an extensive development of the point that Black womenkind offer empathetic audiences to Black womenkind that render them knowable, at least “to themselves and each other” (Hardison 2018, 57), both for the sake of their own lives and for the sake of the lives of other Black womenkind, is outside the scope of my paper. Rather, I am concerned with, as Hardison rightly notes, “understanding how black women’s [Jane Crow] subjugation works – or why it persists” (2018, 56). And though I don’t think my essay indicates that Black womenkind are equally “unknowable” to each other in all instances, if that is a possible reading of my essay, thank goodness for Ayesha Hardison’s generous extension of this project to make clear that the performance of this text belies that reading.

Perhaps Hardison says it best, my “grappling with and suture of Murray’s philosophical fragments challenges the hierarchical epistemologies that have characterized black women as unknowable and unknowing,” (2018, 58). This is why I love Black feminist literary studies folks. Because, yes! The performance of this piece belies the message that there is no way for us to be known, especially by ourselves. And, what’s more, such an inexhaustible unknowing has to be false for the successful performance of this text. But then I am aware of that. So what else might I be attempting to articulate in this paper?

It strikes me that a charitable reading of the second main criticism leveled at this paper might proceed as follows:

From where does the charge of unknowability come in the face of the existence and quantity of US Black women's literary and cultural production? This is an especially important question when you need Black women's production to write about their "unknowability": how can you claim that Black women are unknowable when the condition for the possibility of this account is that you take yourself to know something about them from their own production? This seems to be a contradiction.

Yes. It does seem like a contradiction or, if folks need a white male theorist to say something to make it real, it is a kind of differend (Lyotard 1988).[4] Radically disappeared peoples, circumstances, and populations are often subject to problems with respect to frames, evidence, and modes of articulation. Being disappeared is different than being invisible simpliciter, but then I make this claim in "Theorizing Jane Crow."

Problems of large-scale disappearing that affect entire populations, events, and historical formations render unknowability itself unknowable. This problematic seems to be what this second critique falls prey to, i.e. the disappearing of unknowability behind sense-making devices (Dotson 2017). As the critique goes, if Black women are unknowable at the scale I seem to propose, then how do I know about this unknowability?[5] How, indeed.

I still reject this rendition of the second criticism, i.e. the one that says with all the literary production of Black womenkind we are no longer unknowable or else I wouldn’t know about a condition of unknowability. Jane Crow unknowability, in my estimation, is not subject to brute impossibilities, i.e. either we are knowable or unknowable. This is because Jane Crow is domain specific in the same ways Jim Crow was (and is). Also, Jane Crow is made of epistemological and material compromises. Hardison gets this. She is very clear that “Black women continue to be ‘unknowable’ in dominant culture due to its investment in white supremacy and patriarchy,” (Hardison 2018, 57).

But, let’s get something clear, an “investment” is not only a set of attitudes. It is composed of sets of institutional norms (and institutions through which to enact those norms). Sets of norms of attention. Sets of historically derived “common sense” and “obvious truths” that routinely subject Black womenkind to Jane Crow dynamics. It is composed of social and material relations that make sense because of the investments that invest them with sense.

Jane Crow as a Dynamic of Complex Social Epistemology

Jane Crow dynamics, when they appear, are built into the functioning of institutions and communal, social relations. They are embedded in the "common sense" of many US publics, including counterhegemonic ones, because I am presuming we are assuming that some Black communities indulge in patriarchy, which is what led Murray to her observations (see Hardison 2018). And though Black women can disrupt this in pockets, it does not change the epistemological and material conditions that are reinforcing and recreating Jane Crow dynamics for every generation. And it doesn't change the reality that there is a limit to our capacity to change this from within Jane Crow dynamics. So, we write ourselves into existence again and again and again.

Hardison acknowledges this, as she astutely notes, “Although I engage Pauli Murray as a writer here to offer a complementary approach to Dotson’s theorizing of Jane Crow, I do not claim that black women’s writings irons out Jane Crow’s material paradoxes,” (2018, 62). And this is the heart of my disagreement with the second major critique of this essay. Are those critics claiming that epistemological possibilities brought by Black women’s literary production iron out material paradoxes that, in part, cause Jane Crow dynamics? Because, that would be absurd.

But here is where I appear to disagree with Hardison. Is Hardison claiming that epistemological possibilities have ironed out Jane Crow’s epistemological paradoxes? Because I sincerely doubt that. Schedules of disbelief, disregard, and disavowal are happening constantly and we don’t have great mechanisms for tracking who they harm, whether they harm, and why (on this point, see Dotson and Gilbert 2014).

This leads to a potential substantive disagreement between Hardison and me. And it can be found in the passage I cited earlier. She writes:

Whereas Dotson theorizes Jane Crow by outlining social features facilitating black women’s ‘unknowability,’ in literary studies, we might say black women’s ‘unknowability’ is actually a matter of audience, and more importantly, a problem of reception. (2018, 57)

There is a potential misreading of my text here that seems to center on different understandings of “epistemological” that may come from our different disciplinary foci. Specifically, I don’t necessarily focus on social features. I focus on epistemic features facilitating black women’s unknowability, when we encounter it. That is to say, disregard, disbelief, and disavowal are epistemic relations. They are also social ways of relating, but, importantly, in my analysis they are socio-epistemic. What that means is that they are social features that figure prominently in epistemological orientations and conduct. And these features are embedded in what makes audiences and uptake relevant for this discussion. That is to say, the reason audiences matter, and the reason problems of reception are central, is that varying audiences indulge in disregard, disbelief, and disavowal differently.

So, the juxtaposition that might be assumed in Hardison’s statement of the focus in literary studies, which is indicated by the phrase “actually a matter of,” is not a difference in kind, but rather a difference in emphasis. I am tracking the kinds of things that make audiences and problems of reception important for rendering anything knowable in social worlds, e.g. disregard, disbelief, and disavowal. Because it is there, as a philosophy-trained academic, that I can mount an explanation of “how black women’s [Jane Crow] subjugation works -or why it persists” (Hardison 2018, 56).

The Great Obstacles of Abolishing Jane Crow

In the end, this may not be a disagreement at all. I tend to think of it as a change in focus. My story is one story that can be told. Hardison’s story is another. They need not be taken as incompatible. In fact, I would claim they are not incompatible but, as Hardison notes, complementary (2018, 62). They uncover different aspects of a complicated dynamic. One can focus on the problems of audience and reception. And I think that this is fruitful and important. But, and this is where Hardison and I might part company, focusing on these issues can lead one to believe that Jane Crow dynamics are easier to abolish than they are.

One might suspect, as some of the anonymous reviewers of this essay have, that all the literary production of US Black womenkind means that US Black womenkind don’t actually face Jane Crow dynamics. Because, and this seems to be the take-home point of the second critique, as Hardison explains, “Structural realities (and inequities) demand black women’s invisibility, but black women’s philosophical and literary efforts make them visible – first and foremost – to themselves” (2018, 57). And this is the crux of our potential disagreement.

What do we mean by “make them visible” and, more importantly, where? In the domains where they are experiencing Jane Crow dynamics, i.e. epistemological and material compromises, or in the domains where they, arguably, are not? Because the empathetic audiences of “brown girls” outside of institutions that operate to our detriment are not major catalysts for the problem of Jane Crow unknowability, on my account. This is where domain specificity becomes important and one must reject the conclusion (as I do in “Theorizing Jane Crow”) that Jane Crow unknowability is invisibility simpliciter.

As Hardison explains, Pauli Murray’s experiences with racial and gender subordination motivated her towards identifying and signifying Jane Crow oppression (along with constructing epistemological orientations with which to do so) (2018, 61). What the anonymous reviewers and Hardison insist on is that “These fragments of knowing identify black women’s autobiography as a vehicle for positive self-concept and social epistemology.”

Moreover, Hardison claims, and rightly so, that though “Black women writers do not ‘resolve our dilemmas,’…they do ‘name them.’ In a destructive culture of invisibility, for black women to call out Jane Crow and counter with their self-representation has substantive weight” (2018, 62). I agree with all of these conclusions about the importance of Black women countering Jane Crow dynamics, even as I wonder what it means to say it has “substantive weight.”

I question this not because I disagree that such countering has substantive weight. It does. But part of what has to be interrogated in the 21st century, as we continue to grow weary of living with centuries-old problematics, is this: what does the abolition of Jane Crow look like? Are there other forms of “substantive weight” to pursue in tandem with our historical efforts?

In asking this I am not attempting to belittle the efforts that have gotten us to this point, with resources and tools to “call out and counter” Jane Crow dynamics. My work in this paper is impossible without the efforts of previous and current generations of Black womenkind to “name” this problem. Their work has been (and is) important. And for many of us it is lifesaving. But, and yes, this is a ‘but,’ what next? I want a world other than this. And even if that world is impossible, which I half believe, I still want to work towards a world other than this today as part of what it means to live well right now. So, though this may be blasphemous in today’s Black feminist academy, I don’t think that Black women’s literary production is quite the panacea for Jane Crow dynamics that it is often assumed to be.[6] But then, from Hardison’s remarks, she doesn’t assume this either. How we come to this conclusion (and how we would extend it) may be quite different, however.

The Limits and Potential of Literary Production

And, yes, I think a focus on the socio-epistemic and material conditions of Jane Crow can help us detect the limits of relying on black women’s literary production for the abolition of Jane Crow dynamics, even if such production has an integral role to play in its abolition, e.g. producing knowledge that we use to form understandings about potential conditions of unknowability. I would also argue that black women’s cultural production is key to worlds other than (and better than) this, because, as Hardison explains, such work helps us “confront the epistemic affront intrinsic to black women’s Jane Crow subjection” (2018, 60).

I will still never argue that such production, by itself, can fix the problems we face. It cannot. But then, Hardison would not argue this either. As Hardison concludes, disruption of Jane Crow dynamics means “a complete end to its material and epistemological abuses” (2018, 62). Indeed, this is my position as well. In making this claim, we are not attempting to overshadow what has been (and continues to be) accomplished in US Black women’s literary production, but to continue to push our imaginations towards the abolition of Jane Crow.

Contact details: dotsonk@msu.edu

References

Dotson, Kristie. 2012. “A Cautionary Tale: On Limiting Epistemic Oppression.” Frontiers: A Journal of Women Studies 33 (1):24-47.

Dotson, Kristie. 2013. “Radical Love: Black Philosophy as Deliberate Acts of Inheritance.”  The Black Scholar 43 (4):38-45.

Dotson, Kristie. 2014. “Conceptualizing Epistemic Oppression.”  Social Epistemology 28 (2).

Dotson, Kristie. 2015. “Inheriting Patricia Hill Collins’ Black Feminist Epistemology.”  Ethnic and Racial Studies 38 (13):2322-2328.

Dotson, Kristie. 2016. “Between Rocks and Hard Places.”  The Black Scholar 46 (2):46-56.

Dotson, Kristie. 2017. “Theorizing Jane Crow, Theorizing Unknowability.” Social Epistemology 31 (5):417-430.

Dotson, Kristie, and Marita Gilbert. 2014. “Curious Disappearances: Affectability Imbalances and Process-Based Invisibility.”  Hypatia 29 (4):873-888.

Hardison, Ayesha. 2018. “Theorizing Jane Crow, Theorizing Literary Fragments.”  Social Epistemology Review and Reply Collective 7 (2):53-63.

Hardison, Ayesha K. 2014. Writing Through Jane Crow: Race and Gender Politics in African American Literature. Charlottesville: University of Virginia Press.

Lyotard, Jean-Francois. 1988. The Differend: Phrases in Dispute. Minneapolis: University of Minnesota Press.

[1] Nothing I am saying here is meant to indicate that literary critics are not (and can never be) philosophers. That is not a position I hold (Dotson 2016). Rather, the claim I am making is that treating people like philosophers can come with certain orientations. It takes extreme amounts of trust and belief that the person(s) whose thought one is exploring can act like a transformative element for the construction of composite understandings (Dotson 2013). It takes trust and belief to utilize someone else’s ideas to extend one’s own imagination, especially where those extensions are not written word for word. One way to treat a person’s work as philosophical work is to assume a form of authorship that allows one to use that work as a “home base” from which to explore and reconstruct the world that is implied in their abstractions. I call this activity, “theoretical archeology” (Dotson 2017, 418). And all I really meant to describe with that term was one way to take a writer as a philosopher. I had to become very detailed about my approach in this paper because of the propensity of anonymous reviewers to attempt to discipline me into literary studies or intellectual history.

[2] This is what I attempt to draw attention to in my work. The epistemological problems in Jane Crow, for example, are epistemological problems that might be able to exist without their corresponding material problems. The material problems in Jane Crow are material problems that might be able to exist without the epistemological problems. But in Jane Crow they are so linked up with each other that they reinforce and reproduce one another. So, one can address the epistemological problems and leave the material ones (that eventually reintroduce those epistemological problems again). One can address the material problems and still leave the epistemological ones (that will eventually reintroduce those material problems again). Epistemic relations impact material relations and material relations impact epistemic relations, on my account. But they are not the same and they are not subject to domino-effect solutions. Fixing one does not mean one has fixed the other. And it is unclear one can make a claim to have fixed one without having fixed both.

[3] If the reader needs more evidence that I have “figured this out,” see (Dotson 2012, 2016).

[4] There is a great deal about Lyotard’s account I would disagree with. But we are undoubtedly grappling with similar dynamics- though our subject population and approach differs significantly. Pauli Murray’s work pre-dates this formulation, however.

[5] I consider the appearance of this kind of seeming paradox to be a symptom of second order epistemic oppression. See (Dotson 2014).

[6] It may be my lower-socio-economic class background that makes it hard to accept the position that writing is going to save us all. I acknowledge that Black womenkind in the places where I am from needed literature and other cultural products for our survival (especially music, and social and film media). The kind of emphasis on writing in this exchange has a tinge of classism. But we can’t do everything here, can we? There is much more dialogue to be had on these issues. Though, some might say, as Murray did, that we need a “brown girl’s heart to hear” our songs of hope. I will agree with this and still maintain that I needed far more than that. When child protective services were coming to attempt to take me from my very good, but not flawless mother, I needed not only brown girl’s hearts. I also needed hierarchical epistemological orientations and oppressive material conditions to lose hold.

Author Information: Stephen Turner, University of South Florida, turner@usf.edu

Turner, Stephen. “Fuller’s roter Faden.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 25-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3WX

Art by William Blake, depicting the creation of reality.
Image by AJC1 via Flickr / Creative Commons

The Germans have a notion of “research intention,” by which they mean the underlying aim of an author’s work as revealed over its whole trajectory. Francis Remedios and Val Dusek have provided, if not an account itself, the material for an account of Steve Fuller’s research intention, or as they put it the “thread” that runs through his work.

These “intentions” are not something that is apparent to the authors themselves, which is part of the point: at the start of their intellectual journey they are working out a path which leads they know not where, but which can be seen as a path with an identifiable beginning and end retrospectively. We are now at a point where we can say something about this path in the case of Fuller. We can also see the ways in which various Leitmotifs, corollaries, and persistent themes fit with the basic research intention, and see why Fuller pursued different topics at different times.

A Continuity of Many Changes

The ur-source for Fuller’s thought is his first book, Social Epistemology. On the surface, this book seems alien to the later work, so much so that one can think of Fuller as having a turn. But seen in terms of an underlying research intention, and indeed in Fuller’s own self-explications included in this text, this is not the case: the later work is a natural development, almost an entailment, of the earlier work, properly understood.

The core of the earlier work was the idea of constructing a genuine epistemology, in the sense of a kind of normative account of scientific knowledge, out of “social” considerations and especially social constructivism, which at the time was considered to be either descriptive or anti-epistemological, or both. For Fuller, this goal meant that the normative content would at least include, or be dominated by, the “social” part of epistemology, considerations of the norms of a community, norms which could be changed, which is to say made into a matter of “policy.”

This leap to community policies leads directly to a set of considerations that are corollaries to Fuller’s long-term project. We need an account of what the “policy” options are, and a way to choose between them. Fuller was trained at a time when there was a lingering controversy over this topic: the conflict between Kuhn and the Popperians. Kuhn represented a kind of consensus driven authoritarianism. For him it was right and necessary for science to be organized around ungroundable premises that enabled science to be turned into puzzle-solving, rather than insoluble disputes over fundamentals. These occurred, and produced new ungroundable consensual premises, at the rare moments of scientific revolutions.

Progress was possible through these revolutions, but our normal notions of progress were suspended during the revolutions and applied only to the normal puzzle-solving phase of science. Popperianism, on the contrary, ascribed progress to a process of conjecture and refutation in which ever broader theories developed to account for the failures of previous conjectures, in an unending process.

Kuhnianism, in the lens of Fuller’s project in Social Epistemology, was itself a kind of normative epistemology, which said “don’t dispute fundamentals until the sad day comes when one must.” Fuller’s instincts were always with Popper on this point: authoritarian consensus has no place in science for either of them. But Fuller provided a tertium quid, which had the effect of upending the whole conflict. He took over the idea of the social construction of reality and gave it a normative and collective or policy interpretation. We make knowledge. There is no knowledge that we do not create.

The creation is a “social” activity, as the social constructivists claimed. But this social itself needed to be governed by a sense of responsibility for these acts of creation, and because they were social, this meant by a “policy.” What this policy should be was not clear: no one had connected the notion of construction to the notion of responsibility in this way. But it was a clear implication of the idea of knowledge as a product of making. Making implies a responsibility for the consequences of making.

Dangers of Acknowledging Our Making

This was a step that few people were willing to take. Traditional epistemology was passive. Theory choice was choice between the theories that were presented to the passive chooser. The choices could be made on purely epistemic grounds. There was no consideration of responsibility, because the choices were an end point, a matter of scientific aesthetics, with no further consequences. Fuller, as Remedios and Dusek point out, rejects this passivity, a rejection that grows directly out of his appropriation of constructivism.

From a “making” or active epistemic perspective, Kuhnianism is an abdication of responsibility, and a policy of passivity. But Fuller also sees that overcoming the passivity Kuhn describes as the normal state of science requires an alternative policy, one which enables the knowledge that is in fact “made,” but is presented as given, to be challenged. This is a condition of acknowledging responsibility for what is made.

There is, however, an oddity in talking about responsibility in relation to collective knowledge producing, which arises because we don’t know in advance where the project of knowledge production will lead. I think of this on analogy to the debate between Malthus and Marx. If one accepts the static assumptions of Malthus, his predictions are valid. Marx, by contrast, made the productivist argument that with every newborn mouth came two hands. He would have been better to argue that with every mouth came a knowledge making brain, because improvements in food production technology enabled the support of much larger populations, more technology, and so forth—something Malthus did not consider and indeed could not have. That knowledge was in the future.

Fuller’s alternative grasps this point: utilitarian considerations from present static assumptions can’t provide a basis for thinking about responsibility or policy. We need to let knowledge production proceed regardless of what we think are the consequences, which is necessarily thinking based on static assumptions about knowledge itself. Put differently, we need to value knowledge in itself, because our future is itself made through the making of knowledge.

“Making” or “constructing” is more than a cute metaphor. Fuller shows that there is a tradition in science itself of thinking about design, both in the sense of making new things as a form of discovery, and in the sense of reverse engineering that which exists in order to see how it works. This leads him to the controversial waters of intelligent design, in which the world itself is understood as, at least potentially, the product of design. It also takes us to some metaphysics about humans, human agency, and the social character of human agency.

One can separate some of these considerations from Fuller’s larger project, but they are natural concomitants, and they resolve some basic issues with the original project. The project of constructivism requires a philosophical anthropology. Fuller provides this with an account of the special character of human agency: as knowledge maker humans are God-like or participating in the mind of God. If there is a God, a super-agent, it will also be a maker and knowledge maker, not in the passive but in the active sense. In participating in the mind of God, we participate in this making.

“Shall We Not Ourselves Have to Become Gods?”

This picture has further implications: if we are already God-like in this respect, we can remake ourselves in God-like ways. To renounce these powers is as much of a choice as using them. But it is difficult for the renouncers to draw a line on what to renounce. Just transhumanism? Or race-related research? Or what else? Fuller rejects renunciation of the pursuit of knowledge and the pursuit of making the world. The issue is the same as the issue between Marx and Malthus. The renouncers base their renunciation on static models. They estimate risks on the basis of what is and what is known now. But these are both things that we can change. This is why Fuller proposes a “pro-actionary” rather than a precautionary stance and supports underwriting risk-taking in the pursuit of scientific advance.

There is, however, a problem with the “social” and policy aspect of scientific advance. On the one hand, science benefits humankind. On the other, it is an elite, even a form of Gnosticism. Fuller’s democratic impulse resists this. But his desire for the full use of human power implies a special role for scientists in remaking humanity and making the decisions that go into this project. This takes us right back to the original impulse for social epistemology: the creation of policy for the creation of knowledge.

This project is inevitably confronted with the Malthus problem: we have to make decisions about the future now, on the basis of static assumptions we have no real alternative to. At best we can hint at future possibilities which will be revealed by future science, and hope that they will work out. As Remedios and Dusek note, Fuller is consistently on the side of expanding human knowledge and power, for risk-taking, and is optimistic about the world that would be created through these powers. He is also highly sensitive to the problem of static assumptions: our utilities will not be the utilities of the creatures of the future we create through science.

What Fuller has done is to create a full-fledged alternative to the conventional wisdom about the science-society relation and the present way of handling risk. The standard view is represented by Philip Kitcher: it wishes to guide knowledge in ways that reflect the values we should have, which includes the suppression of certain kinds of knowledge by scientists acting paternalistically on behalf of society.

This is a rigidly Malthusian way of thinking: the values (in this case a particular kind of egalitarianism that doesn’t include epistemic equality with scientists) are fixed, the scientists’ ideas of the negative consequences of something like research on “racial” differences are taken to be valid, and policy should be made in accordance with the same suppression of knowledge. Risk aversion, especially in response to certain values, becomes the guiding “policy” of science.

Fuller’s alternative preserves some basic intuitions: that science advances by risk taking, and by sometimes failing, in the manner of Popper’s conjectures and refutations. This requires the management of science, but management that ensures openness in science, supports innovation, and now and then supports concerted efforts to challenge consensuses. It also requires us to bracket our static assumptions about values, limits, risks, and so forth, not so much to ignore these things but to relativize them to the present, so that we can leave open the future. The conventional view trades heavily on the problem of values, and the potential conflicts between epistemic values and other kinds of values. Fuller sees this as a problem of thinking in terms of the present: in the long run these conflicts vanish.

This end point explains some of the apparent oddities of Fuller’s enthusiasms and dislikes. He prefers the Logical Positivists to the model-oriented philosophy of science of the present: laws are genuinely universal; models are built by assuming present knowledge and share the problems with Malthus. He is skeptical about science done to support policy, for the same reason. And he is skeptical about ecologism as well, which is deeply committed to acting on static assumptions.

The Rewards of the Test

Fuller’s work stands the test of reflexivity: he is as committed to challenging consensuses and taking risks as he exhorts others to be. And for the most part, it works: it is an old Popperian point that only through comparison with strong alternatives can a theory be tested; otherwise it will simply pile up inductive support, blind to what it is failing to account for. But as Fuller would note, there is another issue of reflexivity here, and it comes at the level of the organization of knowledge. To have conjectures and refutations one must have partners who respond. In the consensus-driven world of professional philosophy today, this does not happen. And that is a tragedy. It also makes Fuller’s point: that the community of inquirers needs to be managed.

It is also a tragedy that there are not more Fullers. Constructing a comprehensive response to major issues and carrying it through many topics and many related issues, as people like John Dewey once did, is an arduous task, but a rewarding one. It is a mark of how much the “professionalization” of philosophy has done to alter the way philosophers think and write. This is a topic that is too large for a book review, but it is one that deserves serious reflection. Fuller raises the question by looking at science as a public good and asking how a university should be organized to maximize its value. Perhaps this makes sense for science, given that science is a money loser for universities, but at the same time its main claim on the public purse. For philosophy, we need to ask different questions. Perhaps the much talked about crisis of the humanities will bring about such a conversation. If it does, it is thinking like Fuller’s that will spark the discussion.

Contact details: turner@usf.edu

References

Remedios, Francis X., and Val Dusek. Knowing Humanity in the Social World: The Path of Steve Fuller’s Social Epistemology. New York: Palgrave MacMillan, 2018.

Author Information: Alfred Moore, University of York, UK, alfred.moore@york.ac.uk

Moore, Alfred. “Transparency and the Dynamics of Trust and Distrust.” Social Epistemology Review and Reply Collective 7, no. 4 (2018), 26-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3W8


A climate monitoring camp at Blackheath in London, UK, on the evening of 28 August 2009.
Image by fotdmike via Flickr / Creative Commons

 

In 1961 the Journal of the American Medical Association published a survey suggesting that 90% of doctors who diagnosed cancer in their patients would choose not to tell them (Oken 1961). The doctors in the study gave a variety of reasons, including (unsubstantiated) fears that patients might commit suicide, and feelings of futility about the prospects of treatment. Among other things, this case stands as a reminder that, while it is a commonplace that lay people often don’t trust experts, at least as important is that experts often don’t trust lay people.

Paternalist Distrust

I was put in mind of this stunning example of communicative paternalism while reading Stephen John’s recent paper, “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” John makes a case against a presumption of openness in science communication that – although his argument is more subtle – reads at times like a rational reconstruction of a doctor-patient relationship from the 1950s. What is disquieting is that he makes a case that is, at first glance, quite persuasive.

When lay people choose to trust what experts tell them, John argues, they are (or their behaviour can usefully be modelled as though they are) making two implicit judgments. The first, and least controversial, is that ‘if some claim meets scientific epistemic standards for proper acceptance, then [they] should accept that claim’ (John 2018, 77). He calls this the ‘epistemological premise’.

Secondly, however, the lay person needs to be convinced that the ‘[i]nstitutional structures are such that the best explanation for the factual content of some claim (made by a scientist, or group, or subject to some consensus) is that this claim meets scientific “epistemic standards” for proper acceptance’ (John 2018, 77). He calls this the ‘sociological premise.’ He suggests, rightly, I think, that this is the premise in dispute in many contemporary cases of distrust in science. Climate change sceptics (if that is the right word) typically do not doubt that we should accept claims that meet scientific epistemic standards; rather, they doubt that the ‘socio-epistemic institutions’ that produce scientific claims about climate change are in fact working as they should (John 2018, 77).

Consider the example of the so-called ‘climate-gate’ controversy, in which a cache of emails between a number of prominent climate scientists was made public on the eve of a major international climate summit in 2009. The emails below (quoted in Moore 2017, 141) were full of claims that might – to the uninitiated – look like evidence of sharp practice. For example:

“I should warn you that some data we have we are not supposed [to] pass on to others. We can pass on the gridded data—which we do. Even if WMO [World Meteorological Organization] agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

“You can delete this attachment if you want. Keep this quiet also, but this is the person who is putting in FOI requests for all emails Keith and Tim have written and received re Ch 6 of AR4 We think we’ve found a way around this.”

“The other paper by MM is just garbage. … I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

“I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline.”

As Phil Jones, then director of the Climate Research Unit, later admitted, the emails “do not read well.”[1] However, neither, on closer inspection,[2] did they show anything particularly out of the ordinary, and certainly nothing like corruption or fraud. Most of the controversy, it seemed, came from lay people misinterpreting the backstage conversation of scientists in light of a misleading image of what good science is supposed to look like.

The Illusions of Folk Philosophy of Science

This is the central problem identified in John’s paper. Many people, he suggests, evaluate the ‘sociological premise’ in light of a ‘folk philosophy of science’ that is worlds away from the reality of scientific practice. For this reason, revealing to a non-expert public how the sausage is made can lead not to understanding, ‘but to greater confusion’ (John 2017, 82). And worse, as he suggests happened in the climate-gate case, it might lead people to reject well-founded scientific claims in the mistaken belief that they did not meet proper epistemic standards within the relevant epistemic community. Transparency might thus lead to unwarranted distrust.

In a perfect world we might educate everybody in the theory and practice of modern science. In the absence of such a world, however, scientists need to play along with the folk belief in order to get lay audiences to adopt those claims that are in their epistemic best interests. Thus, John argues, scientists explaining themselves to lay publics should seek to ‘well-lead’ (the benevolent counterpart to mislead) their audience. That is, they should try to bring the lay person to hold the most epistemically sound beliefs, even if this means masking uncertainties, glossing complications, pretending more precision than they know to be the case, and so on.

Although John presents his argument as something close to heresy, his model of ‘well-leading’ speech describes a common enough practice. Economists, for instance, face a similar temptation to mask uncertainties and gloss complications and counter-arguments when engaging with political leaders and wider publics on issues such as the benefits and disadvantages of free trade policies.

As Dani Rodrik puts it:

As a professional economist, as an academic economist, day in and day out I see in seminars and papers a great variety of views on what the effects of trade agreements are, the ambiguous effects of deep integration. Inside economics, you see that there is not a single view on globalization. But the moment that gets translated into the political domain, economists have this view that you should never provide ammunition to the barbarians. So the barbarians are these people who don’t understand the notion of comparative advantage and the gains from trade, and you don’t want… any of these caveats, any of these uncertainties, to be reflected in the public debate. (Rodrik 2017, at c.30-34 mins).

‘Well-leading’ speech seems to be the default mode for experts talking to lay audiences.

An Intentional Deception

A crucial feature of ‘well-leading’ speech is that it has no chance of working if you tell the audience what you are up to. It is a strategy that cannot be openly avowed without undermining itself, and thus relies on a degree of deception. Furthermore, the well-leading strategy only works if the audience already trusts the experts in question, and is unlikely to help – and is likely to actively harm expert credibility – in contexts where experts are already under suspicion and scrutiny. John thus admits that this strategy can backfire if the audience is made aware of some of the hidden complications, and worse, as was the case in climate-gate, if it seems the experts actively sought to evade demands for transparency and accountability (John 2017, 82).

This puts experts in a bind: be ‘open and honest’ and risk being misunderstood; or engage in ‘well-leading’ speech and risk being exposed – and then misunderstood! I’m not so sure the dilemma is actually as stark as all that, but John identifies a real and important problem: When an audience misunderstands what the proper conduct of some activity consists in, then revealing information about the conduct of the activity can lead them to misjudge its quality. Furthermore, to the extent that experts have to adjust their conduct to conform to what the audience thinks it should look like, revealing information about the process can undermine the quality of the outcomes.

One economist has thus argued that accountability works best when it is based on information about outcomes, and that information about process ‘can have detrimental effects’ (Prat 2005: 863). By way of example, she compares two ways of monitoring fund managers. One way is to look at the yearly returns. The other way (exemplified, in her case, by pension funds), involves communicating directly with fund managers and demanding that they ‘explain their investment strategy’ (Prat 2005, 870). The latter strategy, she claims, produces worse outcomes than those monitored only by their results, because the agents have an incentive to act in a way that conforms to what the principal regards as appropriate rather than what the agent regards as the most effective action.

Expert Accountability

The point here is that when experts are held accountable – at the level of process – by those without the relevant expertise, their judgment is effectively displaced by that of their audience. To put it another way, if you want the benefit of expert judgment, you have to forgo the urge to look too closely at what they are doing. Onora O’Neill makes a similar point: ‘Plants don’t flourish when we pull them up too often to check how their roots are growing: political, institutional and professional life too may not flourish if we constantly uproot it to demonstrate that everything is transparent and trustworthy’ (O’Neill 2002: 19).

Of course, part of the problem in the climate case is that the outcomes are also subject to expert interpretation. When evaluating a fund manager you can select good people, leave them alone, and check that they hit their targets. But how do you evaluate a claim about likely sea-level rise over the next century? If radical change is needed now to avert such catastrophic effects, then the point is precisely not to wait and see if they are right before we act. This means that both the ‘select and trust’ and the ‘distrust and monitor’ models of accountability are problematic, and we are back with the problem: How can accountability work when you don’t know enough about the activity in question to know if it’s being done right? How are we supposed to hold experts accountable in ways that don’t undermine the very point of relying on experts?

The idea that communicative accountability to lay people can only diminish the quality either of warranted trust (John’s argument) or the quality of outcomes (Prat’s argument) presumes that expert knowledge is a finished product, so to speak. After all, if experts have already done their due diligence and could not get a better answer, then outsiders have nothing epistemically meaningful to add. But if expert knowledge is not a finished product, then demands for accountability from outsiders to the expert community can, in principle, have some epistemic value.

Consider the case of HIV-AIDS research and the role of activists in challenging expert ideas of what constituted ‘good science’ in the conduct of clinical trials. In this engagement they ‘were not rejecting medical science,’ but were rather “denouncing some variety of scientific practice … as not conducive to medical progress and the health and welfare of their constituency” (Epstein 1996: 2). It is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science. What matters is not whether the critics begin with an accurate view of the scientific process; rather, what matters is how the process of criticism and response is carried out.

On 25 April 2012, the AIDS Coalition to Unleash Power (ACT UP) celebrated its 25th anniversary with a protest march through Manhattan’s financial district. The march, held in partnership with Occupy Wall Street, included about 2000 people.
Image by Michael Fleshman via Flickr / Creative Commons

 

We Are Never Alone

This leads me to an important issue that John doesn’t address. One of the most attractive features of his approach is that he moves beyond the limited examples, prevalent in the social epistemology literature, of one lay person evaluating the testimony of one expert, or perhaps two competing experts. He rightly observes that experts speak for collectives and thus that we are implicitly judging the functioning of institutions when we judge expert testimony. But he misses an analogous sociological problem on the side of the lay person. We rarely judge alone. Rather, we use ‘trust proxies’ (MacKenzie and Warren 2012).

I may not know enough to know whether those climate scientists were doing good science, but others can do that work for me. I might trust my representatives, who have on my behalf conducted open investigations and inquiries. They are not climate scientists, but they have given the matter the kind of sustained attention that I have not. I might trust particular media outlets to do this work. I might trust social movements.

To go back to the AIDS case, ACT-UP functioned for many as a trust proxy of this sort, with the skills and resources to do this sort of monitoring, developing competence but with interests more closely aligned with the wider community affected by the issue. Or I might even trust the judgments of groups of citizens randomly selected and given an opportunity to more deeply engage with the issues for just this purpose (see Gastil, Richards, and Knobloch 2014).

This hardly, on its own, solves the problem of lay judgment of experts. Indeed, it would seem to place it at one remove and introduce a layer of intermediaries. But it is worth attending to these sorts of judgments for at least two reasons. One is because, in a descriptive sense, this is what actually seems to be going on with respect to expert-lay judgment. People aren’t directly judging the claims of climate scientists, and they’re not even judging the functioning of scientific institutions; they’re simply taking cues from their own trusted intermediaries. The second is that the problems and pathologies of expert-lay communication are, in large part, problems with their roots in failures of intermediary institutions and practices.

To put it another way, I suspect that a large part of John’s (legitimate) concern about transparency is at root a concern about unmediated lay judgment of experts. After all, in the climate-gate case, we are dealing with lay people effectively looking over the shoulders of the scientists as they write their emails. One might have similar concerns about video monitoring of meetings: they seem to show you what is going on but in fact are likely to mislead you because you don’t really know what you’re looking at (Licht and Naurin 2015). You lack the context and understanding of the practice that can be provided by observers, who need not themselves be experts, but who need to know enough about the practice to tell the difference between good and bad conduct.

The same idea can apply to transparency of reasoning, involving the demand that actors give a public account of their actions. While the demand that authorities explain how and why they reached their judgments seems to fall victim to the problem of lay misunderstanding, it also offers a way out of it. After all, in John’s own telling of the case, he explains in a convincing way why the first impression (that the ‘sociological premise’ has not been fulfilled) is misleading. The initial scandal initiated a process of scrutiny in which some non-experts (such as the political representatives organising the parliamentary inquiry) engaged in closer scrutiny of the expert practice in question.

Practical lay judgment of experts does not require that lay people become experts (as Lane 2014 and Moore 2017 have argued), but it does require a lot more engagement than the average citizen would either want or have time for. The point here is that most citizens still don’t know enough to properly evaluate the sociological premise and thus properly interpret information they receive about the conduct of scientists. But they can (and do) rely on proxies to do the work of monitoring and scrutinizing experts.

Where does this leave us? John is right to say that what matters is not the generation of trust per se, but warranted trust, or an alignment of trust and trustworthiness. What I think he misses is that distrust is crucial to the possible way in which transparency can (potentially) lead to trustworthiness. Trust and distrust, on this view, are in a dynamic relation: Distrust motivates scrutiny and the creation of institutional safeguards that make trustworthy conduct more likely. Something like this case for transparency was made by Jeremy Bentham (see Bruno 2017).

John rightly points to the danger that popular misunderstanding can lead to a backfire in the transition from ‘scrutiny’ to ‘better behaviour.’ But he responds by asserting a model of ‘well-leading’ speech that seems to assume that lay people already trust experts, and he thus leaves unanswered the crucial questions raised by his central example: What are we to do when we begin from distrust and suspicion? How might we build trustworthiness out of distrust?

Contact details: alfred.moore@york.ac.uk

References

Bruno, Jonathan. “Vigilance and Confidence: Jeremy Bentham, Publicity, and the Dialectic of Trust and Distrust.” American Political Science Review, 111, no. 2 (2017) pp. 295-307.

Epstein, S. Impure Science: AIDS, Activism and the Politics of Knowledge. Berkeley and Los Angeles, CA: University of California Press, 1996.

Gastil, J., Richards, R. C., & Knobloch, K. R. “Vicarious deliberation: How the Oregon Citizens’ Initiative Review influenced deliberation in mass elections.” International Journal of Communication, 8 (2014), 62–89.

John, Stephen. “Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty.” Social Epistemology: A Journal of Knowledge, Culture and Policy 32, no. 2 (2017) 75-87.

Lane, Melissa. “When the Experts are Uncertain: Scientific Knowledge and the Ethics of Democratic Judgment.” Episteme 11, no. 1 (2014) 97-118.

Licht, Jenny de Fine, and Daniel Naurin. “Open Decision-Making Procedures and Public Legitimacy: An Inventory of Causal Mechanisms”. In Jon Elster (ed), Secrecy and Publicity in Votes and Debates. Cambridge: Cambridge University Press (2015), 131-151.

MacKenzie, Michael, and Mark E. Warren, “Two Trust-Based Uses of Minipublics.” In John Parkinson and Jane Mansbridge (eds.) Deliberative Systems. Cambridge: Cambridge University Press (2012), 95-124.

Moore, Alfred. Critical Elitism: Deliberation, Democracy, and the Politics of Expertise. Cambridge: Cambridge University Press, 2017.

Oken, Donald. “What to Tell Cancer Patients: A Study of Medical Attitudes.” Journal of the American Medical Association 175, no. 13 (1961) 1120-1128.

O’Neill, Onora. A Question of Trust. Cambridge: Cambridge University Press, 2002.

Prat, Andrea. “The Wrong Kind of Transparency.” The American Economic Review 95, no. 3 (2005), 862-877.

[1] In a statement released on 24 November 2009, http://www.uea.ac.uk/mac/comm/media/press/2009/nov/cruupdate

[2] One of eight separate investigations was by the House of Commons select committee on Science and Technology (http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm).

Author Information: Paul R. Smart, University of Southampton, ps02v@ecs.soton.ac.uk

Smart, Paul R. “(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 45-55.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uq


Image by BTC Keychain via Flickr / Creative Commons

 

Richard Heersmink’s (2018) article, A virtue epistemology of the Internet: Search engines, intellectual virtues, and education, provides an important and timely analysis of the Internet from the standpoint of virtue epistemology.[1] According to Richard, the Internet is an important epistemic resource, but it is one that comes with a range of epistemic hazards. Such hazards, he suggests, motivate a consideration of the ways in which individuals should interact with the Internet.

In particular, Richard appeals to a specific branch of virtue epistemology, known as virtue responsibilism, arguing that certain kinds of cognitive trait (e.g., curiosity and open-mindedness) are useful in helping us press maximal epistemic benefit from the Internet. Given the utility of such traits, coupled with the epistemic importance of the Internet, Richard suggests that educational policy should be adapted so as to equip would-be knowers with the cognitive wherewithal to cope with the epistemic challenges thrown up by the online environment.

There is, no doubt, something right about all this. Few would disagree with the claim that a certain level of discernment and discrimination is important when it comes to the evaluation of online content. Whether such ‘virtues’ are best understood from the perspective of virtue responsibilism or virtue reliabilism is, I think, a moot point, for I suspect that in the case of both virtue responsibilism and virtue reliabilism what matters is the way in which belief-forming informational circuits are subject to active configuration by processes that may be broadly construed as metacognitive in nature (Smart, in pressa). That, however, is a minor quibble, and it is one that is of little consequence to the issues raised in Richard’s paper.

For the most part, then, I find myself in agreement with many of the assumptions that motivate the target article. I agree that the Internet is an important epistemic resource that is unprecedented in terms of its scale, scope, and accessibility. I also agree that, at the present time, the Internet is far from an epistemically safe environment, and this raises issues regarding the epistemic standing of individual Internet users. In particular, it looks unlikely that the indiscriminate selection and endorsement of online information will do much to bolster one’s epistemic credentials.

We thus encounter something of a dilemma: As an epistemic resource, the Internet stands poised to elevate our epistemic standing, but as an open and public space the Internet provides ample opportunities for our doxastic systems to be led astray. The result is that we are obliged to divide the online informational cornucopia into a treasure trove of genuine facts and a ragbag collection of ‘false facts’ and ‘fake news.’ The information superhighway, it seems, promises to expand our epistemic power and potential, but the road ahead is one that is fraught with a dizzying array of epistemic perils, problems, and pitfalls. What ought we to do in response to such a situation?

It is at this point that I suspect my own views start to diverge from those of the target article. Richard’s response to the dilemma is to focus attention on the individual agent and consider the ways in which an agent’s cognitive character can be adapted to meet the challenges of the Internet. My own approach is somewhat different. It is borne out of three kinds of doubt: doubts about the feasibility (although not the value) of virtue-oriented educational policies, doubts about the basic validity of virtue theoretic conceptions of knowledge, and doubts about whether the aforementioned dilemma is best resolved by attempting to change the agent as opposed to the environment in which the agent is embedded. As always, space is limited and life is short, so I will restrict my discussion to issues that I deem to be of greatest interest to the epistemological community.

Reliable Technology

Inasmuch as intellectual virtues are required for online knowledge—i.e., knowledge that we possess as a result of our interactions and engagements with the Internet—they are surely only part of a much broader (and richer) story that includes details about the environment in which our cognitive systems operate. In judging the role of intellectual virtue in shielding us from the epistemic hazards of the online environment, it therefore seems important to have some understanding of the actual technologies we interact with.

This is important because it helps us understand the kinds of intellectual virtue that might be required, as well as the efficacy of specific intellectual virtues in helping us believe the truth (and thus working as virtues in the first place). Internet technologies are, of course, many and varied, and it will not be possible to assess their general relevance to epistemological debates in the present commentary. For the sake of brevity, I will therefore restrict my attention to one particular technology: blockchain.

Blockchain is perhaps best known for its role in supporting the digital cryptocurrency, Bitcoin. It provides us with a means of storing data in a secure fashion, using a combination of cryptographic hashing and data linking techniques. For present purposes, we can think of a blockchain as a connected set of data records (or data blocks), each of which contains some body of cryptographically secured data. In the case of Bitcoin, of course, the data blocks contain data of a particular kind, namely, data pertaining to financial transactions. But this is not the only kind of data that can be stored in a blockchain. In fact, blockchains can be used to store information about pretty much anything. This includes online voting records, news reports, sensor readings, personal health records, and so on.

Once data is recorded inside a blockchain, it is very difficult to modify. In essence, the data stored within a blockchain is immutable, in the sense that it cannot be changed without ‘breaking the chain’ of data blocks, and thereby invalidating the data contained within the blockchain. This property makes blockchains of considerable epistemic significance, because it speaks to some of the issues (e.g., concerns about data tampering and malign forms of information manipulation) that are likely to animate epistemological debates in this area.
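By way of illustration only, the following minimal Python sketch shows the linking idea at work: each block stores the hash of its predecessor, so altering any earlier record breaks every subsequent link. The block structure, field names, and sample records are my own simplifying assumptions, not Bitcoin’s actual data format.

```python
# A minimal, purely illustrative sketch of hash-linked blocks (not Bitcoin's
# real format). Tampering with any stored record invalidates later links.
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents, including the pointer to its predecessor.
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # arbitrary 'genesis' pointer
    for record in records:
        block = {"data": record, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def chain_is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["sensor reading 14.1C", "sensor reading 14.3C", "sensor reading 14.2C"])
print(chain_is_valid(chain))              # True
chain[1]["data"] = "sensor reading 9.0C"  # tamper with one stored record...
print(chain_is_valid(chain))              # False: the link from the next block now fails
```

In a deployed blockchain this kind of validation is carried out by many independent nodes rather than a single script, which is what gives the immutability claim its practical force.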

This does not mean, of course, that the information stored within a blockchain is guaranteed to be factually correct, in the sense of being true and thus yielding improvements in epistemic standing. Nevertheless, there are, I think, reasons to regard blockchain as an important technology relative to efforts to make the online environment a somewhat safer place for would-be knowers. Consider, for example, the title of the present article. Suppose that we wanted to record the fact that a person known as Paul Smart—that’s me—wrote an article with the title:

(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!

We can incorporate this particular piece of information into a blockchain using something called a cryptographic hash function, which yields a unique identifier for the block and all of its contents. In the case of the aforementioned title, the cryptographic hash (as returned by the SHA256 algorithm[2]) is:

7147bd321e79a63041d9b00a937954976236289ee4de6f8c97533fb6083a8532

Now suppose that someone wants to alter the title, perhaps to garner support for an alternative argumentative position. In particular, let’s suppose they want to claim that the title of the article is:

Fake News Alert: Intellectual Virtues Required for Online Knowledge!

From an orthographic perspective, of course, not much has changed. But the subtlety of the alteration is not something that can be used to cause confusion about the actual wording of the original title—the title that I intended for the present article. (Neither can it be used to cast doubt about the provenance of the paper—the fact that the author of the paper was a person called Paul Smart.) To see this, note that the hash generated for the ‘fake’ title looks nothing like the original:

cc05baf2fa7a439674916fe56611eaacc55d31f25aa6458b255f8290a831ddc4
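For readers who wish to reproduce this sort of comparison, a minimal sketch using Python’s standard hashlib module is given below. Note that the exact digests depend on the precise characters and byte encoding of the title strings, so the values printed above should be taken as given rather than re-derived from this snippet.

```python
# Minimal sketch: SHA-256 digests of the original and the subtly altered titles.
import hashlib

original = "(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!"
altered = "Fake News Alert: Intellectual Virtues Required for Online Knowledge!"

for title in (original, altered):
    digest = hashlib.sha256(title.encode("utf-8")).hexdigest()
    print(digest, title)
# Even a change of a few characters yields a completely different digest.
```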

It is this property that, at least in part, makes blockchains useful for recording information that might otherwise be prone to epistemically malign forms of information manipulation. Imagine, for the sake of argument, that climatological data, as recorded by globally distributed sensors, was stored in a blockchain. The immutability of such data makes it extremely difficult for anyone to manipulate the data in such a way as to confirm or deny the reality of year-on-year changes in global temperature. Neither is it easy to alter information pertaining to the provenance of existing data records, i.e., information about when, where, and how such data was generated.

None of this should delude us into thinking that blockchain technology is a panacea for Internet-related epistemic problems—it isn’t! Neither does blockchain obviate the need for agents to exercise at least some degree of intellectual virtue when it comes to the selection and evaluation of competing data streams. Nevertheless, there is, I think, something that is of crucial epistemological interest and relevance here—something that makes blockchain and other cybersecurity technologies deserving of further epistemological attention. In particular, such technologies may be seen as enhancing the epistemic safety of the online environment, and thus perhaps reducing the need for intellectual virtue.

In this sense, the epistemological analysis of Internet technologies may be best approached from some variant of modal epistemology—e.g., epistemological approaches that emphasize the modal stability of true beliefs across close possible worlds (Pritchard, 2009, chap. 2). But even if we choose to countenance an approach that appeals to issues of intellectual virtue, there is still, I suggest, a need to broaden the analytic net to include technologies that (for the time being at least) lie beyond the bounds of the individual cognitive agent.

Safety in Numbers

“From an epistemic perspective,” Richard writes, “the most salient dimension of the Internet is that it is an information space” (Heersmink, 2018, p. 5). Somewhat surprisingly, I disagree. Although it is obviously true that the Internet is an information space, it is not clear that this is its most salient feature, at least from an epistemological standpoint. In particular, there is, I suggest, a sense in which the Internet is more than just an information space. As is clear from the explosive growth in all things social—social media, social networks, social bots, and so on—the Internet functions as a social technology, yielding all manner of opportunities for people to create, share and process information in a collaborative fashion. The result, I suggest, is that we should not simply think of the Internet as an information space (although it is surely that), we should also view it as a social space.

Viewing the Internet as a social space is important because it changes the way we think about the epistemic impact of the Internet, relative to the discovery, production, representation, acquisition, processing and utilization of knowledge. Smart (in pressb), for example, suggests that some online systems function as knowledge machines, which are systems in which some form of knowledge-relevant processing is realized by a socio-technical mechanism, i.e., a mechanism whose component elements are drawn from either the social (human) or the technological realm.

An interesting feature of many of these systems is the way in which the reliable (or truth-conducive) nature of the realized process is rooted in the socio-technical nature of the underlying (realizing) mechanism. When it comes to human computation or citizen science systems, for example, contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs (Smart, in pressb; Smart and Shadbolt, in press; Watson and Floridi, 2018). Such insights highlight the socially-distributed character of at least some forms of online knowledge production, thereby moving us beyond the realms of individual, agent-centric analyses.
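The epistemic logic behind soliciting multiple independent contributions can be illustrated with a toy simulation (my own example, not one drawn from the citizen science literature): if each contributor is independently right more often than not, aggregating their answers by majority vote yields a more reliable output than any individual contribution.

```python
import random

def majority_vote_accuracy(n_contributors: int, p_correct: float,
                           n_trials: int = 10_000) -> float:
    """Estimate how often a simple majority of independent contributors,
    each correct with probability p_correct, answers a binary question correctly."""
    wins = 0
    for _ in range(n_trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_contributors))
        if correct_votes > n_contributors / 2:
            wins += 1
    return wins / n_trials

# A single contributor who is right 65% of the time...
print(majority_vote_accuracy(1, 0.65))    # ~0.65
# ...versus a crowd of 25 such contributors voting independently.
print(majority_vote_accuracy(25, 0.65))   # typically well above 0.9
```

This is, of course, just the Condorcet jury theorem in miniature; the interesting socio-technical question is how real systems keep contributions sufficiently independent and sufficiently competent for the aggregation to work.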

On a not altogether unrelated note, it is important to appreciate the way in which social participation can itself be used to safeguard online systems from various forms of malign intervention. One example is provided by the Google PageRank algorithm. In this case, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally-distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. This makes it difficult for any single agent to subvert the operation of the PageRank algorithm.
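A stripped-down sketch of the basic PageRank computation (the production algorithm is considerably more elaborate, and the tiny 'web' below is invented for illustration) helps to show why the ranking is hard for any single agent to subvert: a page's score is inherited from the scores of the pages that link to it, so links from low-ranked or artificially created pages contribute very little.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Basic power-iteration PageRank over a dict mapping page -> list of outgoing links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                      # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny illustrative web: the 'spam' page receives no in-links, so it ends up
# with only the baseline teleportation mass, however it chooses to link out.
web = {
    "news": ["wiki", "blog"],
    "wiki": ["news"],
    "blog": ["wiki"],
    "spam": ["news"],
}
print(pagerank(web))
```

To inflate the 'spam' page's score an attacker would need in-links from pages that themselves carry rank, which is precisely the globally-distributed resource that no single agent controls.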

Even ostensibly non-social technologies can be seen to rely on the distributed and decentralized nature of the Internet. In the case of blockchain, for example, multiple elements of a peer-to-peer network participate in the computational processes that make blockchain work. In this way, the integrity of the larger system is founded on the collaborative efforts of an array of otherwise independent computational elements. And it is this that (perhaps) allows us to think of blockchain’s epistemically-desirable features as being rooted in something of a ‘social’ substrate.

All of this, I suggest, speaks in favor of an approach that moves beyond a preoccupation with the properties of individual Internet users. In particular, there seems to be considerable merit in approaching the Internet from a more socially-oriented epistemological perspective. It is easy to see the social aspects of the Internet as lying at the root of a panoply of epistemic concerns, especially when it comes to the opportunities for misinformation, deception, and manipulation. But in light of the above discussion, perhaps an alternative, more positive, take on the Internet (qua social space) starts to come into sharper focus. This is a view that highlights the way in which certain kinds of online system can work to transform a ‘vice’ into a ‘virtue,’ exploiting the social properties of the Internet for the purposes of dealing with reliability-related concerns.

Image by Dariorug via Flickr / Creative Commons

 

Filter Bubblicious

Search engines form one of the focal points of Richard’s analysis, and, as with previous work in this area, Richard finds at least some aspects of their operation to be highly problematic. A particular issue surfaces in respect of personalized search. Here, Richard’s analysis echoes the sentiments expressed by other epistemologists who regard personalized search algorithms as of dubious epistemic value.

In fact, I suspect the consensus that has emerged in this area fails to tell the whole story about the epistemic consequences of personalized search. Indeed, from a virtue epistemological position, I worry that epistemologists are in danger of failing to heed their own advice—prematurely converging on a particular view without proper consideration of competing positions. In my new-found role as the virtue epistemologist’s guardian angel (or should that be devil’s advocate?), I will attempt to highlight a couple of reasons why I think more empirical research is required before we can say anything useful about the epistemological impact of personalized search algorithms.

My first worry is that our understanding of the extent to which search results, and subsequent user behavior, are affected by personalization is surprisingly poor. Consider, for example, the results of one study that attempted to quantify the effect of personalization on search results (Hannak et al., 2013). Hannak et al. report a demonstrable personalization effect, with 11.7% of search results exhibiting differences due to personalization. Interestingly, however, the effect of personalization appeared to be greater for search results with lower rankings; highly ranked results (i.e., those appearing at the top of a list of search results) appeared to be much less affected by personalization.
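As a purely illustrative aside, the kind of personalization effect being reported can be quantified by comparing the result lists returned for the same query to different users. The metrics and result lists below are generic, hypothetical illustrations of my own, not necessarily those employed by Hannak et al.

```python
def jaccard(results_a, results_b):
    """Set overlap between two result lists (1.0 = identical sets of results)."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

def top_k_agreement(results_a, results_b, k):
    """Fraction of the top-k positions at which both lists show the same result."""
    return sum(x == y for x, y in zip(results_a[:k], results_b[:k])) / k

# Hypothetical result lists for the same query, issued by two different users.
user_1 = ["u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9", "u10"]
user_2 = ["u1", "u2", "u3", "u4", "u6", "u5", "u11", "u8", "u12", "u9"]

print(jaccard(user_1, user_2))             # overall overlap of the two lists
print(top_k_agreement(user_1, user_2, 3))  # agreement at the top of the ranking
print(top_k_agreement(user_1, user_2, 10)) # agreement over the full list
```

In the hypothetical lists above, the top of the ranking is identical while differences accumulate further down, the same pattern of rank-dependent personalization described above.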

This result is interesting given the observation that college students “prefer to click on links in higher positions even when the abstracts are less relevant to the task at hand” (Heersmink, 2018, p. 6). From one perspective, of course, this tendency looks like a vice that jeopardizes the epistemic standing of the individual user. And yet, from another perspective, it looks like the preference for higher ranked search results is poised to negate (or at least reduce) the negative epistemological effects of personalized search. What we seem to have here, in essence, is a situation in which one kind of ‘intellectual vice’ (i.e., a tendency to select highly-ranked search results) is playing something of a more positive (virtuous?) role in mitigating the negative epistemological sequelae of a seemingly vicious technology (i.e., personalized search).

None of this means that the epistemic effects of personalized search are to the overall benefit of individual users; nevertheless, the aforementioned results do call for a more nuanced and empirically informed approach when considering the veritistic value of search engines, as well as other kinds of Internet-related technology.

A second worry relates to the scope of the epistemological analysis upon which judgements about the veritistic value of search engines are based. In this case, it is unclear whether analyses that focus their attention on individual agents are best placed to reveal the full gamut of epistemic costs and benefits associated with a particular technology, especially one that operates in the socio-technical ecology of the Internet. To help us understand this worry in a little more detail, it will be useful to introduce the notion of mandevillian intelligence (Smart, in pressc; Smart, in pressd).

Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, e.g., at the level of collective doxastic agents (see Palermos, 2015) or socio-epistemic systems (see Goldman, 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.

While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman, 2010) and assisting with the epistemically optimal division of cognitive labor (see Muldoon, 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary according to whether our epistemological gaze is directed towards individual epistemic agents or the collective ensembles in which those agents are situated.
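The claim that individual-level filtering might improve collective coverage can be made concrete with a deliberately simple toy model (entirely my own construction, not something drawn from Zollman or Muldoon): compare the number of distinct ideas a community encounters when everyone sees the same globally top-ranked items against a regime in which each agent's list is personalized.

```python
import random

IDEAS = list(range(100))          # a space of 100 'ideas'; idea 0 is the most popular
N_AGENTS, LIST_LENGTH = 50, 10

def coverage(personalized: bool, seed: int = 0) -> int:
    """Count how many distinct ideas the community encounters in total."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(N_AGENTS):
        if personalized:
            # Each agent's results are biased toward a different region of the idea space.
            centre = rng.randrange(len(IDEAS))
            results = [(centre + offset) % len(IDEAS) for offset in range(LIST_LENGTH)]
        else:
            # Everyone sees the same globally top-ranked ideas.
            results = IDEAS[:LIST_LENGTH]
        seen.update(results)
    return len(seen)

print(coverage(personalized=False))  # 10: the community converges on the same few ideas
print(coverage(personalized=True))   # typically close to 100: much broader coverage
```

The model obviously abstracts away from questions of quality and truth, but it illustrates the structural point: what looks like a vice at the level of the individual (restricted, biased exposure) can contribute to broader cognitive coverage at the level of the collective.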

The Necessity of Virtue

As Richard notes, virtue epistemology is characterized by a shift in emphasis, away from the traditional targets of epistemological analysis (e.g., truth, justification and belief) and towards the cognitive properties of would-be knowers. “Virtue epistemology,” Richard writes, “is less concerned with the nature of truth and more concerned with the cognitive character of agents” (Heersmink, 2018, p. 2). This is, no doubt, a refreshing change, relative to the intellectual orientation of traditional philosophical debates.

Nevertheless, I assume that virtue epistemologists still recognize the value and priority of truth when it comes to issues of epistemic evaluation. Someone who holds false beliefs is not the possessor of knowledge, and this remains the case irrespective of whatever vices and virtues the agent has. In other words, it does not matter how careful, attentive and assiduous an agent is in selecting and evaluating information, if what the agent believes is false, they simply do not know.

What seems to be important in the case of virtue epistemology is the role that intellectual virtue plays in securing the truth of an agent’s beliefs. In particular, the central feature of virtue epistemology (at least to my mind) is that the truth of an agent’s beliefs stems from the exercise of intellectual virtue. It is thus not the case that truth is unimportant (or less important) when it comes to issues of positive epistemic standing; rather, what matters is the role that intellectual virtue plays in establishing the truth of an agent’s beliefs. An agent is thus a bona fide knower when they believe the truth and the truth in question is attributable to some aspect of their cognitive character, specifically, a cognitive trait (virtue responsibilism) or cognitive faculty (virtue reliabilism).

What then makes something a vice or virtue seems to be tied to the reliability of token instantiations of processes that are consistent with an agent’s cognitive character. Intellectual virtues are thus “cognitive character traits that are truth-conducive and minimalise error” (Heersmink, 2018, p. 3), while intellectual vices are characterized as “cognitive character traits that are not truth-conducive and do not minimalise error” (Heersmink, 2018, p. 3). It is this feature of the intellectual virtues—the fact that they are, in general, reliable (or give rise to reliable belief-relevant processes)—that looks to be important when it comes to issues of epistemic evaluation.

So this is what I find problematic about virtue theoretic approaches to knowledge. (Note that I am not an epistemologist by training, so this will require a generous—and hopefully virtue-inspiring—swig of the ole intellectual courage.) Imagine a state of affairs in which the Internet was (contrary to the present state of affairs) a perfectly safe environment—one where the factive status of online information was guaranteed as a result of advances in cyber-security techniques and intelligent fact-checking services. Next, let us imagine that we have two individuals, Paul and Sophia, who differ with respect to their cognitive character. Paul is the less virtuous of the two, unreflectively and automatically accepting whatever the Internet tells him. Sophia is more circumspect, wary of being led astray by (the now non-existent) fake news.

Inasmuch as we see the exercise of intellectual virtue as necessary for online knowledge, it looks unlikely that poor old Paul can be said to know very much. This is because the truth of Paul’s beliefs is not the result of anything that warrants the label ‘intellectual virtue.’ Paul, of course, does have a lot of true beliefs, but the truth of these beliefs does not stem from the exercise of his intellectual virtues—if, indeed, he has any. In fact, inasmuch as there is any evidence of virtue in play here, it is probably best attributed to the technologies that work to ensure the safety of the online environment. The factive status of Paul’s beliefs thus has more to do with the reliability of the Internet than it does with the elements of his cognitive character.

But is it correct to say that Paul has no online knowledge in this situation? Personally, I do not have this intuition. In other words, in a perfectly safe environment, I can see no reason why we should restrict knowledge attributions to agents whose beliefs are true specifically as the result of intellectual virtue. My sense is that even the most unreflective of agents could be credited with knowledge in a situation where there was no possibility of them being wrong. And if that is indeed the case, then why insist that it is only the exercise of intellectual virtue that underwrites positive epistemic standing?

After all, it seems perfectly possible, to my mind, that Sophia’s epistemic caution contributes no more to the minimization of error in an epistemically benign (i.e., safe) environment than does Paul’s uncritical acceptance. (In fact, given the relative efficiency of their doxastic systems, it may very well be the case that Sophia ends up with fewer true beliefs than Paul.) It might be claimed that this case is invalidated by a failure to consider the modal stability of an agent’s beliefs relative to close possible worlds, as well as perhaps their sensitivity to counterfactual error possibilities. But given the way in which the case is characterized, I suggest that there are no close possible worlds that should worry us—the cybersecurity and fact checking technologies are, let us assume, sufficiently robust as to ensure the modal distance of those worrisome worlds.

One implication of all this is to raise doubts about the necessity of intellectual virtue, relative to our conceptual understanding of knowledge. If there are cases where intellectual virtue is not required for positive epistemic standing, then intellectual virtue cannot be a necessary condition for knowledge attribution. And if that is the case, then why should intellectual virtue form the basis of an approach that is intended to deal with the epistemic shortcomings of the (contemporary) Internet?

Part of the attraction of virtue epistemology, I suspect, is the way in which a suite of generally reliable processes is inextricably linked to the agent who is the ultimate target of epistemic evaluation. This linkage, which is established via the appeal to cognitive character, helps to ensure the portability of an agent’s truth-tracking capabilities—it helps to ensure, in other words, that wherever the agent goes their reliable truth-tracking capabilities are sure to follow.

However, in an era where our doxastic systems are more-or-less constantly plugged into a reliable and epistemically safe environment, it is not so clear that agential capabilities are relevant to epistemic standing. This, I suggest, raises doubts about the necessity of intellectual virtue in securing positive epistemic status, and it also (although this is perhaps less clear) encourages us to focus our attention on some of the engineering efforts (as opposed to agent-oriented educational programs) that might be required to make the online world an epistemically safer place.

Conclusion

What, then, should we make of the appeal to virtue epistemology in our attempt to deal with the epistemic hazards of the Internet? My main concern is that the appeal to virtue epistemology (and the emphasis placed on intellectual virtue) risks an unproductive focus on individual human agents at the expense of both the technological and social features of the online world. This certainly does not rule out the relevance of virtue theoretic approaches as part of our attempt to understand the epistemic significance of the Internet, but other approaches (e.g., modal reliabilism, process reliabilism, distributed reliabilism, and systems-oriented social epistemology) also look to be important.

Personally, I remain agnostic with regard to the relevance of different epistemological approaches, although I worry about the extent to which virtue epistemology is best placed to inform policy-related decisions (e.g., those relating to education). In particular, I fear that by focusing our attention on individual agents and issues of intellectual virtue, we risk overlooking some of the socio-epistemic benefits of the Internet, denigrating a particular technology (e.g., personalized search) on account of its failure to enhance individual knowledge, while ignoring the way a technology contributes to more collective forms of epistemic success.

In concluding his thought-provoking paper on virtue epistemology and the Internet, Richard suggests that “there is an important role for educators to teach and assess [intellectual] virtues as part of formal school and university curricula, perhaps as part of critical thinking courses” (Heersmink, 2018, p. 10). I have said relatively little about this particular issue in the present paper. For what it’s worth, however, I can see no reason to object to the general idea of Internet-oriented educational policies. The only caveat, perhaps, concerns the relative emphasis that might be placed on the instillation of intellectual virtue as opposed to the inculcation of technical skills, especially those that enable future generations to make the online world a safer place.

No doubt there is room for both kinds of pedagogical program (assuming they can even be dissociated). At the very least, it seems to me that the effort to resolve a problem (i.e., engineer a safer Internet) is just as important as the effort to merely cope with it (i.e., acquire a virtuous cognitive character). But, in any case, when it comes to education and learning, we should not lose sight of the fact that the Internet is itself something that is used for educational purposes. Perhaps, then, the more important point about education and the Internet is not so much the precise details of what gets taught as the question of whether the Internet (with all its epistemic foibles) is really the best place to learn.

Contact details: ps02v@ecs.soton.ac.uk

References

Goldman, A. I. (2011). A guide to social epistemology. In A. I. Goldman and D. Whitcomb (Eds.), Social Epistemology: Essential Readings, pp. 11–37. New York, New York, USA: Oxford University Press.

Hannak, A., P. Sapiezynski, A. Molavi Kakhki, B. Krishnamurthy, D. Lazer, A. Mislove, and C. Wilson (2013). Measuring personalization of Web search. In D. Schwabe, V. Almeida, H. Glaser, R. Baeza-Yates, and S. Moon (Eds.), Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, pp. 527–538. ACM.

Heersmink, R. (2018). A virtue epistemology of the Internet: Search engines, intellectual virtues, and education. Social Epistemology 32 (1), 1–12.

Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass 8 (2), 117–125.

Palermos, S. O. (2015). Active externalism, virtue reliabilism and scientific knowledge. Synthese 192 (9), 2955–2986.

Pritchard, D. (2009). Knowledge. Basingstoke, England, UK: Palgrave Macmillan.

Smart, P. R. (in pressa). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. (in pressb). Knowledge machines. The Knowledge Engineering Review.

Smart, P. R. (in pressc). Mandevillian intelligence. Synthese.

Smart, P. R. (in pressd). Mandevillian intelligence: From individual vice to collective virtue. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Socially Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. and N. R. Shadbolt (in press). The World Wide Web. In J. Chase and D. Coady (Eds.), The Routledge Handbook of Applied Epistemology. New York, New York, USA: Routledge.

Watson, D. and L. Floridi (2018). Crowdsourced science: Sociotechnical epistemology in the e-research paradigm. Synthese 195 (2), 741–764.

Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17–35.

[1] This work is supported under SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 and comprises the Universities of Southampton, Oxford and Edinburgh.

[2] See http://www.xorbin.com/tools/sha256-hash-calculator [accessed: 30th January 2018].

Author Information: Mark D. West, University of North Carolina at Asheville, westinbrevard@yahoo.com

West, Mark D. “Organic Solidarity, Science and Group Knowledge.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 1-11.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3eN


Image credit: NASA Goddard Space Flight Center, via flickr

Abstract

Recently, a discussion has arisen in the pages of Social Epistemology and the Social Epistemology Review and Reply Collective concerning whether groups can have knowledge that individuals cannot. This discussion, it seems to me, has been particularly fruitful in that it has sought to maintain the definition of knowledge as justified true belief while asking whether groups could have justificatory procedures that individuals could not. As such, the discussion quickly moved towards scientific investigatory groups, particularly those in the “hard” sciences, because of the divisions of labor that arise from the technological nature of the instrumentation such groups require. The discussion thus became a debate about whether scientific knowledge could be held by groups, specifically scientific research groups, which I contend obscured the more fundamental and more important issue of group epistemic knowledge. I present two arguments to support my contention. The first, a variant of a Gettier paradox from Reynolds, suggests that even informal groups (groups showing only mechanical solidarity) can have knowledge, as groups, that the individuals in the group could not. The second suggests that discussions of “organic solidarity” à la Durkheim are insufficiently precise to enable us to tell “science” from “not science,” and that descriptions of “scientific knowledge” are essentially value judgments about what sort of knowledge is worthy of respect, judgments which have no epistemic justification; if knowledge is “justified true belief,” what value does the descriptor “scientific” provide beyond that?

Author Information: K. Brad Wray, State University of New York, Oswego, brad.wray@oswego.edu

Wray, K. Brad. “Collective Knowledge and Collective Justification.” Social Epistemology Review and Reply Collective 5, no. 8 (2016): 24-27.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-39p


Image credit: Justin Kern, via flickr

Chris Dragos (2016) offers fresh insight into the debate about which sorts of groups in science can be properly said to have knowledge, with a focus on a debate between Kristina Rolin (2008) and K. Brad Wray (2007). As one of the participants in that debate, I would like to offer some remarks on what Dragos contributes to the debate, and where it might go from here.

Author Information: Richard W. Moodey, Gannon University, moodey001@gannon.edu

Moodey, Richard W. “Performing Knowing: A Reply to Collins.” Social Epistemology Review and Reply Collective 5, no. 6 (2016): 42-43.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-33t


Image credit: Ian Sane, via flickr

Harry Collins and I agree on many things. We both value clarity of expression, as well as informed agreement and disagreement. We agree about the intimate relation between action and intention. We agree that it is a mistake to assign an intention to a collectivity. This means, I believe, that it is a mistake to attribute actions to groups or other collectivities. “Actions, as we use the term, are tied up with intentions, and intentions are internal states” (Collins and Kusch 1998, 18).

Author Information: Leah Carr, University of Queensland, leah.michelle.carr@gmail.com

Carr, Leah. “Review: Essays in Collective Epistemology.” Social Epistemology Review and Reply Collective 4, no. 5 (2015): 29-32.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-25g


Image credit: Oxford University Press

Essays in Collective Epistemology
Edited by Jennifer Lackey
Oxford University Press
272 pp.

Leah Carr, University of Queensland

To address Essays in Collective Epistemology, edited by Jennifer Lackey, I will provide an overview of what collective epistemology involves (based almost entirely on what I have learned from the book) and include a brief chapter-by-chapter digest of the essays themselves. Essays in Collective Epistemology covers a number of issues in the field and might serve as a useful introduction for prospective graduate students or others looking in from outside it.