Author Information: Lee Basham, South Texas College/University of Texas, Rio Grande Valley, labasham@southtexascollege.edu

Basham, Lee. “Border Wall Post Truth: Case Study.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 40-49.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Eu

Image credit: Anne McCormack, via flickr

“The more you show concern, the closer he’ll go to the edge … Some things are just too awful to publicize.”—Don DeLillo, White Noise

“History is hard to follow. Luckily, they killed Kennedy. Leaves bread crumbs if we stray.”—Alfonso Uribe

Dogs don’t look up. The higher the bone is tossed, the less likely they are to see it. Lost in a horizontal universe, they run tight circles, wondering, “Where is it?” On its way down it hits them on the head. Civilized primates are surely different. Our steep information hierarchies are different. Or are they? In the high castles of information, a few above look upon the many circling below.

Far South Texas, a bone’s throw (or gunshot) from the US/Mexican border, enjoys post truth as a storied and comfortable tradition. So stable, we might question the addendum “post”. Here truth is ephemeral. Like rain, it appears rarely. When it does it collects in pools, grows strange stuff, gets smelly and then dries up.

Are we suddenly flung into a post-truth world? The sophists lost that one, the Stalinists, too. But history’s lessons, like a grade 2 curriculum, never end. They remain the same. Hope springs eternal. Adam Riggio, in “Subverting Reality”, takes a personal approach, emphasizing trust before truth, even providing a theory of true punk music: if form, then content. All else is appropriation. Meet fake punk. While I’m not sure about that, I’m sympathetic. Perhaps form does not formulate in the end, which is why we should be suspicious of any form-allegiance. Including representational democracy. But his is an understandable approach. Like Riggio, I’ll take a personal line.

In letter-to-the-editor style: I reside in McAllen, Texas. It is in the Rio Grande Valley. Locals call this the “RGV” or “956”.[1] Table chat I’ve shared in the wealthy parlors of Austin and San Antonio insists we are not really part of Texas: “They’re all Mexican”. But the map indicates we are, because we are on the North side of the river.

A few miles South of town we have a long stretch of the Mexico/US Border. The Wall. It looks like minimalist conceptual art from the 1960s. Donald Judd comes to mind, Donald Trump, too.[2] Professional photographers adore it, prostrate before it. They fly in just to see and click. The border wall is by nature post-trust and so, post-truth. This Post Truth is a concrete condition. Literally. The Wall is made of concrete and steel, and I’ve climbed it. Took me 1.5 minutes (a bit slower than average; wear tennis shoes, not boots). Recently, epistemologists have explored this scenario. Suspicion is natural to social primate life, not shocking, misplaced or shameful: The battle is not for trust, but for realistic, strategic distrust.

Post Truth Life

We are Texas and proud we are. We proudly supply Washington DC with its cocaine, providing the capital the highest quality, best prices, in vast quantities. Our product is legend, a truly international undertaking, spanning 13 countries. This is our number one economic achievement. We proudly provide the largest, most vibrant, corporate retail experience to be found anywhere between San Antonio and the Federal District of Mexico. Our shopping is legend, a truly international undertaking, filling the parking lots with cars from the Mexican states of Tamaulipas, Nuevo Leon, DF, alongside Canadian vehicles from Ontario, Alberta, Quebec and others.[3] We are Texas and proud we are. This is our number one economic achievement. As one might imagine, such a list goes on. The local banks reflect our achievement. Billions of dollars beyond the productive abilities of our local legal economy are on deposit. Almost every penny in the banks is owed to the success of our local legal economy. But what I take to be our greatest achievement, which all this and more rests upon, is the borderland mind. In the parlance of the moment, it is deliciously post-trust and post-truth. If this isn’t social epistemology, what is?

I have lived on the border for more than a decade. My wife, originally from Monterrey, Mexico, and her family, have lived here since she was 14, and for several years before that just a few blocks South of the river’s South side. While most academics are Anglo imports and cling to the same, I didn’t make that mistake. Her family and my friends provide an intimate understanding.

Conspiracy theory is the way of life here, much of it well informed. Though truth is rare enough, its seasons are established and understood. The winds that sweep from Mexico into the North whip up some remarkable and telling conspiracy theories. As does the wind from Washington. Escobares, one of the oldest cities in the US, is a short drive West of McAllen. The church is built of petrified wood. On the Border even the US census is post-truth and seen as such; not just in population count (understandably, it misses half the people):

At the 2010 census the city of Escobares had a population of 1,188. The racial composition of the population was 98.3% white (7.2% non-Hispanic white), 1.6% from some other race and 0.1% from two or more races.

Yet, 92.8% of the population was Hispanic or Latino with 92.3% identifying as being ethnically Mexican.[4]

Escobares is a white town? McAllen has a nearly identical US census profile. Derisive laughter on local radio and at front-yard parties follows.

The Wall of Conspiracy

The Wall is patchy, has gaps. Erected under President Obama, it runs many miles here, many miles there; ropes dangle everywhere to help travelers across it. Little kids’ shoes, kicked off as they climb, litter its base. Sometimes the kids fall. The Wall is not monolithic. Nor is opinion. Surprisingly, in an almost entirely Hispanic community, completing The Wall is both opposed and supported by many. Often the same people. This is not insanity; it is time-honored strategy. It brings to mind the old movies where people hang two-sided picture frames with opposing photos, and flip the frame according to what a glance out the window informs them about their arriving guests. The photos mean nothing, the flipping, everything. Fireside conversations become remarkable. The anti-wall protests of local politicians are viewed in a familiar post-truth, fading race-war narrative: They have to say that. Both Democrats and Republicans copycat this story line and then deny any allegiance to it at Rotary Club meetings before racially well-mixed and approving audiences. Legal trade is good, the rest is a mess. Why a wall? None of them would do any lucrative illegal business. They pray before their meetings. But Northern cities in Mexico promote ineffective boycotts of McAllen’s retail miracle because of The Wall. They fear it hurts them financially. Odd. The McAllen Mayor responds by stringing a broad, mixed-language banner across Main Street, declaring, “Bienvenidos to McAllen, Always Amigos”. The Wall issue dissolves.

Charades require political tension, sincere or contrived, perhaps a tactic of negotiation.

Why local support for The Wall? Too many headless bodies, too many severed heads. People are sick of the untouchable prostitution trap houses north and east of town. Fenced-in, barbed-wired, cinder-block buildings with armed guards, stocked with poached immigrant girls and boys, a parking lot full of Ford F150 trucks. The kidnappings of immigrants, the torture chambers and videos when the money never arrives. The ones that by sheer luck avoid such fates are relegated to backcountry depots and “abandoned” houses. Often they are abandoned, forced to burglarize and rob to eat and continue their trek north.

People are also tired of the border’s relentless yet ironically impotent police state. One cannot drive the 57 miles from McAllen, Texas to Rio Grande City without passing 20 or more roadside State Troopers in their cartel-black SUVs. Don’t bother to count the Border Patrol SUVs: They are more numerous. The State Troopers, euphemistically agents of “The Department of Public Safety” (DPS), fill our now crowded jails with locals, on every imaginable infraction, no matter how trivial. After asking me where I lived, at the end of a convenience store line conversation, one told me, white on white, “Then ya know, people here are bad.”[5] These are not local Sheriffs, born and raised here, who understand people and who is and isn’t a problem. DPS is relentless, setting impromptu roadblocks throughout our cities, tossing poor people in “county” for not having car insurance and the money to pay for it on the spot. Whole Facebook pages are devoted to avoiding the roadblocks in 956. Down at McAllen’s airport entire multi-story, brand-new hotels are now filled with foreign agents of the state. The whole monster-mash, every-day-is-Halloween scene down on the border could be chronicled for pages.

All of this is perceived by a hardworking, fun-loving, family-driven community as an ill wind from the South, drawn by the bait-and-switch vacuum of an uncaring, all-consuming “great white north”, and a Washingtonian two-face. Right they are. With The Wall, perhaps these police-state parasites will leave. The slave traps will wither by the rule of no supply. Rich white and agringado activists up North be damned, who for their own, disconnected reasons, demand it never end.[6] To quote a close relative, “Nombre! They don’t live here!”

People see The Wall as a conspiracy to placate the xenophobes up North, not protect anyone. Keep the cheap labor coming but assert, “We did something to stop it.” People see The Wall as protection for those who otherwise would cross and fall into the many traps set for them by the coyotes; they also see The Wall as protection for themselves. They see The Wall as a conspiracy supported by the drug cartels and the Mexican government the cartels control (its official protests notwithstanding) to simplify the business model, driving the local cells and resident smuggling entrepreneurs out of business. Using operatives in ICE and the Border Patrol is more efficient: Cut out the middle women and men. People lament the damage this will do to our local economy and in some cases, personal income. People praise this. People see those in the North who oppose The Wall as political fodder used by those who could not care less about them, but want to pretend they do without having a clue, or even trying to. People believe The Wall is a conspiracy, not just to keep Hispanics out, whom they often despise depending on country (“OTMs”, Other Than Mexicans), but to keep Americans in. As I quickly learned, though few border-landers verbally self-identify as “Mexicans” (that takes a trip across the river), they view a dangerous Mexico as a safe haven if things “go south” here in the United States. If a theoretical, grave political or economic crisis occurs, or just a particularly unpleasant but very real legal entanglement, escape to Mexico is their first resort.

People ask, after the finished wall, added concertina wire and all, what if they close the bridges? When they need to run, they want to be able to. People see The Wall as an attempt to destroy the Mexican economy, forcing Mexico into the proposed North American Union, where Canada has submitted in principle, and the only hold-outs are the resolute patriots of the Republic of Mexico: “Mexico, so far from God, so close to the United States”.[7] Washington will never be its capital. A noble sentiment. More pedestrian conspiracy theories circulate about campaign contributions from international construction corporations and their local minions. Workers on both sides of the river hope the fix is in; it means jobs for everyone. Recall that the Israeli government hired eager Palestinians to build its wall; but that’s another post truth reality. Revealingly, the Israeli example has been promoted in the American press as a model with the notorious phrase, “best practices”. Such is the politics of promised lands.

What is Post Truth?

Post truth is, first, that access to a shared, community truth is now lost. But that would only entail agnosticism. Post truth is more. It is also, second, that seemingly contradictory claims now have equal legitimacy with the government, the media and the citizenry. No one looks up. This is an unlikely construct. Like choosing wallpaper, but this time for the mind: what a citizen believes, political, economic or otherwise, is entirely a matter of personal taste. And there is no accounting for taste. No epistemic grounds for ordinary controversy, but, insidiously, a double-truth theory laid upon the collective consciousness of democratic society. Collective madness. Hence: a post truth world. It’s a catastrophe. Or is it? Look up at the above. What is epistemically interesting is that most of the conspiratorial stances above do enjoy some significant evidence and are mutually consistent. Hence they are simultaneously believed by the same persons. Enter real “post truth”, and a larger diagnosis of our information hierarchy. It is not reliable. Instead we look to each other.

Five Suggestions about Post Truth

Post truth is about epistemology, social and otherwise, but only at one or more steps removed. On the ground it is entirely pragmatic. Post truth is not to be confused with mere state propaganda. That is another, much narrower notion. Post truth, as defined above, is ancient and ubiquitous. The 21st century is no different.

❧ ❧ ❧

1. The first, a bit tiresome to repeat, is found in several epistemic critiques of the pathologizing approach to conspiracy theory: We should not conflate suspicions with beliefs. There is nothing cognitively anomalous about post truth states of consciousness when read this way.[8] Suspicion is epistemically virtuous. The fears surrounding ambitions of pathology, however great, are immediately de-sized in the face of this simple distinction. Suspicion is one of the virtues of Eric Blair’s famous character, Winston Smith—at least until he trusts and is captured, tortured and turned.

❧ ❧ ❧

2. “Post” implies a time before that has passed. More formally, it might be termed a tense-based situational truth agnosticism.[9] Applied to “trust” and “truth” on the border, this proposed time before would require reference to the more social and intelligent Pleistocene mammals. Maybe to the first human visitors, ten or more thousand years ago, no doubt in search of water. An attitude of panic towards “post truth” seems misplaced. Nothing can survive laughter. This is a second suggestion. Post truth hysteria is, while initially quite understandable, difficult to take seriously for long. Rage concerning it, even more so.[10]

❧ ❧ ❧

3. Linguists point out that “trust” and “truth” are closely related. One births the other. By accident and so inclination, I am an epistemologist of trust, especially its “negative spaces”, to borrow from art-theory. These spaces in our current information hierarchy, where so few control what so many hear, and often believe, are legion. In our society navigating these is elevated to high art, one we should not fear. My third suggestion is that if nothing changes then nothing changes. And my prediction: nothing changes in a post truth world. Because nothing has changed. Or soon will.

Post trust is not the new normal; it is the oldest one. You don’t know people, or societies, until you go about with them. We should be cautious, watchful. As my son would put it, “We should lurk them hard”. A skeptical attitude, an expectation of post truth because of a post trust attitude, is appropriate, an adult attitude. Among billions of humans of all types and classes, we hardly know anyone. And those who protest this doth protest too much. Such an attitude of truth-privilege, as found among the denizens of the political avant-gardes and their fellow travelers in our mass media, has always been unearned.[11] One often betrayed. Professional managers of belief I will grant the mainstream media; professional purveyors of truth is quite a stretch, a needless one. But a conceit that has proven lethal.

Consider the 2003 Iraq invasion. We were told at the time, by both current and prior presidents, it was an invasion for feminism.[12] The media, including the New York Times, chimed in with approval. Normalizing this invasion was this media’s crowning achievement of the 21st century’s first decade. One might think they got off on the wrong foot, but that would entirely depend on what the right foot is. I argue for a more functional outlook. Their function is basic societal stability, congruence with official narratives when these are fundamental ones, not truth; an establishment of normality in virtually anything. Truth has its place at their table only among the trivial, not basic stability. Consider the US civil rights movement. Here the political avant-gardes and mass media had an effect we view as laudable. Yet this did not threaten the established political or capitalist order. It ushered old participants into greater integration within it and to new levels of participation on its behalf. Mr. Obama, for instance.

❧ ❧ ❧

4. Mainstream media and avant-garde political pronouncements are unreliable in proportion to how important it is to their purveyors that we accept them. I don’t mean this as revelatory, rather in the manner of reminder. The opportunities for manipulation loom especially large when popular cultures are involved, and the ways we identify with these are transitioned into apathy or atrocities. Or both, simultaneously. This transcends political dichotomies like “right” and “left”. Both, because of their simplicity, are easy marks. The proper study, perhaps, is that of “faction”. A war for feminism? A war to extend democracy? A war for Arab prosperity and against child poverty? A war for American energy independence? A war for the world: Pax Americana? But the ploy worked, both popularly and within academia. It’s being re-wrought today. In the popular and academic hysteria following 9/11, Michael Walzer, champion of Just War Theory, wrote,

Old ideas may not fit the current reality; the war against terrorism, to take the most current example, requires international cooperation that is as radically undeveloped in theory as it is in practice. We should welcome military officers into the theoretical argument. They will make it a better argument than it would be if no one but professors took interest.[13]

Walzer asks to take his place among the generals. Walzer goes on to argue for the importance of aerial bombing while trying not to blow rather young children to smithereens. Walzer’s justification? Protecting US soldiers. If any of this strikes us as new or news, we live in what I like to call the united states of amnesia. He claims current bombing technology overwhelmingly protects the innocent. An interesting post truth formula. Who then are the guilty soldiers and functionaries, and how could they be? Denounce the stray bomb fragments, then embrace the counsel of professional conspirators of death in our moral considerations. This is suspect, politically, morally and epistemically. It is also feminism. That’s a post truth world. Long before a real estate agent joined the pantheon of US presidents.

The rebellion of conspiracy theory helps here. Conspiracy theory is typically, and properly, about suspicion, not belief. Certainty, even if just psychological, “truth”, is not an option for a responsible citizen. A vehement lament and protest against post-truth is inadequate if it ignores the importance of suspicion. But nothing like suspicion post-trusts and so post-truths. To borrow a lyric from Cohen, “that’s how the light gets in”. And we post-any-century-primates have good reason for suspicion. True, the opening years of the 21st century hit a home-run here, but it wasn’t the first or last. If anything is transcendently true, that’s it.

If this functional, suspicious understanding becomes our baseline epistemology (as it is where I live), we might worry catastrophe will ensue. Like leaving a baby alone in a room with a hungry dog. But what actually happens is the dog patiently waits, ignoring the obvious. Good dog. People and dogs share much. With humans what actually ensues is table talk, memes on the internet, and winks and rolling eyes across the TV room. Formerly known as the “living room”, this post-living room space is not grade school and we are not attentive, intimidated students. We’re artists of negative spaces and we usually negotiate them with aplomb. Unless we really think mass media reliability is what post truth is post to. Then, I suppose, catastrophe does ensue: Only a brief emotional one, similar to losing one’s religion, one’s political piety. Cass Sunstein provides,

“Our main policy claim here is…a distinctive tactic for breaking up the hard core of extremists who supply conspiracy theories: cognitive infiltration of extremist groups, whereby government agents or their allies (acting either virtually or in real space, and either openly or anonymously) will undermine the crippled epistemology of believers by planting doubts about the theories and stylized facts that circulate within such groups.”[14]

Let’s conspire against citizens who worry you might be conspiring against them. Is there anything new here?

Riggio on Post Truth

Like Riggio, I view the existence of political truth as beyond evident. In the face of rhetoric concerning a “post truth” contagion, Riggio counters there is instead a battle for public trust. He’s right. He’s channeling, in fact, Brian Keeley’s classic public trust approach to alternative thought.[15] As with our confidence in science, mainstream media functions the same way. But Riggio seems to think it is a new battle, and one worth fighting and “winning”. Now what would be winning? As we finally fall asleep at night, we might appreciate this. But not in daylight. There’s no battle for public trust there. Most don’t, but say we do. And that’s a good thing.

Public trust has long ago headed down the yellow brick road with Dorothy in search of a wizard. Lies and compromise are recognized, from all quarters, as our long-term norm. Dorothy’s surprise and the wizard’s protests when he is revealed should hardly surprise. This is the road of the golden calf, representational democracy.

The closer you get to Washington DC, Paris, Beijing, London or the democratic republic of Moscow, the more obvious this perception and reality are. It’s celebrated in transatlantic, transnational airplane conversations that last for hours. It’s palpable before the edifices of any of these capitals’ secular monuments. It is as palpable before the non-secular: standing a few blocks before the Vatican, a previous political model, we can’t really deny it. These edifices now, as they were before, are saturated in farce.[16] Adam Riggio’s impassioned political piece, with his hands on the cold marble, reminds us that being too close to the temple can blind us to its real shape, strength and impressive age. Riggio writes,

[Mainstream media’s behavior] harms their reputation as sources of trustworthy knowledge about the world. Their knowledge of their real inadequacy can be seen in their steps to repair their knowledge production processes. These efforts are not a submission to the propagandistic demands of the Trump Presidency, but an attempt to rebuild real research capacities after the internet era’s disastrous collapse of the traditional newspaper industry.[17]

I see this as idealized media primitivism, “If only we could go back”. It’s absolutely admirable. But was print media ever supposed to be trusted? Print media set the stage for the invasion of Cuba and Mexico. It suppressed the deadly effects of nuclear testing in the 1950s and 60s and then promulgated apologetics for the same. Between 1963 and 1967 the Vietnam War was “the good guys shooting the Reds”.[18] It played a similar role in Central American intervention, as well as the first and second “gulf” wars, fought deep in the desert. Mainstream media has long been superb at helping start wars, but way late to the anti-war party and poor at slowing or ending the same wars it supported. A post truth world hypothesis predicts this. An interesting point, one more interesting the more intense the consequences are. The more seemingly significant a political event—such as bizarre politics or senseless wars—the more normal it is initially portrayed by mainstream media. Eventually damage control follows. Public trust? Not likely. Certainly not well placed.

❧ ❧ ❧

5. So a final, fifth suggestion: Our paleo post-truth vision taps on our shoulders: The “new normal” political panic concerning a “post truth” world we find in political conversation and in mass media is an ahistorical and ephemeral protest. Our strange amnesia concerning our wars, the conduct of such and their strange results should be evidence enough. Communist Vietnam, with its victory in 1975, was by 1980 a capitalist country par excellence. An old point, going back to Orson Welles’ Citizen Kane. “I remember the good ole days when we had newspapers” seems an unlikely thesis.

Recall Eastern Europe. While I was giving a talk in Romania on conspiracy theories and media, one that might be characterized as a post truth position on media reliability in times of extreme crisis, the audience found the remarks welcome but fairly obvious. They doubted we of the West really had a free mainstream media in contrast, but they enjoyed the idea, the way we might enjoy a guest’s puppy; he’s cute. The truth can be toxic in many social and political settings. Good arguments indicate mass media hierarchies react accordingly everywhere. Far from being tempted to promulgate such truths, like the aforementioned hungry dog and baby, they leave toxic investigation alone. Why look? Why bite?

Conclusion

Politicization of knowledge is dubious. “Post Truth” is a political term of abuse, one that will quickly pass; a bear trap that springs on any and all. Just before the First World War, in 1912, Bertrand Russell pointed out that the truth “must be strange” about the most ordinary things, like tables or chairs.[19] Are politics, mass media power, any less strange? Now we all stand, down by the river, awaiting the evening’s usual transactions and gunfire.

We live in the united states of amnesia. In the rush of contemporary civilization, memories are short, attention fractured and concentration quickly perishes. We just move on. The awesome spectacle of seemingly omnipotent governments and ideologically unified corporate global mass media, along with a population driven by consumption and hedonism, might create a sense of futility where subversive narratives are concerned. But then in new form the subversive narratives are reborn and powerfully spread. The growing intensity of this cycle should give us pause. Perhaps the answer does not lie in seeking new, remedial, intellectually sophisticated ways to ignore it, but in addressing our information desert, our scarcity of real epistemic access to the information hierarchy hovering above us. And in discovering ways this can be reversed in a world of unprecedented connectivity, so epistemic rationality can play a decisive role.[20]

For some this truth about post truth and its vicious ironies creates a scary place. Here on the edge of the United States, people have learned to live through that edge and embrace it. But in the cozy heartlands of the US, Canada and Europe, most prefer to die in the comfort of their TV rooms so they don’t die “out there”, as Cormac McCarthy puts it, “…in all that darkness and all that cold”. But when the long reality of a post trust, post truth world is forcibly brought to their attention by real estate developers, some react, like Dorothy, with rage and despair. This is a mistake.

Social epistemology should embrace a socially borne epistemic skepticism. This is not an airborne toxic event; it is fresh air. Social epistemology might not be about explaining what we know so much as explaining what we don’t and the value of this negative space, its inescapability and benefits: The truth about post trust and truth. Post truth is everywhere, not just here on the border. We can’t land in Washington DC at Ronald Reagan Washington National Airport and escape it. Welcome to the post-truth border, bienvenidos a la frontera, where we all live and always have. Certainty is an enemy of the wise. If thought a virtue, representational democracy is the cure.

This returns us to dogs. Dog-like though we be, primates can certainly learn to look up in intense interest. At the stars, for instance. I oppose The Wall. And can climb it. We don’t know until we go. The border is just beyond your cellar door. Do you live in Boston? There you are. Once you open up, look up. Don’t circle about in tight illusions. Embrace bright, buzzing, blooming confusion.[21] You don’t know my real name.

[1] Local area code.

[2] Chilvers, Ian & Glaves-Smith, John eds., Dictionary of Modern and Contemporary Art. Oxford: Oxford University Press, 2009.

[3] The latter are the so called “Winter Texans”. Fleeing the North’s ice and snow, but unwilling to cross the border and venture farther South into Mexico (except for one military controlled, dusty tourist town immediately across the river, wonderfully named “Nuevo Progreso”), they make their home here through fall, winter and spring.

[4] United States Census Bureau, 2010 Census.

[5] DPS officers are not all this way. Many are quite compassionate, and increasingly confused by their massive presence here.

[6] “Agringado”: “becoming a gringo”.

[7] President Porfirio Diaz, “Tan lejos de Dios y tan cerca de los Estados Unidos.”.

[8] See Basham, Lee and Matthew R. X. Dentith. “Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone.” Social Epistemology Review and Reply Collective 5, no. 10 (2016): 12-19, and subsequent remarks, Dieguez, Sebastian, Gérald Bronner, Véronique Campion-Vincent, Sylvain Delouvée, Nicolas Gauvrit, Anthony Lantian & Pascal Wagner-Egger. “’They’ Respond: Comments on Basham et al.’s ‘Social Science’s Conspiracy-Theory Panic: Now They Want to Cure Everyone’.” Social Epistemology Review and Reply Collective 5, no. 12 (2016): 20-39. Basham, Lee. “Pathologizing Open Societies: A Reply to the Le Monde Social Scientists.” Social Epistemology Review and Reply Collective 6, no. 2 (2017): 59-68.

[9] While a realist about truth, a situational truth agnosticism does not entail warrant/justification agnosticism. We don’t need to know if something is true to know it is probably true, given our best evidence, or probably not true.

[10] The political fate of Bernie Sanders comes to mind. A fine candidate, and my preferred, he was forced to recant at the Democratic Party Convention in 2016. One recalls the Hindenburg.

[11] The usual US suspects include CNN (“Combat News Network” in 2003-10 and more recently, “Clinton News Network”), NBC (“National Bombing Communications”) and FOX (a bit harder to parody due to the “x”, even though Mr. O’Reilly offered his services).

[12] George W. Bush and William J. Clinton.

[13] Walzer, Michael. “International Justice, War Crimes, and Terrorism: The U.S. Record.” Social Research, 69, no. 4 (winter 2002): 936.

[14] Cass Sunstein and Adrian Vermeule, “Conspiracy Theories: Causes and Cures”, University of Chicago Law School Public Law & Legal Theory Research Paper Series Paper No. 199 and University of Chicago Law School Law & Economics Research Paper Series Paper No. 387, 2008, 19, reprinted in the Journal of Political Philosophy, 2009.

[15] Keeley, Brian. “Of Conspiracy Theories”, Journal of Philosophy, 96, no. 3 (1999): 109-26. Keeley’s is a classic, but the Public Trust Approach (PTA) he advocates appears to fail on several levels. See the several critiques by Lee Basham, David Coady, Charles Pigden and Matthew R.X. Dentith.

[16] Not only farce, but a fair share.

[17] Riggio, Adam. “Subverting Reality: We Are Not ‘Post-Truth,’ But in a Battle for Public Trust.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 71.

[18] See Hallin, Daniel C. The Uncensored War: The Media and Vietnam. New York: Oxford University Press, 1986.

[19] Russell, Bertrand. The Problems of Philosophy. New York: Henry Holt and Company, 1912. Russell continues, “In the following pages I have confined myself in the main to those problems of philosophy in regard to which I thought it possible to say something positive and constructive, since merely negative criticism seemed out of place.”

[20] A paraphrase from “Conspiracy and Rationality” in Beyond Rationality, Contemporary Issues. Rom Harré and Carl Jenson, eds. Cambridge Scholars, Newcastle (2011): 84-85.

[21] James, William. The Principles of Psychology. Cambridge, MA, Harvard University Press, 1890, page 462.

Author Information: Derek Anderson, Boston University, derek.e.anderson@gmail.com

Anderson, Derek. “Relevance Theory and Conceptual Competence Injustice.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 35-39.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Ec

Image credit: Yasmeen, via flickr

Manuel Padilla Cruz (2017) has proposed that the notion of conceptual competence injustice that I offer (2017) can be usefully deployed within the field of linguistic pragmatics, specifically within relevance theory (Sperber and Wilson 1986/1995), to characterize certain unintended and harmful pragmatic implicatures arising from lexical mistakes. Lexical mistakes involve a speaker producing an utterance that omits or misuses some crucial piece of vocabulary. Padilla Cruz suggests that these mistakes predictably lead listeners to infer that the speaker lacks lexical competence with one or more terms. Conceptual competence injustice occurs when a member of a marginalized group is judged to have less conceptual competence than she in fact has and suffers from unjustly diminished credibility when making conceptual or linguistic claims. A judgment that a speaker lacks lexical competence could result in conceptual competence injustice under certain circumstances. This paper explores Padilla Cruz’s proposal by investigating the relationship between relevance theory and conceptual competence injustice.

On Relevance Theory

Relevance theory provides a model of cognition and its relationship with linguistic pragmatic phenomena. According to relevance theory, human cognition evolved to maximize relevance. Our innate cognitive architecture is organized so that computational resources are efficiently allocated to gather information that connects with available background information to produce conclusions that matter to the agent (Wilson and Sperber 2002). Let us say that a proposition matters to an agent when knowing its truth would facilitate the agent’s achieving some epistemic, moral, or practical goal. (An account of relevance as goal-sensitive is closely aligned with the model of relevance theory developed by Gorayska and Lindsay 1993.)

This understanding of cognition as maximizing relevance lays the foundation for a model of linguistic pragmatics. Under normal conditions speakers and their audiences presuppose that everyone is seeking to maximize relevance. Speakers can use this fact to pragmatically convey information that is not semantically encoded in their utterances; meanwhile, audiences infer things that are pragmatically implied by what speakers say. Sometimes the audience’s inferences are intended by the speaker and sometimes they are not.

When a speaker makes a lexical mistake, this piece of information may give rise to a number of unintended implicatures. Most salient for the present discussion, her audience may infer that she fails to grasp some concept. This inference might constitute conceptual competence injustice if the hearer judges the speaker to have a lesser degree of competence with some concept than she in fact does, or that she has a lesser understanding of some analytic or conceptual truth than she in fact has. Another possibility is that the hearer wrongfully judges the speaker to lack lexical competence but does not take her to lack conceptual competence. This might occur when a speaker appears to have difficulty speaking a second language. The audience detects a lexical mistake but assumes that the speaker grasps the concepts she is trying to express, i.e. that the speaker could display proper conceptual understanding in her primary language. This second case may be an example of what we could call ‘lexical competence injustice’ if the speaker is wrongfully judged to be lexically incompetent. That would be a case of lexical competence injustice without conceptual competence injustice.

Conceptual Competence Injustice

Padilla Cruz’s proposal to model conceptual competence injustice within relevance theory must be tempered with the proper understanding of that phenomenon as a structural injustice. Conceptual competence injustice is a form of epistemic oppression (Dotson 2014). It occurs when false judgments of incompetence function as part of a broader, reliable pattern of marginalization that systematically undermines the epistemic agency of members of an oppressed social identity. It cannot be accurately characterized merely as the result of a certain type of pragmatic inference without specifying facts about the social identities of the speakers and hearers involved, together with facts about the structure of their social circumstances. Since relevance theory is formulated in neutral language with respect to social identities and takes no account of local matrices of domination (Collins 2002), the theory does not intrinsically have the resources to identify instances of conceptual competence injustice.

However, relevance theory could help to illuminate patterns of conceptual competence injustice if it is embedded within a broader sociological framework. Together with a specification of relevant social identities, relationships, histories, and prevailing institutions, relevance theory can be used to model and predict instances of conceptual competence injustice arising from lexical mistakes. It can also model other ways in which pragmatic inferences are likely to occasion conceptual competence injustice. This might serve as a template for thinking about how relevance theory or other approaches to linguistic pragmatics can model local mechanisms through which epistemic oppressions are maintained and implemented within a given society.

Whether a type of pragmatic inference will contribute to a broader pattern of epistemic oppression depends largely on the distribution of background assumptions maintained by epistemic agents throughout a society. If there is a pervasive prejudice against the intellectual credibility of a particular social group regarding a certain domain of discourse, then speakers from that group will predictably trigger episodes of conceptual competence injustice when they make lexical mistakes in that domain. For example, in a community that harbors prejudice against the intellectual capability of women to understand abstract epistemology, a woman who makes a lexical mistake in that domain is more likely to be judged conceptually incompetent than a man who makes the same mistake. Mistakes made by dominant epistemic agents are more likely to be attributed to some other factor—a lack of sleep, a random lapse in concentration, a momentary confusion—rather than prompting a pragmatic inference to conceptual incompetence. Members of dominant groups generally have this kind of edge in perceived competence over marginalized epistemic agents.

Relevance theory also allows us to model ways in which individual and group interests can shape patterns of conceptual competence injustice. Since our cognitive architecture is wired to maximize relevance, and since relevance is determined by an individual’s goals and interests, it follows that an individual who has an interest in perceiving some marginalized group as less intellectually sophisticated will be more attuned to information that could pragmatically implicate conceptual ineptitudes, including lexical mistakes. A man who passionately maintains the misogynistic view that women cannot comprehend politics would be quick to process any potential evidence indicating that some woman lacks conceptual competence in that domain. He would thus be more alert to lexical mistakes in conversations about politics with women. Relevance theory makes explicit the way that conscious, premeditated interest in intellectual authority can produce judgments that lead to conceptual competence injustice.

Unconscious Interests

The interests that guide our cognition can also be less than fully conscious. Think of the white person who unconsciously holds white supremacist convictions about his own intellectual capability. This person might consciously believe that people of color are just as intellectually capable as whites, yet he is vaguely uncomfortable with the thought of himself being intellectually inferior to a person of color. Consequently, he unconsciously manifests vigilance for signs that prove his own intellectual superiority when he interacts with people of color. This makes him hyper-aware of lexical mistakes during such interactions. He is likely to contribute to the pattern of epistemic oppression, insofar as he is likely to underestimate the conceptual acumen of people of color, by reliably committing conceptual competence injustices.

The prevalence of unconscious interests of the kind just outlined must necessarily be a subject of controversy. However, relevance theory has the important virtue of allowing us to model sources of injustice grounded in dominant interests even in the absence of conscious and unconscious commitments to dominant power structures. Consider a different white person who harbors neither conscious nor unconscious commitments to white supremacy. Many white supremacist propositions are still in this person’s interest, in the sense that the truth of such propositions would promote his goals, for example: the proposition that he is the most qualified candidate for a position and that the person of color who got the job only did so because of affirmative action. Even if the white man does not believe this proposition, information that supports it would be highly relevant to him. He will therefore exhibit heightened sensitivity to signs that pragmatically imply that the person of color who got the job is less qualified, less capable, less intelligent, etc. Hence he would be more sensitive to any lexical mistakes this person makes. Other white people in the company who are similarly ‘threatened’ by affirmative action will have similarly heightened sensitivity, even if all of them are explicitly pro affirmative action and have no unconscious bias or prejudice. This pervasive sensitivity will increase the probability that conceptual competence injustices are inflicted on the one who got the job.

More broadly, dominant interests in white supremacy will influence pragmatic inferences concerning the intellectual authority of people of color. These interests may be unwanted by those who have them. Progressive white people may wish that they did not have a stake in white supremacy, yet white supremacy is still in their interest because it promotes their economic, political, and social well-being (albeit at the unjust expense of people of color). Similar considerations reveal that men have an unshakable interest in patriarchy, the rich have an unshakable interest in capitalism, the heterosexual have an unshakable interest in hetero-normativity, and so on. The goals of the privileged are facilitated by those systems that lend them their privilege, regardless of how they feel about or think of those systems. By postulating that our epistemic and practical goals fundamentally shape our cognitive processes, relevance theory has great potential for modeling the force of unwanted, yet unshakable, self-interested stakes in oppressive systems.

Not all pragmatic inferences to conceptual incompetence proceed on the basis of lexical mistakes. Perfectly cogent utterances can lead audiences to commit conceptual competence injustices. Relevance theory can model these pragmatic inferences as well. Consider a case in which a speaker who is a marginalized epistemic agent makes no mistake or glaring omission regarding the terminology she employs in some discourse, but her audience disagrees with what she says. Suppose, according to the background assumptions of her audience, the truth of the speaker’s utterance cannot be adjudicated on the basis of empirical evidence. The audience, believing her utterance to be false, infers that a conceptual error has been made. This purported conceptual error pragmatically implies that the speaker lacks conceptual competence in the domain.

For example, imagine a person of color says, “Malcolm X was not racist when he called white people ‘white devils’ and condemned their participation in the historical oppression of black people.” Suppose her audience, a white man, disagrees. He thinks that Malcolm X was racist for using the term “white devils.” He takes the speaker’s utterance to be false, but (perhaps implicitly) recognizes that the claim cannot be adjudicated on the basis of evidence. No observable facts are in dispute. Rather, the question of truth must somehow turn on the definition of “racist.” Recognizing this fact, the man is likely to infer that the speaker does not correctly understand the meaning of “racist” if she is speaking sincerely. This pragmatic inference constitutes conceptual competence injustice; it is not a harmless or blameless mistake but relies on and reinforces a pattern of epistemic oppression that ignores and undermines the intellectual authority of people of color concerning their understanding of racism.

The pragmatic inference in question depends on the audience’s background beliefs about relative credibility regarding conceptual claims about racism. The hearer takes himself to be more credible than the speaker concerning conceptual claims about racism; that is why he infers that the speaker is wrong and he is right about the definition of “racism.” If he took himself to be equally credible, his disagreement with the speaker would prompt him to open a dialogue about the definition of “racism,” as often happens when epistemic peers disagree over some concept. This too would probably constitute conceptual competence injustice, since it is unlikely that a white man who disagreed with a woman of color about whether Malcolm X was a racist is her epistemic peer concerning the concept of racism. A white person who was adequately conscious of the relevant social and historical facts involved in the exchange would be disposed to defer to the speaker’s use of “racist,” to stand corrected in his use of that term, rather than being disposed to question her use or disposed to infer that she is conceptually incompetent.

The harmful mistake made in this exchange can also be rendered within relevance theory as a failure to have the proper interests. If the white man were primarily interested in learning about racism rather than in maintaining and defending his own beliefs about racism, he would not be disposed to pragmatically infer that the woman was conceptually incompetent. He would be disposed to infer that he was incompetent! The role that interests play in shaping discourse around social justice is very important for a theory of epistemic oppression. What interlocutors pragmatically infer crucially depends on their purposes for engaging in conversation. Relevance theory is especially virtuous in its capacity to model this aspect of conversations about race and other dimensions of social justice.

Relevance theory allows us to understand some of the mechanisms through which conceptual competence injustices proliferate. I have elaborated on Padilla Cruz’s suggestion to develop relevance theory by incorporating the notion of conceptual competence injustice. Certainly there are more uses to which relevance theory can be put in modeling conceptual competence injustice than I have touched on here. These would be worth exploring at greater length. It is also clear that relevance theory and other approaches to linguistic pragmatics have much to contribute to our understanding of epistemic oppression more broadly conceived.

References

Anderson, Derek Egan. “Conceptual Competence Injustice.” Social Epistemology 31, no. 2 (2017): 210-223.

Collins, Patricia Hill. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. Routledge, 2002.

Dotson, Kristie. “Conceptualizing Epistemic Oppression.” Social Epistemology 28, no. 2 (2014): 115-138.

Gorayska, Barbara, and Roger Lindsay. “The Roots of Relevance.” Journal of Pragmatics 19, no. 4 (1993): 301-323.

Padilla Cruz, Manuel. “On the Usefulness of the Notion of Conceptual Competence Injustice to Linguistic Pragmatics.” Social Epistemology Review and Reply Collective 6, no. 4 (2017): 12-19.

Sperber, Dan, and Deirdre Wilson. Relevance: Communication and Cognition. Oxford: Blackwell, 1986.

Sperber, Dan, and Deirdre Wilson. Relevance: Communication and Cognition. 2nd edition. Oxford: Blackwell, 1995.

Wilson, Deirdre, and Dan Sperber. “Relevance Theory”. In The Handbook of Pragmatics, edited by Larry Horn and Gregory Ward, 607-632. Oxford: Blackwell, 2004.

Author Information: Stephen Kemp, University of Edinburgh, S.kemp@ed.ac.uk

Kemp, Stephen. “On Popper, Problems and Problem-Solving: A Review of Cruickshank and Sassower’s Democratic Problem-Solving.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 27-34.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3DO

Image credit: Rowman & Littlefield

Democratic Problem-Solving: Dialogues in Social Epistemology (2017), edited by Justin Cruickshank and Raphael Sassower, offers a thought-provoking take on a range of issues of dialogue, democracy and reasoning in the social sciences and beyond. Jana Bacevic (2017) has usefully summed up the orientation of the book in her review, and raises important questions about the relationship between epistemic democracy and liberal democracy that I do not, unfortunately, have any worthwhile answers to.

This review focuses instead on issues the book very helpfully raises about the modes of reasoning in natural science, social science and in society more generally. In particular I want to focus on the core notions of ‘problem’ and ‘problem-solving’ that are discussed in this volume, and will do so from a perspective that, as with some of the contributors, is sympathetic to the approach of Popper.[1] I will be reconstructing aspects of the discussion between Cruickshank, Sassower and Isaac Ariail Reed, and then suggesting one way it could be taken forward in relation to the concept of normativity.

Setting Problems

Let me start, then, with the question of ‘problems’ in the natural sciences and beyond. My initial observation would be that in Democratic Problem-Solving there is discussion of at least three ‘settings’ within which problems could be located—one is within the natural sciences, the second is within the ‘research problems’ of the social sciences, and the third is in society more generally. The general thrust of Cruickshank’s analysis is that the idea of problems and problem solving is applicable in all of these domains. In this respect he is following, and developing, the ideas of Popper and also those of John Holmwood, who has defended the importance of the concept of problem-solving for both natural and social scientific analysis (see e.g. Holmwood, 1996).

Of course the term ‘problem’ could be taken in different ways, and it will be useful to consider how Cruickshank uses it in his fascinating chapter ‘Anti-Authority: Comparing Popper and Rorty on the Dialogic Development of Beliefs and Practices’ which sets the agenda for the book. To explore this, let us start with Cruickshank’s account of Popper’s problem-solving epistemology:

For Popper (1963, 1972, 1999), if it is accepted that knowledge is fallible, then it follows that one should always seek out better interpretations and explanations of reality. To do this, existing solutions to problems in ethics, science, politics, and so on, need to be subject to criticism, with new solutions to the problems found then being subjected to criticism and eventually replaced by new solutions, in a never-ending critical dialogue (6).

What comes through in this quote, as I interpret it, is a focus on problematizing as much as on problems. That is to say, the encouragement here is to be oriented to critique and to perpetual overturning—to making things problematic. And this is consistent with Cruickshank’s orientation throughout the book which focuses very much on questions of critique and how one can avoid wrongly foreclosing criticism.[2] It should certainly be noted that Cruickshank does refer to a more specific usage of Popper’s, referring to the latter’s concern with “practical problems in our environment” such that when we resolve problems we have adapted successfully—temporarily—to this environment (6). However, this usage is rarely discussed beyond the core opening chapter, with the general treatment of problem being a sense of ‘something that has been problematized by certain actors’ (to put it in my own words).

What about the idea of a ‘solution’, or ‘problem-solution’? In Popperian terms, a solution could be seen as a successful ‘adaptation’ but, as mentioned, this idea does not receive extensive treatment by Cruickshank (or indeed other authors) in the book. The same is true of the idea that problem-solving has a connection to the pragmatist concern with ‘usefulness’ (7). The implication of that link seems to be that we will have more useful knowledge once a problem is solved, but this is not really taken further. Rather, the idea that is probably most extensively used in the book is the notion that problem-solving has the potential for ‘alleviating harm’ (xiii).

This provides a broad orientation to the debate insofar as much of the ensuing discussion is about the harms of neo-liberalism and how they might be responded to. However, it is doubtful that this could be used to account for what problem-solving in the natural sciences is about, and should probably be seen as one particularly important kind of problem-solving. It could be said, then, that what a problem-solution involves is left fairly vague in the book. In one sense this chimes in with the orientation of the discussion towards criticism and problematizing. Given the overall focus on open-endedness the very idea of a solution could be considered to be potentially suspect. A solution might be taken to imply a resting place, a stopping place, whereas the orientation that Cruickshank is promoting is precisely the opposite, a form of permanent restlessness.

Although problems and problem-solving are treated in this fairly broad, open-ended way by Cruickshank, Reed nevertheless expresses doubts about the value of these concepts in his well-argued chapter ‘Science, Democracy and the Sociology of Power’. Reed formulates particular concerns about whether it is justified to take the idea of ‘problem solving’ from the natural sciences and apply it elsewhere. In relation to social scientific knowledge, Reed questions whether the problem-solving framework associated with Popper’s thought will be able to cope with certain features of society such as the ‘looping kinds’ discussed by Ian Hacking or the ‘concept dependence’ discussed by Roy Bhaskar.

In relation to social problems, Reed has even greater concerns. For one thing, he points out that there is a large literature on the construction of ‘social problems’ which identifies the importance of selectiveness and framing in defining what is taken to be a problem in society. For another thing, Reed points out that the sort of scientistic orientation one may associate with Popper’s problem-solving can actually contribute to normatively doubtful social outcomes. That is to say, the invocation of the scientific status of expert judgements, e.g. where a psychiatrist’s expertise is used to characterise a type of individual as problematic in legal deliberations, involves a problematic exercise of authority.

Cruickshank’s response to Reed (‘Criticism vs Dogmatism’) is based on the idea that Popper’s thought can be divided into the dogmatic and the critical. For Cruickshank, the dogmatic Popper was inclined to fetishize aspects of science as exemplifying critical rationality and was not prepared to submit these to critical appraisal themselves. By contrast, the critical Popper would allow criticism free rein, including that directed at science and its existing methods. Cruickshank argues that the critical Popper can usefully address the issues raised regarding the distinctiveness of the social world and the framing of problems. We shall now examine each of these in turn.

In relation to the distinctive features of the social world, Cruickshank contends that whereas the dogmatic Popper might insist that a scientific analysis of the social world must involve the use of hypothetico-deductive reasoning, the critical Popper would allow that methodological tools and arguments are also up for criticism and revision. This would mean that for the critical Popper it could be perfectly appropriate to question the value of hypothetico-deductive reasoning in relation to the social sciences and replace it with other alternatives as appropriate, such as a focus on the qualitative investigation of meaning.

I would like to briefly mention here an alternative response that could be made to Reed’s critique, based on the work of John Holmwood and Alexander Stewart. Their Explanation and Social Theory (1991) is a rich book which discusses many facets of sociological thought, but one of its key arguments is that the idea of a fundamental difference between natural and social science is based on a problematic understanding of the role of meaning and practical activity in each. Once this understanding is rejected, there are much greater continuities than notions like ‘concept-dependence’ or the ‘double-hermeneutic’ might suggest. For Holmwood and Stewart, problem-solving can be undertaken perfectly consistently across the social and natural sciences. I do not have space to say more about it here, but the approach of Explanation and Social Theory is certainly worthy of attention.

Normative Framing

Let’s move on, then, to Cruickshank’s response to the issue of social problems and their framing. Cruickshank’s key move is to clarify that his approach to problem-solving is entirely consistent with the idea that problems are normatively framed. Indeed, Popper himself, in his critical mode, admitted this. Cruickshank states that:

…any proper recognition of the role of intersubjective norms entails the need to study how intersubjective norms have, and will, shape what are perceived as problems and what are perceived as solutions (86).

This emphasis on the importance of framing and normativity in relation to problems and solutions also seems to be accepted by Sassower who, in a later chapter, discusses their importance:

The reason to focus on frames of reference has already been fully articulated by sociologists, behavioural economists and psychologists: the way a problem is framed predetermines the range of possibilities for its solution (197).

Thus, Cruickshank’s response to Reed’s challenge is to readily admit that problems and solutions are normatively framed, and Sassower seems to agree with this.[3]

Cruickshank’s responses to Reed allow him to defend the idea that ‘problem-solving’ can be usefully retained across the domains of natural science, social science and wider social life, because it has shed narrowly scientistic connotations, instead being connected with permanent open-ended critique and an up-front (rather than concealed) normative orientation. I find these arguments valuable and persuasive, but it seems to me that the idea of normativity can be analysed further in a way that articulates with, and develops a little further, what a Popperian orientation to problem-solving might entail. This is the approach that I want to follow in the remainder of this review.

A typical sociological concern with normative framing involves an argument that we need to identify cases where this has been concealed and naturalized, with the intention of showing that other framings are possible. And, indeed, this kind of point is explored in Democratic Problem-Solving (e.g. 87). However, a somewhat trickier issue is to then analyse how to decide between one framing and another, once the range of possibilities is before us. One way to treat this—which could be seen as Weberian—is to see the choice of frame as a commitment in some fundamental sense.

On this approach there is no way to assess normative frames, there can be no reasoned argument for one rather than another—rather, one just has to commit to a frame and work on this basis. It’s not obvious to me that any of the participants of this volume accept this view and I would say that there are good reasons for not doing so. After all, if what a person takes to be a problem is a matter of commitment then it’s not at all obvious why anyone else should be moved by it. What is a problem in my framing can be a boon in your framing and there is nowhere further to go in the discussion. This view gets even less appealing if we take it through to the question of problem solutions.  It suggests that even if we share a view of the problem, our normative commitments may operate such that what seems a very good solution to me seems a very bad solution to you with there being no way for reasonable discussion to impact upon the disagreement.

As already mentioned, I don’t see the authors of Democratic Problem-Solving explicitly adopting the ‘commitment’ view of the normative framing of problems and solutions. But is there an alternative expressed? I think Cruickshank does put forward another way of looking at this issue. He states:

The terms used to define problems—which will always be normative with those norms always having traction—will need to be assessed through the democratic co-production of knowledge, taking time, to work with many agents to change values and reframe problems (88-89).

Although there is disagreement between Cruickshank and Sassower in the volume about whether the latter’s views have elements that stifle a democratic orientation, at least in parts of his argument Sassower also seems committed to such a view. He states the following of the Popperian approach:

Perhaps the main lessons from this way of thinking about solving problems are that we should listen as much as we talk, that we should read more than we write and that we should consider global options when choosing local policies (238).

I agree with both writers that the democratic co-production of knowledge is a laudable idea and is valuable to pursue. However, I wonder if it can be usefully supplemented by a further sense of what is involved in debating about problems and problem-solutions. One reason for doing this is to try to think about what engaging with others might involve. After all, even though democratic, open discussion is surely welcome, there is a question of how to engage in this discussion in a way that neither unreasonably imposes on others nor simply submits to their framings. The contributors to this volume clearly all have views about what is problematic and not problematic in contemporary society. Assuming that they are not all speaking for democratic co-produced collectives it could be useful to think about how they formulate what they see as problematic and how that can be related to the views of others.

In a debate with others, how can we think about engaging with different framings without either imposing a perspective or resorting back to the notion that the choice of framings is a matter of commitment? Take for example a topic which is debated in a very interesting way within the volume, neo-liberalism. How can there be a reasonable discussion between a critic of neo-liberalism who sees the problem of people in poverty as one of a failure of the state to intervene sufficiently and an enthusiast for neo-liberalism who sees the problem as the failure of the state to get out of the way and let people look after themselves?

Popperian Problems and Problem-Solving

I want to suggest that there is a broadly Popperian way to expand on the notion of problems and problem-solving which can make a useful contribution to thinking about engagement with those who have different framings to us. To begin with, as Cruickshank points out (13), for Popper and his followers contact with the world is not direct, rather we interact with it through a theory/set of understandings. ‘Framings’ will be a crucial part of these understandings. The question is, then, how to have a reasonable engagement with those who do not start from the same set of understandings/framings as we do.

This is where the concept of problem is useful, in my view. Within the work of Popper and his followers there is a strong emphasis on the way in which no attempt to understand and frame the world is able to produce a fully consistent account of all known relevant evidence. In other words, there is a strong focus on anomalies, on that which does not fit with a particular framing of the world. Although it would be questionable to argue that this is the only meaning that Popper gives to the idea of a ‘problem’, it is, in my view, a core meaning that is central to The Logic of Scientific Discovery (2005 [1934]) and is also taken up by writers like Lakatos (1970) in analysing the natural sciences and Holmwood (1996) in analysing the social sciences. Furthermore, this can also provide us with one way of thinking about what a ‘problem solution’ involves—the reconstruction of a particular framing/set of understandings of the world to remove an anomaly and produce a more coherent[4] take on the subject-matter. Of course, in keeping with Cruickshank’s remarks about continuous criticism, the removal of an anomaly is not a final resting point for the defender of a framing/set of understandings. There will always be new anomalies to reflect on and wrestle with.

In my view, these Popperian ideas of problems as anomalies and solutions as coherence-expanding reconstructions give us one helpful way of thinking about how to have a critical but non-impositional dialogue with those who frame social (and other) problems in different ways (for further discussion see Kemp, 2012). This is to engage with the framings of others and try to identify what is anomalous from within the way the other is presenting it rather than attempting to simply impose a contrary framing. Taking this further, a participant in the dialogue might also argue that the identified anomaly could be resolved if the person whose views they are critiquing reconstructed their framing in a way that was consistent with the first participant’s own views. To give an example of this kind of approach, a critic of neo-liberalism might argue that poverty cannot be avoided simply by the state getting out of the way because there are countries where the state offers very little if any support and yet there is still grinding poverty. In such an argumentative move, these examples are being presented as an anomaly to the neo-liberal viewpoint. The critic could go on to argue that there have been cases where impoverished groups were supported by the state in a way that actually provided them with the capacity to then look after themselves. This would cast doubt on the opponent’s views and suggest another way to look at the issue.

It would be foolish of me to suggest that any politically engaged actor would be quickly won over by such arguments. In that respect, I find Cruickshank’s concept of ‘critical slow dialogues’ a very persuasive one. As Cruickshank usefully observes:

People may be emotionally, ethically and politically committed to their ideas, as well as under political or institutional pressure to support certain sets of ideas (36).

As such, change may well take time. Of course, dialogues are also two way, and an interlocutor is likely to hit back that the critic’s own position contains anomalies, laying down a—reasonable—challenge that these need to be addressed. In this way, engagements of this kind are two-way and provide challenges to both parties.

Although we cannot expect speedy results, this way of thinking about problems and problem solutions may contribute to understanding how to have a critical engagement without this involving either an under-motivated choice between framings or the imposition of an alternative viewpoint. It is worth noting, I think, that in using the ideas of problem/anomaly and problem-solution in this way I am not denying the normativity of the framings of actors. What I am denying, though, is that normativity involves a commitment that is untouchable by reasoning processes. Normatively-shaped claims generate anomalies which can be critiqued.

This review has surely gone on long enough, so I will just briefly recap the main thrust of it to conclude. The animating issue of the review was how the notions of ‘problems’ and ‘problem-solving’ were addressed and debated within Democratic Problem-Solving. I was sympathetic to Cruickshank’s view that these notions can usefully be applied in the natural sciences, the social sciences and to wider social issues as long as the role of normativity is admitted. However, I argued that the idea of normativity could usefully be further explored to help think through the character of dialogue and criticism. I made some initial arguments in this direction, including the suggestion that connecting problems with the idea of anomalies provides a ground for critical appraisal of normative framings. This allows us to avoid seeing such framings as either commitments outside the realm of reason or impositions on others. I see the arguments made in this review as sketching out a further way to extend the kind of Popperian orientation that Cruickshank and Sassower defend very nicely in Democratic Problem-Solving.

References

Bacevic, Jana. “Solving the Democratic Problem.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 50-52.

Cruickshank, Justin and Raphael Sassower, eds. Democratic Problem-Solving: Dialogues in Social Epistemology. London: Rowman & Littlefield, 2017.

Holmwood, John. Founding Sociology? Talcott Parsons and the Idea of General Theory. London: Longman, 1996.

Holmwood, John and Alexander Stewart. Explanation and Social Theory. London: Houndmills, 1991.

Kemp, Stephen. “Evaluating Interests in Social Science: Beyond Objectivist Evaluation and the Non-judgemental Stance.” Sociology 46, no. 4 (2012): 664-679.

Kemp, Stephen. “Transformational Fallibilism and the Development of Understanding.” Social Epistemology 31, no. 2 (2017): 192-209.

Lakatos, Imre. “Falsification and the Methodology of Scientific Research Programmes.” In Criticism and the Growth of Knowledge, edited by Imre Lakatos and Alan Musgrave, 170-196. Cambridge: Cambridge University Press, 1970.

Popper, Karl. The Logic of Scientific Discovery. London: Routledge, 2005 [1934].

[1]  Perhaps this sympathy arose, in part, because I grew up in New Zealand ‘of all places’ (Sassower, 28).

[2]  For Cruickshank, criticism can be foreclosed in various ways including the treatment of knowledge as ‘justified’, the invocation of ‘authority’ to support a knowledge-claim, and the presentation of solutions as ‘technocratically’ necessary.

[3]  Insisting that normative framing is made clear is also a way to stop the kind of unproblematized reliance on expertise that Reed discusses drawing on Foucault’s work.

[4] There are some important challenges in spelling out what a more coherent response involves, and I have doubts about the way that Popper and Lakatos deal with this issue. I have a go at an alternative in Kemp (2017).

Author Information: Ryan D. Tweney, Bowling Green State University, tweney@bgsu.edu

Tweney, Ryan D. “Commentary on Anderson and Feist’s ‘Transformative Science’.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 23-26.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Dx

Please refer to:

Image credit: Phylogeny Figures, via flickr

Traditionally, historic transformations in science were seen as the products of “great men”: Copernicus, Newton, Darwin, or, in the modern era, Einstein and Marie Curie. It was “genius” that propelled science to new levels of achievement and understanding. Such views have fallen out of favor as the collective efforts that go into scientific advances have come to be recognized, a change in perspective often attributed to Thomas Kuhn.

“Transformative Science” is a new phrase, now used even by funding agencies as one of the criteria for worthy projects. Barrett Anderson and Gregory Feist (2017), however, note how fuzzy the term has been and offer something like a definition. Transformative science, they suggest, is science that leads to a new branch on the “tree of knowledge.”

This is not a true definition, of course, since it is based upon a metaphor, one which is itself only fuzzily defined. Anderson and Feist note that the tree metaphor has been formalized in biology via cladistics. The present paper seeks to extend something similar to the domain of research evaluation. As with cladistics, if formal tools can be developed to measure aspects relevant to the growth of knowledge in science, then it may be that we will advance toward an understanding of transformative science. They thus propose a method for measuring the influence of a given, highly-cited, paper in a way potentially leading to the goal of identifying truly transformative results.

Plotting Generativity

Anderson and Feist’s exploratory study focused upon a single year of publication (2002) from a single field (psychology), randomly selecting some 887 articles that were among the top 10% of most highly cited articles. They then looked at the articles that had cited these 887, identifying those that were themselves among the most cited. They then developed a “generativity score” for each of the original articles. In effect, among the 887 articles, they singled out those that had generated the highest numbers of highly cited articles. Each of the 887 articles was then examined and coded for funding source.
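
To make this measure concrete, here is a minimal sketch in Python of a generativity count of this kind, assuming the citation data are available as a mapping from each focal article to the articles that cite it; the cutoff, data structures and function name are my own illustrative choices rather than Anderson and Feist’s actual procedure.

from typing import Dict, List, Set

def generativity_scores(cited_by: Dict[str, List[str]],
                        citation_counts: Dict[str, int],
                        highly_cited_cutoff: int) -> Dict[str, int]:
    """For each focal article, count how many of its citing articles
    are themselves highly cited (a rough 'generativity' score)."""
    highly_cited: Set[str] = {a for a, n in citation_counts.items()
                              if n >= highly_cited_cutoff}
    return {article: sum(1 for citer in citers if citer in highly_cited)
            for article, citers in cited_by.items()}

# Toy usage: article "A" is cited by three papers, two of which are
# themselves highly cited, so its generativity score is 2.
cited_by = {"A": ["P1", "P2", "P3"], "B": ["P4"]}
citation_counts = {"P1": 120, "P2": 95, "P3": 3, "P4": 2}
print(generativity_scores(cited_by, citation_counts, highly_cited_cutoff=50))
# {'A': 2, 'B': 0}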

Descriptively, both generativity and times cited were heavily skewed (Figures 6, 140, and 8, 141), leading the authors to carry out a log transformation of each (Figures 7 and 9, 141), in an attempt to normalize the distributions. They claim that this was successful for the generativity scores, but not for the number of times cited. But note that the plots are severely misleading. Since there are 887 articles in the sample, and the number of points on each graph is far smaller, it must be the case that multiple articles are hidden within each of the plotted points. Is it the case that the vast majority of the articles are somewhere in the middle of each distribution? At the lower end? At the upper end? If so, the claim that generativity was successfully normalized is suspect. This is even apparent from the graph (Figure 7, 141) which, while roughly bell-shaped (as far as the outer “envelope” of points is concerned), clearly must have a large majority of points that share the same value. Since the mean and median of “G log 10” (see Table 4, 140) are reported as roughly equal at around 1.0, these shared points must be at the lower end of the scale (below an untransformed generativity score of 10). A better plot, with the individual points “jittered” to separate them might then make the claim of approximate normality more convincing (Cleveland 1985).
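
For illustration, the following short Python sketch shows the kind of jittered display Cleveland recommends; the data here are invented stand-ins for the skewed generativity scores, since the original values are not available.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Invented stand-in data: a heavily skewed distribution with many tied
# low values, mimicking the overplotting problem described above.
g_log = np.log10(rng.negative_binomial(1, 0.15, size=887) + 1)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3), sharex=True)
# Unjittered strip plot: articles with identical values collapse onto one point.
ax1.plot(g_log, np.zeros_like(g_log), "o", alpha=0.2)
ax1.set_title("unjittered")
# Jittered strip plot: small random vertical offsets separate coincident points.
ax2.plot(g_log, rng.uniform(-0.3, 0.3, size=g_log.size), "o", alpha=0.2)
ax2.set_title("jittered")
plt.show()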

Similar considerations applied to the times cited plots suggest a different distribution, though still far from normal, whether in raw scores or log transformed scores. Is it a Poisson distribution? Clearly not, since, in a Poisson, the mean and variance should be roughly equal. This is far from the case, whether raw scores or transformed scores are used.
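
A quick way to check this, sketched below with invented count data, is the variance-to-mean ratio, which is approximately 1 for a Poisson distribution and much larger for over-dispersed citation counts.

import numpy as np

def dispersion_ratio(counts):
    """Variance-to-mean ratio: roughly 1 for Poisson data, far above 1 otherwise."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(1)
poisson_like = rng.poisson(30, size=1000)                   # mean roughly equals variance
skewed_counts = rng.negative_binomial(1, 0.03, size=1000)   # long right tail
print(round(dispersion_ratio(poisson_like), 2))   # close to 1
print(round(dispersion_ratio(skewed_counts), 2))  # much greater than 1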

The nature of the distribution matters here because Pearson r was used to determine the relationship between generativity and times cited. But Pearson’s statistic is only appropriate for determining the linear relationship between two variables with a bivariate normal distribution. Anderson and Feist report the correlations as r = 0.87 for G and TC and 0.69 for G log10 and TC log 10. This strikes me as meaningless, especially if there are large numbers of low generativity points masked by the lack of jittering (as suggested above). From the similarly unjittered scatterplots (Figures 10 and 11, 142), which look superficially more-or-less linear, the points at the lower end look to be unrelated. This suggests that a small number of points at the upper end are pulling the regression line upwards, a possibility that recalls “Anscombe’s Quartet” (Tufte 2001, 14), a set of four relationships that each show a Pearson correlation of +0.82, but which are wildly different (see Figure 1 below).
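
Anscombe’s Quartet itself makes the point compactly; the classic published values below all yield a Pearson r of about 0.82 despite wildly different shapes.

import numpy as np

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}
for name, (x, y) in quartet.items():
    print(name, round(np.corrcoef(x, y)[0, 1], 2))  # each dataset prints 0.82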

Similar problems with non-normal distributions may affect analysis of the relationship between funding source, generativity, and times cited. In any case, these relationships are incredibly small—among the reported eta-squared values, the largest is only 0.014. Whether or not the result is significant is not the issue; a relationship between variables that accounts for only 1.4% of the variance is too small to be of practical significance. The best conclusion to draw from these data is that there is no relationship between funding source (or its absence) and either generativity or times cited.

Ways to Look at the Data

Anderson and Feist have, of course, given us an exploratory study, so statistical and graphic nitpicking is not the main point. Instead, the real value of the study has to lie in the directions it points and the issues it raises. What they refer to as the “structure” of citations is an important aspect of scientific literature and, indeed, one that has been overlooked. Their operational implementation of generativity is potentially important, and it suggests a number of new ways to look at their data. In particular (and in the spirit of seeking to move toward a true recognition of transformative science), more attention needs to be paid to the extreme outliers in their data. Thus, both generativity and times cited show two (or more?) points at extremely large values in Figures 6 (140) and 8 (141). Are these the same two papers (assuming there are only two), as suggested by the scatterplot in Figure 10 (142)? And what are they and where did they appear? What can be said about their content, the content of the citing articles, and about the purposes for which they were cited? If they are methodological contributions, instead of articles that report a new phenomenon, we might draw different lessons from their structural extremity.

Many other questions could be raised using the existing data set. Is there a relationship between generativity and the lag in citations? That is, are highly generative articles more likely to show citations increasing over time, as one would expect if the influence of a generative article is to generate more research (which takes time and sometimes funding), rather than simply nods to something interesting? Or, similarly, what does the “decay” curve of citations look like? One might find large differences, even among relatively low generativity articles, in their “half life,” thinking perhaps that truly generative articles have a longer half-life than even highly cited, otherwise seemingly generative, articles. There is a great deal more to be learned here.

Since this is an exploratory study, it would also make sense to use exploratory data analysis (Tukey 1970) to search for structural patterns in the data set. For example, one could plot the relation between generativity and times cited by dividing the generativity data by deciles and looking at the distribution of times cited for each decile; if the middle ranges of generativity had approximately bell-shaped distributions of times cited, then Pearson correlation coefficients might be appropriate for quantifying the middle range of the relationship.
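
As a sketch of that suggestion, with invented stand-in data in place of Anderson and Feist’s 887 articles, one could bin generativity into deciles and inspect the distribution of times cited within each bin:

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "generativity": rng.negative_binomial(1, 0.10, size=887),
    "times_cited": rng.negative_binomial(2, 0.05, size=887),
})
# Decile bins of generativity (duplicate bin edges dropped because of the many
# tied low scores), then a summary of times cited within each bin.
df["g_decile"] = pd.qcut(df["generativity"], 10, labels=False, duplicates="drop")
print(df.groupby("g_decile")["times_cited"].describe()[["count", "mean", "std", "50%"]])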

Finally, since the goal is to obtain information about the structure of citations (rather than simply their number), aggregate statistics like means, correlation coefficients, and the like seem to rather miss the point. For example, is it the case that highly generative articles have chains of subsequent citations that branch off when new articles citing them become themselves highly cited? If so, and given that non-generative articles by definition show simple “fan-like” citation patterns without branching, comparing the two would give a direct look at the structure of the network of citations.
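
One very simple way of operationalizing that contrast, sketched below with the networkx library and a toy citation graph of my own invention, is to count how many of a paper’s citers are themselves substantially cited:

import networkx as nx

# Directed edges point from a citing paper to the paper it cites.
G = nx.DiGraph()
G.add_edges_from([
    # A "fan": many papers cite X, but none of those citers is itself cited.
    ("c1", "X"), ("c2", "X"), ("c3", "X"), ("c4", "X"),
    # A branching chain: Y is cited by M, and M is itself heavily cited.
    ("M", "Y"), ("d1", "M"), ("d2", "M"), ("d3", "M"),
])

def branching_citers(graph: nx.DiGraph, paper: str, threshold: int = 2) -> int:
    """Count citing papers that are themselves cited at least `threshold` times."""
    return sum(1 for c in graph.predecessors(paper)
               if graph.in_degree(c) >= threshold)

print(branching_citers(G, "X"))  # 0: fan-like, no branching
print(branching_citers(G, "Y"))  # 1: at least one highly cited descendant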

At the end of the article, Anderson and Feist make a number of suggestions for further research, all of which suggest gathering more data. These are welcome suggestions and should indeed be pursued, even though, as they acknowledge, truly transformative science must ultimately await the judgment of history. In the meantime, I hope that this intriguing contribution can be further strengthened, expanded, and subjected to further exploratory analysis.

References

Anderson, Barrett R. and Gregory J. Feist. “Transformative Science: A New Index and the Impact of Non-Funding, Private Funding, and Public Funding.” Social Epistemology 31, no. 2 (2017): 130-151.

Cleveland, William S. The Elements of Graphing Data. Monterey, CA: Wadsworth, 1985.

Tufte, Edward R. The Visual Display of Quantitative Information (2nd ed.). Cheshire, CT: Graphics Press, 2001.

Tukey, John W. Exploratory Data Analysis. New York: Addison-Wesley, 1970.

Figure 1: Anscombe’s Quartet

Author Information: Dorte Henriksen, Aarhus University, dh@ps.au.dk

Henriksen, Dorte. “Reply to Liberman and López Olmedo’s ‘Psychological Meaning of “Coauthorship” Among Scientists’.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 20-22.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Df

Please refer to:

Image credit: aKaMu aDaM, via flickr

The concepts of co-authorship and research collaboration seem both straightforward and at the same time very complicated. Several studies have emphasized that co-authorship practices differ depending on the research field, and this needs to be taken into consideration when examining research communication, collaboration and productivity. The study by Sofia Liberman and Roberto López Olmedo (2017) is a great contribution to the examination and discussion of these concepts. I will discuss their study based on my knowledge from the fields of scientometrics and sociology of science.

Measuring Co-Authorship

The first thing to note is that the authors state that there are different approaches to how scientific studies measure co-authorship, without, however, referring to such studies. The co-authorship studies that have come to my attention measure co-authorship through the list of authors named in the byline of the same publication; this creates a quantitatively measurable link or bond between these researchers. Therefore, the measuring of co-authorship is straightforward.
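
As a concrete illustration of this measurement, here is a minimal Python sketch, with invented author names, in which a co-authorship link is simply a pair of names sharing a byline and the count of shared bylines gives the strength of the link:

from itertools import combinations
from collections import Counter

# Each publication is represented by its byline (the list of author names).
bylines = [
    ["A. Smith", "B. Jones"],
    ["A. Smith", "C. Lee"],
    ["A. Smith", "B. Jones", "C. Lee"],
]

links = Counter()
for authors in bylines:
    # Every unordered pair of authors in a byline counts as one co-authorship link.
    for pair in combinations(sorted(set(authors)), 2):
        links[pair] += 1

print(links)
# e.g. ("A. Smith", "B. Jones") appear together in two bylines, so that link has weight 2.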

The problem arises when research collaboration comes into measurement, since scientific studies use different ways of measuring research collaboration. Liberman and López Olmedo point out how “different types of collaborations entails different types of relationships between researchers (…).” Hence, it is problematic to equate co-authorship and research collaboration. Earlier bibliometric studies had a tendency to use co-authorship and research collaboration as synonymous concepts, and some studies still do. The common argument for using co-authorship as a proxy for research collaboration is that it is the best available indicator for quantitative studies of research collaboration, and co-authorship is a reflection of some kind of collaboration.

Hence, co-authorship is what links researchers together through the publication in the formal, or “outer”, cycle of the science communication system. Researchers become authors through this link and establish themselves in both the science communication and reward systems. The scientific norms in these systems assume that it is possible to identify and assign the individual intellectual responsibility for a piece of scientific work (Biagioli and Galison 2003). The tendency to co-author publications and the general rise in the average number of authors make it more complicated to identify the individual contribution, and make it necessary to discuss the concept of co-authorship. Not to mention how this should be incorporated into an examination and discussion of how different epistemic cultures affect researchers’ practices of co-authorship.

Liberman and López Olmedo’s study brings out interesting aspects of how scientists conceptualize co-authorship, but it is not surprising that research collaboration and teamwork are among the most used words to define co-authorship for all fields, except for chemistry. If one examines co-authorship studies, the same words would probably appear as definer words of such studies. However, the remaining definer words confirm many suggestions from previous studies (e.g. Corley and Sabharwal 2010, Lee and Bozeman 2005). Not to mention that many of these words correspond to the Mertonian norms of science (Merton 1973) and focus on how collaboration and co-authoring improve the research. For example, Sharing ideas and work, Richer ideas, and Ideas enrichment all focus on the synergy that collaboration adds to the research process. The words Learning and Knowledge fit the idea that collaborations occur because of needs for experts and for educating junior researchers.

A Broader Scope

At the same time, one could question why the researchers use the words Active Participation, since in an ideal world one would expect all collaborators to be actively involved in the research process. Could this be an indication of problems with gift or career authorship? On the other hand, is it just a way of being inclusive, so that all who contributed to the research project are included as co-authors? It could be interesting to go deeper into the meaning of these words and discuss how they reflect certain research cultures. Similarly, the words More publications and productivity correspond to the often-mentioned problems with publication pressure, where researchers need to spread their work across multiple projects to avoid periods with zero publications.

The results of Liberman and López Olmedo’s study would benefit from a greater discussion of, and references to, science communication and sociology of science studies, especially of the included research fields: physics, mathematics, the biological sciences and chemistry. For example, multiple studies have concluded that the extent of research collaboration differs across subject fields, and thus it affects researchers’ perceptions and practices of co-authorship. Birnholtz (2006) interviewed researchers from CERN and found that in high-energy physics the practices of co-authorship do not correspond to the traditional model of authorship. The authorship itself was a reward for working on a common project, and some admitted they had not even read the articles they were co-authoring.

Lariviere et al. (2016) examined the authorship contribution statements for PLOS ONE articles, and found differences in co-authors’ contributions depending on their field and rank. Physics has an egalitarian view of authorship, so all the authors are given credit for all kinds of contributions, while biomedicine and clinical medicine have a greater division of labor and a greater focus on individual reward. Chemistry is intermediate: as in the life sciences, the execution and analysis of technical and experimental work is highly associated with authorship, but often in association with another type of task (writing, designing experiments, etc.). Mathematics is a vastly theoretical field with fewer authors on research articles, which is reflected in contribution statements giving equal credit and responsibility for all parts of the research article. These studies of authorship and scientific cultures correspond very well with the results of the Liberman and López Olmedo study, and they enhance the validity of their study.

I was surprised by the results of the study by Liberman and Galán Díaz (2005) of the concept of international collaboration, where the findings show that researchers did not mention co-authorship as a main concept. Therefore, I checked their study and found that international publications were among the main concepts, and, while I might be over-interpreting, the link from international collaboration to international publication is co-authorship. Thus, co-authorship is definitely among the mentioned concepts, and in association with the main concepts. Therefore, I think it could be interesting to repeat a similar study using the stimulus word collaboration to explore how researchers conceptualize research collaboration and whether they immediately think of co-authorship. Such a study would also reveal to what extent there is variability between the definer words for co-authorship, international collaboration and collaboration, as well as between different research fields. There would be a great possibility to do a comparative study based on the collected interview and survey data of these concepts and explore to what extent these concepts semantically overlap.

References

Biagioli, Mario, and Peter Galison. Scientific Authorship: Credit and Intellectual Property in Science. New York: Routledge, 2003.

Birnholtz, Jeremy P. “What Does it Mean to be an Author? The Intersection of Credit, Contribution, and Collaboration in Science.”  Journal of the American Society for Information Science and Technology 57, no. 13 (2006): 1758-1770.

Corley, Elizabeth A. and Megina Sabharwal. “Scholarly Collaboration and Productivity Patterns in Public Administration: Analysing Recent Trends.”  Public Administration 88, no. 3 (2010): 627-648.

Lariviere, Vincent, Nadine Desrochers, Benoît Macaluso, Philippe Mongeon, Adèle Paul-Hus, Cassidy R. Sugimoto. “Contributorship and Division of Labor in Knowledge Production.”  Social Studies of Science 46, no. 3 (2016): 417-435.

Lee, Sooho and Barry Bozeman. “The Impact of Research Collaboration on Scientific Productivity.”  Social Studies of Science 35, no. 5 (2005): 673-702.

Liberman, Sofia, and Carlos Roberto Galán Díaz. “Shared Semantic Meaning of the Concept of International Collaboration among Scientists.” Journal of Information Management and Scientometrics 2, no. 2 (2005): 27-34.

Liberman, Sofia and Roberto López Olmedo. “Psychological Meaning of ‘Coauthorship’ Among Scientists Using the Natural Semantic Networks Technique.”  Social Epistemology 31, no. 2 (2017): 152-164.

Merton, Robert K. The Sociology of Science: Theoretical and Empirical Investigations. Chicago: The University of Chicago Press, 1973.

Author Information: Tommaso Bertolotti, University of Pavia, Italy, bertolotti@unipv.it

Bertolotti, Tommaso. “Science-Like Gossip, or Gossip-Like Science?” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 15-19.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3D0

Please refer to:

Image credit: Megan, via flickr

Richard Feynman is credited with having once quipped that “philosophy of science is as useful to scientists as ornithology is useful to birds.” Feynman, a Nobel prize laureate, was once called the smartest man on Earth. Hearing this, his mother said, “Then I don’t want to think about the others.”

Ornithologists normally won’t teach birds how to fly. But they can tell the rest of us how to protect birds and how to make our activities less harmful to them. They can teach us how to tell a bat from an owl, a butterfly from a hummingbird, and also how to determine whether what is soaring above us is a hawk or a drone. Ornithologists, teaming up with engineers and medical scientists, can directly affect birds by recreating broken beaks, patching up wings, or even teaching them to fly again. None of these things could be achieved without the knowledge carefully gathered by ornithologists.

Today, the analogy between epistemology and ornithology is all the more relevant. Science is under attack. Research is being cut almost everywhere in the North-Western Hemisphere. Where it is not heavily cut, it is arguably managed by leveraging the Matthew Effect[1] in ways that hardly encourage, for instance, the diversification of Science. Science needs philosophy of science. Philosophy, especially in its epistemological avatar, needs to protect the formidable adversary it was once challenged by. A stern claim that radically collaborative science shows no cause for epistemic alarm seems a quick way of washing one’s hands, thus endorsing Feynman’s mockery.

The fact that there is no cause for epistemic alarm, Søren Klausen (2017) concedes, does not imply that there is no cause for alarm at all. There can be pragmatic, social, and political causes for alarm (although the author seems to take them quite lightly). But what if the pragmatic, social, and political causes for alarm are real, and this in turn has alarming epistemic effects?

Alarmed By Radically Collaborative Science?

Let’s consider gun control: we can say that it’s pointless to engage in moral discourse about weapons because weapons have no volition and, ultimately, it’s people as moral subjects who kill through them. Would it be reasonable to hold that guns cause no moral alarm on such grounds? In spite of their being morally neutral, if certain things cause morally alarming outcomes, then they give reason to be morally alarmed.

Klausen, against the so-called Georgetown Alarmists, maintains that there is no reason to be alarmed about radically collaborative science. The Georgetown Alarmists, conversely, present an epistemically alarmed view and issue an epistemologically negative judgement on radically collaborative science. In my view, being alarmed does not entail issuing a negative judgement. If I hear a huge ruckus coming from the ground floor at night, I have the right to be alarmed. It would be reasonable for me to go down and check. I am not necessarily justified in going down and blindly firing a full round in the dark. I am epistemically alarmed about radically collaborative science, if only for the pragmatic consequences that seem likely to affect the epistemic procedures of Science. Considering that there is reason to be epistemically alarmed amounts to saying “Hey, there is something strange going on there. Let’s pull it over and have a closer look.”

By this, I’m not saying that we must condemn radically collaborative science on epistemic grounds, but we might be at the same time a little more careful before green-lighting it.

Let me start with a minor yet compelling reason why the non-epistemic effects of radically collaborative science might not be neutral from an epistemological point of view and hence cause some kind of epistemic alarm: citation indexes. Citation indexes are the mixed blessing of contemporary academia. Academics care about their h-index, their i10-index, their Scopus and their Google Scholar indexes. Like it or not, right or wrong, these indexes rule the academic job market throughout the world. Even the humanities have adhered to them, producing indexes that are quite ridiculous compared to their scientific counterparts. How will radically collaborative science impact the indexing? How is this going to affect the job market? This is clearly not an epistemic trait, but since what is at stake are the next generations of scientists and academics, the next prime producers of episteme, this issue is somehow allowed to trigger some epistemic alert.

Let’s now consider some properly epistemic issues: one of the Georgetown Alarmists’ chief reasons for epistemic alarm is that scientific claims are not fully accountable in the framework of radical collaboration. In their view, an epistemically relevant feature of scientific knowledge is that it is accountable. Such accountability has traditionally been enforced in scientific practice, for instance in publications: if Black & White make a claim in a paper and that claim is proved right or wrong, they are held accountable for it. And if they are wrong because of some results they have received from Green and Brown’s paper, Green and Brown will be held epistemically accountable. Such epistemic accountability has tacitly regulated the publishing heuristic according to which it is better to publish fewer accurate papers than many shakier ones.[2]

Science has a kind of double standard concerning failures and drawbacks: in the finest Popperian spirit, falsifications are sought, any truth is provisional and errors illuminate the correct path. Science thrives through its mistakes, but scientists don’t. If you are the leader of an important research program and you got it wrong, you won’t be charged or prosecuted, but chances are you won’t get hired again. As said in the Gospel of Matthew (18:7), “[…] it is inevitable that stumbling blocks come; but woe to that man through whom the stumbling block comes”.

A radically collaborative paper may have dozens of authors, some extreme cases have hundreds, including honorary authorship attributions. The authors might even count the programmers who developed a certain data mining or machine learning software, and such software might be credited with a certain authorship, too. What kind of accountability is there left? Claims in the paper are shared between dozens or hundreds of subjects. It doesn’t make sense anymore to look for the accountable origin of this and that claim. This as far as authorship is concerned. What about the accountability relating to references to similarly collaborative outputs?

Epistemic Accountability and Gossip

Klausen interestingly argues that sticking to epistemic accountability as we’ve always known it is somewhat of an old-fashioned feat: however desirable it might be, and however much easier it may make things, it is not an epistemic must for the constitution of scientific knowledge. I say, “fine”. I don’t have any argument in favor of strong accountability. Still, if I accept with Klausen that accountability is a preferable feature that I can actually do without, and not a must, something makes me wonder: How can I tell science from gossip? Let me explain why.

The 1990s marked the turning point in gossip studies, especially as far as a new ethical evaluation of gossip was concerned. The edited book Good Gossip is one of the best examples of this new view. A feminist Peircean scholar, Maryann Ayim, contributed to this book with the essay “Knowledge through the Grapevine: Gossip as Inquiry”, in which she pointed out the epistemological resemblance between scientific inquiry and gossipy investigations in a social group.

Gossip’s model captures several aspects of Peirce’s notion of a community of investigators. Describing what he sees as the causes of “the triumph of modern science,” Peirce speaks specifically of the scientists’

unreserved discussion with one another … each being fully informed about the work of his neighbour, and availing himself of that neighbour’s results; and thus in storming the stronghold of truth one mounts upon the shoulders of another who has to ordinary apprehension failed, but has in truth succeeded by virtue of the lessons of his failure. This is the veritable essence of science” (CP7.51).

If Peirce is right that the unreserved discussions with one another are a cornerstone in the triumph of modern science, then gossip, by its very nature, would appear to be an ideal vehicle for the acquisition of knowledge. Gossips certainly avail themselves of their neighbours’ results, discussing unreservedly and sharing results constitute the very essence of gossip.[3]

Elaborating on the epistemology of gossip together with Lorenzo Magnani,[4] we unfolded Ayim’s seminal insight, showing how the inquiries of gossip can be modeled as abductions: abduction is the prime inferential structure to describe hypothetical reasoning, namely science. One cannot work on gossip without working on rumor, and we took from social epistemologist David Coady this intriguing distinction:

This is one difference between rumor and another form of communication with which it is often confused, gossip. Gossip may well be first-hand. By contrast, no first-hand account of an event can be a rumor, though it may later become.[5]

It can be said that what Coady is referring to is the accountability of gossip. Gossip is, in principle, accountable because gossip traces back to some eye-witnessed event and its inferential elaboration by a specific group of people with their specific background and so on (Bertolotti & Magnani, 2014). Why then do we have a problem with gossip, usually relating to the fact that gossip cannot be trusted because no one is accountable for it? Because there are two dynamics at play in gossip: the actual dynamic between peers, and the “projected”, subject-less dynamic concerning the group level. The projected subject is instantiated by sentences such as “Oh come on, everybody knows that Joe’s been cheating on his wife for ages!”. When we say things such as “everybody knows”, referring to our social group, we mean that those small idle exchanges that are the bricks of gossip have leveled up and become true rumors for the whole group. At group level, a true rumor is a fact. Gossip that makes it all the way up to become common knowledge entertains an ambiguous relation with accountability, akin to the one described by Klausen concerning radically collaborative science. The collaborators’ contribution is sublimated into the radically collaborative accountability, which is indeed different from the accountability in traditional, less collaborative science.

By these considerations, I am not smuggling in a bad company fallacy, suggesting that if radically collaborative science is like gossip then radically collaborative science is bad. I don’t have a strong opinion on radically collaborative science; I am careful. I am neither as hostile as the Georgetown Alarmists, nor as permissive as Klausen. I think that scientists might not have any choice but to go with the flow, but philosophers of science might accept the epistemic alarm, unravel it, and then decide whether such alarm was justified or not. There is no need to endorse it too quickly.

If we think of gossip in the evolution of human cognition, gossip is one of the most ancient forms of social cognition and social communication, to the point that some argue that language as an adaptation was selected for its capacity to afford gossip.[6]

Gossip is an extremely sophisticated tool for collective inferences, and as such it might have permitted the emergence of hypotheses taking into account multiple causality.[7] At the same time, human beings needed to go beyond gossip and find more rigorous methods to produce and secure knowledge, keeping gossip’s inferential power while improving its accuracy and predictive power at the same time. In a speculative vein, it can be argued that both justice and science developed by and for making people accountable for their claims. Sure, accountability has had its shortcomings (people have been forced to pay with their freedom or their life for some erroneous claim) but the other face of this coin is recognition. Recognition is not an epistemic value, but it is an epistemic drive.

We struggled our way out of gossip by making hypotheses and predictions accountable, and we got to science. Science shaped its own way of producing knowledge, which basically amounts to the world as we know it. Now, if we feel compelled to sacrifice traditional accountability at the altar of our challenges and our current means, it is maybe the epistemically right thing to do, but there is a need to be epistemically alert, to think it through, and so help us God.

References

Ayim, Maryann. “Knowledge Through the Grapevine: Gossip as Inquiry.” In Good Gossip edited by Robert F. Goodman & Aaron Ben-Ze’ev, 85–99. Lawrence, KS: University Press of Kansas, 1994.

Bertolotti, Tommaso and Lorenzo Magnani. “An Epistemological Analysis of Gossip and Gossip-Based Knowledge.” Synthese 191, no. 17 (2014): 4037–4067.

Bertolotti, Tommaso and Lorenzo Magnani. “Gossip as a Model of Inference to Composite Hypotheses.” Pragmatics & Cognition, 22 (2016): 309–324.

Coady, David. What to Believe Now: Applying Epistemology to Contemporary Issues. New York: Blackwell, 2012.

Dunbar, Robin. “Gossip in an Evolutionary Perspective.” Review of General Psychology, 8 (2004): 100–110.

Klausen, Søren Harnow. “No Cause for Epistemic Alarm: Radically Collaborative Science, Knowledge and Authorship.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 38-61.

Merton, Robert K. “The Matthew Effect in Science.” Science 159 (1968): 56-63.

Peirce, Charles Sanders. Collected Papers of Charles Sanders Peirce, Vol 7. Edited by Arthur W. Burks, CP7.51. Cambridge, MA: Harvard University Press, 1958.

[1] Robert K. Merton “The Matthew Effect in Science” (1968). The expression was coined by sociologist Robert Merton and it refers to a snowball-like, cumulative advantage of the good at stake. It is inspired by the ominous sentence in the Parable of the Talents in Matthew’s Gospel 25:29: “For to everyone who has will more be given, and he will have an abundance. But from the one who has not, even what he has will be taken away.” It is used to indicate how rich people get richer, famous people get more famous, and highly cited papers get even more citations.

[2] In certain disciplinary fields, characterized by lesser degrees of collaboration such as the humanities, the accountability principle is sometimes led to paroxysm, for instance when “the contribution of each author to the paper is to be clearly described.”

[3] Ayim, “Knowledge Through the Grapevine: Gossip as Inquiry,” 87.

[4] Bertolotti & Magnani, “An Epistemological Analysis of Gossip and Gossip-Based Knowledge.”

[5] Coady, “What to Believe Now,” 87.

[6] Dunbar, “Gossip in an Evolutionary Perspective.”

[7] Bertolotti and Magnani, “Gossip as a Model of Inference to Composite Hypotheses.”

Author Information: Val Dusek, University of New Hampshire, valdusek@aol.com

Dusek, Val. “Regarding Alternative Scientific Theories: A Reply to Pigliucci.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 10-14.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3CD

Please refer to:

Image credit: Otto Magus, via flickr

Massimo Pigliucci (2017) rejects Paul Feyerabend’s plea for pluralism as unneeded. I claim that there are many cases in which viable or valid alternative theories were rejected.

How much pluralism exists depends on what one counts as pluralism. Also, there is the difference between pluralism within an established field of science, such as professional physics, geology, or biology and pluralism involving theories outside the institutionalized science profession, such as holistic medicine, Chinese and Ayurvedic medicine, creationism and intelligent design, or various New Age conceptions.

There certainly have been limits on the permissible alternatives. This does not mean that Thomas Kuhn’s “paradigm monopoly” thesis is correct for all subfields of science. There have been periods in the history of science when competing theories openly divided the scientific community. Examples are the opposition between the wave and particle theories of light in the early nineteenth century and Wilhelm Weber’s action at a distance versus Maxwell’s field theory in electromagnetism in the late nineteenth century. However, pluralism has been suppressed in a number of fields during various time periods.

I give six examples of valid or at least respectable alternative theories in physics, biology, and geology, respectively, that were rejected and largely suppressed from discussion in print by the community of professional scientists. I also add the case of one New Age theorist considered beyond the fringe, but with impeccable scientific credentials.

On Felix Ehrenhaft

Feyerabend came to recognize the suppression of alternative views through exposure to two controversies. In Austria Feyerabend attended lectures of Felix Ehrenhaft, who questioned the claim that electrons have unitary charge, presenting experiments that he claimed exhibited electrons with fractional charges. His claims were dismissed, largely because the idea of fractional charges was thought ridiculous. (Decades later charges of 1/3 were found in quarks.) The standard experiments concerning the charge of the electron were those of Robert Millikan at Chicago. It later turned out that Millikan removed from his published reports observations that went against his thesis. One could say, in the manner of Michael Polanyi’s authoritarian and hierarchical view of science, that, as in the cases of Newton and Mendel, there may be fudging or worse going on, but that the genius and intuition of the great scientist gave prophetic insight.

The other area in which Ehrenhaft presented views that were rejected is his claim that there are magnetic monopoles, that is, particles with a magnetic north pole but not a south pole, or vice versa. Here again the notion of monopoles (though not Ehrenhaft’s macroscopic observations) came to be respectable, if not confirmed, with the recognition of the asymmetry in Maxwell’s equations of electromagnetism and the consequences of Paul Dirac’s equations for the relativistic quantum theory of the electron. Dirac derived the existence of monopoles in 1931, but this did not become a center of interest in post-1950 quantum field theory. In 1974 two physicists using gauge field theory independently derived monopoles as Moebius-strip-like twists in the relevant fiber bundles. When this field theory is combined with theories of the origin of the universe, one gets the prediction that in the early universe there was an asymmetry. Though no monopoles have been confirmed, the acceptance of the monopole is shown by physicists’ enthusiasm for several apparent observations, such as one in 1975, immediately after the 1974 derivation, and a Spanish one on Valentine’s Day 1982, registered while no one was in the lab. At my university, the discovery led the college of engineering and physics to call a public meeting to explain the observation of the monopoles, which shortly afterwards was discredited.

On David Bohm

The second controversy that led Feyerabend to point out the lack of pluralism in science is the case of the decades-long rejection of David Bohm’s alternative version of quantum mechanics with hidden variables. Bohm’s theory is based on a simple mathematical transformation of the standard Schroedinger equation and is logically unexceptionable, but physicists at the time generally rejected it as crackpot. Cushing’s fascinating Copenhagen Hegemony and Historical Contingency shows how de Broglie’s alternative, deterministic quantum mechanics was basically shouted down at the big Solvay meeting in 1927, and his approach was not revived until the 1950s with David Bohm and Jean-Paul Vigier. There were two major reasons for the discrediting of their views.

One was John von Neumann’s proof, or supposed proof, that deterministic hidden variables are impossible. Most working physicists hadn’t read the proof, and many chemists and solid-state physicists wouldn’t have understood it, given the highly abstract algebraic formulation of quantum theory in which it is couched. Yet it was taken on faith because of von Neumann’s superior ability in abstract mathematics. This was the case until John Bell reexamined it in the early 1960s, and even then it took until the 1970s and 1980s for Bell’s critique to sink in. Secondly, the Marxism of the two determinists helped discredit their approach.

At the Institute for Advanced Study Oppenheimer accused Bohm of being a “Trotskyite” (as the assembled physicists were mostly ex-orthodox Stalinists) and said “We will refute Bohm by not reading him.” Not all the rejection was that of Marxism. De Broglie was not a Marxist, but his original 1927 pilot wave theory was rejected. He kept silent for twenty-five years, until Bohm’s theory appeared, which gave him the courage to revive his original theory. This is just one example, and Bohm was the one whom Feyerabend defended in his debates with Hanson. Bohm’s alternative wasn’t developed by many until the 1990s and beyond. Even then it remains very much a minority view.

Examples From Biology

There are also examples in biology. One is the claim that there is genetic material in the mitochondria of the cell, distinct from the main genetic material in the nucleus. Initially, supporters of mitochondrial DNA were accused of being Communists. This was due to the perceived need for militant opposition to Stalinist, Lysenkoite biology. Lysenko, a non-scientist agronomist, was supported by Stalin because of initial success in improving wheat output and later bogus promises of improving all crops without lengthy, multi-generational breeding and selection of strains. Lysenkoism’s support by Stalin led to the suppression of Mendelian genetics in the USSR, including the Siberian exile or execution of recalcitrant geneticists. In reaction, Western biologists combatted any suggestion that the simplest nuclear and chromosomal account of genetic material was incomplete. Hence what was later to become accepted theory was rejected.

Another case in which an alternative theory was rejected is Barbara McClintock’s theory of mobile genetic elements or “jumping genes.” Although McClintock’s orthodox work on maize genetics and her meticulous experiments within it were respected, her claim that her experiments showed genetic material could migrate on or among the chromosomes was rejected. Ironically, her theory was accepted only three decades later, not because of her experiments and observations, but because of results in bacterial genetics within molecular biology. Evelyn Fox Keller’s account, involving, among other things, the rejection of McClintock’s work because she was a woman, and the rejection of her empathetic, non-dominant-manipulative way of doing science, has since been criticized by those who wish to defend the detached and objective portrait of science, such as Comfort. However, even he accepts much of the story of the rejection of jumping genes.

Continental Drift and Prehistoric Floods

In geology, a classic example of the rejection of a superior alternative theory is Wegener’s theory of continental drift. When I describe the account, scientifically accepted until 1967 and taught to me as a child and as a college student, of the spread and migration of prehistoric creatures such as dinosaurs and early mammals via narrow “land bridges” between the continents, such as one between Africa and South America across the Atlantic Ocean, my students laugh and wonder how such a ridiculous theory was ever believed. Yet the alternative theory of drifting continents was ridiculed and rejected for the first two thirds of the twentieth century. Only when radiometric evidence of sea floor spreading and subduction of continental plates was collected did Wegener’s theory become accepted. It has been claimed that Wegener’s lack of a physical mechanism for his process prevented geologists from accepting his theory. However, many of the theories accepted by paleontologists were founded not on physical mechanisms but on qualitative theorizing.

Another, less famous case of a valid alternative theory being rejected, despite much descriptive observation in its favor, is J Harlen Bretz’s theory of humongous prehistoric floods in the American West. Bretz argued in the early 1920s, on the basis of his observations in eastern Washington state, that there had been gigantic floods in the area of the Channeled Scablands. He noted that the desert landscape must have been sculpted by massive erosion from what he called the Spokane Flood. Massive ice dams had built up during the Ice Age around Missoula, Montana, and Spokane, Washington; when they burst they produced a vastly forceful flood over eastern Washington. For fifty-five years Bretz’s theory was rejected as crackpot by the geology profession despite respect for his other work. Again, it was only by 1967, on the basis of improved knowledge of glaciation and aerial photography of the scablands, that Bretz’s theory was accepted and praised and he received awards. (One may wonder why Wegener’s and Bretz’s theories were both finally accepted only in 1967. The standard account appeals to further empirical observations, but might the loosening up of thought during the radical and countercultural sixties also have played a part?)

Now the defender of the claim of tolerance within the scientific community may answer that all these alternative theories were eventually accepted, or at least deemed acceptable, even if several or many decades later.

Morphogenetic Fields

An interesting case of a New Age theory that has been totally rejected by mainstream science, and that is explicitly associated with the most far-out hippie conceptions, drug exploration, and shamanism, is Rupert Sheldrake’s theory of morphogenetic fields. Unlike much New Age or countercultural theory, this one is propounded by someone with top scientific qualifications. Sheldrake studied at Harvard, earned a PhD in biology at Cambridge University, was a fellow of Clare College, and did research on embryology.

Sheldrake has proposed the reality of “morphic resonance,” in which form is transmitted via action at a distance among crystals and biological organisms. One crystallographer of my acquaintance, who was rabidly opposed to New Age thought, surprised me by agreeing with Sheldrake and telling a story of such transmission of crystal shape in industrial crystal growing.

Again, unlike many New Age theorists, Sheldrake has suggested several simple, easy-to-perform experiments. One involves having people sit with their backs to the observer, with a baffle or board around them so that they cannot see behind them, and guess whether the person behind them is looking at them or not. Another is to place video recorders in the house of an owner who is away at work and whose dog has been left home alone. The experiment is to see whether the dog shows heightened activity and agitation when the owner, at her office, begins to get ready to commute home.

John Maddox, editor of the prestigious journal Nature, wrote an editorial suggesting that Sheldrake’s A New Science of Life was a book fit for burning. Even the most adamant defender of the reality of Popperian or Mertonian openness to alternatives in science would have to admit that this shows some desire to suppress an alternative theory.

Sheldrake has associated himself, through video dialogues, with Ralph Abraham, a leading theorist of chaos theory, advanced theoretical mechanics, and global analysis, who supports Hindu accounts of the world and notoriously remarked in the men’s magazine GQ that countercultural drug use had offered a gentle “kiss” to topologists in the 1960s. Also involved in the trialogues was the prematurely deceased Terence McKenna, far-out Amazonian psychedelic drug explorer, chemist, and shaman, who found enlightenment dialoguing with a grasshopper in the rainforest. These associations, among others, would understandably alienate more buttoned-up mainstream scientists, but they do not prejudge the results of his suggested experiments.

References

Bohm, David. Causality and Chance in Modern Physics. University of Pennsylvania Press, 1971.

Bohm, David. Wholeness and the Implicate Order. London: Routledge, 2002.

Dusek, Val. The Holistic Inspirations of Physics: The Underground History of Electromagnetic Theory. New Brunswick, NJ: Rutgers University Press, 1999.

Ehrenhaft, Felix, and Leo Banet. “The Magnetic Ion.” Science 96 (Sept. 4, 1942): 228-229.

“Felix Ehrenhaft.” Physics Today 5, no. 5 (1952): 37. http://dx.doi.org/10.1063/1.3067599

Fox Keller, Evelyn. A Feeling for the Organism: The Life and Work of Barbara McClintock. San Francisco: W.H. Freeman, 1983.

Frankel, Ken. “Magnetic Monopole Search, Past and Present.” Physics Today 70, no. 6 (2017): 13.

Frankel, Henry. “The Continental Drift Debate.” In Resolution of Scientific Controversies: Theoretical Perspectives on Closure, edited by H. Tristram Engelhardt and Arthur L. Caplan, 312-373. Cambridge: Cambridge University Press, 1985.

Gould, Stephen Jay. The Panda’s Thumb: More Reflections in Natural History. New York: W. W. Norton & Company, 1992.

Holton, Gerald. The Scientific Imagination. Cambridge: Harvard University Press, 1998.

Le Grand, Homer Eugene. Drifting Continents and Shifting Theories. Cambridge: Cambridge University Press, 1988.

Oreskes, Naomi. The Rejection of Continental Drift. Oxford University Press, 1999.

Peat, F. David. Infinite Potential: The Life and Times of David Bohm. Addison-Wesley Publishing Company, 1995.

Pigliucci, Massimo. “Feyerabend and the Cranks: A Response to Shaw.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 1-6.

Sheldrake, Rupert. A New Science of Life: The Theory of Morphic Resonance. Park Street Press, 1995.

Sheldrake, Rupert, Terence McKenna, and Ralph Abraham. The Evolution of Creativity. Monkfish Book Publishing, 2002.

Thompson, Dietrich. “Monopoles Oughtn’t to be a Monopoly.” Science News 109, no. 8 (February 21, 1976): 122-123.

Author Information: Ian James Kidd, University of Nottingham, ian.kidd@nottingham.ac.uk

Kidd, Ian James. “Cranks, Pluralists, and Epistemic Vices.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 7-9.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Cp

Please refer to:

Image credit: Holly Hayes, via flickr

A debate that began about how best to understand Feyerabend’s motivations for his ‘defenses’ of astrology has, thanks to Massimo Pigliucci (2017) and Jamie Shaw (2017), developed into a larger reflection on pluralism. Along the way, our exchange explored the authority of science, demarcation problems, and, in its most recent stages, the status and rationality of science. In his contribution to this exchange, Shaw gives an overview of the main principles of Feyerabend’s pluralism, namely, the commitments to proliferation and tenacity. Together, they function methodologically to urge scientists to develop theories that are inconsistent with established points of view, and then to defend those alternatives, even in the face of criticisms and obstacles (see Oberheim 2006).

Understanding Pluralism

The general style of argument for pluralism, developed by Feyerabend during the 1960s, merits two comments. The first is that, as Feyerabend himself constantly affirmed, the pluralistic nature of scientific enquiry is perfectly obvious to anyone acquainted with its history or practice. In his writings, the methodologists to admire are not philosophers of science, isolated in their studies from the laboratory workbench; rather, they are those reflective scientists, like Einstein, Mach, and the other heroes, whose epistemic authority on matters of methodology is rooted in their practical experience. So, when Pigliucci remarks that Feyerabend was complaining about nothing, since pluralism has always been a hallmark of scientific theorizing, he’s quite right—for one deep complaint of Against Method was a lack of pluralism in philosophical models of science, not in science itself.

The second comment on Feyerabend’s arguments for pluralism is that, in his hands, they were unsystematically developed—as one should expect of someone hostile to theoretical pretensions. What one finds throughout his work, instead, are experiments with different types of argument for pluralism, adapted to changing concerns and interests. A job for later scholars, most obviously Eric Oberheim (2006) and Hasok Chang (2012, chapter 5), was therefore to give a more systematic treatment of ‘Feyerabendian’ arguments for pluralism—ones informed by, but not articulated in, the writings of everyone’s favorite epistemological anarchist. Chang, for instance, divides pro-pluralist arguments into those citing ‘benefits of tolerance’ and those citing ‘benefits of interaction’, locating instances of both mixed up in Feyerabend’s writings.

At this point, though, we run into the worries that motivate Pigliucci; namely, that these forms of sensible pluralisms are apt to degenerate, at least in Feyerabend’s hands, into grossly permissive forms of the ‘anything goes!’ variety. Closely attending to the history of science can, it’s true, give us enough cautionary tales to keep open a space for alternative theories—no one doubts that. But what’s not reasonable, argues Pigliucci, ‘is for Feyerabend to think that astrology, or demonology, or homeopathy, are alternative “theories” that ought to be included in the modern pluralist portfolio’ (2017, 2). An appeal for pluralism should not degenerate into an abuse of pluralism, and the million-dollar question is how to mark the point of that shift in a principled way. Unfortunately, Feyerabend does not offer a crisp answer to that question. But, I think, there is no need for one in the case of astrology.

In my original article (Kidd 2016a), I argued that the defenses of astrology were not motivated by a sense of astrology’s epistemic value—so, on my reading, there’s no call for the inclusion of astrology and the rest in our ‘pluralist portfolio’. There was no question of including astrology within the modern scientific imagination as a first-order epistemic resource, able to inform contemporary enquiries. That being so, there’s no need to demarcate what does and does not merit inclusion. Indeed, what one sees in Feyerabend’s essay, ‘The Strange Case of Astrology’, is not really a defense of astrology at all, but rather of the epistemic virtues that are integral to the character of scientists qua epistemic authorities. Astrology was discussed because it had been attacked by a group of scientists who failed to provide easily available arguments against it, and who instead relied on dogmatic assertion, arrogant rhetoric, and appeals to authority. It was this bad epistemic behavior that really motivated Feyerabend, rather than any sense on his part that astrology belongs in our pluralist portfolio. I suggested that Feyerabend’s purposes in defending astrology can be profitably understood as an appeal to epistemic virtues and vices—that what he was really concerned with were the virtues of mind scientists ought to evince, and the danger to their authority if they evince the related vices of mind (see Battaly 2014, Cassam 2016).

Epistemic Virtues and Vices

I want to suggest that, at this point in our debate, another role for epistemic vices comes into view. Pigliucci rightly remarks that ‘a constant danger for pluralism of any sort is that it risks becoming a fairly lazy intellectual position, where anything goes because one is not willing to do the hard work of narrowing down its scope’ (2017, 1). Two points should be made here. The first is that pluralism can admit of epistemically vicious forms, licensing failures to do the sorts of epistemic work that effective enquiry requires—if ‘anything goes’, one can suspend the hard work of investigating and evaluating candidate views, and shrug off the responsibility to remove those that aren’t worth retaining. Although pluralism may enjoy benefits of tolerance and interaction, as Chang calls them, it can also impose costs—disorientation, confusion, and incapacitation, say. It is not always virtuous to be pluralistic, a point that Feyerabend often neglects.

A second point is that Feyerabend, at least as I read him, tends to see pluralism only as virtuous. Throughout his writings, the underlying sense is that being pluralistic is edifying, an expression of—and means to exercise—admirable qualities, like humility, imaginativeness, and open-mindedness. An epistemic anarchist, after all, enjoys an openness unavailable to the poor Kuhnian normal scientist, stifled by their self-imposed dogmatism—a virtue-epistemic aspect of Feyerabend’s famous essay, ‘Consolations for the Specialist’ (1970), that has gone unnoticed. Indeed, note that Feyerabend’s two pluralist principles can both function as virtues of enquirers, as well as norms of enquiry: tenacity can be an epistemic virtue, a disposition close to the virtue of epistemic perseverance (Battaly forthcoming), and proliferation might not itself be an epistemic virtue, but it surely requires the exercise of several, including creativity and diligence. More generally, Feyerabend constantly praises qualities like creativity, imaginativeness, and tolerance while also castigating vices like arrogance and dogmatism.

I want to suggest that we take seriously the idea that certain epistemic stances can be epistemically virtuous or vicious. Clearly, the stance of those scientists who attacked astrology was epistemically vicious, specifically arrogant and dogmatic, as I argued in my original paper. I think that certain pluralistic stances can be vicious, too, such as the overly permissive sorts that Pigliucci criticizes. But other pluralist stances can be virtuous, encouraging tolerance and imaginativeness and other admirable qualities, perhaps as in Chang’s account. The claim is not that a stance can have epistemic virtues or vices in the full-blooded ways that human agents do, only that stances can have the essential components of those virtues and vices. I have elsewhere given a methodology for appraising stances in virtue-and-vice-epistemic terms and offered a set of examples (Kidd 2016b, Kidd forthcoming a). In one of these, I argue that many forms of scientism, construed as a stance, are epistemically vicious (Kidd forthcoming b). Investigating the various stances emerging in this debate in vice-epistemic terms would be a worthy project. Perhaps what is really wrong with doctrinaire scientism, flaccid pluralism, and uncritical zeal for pseudoscientific sentiment is that all of these are, deep down, epistemically vicious.

References

Battaly, Heather. “Intellectual Perseverance.” Journal of Moral Philosophy, forthcoming.

Battaly, Heather. “Varieties of Epistemic Vice.” In The Ethics of Belief, edited by Jon Matheson and Rico Vitz, 51-76. Oxford: Oxford University Press, 2014.

Cassam, Quassim. “Vice Epistemology.” The Monist 99, no. 3 (2016): 159-180.

Chang, Hasok. Is Water H2O? Evidence, Realism and Pluralism. Dordrecht: Springer, 2012.

Feyerabend, Paul. “Consolations for the Specialist.” In Criticism and the Growth of Knowledge, edited by Imre Lakatos and Alan Musgrave, 197-231. Cambridge: Cambridge University Press, 1970.

Kidd, Ian James. “Why Did Feyerabend Defend Astrology? Integrity, Virtue, and the Authority of Science.” Social Epistemology 30, no. 4 (2016a): 464-482.

Kidd, Ian James. “Charging Others with Epistemic Vice.” The Monist 99, no. 3 (2016b): 181-197.

Kidd, Ian James. “Epistemic Vices in Public Debate: The Case of New Atheism.” In New Atheism: Critical Perspectives and Contemporary Debates, edited by Christopher Cotter and Philip Quadrio. Dordrecht: Springer, forthcoming a.

Kidd, Ian James. “Is Scientism Epistemically Vicious?” In Scientism: Problems and Prospects, edited by Jeroen de Ridder, Rik Peels, and René van Woudenberg. Oxford: Oxford University Press, forthcoming b.

Oberheim, Eric. Feyerabend’s Philosophy. Berlin: Walter de Gruyter, 2006.

Pigliucci, Massimo. “Feyerabend and the Cranks: A Response to Shaw.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 1-6.

Shaw, Jamie. “Feyerabend and the Cranks: On Demarcation, Epistemic Virtues, and Astrology.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 74-88.

Author Information: Massimo Pigliucci, City College of New York, massimo@platofootnote.org

Pigliucci, Massimo. “Feyerabend and the Cranks: A Response to Shaw.” Social Epistemology Review and Reply Collective 6, no. 7 (2017): 1-6.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3C4

Please refer to:

Image credit: magro_kr, via flickr

Jamie Shaw (2017) has vigorously engaged both my (mild) criticism of Ian Kidd’s take on Feyerabend’s famous defense of astrology (Pigliucci 2016), and Kidd’s own (mild) concessions in light of such criticism. Here I want to push back a little against Shaw’s approach, with two goals in mind: (i) to identify the limits of Feyerabend’s “hard core pluralism”; and (ii) to elaborate on and defend my take on the feasibility of the science-pseudoscience demarcation project. I will proceed by following the steps in Shaw’s argument, highlighting those that in my opinion are the most important bits, and addressing them to the best of my abilities.

Citing Farrell, Shaw writes: “pluralism is the hard-core of the Feyerabendian philosophical program and it came to permeate all aspects of his thought” (2017, 75). That is surely correct. But a constant danger for pluralism of any sort is that it risks becoming a fairly lazy intellectual position, where anything goes because one is not willing to do the hard work of narrowing down its scope. This is, for instance, what is clearly in evidence in the recently (obviously, posthumously) published book by Feyerabend himself, Philosophy of Nature (2016). The book is a perfect display, writ large, of what the author’s take on astrology was in a more narrow instance: erudite scholarship, sharp criticism of others’ positions, and grand claims about methodological anarchism. The whole thing followed, unfortunately, by very little in the way of positive delivery.

Why Feyerabend is Wrong About Radical Pluralism

To be specific, Shaw rightly says that

two principles comprise Feyerabend’s pluralism: the principles of proliferation and tenacity … the principle of proliferation [says that] we should “[i]nvent, and elaborate theories which are inconsistent with the accepted point of view, even if the latter should happen to be highly confirmed and generally accepted.” … Proliferation must be complemented by the principle of tenacity which states that we should “select from a number of theories the one that promises to lead to the most fruitful results, and stick to this theory even if the actual difficulties it encounters are considerable” (2017, 75).

Neither of these two principles would find much resistance from either scientists or philosophers of science, in part because they are so vague that it is hard to see what, exactly, one would resist. Moreover, there are plenty of instances in the history of science in which precisely what Feyerabend advocates did, in fact, happen. Take the Copernican revolution (Kuhn 1957), for example, during which Copernicus elaborated a theory that was certainly highly inconsistent with the accepted point of view, with Galileo and others tenaciously keeping it alive for decades, in spite of its obvious difficulties, which were resolved only with Kepler’s adoption of the non-circularity of planetary orbits.

A second example, from biology, is the period of the so-called “eclipse” of the Darwinian theory (Bowler 1992), between the end of the 19th century and the early part of the 20th, when criticism of Darwinism, from paleontology first and the new science of genetics later, brought about a proliferation of radically alternative theories, from orthogenesis to saltationism. These theories were tenaciously defended for decades, despite the increasing issues confronting them, which eventually led to their rejection in favor of the so-called Modern Synthesis in evolutionary biology (Huxley 1942/2009).

Many other examples could be plucked from the history of science, and it seems to me that Feyerabend was engaging in much complaining about nothing, since pluralism has always been a hallmark of scientific theorizing. That, after all, is how science makes progress in the first place. What does not seem reasonable, however, is for Feyerabend to think that astrology, or demonology, or homeopathy, are alternative “theories” that ought to be included in the modern pluralist portfolio. Sure, there is always the logical possibility that fringe notions may turn out to contain a kernel of truth, but Feyerabend does not provide us with an iota of reason for why we should keep clearly discredited ones such as those just mentioned around as potentially viable alternatives to investigate, particularly when there is only so much time, money, and resources that go into the scientific enterprise in the first place.

“If we were to abandon theories the moment they came into difficulties,” Shaw continues (2017, 76), “we would have abandoned many of the most successful theories throughout the history of science.” Notice the conditional: it turns out, historically, that scientists often did no such thing. Not even Popper (1934/1959) at his most strictly falsificationist ever advocated such a stance.

Feyerabend’s mature view of tenacity is exceptionally radical in two ways. Firstly, it has no conditions for acceptance; any theory can be held tenaciously. … Even theories that have blatant internal contradictions or seem to conflict with facts can be, and often are, developed into useful research programs … The principle of tenacity does not, of course, commit us to indefinitely pursuing every line of research we inquire about but simply that it is always perfectly rational to continue developing ideas despite their extant problems (2017, 77).

I take it Shaw (and Feyerabend) and I subscribe to different concepts of what counts as “perfectly rational.” If there are no conditions for acceptance (or rejection) of a theory, then how, exactly, does the principle of tenacity not commit us to indefinitely pursuing every line of research? Who, and on what grounds, makes the decision to stop being tenacious? It seems that Shaw and Feyerabend simply want to have their cake and eat it too. As for the statement that theories affected by blatant internal or factual contradictions are “often” developed into useful research programs, it is curious that Shaw does not provide us with a single example. Feyerabend is awfully vague about this point as well.

Shaw then goes on to state that “[a]lternatives will be more efficient the more radically they differ from the point of view to be investigated. … what is ‘non-scientific’ one day is ‘scientific’ the next and the transition between the two requires being placed within scientific debates.” But hold on. Why, exactly, should such a counterintuitive relation between radicality and efficiency hold? What historical evidence has been marshaled for that being the case? Indeed, what criterion of efficiency is being deployed here? And yes, sometimes what may appear non-scientific may turn out to be scientific later on (e.g., the idea of continental drift in geology: Frankel 1979). But I never proposed that the science / quasi-science / pseudo-science territory is demarcated atemporally. Rather, it is a territory marked by fluid boundaries that evolve because they reflect the understanding of the world on the part of the scientific community at any given moment. That said, my colleague Maarten Boudry and I (2013) have observed that there don’t seem to be cases where a notion has been seriously entertained by the scientific community, then relegated to the pseudoscience bin, and later on somehow re-emerged to find new life and success. We refer to this, informally, as the “pseudoscience black hole”: once in, you never get out. It is true for astrology just as much as for demonology and homeopathy, and we have yet to find exceptions.

Once More on Demarcation

Shaw then proceeds to consider my proposal for the science-pseudoscience demarcation problem. Correctly noting that I do not think classical attempts based on small sets of necessary and jointly sufficient conditions could possibly work, he acknowledges that my proposal is that of considering both science and pseudoscience as Wittgensteinian family resemblance concepts, with no sharp boundaries and no set of criteria that are instantiated in all cases. To which I would add the above reminder that I am also explicit about the fact that the science-pseudoscience territory is temporally fluid, to a point (Pigliucci 2013).

In that paper, I propose two axes (not really “criteria”) that may help us map said territory: one that has to do with the internal coherence and theoretical sophistication of a given theory, the second that captures the empirical content of the theory. So for instance, the Standard Model in physics scores high on both counts, as does modern evolutionary theory. Accordingly, they are (currently) considered very solid sciences. At the opposite extreme, astrology and intelligent design creationism are both empirically and theoretically poor, so they are classed as obvious instances of pseudosciences (again, for now). The interesting stuff lies in the middle, e.g., fields that are high on theoretical but low on empirical content (economics) or vice versa (psychology). The borderlands also include fields of inquiry that are usually considered pseudoscientific, such as parapsychology, but are still sufficiently interesting—either theoretically or empirically—to warrant further study.

Shaw acknowledges all this and yet says that “theories that contain low degrees of empirical support (or even conflict with known facts) or are theoretically confused are perfectly pursuit-worthy on Feyerabend’s account,” and that “Pigliucci’s criteria fail to provide reasonable grounds to prevent the consideration of ‘pseudosciences’” (2017, 79). Well, but my criteria are not those of Feyerabend, so the fact that we disagree may be taken as an indictment of my views just as much as of his; it all depends on which position one finds more plausible. And the latter part of the quote misunderstands what my criteria were developed to do: not to prescriptively separate science from pseudoscience, but rather to provide a compass of sorts to navigate the territory and its complex, temporally fluid boundaries.

Demarcation criteria affect people with different intellectual backgrounds. They affect funding distribution policies, taxation policies, those who benefit or are harmed by the creation (or lack thereof) of particular pieces of scientific knowledge, and so on. This is far beyond the domain of scientists or philosophers of science who provide, at best, one perspective on demarcation. … If scientists are forced to conform to certain views because their education does not provide viable alternatives, if peer review is so conservative that it causes long-term conformity, and so on, then those intuitions aren’t worth taking seriously (2017, 79).

The first bit seems to me to confuse epistemic assessment, which is definitely within the purview of the scientist, with other, surely important, aspects of social discourse. I have never claimed that scientists should be in charge of taxation policies, or more broadly of decisions concerning the broader societal impact of scientific research. Indeed, I most certainly oppose such a stance. Take the issue of climate change, for instance (Bennett 2016). It seems eminently sensible, in that case, to leave the science to the scientists—because they are the ones who are qualified to carry it out, just as dentists are qualified to take care of teeth—while the much broader and more complex question of how to deal with climate change requires that we call to the high table a number of other actors, including but not limited to economists, various types of technologists, sociologists, and even ethicists.

As for the series of conditionals in the second bit quoted from Shaw above, there are far too many unsubstantiated ones. Is it the case that scientists are “forced” to conform because of their education? Is it true that peer review is “too conservative”? On what grounds, according to what criteria? A lot of heavy-duty legwork needs to be done to establish those points, work that is obviously beyond the scope of Shaw’s commentary, but that Feyerabend himself simply never did. He was content to throw the bomb into the crowd and watch the ensuing chaos from the outside.

So, was Feyerabend Right in “Defending” Astrology?

The last part of Shaw’s commentary returns to the question that began this whole series of interesting, and I hope useful, exchanges: was Feyerabend right in mounting his peculiar defense of astrology?

“Feyerabend defended the epistemic integrity of some practitioners of astrology because he was practicing the pluralism he preached and decided to defend views that were dismissed or ostracized from the philosophy of science. In other words, Feyerabend was proliferating” (2017, 80). Indeed, but epistemic integrity is a necessary and yet not sufficient condition for being taken seriously as a scientific research program. It is truly astounding that people still think astrology is worth defending, and I’m not talking just about the horoscope variety, as Shaw suggests. While it was certainly the case that some of the signatories of the infamous anti-astrology manifesto that so riled Feyerabend did not do their homework on astrology, plenty of others have. Among them was Carl Sagan, who famously did not sign the manifesto, precisely for the reasons that led Feyerabend to think it was a bad move, but who was nonetheless a harsh critic of much pseudoscience, including astrology.

Shaw writes that “a view one may have reason to reject may still be true. To deny this is to assume our own infallibility. … A problematic view ‘may and very commonly does, contain a portion of truth’” (2017, 80). Well, no, definitely not. The accusation of infallibility against critics of pseudoscience is ludicrous. If taken seriously it would mean that every time one has very strong theoretical or empirical reasons to reject a given notion (until, and if, proven wrong) one ipso facto thinks of himself as infallible. Yes, a problematic view may turn out to contain a portion of truth, but “very commonly does”? This is another example of Shaw using a page from Feyerabend’s playbook, making grand statements that are accompanied by absolutely no evidence. Why should philosophers of science take them seriously?

“Because its critics are being arrogant, defending a ‘pro-astrology’ perspective is necessary to combat this vice,” continues Shaw (2017, 82). But why is it a good idea to fight vice with vice? Why is it not enough to embarrass some of the signatories of the anti-astrology manifesto, showing to the public that they did not know what they were talking about? Why is it that one has to take the next step and defend the possibility that there is still value in astrology, when one patently does not actually believe it, as Feyerabend did not?

Complaining about my objection that Feyerabend’s attitude is positively dangerous, because it facilitates the acceptance by the public of notions that endanger safety, Shaw replies:

Feyerabend never, to my knowledge, discusses climate change, anti-vaccination movements, or AIDS denialism; these (mostly) became issues after Feyerabend’s death. Furthermore, there is no legitimate inference from Feyerabend’s pluralism to defending these topics in a direct way. … Pigliucci cannot ascribe any of these particular consequences as emanating from Feyerabend (2017, 83-84).

Except, of course, that I do no such thing. I’m perfectly aware that those issues became prominent after Feyerabend’s death. But it would be naive to believe that there is no connection to be made here. Indeed, the infamous “science wars” of the ’90s (Gross and Levitt 1994), pitting strongly postmodernist philosophers and sociologists on one hand against scientists and philosophers of science on the other, had a pretty direct connection with Feyerabend’s work, which was, predictably, hailed by the first group and condemned by the second one.

Finally, Shaw takes up what Feyerabend, Kidd, and I have in common: a strong suspicion of what nowadays is referred to as “scientism,” the exponents of which are those whom Shaw labels “the cranks.” Citing Feyerabend, Shaw writes: “The crank usually is content with defending the point of view in its original, undeveloped, metaphysical form, and he is not prepared to test its usefulness in all those cases which seem to favor the opponent, or even admit that there exists a problem” (2017, 85). I have certainly encountered such types, both among scientists and among so-called skeptics. They are not serving the interests of science, critical thinking, or society. So Shaw is correct when he states that “it is clear that there is a commonality between Pigliucci, Kidd, and Feyerabend: their disdain for the cranks!” Indeed. But, contra Feyerabend, I do not think that the way to do it is to take an anti-rationalist stance about the value of pseudoscience. It is both counterproductive (Feyerabend was famously labeled “the Salvador Dali of academic philosophy, and currently the worst enemy of science” by two physicists in the prestigious journal Nature: Theocharis and Psimopoulos 1987) and simply not the virtuous thing to do.

References

Bennett, Jeffrey. A Global Warming Primer: Answering your Questions About the Science, the Consequences, and the Solutions. Big Kid Science, 2016.

Bowler, Peter. The Eclipse of Darwinism. Johns Hopkins University Press, 1992.

Feyerabend, Paul. Philosophy of Nature. Polity, 2016.

Frankel, Henry. “The Career of Continental Drift Theory: An Application of Imre Lakatos’ Analysis of Scientific Growth to the Rise of Drift Theory.” Studies in History and Philosophy of Science Part A 10, no. 1 (1979): 21-66.

Gross, Paul, and Norman Levitt. Higher Superstition: The Academic Left and Its Quarrels With Science. Johns Hopkins University Press, 1994.

Huxley, Julian. Evolution: The Modern Synthesis. MIT Press, 1942/2009.

Kidd, Ian James. “How Should Feyerabend Have Defended Astrology? A Reply to Pigliucci.” Social Epistemology Review and Reply Collective 5, no. 6 (2016): 11-17.

Kuhn, Thomas. The Copernican Revolution: Planetary Astronomy in the Development of Western Thought. Harvard University Press, 1957.

Pigliucci, Massimo. “The Demarcation Problem: A (Belated) Response to Laudan.” In Philosophy of Pseudoscience: Reconsidering the Demarcation Problem, edited by Massimo Pigliucci and Maarten Boudry, 9-28. Chicago: University of Chicago Press, 2013.

Pigliucci, Massimo. “Was Feyerabend Right in Defending Astrology? A Commentary on Kidd.” Social Epistemology Review and Reply Collective 5, no. 5 (2016): 1-6.

Pigliucci, Massimo, and Maarten Boudry, eds. Philosophy of Pseudoscience: Reconsidering the Demarcation Problem. University of Chicago Press, 2013.

Popper, Karl. The Logic of Scientific Discovery. Hutchinson, 1934/1959.

Shaw, Jamie. “Feyerabend and the Cranks: On Demarcation, Epistemic Virtues, and Astrology.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 74-88.

Theocharis, T., and Mihalis Psimopoulos. “Where Science Has Gone Wrong.” Nature 329 (1987): 595-598.

Author Information: Jana Bacevic, University of Cambridge, jb906@cam.ac.uk

Bacevic, Jana. “Solving the Democratic Problem.” Social Epistemology Review and Reply Collective 6, no. 5 (2017): 50-52.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Bl

Please refer to:

Image credit: Rowman & Littlefield

It is a testament to the lasting influence of Karl Popper and Richard Rorty that their work continues to provide inspiration for debates concerning the role and purpose of knowledge, democracy, and intellectuals in society. Alternatively, it is a testament to the recurrence of the problem that continues to lurk under the glossy analytical surface or occasional normative consensus of these debates: the impossibility of reconciling the concepts of liberal and epistemic democracy. Essays collected under the title Democratic Problem-Solving (Cruickshank and Sassower 2017) offer grounds for both assumptions, so this is what my review will focus on.

Boundaries of Rational Discussion

Democratic Problem-Solving is a thorough and comprehensive (if at times seemingly meandering) meditation on the implications of Popper’s and Rorty’s ideas for the social nature of knowledge and truth in the contemporary Anglo-American context. This context is characterised by the combined forces of neoliberalism and populism, growing social inequalities, and what has for a while now been dubbed, perhaps euphemistically, the crisis of democracy. Cruickshank’s (in other contexts almost certainly heretical) opening, which questions the tenability of distinctions between Popper and Rorty, then, serves to remind us that both were devoted to the purpose of defining the criteria for and setting the boundaries of rational discussion, seen as the road to problem-solving. Jürgen Habermas, whose name also resonates throughout this volume, elevated communicative rationality to the foundational principle of Western democracies, as the unifying/normalizing ground from which to ensure the participation of the greatest number of members in the public sphere.

Intellectuals were, in this view, positioned as guardians—epistemic police, of sorts—of this discursive space. Popper’s take on epistemic ‘policing’ (see DPS, 42) was to use the standards of scientific inquiry as exemplars for maintaining a high level, and, more importantly, the neutrality, of public debates. Rorty saw it as the minimal instrument that ensured civility without questioning, or at least without implicitly dismissing, others’ cultural premises, or even ontological assumptions. The assumption they and the authors in this volume have in common is that rational dialogue is, indeed, both possible and necessary: possible because standards of rationality are shared across humanity, and necessary because it is the best way to ensure consensus around the basic functioning principles of democracy. This also ensured the pairing of knowledge and politics: by rendering visible the normative (or political) commitments of knowledge claims, sociology of knowledge (as Reed shows) contributed to affirming the link between the epistemic and the political. As Agassi’s syllogism succinctly demonstrates, this link quickly morphed from signifying correlation (knowledge and power are related) to causation (the more knowledge, the more power), suggesting that epistemic democracy was, if not a precursor, then certainly a correlate, of liberal democracy.

This is why Democratic Problem-Solving cannot avoid running up against the issue of public intellectuals (qua epistemic police), and, obviously, their relationship to ‘Other minds’ (communities being policed). In the current political context, however, to the well-exercised questions Sassower raises such as—

should public intellectuals retain their Socratic gadfly motto and remain on the sidelines, or must they become more organically engaged (Gramsci 2011) in the political affairs of their local communities? Can some academics translate their intellectual capital into a socio-political one? Must they be outrageous or only witty when they do so? Do they see themselves as leaders or rather as critics of the leaders they find around them (149)?

—we might need to add the following: “And what if none of this matters?”

After all, differences in vocabularies of debate matter only if access to the debate depends on their convergence to a minimal common denominator. The problem for the guardians of the public sphere today is not whom to include in these debates and how, but rather what to do when those ‘others’ refuse, metaphorically speaking, to share the same table. Populist right-wing politicians have at their disposal a wealth of ‘alternative’ outlets (Breitbart, Fox News, and increasingly, it seems, even the BBC), not to mention ‘fake news’ or the ubiquitous social media. The public sphere, in this sense, resembles less a (however cacophonous) town hall meeting than a series of disparate village tribunals. Of course, as Fraser (1990) noted, fragmentation has been inherent to the public sphere since its inception within the Western bourgeois liberal order.

The problem, however, is less what happens when other modes of arguing emerge and demand to be recognized, and more what happens when they aspire to a redistribution of political power that threatens to overturn the very principles that gave rise to them in the first place. We are used to these terms denoting progressive politics, but there is little that prevents them from being appropriated for more problematic ideologies: after all, a substantial portion of the current conservative critique of the ‘culture of political correctness’, especially on campuses in the US, rests on the argument that ‘alternative’ political ideologies have been ‘repressed’, sometimes justifying this through appeals to the freedom of speech.

Dialogic Knowledge

In assuming a relatively benevolent reception of scientific knowledge, then, appeals such as Chis and Cruickshank’s to engage with different publics—whether as academics, intellectuals, workers, or activists—remain faithful to Popper’s normative ideal concerning the relationship between reasoning and decision-making: ‘the people’ would see the truth, if only we were allowed to explain it a bit better. Obviously, in arguing for dialogical, co-produced modes of knowledge, we are disavowing the assumption of a privileged position from which to do so; but, all too often, we let in through the back door the implicit assumption of the normative force of our arguments. It rarely, if ever, occurs to us that those we wish to persuade may have nothing to say to us, may be immune or impervious to our logic, or, worse, that we might not want to argue with them.

For if social studies of science have taught us anything, it is that scientific knowledge is, among other things, a culture. An epistemic democracy of the Rortian type would mean that it is a culture like any other, and thus not automatically entitled to a privileged status among other epistemic cultures, particularly not if its political correlates are weakened—or missing (cf. Hart 2016). Populist politics certainly has no use for critical slow dialogue, but it is increasingly questionable whether it has use for dialogue at all (at the time of writing of this piece, in the period leading up to the 2017 UK General Election, the Prime Minister is refusing to debate the Leader of the Opposition). Sassower’s suggestion that neoliberalism exhibits a penchant for justification may hold promise, but, as Cruickshank and Chis (among others) show with the example of UK higher education, ‘evidence’ can be adjusted to suit a number of policies, and political actors are all too happy to do that.

Does this mean that we should, as Steve Fuller suggested in another SERRC article (http://wp.me/p1Bfg0-3nx), see in ‘post-truth’ the STS symmetry principle? I am skeptical. After all, judgments of validity are the privilege of those who can still exert a degree of control over access to the debate. In this context, I believe that questions of epistemic democracy, such as who has the right to make authoritative knowledge claims, in what context, and how, need to, at least temporarily, come second in relation to questions of liberal democracy. This is not to be teary-eyed about liberal democracy: if anything, my political positions lie closer to Cruickshank and Chis’ anarchism. But it is the only system that can—hopefully—be preserved without a massive cost in human lives, and perhaps repurposed so as to make them more bearable.

In this sense, I wish the essays in the volume had confronted head-on questions such as whether we should defend epistemic democracy (and what versions of it) if its principles are mutually exclusive with liberal democracy, or, conversely, whether we would uphold liberal democracy if it threatened to suppress epistemic democracy. For the question of standards of public discourse is going to keep coming up, but it may decreasingly have the character of an academic debate, and increasingly concern the possibility of having one at all. This may turn out to be, so to speak, a problem that precedes all other problems. Essays in this volume have opened up important venues for thinking about it, and I look forward to seeing them discussed in the future.

References

Cruickshank, Justin and Raphael Sassower. Democratic Problem-Solving: Dialogues in Social Epistemology. London: Rowman & Littlefield, 2017.

Fraser, Nancy. “Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy.” Social Text 25/26 (1990): 56-80.

Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective, December 25, 2016. http://wp.me/p1Bfg0-3nx

Hart, Randle J. “Is a Rortian Sociology Desirable? Will It Help Us Use Words Like ‘Cruelty’?” Humanity and Society, 40, no. 3 (2016): 229-241.