
Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu

Sassower, Raphael. “Heidegger and the Sociologists: A Forced Marriage?” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 30-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3X8

The town of Messkirch, the hometown of Martin Heidegger.
Image by Renaud Camus via Flickr / Creative Commons

 

Jeff Kochan is upfront about not being able “to make everyone happy” in order to write “a successful book.” For him, choices had to be made, such as promoting “Martin Heidegger’s existential conception of science . . . the sociology of scientific knowledge . . . [and the view that] the accounts of science presented by SSK [sociology of scientific knowledge] and Heidegger are, in fact, largely compatible, even mutually reinforcing.” (1) This means combining the existentialist approach of Heidegger with the sociological view of science as a social endeavour.

Such a marriage is bound to be successful, according to the author, because together they can exercise greater vitality than either would on its own. If each party were to incorporate the other’s approach and insights, they would realize how much they needed each other all along. This is not an arranged or forced marriage, according to Kochan the matchmaker, but an ideal one he has envisioned from the moment he laid eyes on each of them independently.

The Importance of Practice

Enumerating the critics of each party, Kochan hastens to suggest that “both SSK and Heidegger have much more to offer a practice-based approach to science than has been allowed by their critics.” (6) The Heideggerian deconstruction of science, in this view, is historically informed and embodies a “form of human existence.” (7) Focusing on the early works of Heidegger, Kochan presents an ideal groom who can offer his SSK bride the theoretical insights of overcoming the Cartesian-Kantian false binary of subject-object (11) while benefitting from her rendering his “theoretical position” more “concrete, interesting, and useful through combination with empirical studies and theoretical insights already extant in the SSK literature.” (8)

In this context, there seems to be a greater urgency to make Heidegger relevant to contemporary sociological studies of scientific practices than an expressed need by SSK to be grounded existentially in the Heideggerian philosophy (or for that matter, in any particular philosophical tradition). One can perceive this postmodern juxtaposition (drawing on seemingly unrelated sources in order to discover something novel and more interesting when combined) as an attempt to fill intellectual vacuums.

This marriage is advisable, even prudent, to ward off criticism levelled at either party independently: Heidegger for his abstract existential subjectivism and SSK for unwarranted objectivity. For example, we are promised, with Heidegger’s “phenomenology of the subject as ‘being-in-the-world’ . . . SSK practitioners will no longer be vulnerable to the threat of external-world scepticism.” (9-10) Together, so the argument proceeds, they will not simply adopt each other’s insights and practices but will transform themselves each into the other, shedding their misguided singularity and historical positions for the sake of this idealized research program of the future.

Without flogging this marriage metaphor to death, one may ask if the two parties are indeed as keen to absorb the insights of their counterpart. In other words, do SSK practitioners need the Heideggerian vocabulary to make their work more integrated? Their adherents and successors have proven time and again that they can find ways to adjust their studies to remain relevant. By contrast, the Heideggerians remain fairly insulated from the studies of science, reviving “The Question Concerning Technology” (1954) whenever asked about technoscience. Is Kochan too optimistic in thinking that citing Heidegger’s earliest works will make him more rather than less relevant in the 21st century?

But What Can We Learn?

Kochan seems to think that reviving the Heideggerian project is worthwhile: what if we took the best from one tradition and combined it with the best of another? What if we transcended the subject-object binary and fully appreciated that “knowledge of the object [science] necessarily implicates the knowing subject [practitioner]”? (351) Under such conditions (as philosophers of science have understood for a century), the observer is an active participant in the observation, so much so (as some interpreters of quantum physics admit) that the very act of observing impacts the objects being perceived.

Add to this the social dimension of the community of observers-participants and the social dynamics to which they are institutionally subjected, and you have the contemporary landscape that has transformed the study of Science into the study of the Scientific Community and eventually into the study of the Scientific Enterprise.

But there is another objection to be made here: Even if we agree with Kochan that “the subject is no longer seen as a social substance gaining access to an external world, but an entity whose basic modes of existence include being-in-the-world and being-with-others,” (351) what about the dynamics of market capitalism and democratic political formations? What about the industrial-academic-military complex? To hope for the “subject” to be more “in-the-world” and “with-others” is already quite common among sociologists of science and social epistemologists, but does this recognition alone suffice to understand that neoliberalism has a definite view of what the scientific enterprise is supposed to accomplish?

Though Kochan nods at “conservative” and “liberal” critics, he fails to concede that theirs remain theoretical critiques divorced from the neoliberal realities that permeate every sociological study of science and that dictate the institutional conditions under which the very conception of technoscience is set.

Kochan’s appreciation of the Heideggerian oeuvre is laudable, even admirable in its quixotic enthusiasm for Heidegger’s four-layered approach (“being-in-the-world,” “being-with-others,” “understanding,” and “affectivity”; 356), but does this amount to more than “things affect us, therefore they exist”? (357) Just like the Cartesian “I think, therefore I am,” this formulation brings the world back to us as a defining factor in how we perceive ourselves instead of integrating us into the world.

Perhaps a Spinozist approach would bridge the binary Kochan (with Heidegger’s help) wishes to overcome. Kochan wants us to agree with him that “we are compelled by the system [of science and of society?] only insofar as we, collectively, compel one another.” (374) Here, then, we are shifting ground towards SSK practices and focusing on the sociality of human existence and the ways the world and our activities within it ought to be understood. There is something quite appealing in bringing German and Scottish thinkers together, but it seems that merging them is both unrealistic and perhaps too contrived. For those, like Kochan, who dream of a Hegelian Aufhebung of sorts, this is an outstanding book.

For the Marxist and sociological skeptics who worry about neoliberal trappings, this book will remain an erudite and scholarly attempt to force a merger. As we look at this as yet another arranged marriage, we should ask ourselves: would the couple ever have consented to this on their own? And if the answer is no, who are we to force this on them?

Contact details: rsassowe@uccs.edu

References

Kochan, Jeff. Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge. Cambridge, UK: Open Book Publishers, 2017.

Author Information: Stephen Turner, University of South Florida, turner@usf.edu

Turner, Stephen. “Fuller’s roter Faden.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 25-29.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3WX

Art by William Blake, depicting the creation of reality.
Image via AJC1 via Flickr / Creative Commons

The Germans have a notion of “research intention,” by which they mean the underlying aim of an author’s work as revealed over its whole trajectory. Francis Remedios and Val Dusek have provided, if not an account itself, the material for an account of Steve Fuller’s research intention, or, as they put it, the “thread” that runs through his work.

These “intentions” are not something that is apparent to the authors themselves, which is part of the point: at the start of their intellectual journey they are working out a path which leads they know not where, but which can be seen as a path with an identifiable beginning and end retrospectively. We are now at a point where we can say something about this path in the case of Fuller. We can also see the ways in which various Leitmotifs, corollaries, and persistent themes fit with the basic research intention, and see why Fuller pursued different topics at different times.

A Continuity of Many Changes

The ur-source for Fuller’s thought is his first book, Social Epistemology. On the surface, this book seems alien to the later work, so much so that one can think of Fuller as having undergone a turn. But seen in terms of an underlying research intention, and indeed in Fuller’s own self-explications included in this text, this is not the case: the later work is a natural development, almost an entailment, of the earlier work, properly understood.

The core of the earlier work was the idea of constructing a genuine epistemology, in the sense of a kind of normative account of scientific knowledge, out of “social” considerations and especially social constructivism, which at the time was considered to be either descriptive or anti-epistemological, or both. For Fuller, this goal meant that the normative content would at least include, or be dominated by, the “social” part of epistemology, considerations of the norms of a community, norms which could be changed, which is to say made into a matter of “policy.”

This leap to community policies leads directly to a set of considerations that are corollaries to Fuller’s long-term project. We need an account of what the “policy” options are, and a way to choose between them. Fuller was trained at a time when there was a lingering controversy over this topic: the conflict between Kuhn and the Popperians. Kuhn represented a kind of consensus-driven authoritarianism. For him, it was right and necessary for science to be organized around ungroundable premises that enabled science to be turned into puzzle-solving, rather than insoluble disputes over fundamentals. These occurred, and produced new ungroundable consensual premises, at the rare moments of scientific revolutions.

Progress was possible through these revolutions, but our normal notions of progress were suspended during the revolutions and applied only to the normal puzzle-solving phase of science. Popperianism, on the contrary, ascribed progress to a process of conjecture and refutation in which ever broader theories developed to account for the failures of previous conjectures, in an unending process.

Kuhnianism, in the lens of Fuller’s project in Social Epistemology, was itself a kind of normative epistemology, which said “don’t dispute fundamentals until the sad day comes when one must.” Fuller’s instincts were always with Popper on this point: authoritarian consensus has no place in science for either of them. But Fuller provided a tertium quid, which had the effect of upending the whole conflict. He took over the idea of the social construction of reality and gave it a normative and collective or policy interpretation. We make knowledge. There is no knowledge that we do not create.

The creation is a “social” activity, as the social constructivists claimed. But this sociality itself needed to be governed by a sense of responsibility for these acts of creation, and because they were social, this meant by a “policy.” What this policy should be was not clear: no one had connected the notion of construction to the notion of responsibility in this way. But it was a clear implication of the idea of knowledge as a product of making. Making implies a responsibility for the consequences of making.

Dangers of Acknowledging Our Making

This was a step that few people were willing to take. Traditional epistemology was passive. Theory choice was choice between the theories that were presented to the passive chooser. The choices could be made on purely epistemic grounds. There was no consideration of responsibility, because the choices were an end point, a matter of scientific aesthetics, with no further consequences. Fuller, as Remedios and Dusek point out, rejects this passivity, a rejection that grows directly out of his appropriation of constructivism.

From a “making” or active epistemic perspective, Kuhnianism is an abdication of responsibility, and a policy of passivity. But Fuller also sees that overcoming the passivity Kuhn describes as the normal state of science requires an alternative policy, one which enables the knowledge that is in fact “made,” but presented as given, to be challenged. This is a condition of acknowledging responsibility for what is made.

There is, however, an oddity in talking about responsibility in relation to collective knowledge producing, which arises because we don’t know in advance where the project of knowledge production will lead. I think of this by analogy to the debate between Malthus and Marx. If one accepts the static assumptions of Malthus, his predictions are valid. Marx made the productivist argument that with every newborn mouth came two hands. He would have been better to argue that with every mouth came a knowledge-making brain, because improvements in food production technology enabled the support of much larger populations, more technology, and so forth—something Malthus did not consider and indeed could not have. That knowledge was in the future.

Fuller’s alternative grasps this point: utilitarian considerations from present static assumptions can’t provide a basis for thinking about responsibility or policy. We need to let knowledge production proceed regardless of what we think are the consequences, which is necessarily thinking based on static assumptions about knowledge itself. Put differently, we need to value knowledge in itself, because our future is itself made through the making of knowledge.

“Making” or “constructing” is more than a cute metaphor. Fuller shows that there is a tradition in science itself of thinking about design, both in the sense of making new things as a form of discovery, and in the sense of reverse engineering that which exists in order to see how it works. This leads him to the controversial waters of intelligent design, in which the world itself is understood as, at least potentially, the product of design. It also takes us to some metaphysics about humans, human agency, and the social character of human agency.

One can separate some of these considerations from Fuller’s larger project, but they are natural concomitants, and they resolve some basic issues with the original project. The project of constructivism requires a philosophical anthropology. Fuller provides this with an account of the special character of human agency: as knowledge makers, humans are God-like or participate in the mind of God. If there is a God, a super-agent, it will also be a maker and knowledge maker, not in the passive but in the active sense. In participating in the mind of God, we participate in this making.

“Shall We Not Ourselves Have to Become Gods?”

This picture has further implications: if we are already God-like in this respect, we can remake ourselves in God-like ways. To renounce these powers is as much of a choice as using them. But it is difficult for the renouncers to draw a line on what to renounce. Just transhumanism? Or race-related research? Or what else? Fuller rejects renunciation of the pursuit of knowledge and the pursuit of making the world. The issue is the same as the issue between Marx and Malthus. The renouncers base their renunciation on static models. They estimate risks on the basis of what is and what is known now. But these are both things that we can change. This is why Fuller proposes a “proactionary” rather than a precautionary stance and supports underwriting risk-taking in the pursuit of scientific advance.

There is, however, a problem with the “social” and policy aspect of scientific advance. On the one hand, science benefits humankind. On the other, it is an elite, even a form of Gnosticism. Fuller’s democratic impulse resists this. But his desire for the full use of human power implies a special role for scientists in remaking humanity and making the decisions that go into this project. This takes us right back to the original impulse for social epistemology: the creation of policy for the creation of knowledge.

This project is inevitably confronted with the Malthus problem: we have to make decisions about the future now, on the basis of static assumptions we have no real alternative to. At best we can hint at future possibilities which will be revealed by future science, and hope that they will work out. As Remedios and Dusek note, Fuller is consistently on the side of expanding human knowledge and power, for risk-taking, and is optimistic about the world that would be created through these powers. He is also highly sensitive to the problem of static assumptions: our utilities will not be the utilities of the creatures of the future we create through science.

What Fuller has done is to create a full-fledged alternative to the conventional wisdom about the science-society relation and the present way of handling risk. The standard view is represented by Philip Kitcher: it wishes to guide knowledge in ways that reflect the values we should have, which includes the suppression of certain kinds of knowledge by scientists acting paternalistically on behalf of society.

This is a rigidly Malthusian way of thinking: the values (in this case a particular kind of egalitarianism that doesn’t include epistemic equality with scientists) are fixed, the scientists’ ideas of the negative consequences of something like research on “racial” differences are taken to be valid, and policy should be made in accordance with the same suppression of knowledge. Risk aversion, especially in response to certain values, becomes the guiding “policy” of science.

Fuller’s alternative preserves some basic intuitions: that science advances by risk-taking, and by sometimes failing, in the manner of Popper’s conjectures and refutations. This requires the management of science, but management that ensures openness in science, supports innovation, and now and then supports concerted efforts to challenge consensuses. It also requires us to bracket our static assumptions about values, limits, risks, and so forth, not so much to ignore these things but to relativize them to the present, so that we can leave open the future. The conventional view trades heavily on the problem of values, and the potential conflicts between epistemic values and other kinds of values. Fuller sees this as a problem of thinking in terms of the present: in the long run these conflicts vanish.

This end point explains some of the apparent oddities of Fuller’s enthusiasms and dislikes. He prefers the Logical Positivists to the model-oriented philosophy of science of the present: laws are genuinely universal; models are built by assuming present knowledge and thus share Malthus’s problem. He is skeptical about science done to support policy, for the same reason. And he is skeptical about ecologism as well, which is deeply committed to acting on static assumptions.

The Rewards of the Test

Fuller’s work stands the test of reflexivity: he is as committed to challenging consensuses and taking risks as he exhorts others to be. And for the most part, it works: it is an old Popperian point that a theory can be tested only through comparison with strong alternatives; otherwise it will simply pile up inductive support, blind to what it is failing to account for. But as Fuller would note, there is another issue of reflexivity here, and it comes at the level of the organization of knowledge. To have conjectures and refutations one must have partners who respond. In the consensus-driven world of professional philosophy today, this does not happen. And that is a tragedy. It also makes Fuller’s point: that the community of inquirers needs to be managed.

It is also a tragedy that there are not more Fullers. Constructing a comprehensive response to major issues and carrying it through many topics and many related issues, as people like John Dewey once did, is an arduous task, but a rewarding one. It is a mark of how much the “professionalization” of philosophy has done to alter the way philosophers think and write. This is a topic that is too large for a book review, but it is one that deserves serious reflection. Fuller raises the question by looking at science as a public good and asking how a university should be organized to maximize its value. Perhaps this makes sense for science, given that science is a money loser for universities, but at the same time its main claim on the public purse. For philosophy, we need to ask different questions. Perhaps the much talked about crisis of the humanities will bring about such a conversation. If it does, it is thinking like Fuller’s that will spark the discussion.

Contact details: turner@usf.edu

References

Remedios, Francis X., and Val Dusek. Knowing Humanity in the Social World: The Path of Steve Fuller’s Social Epistemology. New York: Palgrave Macmillan, 2018.

Author Information: Eric Kerr, National University of Singapore, erictkerr@gmail.com

Kerr, Eric. “A Hermeneutic of Non-Western Philosophy.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 1-6.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3VV


Image by Güldem Üstün via Flickr / Creative Commons

 

Professional philosophy, not for the first time, finds itself in crisis. When public intellectuals like Stephen Hawking, Lawrence Krauss, Sam Harris, Bill Nye, and Neil deGrasse Tyson (to list some Anglophonic examples) proclaim their support for science, it is through a disavowal of philosophy. When politicians reach for an example within the academy worthy of derision, they often find it in the footnotes to Plato. Bryan Van Norden centres one chapter of Taking Back Philosophy on the anti-intellectual and ungrammatical comment by US politician Marco Rubio that “We need more welders and less philosophers.” Although Rubio later repented, commenting approvingly on Stoicism, the school of thought that has recently been appropriated by Silicon Valley entrepreneurs, the message stuck.[1]

Two Contexts

As the Stoics would say, we’ve been here before. Richard Feynman, perhaps apocryphally, bowdlerized Barnett Newman’s quip that “aesthetics is to artists what ornithology is to birds,” proclaiming that “philosophy of science is about as useful to scientists as ornithology is to birds.” A surly philosopher might respond that the views on philosophy of a scientist with no philosophical training are about as useful to philosophy as a bird’s view on ornithology. Or, more charitably, one might point out that ornithology is actually quite useful, even if the birds themselves are not interested in it, and that birds, sometimes, do benefit from our better understanding of their condition.

However, according to some accounts, philosophers within this ivory aviary frequently make themselves unemployed. “Philosophy” historically has referred simply to any body of knowledge or the whole of it.[2] As our understanding of a particular domain grows, and we develop empirical means to study it further, it gets lopped off the philosophical tree and becomes, say, psychology or computer science. While we may quibble with the accuracy of this potted history, it does capture the perspective of many that the discipline of philosophy is especially endangered and perhaps particularly deserving of conservation.

Despite this, perhaps those most guilty of charging philosophy with lacking utility have been philosophers themselves, whether through the pragmatic admonishments of Karl Marx (1888) and Richard Rorty (Kerr and Carter 2016) or through the internecine narcissism of small differences between rival philosophers and schools of thought.

This is, in part, the context out of which Jay Garfield and Bryan Van Norden wrote an op-ed piece in the New York Times’ Stone column, promoting the inclusion of non-Western philosophy in US departments.[3] Today, the university is under threat on multiple fronts (Crow and Dabars 2015; Heller 2016), and while humanities faculties often take the brunt of the attack, philosophers can feel themselves particularly isolated when departments are threatened with closure or shrunk.[4]

Garfield and Van Norden’s central contention was that philosophy departments in the US should include more non-Western philosophy, both on the faculty and in the curriculum, and that if they cannot do this, then they should be renamed departments of Anglo-European philosophy and perhaps be relocated within area studies. The huge interest and discussion around that article prompted Van Norden to write this manifesto.

The thought that philosophy departments should be renamed departments of European or Western philosophy is not a new one. Today, many universities in China and elsewhere in Asia have departments or research groups for “Western philosophy” where Chinese philosophy and its subdisciplines dominate. In his influential text, Asia as Method, Kuan-Hsing Chen argued that, if area studies is to mean anything, it should apply equally to scholars in Asia producing “Asian studies” as to scholars in Europe:

If “we” have been doing Asian studies, Europeans, North Americans, Latin Americans, and Africans have also been doing studies in relation to their own living spaces. That is, Martin Heidegger was actually doing European studies, as were Michel Foucault, Pierre Bourdieu, and Jürgen Habermas. European experiences were their system of reference. Once we recognize how extremely limited the current conditions of knowledge are, we learn to be humble about our knowledge claims. The universalist assertions of theory are premature, for theory too must be deimperialized. (Chen, p. 3)

Taking Back Philosophy is peppered with historical examples showing that Chinese philosophy, Van Norden’s area of expertise, meets whatever standards one may set for “real philosophy”. Having these examples compiled and clearly stated is of great use to anyone putting forth a similar case, and for this alone the book is a worthy addition to one’s library. These examples, piled up liberally one on top of the other, are hard to disagree with, and the contrary opinion has a sordid history.

The litany of philosophers disparaging non-Western philosophy does not need to be detailed here – we all have stories, and Van Norden includes his own in the book. The baldest statement of this type is due to Immanuel Kant, who claimed that “[p]hilosophy is not to be found in the whole Orient,” but one can find equally strong claims made among colonial administrators, early anthropologists, historians, educators, missionaries, and civil servants.[5] Without wishing to recount that history, the most egregious example that resonates in my mind was spoken by the British Ambassador to Thailand from 1965 to 1967, Sir Anthony Rumbold:

[Thailand has] no literature, no painting and hideous interior decoration. Nobody can deny that gambling and golf are the chief pleasures of the rich, and that licentiousness is the main pleasure of them all.

Taking Back Social Epistemology

Van Norden’s book wrestles with, and finds its resonant anger in, these two histories: one in which professional philosophy is isolated, and isolates itself, from the rest of the academy and the wider “marketplace of ideas,” and one in which subaltern and non-Western histories and perspectives are marginalized within philosophy. Since this is a journal of social epistemology, I’d like to return to a similar debate from the late 1990s and early 2000s, spearheaded by James Maffie under the banner of ethno-epistemology.

Maffie’s bêtes noires were not primarily institutional so much as conceptual – he thought that epistemological inquiry was hampered by an ignorance of the gamut of epistemological thinking that has taken place outside of the Western world (2001, 2009). Maffie’s concern was primarily with Aztec (Mexica) philosophy and with indigenous philosophies of the Americas (see also Burkhart 2003), although similar comparative epistemologies have been done by others (e.g. Dasti 2012; Hallen and Sodipo 1986; Hamminga 2005).

Broadly, the charge was that epistemology is and has been ethnocentric. It has hidden its own cultural biases within supposedly general claims. Given that knowledge is social, the claim that it is universal across cultures would be in need of weighty justification (Stich 1990). That Dharmottara and Roderick Chisholm derived seemingly similar conclusions from seemingly similar thought experiments is not quite enough (Kerr 2015, forthcoming). Translation is the elephant in the room being described by several different people.[6] Language changes, of course, as do its meanings.

In ancient China, Tao had only the non-metaphorical sense of a road or pathway. It took up the first of its many abstract meanings in the Analects of Confucius. Similarly, in ancient Greece, logos had many non-metaphorical meanings before Heraclitus gave it a philosophical one (Guthrie, 1961-1982: 1:124-126, 420-434). For epistemology, just take the word ‘know’ as an example. Contemporary philosophy departments in the English-speaking world, or at least the epistemologists therein, focus on the English word ‘know’ and draw conclusions from that source. To think that such conclusions would generalize beyond the English-speaking world sounds parochial.

Reading Taking Back Philosophy alongside Maffie’s work is instructive. The borders of philosophy are as subject to history, and to boundary work by other scholars, as those of any other discipline, and we should also be aware of the implications of Taking Back Philosophy’s conclusions beyond “professional” philosophy, which may extend the proper body of knowledge to so-called “folk epistemologies”. The term “professional philosophy” restricts the object of our attention to a very recent portion of history and to a particular class and identity (Taking Back Philosophy also argues forcefully for the diversification of philosophers as well as philosophies). How do we make sure that the dissident voices, so crucial to the history of philosophy throughout the world, are accorded a proper hearing in this call for pluralism?

Mending Wall

At times, Taking Back Philosophy is strikingly polemical. Van Norden compares philosophers who “click their tongues” about “real philosophy” to Donald Trump and Ronald Reagan. All, he says, are in the business of building walls, in constructing tribalism and us-versus-them mentalities. Indeed, the title itself is reminiscent of Brexit’s mantra, “Taking Back Control.” It’s unlikely that Van Norden and the Brexit proponents would have much in common politically, so it may be a coincidence of powerful sloganeering. Van Norden is a thoroughgoing pluralist: he wants to “walk side by side with Aristotle through the sacred grounds of the Lyceum … [and to] … ‘follow the path of questioning and learning’ with Zhu Xi.” (p. 159)

Where choices do have to be made for financial reasons, they would have to be made anyway since no department has space for every subdiscipline of philosophy and, analogously, we might say that no mind has space for every text that should be read.[7] Social epistemology has itself been the target of this kind of boundary work. Alvin Goldman, for example, dismisses much of it as not “real epistemology”. (2010)

As can probably be gleaned from the descriptions above, Taking Back Philosophy is also heavily invested in American politics and generally follows a US-centric slant. Within its short frame, Taking Back Philosophy draws in political debates that are live in today’s United States on diversity, identity, graduate pay, and the politicization and neoliberalization of the American model of the university. Many of these issues, no doubt, are functions of globalization, but another book, one which took back philosophy from outside of the US, would be a useful complement.

The final chapter contains an uplifting case for broad-mindedness in academic philosophy. Van Norden describes philosophy as one of the few humanities disciplines that employ a “hermeneutic of faith”, meaning that old texts are read in the hope that one might discover something true, as opposed to the “hermeneutic of suspicion” often followed in other humanities and social science disciplines, which emphasizes the “motives for the composition of a text that are unrelated to its truth or plausibility.” (p. 139) “[Philosophy is] open to the possibility that other people, including people in very different times and cultures, might know more about these things than we do, or at least they might have views that can enrich our own in some way.” (p. 139) The problem, he contends, is that the people “in very different times and cultures” are narrowly drawn in today’s departments.

Although Taking Back Philosophy ends with the injunction – Let’s discuss it… – one suspects that after the ellipses should be a tired “again” since van Norden, and others, have been arguing the case for some time. Philosophers in Europe were, at different times, more or less fascinated with their non-Western contemporaries, often tracking geopolitical shifts. What is going to make the difference this time? Perhaps the discussion could begin again by taking up his hermeneutic distinction and asking: can we preserve faith while being duly suspicious?

Contact details: erictkerr@gmail.com

References

Alatas, S.H. 2010. The Myth of the Lazy Native: A Study of the Image of the Malays, Filipinos and Javanese from the 16th to the 20th Century and its Function in the Ideology of Colonial Capitalism. Routledge.

Burkhart, B.Y. 2003. What Coyote and Thales can Teach Us: An Outline of American Indian Epistemology. In A. Waters (Ed.) American Indian Thought. Wiley-Blackwell: 15-26.

Chen, Kuan-Hsing. 2010. Asia as Method. Duke University Press.

Collins, R. 2000. The Sociology of Philosophies: A Global Theory of Intellectual Change. Harvard University Press.

Crow, M.M. and W.B. Dabars. 2015. Designing the New American University. Johns Hopkins University Press.

Dasti, M.R. 2012. Parasitism and Disjunctivism in Nyaya Epistemology. Philosophy East and West 62(1): 1-15.

Fanon, F. 1952 [2008]. Black Skin, White Masks, trans. R. Philcox. New York: Grove Press.

Goldman, A. 2010. Why Social Epistemology is Real Epistemology. In A. Haddock, A. Millar and D. Pritchard (Eds.), Social Epistemology. Oxford University Press: 1-29.

Goldstein, E.B. 2010. Encyclopedia of Perception. SAGE.

Guthrie, W.K.C. 1961 [1982]. A History of Greek Philosophy. Cambridge: Cambridge University Press.

Hallen, B. and J.O. Sodipo. 1986. Knowledge, Belief, and Witchcraft. London: Ethnographica.

Hamminga, B. 2005. Epistemology from the African Point of View. Poznan Studies in the Philosophy of the Sciences and the Humanities 88(1): 57-84.

Heller, H. 2016. The Capitalist University: The Transformations of Higher Education in the United States, 1945-2016. Pluto Press.

Kerr, E. 2015. Epistemological Experiments in Cross-Cultural Contexts. Asia Research Institute Working Paper Series 223: 1-27.

Kerr, E. forthcoming. Cross-Cultural Epistemology. In P. Graham, M. Fricker, D. Henderson, and N. Pedersen (Eds.) Routledge Handbook of Social Epistemology.

Kerr, E. and J.A. Carter. 2016. Richard Rorty and Epistemic Normativity. Social Epistemology 30(1): 3-24.

Maffie, J. 2001. Alternative Epistemologies and the Value of Truth. Social Epistemology 14: 247-257.

Maffie, J. 2009. ‘In the End, We have the Gatling Gun, And they have not’: Future Prospects for Indigenous Knowledges. Futures: The Journal of Policy, Planning, and Futures Studies 41: 53-65.

Marx, K. 1888. Theses on Feuerbach. Appendix to Ludwig Feuerbach and the End of Classical German Philosophy. Retrieved from https://www.marxists.org/archive/marx/works/1845/theses/theses.htm

Said, E. 1979. Orientalism. New York: Vintage.

[1] Goldhill, O. “Marco Rubio Admits he was Wrong… About Philosophy.” Quartz, 30 March 2018. Retrieved from https://qz.com/1241203/marco-rubio-admits-he-was-wrong-about-philosophy/amp/.

[2] Philosophy. Online Etymology Dictionary. Retrieved from https://www.etymonline.com/word/philosophy.

[3] Garfield, J.L. and B.W. Van Norden. “If Philosophy Won’t Diversify, Let’s Call it What it Really Is.” New York Times, 11 May 2016. Retrieved from https://www.nytimes.com/2016/05/11/opinion/if-philosophy-wont-diversify-lets-call-it-what-it-really-is.html.

[4] See, e.g., N. Power. “A Blow to Philosophy, and Minorities.” The Guardian, 29 April 2010. Retrieved from https://www.theguardian.com/commentisfree/2010/apr/29/philosophy-minorities-middleqsex-university-logic. Weinberg, J. “Serious Cuts and Stark Choices at Aberdeen.” Daily Nous, 27 March 2015. Retrieved from http://dailynous.com/2015/03/27/serious-cuts-and-stark-choices-at-aberdeen/.

[5] See e.g., Edward Said’s Orientalism (1979), Frantz Fanon’s Black Skin, White Masks (1952) and, more recently, Syed Alatas’ The Myth of the Lazy Native (2010).

[6] The reader will recall the parable wherein three blind men each describe an elephant through their partial experience (the coarseness and hairiness of the tail, or the snakelike trunk), but none describes it accurately (e.g., in Goldstein 2010, p. 492).

[7] Several people have had the honour of being called the last to have read everything, including Giovanni Pico della Mirandola, who ironically wrote the first printed book to be universally banned by the Catholic Church, and Desiderius Erasmus, after whom a European student exchange programme facilitating cross-cultural learning is named. Curiously, Thomas Babington Macaulay is said to have been the best-read man of his time, and he appears in Jay Garfield’s foreword to Taking Back Philosophy to voice a particularly distasteful and ignorant remark (p. xiv). We can conclude that the privilege of having read widely, or of having a wide syllabus, is not in itself enough for greater understanding.

Author Information: Adam Riggio, SERRC Digital Editor, serrc.digital@gmail.com

Riggio, Adam. “Action in Harmony with a Global World.” Social Epistemology Review and Reply Collective 7, no. 3 (2018): 20-26.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Vp

Image by cornie via Flickr / Creative Commons

 

Bryan Van Norden has become about as notorious as an academic philosopher can be while remaining a virtuous person. His notoriety came with a column in the New York Times that took the still-ethnocentric approach of many North American and European university philosophy departments to task. The condescending and insulting dismissal of great works of thought from cultures and civilizations beyond Europe and European-descended North America should scandalize us. That it does not is to the detriment of academic philosophy’s culture.

Anyone who cares about the future of philosophy as a tradition should read Taking Back Philosophy and take its lessons to heart, if one does not agree already with its purpose. The discipline of philosophy, as practiced in North American and European universities, must incorporate all the philosophical traditions of humanity into its curriculum and its subject matter. It is simple realism.

A Globalized World With No Absolute Hierarchies

I am not going to argue for this decision, because I consider it obvious that this must be done. Taking Back Philosophy is a quick read, an introduction to a political task that philosophers, no matter their institutional homes, must support if the tradition is going to survive beyond the walls of universities increasingly co-opted by destructive economic, management, and human resources policies.

Philosophy as a creative tradition cannot survive in an education economy built on the back of student debt, where institutions’ priorities are set by a management class yoked to capital investors and corporate partners, which prioritizes the proliferation of countless administrative-only positions while highly educated teachers and researchers compete ruthlessly for poverty wages.

With this larger context in mind, Van Norden’s call for the enlargement of departments’ curriculums to cover all traditions is one essential pillar of the vision to liberate philosophy from the institutions that are destroying it as a viable creative process. In total, those four pillars are 1) universal accessibility, economically and physically; 2) community guidance of a university’s priorities; 3) restoring power over the institution to creative and research professionals; and 4) globalizing the scope of education’s content.

Taking Back Philosophy is a substantial brick through the window of the struggle to rebuild our higher education institutions along these democratic and liberating lines. Van Norden regularly publishes work of comparative philosophy that examines many problems of ethics and ontology using texts, arguments, and concepts from Western, Chinese, and Indian philosophy. But if you come to Taking Back Philosophy expecting more than a brick through those windows, you’ll be disappointed. One chapter walks through a number of problems as examples, but the sustained conceptual engagement of a creative philosophical work is absent. Only the call to action remains.

What a slyly provocative call it is – the book’s last sentence, “Let’s discuss it . . .”

Unifying a Tradition of Traditions

I find it difficult to write a conventional review of Taking Back Philosophy, because so much of Van Norden’s polemic is common sense to me. Of course, philosophy departments must be open to primary material from all the traditions of the human world, not just the Western. I am incapable of understanding why anyone would argue against this, given how globalized human civilization is today. For the context of this discussion, I will consider a historical and a technological aspect of contemporary globalization. Respectively, these are the fall of the European military empires, and the incredible intensity with which contemporary communications and travel technology integrates people all over Earth.

We no longer live in a world dominated by European military colonial empires, so re-emerging centres of culture and economics must be taken on their own terms. The Orientalist presumption, which Edward Said spent a career mapping, that there is no serious difference among Japanese, Malay, Chinese, Hindu, Turkic, Turkish, Persian, Arab, Levantine, or Maghreb cultures is not only wrong, but outright stupid. Orientalism as an academic discipline thrived for the centuries it did only because European weaponry intentionally and persistently kept those cultures from asserting themselves.

Indigenous peoples – throughout the Americas, Australia, the Pacific, and Africa – who have been the targets of cultural and eradicative genocides for centuries now claim and agitate for their human rights, as well as inclusion in the broader human community and species. I believe most people of conscience are appalled and depressed that these claims are controversial at all, and even seen by some as a sign of civilizational decline.

The impact of contemporary technology I consider an even more important factor than the end of imperialist colonialism in the imperative to globalize the philosophical tradition. Despite the popular rhetoric of contemporary globalization, the human world has been globalized for millennia. Virtually since urban life first developed, long-distance international trade and communication began as well.

Here are some examples. Some of the first major cities of ancient Babylon achieved their greatest economic prosperity through trade with cities on the south of the Arabian Peninsula, and as far east along the Indian Ocean coast as Balochistan. From 4000 to 1000 years ago, Egyptian, Roman, Greek, Persian, Arab, Chinese, Mongol, Indian, Bantu, Malian, Inca, and Anishinaabeg peoples, among others, built trade networks and institutions stretching across continents.

Contemporary globalization is different in the speed and quantity of commerce, and diversity of goods. It is now possible to reach the opposite side of the planet in a day’s travel, a journey so ordinary that tens of millions of people take these flights each year. Real-time communication is now possible between anywhere on Earth with broadband internet connections thanks to satellite networks and undersea fibre-optic cables. In 2015, the total material value of all goods and commercial services traded internationally was US$21-trillion. That’s a drop from the previous year’s all-time (literally) high of US$24-trillion.[1]

Travel, communication, and productivity have never been so massive or intense in all of human history. The major control hubs of the global economy are no longer centralized in a small set of colonial powers, but spread across a variety of economic centres throughout the world, depending on industry: from Beijing, Moscow, Mumbai, Lagos, and Berlin to Tokyo and Washington, the oil fields of Kansas, the Dakotas, Alberta, and Iraq, and the coltan, titanium, and tantalum mines of Congo, Kazakhstan, and China.

All these proliferating lists express a simple truth – all cultures of the world now legitimately claim recognition as equals, as human communities sharing our Earth as we hollow it out. Philosophical traditions from all over the world are components of those claims to equal recognition.

The Tradition of Process Thought

So that is the situation forcing a recalcitrant and reactionary academy to widen its curricular horizons: do so, or face irrelevancy in a global civilization with multiple centres all standing as civic equals in the human community. This is where Van Norden himself leaves us. Thankfully, he understands that a polemic ending with a precise program immediately becomes empty dogma, a conclusion which taints the plausibility of an argument. His point is simple: the academic discipline must open its arms. He leaves open the more complex questions of how the philosophical tradition itself can develop into a genuinely global community.

Process philosophy is a relatively new philosophical tradition, which can adopt the classics of Daoist philosophy as broad frameworks and guides. By process philosophy, I mean the research community that has grown around Gilles Deleuze and Félix Guattari as primary innovators of their model of thought – a process philosophy that converges with an ecological post-humanism. The following are some essential aspects of this new school of process thinking, each principle in accord with the core concepts of the foundational texts of Daoism, Dao De Jing and Zhuang Zi.

Ecological post-humanist process philosophy is a thorough materialism, but it is an anti-reductive materialism. All that exists is bodies of matter and fields of force, whose potentials include everything for which Western philosophers have often felt obligated to postulate a separate substance over and above matter, whether calling it mind, spirit, or soul.

As process philosophy, the emphasis in any ontological analysis is on movement, change, and relationships instead of the more traditional Western focus on identity and sufficiency. If I can refer to examples from the beginning of Western philosophy in Greece, process thought is an underground movement with the voice of Heraclitus critiquing a mainstream with the voice of Parmenides. Becoming, not being, is the primary focus of ontological analysis.

Process thinking therefore is primarily concerned with potential and capacity. Knowledge, in process philosophy, as a result becomes inextricably bound with action. This unites a philosophical school identified as “Continental” in common-sense categories of academic disciplines with the concerns of pragmatist philosophy. Analytic philosophy took up many concepts from early 20th century pragmatism in the decades following the death of John Dewey. These inheritors, however, remained unable to overcome the paradoxes stymieing traditional pragmatist approaches, particularly how to reconcile truth as correspondence with knowledge having a purpose in action and achievement.

A solution to this problem of knowledge and action was developed in the works of Barry Allen during the 2000s. Allen built an account of perception rooted in contemporary research in animal behaviour, human neurology, and the theoretical interpretations of evolution in the works of Stephen Jay Gould and Richard Lewontin.

His first analysis, focussed as it was on the dynamics of how human knowledge spurs technological and civilizational development, remains humanistic. Arguing from discoveries of how profoundly the plastic human brain is shaped in childhood by environmental interaction, Allen concludes that successful or productive worldly action itself constitutes the correspondence of our knowledge and the world. Knowledge does not consist of a private reserve of information that mirrors worldly states of affairs, but the physical and mental interaction of a person with surrounding processes and bodies to constitute those states of affairs. The plasticity of the human brain and our powers of social coordination are responsible for the peculiarly human mode of civilizational technology, but the same power to constitute states of affairs through activity is common to all processes and bodies.[2]

“Water is fluid, soft, and yielding. But water will wear away rock, which is rigid and cannot yield. Whatever is soft, fluid, and yielding will overcome whatever is rigid and hard.” – Lao Zi
The Burney Falls in Shasta County, Northern California. Image by melfoody via Flickr / Creative Commons

 

Action in Phase With All Processes: Wu Wei

Movement of interaction constitutes the world. This is the core principle of pragmatist process philosophy, and as such brings this school of thought into accord with the Daoist tradition. Ontological analysis in the Dao De Jing is entirely focussed on vectors of becoming – understanding the world in terms of its changes, movements, and flows, as each of these processes integrate in the complexity of states of affairs.

Not only is the Dao De Jing a foundational text in what is primarily a process tradition of philosophy, but it is also primarily pragmatist. Its author Lao Zi frames ontological arguments in practical concerns, as when he writes, “The most supple things in the world ride roughshod over the most rigid” (Dao De Jing §43). This is a practical and ethical argument against a Parmenidean conception of identity requiring stability as a necessary condition.

What cannot change cannot continue to exist, as the turbulence of existence will overcome and erase what can exist only by never adapting to the pressures of overwhelming external forces. What can only exist by being what it now is, will eventually cease to be. That which exists in metamorphosis and transformation has a remarkable resilience, because it is able to gain power from the world’s changes. This Daoist principle, articulated in such abstract terms, is in Deleuze and Guattari’s work the interplay of the varieties of territorializations.

Knowledge in the Chinese tradition, as a concept, is determined by an ideal of achieving harmonious interaction with an actor’s environment. Knowing facts of states of affairs – including their relationships and tendencies to spontaneous and proliferating change – is an important element of comprehensive knowledge. Nonetheless, Lao Zi describes such catalogue-friendly factual knowledge as, “Those who know are not full of knowledge. Those full of knowledge do not know” (Dao De Jing §81). Knowing the facts alone is profoundly inadequate to knowing how those facts constrict and open potentials for action. Perfectly harmonious action is the model of the Daoist concept of Wu Wei – knowledge of the causal connections among all the bodies and processes constituting the world’s territories understood profoundly enough that self-conscious thought about them becomes unnecessary.[3]

Factual knowledge is only a condition of achieving the purpose of knowledge: perfectly adapting your actions to the changes of the world. All organisms’ actions change their environments, creating physically distinctive territories: places that, were it not for my action, would be different. In contrast to the dualistic Western concept of nature, the world in Daoist thought is a complex field of overlapping territories whose tensions and conflicts shape the character of places. Fulfilled knowledge in this ontological context is knowledge that directly conditions your own actions and the character of your territory to harmonize most productively with the actions and territories that are always flowing around your own.

Politics of the Harmonious Life

The Western tradition, especially in its current sub-disciplinary divisions of concepts and discourses, has treated problems of knowledge as a domain separate from ethics, morality, politics, and fundamental ontology. Social epistemology is one field of the transdisciplinary humanities that unites knowledge with political concerns, but its approaches remain controversial in much of the conservative mainstream academy. The Chinese tradition has fundamentally united knowledge, moral philosophy, and all fields of politics, especially political economy, since the popular eruption of Daoist thought in the Warring States period 2300 years ago. Philosophical writing throughout eastern Asia has operated in this field of thought ever since.

As such, Dao-influenced philosophy has much to offer contemporary progressive political thought, especially the new communitarianism of contemporary social movements with their roots in Indigenous decolonization, advocacy for racial, sexual, and gender liberation, and 21st century socialist advocacy against radical economic inequality. In terms of philosophical tools and concepts for understanding and action, these movements have dense forebears, but a recent tradition.

The movement for economic equality and a just globalization draws on Antonio Gramsci’s introduction of radical historical contingency to the marxist tradition. The explicit resources of contemporary feminism are likewise a century-old storehouse of discourse, while its phenomenological and testimonial principles and concepts are extremely powerful and viscerally rooted in the lived experience of subordinated – what Deleuze and Guattari called minoritarian – people as groups and individuals. Indigenous liberation traditions draw from a variety of philosophical traditions lasting millennia, but the ongoing systematic and systematizing revival is almost entirely a 21st century practice.

Antonio Negri, Rosi Braidotti, and Isabelle Stengers’ masterworks unite an analysis of humanity’s destructive technological and ecological transformation of Earth and ourselves to develop a solution to those problems rooted in communitarian moralities and politics of seeking harmony while optimizing personal and social freedom. Daoism offers literally thousands of years of work in the most abstract metaphysics on the nature of freedom in harmony and flexibility in adaptation to contingency. Such conceptual resources are of immense value to these and related philosophical currents that are only just beginning to form explicitly in notable size in the Western tradition.

Van Norden has written a book that is, for philosophy as a university discipline, a wake-up call to this obstinate branch of the Western academy. The world around you is changing, and if you hold so fast to the contingent borders of your tradition, your territory will be overwritten, trampled, torn to bits. Live and act harmoniously with the changes that are coming. Change yourself.

It isn’t so hard to read some Lao Zi for a start.

Contact details: serrc.digital@gmail.com

References

Allen, Barry. Knowledge and Civilization. Boulder, Colorado: Westview Press, 2004.

Allen, Barry. Striking Beauty: A Philosophical Look at the Asian Martial Arts. New York: Columbia University Press, 2015.

Allen, Barry. Vanishing Into Things: Knowledge in Chinese Tradition. Cambridge: Harvard University Press, 2015.

Bennett, Jane. Vibrant Matter: A Political Ecology of Things. Durham: Duke University Press, 2010.

Betasamosake Simpson, Leanne. As We Have Always Done: Indigenous Freedom Through Radical Resistance. Minneapolis: University of Minnesota Press, 2017.

Bogost, Ian. Alien Phenomenology, Or What It’s Like to Be a Thing. Minneapolis: Minnesota University Press, 2012.

Braidotti, Rosi. The Posthuman. Cambridge: Polity Press, 2013.

Deleuze, Gilles. Bergsonism. Translated by Hugh Tomlinson and Barbara Habberjam. New York: Zone Books, 1988.

Chew, Sing C. World Ecological Degradation: Accumulation, Urbanization, and Deforestation, 3000 B.C. – A.D. 2000. Walnut Creek: Altamira Press, 2001.

Negri, Antonio, and Michael Hardt. Assembly. New York: Oxford University Press, 2017.

Parikka, Jussi. A Geology of Media. Minneapolis: University of Minnesota Press, 2015.

Riggio, Adam. Ecology, Ethics, and the Future of Humanity. New York: Palgrave MacMillan, 2015.

Stengers, Isabelle. Cosmopolitics I. Translated by Robert Bononno. Minneapolis: Minnesota University Press, 2010.

Stengers, Isabelle. Cosmopolitics II. Translated by Robert Bononno. Minneapolis: Minnesota University Press, 2011.

Van Norden, Bryan. Taking Back Philosophy: A Multicultural Manifesto. New York: Columbia University Press, 2017.

World Trade Organization. World Trade Statistical Review 2016. Retrieved from https://www.wto.org/english/res_e/statis_e/wts2016_e/wts2016_e.pdf

[1] That US$3-trillion drop in trade was largely the proliferating effect of the sudden price drop of human civilization’s most essential good, crude oil, to just less than half of its 2014 value.

[2] A student of Allen’s arrived at this conclusion in combining his scientific pragmatism with the French process ontology of Deleuze and Guattari in the context of ecological problems and eco-philosophical thinking.

[3] This concept of knowledge as perfectly harmonious but non-self-conscious action also conforms to Henri Bergson’s concept of intuition, the highest (so far) form of knowledge that unites the perfect harmony in action of brute animal instinct with the self-reflective and systematizing power of human understanding. This is a productive way for another creative contemporary philosophical path – the union of vitalist and materialist ideas in the work of thinkers like Jane Bennett – to connect with Asian philosophical traditions for centuries of philosophical resources on which to draw. But that’s a matter for another essay.

Author Information: Paul R. Smart, University of Southampton, ps02v@ecs.soton.ac.uk

Smart, Paul R. “(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 45-55.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uq

Image by BTC Keychain via Flickr / Creative Commons

 

Richard Heersmink’s (2018) article, A virtue epistemology of the Internet: Search engines, intellectual virtues, and education, provides an important and timely analysis of the Internet from the standpoint of virtue epistemology.[1] According to Richard, the Internet is an important epistemic resource, but it is one that comes with a range of epistemic hazards. Such hazards, he suggests, motivate a consideration of the ways in which individuals should interact with the Internet.

In particular, Richard appeals to a specific branch of virtue epistemology, known as virtue responsibilism, arguing that certain kinds of cognitive trait (e.g., curiosity and open-mindedness) are useful in helping us press maximal epistemic benefit from the Internet. Given the utility of such traits, coupled with the epistemic importance of the Internet, Richard suggests that educational policy should be adapted so as to equip would-be knowers with the cognitive wherewithal to cope with the epistemic challenges thrown up by the online environment.

There is, no doubt, something right about all this. Few would disagree with the claim that a certain level of discernment and discrimination is important when it comes to the evaluation of online content. Whether such ‘virtues’ are best understood from the perspective of virtue responsibilism or virtue reliabilism is, I think, a moot point, for I suspect that in the case of both virtue responsibilism and virtue reliabilism what matters is the way in which belief-forming informational circuits are subject to active configuration by processes that may be broadly construed as metacognitive in nature (Smart, in press-a). That, however, is a minor quibble, and it is one that is of little consequence to the issues raised in Richard’s paper.

For the most part, then, I find myself in agreement with many of the assumptions that motivate the target article. I agree that the Internet is an important epistemic resource that is unprecedented in terms of its scale, scope, and accessibility. I also agree that, at the present time, the Internet is far from an epistemically safe environment, and this raises issues regarding the epistemic standing of individual Internet users. In particular, it looks unlikely that the indiscriminate selection and endorsement of online information will do much to bolster one’s epistemic credentials.

We thus encounter something of a dilemma: As an epistemic resource, the Internet stands poised to elevate our epistemic standing, but as an open and public space the Internet provides ample opportunities for our doxastic systems to be led astray. The result is that we are obliged to divide the online informational cornucopia into a treasure trove of genuine facts and a ragbag collection of ‘false facts’ and ‘fake news.’ The information superhighway, it seems, promises to expand our epistemic power and potential, but the road ahead is one that is fraught with a dizzying array of epistemic perils, problems, and pitfalls. What ought we to do in response to such a situation?

It is at this point that I suspect my own views start to diverge from those of the target article. Richard’s response to the dilemma is to focus attention on the individual agent and consider the ways in which an agent’s cognitive character can be adapted to meet the challenges of the Internet. My own approach is somewhat different. It is born of three kinds of doubt: doubts about the feasibility (although not the value) of virtue-oriented educational policies, doubts about the basic validity of virtue theoretic conceptions of knowledge, and doubts about whether the aforementioned dilemma is best resolved by attempting to change the agent as opposed to the environment in which the agent is embedded. As always, space is limited and life is short, so I will restrict my discussion to issues that I deem to be of greatest interest to the epistemological community.

Reliable Technology

Inasmuch as intellectual virtues are required for online knowledge—i.e., knowledge that we possess as a result of our interactions and engagements with the Internet—they are surely only part of a much broader (and richer) story that includes details about the environment in which our cognitive systems operate. In judging the role of intellectual virtue in shielding us from the epistemic hazards of the online environment, it therefore seems important to have some understanding of the actual technologies we interact with.

This is important because it helps us understand the kinds of intellectual virtue that might be required, as well as the efficacy of specific intellectual virtues in helping us believe the truth (and thus working as virtues in the first place). Internet technologies are, of course, many and varied, and it will not be possible to assess their general relevance to epistemological debates in the present commentary. For the sake of brevity, I will therefore restrict my attention to one particular technology: blockchain.

Blockchain is perhaps best known for its role in supporting the digital cryptocurrency, Bitcoin. It provides us with a means of storing data in a secure fashion, using a combination of data encryption and data linking techniques. For present purposes, we can think of a blockchain as a connected set of data records (or data blocks), each of which contains some body of encrypted data. In the case of Bitcoin, of course, the data blocks contain data of a particular kind, namely, data pertaining to financial transactions. But this is not the only kind of data that can be stored in a blockchain. In fact, blockchains can be used to store information about pretty much anything. This includes online voting records, news reports, sensor readings, personal health records, and so on.

Once data is recorded inside a blockchain, it is very difficult to modify. In essence, the data stored within a blockchain is immutable, in the sense that it cannot be changed without ‘breaking the chain’ of data blocks, and thereby invalidating the data contained within the blockchain. This property makes blockchains of considerable epistemic significance, because it speaks to some of the issues (e.g., concerns about data tampering and malign forms of information manipulation) that are likely to animate epistemological debates in this area.
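The ‘chain-breaking’ property described above can be made concrete with a minimal sketch. (This is, to be clear, an illustration of the general hash-linking idea, not the actual Bitcoin protocol; the record contents and helper names are my own invention.)

```python
import hashlib

def block_hash(data: str, prev_hash: str) -> str:
    """Hash a block's contents together with the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode("utf-8")).hexdigest()

def build_chain(records):
    """Link records into a chain: each block stores its predecessor's hash."""
    chain, prev = [], "0" * 64  # placeholder hash for the 'genesis' block
    for data in records:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; an altered block invalidates all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(block["data"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["reading: 14.1C", "reading: 14.3C", "reading: 14.2C"])
assert verify_chain(chain)

chain[1]["data"] = "reading: 13.0C"  # tamper with a middle block
assert not verify_chain(chain)       # the chain no longer verifies
```

Because each block’s hash depends on its predecessor’s hash, retrospectively editing any one record forces the recomputation of every subsequent block—which is what makes undetected tampering so difficult in practice.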

This does not mean, of course, that the information stored within a blockchain is guaranteed to be factually correct, in the sense of being true and thus yielding improvements in epistemic standing. Nevertheless, there are, I think, reasons to regard blockchain as an important technology relative to efforts to make the online environment a somewhat safer place for would-be knowers. Consider, for example, the title of the present article. Suppose that we wanted to record the fact that a person known as Paul Smart—that’s me—wrote an article with the title:

(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!

We can incorporate this particular piece of information into a blockchain using something called a cryptographic hash function, which yields an effectively unique identifier for the block and all of its contents. In the case of the aforementioned title, the cryptographic hash (as returned by the SHA256 algorithm[2]) is:

7147bd321e79a63041d9b00a937954976236289ee4de6f8c97533fb6083a8532

Now suppose that someone wants to alter the title, perhaps to garner support for an alternative argumentative position. In particular, let’s suppose they want to claim that the title of the article is:

Fake News Alert: Intellectual Virtues Required for Online Knowledge!

From an orthographic perspective, of course, not much has changed. But the subtlety of the alteration cannot be used to sow confusion about the actual wording of the original title—the title that I intended for the present article. (Neither can it be used to cast doubt on the provenance of the paper—the fact that the author of the paper was a person called Paul Smart.) To see this, note that the hash generated for the ‘fake’ title looks nothing like the original:

cc05baf2fa7a439674916fe56611eaacc55d31f25aa6458b255f8290a831ddc4
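The comparison can be reproduced in a few lines of Python. (Note that the exact hexadecimal output depends on the precise byte sequence hashed—spacing, punctuation, and character encoding all matter—so this sketch asserts only the properties that carry the argument: determinism, fixed length, and sensitivity to tiny alterations.)

```python
import hashlib

original = "(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!"
altered = "Fake News Alert: Intellectual Virtues Required for Online Knowledge!"

h_orig = hashlib.sha256(original.encode("utf-8")).hexdigest()
h_fake = hashlib.sha256(altered.encode("utf-8")).hexdigest()

# SHA-256 always yields 256 bits, i.e., 64 hexadecimal characters.
assert len(h_orig) == 64 and len(h_fake) == 64

# The same input always yields the same digest (determinism) ...
assert h_orig == hashlib.sha256(original.encode("utf-8")).hexdigest()

# ... but even a minor alteration yields a completely different digest.
assert h_orig != h_fake
```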

It is this property that, at least in part, makes blockchains useful for recording information that might otherwise be prone to epistemically malign forms of information manipulation. Imagine, for the sake of argument, that climatological data, as recorded by globally distributed sensors, were stored in a blockchain. The immutability of such data would make it extremely difficult for anyone to manipulate the records so as to falsely confirm or deny the reality of year-on-year changes in global temperature. Neither would it be easy to alter information pertaining to the provenance of existing data records, i.e., information about when, where, and how such data was generated.

None of this should delude us into thinking that blockchain technology is a panacea for Internet-related epistemic problems—it isn’t! Neither does blockchain obviate the need for agents to exercise at least some degree of intellectual virtue when it comes to the selection and evaluation of competing data streams. Nevertheless, there is, I think, something that is of crucial epistemological interest and relevance here—something that makes blockchain and other cybersecurity technologies deserving of further epistemological attention. In particular, such technologies may be seen as enhancing the epistemic safety of the online environment, and thus perhaps reducing the need for intellectual virtue.

In this sense, the epistemological analysis of Internet technologies may be best approached from some variant of modal epistemology—e.g., epistemological approaches that emphasize the modal stability of true beliefs across close possible worlds (Pritchard, 2009, chap. 2). But even if we choose to countenance an approach that appeals to issues of intellectual virtue, there is still, I suggest, a need to broaden the analytic net to include technologies that (for the time being at least) lie beyond the bounds of the individual cognitive agent.

Safety in Numbers

“From an epistemic perspective,” Richard writes, “the most salient dimension of the Internet is that it is an information space” (Heersmink, 2018, p. 5). Somewhat surprisingly, I disagree. Although it is obviously true that the Internet is an information space, it is not clear that this is its most salient feature, at least from an epistemological standpoint. In particular, there is, I suggest, a sense in which the Internet is more than just an information space. As is clear from the explosive growth in all things social—social media, social networks, social bots, and so on—the Internet functions as a social technology, yielding all manner of opportunities for people to create, share and process information in a collaborative fashion. The result, I suggest, is that we should not simply think of the Internet as an information space (although it is surely that), we should also view it as a social space.

Viewing the Internet as a social space is important because it changes the way we think about the epistemic impact of the Internet, relative to the discovery, production, representation, acquisition, processing and utilization of knowledge. Smart (in pressb), for example, suggests that some online systems function as knowledge machines, which are systems in which some form of knowledge-relevant processing is realized by a socio-technical mechanism, i.e., a mechanism whose component elements are drawn from either the social (human) or the technological realm.

An interesting feature of many of these systems is the way in which the reliable (or truth-conducive) nature of the realized process is rooted in the socio-technical nature of the underlying (realizing) mechanism. When it comes to human computation or citizen science systems, for example, contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs (Smart, in pressb; Smart and Shadbolt, in press; Watson and Floridi, 2018). Such insights highlight the socially-distributed character of at least some forms of online knowledge production, thereby moving us beyond the realms of individual, agent-centric analyses.

On a not altogether unrelated note, it is important to appreciate the way in which social participation can itself be used to safeguard online systems from various forms of malign intervention. One example is provided by the Google PageRank algorithm. In this case, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally-distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. This makes it difficult for any single agent to subvert the operation of the PageRank algorithm.
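The weighting scheme just described—links count for more when they originate from highly-ranked resources—can be sketched with a toy power-iteration version of the algorithm. (This is an illustration of the published PageRank idea, not Google’s actual production implementation; the graph and page names are hypothetical.)

```python
def pagerank(links, damping=0.85, iters=100):
    """Toy power-iteration PageRank over a dict: page -> list of outbound links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Every page gets a small baseline share; the rest flows along links.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

# A page endorsed by well-linked pages outranks one that a single
# agent tries to boost, so no individual can easily subvert the ranking.
web = {"hub": ["a", "b"], "a": ["hub"], "b": ["hub"], "spammer": ["b"]}
ranks = pagerank(web)
assert ranks["hub"] > ranks["spammer"]
```

The key design point is the recursion: a link’s weight is itself a function of the linker’s rank, so inflating a page’s standing requires influence over a globally-distributed linking effort rather than over any single node.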

Even ostensibly non-social technologies can be seen to rely on the distributed and decentralized nature of the Internet. In the case of blockchain, for example, multiple elements of a peer-to-peer network participate in the computational processes that make blockchain work. In this way, the integrity of the larger system is founded on the collaborative efforts of an array of otherwise independent computational elements. And it is this that (perhaps) allows us to think of blockchain’s epistemically-desirable features as being rooted in something of a ‘social’ substrate.

All of this, I suggest, speaks in favor of an approach that moves beyond a preoccupation with the properties of individual Internet users. In particular, there seems to be considerable merit in approaching the Internet from a more socially-oriented epistemological perspective. It is easy to see the social aspects of the Internet as lying at the root of a panoply of epistemic concerns, especially when it comes to the opportunities for misinformation, deception, and manipulation. But in light of the above discussion, perhaps an alternative, more positive, take on the Internet (qua social space) starts to come into sharper focus. This is a view that highlights the way in which certain kinds of online system can work to transform a ‘vice’ into a ‘virtue,’ exploiting the social properties of the Internet for the purposes of dealing with reliability-related concerns.

Image by Dariorug via Flickr / Creative Commons


Filter Bubblicious

Search engines form one of the focal points of Richard’s analysis, and, as with previous work in this area, Richard finds at least some aspects of their operation to be highly problematic. A particular issue surfaces in respect of personalized search. Here, Richard’s analysis echoes the sentiments expressed by other epistemologists who regard personalized search algorithms as of dubious epistemic value.

In fact, I suspect the consensus that has emerged in this area fails to tell the whole story about the epistemic consequences of personalized search. Indeed, from a virtue epistemological position, I worry that epistemologists are in danger of failing to heed their own advice—prematurely converging on a particular view without proper consideration of competing positions. In my new-found role as the virtue epistemologist’s guardian angel (or should that be devil’s advocate?), I will attempt to highlight a couple of reasons why I think more empirical research is required before we can say anything useful about the epistemological impact of personalized search algorithms.

My first worry is that our understanding of the extent to which search results and subsequent user behavior are affected by personalization is surprisingly poor. Consider, for example, the results of one study that attempted to quantify the effect of personalization on search results (Hannak et al., 2013). Using an empirical approach, Hannak et al. (2013) report a demonstrable personalization effect, with 11.7% of search results exhibiting differences due to personalization. Interestingly, however, the effect of personalization appeared to be greater for search results with lower rankings; highly ranked results (i.e., those appearing at the top of a list of search results) appeared to be much less affected by personalization.

This result is interesting given the observation that college students “prefer to click on links in higher positions even when the abstracts are less relevant to the task at hand” (Heersmink, 2018, p. 6). From one perspective, of course, this tendency looks like a vice that jeopardizes the epistemic standing of the individual user. And yet, from another perspective, it looks like the preference for higher ranked search results is poised to negate (or at least reduce) the negative epistemological effects of personalized search. What we seem to have here, in essence, is a situation in which one kind of ‘intellectual vice’ (i.e., a tendency to select highly-ranked search results) is playing something of a more positive (virtuous?) role in mitigating the negative epistemological sequelae of a seemingly vicious technology (i.e., personalized search).

None of this means that the epistemic effects of personalized search are to the overall benefit of individual users; nevertheless, the aforementioned results do call for a more nuanced and empirically informed approach when considering the veritistic value of search engines, as well as other kinds of Internet-related technology.

A second worry relates to the scope of the epistemological analysis upon which judgements about the veritistic value of search engines are based. In this case, it is unclear whether analyses that focus their attention on individual agents are best placed to reveal the full gamut of epistemic costs and benefits associated with a particular technology, especially one that operates in the socio-technical ecology of the Internet. To help us understand this worry in a little more detail, it will be useful to introduce the notion of mandevillian intelligence (Smart, in pressc; Smart, in pressd).

Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, e.g., at the level of collective doxastic agents (see Palermos, 2015) or socio-epistemic systems (see Goldman, 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.

While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman, 2010) and assisting with the epistemically optimal division of cognitive labor (see Muldoon, 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary according to whether our epistemological gaze is directed towards individual epistemic agents or the collective ensembles in which those agents are situated.

The Necessity of Virtue

As Richard notes, virtue epistemology is characterized by a shift in emphasis, away from the traditional targets of epistemological analysis (e.g., truth, justification and belief) and towards the cognitive properties of would-be knowers. “Virtue epistemology,” Richard writes, “is less concerned with the nature of truth and more concerned with the cognitive character of agents” (Heersmink, 2018, p. 2). This is, no doubt, a refreshing change, relative to the intellectual orientation of traditional philosophical debates.

Nevertheless, I assume that virtue epistemologists still recognize the value and priority of truth when it comes to issues of epistemic evaluation. Someone who holds false beliefs is not the possessor of knowledge, and this remains the case irrespective of whatever vices and virtues the agent has. In other words, it does not matter how careful, attentive and assiduous an agent is in selecting and evaluating information, if what the agent believes is false, they simply do not know.

What seems to be important in the case of virtue epistemology is the role that intellectual virtue plays in securing the truth of an agent’s beliefs. In particular, the central feature of virtue epistemology (at least to my mind) is that the truth of an agent’s beliefs stems from the exercise of intellectual virtue. It is thus not the case that truth is unimportant (or less important) when it comes to issues of positive epistemic standing; rather, what matters is the role that intellectual virtue plays in establishing the truth of an agent’s beliefs. An agent is thus a bona fide knower when they believe the truth and the truth in question is attributable to some aspect of their cognitive character, specifically, a cognitive trait (virtue responsibilism) or cognitive faculty (virtue reliabilism).

What then makes something a vice or virtue seems to be tied to the reliability of token instantiations of processes that are consistent with an agent’s cognitive character. Intellectual virtues are thus “cognitive character traits that are truth-conducive and minimalise error” (Heersmink, 2018, p. 3), while intellectual vices are characterized as “cognitive character traits that are not truth-conducive and do not minimalise error” (Heersmink, 2018, p. 3). It is this feature of the intellectual virtues—the fact that they are, in general, reliable (or give rise to reliable belief-relevant processes)—that looks to be important when it comes to issues of epistemic evaluation.

So this is what I find problematic about virtue theoretic approaches to knowledge. (Note that I am not an epistemologist by training, so this will require a generous—and hopefully virtue-inspiring—swig of the ole intellectual courage.) Imagine a state-of-affairs in which the Internet was (contrary to the present state-of-affairs) a perfectly safe environment—one where the factive status of online information was guaranteed as a result of advances in cyber-security techniques and intelligent fact-checking services. Next, let us imagine that we have two individuals, Paul and Sophia, who differ with respect to their cognitive character. Paul is the less virtuous of the two, unreflectively and automatically accepting whatever the Internet tells him. Sophia is more circumspect, wary of being led astray by (the now non-existent) fake news.

Inasmuch as we see the exercise of intellectual virtue as necessary for online knowledge, it looks unlikely that poor old Paul can be said to know very much. This is because the truth of Paul’s beliefs is not the result of anything that warrants the label ‘intellectual virtue.’ Paul, of course, does have a lot of true beliefs, but the truth of these beliefs does not stem from the exercise of his intellectual virtues—if, indeed, he has any. In fact, inasmuch as there is any evidence of virtue in play here, it is probably best attributed to the technologies that work to ensure the safety of the online environment. The factive status of Paul’s beliefs thus has more to do with the reliability of the Internet than it does with the elements of his cognitive character.

But is it correct to say that Paul has no online knowledge in this situation? Personally, I do not have this intuition. In other words, in a perfectly safe environment, I can see no reason why we should restrict knowledge attributions to agents whose beliefs are true specifically as the result of intellectual virtue. My sense is that even the most unreflective of agents could be credited with knowledge in a situation where there was no possibility of them being wrong. And if that is indeed the case, then why insist that it is only the exercise of intellectual virtue that underwrites positive epistemic standing?

After all, it seems perfectly possible, to my mind, that Sophia’s epistemic caution contributes no more to the minimization of error in an epistemically benign (i.e., safe) environment than does Paul’s uncritical acceptance. (In fact, given the relative efficiency of their doxastic systems, it may very well be the case that Sophia ends up with fewer true beliefs than Paul.) It might be claimed that this case is invalidated by a failure to consider the modal stability of an agent’s beliefs relative to close possible worlds, as well as perhaps their sensitivity to counterfactual error possibilities. But given the way in which the case is characterized, I suggest that there are no close possible worlds that should worry us—the cybersecurity and fact checking technologies are, let us assume, sufficiently robust as to ensure the modal distance of those worrisome worlds.

One implication of all this is to raise doubts about the necessity of intellectual virtue, relative to our conceptual understanding of knowledge. If there are cases where intellectual virtue is not required for positive epistemic standing, then intellectual virtue cannot be a necessary condition for knowledge attribution. And if that is the case, then why should intellectual virtue form the basis of an approach that is intended to deal with the epistemic shortcomings of the (contemporary) Internet?

Part of the attraction of virtue epistemology, I suspect, is the way in which a suite of generally reliable processes are inextricably linked to the agent who is the ultimate target of epistemic evaluation. This linkage, which is established via the appeal to cognitive character, helps to ensure the portability of an agent’s truth-tracking capabilities—it helps to ensure, in other words, that wherever the agent goes their reliable truth-tracking capabilities are sure to follow.

However, in an era where our doxastic systems are more-or-less constantly plugged into a reliable and epistemically safe environment, it is not so clear that agential capabilities are relevant to epistemic standing. This, I suggest, raises doubts about the necessity of intellectual virtue in securing positive epistemic status, and it also (although this is perhaps less clear) encourages us to focus our attention on some of the engineering efforts (as opposed to agent-oriented educational programs) that might be required to make the online world an epistemically safer place.

Conclusion

What, then, should we make of the appeal to virtue epistemology in our attempt to deal with the epistemic hazards of the Internet? My main concern is that the appeal to virtue epistemology (and the emphasis placed on intellectual virtue) risks an unproductive focus on individual human agents at the expense of both the technological and social features of the online world. This certainly does not rule out the relevance of virtue theoretic approaches as part of our attempt to understand the epistemic significance of the Internet, but other approaches (e.g., modal reliabilism, process reliabilism, distributed reliabilism, and systems-oriented social epistemology) also look to be important.

Personally, I remain agnostic with regard to the relevance of different epistemological approaches, although I worry about the extent to which virtue epistemology is best placed to inform policy-related decisions (e.g., those relating to education). In particular, I fear that by focusing our attention on individual agents and issues of intellectual virtue, we risk overlooking some of the socio-epistemic benefits of the Internet, denigrating a particular technology (e.g., personalized search) on account of its failure to enhance individual knowledge, while ignoring the way a technology contributes to more collective forms of epistemic success.

In concluding his thought-provoking paper on virtue epistemology and the Internet, Richard suggests that “there is an important role for educators to teach and assess [intellectual] virtues as part of formal school and university curricula, perhaps as part of critical thinking courses” (Heersmink, 2018, p. 10). I have said relatively little about this particular issue in the present paper. For what it’s worth, however, I can see no reason to object to the general idea of Internet-oriented educational policies. The only caveat, perhaps, concerns the relative emphasis that might be placed on the instillation of intellectual virtue as opposed to the inculcation of technical skills, especially those that enable future generations to make the online world a safer place.

No doubt there is room for both kinds of pedagogical program (assuming they can even be dissociated). At the very least, it seems to me that the effort to resolve a problem (i.e., engineer a safer Internet) is just as important as the effort to merely cope with it (i.e., acquire a virtuous cognitive character). But, in any case, when it comes to education and learning, we should not lose sight of the fact that the Internet is itself something that is used for educational purposes. Perhaps, then, the more important point about education and the Internet is not so much the precise details of what gets taught, so much as the issue of whether the Internet (with all its epistemic foibles) is really the best place to learn.

Contact details: ps02v@ecs.soton.ac.uk

References

Goldman, A. I. (2011). A guide to social epistemology. In A. I. Goldman and D. Whitcomb (Eds.), Social Epistemology: Essential Readings, pp. 11–37. New York, New York, USA: Oxford University Press.

Hannak, A., P. Sapiezynski, A. Molavi Kakhki, B. Krishnamurthy, D. Lazer, A. Mislove, and C. Wilson (2013). Measuring personalization of Web search. In D. Schwabe, V. Almeida, H. Glaser, R. Baeza-Yates, and S. Moon (Eds.), Proceedings of the 22nd International Conference  on World Wide Web, Rio  de Janeiro, Brazil, pp. 527–538. ACM.

Heersmink, R. (2018). A virtue epistemology of the Internet: Search engines, intellectual virtues, and education. Social Epistemology 32 (1), 1–12.

Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass 8 (2), 117–125.

Palermos, S. O. (2015). Active externalism, virtue reliabilism and scientific knowledge. Synthese 192 (9), 2955–2986.

Pritchard, D. (2009). Knowledge. Basingstoke, England, UK: Palgrave Macmillan.

Smart, P. R. (in pressa). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. (in pressb). Knowledge machines. The Knowledge Engineering Review.

Smart, P. R. (in pressc). Mandevillian intelligence. Synthese.

Smart, P. R. (in pressd). Mandevillian intelligence: From individual vice to collective virtue. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Socially Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. and N. R. Shadbolt (in press). The World Wide Web. In J. Chase and D. Coady (Eds.), The Routledge Handbook of Applied Epistemology. New York, New York, USA: Routledge.

Watson, D. and L. Floridi (2018). Crowdsourced science: Sociotechnical epistemology in the e-research paradigm. Synthese 195 (2), 741–764.

Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17–35.

[1] This work is supported under SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 and comprises the Universities of Southampton, Oxford and Edinburgh.

[2] See http://www.xorbin.com/tools/sha256-hash-calculator [accessed: 30th January 2018].

Author Information: Inkeri Koskinen, University of Helsinki, inkeri.koskinen@helsinki.fi

Koskinen, Inkeri. “Not-So-Well-Designed Scientific Communities.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 54-58.

The pdf of the article includes specific page numbers. Shortlink: http://wp.me/p1Bfg0-3PB


Image from Katie Walker via Flickr


The idea of hybrid concepts, simultaneously both epistemic and moral, has recently attracted the interest of philosophers, especially since the notion of epistemic injustice (Fricker 2007) became the central topic of a lively and growing discussion. In her article, Kristina Rolin adopts the idea of such hybridity, and investigates the possibility of understanding epistemic responsibility as having both epistemic and moral qualities.

Rolin argues that scientists belonging to epistemically well-designed communities are united by mutual epistemic responsibilities, and that these responsibilities ought to be understood in a specific way. Epistemically responsible behaviour towards fellow researchers—such as adopting a defense commitment with respect to one’s knowledge claims, or offering constructive criticism to colleagues—would not just be an epistemic duty, but also a moral one; one that shows moral respect for other human beings in their capacity as knowers.

However, as Rolin focuses on “well-designed scientific communities”, I fear that she fails to notice an implication of her own argument. Current trends in science policy encourage researchers in many fields to take up high-impact, solution-oriented, multi-, inter-, and transdisciplinary projects. If one can talk about “designing scientific communities” in this context, the design is clearly meant to challenge the existing division of epistemic labour in academia, and to destabilise speciality communities. If we follow Rolin’s own argumentation, understanding epistemic responsibility as a moral duty can thus become a surprisingly heavy burden for an individual researcher in such a situation.

Epistemic Cosmopolitanism

According to Rolin, accounts of epistemic responsibility that appeal to self-interested or epistemic motives need to be complemented with a moral account. Without one it is not always possible to explain why it is rational for an individual researcher to behave in an epistemically responsible way.

Both the self-interest account and the epistemic account state that scientists behave in an epistemically responsible way because they believe that it serves their own ends—be it career advancement, fame, and financial gain, or purely epistemic individual ends. However, as Rolin aptly points out, both accounts are insufficient in a situation where the ends of the individual researcher and the impersonal epistemic ends of science are not aligned. Only if researchers see epistemically responsible behaviour as a moral duty, will they act in an epistemically responsible way even if this does not serve their own ends.

It is to some degree ambiguous how Rolin’s account should be read—how normative it is, and in what sense. Some parts of her article could be interpreted as a somewhat Mertonian description of actual moral views held by individual scientists, and cultivated in scientific communities (Merton [1942] 1973). However, she also clearly gives normative advice: well-designed scientific communities should foster a moral account of epistemic responsibility.

But when offering a moral justification for her view, she at times seems to defend a stronger normative stance, one that would posit epistemic responsibility as a universal moral duty. However, her main argument does not require the strongest reading. I thus interpret her account as partly descriptive and partly normative: many researchers treat epistemic responsibility as a moral duty, and it is epistemically beneficial for scientific communities to foster such a view. Moreover, a moral justification can be offered for the view.

When defining her account more closely, Rolin cites ideas developed in political philosophy. She adopts Robert Goodin’s (1988) distinction between general and special moral duties, and names her account epistemic cosmopolitanism:

Epistemic cosmopolitanism states that (a) insofar as we are engaged in knowledge-seeking practices, we have general epistemic responsibilities, and (b) the special epistemic responsibilities scientists have as members of scientific communities are essentially distributed general epistemic responsibilities (Rolin 2017, 478).

One of the advantages of this account is of particular interest to me. Rolin notes that if epistemically responsible behaviour were seen as just a general moral duty, it could be too demanding for individual researchers. Any scientist is bound to fail in an attempt to behave in an entirely epistemically responsible manner towards all existing scientific speciality communities, taking all their diverse standards of evidence into account. This result can be avoided through a division of epistemic labour. The general responsibilities can be distributed in a way that limits the audience towards which individual scientists must behave in an epistemically responsible way. Thus, “in epistemically well-designed scientific communities, no scientist is put into a position where she is not capable of carrying out her special epistemic responsibilities” (Rolin 2017, 478).

Trends in Science Policy

Rolin’s main interest is in epistemically well-designed scientific communities. However, she also takes up an example I mention in a recent paper (Koskinen 2016). In it I examine a few research articles in order to illustrate situations where a relevant scientific community has not been recognised, or where there is no clear community to be found. In these articles, researchers from diverse fields attempt to integrate archaeological, geological or seismological evidence with orally transmitted stories about great floods. In other words, they take the oral stories seriously, and attempt to use them as historical evidence. However, they fail to take into account folkloristic expertise on myths. This I find highly problematic, as the stories the researchers try to use as historical evidence include typical elements of the flood myth.

The aims of such attempts to integrate academic and extra-academic knowledge are both emancipatory—taking the oral histories of indigenous communities seriously—and practical, as knowledge about past natural catastrophes may help prevent new ones. This chimes well with certain current trends in science policy. Collaborations across disciplinary boundaries, and even across the boundaries of science, are promoted as a way to increase the societal impact of science and provide solutions to practical problems. Researchers are expected to contribute to solving the problems by integrating knowledge from different sources.

Such aims have been articulated in terms of systems theory, the Mode-2 concept of knowledge production and, recently, open science (Gibbons et al. 1994; Nowotny et al. 2001; Hirsch Hadorn et al. 2008), leading to the development of solution-oriented multi-, inter-, and transdisciplinary research approaches. At the same time, critical feminist and postcolonial theories have influenced collaborative and participatory methodologies (Reason and Bradbury 2008; Harding 2011), and recently ideas borrowed from business have led to an increasing amount of ‘co-creation’ and ‘co-research’ in academia (see e.g. Horizon 2020).

All this, combined with keen competition for research funding, leads in some areas of academic research to an increasing number of solution-oriented research projects that systematically break disciplinary boundaries. These projects often simultaneously challenge the existing division of epistemic labour.

Challenging the Existing Division of Epistemic Labour

According to Rolin, well-designed scientific communities need to foster the moral account of epistemic responsibilities. The necessity becomes clear in such situations as are described above: it would be in the epistemic interests of scientific communities, and science in general, if folklorists were to offer constructive criticism to the archaeologists, geologists and seismologists. However, if the folklorists are motivated only by self-interest, or by personal epistemic goals, they have no reason to do so. Only if they see epistemic responsibility as a moral duty, one that is fundamentally based on general moral duties, will their actions be in accord with the epistemic interests of science. Rolin argues that this is possible because the existing division of epistemic labour can itself be challenged.

Normally, according to epistemic cosmopolitanism, the epistemic responsibilities of folklorists would lie mainly in their own speciality community. However, if the existing division of epistemic labour does not serve the epistemic goals of science, this does not suffice. And if special moral duties are taken to be distributed general moral duties, the way of distributing them can always be changed. In fact, it must be changed, if that is the only way to follow the underlying general moral duties:

If the cooperation between archaeologists and folklorists is in the epistemic interests of science, a division of epistemic labour should be changed so that, at least in some cases, archaeologists and folklorists should have mutual special epistemic responsibilities. This is the basis for claiming that a folklorist has a moral obligation to intervene in the problematic use of orally transmitted stories in archaeology (Rolin 2017, 478–479).

The solution seems compelling, but I see a problem that Rolin does not sufficiently address. She seems to believe that situations where the existing division of epistemic labour is challenged are fairly rare, and that they lead to a new, stable division of epistemic labour. I do not think that this is the case.

Rolin cites Brad Wray (2011) and Uskali Mäki (2016) when emphasising that scientific speciality communities are not eternal. They may dissolve and new ones may emerge, and interdisciplinary collaboration can lead to the formation of new speciality communities. However, as Mäki and I have noted (Koskinen & Mäki 2016), solution-oriented inter- or transdisciplinary research does not necessarily, or even typically, lead to the formation of new scientific communities. Only global problems, such as biodiversity loss or climate change, are likely to function as catalysts in the disciplinary matrix, leading to the formation of numerous interdisciplinary research teams addressing the same problem field. Smaller, local problems generate only changeable constellations of inter- and transdisciplinary collaborations that dissolve once a project is over. If such collaborations become common, the state Rolin describes as a rare period of transition becomes the status quo.

It Can be Too Demanding

Rather than a critique of Rolin’s argument, the conclusion of this commentary is an observation that follows from that argument. It helps us to clarify one possible reason for the difficulties that researchers encounter with inter- and transdisciplinary research.

Rolin argues that epistemically well-designed scientific communities should foster the idea of epistemic responsibilities being not only epistemic, but also moral duties. The usefulness of such an outlook becomes particularly clear in situations where the prevailing division of epistemic labour is challenged—for instance, when an interdisciplinary project fails to take some relevant viewpoint into account, and the researchers who would be able to offer valuable criticism do not benefit from offering it. In such a situation researchers motivated by self-interest or by individual epistemic goals would have no reason to offer the required criticism. This would be unfortunate, given the impersonal epistemic goals of science. So, we must hope that scientists see epistemically responsible behaviour as their moral duty.

However, for a researcher working in an environment where changeable, solution-oriented, multi-, inter-, and transdisciplinary projects are common, understanding epistemic responsibility as a moral duty may easily become a burden. The prevailing division of epistemic labour is challenged constantly, and without a new, stable division necessarily replacing it.

As Rolin notes, it is due to a tolerably clear division of labour that epistemic responsibilities understood as moral duties do not become too demanding for individual researchers. But as trends in science policy erode disciplinary boundaries, the division of labour becomes unstable. If it continues to be challenged, it is not just once or twice that responsible scientists may have to intervene and comment on research that is not in their area of specialisation. This can become a constant and exhausting duty. So if instead of well-designed scientific communities, we get their erosion by design, we may have to reconsider the moral account of epistemic responsibility.

References

Fricker, M. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press, 2007.

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. & Trow, M. The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage, 1994.

Goodin, R. “What Is So Special about Our Fellow Countrymen?” Ethics 98, no. 4 (1988): 663–686.

Harding, S. (Ed.). The Postcolonial Science and Technology Studies Reader. Durham and London: Duke University Press, 2011.

Hirsch Hadorn, G., Hoffmann-Riem, H., Biber-Klemm, S., Grossenbacher-Mansuy, W., Joye, D., Pohl, C., Wiesmann, U., Zemp, E. (Eds.). Handbook of Transdisciplinary Research. Berlin: Springer, 2008.

Horizon 2020. Work Programme 2016–2017. European Commission Decision C (2017)2468 of 24 April 2017.

Koskinen, I. “Where Is the Epistemic Community? On Democratisation of Science and Social Accounts of Objectivity.” Synthese. Published online 4 August 2016. doi:10.1007/s11229-016-1173-2.

Koskinen, I., & Mäki, U. “Extra-Academic Transdisciplinarity and Scientific Pluralism: What Might They Learn from One Another?” European Journal for Philosophy of Science 6, no. 3 (2016): 419–444.

Mäki, U. “Philosophy of Interdisciplinarity. What? Why? How?” European Journal for Philosophy of Science 6, no. 3 (2016): 327–342.

Merton, R. K. “Science and Technology in a Democratic Order.” Journal of Legal and Political Sociology 1 (1942): 115–126. Reprinted as “The Normative Structure of Science.” In R. K. Merton, The Sociology of Science: Theoretical and Empirical Investigations. Chicago: University of Chicago Press, 1973: 267–278.

Nowotny, H., Scott, P., & Gibbons, M. Re-Thinking Science: Knowledge and the Public in an Age of Uncertainty. Cambridge: Polity, 2001.

Reason, P. and Bradbury, H. (Eds.). The Sage Handbook of Action Research: Participative Inquiry and Practice. London: Sage, 2008.

Rolin, K. “Scientific Community: A Moral Dimension.” Social Epistemology 31, no. 5 (2017): 468–483.

Wray, K. B. Kuhn’s Evolutionary Social Epistemology. Cambridge: Cambridge University Press, 2011.

Author Information: James Collier, Virginia Tech, jim.collier@vt.edu

Shortlink: http://wp.me/p1Bfg0-3xo

Editor’s Note: The publishers of Social Epistemology—Routledge and Taylor & Francis—have been kind enough to allow me to publish the full-text “Introduction” to issues on the SERRC and on the journal’s website.

At the beginning of August 2016, I received word from Greg Feist that Sofia Liberman had died. I was taken aback, having recently corresponded with Professor Liberman about the online publication of her article (coauthored with Roberto López Olmedo). Professor Liberman’s work came to my attention through her association with Greg, Mike Gorman and scholars studying the psychology of science. We offer our sincere condolences to Sofia Liberman’s family, friends and colleagues. With gratitude and great respect for her intellectual legacy, we share Sofia Liberman’s scholarship with you in this issue of Social Epistemology.

Since we began publishing six issues a year, we have adopted the practice of printing the journal triannually, combining two issues in each print edition. The result makes for a panoply of fascinating topics and arguments. Still, we invite our readers to focus on the first four articles in this edition—articles addressing topics in the psychology of science, edited by Mike Gorman and Greg Feist—as a discrete, but linked, part of the whole. These articles signal Social Epistemology’s wish to renew ties with the psychology of science community, ties established at least since the publication of William Shadish and Steve Fuller’s edited book The Social Psychology of Science (Guilford Press) in 1993.

Beginning by reflexively tracing the trajectory of his own research, Mike Gorman and Nora Kashani ethnographically and archivally examine the work of A. Jean Ayres. Ayres, known for inventing Sensory Integration (SI) theory, sought to identify and treat children having difficulty interpreting sensation from the body and incorporating those sensations into academic and motor learning. To gain a more comprehensive account of the development and reception of SI, Gorman and Kashani integrated a cognitive historical analysis—a sub specie historiae approach—of Ayres’ research with interactions and interviews with current practitioners—an in vivo approach. Through Gorman and Kashani’s method, we map Ayres’ ability to build a network of independent students and clients leading both to the wide acceptance and later fragmentation of SI.

We want scientific research that positively transforms an area of inquiry. Yet, how do we know when we achieve such changes and, so, may determine in advance the means by which we can achieve further transformations? Barrett Anderson and Greg Feist investigate the funding of what became, after 2002, impactful articles in psychology. While assessing impact relies, in part, on citation counts, Anderson and Feist argue for “generativity” as a new index. Generative work leads to the growth of a new branch on the “tree of knowledge”. Using the tree of knowledge as a metaphorical touchstone, we can trace and measure generative work to gain a fuller sense of which factors, such as funding, policy makers might consider in encouraging transformative research.

Sofia Liberman and Roberto López Olmedo question the meaning of coauthorship for scientists. Specifically, given the contentiousness—often found in the sciences—surrounding the assignation of primary authorship of articles and the priority of discovery, what might a better understanding of the social psychology of coauthorship yield? Liberman and López Olmedo find, for example, that fields emphasizing theoretical, as opposed to experimental, practices associate different semantic relations, such as “common interest” or “active participation”, with coauthorship. More generally, since scientists do not hold universal values regarding collaboration, differing group dynamics and reward structures affect how one approaches and decides coauthorship. We need more research, Liberman and López Olmedo claim, to further understand scientific collaboration in order, perhaps, to encourage more, and more fruitful, collaborations across fields and disciplines.

Complex, or “wicked”, problems require the resources of multiple disciplines. Moreover, addressing such problems calls for “T-shaped” practitioners—students educated to possess, and professionals possessing, both a singular expertise—the vertical part of the “T”—and a breadth of expert knowledge—the horizontal part of the “T”. On examining the origin and development of the concept of the “T-shaped” practitioner, Conley et al. share case studies of students at James Madison University and the University of Virginia learning to make the connections that underwrite “T-shaped” expertise. Conley et al. analyze the students’ use of concept maps to illustrate connections, and possible trading zones, among types of knowledge.

Are certain scientists uniquely positioned—given their youth or age, their insider or outsider disciplinary status—to bring about scientific change? Do joint commitments to particular beliefs—and, so, an obligation to act in accord with, and not contrarily to, those beliefs—hinder one’s ability to think differently and pose potential alternative solutions? Looking at these issues, Line Andersen describes Kenneth Appel and Wolfgang Haken’s solution to the Four Color Problem—“any map can be colored with only four colors so that no two adjacent countries have the same color.” From this case, and other examples, Andersen suggests that a scientist’s outsider status may enable scientific change.

We generally, and often blithely, assume our knowledge is fallible. What can we learn if we take fallibility rather more seriously? Stephen Kemp argues for “transformational fallibilism.” In order to improve our understanding, should we question, and be willing to revise or reconstruct, any aspect of our network of understanding? How should we extend our Popperian attitude, and what we learn accordingly, to knowledge claims and forms of inquiry in other fields? Kemp advocates that we not allow our easy agreement on knowledge’s fallibility to make us passive regarding accepted knowledge claims. Rather, coming to grips with the “impermanence” of knowledge sharpens and maintains our working sense of fallible knowledge.

Derek Anderson introduces the idea of “conceptual competence injustice”. Such an injustice arises when “a member of a marginalized group is unjustly regarded as lacking conceptual or linguistic competence as a consequence of structural oppression”. Anderson details three conditions one might find in a graduate philosophy classroom. For example, a student judges a member of a marginalized group who makes a conceptual claim, according that claim less credibility than it actually has. That judgment leads to a subsequent assessment that the marginalized person has a lower degree of competence with a relevant word or concept than they in fact have. By depicting conceptual competence injustice, Anderson gives us important matters to consider in deriving a more complete accounting of Miranda Fricker’s forms of epistemic injustice.

William Lynch gauges Steve Fuller’s views in support of intelligent design theory. Lynch challenges Fuller’s psychological assumptions and the corresponding questions as to what motivates human beings to do science in the first place. In creating and pursuing the means and ends of science, do humans—seen as the image and likeness of God—seek to render nature intelligible and thereby know the mind of God? If we take God out of the equation—as does Darwin’s theory—how do we understand the pursuit of science in both historical and future terms? Still, as Lynch explains, Fuller desires a broader normative landscape in which human beings might rewardingly follow unachieved, unconventional or forgotten paths to science that could yield epistemic benefits. Lynch concludes that the pursuit of parascience likely leads both to opportunism and dangerous forms of doubt in traditional science.

Exchanges on many of the articles that appear in this issue of Social Epistemology—and in recent past issues—can be found on the Social Epistemology Review and Reply Collective: https://social-epistemology.com/. Please join us. We realise knowledge together.

Author Information: Adam Riggio, New Democratic Party of Canada, adamriggio@gmail.com

Riggio, Adam. “Subverting Reality: We Are Not ‘Post-Truth,’ But in a Battle for Public Trust.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 66-73.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3vZ

Image credit: Cornerhouse, via flickr

Note: Several of the links in this article are to websites featuring alt-right news and commentary. This serves both as a warning about offensive content and as a sign of precisely how offensive the content we are dealing with actually is.

An important purpose of philosophical writing for public service is to prevent important ideas from slipping into empty buzzwords. You can give a superficial answer to the meaning of living in a “post-truth” world or discourse, but the most useful way to engage this question is to make it a starting point for a larger investigation into the major political and philosophical currents of our time. Post-truth was one of the many ideas American letters haemorrhaged in the maelstrom of Trumpism’s wake, the one seemingly most relevant to the concerns of social epistemology.

It is not enough simply to say that the American government’s communications have become propagandistic, or that the Trump Administration justifies its policies with lies. This is true, but trivial. We can learn much more from philosophical analysis. In public discourse, the stability of what information, facts, and principles are generally understood to be true has been eroding. General agreement on which sources of information are genuinely reliable in their truthfulness and trustworthiness has destabilized and diverged. This essay explores one philosophical hypothesis as to how that happened: through a sustained popular movement of subversion – subversion of consensus values, of reliability norms about information sources, and of who can legitimately claim the virtues of subversion itself. The drive to speak truth to power is today co-opted to punch down at the relatively powerless. This essay is a philosophical examination of how that happens.

Subversion as a Value and an Act

A central virtue in contemporary democracy is subversion. To be a subversive is to progress society against conservative, oppressive forces. It is to commit acts that transgress popular morality while providing a simultaneous critique of it. As new communities form in a society, or as previously oppressed communities push for equal status and rights, subversion calls attention to the inadequacy of currently mainstream morality to the new demands of this social development. Subversive acts can be publications, artistic works, protests, or even the slow process of conducting your own life publicly in a manner that transgresses mainstream social norms and preconceptions about what it is right to do.

Values of subversiveness are, therefore, politically progressive in their essence. The goal of subversion values is to destabilize an oppressive culture and its institutions of authority, in the name of greater inclusiveness and freedom. This is clear when we consider the popular paradigm case of subversive values: punk rock and punk culture. In the original punk and new wave scenes of 1970s New York and Britain, we can see subversion values in action. Punk’s embrace of BDSM and drag aesthetics subvert the niceties of respectable fashion. British punk’s embrace of reggae music promotes solidarity with people oppressed by racist and colonialist norms. Most obviously, punk enshrined a morality of musical composition through simplicity, jamming, and enthusiasm. All these acts and styles subverted popular values that suppressed all but vanilla hetero sexualities, marginalized immigrant groups and ethnic minorities, denigrated the poor, and esteemed an erudite musical aesthetic.

American nationalist conservatism today has adopted the form and rhetoric of subversion values, if not the content. The decadent, oppressive mainstream the modern alt-right opposes and subverts is a general consensus of liberal values – equal rights regardless of race or gender, an imperative to build a fair economy for all citizens, an end to police oppression of marginalized communities, and so on. Alt-right activists push for the return of segregation and even ethnic cleansing of Hispanics from the United States. Curtis Yarvin, the intellectual centre of America’s alt-right, openly calls for an end to democratic institutions and their replacement with government by a neo-cameralist state structure that replaces citizenship with shareholds and reduces all public administration and foreign policy to the aim of profit. Yet because these ideas are a radical front opposing a broadly liberal democratic mainstream culture, alt-right activists declare themselves punk. They claim subversiveness in their appropriation of punk fashion in apparel and hair, and their gleeful offensiveness to liberal sensibilities with their embrace of public bigotry.

Subversion Logics: The Vicious Paradox and Trolling

Alt-right discourse and aesthetic claim to have inherited subversion values because their activists oppose a liberal democratic mainstream whose presumptions include the existence of universal human rights and the encouragement of cultural, ethnic, and gender diversity throughout society. If subversion values are defined entirely according to the act of subverting any mainstream, then this is true. But this would decouple subversion values from democratic political thought. At question in this essay – and at this moment in human democratic civilization – is whether such decoupling is truly possible.

If subversion as an act is decoupled from democratic values, then we can understand it as the act of forcing an opponent into a vicious paradox. One counters an opponent by interpreting their position as implying a hypocritical or self-contradictory logic. The most general such paradox is Karl Popper’s paradox of tolerance. Alt-right discourse frames their most bigoted communications as subversive acts of total free speech – an absolutism of freedom that decries as censorship any critique or opposition to what they say. This is true whether they write on a comment thread, through an anonymous Twitter feed, or on a stage at UC Berkeley. We are left with the apparent paradox that a democratic society must, if we are to respect our democratic values without being hypocrites ourselves, accept the rights of the most vile bigots to spread racism, misogyny, anti-trans and heterosexist ideas, Holocaust denial, and even the public release of their opponents’ private information. As Popper himself wrote, the only response to such an argument is to deny its validity – a democratic society cannot survive if it allows its citizens to argue and advocate for the end of democracy. The actual hypocritical stance is free speech absolutism: permitting assaults on democratic society and values in the name of democracy itself.

Trolling, the chief rhetorical weapon of the alt-right, is another method of subversion, turning an opponent’s actions against herself. To troll is to communicate with statements so dripping in irony that an opponent’s own opposition can be turned against itself. In a simple sense, this is the subversion of insults into badges of honour and vice versa. Witness how alt-right trolls refer to themselves as shitlords, or denounce ‘social justice warriors’ as true fascists. But trolling also includes a more complex rhetorical strategy. For example, one posts a violent, sexist, or racist meme – say, Barack Obama as a witch doctor giving Brianna Wu a lethal injection. If you criticize the post, they respond that they were merely trying to bait you, and mock you as a fragile fool who takes people seriously when they are not – a snowflake. You are now ashamed, having fallen into their trap of baiting earnest liberals into believing in the sincerity of their racism, so you encourage people to dismiss such posts as ‘mere trolling.’ This allows for a massive proliferation of racist, misogynist, anti-democratic ideas under the cover of being ‘mere trolling’ or just ‘for the lulz.’

No matter the content of the ideology that informs a subversive act, any subversive rhetoric challenges truth. Straightforwardly, subversion challenges what a preponderant majority of a society takes to be true. It is an attack on common sense, on a society’s truisms, on that which is taken for granted. In such a subversive social movement, the agents of subversion attack common sense truisms because of their conviction that the popular truisms are, in fact, false, and their own perspective is true, or at least acknowledges more profound and important truths than what they attack. As we tell ourselves the stories of our democratic history, the content of those subversions was actually true. Now that the loudest voices in American politics claiming to be virtuous subversives support nationalist, racist, anti-democratic ideologies, we must confront the possibility that those who speak truth to power have a much more complicated relationship with facts than we often believe.

Fake News as Simply Lies

Fake news is the central signpost of what is popularly called the ‘post-truth’ era, but it quickly became a catch-all term that refers to too many disparate phenomena to be useful. When preparing for this series of articles, we at the Reply Collective discussed the influence of post-modern thinkers on contemporary politics, particularly regarding climate change denialism. But I don’t consider contemporary fake news as having roots in these philosophies. The tradition is regarded in popular culture (and definitely in self-identified analytic philosophy communities) as destabilizing the possibility of truth, knowledge, and even factuality.

This conception is mistaken, as any attentive reading of Jacques Derrida, Michel Foucault, Gilles Deleuze, Jean-François Lyotard, or Jean Baudrillard will reveal that they were concerned – at least on the question of knowledge and truth – with demonstrating that there were many more ways to understand how we justify our knowledge and the nature of facticity than any simple propositional definition in a Tarskian tradition can include. There are more ways to understand knowledge and truth than seeing whether and how a given state of affairs grounds the truth and truth-value of a description. A recent article by Steve Fuller at the Institute of Art and Ideas considers many concepts of truth throughout the history of philosophy more complicated than the popular idea of simple correspondence. So when we ask whether Trumpism has pushed us into a post-truth era, we must ask which concept of truth has become obsolete. Understanding what fake news is and can be, is one productive probe of this question.

So what are the major conceptions of ‘fake news’ that exist in Western media today? I ask this question with the knowledge that, given the rapid pace of political developments in the Trump era, my answers will probably be obsolete, or at least incomplete, by publication. The proliferation of meanings that I now describe happened in popular Western discourse in a mere two months from Election Day to Inauguration Day. My account of these conceptual shifts in popular discourse shows how these shifts of meaning have acquired such speed.

Fake news, as a political phenomenon, exists as one facet of a broad global political culture where the destabilization of what gets to count as a fact and how or why a proposition may be considered factual has become fully mainstream. As Bruno Latour has said, the destabilization of facticity’s foundation is rooted in the politics and epistemology of climate change denialism, the root of wider denialism of any real value for scientific knowledge. The centrepiece of petroleum industry public relations and global government lobbying efforts, climate change denialism was designed to undercut the legitimacy of international efforts to shift global industry away from petroleum reliance. Climate change denial conveniently aligns with the nationalist goals of Trump’s administration, since a denialist agenda requires attacking American loyalty to international emissions reduction treaties and United Nations environmental efforts. Denialism undercuts the legitimacy of scientific evidence for climate change by countering the efficacy of its practical epistemic truth-making function. It is denial and opposition all the way down. Ontologically, the truth-making functions of actual states of affairs on climatological statements remain as fine as they always were. What’s disappeared is the popular belief in the validity of those truth-makers.

So the function of ‘fake news’ as an accusation is to sever the truth-making powers of the targeted information source for as many people as possible who hear the accusation. The accusation is an attempt to deny and destroy a channel’s credibility as a source of true information. To achieve this, the accusation itself requires its own credibility for listeners. The term ‘fake news’ first applied to the flood of stories and memes flowing from a variety of dubious websites, consisting of uncorroborated and outright fabricated reports. The articles and images originated on websites based largely in Russia and Macedonia, and were then disseminated on Facebook pages like Occupy Democrats, Eagle Rising, and Freedom Daily, which make money from clickthrough-generating headlines and links. Much of the extreme white nationalist content of these pages came, in addition to the content mills of eastern Europe, from radical think tanks and lobby groups like the National Policy Institute. These feeds fit a very literal definition of fake news: content written in the form of actual journalism so that its statements appear credible, but communicating blatant lies and falsehoods.

The feeds and pages disseminating these nonsensical stories were successful because the infrastructure of Facebook as a medium incentivizes comforting falsehoods over inconvenient truths. Its News Feed algorithm is largely a similarity-sorting process, pointing a user to sources that resemble what the user has engaged with before. Pages and websites that depend on pay-per-click advertising revenue will therefore cater to already-existing user opinions to boost engagement. A challenging idea that unsettles a user’s presumptions about the world will receive fewer clickthroughs, because people tend to prefer hearing what they already agree with. The continuing aggregation of similarity after similarity reinforces your perspective and makes changing your mind even harder than it usually is.
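That similarity-sorting dynamic can be sketched in miniature. The following is a hypothetical illustration of similarity-based feed ranking; the function names, topic vectors, and weights are my own assumptions for the sake of the sketch, not Facebook’s actual News Feed code:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two topic-weight vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def rank_feed(user_history, candidate_posts):
    """Order posts by similarity to what the user already engages with."""
    return sorted(candidate_posts,
                  key=lambda post: cosine(user_history, post["topics"]),
                  reverse=True)

# A user whose engagement history is dominated by partisan politics...
history = {"partisan_politics": 0.9, "local_news": 0.1}
posts = [
    {"id": "challenging", "topics": {"science": 0.8, "local_news": 0.2}},
    {"id": "comforting", "topics": {"partisan_politics": 1.0}},
]
feed = rank_feed(history, posts)
# ...is shown the ideologically similar post first; the challenging one sinks.
```

Nothing in the sketch evaluates whether a post is true: ranking by similarity alone is enough to produce the self-reinforcing bubble the paragraph above describes.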

Trolling Truth Itself

Donald Trump is an epically oversignified cultural figure. But for my purposes here, I want to approach him as the most successful troll in contemporary culture. In his 11 January 2017 press conference, Trump angrily accused CNN and Buzzfeed of themselves being “fake news.” This proposition seems transparent, at first, as a clear act of trolling, a President’s subversive action against critical media outlets. Here, the insulting meaning of the term is retained, but its reference has shifted to cover the Trump-critical media organizations that first brought the term to ubiquity shortly after the 8 November 2016 election. The intention and meaning of the term have been turned against those who coined it.

In this context, the nature of the ‘post-truth’ era of politics appears simple. We are faced with two duelling conceptions of American politics and global social purpose. One is the Trump Administration, with its propositions about the danger of Islamist terror and the size of this year’s live Inauguration audience. The other is the usual collection of news outlets referred to as the mainstream media. Each gives a presentation of what is happening on a variety of topics; the two presentations are incompatible, yet each may be accurate to a greater or lesser degree in any given instance. The simple issue is that the Trump Administration pushes easily falsified, transparent propaganda, such as the lie about an Islamist-led mass murder in Bowling Green, Kentucky. This simple issue becomes an intractable problem because significantly large spaces in the contemporary media economy constitute a hardening of popular viewpoints into bubbles of self-reinforcing extremism. Thanks to Facebook’s sorting algorithms, there will likely always be a large group of Trumpists who will consider all his administration’s blatant lies to be truth.

This does not appear to be a problem for philosophy, but for public relations. We can solve this problem of the intractable audience for propaganda by finding or creating new paths to reach people in severely comforting information bubbles. There is a philosophical problem, but it is far more profound than even this practically difficult issue of outreach. The possibility conditions for the character of human society itself is the fundamental battlefield in the Trumpist era.

The accusation “You are fake news!” of Trump’s January press conference delivered a tactical subversion, rendering the original use of the term impossible. The moral aspects of this act of subversion appeared a few weeks later, in a 7 February interview that Trump Administration communications official Sebastian Gorka gave to Michael Medved. Gorka’s words first appear to be a straightforward instance of authoritarian delegitimizing of opposition, as he equates ‘fake news’ with opposition to President Trump. But Gorka goes beyond this simple gesture to contribute to a re-valuation of the values of subversion and opposition in our cultural discourse. He accuses Trump-critical news organizations of such a deep bias and hatred of President Trump and Trumpism that they themselves have failed to understand and perceive the world correctly. The mainstream media have become untrustworthy, says Gorka, not merely because many of their leaders and workers oppose President Trump, but because those people no longer understand the world as it is. As Breitbart’s messaging would tell us, the reason no longer to trust the mainstream media is their genuine ignorance. And because that ignorance is a genuine mistake about the facts of the world, the accusation of untrustworthiness is actually legitimate.

Real Failures of Knowledge

Donald Trump, as well as the political movements that backed his Presidential campaign and the anti-EU side of the Brexit referendum, knew something about the wider culture that many mainstream analysts and journalists did not: they knew that their victory was possible. This is not a matter of ideology, but a fact about the world. It is not a matter of interpretive understanding, like the symbolic meanings of a text, object, or gesture, but a matter of empirical knowledge. Nor is it a straightforward fact like the surface area of my apartment building’s front lawn or the number of Boeing aircraft owned by KLM. Discovering such a fact as the possibility and likelihood of an election or referendum victory, involving thousands of workers, billions of dollars of infrastructure and communications, and millions of people deliberating over their vote or refusal to vote, is a massively complicated process. But it is still an empirical process, and it can be achieved with varying levels of success and failure. In the two most radical reversals of the West’s (neo)liberal democratic political programs in decades, the press as an institution failed to understand what is and is not possible.

Not only that, these organizations know they have failed, and know that their failure harms their reputation as sources of trustworthy knowledge about the world. Their knowledge of their real inadequacy can be seen in their steps to repair their knowledge production processes. These efforts are not a submission to the propagandistic demands of the Trump Presidency, but an attempt to rebuild real research capacities after the internet era’s disastrous collapse of the traditional newspaper industry. Through most of the 20th century, the news media ecology of the United States consisted of a hierarchy of local, regional, and inter/national newspapers. Community papers reported on local matters, these reports were among the sources for content at regional papers, and those regional papers in turn provided source material for America’s internationally-known newsrooms in the country’s major urban centres. This information ecology was the primary route not only for content, but for general knowledge of cultural developments beyond those few urban centres.

With the 21st century, it became customary to read local and national news online for free, causing sales and advertising revenue for those smaller newspapers to collapse. The ensuing decades saw most entry-level journalism work become casual and precarious, cutting off entry to the profession for those who did not have the inherited wealth to subsidize their first money-losing working years. So most poor and middle-class people were cut off from work in journalism, removing their perspectives and positionality from the field’s knowledge production. The dominant newspaper culture that centred all content production in and around a local newsroom persisted into the internet era, forcing journalists to base themselves in major cities. So investigation outside major cities rarely went beyond parachute journalism: visits by reporters with little to no cultural familiarity with the region. This is a real failure of empirical knowledge-gathering processes. Facing this failure, major metropolitan news organizations like the New York Times and Mic have begun building networks of regional bureaus throughout the now-neglected regions of America, where local independent journalists are hired as contractual workers to bring their lived experiences to national audiences.

America’s Democratic Party suffered a similar failure of knowledge, having been certain that the Trump campaign could never have breached the midwestern regions – Michigan, Wisconsin, Pennsylvania – that for decades have been strongholds of their support in Presidential elections. I leave aside the critical issue of voter suppression in these states to concentrate on a more epistemic aspect of Trump’s victory. This was the campaign’s unprecedented ability to craft messages with nuanced detail. Cambridge Analytica, the data analysis firm that worked for both Trump and leave.eu, provided the power to understand and target voter outreach with almost individual specificity. This firm derives incredibly complex and nuanced data sets from the Facebook behaviour of hundreds of millions of people, and is the most advanced microtargeting analytics company operating today. They were able to craft messages intricately tailored to individual viewers and deliver them through Facebook advertising. So the Trump campaign has a legitimate claim to have won based on superior knowledge of the details of the electorate and how best to reach and influence them.
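The kind of individualized message targeting described above can be sketched as a toy scoring function. This is a hypothetical illustration only; the trait names, weights, and scoring rule are my own assumptions, not Cambridge Analytica’s actual models:

```python
def pick_message(voter_traits, messages):
    """Choose the ad variant whose target-trait profile best matches the voter,
    scored by a simple dot product of trait weights."""
    def score(msg):
        return sum(voter_traits.get(trait, 0.0) * weight
                   for trait, weight in msg["targets"].items())
    return max(messages, key=score)

# Hypothetical message variants, each aimed at a different psychological trait.
messages = [
    {"text": "Secure the border", "targets": {"authoritarianism": 0.9}},
    {"text": "Bring jobs back", "targets": {"economic_anxiety": 0.9}},
]

# A voter profile inferred (in the real case) from Facebook behaviour.
voter = {"economic_anxiety": 0.8, "authoritarianism": 0.1}
best = pick_message(voter, messages)
```

Scaled from two variants to thousands, and from one inferred trait profile to hundreds of millions derived from Facebook behaviour, this is the shape of the near-individual specificity claimed for the Trump campaign’s outreach.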

Battles Over the Right to Truth

With this essay, I have attempted an investigation that blends philosophy and journalism: an examination of the epistemological aspects of dangerous and important contemporary political and social phenomena and trends. After such a meditation, I feel confident in proposing the following conclusions.

1) Trumpist propaganda justifies itself with an exclusive and correct claim to reliability as a source of knowledge: that the Trump campaign was the only major information source covering the American election that was always certain of the possibility that they could win. That all other media institutions at some point did not understand or accept the truth of Trump’s victory being possible makes them less reliable than the Trump team and Trump personally.

2) The denial of a claim’s legitimacy as truth, and of an institution’s fidelity to informing people of truths, has become such a powerful weapon of political rhetoric that it has ended all cross-partisan agreement on what sources of information about the wider world are reliable.

3) Because of the second conclusion, journalism has become an unreliable set of knowledge production techniques. The most reliable source of knowledge about that election was the analysis of mass-mined Facebook profiles, the ground of all Trump’s public outreach communications. Donald Trump became President of the United States with the most powerful quantitative sociology research program in human history.

4) This is Trumpism’s most powerful claim to the mantle of society’s true subversives, the virtuous rebels overthrowing a corrupt mainstream. Trumpism’s victory, which no one but Trumpists themselves thought possible, is the greatest achievement of any troll. Trumpism has argued its opponents into submission, humiliated them for having lost, then turned out to be right anyway.

The statistical analysis and mass data mining of Cambridge Analytica made Trump’s knowledge superior to that of the entire journalistic profession. So the best contribution that social epistemology as a field can make to understanding our moment is bringing all its cognitive and conceptual resources to an intense analysis of statistical knowledge production itself. We must understand its strengths and weaknesses – what statistical knowledge production emphasizes in the world and what escapes its ability to comprehend. Social epistemologists must ask themselves and each other: What does qualitative knowledge discover and allow us to do, that quantitative knowledge cannot? How can the qualitative form of knowledge uncover a truth of the same profundity and power to popularly shock an entire population as Trump’s election itself?

Author Information: Frank Scalambrino, franklscalambrino@gmail.com

Shortlink: http://wp.me/p1Bfg0-3nI

Editor’s Note: As we near the end of an eventful 2016, the SERRC will publish reflections considering broadly the immediate future of social epistemology as an intellectual and political endeavor.

Please refer to:


Image credit: Walt Jabsco, via flickr

Presently my interest in social epistemology is primarily related to policy development. Though I continue to be interested in the ways technology influences the formation of social identities, I also want to examine corporate agency. On the one hand, this relates to the notion of persona ficta and the idea that, beyond the persons comprising a group, a group itself may be considered a “person.” Take, for example, search committees for tenure-track professor positions. There is a sense in which the committee is supposed to represent the interests of the persona ficta of some group, be it the department, the university, etc. Otherwise, it would simply be the case that the committees were representing their own desires, or merely applying a merit-based template; and though the former characterization may often be true, the latter is clearly not the case. Moreover, because the decision-making is supposed to be in the name of, and based on the authority of, the persona ficta, the members of the search committee are supposedly not personally responsible for the decisions made. The questions raised by such a situation, in which a persona ficta may be seen as a kind of mask covering the true social relations that determine the group’s decisions, I contextualize in terms of social epistemology.

On the other hand, I am interested in thinking about corporate agency and its efficacy in social environments. This is not unrelated to the question of the relation between the interests, knowledge, and actions of the corporate members which in some sense condition and sustain different types of (persona ficta) corporate agents. In other words, it is as if the collective interests, knowledge, and actions of members of a group constitute a kind of collective agent back to which changes in the world may be traced. I am interested in what I consider to be the ethical questions, which to some degree should factor into the various organizations of knowledge and power that sustain such corporate agents. To put it more narrowly and concretely: social epistemology may help us locate the points at which constitutive group members may be held accountable for contributions otherwise masked by some persona ficta. Subsequently, such accountability may be worked into policy development.