
Author Information: Adam Riggio, SERRC Digital Editor

Riggio, Adam. “Action in Harmony with a Global World.” Social Epistemology Review and Reply Collective 7, no. 3 (2018): 20-26.

The pdf of the article gives specific page references.

Image by cornie via Flickr / Creative Commons


Bryan Van Norden has become about as notorious as an academic philosopher can be while remaining a virtuous person. His notoriety came with a column in the New York Times that took the still-ethnocentric approach of many North American and European university philosophy departments to task. The condescending and insulting dismissal of great works of thought from cultures and civilizations beyond Europe and European-descended North America should scandalize us. That it does not is to the detriment of academic philosophy’s culture.

Anyone who cares about the future of philosophy as a tradition should read Taking Back Philosophy and take its lessons to heart, if one does not agree already with its purpose. The discipline of philosophy, as practiced in North American and European universities, must incorporate all the philosophical traditions of humanity into its curriculum and its subject matter. It is simple realism.

A Globalized World With No Absolute Hierarchies

I am not going to argue for this position, because I consider it obvious that this must be done. Taking Back Philosophy is a quick read, an introduction to a political task that philosophers, no matter their institutional homes, must support if the tradition is going to survive beyond the walls of universities increasingly co-opted by destructive economic, management, and human resources policies.

Philosophy as a creative tradition cannot survive in an education economy built on the back of student debt, where institutions’ priorities are set by a management class, yoked to capital investors and corporate partners, that prioritizes the proliferation of countless administrative-only positions while highly educated teachers and researchers compete ruthlessly for poverty wages.

With this larger context in mind, Van Norden’s call for the enlargement of departments’ curricula to cover all traditions is one of four essential pillars of a vision to liberate philosophy from the institutions that are destroying it as a viable creative process. Those four pillars are 1) universal accessibility, economically and physically; 2) community guidance of a university’s priorities; 3) restoring power over the institution to creative and research professionals; and 4) globalizing the scope of education’s content.

Taking Back Philosophy is a substantial brick through the window of the struggle to rebuild our higher education institutions along these democratic and liberating lines. Van Norden regularly publishes work of comparative philosophy that examines many problems of ethics and ontology using texts, arguments, and concepts from Western, Chinese, and Indian philosophy. But if you come to Taking Back Philosophy expecting more than a brick through those windows, you’ll be disappointed. One chapter walks through a number of problems as examples, but the sustained conceptual engagement of a creative philosophical work is absent. Only the call to action remains.

What a slyly provocative call it is – the book’s last sentence, “Let’s discuss it . . .”

Unifying a Tradition of Traditions

I find it difficult to write a conventional review of Taking Back Philosophy, because so much of Van Norden’s polemic is common sense to me. Of course, philosophy departments must be open to primary material from all the traditions of the human world, not just the Western. I am incapable of understanding why anyone would argue against this, given how globalized human civilization is today. For the context of this discussion, I will consider a historical and a technological aspect of contemporary globalization. Respectively, these are the fall of the European military empires, and the incredible intensity with which contemporary communications and travel technology integrates people all over Earth.

We no longer live in a world dominated by European military colonial empires, so re-emerging centres of culture and economics must be taken on their own terms. The Orientalist presumption, which Edward Said spent a career mapping, that there is no serious difference among Japanese, Malay, Chinese, Hindu, Turkic, Turkish, Persian, Arab, Levantine, or Maghreb cultures is not only wrong, but outright stupid. Orientalism as an academic discipline thrived for the centuries it did only because European weaponry intentionally and persistently kept those cultures from asserting themselves.

Indigenous peoples – throughout the Americas, Australia, the Pacific, and Africa – who have been the targets of cultural and eradicative genocides for centuries now claim and agitate for their human rights, as well as inclusion in the broader human community and species. I believe most people of conscience are appalled and depressed that these claims are controversial at all, and even seen by some as a sign of civilizational decline.

The impact of contemporary technology I consider an even more important factor than the end of imperialist colonialism in the imperative to globalize the philosophical tradition. Despite the popular rhetoric of contemporary globalization, the human world has been globalized for millennia. Long-distance international trade and communication began virtually as soon as urban life first developed.

Here are some examples. Some of the first major cities of ancient Babylon achieved their greatest economic prosperity through trade with cities in the south of the Arabian Peninsula, and as far east along the Indian Ocean coast as Balochistan. From 4000 to 1000 years ago, Egyptian, Roman, Greek, Persian, Arab, Chinese, Mongol, Indian, Bantu, Malian, Inca, and Anishinaabeg peoples, among others, built trade networks and institutions stretching across continents.

Contemporary globalization is different in the speed and quantity of commerce, and diversity of goods. It is now possible to reach the opposite side of the planet in a day’s travel, a journey so ordinary that tens of millions of people take these flights each year. Real-time communication is now possible between any two places on Earth with broadband internet connections, thanks to satellite networks and undersea fibre-optic cables. In 2015, the total material value of all goods and commercial services traded internationally was US$21-trillion. That’s a drop from the previous year’s all-time (literally) high of US$24-trillion.[1]

Travel, communication, and productivity have never been so massive or intense in all of human history. The major control hubs of the global economy are no longer centralized in a small set of colonial powers, but spread across a variety of economic centres throughout the world, depending on industry: from Beijing, Moscow, Mumbai, Lagos, and Berlin to Tokyo and Washington, the oil fields of Kansas, the Dakotas, Alberta, and Iraq, and the coltan, titanium, and tantalum mines of Congo, Kazakhstan, and China.

All these proliferating lists express a simple truth – all cultures of the world now legitimately claim recognition as equals, as human communities sharing our Earth as we hollow it out. Philosophical traditions from all over the world are components of those claims to equal recognition.

The Tradition of Process Thought

So that is the situation forcing a recalcitrant and reactionary academy to widen its curricular horizons – Do so, or face irrelevancy in a global civilization with multiple centres all standing as civic equals in the human community. This is where Van Norden himself leaves us. Thankfully, he understands that a polemic ending with a precise program immediately becomes empty dogma, a conclusion which taints the plausibility of an argument. His point is simple – that the academic discipline must expand its arms. He leaves open the more complex questions of how the philosophical tradition itself can develop as a genuinely global community.

Process philosophy is a relatively new philosophical tradition, which can adopt the classics of Daoist philosophy as broad frameworks and guides. By process philosophy, I mean the research community that has grown around Gilles Deleuze and Félix Guattari as primary innovators of their model of thought – a process philosophy that converges with an ecological post-humanism. The following are some essential aspects of this new school of process thinking, each principle in accord with the core concepts of the foundational texts of Daoism, Dao De Jing and Zhuang Zi.

Ecological post-humanist process philosophy is a thorough materialism, but it is an anti-reductive materialism. All that exists is bodies of matter and fields of force, whose potentials include everything for which Western philosophers have often felt obligated to postulate a separate substance over and above matter, whether calling it mind, spirit, or soul.

As process philosophy, the emphasis in any ontological analysis is on movement, change, and relationships instead of the more traditional Western focus on identity and sufficiency. If I can refer to examples from the beginning of Western philosophy in Greece, process thought is an underground movement with the voice of Heraclitus critiquing a mainstream with the voice of Parmenides. Becoming, not being, is the primary focus of ontological analysis.

Process thinking therefore is primarily concerned with potential and capacity. Knowledge, in process philosophy, as a result becomes inextricably bound with action. This unites a philosophical school identified as “Continental” in common-sense categories of academic disciplines with the concerns of pragmatist philosophy. Analytic philosophy took up many concepts from early 20th century pragmatism in the decades following the death of John Dewey. These inheritors, however, remained unable to overcome the paradoxes stymieing traditional pragmatist approaches, particularly how to reconcile truth as correspondence with knowledge having a purpose in action and achievement.

A solution to this problem of knowledge and action was developed in the works of Barry Allen during the 2000s. Allen built an account of perception that was rooted in contemporary research in animal behaviour, human neurology, and the theoretical interpretations of evolution in the works of Stephen Jay Gould and Richard Lewontin.

His first analysis, focussed as it was on the dynamics of how human knowledge spurs technological and civilizational development, remains humanistic. Arguing from discoveries of how profoundly the plastic human brain is shaped in childhood by environmental interaction, Allen concludes that successful or productive worldly action itself constitutes the correspondence of our knowledge and the world. Knowledge does not consist of a private reserve of information that mirrors worldly states of affairs, but the physical and mental interaction of a person with surrounding processes and bodies to constitute those states of affairs. The plasticity of the human brain and our powers of social coordination are responsible for the peculiarly human mode of civilizational technology, but the same power to constitute states of affairs through activity is common to all processes and bodies.[2]

“Water is fluid, soft, and yielding. But water will wear away rock, which is rigid and cannot yield. Whatever is soft, fluid, and yielding will overcome whatever is rigid and hard.” – Lao Zi
The Burney Falls in Shasta County, Northern California. Image by melfoody via Flickr / Creative Commons


Action in Phase With All Processes: Wu Wei

Movement of interaction constitutes the world. This is the core principle of pragmatist process philosophy, and as such brings this school of thought into accord with the Daoist tradition. Ontological analysis in the Dao De Jing is entirely focussed on vectors of becoming – understanding the world in terms of its changes, movements, and flows, as each of these processes integrate in the complexity of states of affairs.

Not only is the Dao De Jing a foundational text in what is primarily a process tradition of philosophy, but it is also primarily pragmatist. Its author Lao Zi frames ontological arguments in practical concerns, as when he writes, “The most supple things in the world ride roughshod over the most rigid” (Dao De Jing §43). This is a practical and ethical argument against a Parmenidean conception of identity requiring stability as a necessary condition.

What cannot change cannot continue to exist, as the turbulence of existence will overcome and erase what can exist only by never adapting to the pressures of overwhelming external forces. What can only exist by being what it now is, will eventually cease to be. That which exists in metamorphosis and transformation has a remarkable resilience, because it is able to gain power from the world’s changes. This Daoist principle, articulated in such abstract terms, is in Deleuze and Guattari’s work the interplay of the varieties of territorializations.

Knowledge in the Chinese tradition, as a concept, is determined by an ideal of achieving harmonious interaction with an actor’s environment. Knowing facts of states of affairs – including their relationships and tendencies to spontaneous and proliferating change – is an important element of comprehensive knowledge. Nonetheless, Lao Zi describes such catalogue-friendly factual knowledge as “Those who know are not full of knowledge. Those full of knowledge do not know” (Dao De Jing §81). Knowing the facts alone is profoundly inadequate to knowing how those facts constrict and open potentials for action. Perfectly harmonious action is the model of the Daoist concept of Wu Wei – knowledge of the causal connections among all the bodies and processes constituting the world’s territories understood profoundly enough that self-conscious thought about them becomes unnecessary.[3]

Factual knowledge is only a condition of achieving the purpose of knowledge: perfectly adapting your actions to the changes of the world. All organisms’ actions change their environments, creating physically distinctive territories: places that, were it not for my action, would be different. In contrast to the dualistic Western concept of nature, the world in Daoist thought is a complex field of overlapping territories whose tensions and conflicts shape the character of places. Fulfilled knowledge in this ontological context is knowledge that directly conditions your own actions and the character of your territory to harmonize most productively with the actions and territories that are always flowing around your own.

Politics of the Harmonious Life

The Western tradition, especially in its current sub-disciplinary divisions of concepts and discourses, has treated problems of knowledge as a domain separate from ethics, morality, politics, and fundamental ontology. Social epistemology is one field of the transdisciplinary humanities that unites knowledge with political concerns, but its approaches remain controversial in much of the conservative mainstream academy. The Chinese tradition has fundamentally united knowledge, moral philosophy, and all fields of politics, especially political economy, since the popular eruption of Daoist thought in the Warring States period 2300 years ago. Philosophical writing throughout eastern Asia since then has operated in this field of thought.

As such, Dao-influenced philosophy has much to offer contemporary progressive political thought, especially the new communitarianism of contemporary social movements with their roots in Indigenous decolonization, advocacy for racial, sexual, and gender liberation, and 21st century socialist advocacy against radical economic inequality. In terms of philosophical tools and concepts for understanding and action, these movements have dense forebears, but only a recent explicit tradition.

The movement for economic equality and a just globalization draws on Antonio Gramsci’s introduction of radical historical contingency to the Marxist tradition. Contemporary feminism’s phenomenological and testimonial principles and concepts are extremely powerful and viscerally rooted in the lived experience of subordinated – what Deleuze and Guattari called minoritarian – people as groups and individuals, yet its explicit conceptual resources are likewise only a century-old storehouse of discourse. Indigenous liberation traditions draw from a variety of philosophical traditions lasting millennia, but the ongoing systematic and systematizing revival is almost entirely a 21st century practice.

Antonio Negri, Rosi Braidotti, and Isabelle Stengers’ masterworks unite an analysis of humanity’s destructive technological and ecological transformation of Earth and ourselves with solutions to those problems rooted in communitarian moralities and politics of seeking harmony while optimizing personal and social freedom. Daoism offers literally thousands of years of work in the most abstract metaphysics on the nature of freedom in harmony and flexibility in adaptation to contingency. Such conceptual resources are of immense value to these and related philosophical currents that are only just beginning to form explicitly in notable size in the Western tradition.

Van Norden has written a book that is, for philosophy as a university discipline, a wake-up call to this obstinate branch of the Western academy. The world around you is changing, and if you hold so fast to the contingent borders of your tradition, your territory will be overwritten, trampled, torn to bits. Live and act harmoniously with the changes that are coming. Change yourself.

It isn’t so hard to read some Lao Zi for a start.



Allen, Barry. Knowledge and Civilization. Boulder, Colorado: Westview Press, 2004.

Allen, Barry. Striking Beauty: A Philosophical Look at the Asian Martial Arts. New York: Columbia University Press, 2015.

Allen, Barry. Vanishing Into Things: Knowledge in Chinese Tradition. Cambridge: Harvard University Press, 2015.

Bennett, Jane. Vibrant Matter: A Political Ecology of Things. Durham: Duke University Press, 2010.

Betasamosake Simpson, Leanne. As We Have Always Done: Indigenous Freedom Through Radical Resistance. Minneapolis: University of Minnesota Press, 2017.

Bogost, Ian. Alien Phenomenology, Or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press, 2012.

Braidotti, Rosi. The Posthuman. Cambridge: Polity Press, 2013.

Chew, Sing C. World Ecological Degradation: Accumulation, Urbanization, and Deforestation, 3000 B.C. – A.D. 2000. Walnut Creek: AltaMira Press, 2001.

Deleuze, Gilles. Bergsonism. Translated by Hugh Tomlinson and Barbara Habberjam. New York: Zone Books, 1988.

Negri, Antonio, and Michael Hardt. Assembly. New York: Oxford University Press, 2017.

Parikka, Jussi. A Geology of Media. Minneapolis: University of Minnesota Press, 2015.

Riggio, Adam. Ecology, Ethics, and the Future of Humanity. New York: Palgrave Macmillan, 2015.

Stengers, Isabelle. Cosmopolitics I. Translated by Robert Bononno. Minneapolis: University of Minnesota Press, 2010.

Stengers, Isabelle. Cosmopolitics II. Translated by Robert Bononno. Minneapolis: University of Minnesota Press, 2011.

Van Norden, Bryan. Taking Back Philosophy: A Multicultural Manifesto. New York: Columbia University Press, 2017.

World Trade Organization. World Trade Statistical Review 2016.

[1] That US$3-trillion drop in trade was largely the proliferating effect of the sudden price drop of human civilization’s most essential good, crude oil, to just less than half of its 2014 value.

[2] A student of Allen’s arrived at this conclusion in combining his scientific pragmatism with the French process ontology of Deleuze and Guattari in the context of ecological problems and eco-philosophical thinking.

[3] This concept of knowledge as perfectly harmonious but non-self-conscious action also conforms to Henri Bergson’s concept of intuition, the highest (so far) form of knowledge that unites the perfect harmony in action of brute animal instinct with the self-reflective and systematizing power of human understanding. This is a productive way for another creative contemporary philosophical path – the union of vitalist and materialist ideas in the work of thinkers like Jane Bennett – to connect with Asian philosophical traditions for centuries of philosophical resources on which to draw. But that’s a matter for another essay.

Author Information: Paul R. Smart, University of Southampton

Smart, Paul R. “(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 45-55.

The pdf of the article gives specific page references.


Image by BTC Keychain via Flickr / Creative Commons


Richard Heersmink’s (2018) article, A virtue epistemology of the Internet: Search engines, intellectual virtues, and education, provides an important and timely analysis of the Internet from the standpoint of virtue epistemology.[1] According to Richard, the Internet is an important epistemic resource, but it is one that comes with a range of epistemic hazards. Such hazards, he suggests, motivate a consideration of the ways in which individuals should interact with the Internet.

In particular, Richard appeals to a specific branch of virtue epistemology, known as virtue responsibilism, arguing that certain kinds of cognitive trait (e.g., curiosity and open-mindedness) are useful in helping us press maximal epistemic benefit from the Internet. Given the utility of such traits, coupled with the epistemic importance of the Internet, Richard suggests that educational policy should be adapted so as to equip would-be knowers with the cognitive wherewithal to cope with the epistemic challenges thrown up by the online environment.

There is, no doubt, something right about all this. Few would disagree with the claim that a certain level of discernment and discrimination is important when it comes to the evaluation of online content. Whether such ‘virtues’ are best understood from the perspective of virtue responsibilism or virtue reliabilism is, I think, a moot point, for I suspect that in the case of both virtue responsibilism and virtue reliabilism what matters is the way in which belief-forming informational circuits are subject to active configuration by processes that may be broadly construed as metacognitive in nature (Smart, in pressa). That, however, is a minor quibble, and it is one that is of little consequence to the issues raised in Richard’s paper.

For the most part, then, I find myself in agreement with many of the assumptions that motivate the target article. I agree that the Internet is an important epistemic resource that is unprecedented in terms of its scale, scope, and accessibility. I also agree that, at the present time, the Internet is far from an epistemically safe environment, and this raises issues regarding the epistemic standing of individual Internet users. In particular, it looks unlikely that the indiscriminate selection and endorsement of online information will do much to bolster one’s epistemic credentials.

We thus encounter something of a dilemma: As an epistemic resource, the Internet stands poised to elevate our epistemic standing, but as an open and public space the Internet provides ample opportunities for our doxastic systems to be led astray. The result is that we are obliged to divide the online informational cornucopia into a treasure trove of genuine facts and a ragbag collection of ‘false facts’ and ‘fake news.’ The information superhighway, it seems, promises to expand our epistemic power and potential, but the road ahead is one that is fraught with a dizzying array of epistemic perils, problems, and pitfalls. What ought we to do in response to such a situation?

It is at this point that I suspect my own views start to diverge from those of the target article. Richard’s response to the dilemma is to focus attention on the individual agent and consider the ways in which an agent’s cognitive character can be adapted to meet the challenges of the Internet. My own approach is somewhat different. It is borne out of three kinds of doubt: doubts about the feasibility (although not the value) of virtue-oriented educational policies, doubts about the basic validity of virtue theoretic conceptions of knowledge, and doubts about whether the aforementioned dilemma is best resolved by attempting to change the agent as opposed to the environment in which the agent is embedded. As always, space is limited and life is short, so I will restrict my discussion to issues that I deem to be of greatest interest to the epistemological community.

Reliable Technology

Inasmuch as intellectual virtues are required for online knowledge—i.e., knowledge that we possess as a result of our interactions and engagements with the Internet—they are surely only part of a much broader (and richer) story that includes details about the environment in which our cognitive systems operate. In judging the role of intellectual virtue in shielding us from the epistemic hazards of the online environment, it therefore seems important to have some understanding of the actual technologies we interact with.

This is important because it helps us understand the kinds of intellectual virtue that might be required, as well as the efficacy of specific intellectual virtues in helping us believe the truth (and thus working as virtues in the first place). Internet technologies are, of course, many and varied, and it will not be possible to assess their general relevance to epistemological debates in the present commentary. For the sake of brevity, I will therefore restrict my attention to one particular technology: blockchain.

Blockchain is perhaps best known for its role in supporting the digital cryptocurrency, Bitcoin. It provides us with a means of storing data in a secure fashion, using a combination of data encryption and data linking techniques. For present purposes, we can think of a blockchain as a connected set of data records (or data blocks), each of which contains some body of encrypted data. In the case of Bitcoin, of course, the data blocks contain data of a particular kind, namely, data pertaining to financial transactions. But this is not the only kind of data that can be stored in a blockchain. In fact, blockchains can be used to store information about pretty much anything. This includes online voting records, news reports, sensor readings, personal health records, and so on.

Once data is recorded inside a blockchain, it is very difficult to modify. In essence, the data stored within a blockchain is immutable, in the sense that it cannot be changed without ‘breaking the chain’ of data blocks, and thereby invalidating the data contained within the blockchain. This property makes blockchains of considerable epistemic significance, because it speaks to some of the issues (e.g., concerns about data tampering and malign forms of information manipulation) that are likely to animate epistemological debates in this area.
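To make the idea of ‘breaking the chain’ concrete, here is a minimal hash-chain sketch in Python (my own illustration using only the standard library, not any production blockchain; real systems add consensus protocols and peer-to-peer replication on top of this basic structure). Each block stores the hash of the previous block, so editing any earlier record invalidates every hash that follows it.

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    # Hash the block's contents together with the previous block's hash.
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records):
    # Build a simple hash chain: each block records the hash of its predecessor.
    chain, prev_hash = [], "0" * 64  # placeholder hash for the genesis block
    for i, record in enumerate(records):
        h = block_hash(i, record, prev_hash)
        chain.append({"index": i, "data": record, "prev": prev_hash, "hash": h})
        prev_hash = h
    return chain

def is_valid(chain):
    # Recompute every hash; any edit to an earlier block 'breaks the chain'.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["index"], block["data"], block["prev"]):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = build_chain(["sensor reading: 14.2C", "sensor reading: 14.7C"])
print(is_valid(chain))                      # True: the chain is intact
chain[0]["data"] = "sensor reading: 99.9C"  # tamper with an earlier record
print(is_valid(chain))                      # False: the stored hash no longer matches
```

The point of the sketch is simply that the stored data is tamper-evident: the records themselves are not hidden, but any retrospective alteration is immediately detectable.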

This does not mean, of course, that the information stored within a blockchain is guaranteed to be factually correct, in the sense of being true and thus yielding improvements in epistemic standing. Nevertheless, there are, I think, reasons to regard blockchain as an important technology relative to efforts to make the online environment a somewhat safer place for would-be knowers. Consider, for example, the title of the present article. Suppose that we wanted to record the fact that a person known as Paul Smart—that’s me—wrote an article with the title:

(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!

We can incorporate this particular piece of information into a blockchain using something called a cryptographic hash function, which yields a unique identifier for the block and all of its contents. In the case of the aforementioned title, the cryptographic hash (as returned by the SHA256 algorithm[2]) is:


Now suppose that someone wants to alter the title, perhaps to garner support for an alternative argumentative position. In particular, let’s suppose they want to claim that the title of the article is:

Fake News Alert: Intellectual Virtues Required for Online Knowledge!

From an orthographic perspective, of course, not much has changed. But the subtlety of the alteration is not something that can be used to cause confusion about the actual wording of the original title—the title that I intended for the present article. (Neither can it be used to cast doubt about the provenance of the paper—the fact that the author of the paper was a person called Paul Smart.) To see this, note that the hash generated for the ‘fake’ title looks nothing like the original:


It is this property that, at least in part, makes blockchains useful for recording information that might otherwise be prone to epistemically malign forms of information manipulation. Imagine, for the sake of argument, that climatological data, as recorded by globally distributed sensors, was stored in a blockchain. The immutability of such data makes it extremely difficult for anyone to manipulate the data in such a way as to confirm or deny the reality of year-on-year changes in global temperature. Neither is it easy to alter information pertaining to the provenance of existing data records, i.e., information about when, where, and how such data was generated.
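The hashing behaviour described above is easy to check directly. The following is a minimal sketch (my own illustration, using Python’s standard hashlib module rather than any blockchain software) that computes SHA-256 digests for the original and the altered titles; a single-character change in the input yields a completely different digest.

```python
import hashlib

original = "(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!"
altered = "Fake News Alert: Intellectual Virtues Required for Online Knowledge!"

# SHA-256 is deterministic: the same input always yields the same digest,
# while even a one-character change produces an entirely different digest.
for title in (original, altered):
    print(hashlib.sha256(title.encode("utf-8")).hexdigest())
```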

None of this should delude us into thinking that blockchain technology is a panacea for Internet-related epistemic problems—it isn’t! Neither does blockchain obviate the need for agents to exercise at least some degree of intellectual virtue when it comes to the selection and evaluation of competing data streams. Nevertheless, there is, I think, something that is of crucial epistemological interest and relevance here—something that makes blockchain and other cybersecurity technologies deserving of further epistemological attention. In particular, such technologies may be seen as enhancing the epistemic safety of the online environment, and thus perhaps reducing the need for intellectual virtue.

In this sense, the epistemological analysis of Internet technologies may be best approached from some variant of modal epistemology—e.g., epistemological approaches that emphasize the modal stability of true beliefs across close possible worlds (Pritchard, 2009, chap. 2). But even if we choose to countenance an approach that appeals to issues of intellectual virtue, there is still, I suggest, a need to broaden the analytic net to include technologies that (for the time being at least) lie beyond the bounds of the individual cognitive agent.

Safety in Numbers

“From an epistemic perspective,” Richard writes, “the most salient dimension of the Internet is that it is an information space” (Heersmink, 2018, p. 5). Somewhat surprisingly, I disagree. Although it is obviously true that the Internet is an information space, it is not clear that this is its most salient feature, at least from an epistemological standpoint. In particular, there is, I suggest, a sense in which the Internet is more than just an information space. As is clear from the explosive growth in all things social—social media, social networks, social bots, and so on—the Internet functions as a social technology, yielding all manner of opportunities for people to create, share and process information in a collaborative fashion. The result, I suggest, is that we should not simply think of the Internet as an information space (although it is surely that), we should also view it as a social space.

Viewing the Internet as a social space is important because it changes the way we think about the epistemic impact of the Internet, relative to the discovery, production, representation, acquisition, processing and utilization of knowledge. Smart (in pressb), for example, suggests that some online systems function as knowledge machines, which are systems in which some form of knowledge-relevant processing is realized by a socio-technical mechanism, i.e., a mechanism whose component elements are drawn from either the social (human) or the technological realm.

An interesting feature of many of these systems is the way in which the reliable (or truth-conducive) nature of the realized process is rooted in the socio-technical nature of the underlying (realizing) mechanism. When it comes to human computation or citizen science systems, for example, user contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs (Smart, in pressb; Smart and Shadbolt, in press; Watson and Floridi, 2018). Such insights highlight the socially-distributed character of at least some forms of online knowledge production, thereby moving us beyond the realms of individual, agent-centric analyses.

On a not altogether unrelated note, it is important to appreciate the way in which social participation can itself be used to safeguard online systems from various forms of malign intervention. One example is provided by the Google PageRank algorithm. In this case, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally-distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. This makes it difficult for any single agent to subvert the operation of the PageRank algorithm.
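The weighting scheme can be made concrete with a short sketch (my own simplified rendering of the classic PageRank power iteration, not Google’s production system): each page divides its current score among the pages it links to, so an inbound link only counts for much when it comes from a page that is itself well linked.

```python
def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            if not targets:
                continue  # pages with no outgoing links simply let their score leak in this sketch
            share = rank[page] / len(targets)
            for target in targets:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# A hypothetical toy web: d, e, and f form a link farm pointing at 'spam'.
web = {
    "a": ["b", "c"], "b": ["c"], "c": ["a"],
    "d": ["spam"], "e": ["spam"], "f": ["spam"], "spam": [],
}
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))
```

In this toy web the farm pages d, e, and f all point at ‘spam’, but because nothing points at them, the rank they can pass along stays small; that is the sense in which a lone agent struggles to subvert the ranking by manufacturing links.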

Even ostensibly non-social technologies can be seen to rely on the distributed and decentralized nature of the Internet. In the case of blockchain, for example, multiple elements of a peer-to-peer network participate in the computational processes that make blockchain work. In this way, the integrity of the larger system is founded on the collaborative efforts of an array of otherwise independent computational elements. And it is this that (perhaps) allows us to think of blockchain’s epistemically-desirable features as being rooted in something of a ‘social’ substrate.

All of this, I suggest, speaks in favor of an approach that moves beyond a preoccupation with the properties of individual Internet users. In particular, there seems to be considerable merit in approaching the Internet from a more socially-oriented epistemological perspective. It is easy to see the social aspects of the Internet as lying at the root of a panoply of epistemic concerns, especially when it comes to the opportunities for misinformation, deception, and manipulation. But in light of the above discussion, perhaps an alternative, more positive, take on the Internet (qua social space) starts to come into sharper focus. This is a view that highlights the way in which certain kinds of online system can work to transform a ‘vice’ into a ‘virtue,’ exploiting the social properties of the Internet for the purposes of dealing with reliability-related concerns.

Image by Dariorug via Flickr / Creative Commons


Filter Bubblicious

Search engines form one of the focal points of Richard’s analysis, and, as with previous work in this area, Richard finds at least some aspects of their operation to be highly problematic. A particular issue surfaces in respect of personalized search. Here, Richard’s analysis echoes the sentiments expressed by other epistemologists who regard personalized search algorithms as of dubious epistemic value.

In fact, I suspect the consensus that has emerged in this area fails to tell the whole story about the epistemic consequences of personalized search. Indeed, from a virtue epistemological position, I worry that epistemologists are in danger of failing to heed their own advice—prematurely converging on a particular view without proper consideration of competing positions. In my new-found role as the virtue epistemologist’s guardian angel (or should that be devil’s advocate?), I will attempt to highlight a couple of reasons why I think more empirical research is required before we can say anything useful about the epistemological impact of personalized search algorithms.

My first worry is that our understanding of the extent to which search results and subsequent user behavior are affected by personalization is surprisingly poor. Consider, for example, the results of one study, which attempted to quantify the effect of personalization on search results (Hannak et al., 2013). Using an empirical approach, Hannak et al. (2013) report a demonstrable personalization effect, with 11.7% of search results exhibiting differences due to personalization. Interestingly, however, the effect of personalization appeared to be greater for search results with lower rankings; highly ranked results (i.e., those appearing at the top of a list of search results) appeared to be much less affected by personalization.

This result is interesting given the observation that college students “prefer to click on links in higher positions even when the abstracts are less relevant to the task at hand” (Heersmink, 2018, p. 6). From one perspective, of course, this tendency looks like a vice that jeopardizes the epistemic standing of the individual user. And yet, from another perspective, it looks like the preference for higher ranked search results is poised to negate (or at least reduce) the negative epistemological effects of personalized search. What we seem to have here, in essence, is a situation in which one kind of ‘intellectual vice’ (i.e., a tendency to select highly-ranked search results) is playing something of a more positive (virtuous?) role in mitigating the negative epistemological sequelae of a seemingly vicious technology (i.e., personalized search).

None of this means that the epistemic effects of personalized search are to the overall benefit of individual users; nevertheless, the aforementioned results do call for a more nuanced and empirically informed approach when considering the veritistic value of search engines, as well as other kinds of Internet-related technology.

A second worry relates to the scope of the epistemological analysis upon which judgements about the veritistic value of search engines are based. In this case, it is unclear whether analyses that focus their attention on individual agents are best placed to reveal the full gamut of epistemic costs and benefits associated with a particular technology, especially one that operates in the socio-technical ecology of the Internet. To help us understand this worry in a little more detail, it will be useful to introduce the notion of mandevillian intelligence (Smart, in pressc; Smart, in pressd).

Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, e.g., at the level of collective doxastic agents (see Palermos, 2015) or socio-epistemic systems (see Goldman, 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.

While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman, 2010) and assisting with the epistemically optimal division of cognitive labor (see Muldoon, 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary according to whether our epistemological gaze is directed towards individual epistemic agents or the collective ensembles in which those agents are situated.

The Necessity of Virtue

As Richard notes, virtue epistemology is characterized by a shift in emphasis, away from the traditional targets of epistemological analysis (e.g., truth, justification and belief) and towards the cognitive properties of would-be knowers. “Virtue epistemology,” Richard writes, “is less concerned with the nature of truth and more concerned with the cognitive character of agents” (Heersmink, 2018, p. 2). This is, no doubt, a refreshing change, relative to the intellectual orientation of traditional philosophical debates.

Nevertheless, I assume that virtue epistemologists still recognize the value and priority of truth when it comes to issues of epistemic evaluation. Someone who holds false beliefs is not the possessor of knowledge, and this remains the case irrespective of whatever vices and virtues the agent has. In other words, it does not matter how careful, attentive and assiduous an agent is in selecting and evaluating information, if what the agent believes is false, they simply do not know.

What seems to be important in the case of virtue epistemology is the role that intellectual virtue plays in securing the truth of an agent’s beliefs. In particular, the central feature of virtue epistemology (at least to my mind) is that the truth of an agent’s beliefs stems from the exercise of intellectual virtue. It is thus not the case that truth is unimportant (or less important) when it comes to issues of positive epistemic standing; rather, what matters is the role that intellectual virtue plays in establishing the truth of an agent’s beliefs. An agent is thus a bona fide knower when they believe the truth and the truth in question is attributable to some aspect of their cognitive character, specifically, a cognitive trait (virtue responsibilism) or cognitive faculty (virtue reliabilism).

What then makes something a vice or virtue seems to be tied to the reliability of token instantiations of processes that are consistent with an agent’s cognitive character. Intellectual virtues are thus “cognitive character traits that are truth-conducive and minimalise error” (Heersmink, 2018, p. 3), while intellectual vices are characterized as “cognitive character traits that are not truth-conducive and do not minimalise error” (Heersmink, 2018, p. 3). It is this feature of the intellectual virtues—the fact that they are, in general, reliable (or give rise to reliable belief-relevant processes)—that looks to be important when it comes to issues of epistemic evaluation.

So this is what I find problematic about virtue theoretic approaches to knowledge. (Note that I am not an epistemologist by training, so this will require a generous—and hopefully virtue-inspiring—swig of the ole intellectual courage.) Imagine a state-of-affairs in which the Internet was (contrary to the present state-of-affairs) a perfectly safe environment—one where the factive status of online information was guaranteed as a result of advances in cyber-security techniques and intelligent fact-checking services. Next, let us imagine that we have two individuals, Paul and Sophia, who differ with respect to their cognitive character. Paul is the least virtuous of the two, unreflectively and automatically accepting whatever the Internet tells him. Sophia is more circumspect, wary of being led astray by (the now non-existent) fake news.

Inasmuch as we see the exercise of intellectual virtue as necessary for online knowledge, it looks unlikely that poor old Paul can be said to know very much. This is because the truth of Paul’s beliefs is not the result of anything that warrants the label ‘intellectual virtue.’ Paul, of course, does have a lot of true beliefs, but the truth of these beliefs does not stem from the exercise of his intellectual virtues—if, indeed, he has any. In fact, inasmuch as there is any evidence of virtue in play here, it is probably best attributed to the technologies that work to ensure the safety of the online environment. The factive status of Paul’s beliefs thus has more to do with the reliability of the Internet than it does with the elements of his cognitive character.

But is it correct to say that Paul has no online knowledge in this situation? Personally, I do not have this intuition. In other words, in a perfectly safe environment, I can see no reason why we should restrict knowledge attributions to agents whose beliefs are true specifically as the result of intellectual virtue. My sense is that even the most unreflective of agents could be credited with knowledge in a situation where there was no possibility of them being wrong. And if that is indeed the case, then why insist that it is only the exercise of intellectual virtue that underwrites positive epistemic standing?

After all, it seems perfectly possible, to my mind, that Sophia’s epistemic caution contributes no more to the minimization of error in an epistemically benign (i.e., safe) environment than does Paul’s uncritical acceptance. (In fact, given the relative efficiency of their doxastic systems, it may very well be the case that Sophia ends up with fewer true beliefs than Paul.) It might be claimed that this case is invalidated by a failure to consider the modal stability of an agent’s beliefs relative to close possible worlds, as well as perhaps their sensitivity to counterfactual error possibilities. But given the way in which the case is characterized, I suggest that there are no close possible worlds that should worry us—the cybersecurity and fact checking technologies are, let us assume, sufficiently robust as to ensure the modal distance of those worrisome worlds.

One implication of all this is to raise doubts about the necessity of intellectual virtue, relative to our conceptual understanding of knowledge. If there are cases where intellectual virtue is not required for positive epistemic standing, then intellectual virtue cannot be a necessary condition for knowledge attribution. And if that is the case, then why should intellectual virtue form the basis of an approach that is intended to deal with the epistemic shortcomings of the (contemporary) Internet?

Part of the attraction of virtue epistemology, I suspect, is the way in which a suite of generally reliable processes are inextricably linked to the agent who is the ultimate target of epistemic evaluation. This linkage, which is established via the appeal to cognitive character, helps to ensure the portability of an agent’s truth-tracking capabilities—it helps to ensure, in other words, that wherever the agent goes their reliable truth-tracking capabilities are sure to follow.

However, in an era where our doxastic systems are more-or-less constantly plugged into a reliable and epistemically safe environment, it is not so clear that agential capabilities are relevant to epistemic standing. This, I suggest, raises doubts about the necessity of intellectual virtue in securing positive epistemic status, and it also (although this is perhaps less clear) encourages us to focus our attention on some of the engineering efforts (as opposed to agent-oriented educational programs) that might be required to make the online world an epistemically safer place.


What, then, should we make of the appeal to virtue epistemology in our attempt to deal with the epistemic hazards of the Internet? My main concern is that the appeal to virtue epistemology (and the emphasis placed on intellectual virtue) risks an unproductive focus on individual human agents at the expense of both the technological and social features of the online world. This certainly does not rule out the relevance of virtue theoretic approaches as part of our attempt to understand the epistemic significance of the Internet, but other approaches (e.g., modal reliabilism, process reliabilism, distributed reliabilism, and systems-oriented social epistemology) also look to be important.

Personally, I remain agnostic with regard to the relevance of different epistemological approaches, although I worry about the extent to which virtue epistemology is best placed to inform policy-related decisions (e.g., those relating to education). In particular, I fear that by focusing our attention on individual agents and issues of intellectual virtue, we risk overlooking some of the socio-epistemic benefits of the Internet, denigrating a particular technology (e.g., personalized search) on account of its failure to enhance individual knowledge, while ignoring the way a technology contributes to more collective forms of epistemic success.

In concluding his thought-provoking paper on virtue epistemology and the Internet, Richard suggests that “there is an important role for educators to teach and assess [intellectual] virtues as part of formal school and university curricula, perhaps as part of critical thinking courses” (Heersmink, 2018, p. 10). I have said relatively little about this particular issue in the present paper. For what it’s worth, however, I can see no reason to object to the general idea of Internet-oriented educational policies. The only caveat, perhaps, concerns the relative emphasis that might be placed on the instillation of intellectual virtue as opposed to the inculcation of technical skills, especially those that enable future generations to make the online world a safer place.

No doubt there is room for both kinds of pedagogical program (assuming they can even be dissociated). At the very least, it seems to me that the effort to resolve a problem (i.e., engineer a safer Internet) is just as important as the effort to merely cope with it (i.e., acquire a virtuous cognitive character). But, in any case, when it comes to education and learning, we should not lose sight of the fact that the Internet is itself something that is used for educational purposes. Perhaps, then, the more important point about education and the Internet is not so much the precise details of what gets taught, so much as the issue of whether the Internet (with all its epistemic foibles) is really the best place to learn.



Goldman, A. I. (2011). A guide to social epistemology. In A. I. Goldman and D. Whitcomb (Eds.), Social Epistemology: Essential Readings, pp. 11–37. New York, New York, USA: Oxford University Press.

Hannak, A., P. Sapiezynski, A. Molavi Kakhki, B. Krishnamurthy, D. Lazer, A. Mislove, and C. Wilson (2013). Measuring personalization of Web search. In D. Schwabe, V. Almeida, H. Glaser, R. Baeza-Yates, and S. Moon (Eds.), Proceedings of the 22nd International Conference on World Wide Web, Rio de Janeiro, Brazil, pp. 527–538. ACM.

Heersmink, R. (2018). A virtue epistemology of the Internet: Search engines, intellectual virtues, and education. Social Epistemology 32 (1), 1–12.

Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass 8 (2), 117–125.

Palermos, S. O. (2015). Active externalism, virtue reliabilism and scientific knowledge. Synthese 192 (9), 2955–2986.

Pritchard, D. (2009). Knowledge. Basingstoke, England, UK: Palgrave Macmillan.

Smart, P. R. (in pressa). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. (in pressb). Knowledge machines. The Knowledge Engineering Review.

Smart, P. R. (in pressc). Mandevillian intelligence. Synthese.

Smart, P. R. (in pressd). Mandevillian intelligence: From individual vice to collective virtue. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Socially Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. and N. R. Shadbolt (in press). The World Wide Web. In J. Chase and D. Coady (Eds.), The Routledge Handbook of Applied Epistemology. New York, New York, USA: Routledge.

Watson, D. and L. Floridi (2018). Crowdsourced science: Sociotechnical epistemology in the e-research paradigm. Synthese 195 (2), 741–764.

Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17–35.

[1] This work is supported under SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 and comprises the Universities of Southampton, Oxford and Edinburgh.

[2] See [accessed: 30th January 2018].

Author Information: Inkeri Koskinen, University of Helsinki

Koskinen, Inkeri. “Not-So-Well-Designed Scientific Communities.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 54-58.

The pdf of the article includes specific page numbers.


Image from Katie Walker via Flickr


The idea of hybrid concepts, simultaneously both epistemic and moral, has recently attracted the interest of philosophers, especially since the notion of epistemic injustice (Fricker 2007) became the central topic of a lively and growing discussion. In her article, Kristina Rolin adopts the idea of such hybridity, and investigates the possibility of understanding epistemic responsibility as having both epistemic and moral qualities.

Rolin argues that scientists belonging to epistemically well-designed communities are united by mutual epistemic responsibilities, and that these responsibilities ought to be understood in a specific way. Epistemically responsible behaviour towards fellow researchers—such as adopting a defense commitment with respect to one’s knowledge claims, or offering constructive criticism to colleagues—would not just be an epistemic duty, but also a moral one; one that shows moral respect for other human beings in their capacity as knowers.

However, as Rolin focuses on “well-designed scientific communities”, I fear that she fails to notice an implication of her own argument. Current trends in science policy encourage researchers in many fields to take up high-impact, solution-oriented, multi-, inter-, and transdisciplinary projects. If one can talk about “designing scientific communities” in this context, the design is clearly meant to challenge the existing division of epistemic labour in academia, and to destabilise speciality communities. If we follow Rolin’s own argumentation, understanding epistemic responsibility as a moral duty can thus become a surprisingly heavy burden for an individual researcher in such a situation.

Epistemic Cosmopolitanism

According to Rolin, accounts of epistemic responsibility that appeal to self-interested or epistemic motives need to be complemented with a moral account. Without one it is not always possible to explain why it is rational for an individual researcher to behave in an epistemically responsible way.

Both the self-interest account and the epistemic account state that scientists behave in an epistemically responsible way because they believe that it serves their own ends—be it career advancement, fame, and financial gain, or purely epistemic individual ends. However, as Rolin aptly points out, both accounts are insufficient in a situation where the ends of the individual researcher and the impersonal epistemic ends of science are not aligned. Only if researchers see epistemically responsible behaviour as a moral duty, will they act in an epistemically responsible way even if this does not serve their own ends.

It is to some degree ambiguous how Rolin’s account should be read—how normative it is, and in what sense. Some parts of her article could be interpreted as a somewhat Mertonian description of actual moral views held by individual scientists, and cultivated in scientific communities (Merton [1942] 1973). However, she also clearly gives normative advice: well-designed scientific communities should foster a moral account of epistemic responsibility.

But when offering a moral justification for her view, she at times seems to defend a stronger normative stance, one that would posit epistemic responsibility as a universal moral duty. However, her main argument does not require the strongest reading. I thus interpret her account as partly descriptive and partly normative: many researchers treat epistemic responsibility as a moral duty, and it is epistemically beneficial for scientific communities to foster such a view. Moreover, a moral justification can be offered for the view.

When defining her account more closely, Rolin cites ideas developed in political philosophy. She adopts Robert Goodin’s (1988) distinction between general and special moral duties, and names her account epistemic cosmopolitanism:

Epistemic cosmopolitanism states that (a) insofar as we are engaged in knowledge-seeking practices, we have general epistemic responsibilities, and (b) the special epistemic responsibilities scientists have as members of scientific communities are essentially distributed general epistemic responsibilities (Rolin 2017, 478).

One of the advantages of this account is of particular interest to me. Rolin notes that if epistemically responsible behaviour were seen as just a general moral duty, it could be too demanding for individual researchers. Any scientist is bound to fail in an attempt to behave in an entirely epistemically responsible manner towards all existing scientific speciality communities, taking all their diverse standards of evidence into account. This result can be avoided through a division of epistemic labour. The general responsibilities can be distributed in a way that limits the audience towards which individual scientists must behave in an epistemically responsible way. Thus, “in epistemically well-designed scientific communities, no scientist is put into a position where she is not capable of carrying out her special epistemic responsibilities” (Rolin 2017, 478).

Trends in Science Policy

Rolin’s main interest is in epistemically well-designed scientific communities. However, she also takes up an example I mention in a recent paper (Koskinen 2016). In it I examine a few research articles in order to illustrate situations where a relevant scientific community has not been recognised, or where there is no clear community to be found. In these articles, researchers from diverse fields attempt to integrate archaeological, geological or seismological evidence with orally transmitted stories about great floods. In other words, they take the oral stories seriously, and attempt to use them as historical evidence. However, they fail to take into account folkloristic expertise on myths. This I find highly problematic, as the stories the researchers try to use as historical evidence include typical elements of the flood myth.

The aims of such attempts to integrate academic and extra-academic knowledge are both emancipatory—taking the oral histories of indigenous communities seriously—and practical, as knowledge about past natural catastrophes may help prevent new ones. This chimes well with certain current trends in science policy. Collaborations across disciplinary boundaries, and even across the boundaries of science, are promoted as a way to increase the societal impact of science and provide solutions to practical problems. Researchers are expected to contribute to solving the problems by integrating knowledge from different sources.

Such aims have been articulated in terms of systems theory, the Mode-2 concept of knowledge production and, recently, open science (Gibbons et al. 1994; Nowotny et al. 2001; Hirsch Hadorn et al. 2008), leading to the development of solution-oriented multi-, inter-, and transdisciplinary research approaches. At the same time, critical feminist and postcolonial theories have influenced collaborative and participatory methodologies (Reason and Bradbury 2008; Harding 2011), and recently ideas borrowed from business have led to an increasing amount of ‘co-creation’ and ‘co-research’ in academia (see e.g. Horizon 2020).

All this, combined with keen competition for research funding, leads in some areas of academic research to an increasing number of solution-oriented research projects that systematically break disciplinary boundaries. Such projects often simultaneously challenge the existing division of epistemic labour.

Challenging the Existing Division of Epistemic Labour

According to Rolin, well-designed scientific communities need to foster the moral account of epistemic responsibilities. The necessity becomes clear in situations like the one described above: it would be in the epistemic interests of scientific communities, and of science in general, if folklorists were to offer constructive criticism to the archaeologists, geologists and seismologists. However, if the folklorists are motivated only by self-interest, or by personal epistemic goals, they have no reason to do so. Only if they see epistemic responsibility as a moral duty, one that is fundamentally based on general moral duties, will their actions be in accord with the epistemic interests of science. Rolin argues that this is possible because the existing division of epistemic labour can be challenged.

Normally, according to epistemic cosmopolitanism, the epistemic responsibilities of folklorists would lie mainly in their own speciality community. However, if the existing division of epistemic labour does not serve the epistemic goals of science, this does not suffice. And if special moral duties are taken to be distributed general moral duties, the way of distributing them can always be changed. In fact, it must be changed, if that is the only way to follow the underlying general moral duties:

If the cooperation between archaeologists and folklorists is in the epistemic interests of science, a division of epistemic labour should be changed so that, at least in some cases, archaeologists and folklorists should have mutual special epistemic responsibilities. This is the basis for claiming that a folklorist has a moral obligation to intervene in the problematic use of orally transmitted stories in archaeology (Rolin 2017, 478–479).

The solution seems compelling, but I see a problem that Rolin does not sufficiently address. She seems to believe that situations where the existing division of epistemic labour is challenged are fairly rare, and that they lead to a new, stable division of epistemic labour. I do not think that this is the case.

Rolin cites Brad Wray (2011) and Uskali Mäki (2016) when emphasising that scientific speciality communities are not eternal. They may dissolve and new ones may emerge, and interdisciplinary collaboration can lead to the formation of new speciality communities. However, as Mäki and I have noted (Koskinen & Mäki 2016), solution-oriented inter- or transdisciplinary research does not necessarily, or even typically, lead to the formation of new scientific communities. Only global problems, such as biodiversity loss or climate change, are likely to function as catalysts in the disciplinary matrix, leading to the formation of numerous interdisciplinary research teams addressing the same problem field. Smaller, local problems generate only changeable constellations of inter- and transdisciplinary collaborations that dissolve once a project is over. If such collaborations become common, the state Rolin describes as a rare period of transition becomes the status quo.

It Can Be Too Demanding

Rather than a critique of Rolin’s argument, the conclusion of this commentary is an observation that follows from that argument. It helps us to clarify one possible reason for the difficulties that researchers encounter with inter- and transdisciplinary research.

Rolin argues that epistemically well-designed scientific communities should foster the idea of epistemic responsibilities being not only epistemic, but also moral duties. The usefulness of such an outlook becomes particularly clear in situations where the prevailing division of epistemic labour is challenged—for instance, when an interdisciplinary project fails to take some relevant viewpoint into account, and the researchers who would be able to offer valuable criticism do not benefit from offering it. In such a situation researchers motivated by self-interest or by individual epistemic goals would have no reason to offer the required criticism. This would be unfortunate, given the impersonal epistemic goals of science. So, we must hope that scientists see epistemically responsible behaviour as their moral duty.

However, for a researcher working in an environment where changeable, solution-oriented, multi-, inter-, and transdisciplinary projects are common, understanding epistemic responsibility as a moral duty may easily become a burden. The prevailing division of epistemic labour is challenged constantly, and without a new, stable division necessarily replacing it.

As Rolin notes, it is due to a tolerably clear division of labour that epistemic responsibilities understood as moral duties do not become too demanding for individual researchers. But as trends in science policy erode disciplinary boundaries, the division of labour becomes unstable. If it continues to be challenged, it is not just once or twice that responsible scientists may have to intervene and comment on research that is not in their area of specialisation. This can become a constant and exhausting duty. So if instead of well-designed scientific communities, we get their erosion by design, we may have to reconsider the moral account of epistemic responsibility.


Fricker, M. Epistemic injustice: power and the ethics of knowing. Oxford: Oxford University Press, 2007.

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. & Trow, M. The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage, 1994.

Goodin, R. “What is So Special about Our Fellow Countrymen?” Ethics 98 no. 4 (1988): 663–686.

Hirsch Hadorn, G., Hoffmann-Riem, H., Biber-Klemm, S., Grossenbacher-Mansuy, W., Joye, D., Pohl, C., Wiesmann, U., Zemp, E. (Eds.). Handbook of Transdisciplinary Research. Berlin: Springer, 2008.

Harding, S. (Ed.). The postcolonial science and technology studies reader. Durham and London: Duke University Press, 2011.

Horizon 2020. Work Programme 2016–2017. European Commission Decision C (2017)2468 of 24 April 2017.

Koskinen, I. “Where is the Epistemic Community? On Democratisation of Science and Social Accounts of Objectivity.” Synthese. 4 August 2016. doi:10.1007/s11229-016-1173-2.

Koskinen, I., & Mäki, U. “Extra-academic transdisciplinarity and scientific pluralism: What might they learn from one another?” European Journal for Philosophy of Science 6, no. 3 (2016): 419–444.

Mäki, U. “Philosophy of Interdisciplinarity. What? Why? How?” European Journal for Philosophy of Science 6, no. 3 (2016): 327–342.

Merton, R. K. “Science and Technology in a Democratic Order.” Journal of Legal and Political Sociology 1 (1942): 115–126. Reprinted as “The Normative Structure of Science.” In R. K. Merton, The Sociology of Science. Theoretical and Empirical Investigations. Chicago: University of Chicago Press, 1973: 267–278.

Nowotny, H., Scott, P., & Gibbons, M. Re-thinking science: knowledge and the public in an age of uncertainty. Cambridge: Polity, 2001.

Reason, P. and Bradbury, H. (Eds.). The Sage Handbook of Action Research: Participative Inquiry and Practice. Los Angeles, CA: Sage, 2008.

Rolin, K. “Scientific Community: A Moral Dimension.” Social Epistemology 31, no. 5 (2017), 468–483.

Wray, K. B. Kuhn’s Evolutionary Social Epistemology. Cambridge: Cambridge University Press, 2011.

Author Information: James Collier, Virginia Tech,


Editor’s Note: The publishers of Social Epistemology—Routledge and Taylor & Francis—have been kind enough to allow me to publish the full-text “Introduction” to issues on the SERRC and on the journal’s website.

At the beginning of August 2016, I received word from Greg Feist that Sofia Liberman had died. I was taken aback, having recently corresponded with Professor Liberman about the online publication of her article (coauthored with Roberto López Olmedo). Professor Liberman’s work came to my attention through her association with Greg, Mike Gorman and scholars studying the psychology of science. We offer our sincere condolences to Sofia Liberman’s family, friends and colleagues. With gratitude and great respect for her intellectual legacy, we share Sofia Liberman’s scholarship with you in this issue of Social Epistemology.

Since the advent of publishing six issues a year, we have adopted the practice of printing the journal triannually, combining two issues in each print edition. The result makes for a panoply of fascinating topics and arguments. Still, we invite our readers to focus on the first four articles in this edition—articles addressing topics in the psychology of science, edited by Mike Gorman and Greg Feist—as a discrete, but linked, part of the whole. These articles signal Social Epistemology’s wish to renew ties with the psychology of science community, ties dating back at least to the publication of William Shadish and Steve Fuller’s edited book The Social Psychology of Science (Guilford Press) in 1993.

Beginning by reflexively tracing the trajectory of his own research, Mike Gorman and Nora Kashani ethnographically and archivally examine the work of A. Jean Ayres. Ayres, known for inventing Sensory Integration (SI) theory, sought to identify and treat children having difficulty interpreting sensations from the body and incorporating those sensations into academic and motor learning. To gain a more comprehensive account of the development and reception of SI, Gorman and Kashani integrated a cognitive historical analysis—a sub specie historiae approach—of Ayres’ research with interactions and interviews with current practitioners—an in vivo approach. Through Gorman and Kashani’s method, we map Ayres’ ability to build a network of independent students and clients, a network that led both to the wide acceptance and to the later fragmentation of SI.

We want scientific research that positively transforms an area of inquiry. Yet, how do we know when we achieve such changes and, so, may determine in advance the means by which we can achieve further transformations? Barrett Anderson and Greg Feist investigate the funding of what became, after 2002, impactful articles in psychology. While assessing impact relies, in part, on citation counts, Anderson and Feist argue for “generativity” as a new index. Generative work leads to the growth of a new branch on the “tree of knowledge”. Using the tree of knowledge as a metaphorical touchstone, we can trace and measure generative work to gain a fuller sense of which factors, such as funding, policy makers might consider in encouraging transformative research.

Sofia Liberman and Roberto López Olmedo question the meaning of coauthorship for scientists. Specifically, given the contentiousness—often found in the sciences—surrounding the assignation of primary authorship of articles and the priority of discovery, what might a better understanding of the social psychology of coauthorship yield? Liberman and López Olmedo find, for example, that fields emphasizing theoretical, in contrast to experimental, practices associate different semantic relations, such as “common interest” or “active participation”, with coauthorship. More generally, since scientists do not hold universal values regarding collaboration, differing group dynamics and reward structures affect how one approaches and decides coauthorship. We need more research, Liberman and López Olmedo claim, to further understand scientific collaboration in order, perhaps, to encourage more, and more fruitful, collaborations across fields and disciplines.

Complex, or “wicked”, problems require the resources of multiple disciplines. Moreover, addressing such problems calls for “T-shaped” practitioners—students educated to possess, and professionals possessing, both a singular expertise (the vertical bar of the “T”) and a breadth of expert knowledge (the horizontal bar of the “T”). On examining the origin and development of the concept of the “T-shaped” practitioner, Conley et al. share case studies of teaching students at James Madison University and the University of Virginia to make the connections that underwrite “T-shaped” expertise. Conley et al. analyze the students’ use of concept maps to illustrate connections, and possible trading zones, among types of knowledge.

Are certain scientists uniquely positioned—given their youth or age, their insider or outsider disciplinary status—to bring about scientific change? Do joint commitments to particular beliefs—and, so, an obligation to act in accord with, and not contrarily to, those beliefs—hinder one’s ability to think differently and pose potential alternative solutions? Looking at these issues, Line Andersen describes Kenneth Appel and Wolfgang Haken’s solution to the Four Color Problem—“any map can be colored with only four colors so that no two adjacent countries have the same color.” From this case, and other examples, Andersen suggests that a scientist’s outsider status may enable scientific change.

We generally, and often blithely, assume our knowledge is fallible. What can we learn if we take fallibility rather more seriously? Stephen Kemp argues for “transformational fallibilism.” In order to improve our understanding, should we question, and be willing to revise or reconstruct, any aspect in our network of understanding? How should we extend our Popperian attitude, and what we learn accordingly, to knowledge claims and forms of inquiry in other fields? Kemp advocates that we not allow our easy agreement on knowledge’s fallibility to make us passive regarding accepted knowledge claims. Rather, coming to grips with the “impermanence” of knowledge sharpens and maintains our working sense of fallible knowledge.

Derek Anderson introduces the idea of “conceptual competence injustice”. Such an injustice arises when “a member of a marginalized group is unjustly regarded as lacking conceptual or linguistic competence as a consequence of structural oppression”. Anderson details three conditions one might find in a graduate philosophy classroom. For example, a student judges a member of a marginalized group, who makes a conceptual claim, and accords that claim less credibility than it deserves. That judgment leads to a subsequent assessment that the marginalized person has a lower degree of competence with a relevant word or concept than they in fact have. By depicting conceptual competence injustice, Anderson gives us important matters to consider in deriving a more complete accounting of Miranda Fricker’s forms of epistemic injustice.

William Lynch gauges Steve Fuller’s views in support of intelligent design theory. Lynch challenges Fuller’s psychological assumptions and the corresponding questions as to what motivates human beings to do science in the first place. In creating and pursuing the means and ends of science, do humans—seen as the image and likeness of God—seek to render nature intelligible and thereby know the mind of God? If we take God out of the equation—as does Darwin’s theory—how do we understand the pursuit of science in both historical and future terms? Still, as Lynch explains, Fuller desires a broader normative landscape in which human beings might rewardingly follow unachieved, unconventional or forgotten paths to science that could yield epistemic benefits. Lynch concludes that the pursuit of parascience likely leads both to opportunism and to dangerous forms of doubt about traditional science.

Exchanges on many of the articles that appear in this issue of Social Epistemology—and in recent past issues—can be found on the Social Epistemology Review and Reply Collective: Please join us. We realise knowledge together.

Author Information: Adam Riggio, New Democratic Party of Canada,

Riggio, Adam. “Subverting Reality: We Are Not ‘Post-Truth,’ But in a Battle for Public Trust.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 66-73.

The PDF of the article gives specific page numbers. Shortlink:

Image credit: Cornerhouse, via flickr

Note: Several of the links in this article are to websites featuring alt-right news and commentary. This serves both as a warning about offensive content and as a sign of precisely how offensive the content we are dealing with actually is.

An important purpose of philosophical writing for public service is to prevent important ideas from slipping into empty buzzwords. You can give a superficial answer to the meaning of living in a “post-truth” world or discourse, but the most useful way to engage this question is to make it a starting point for a larger investigation into the major political and philosophical currents of our time. Post-truth was one of the many ideas American letters haemorrhaged in the maelstrom of Trumpism’s wake, the one seemingly most relevant to the concerns of social epistemology.

It is not enough simply to say that the American government’s communications have become propagandistic, or that the Trump Administration justifies its policies with lies. This is true, but trivial. We can learn much more from philosophical analysis. In public discourse, the stability of what information, facts, and principles are generally understood to be true has been eroding. General agreement on which sources of information are genuinely reliable in their truthfulness and trustworthiness has destabilized and diverged. This essay explores one philosophical hypothesis as to how that happened: through a sustained popular movement of subversion – subversion of consensus values, of reliability norms about information sources, and of who can legitimately claim the virtues of subversion itself. The drive to speak truth to power is today co-opted to punch down at the relatively powerless. This essay is a philosophical examination of how that happens.

Subversion as a Value and an Act

A central virtue in contemporary democracy is subversion. To be a subversive is to push society forward against conservative, oppressive forces. It is to commit acts that transgress popular morality while providing a simultaneous critique of it. As new communities form in a society, or as previously oppressed communities push for equal status and rights, subversion calls attention to the inadequacy of currently mainstream morality to the new demands of this social development. Subversive acts can be publications, artistic works, protests, or even the slow process of conducting your own life publicly in a manner that transgresses mainstream social norms and preconceptions about what it is right to do.

Values of subversiveness are, therefore, politically progressive in their essence. The goal of subversion values is to destabilize an oppressive culture and its institutions of authority, in the name of greater inclusiveness and freedom. This is clear when we consider the popular paradigm case of subversive values: punk rock and punk culture. In the original punk and new wave scenes of 1970s New York and Britain, we can see subversion values in action. Punk’s embrace of BDSM and drag aesthetics subvert the niceties of respectable fashion. British punk’s embrace of reggae music promotes solidarity with people oppressed by racist and colonialist norms. Most obviously, punk enshrined a morality of musical composition through simplicity, jamming, and enthusiasm. All these acts and styles subverted popular values that suppressed all but vanilla hetero sexualities, marginalized immigrant groups and ethnic minorities, denigrated the poor, and esteemed an erudite musical aesthetic.

American nationalist conservatism today has adopted the form and rhetoric of subversion values, if not the content. The decadent, oppressive mainstream the modern alt-right opposes and subverts is a general consensus of liberal values – equal rights regardless of race or gender, an imperative to build a fair economy for all citizens, an end to police oppression of marginalized communities, and so on. Alt-right activists push for the return of segregation and even the ethnic cleansing of Hispanics from the United States. Curtis Yarvin, the intellectual centre of America’s alt-right, openly calls for an end to democratic institutions and their replacement with government by a neo-cameralist state structure that replaces citizenship with shareholds and reduces all public administration and foreign policy to the aim of profit. Yet because these ideas are a radical front opposing a broadly liberal democratic mainstream culture, alt-right activists declare themselves punk. They claim subversiveness in their appropriation of punk fashion in apparel and hair, and in their gleeful offensiveness to liberal sensibilities through their embrace of public bigotry.

Subversion Logics: The Vicious Paradox and Trolling

Alt-right discourse and aesthetics claim to have inherited subversion values because their activists oppose a liberal democratic mainstream whose presumptions include the existence of universal human rights and the encouragement of cultural, ethnic, and gender diversity throughout society. If subversion values are defined entirely according to the act of subverting any mainstream, then this is true. But this would decouple subversion values from democratic political thought. At issue in this essay – and at this moment in human democratic civilization – is whether such decoupling is truly possible.

If subversion as an act is decoupled from democratic values, then we can understand it as the act of forcing an opponent into a vicious paradox. One counters an opponent by interpreting their position as implying a hypocritical or self-contradictory logic. The most general such paradox is Karl Popper’s paradox of tolerance. Alt-right discourse frames its most bigoted communications as subversive acts of total free speech – an absolutism of freedom that decries as censorship any critique or opposition to what its members say. This is true whether they write in a comment thread, post through an anonymous Twitter feed, or speak from a stage at UC Berkeley. We are left with the apparent paradox that a democratic society must, if we are to respect our democratic values without being hypocrites ourselves, accept the rights of the most vile bigots to spread racism, misogyny, anti-trans and heterosexist ideas, Holocaust denial, and even the public release of their opponents’ private information. As Popper himself wrote, the only response to such an argument is to deny its validity – a democratic society cannot survive if it allows its citizens to argue and advocate for the end of democracy. The actual hypocritical stance is free speech absolutism: permitting assaults on democratic society and values in the name of democracy itself.

Trolling, the chief rhetorical weapon of the alt-right, is another method of subversion, turning an opponent’s actions against herself. To troll is to communicate with statements so dripping in irony that an opponent’s own opposition can be turned against itself. In a simple sense, this is the subversion of insults into badges of honour and vice versa. Witness how alt-right trolls refer to themselves as shitlords, or denounce ‘social justice warriors’ as true fascists. But trolling also includes a more complex rhetorical strategy. For example, one posts a violent, sexist, or racist meme – say, Barack Obama as a witch doctor giving Brianna Wu a lethal injection. If you criticize the post, they respond that they were merely trying to bait you, and mock you as a fragile fool who takes people seriously when they are not being serious – a snowflake. You are now ashamed, having fallen into their trap of baiting earnest liberals into believing in the sincerity of their racism, so you encourage people to dismiss such posts as ‘mere trolling.’ This allows for a massive proliferation of racist, misogynist, anti-democratic ideas under the cover of being ‘mere trolling’ or just ‘for the lulz.’

No matter the content of the ideology that informs a subversive act, any subversive rhetoric challenges truth. Straightforwardly, subversion challenges what a preponderant majority of a society takes to be true. It is an attack on common sense, on a society’s truisms, on that which is taken for granted. In such a subversive social movement, the agents of subversion attack common sense truisms because of their conviction that the popular truisms are, in fact, false, and their own perspective is true, or at least acknowledges more profound and important truths than what they attack. As we tell ourselves the stories of our democratic history, the content of those subversions was actually true. Now that the loudest voices in American politics claiming to be virtuous subversives support nationalist, racist, anti-democratic ideologies, we must confront the possibility that those who speak truth to power have a much more complicated relationship with facts than we often believe.

Fake News as Simply Lies

Fake news is the central signpost of what is popularly called the ‘post-truth’ era, but it quickly became a catch-all term that refers to too many disparate phenomena to be useful. When preparing for this series of articles, we at the Reply Collective discussed the influence of post-modern thinkers on contemporary politics, particularly regarding climate change denialism. But I don’t consider contemporary fake news to have roots in these philosophies. That tradition is regarded in popular culture (and definitely in self-identified analytic philosophy communities) as destabilizing the possibility of truth, knowledge, and even factuality.

This conception is mistaken, as any attentive reading of Jacques Derrida, Michel Foucault, Gilles Deleuze, Jean-François Lyotard, or Jean Baudrillard will reveal that they were concerned – at least on the question of knowledge and truth – with demonstrating that there were many more ways to understand how we justify our knowledge and the nature of facticity than any simple propositional definition in a Tarskian tradition can include. There are more ways to understand knowledge and truth than seeing whether and how a given state of affairs grounds the truth and truth-value of a description. A recent article by Steve Fuller at the Institute of Art and Ideas considers many concepts of truth throughout the history of philosophy more complicated than the popular idea of simple correspondence. So when we ask whether Trumpism has pushed us into a post-truth era, we must ask which concept of truth has become obsolete. Understanding what fake news is and can be is one productive probe of this question.

So what are the major conceptions of ‘fake news’ that exist in Western media today? I ask this question with the knowledge that, given the rapid pace of political developments in the Trump era, my answers will probably be obsolete, or at least incomplete, by publication. The proliferation of meanings that I now describe happened in popular Western discourse in a mere two months from Election Day to Inauguration Day. My account of these conceptual shifts in popular discourse shows how these shifts of meaning have acquired such speed.

Fake news, as a political phenomenon, exists as one facet of a broad global political culture where the destabilization of what gets to count as a fact and how or why a proposition may be considered factual has become fully mainstream. As Bruno Latour has said, the destabilization of facticity’s foundation is rooted in the politics and epistemology of climate change denialism, the root of wider denialism of any real value for scientific knowledge. The centrepiece of petroleum industry public relations and global government lobbying efforts, climate change denialism was designed to undercut the legitimacy of international efforts to shift global industry away from petroleum reliance. Climate change denial conveniently aligns with the nationalist goals of Trump’s administration, since a denialist agenda requires attacking American loyalty to international emissions reduction treaties and United Nations environmental efforts. Denialism undercuts the legitimacy of scientific evidence for climate change by countering the efficacy of its practical epistemic truth-making function. It is denial and opposition all the way down. Ontologically, the truth-making functions of actual states of affairs on climatological statements remain as fine as they always were. What’s disappeared is the popular belief in the validity of those truth-makers.

So the function of ‘fake news’ as an accusation is to sever the truth-making powers of the targeted information source for as many of those who hear the accusation as possible. The accusation is an attempt to deny and destroy a channel’s credibility as a source of true information. To achieve this, the accusation itself requires its own credibility for listeners. The term ‘fake news’ first applied to the flood of stories and memes flowing from a variety of dubious websites, consisting of uncorroborated and outright fabricated reports. The articles and images originated on websites based largely in Russia and Macedonia, then were disseminated on Facebook pages like Occupy Democrats, Eagle Rising, and Freedom Daily, which make money using clickthrough-generating headlines and links. Much of the extreme white nationalist content of these pages came, in addition to the content mills of eastern Europe, from radical think tanks and lobby groups like the National Policy Institute. These feeds are a very literal definition of fake news: content written in the form of actual journalism so that its statements appear credible, but communicating blatant lies and falsehoods.

The feeds and pages disseminating these nonsensical stories were successful because the infrastructure of Facebook as a medium incentivizes comforting falsehoods over inconvenient truths. Its News Feed algorithm is largely a similarity-sorting process, pointing a user to sources that resemble what has been engaged with before. Pages and websites that depend on clickthrough advertising revenue will therefore cater to already-existing user opinions to boost such engagement. A challenging idea that unsettles a user’s presumptions about the world will receive fewer clickthroughs, because people tend to prefer hearing what they already agree with. The continuing aggregation of similarity after similarity reinforces your perspective and makes changing your mind even harder than it usually is.
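To make the dynamic concrete, here is a minimal sketch of a similarity-sorting feed, written in Python. It is a toy illustration only, not Facebook’s actual ranking code; the tag-counting heuristic, the function names, and the example data are all hypothetical, chosen simply to show how ranking by resemblance to past engagement keeps serving a user more of what they have already clicked.

from collections import Counter

def engagement_profile(clicked_items):
    # Aggregate the topic tags of everything the user has engaged with before.
    profile = Counter()
    for item in clicked_items:
        profile.update(item["tags"])
    return profile

def similarity(profile, item):
    # Score a candidate story by how often the user has engaged with its tags.
    return sum(profile[tag] for tag in item["tags"])

def rank_feed(candidate_items, clicked_items):
    # Sort candidate stories so the most familiar-looking ones come first.
    profile = engagement_profile(clicked_items)
    return sorted(candidate_items,
                  key=lambda item: similarity(profile, item),
                  reverse=True)

# A user who has clicked two partisan stories is shown the partisan item first.
clicked = [{"tags": ["partisan", "outrage"]}, {"tags": ["partisan"]}]
candidates = [{"id": 1, "tags": ["partisan", "outrage"]},
              {"id": 2, "tags": ["science", "climate"]}]
print([item["id"] for item in rank_feed(candidates, clicked)])  # -> [1, 2]

Even in this toy version, an unfamiliar story never rises to the top unless the user’s past clicks already resemble it, which is precisely the self-reinforcing loop described above.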

Trolling Truth Itself

Donald Trump is an epically oversignified cultural figure. But for my purposes here, I want to approach him as the most successful troll in contemporary culture. In his 11 January 2017 press conference, Trump angrily accused CNN and Buzzfeed of themselves being “fake news.” This proposition seems transparent, at first, as a clear act of trolling, a President’s subversive action against critical media outlets. Here, the insulting meaning of the term is retained, but its reference has shifted to cover the Trump-critical media organizations that first brought the term to ubiquity shortly after the 8 November 2016 election. The intention and meaning of the term have been turned against those who coined it.

In this context, the nature of the ‘post-truth’ era of politics appears simple. We are faced with two duelling conceptions of American politics and global social purpose. One is the Trump Administration, with its propositions about the danger of Islamist terror and the size of this year’s live Inauguration audience. The other is the usual collection of news outlets referred to as the mainstream media. Each gives a presentation of what is happening regarding a variety of topics; the two presentations are incompatible, and each may be accurate to a greater or lesser degree in any given instance. The simple issue is that the Trump Administration pushes easily falsified, transparent propaganda such as the lie about an Islamist-led mass murder in Bowling Green, Kentucky. This simple issue becomes an intractable problem because significant portions of the contemporary media economy harden popular viewpoints into bubbles of self-reinforcing extremism. Thanks to Facebook’s sorting algorithms, there will likely always be a large group of Trumpists who will consider all his administration’s blatant lies to be truth.

This does not appear to be a problem for philosophy, but for public relations. We can solve this problem of the intractable audience for propaganda by finding or creating new paths to reach people in severely comforting information bubbles. There is a philosophical problem, but it is far more profound than even this practically difficult issue of outreach. The possibility conditions for the character of human society itself are the fundamental battlefield of the Trumpist era.

The accusation “You are fake news!” of Trump’s January press conference delivered a tactical subversion, rendering the original use of the term impossible. The moral aspects of this act of subversion appeared a few weeks later, in a 7 February interview Trump Administration communications official Sebastian Gorka did with Michael Medved. Gorka’s words first appear to be a straightforward instance of authoritarian delegitimizing of opposition, as he equates ‘fake news’ with opposition to President Trump. But Gorka goes beyond this simple gesture to contribute to a re-valuation of the values of subversion and opposition in our cultural discourse. He accuses Trump-critical news organizations of such a deep bias and hatred of President Trump and Trumpism that they themselves have failed to understand and perceive the world correctly. The mainstream media have become untrustworthy, says Gorka, not merely because many of their leaders and workers oppose President Trump, but because those people no longer understand the world as it is. That conclusion, as Breitbart’s messaging would tell us, is the reason to trust the mainstream media no longer: their genuine ignorance. And because it was a genuine mistake about the facts of the world, that accusation of ignorance and untrustworthiness is actually legitimate.

Real Failures of Knowledge

Donald Trump, as well as the political movements that backed his Presidential campaign and the anti-EU side of the Brexit referendum, knew something about the wider culture that many mainstream analysts and journalists did not: they knew that their victory was possible. This is not a matter of ideology, but a fact about the world. It is not a matter of interpretive understanding or political ideology like the symbolic meanings of a text, object, or gesture, but a matter of empirical knowledge. It is not a straightforward fact like the surface area of my apartment building’s front lawn or the number of Boeing aircraft owned by KLM. Discovering such a fact as the possibility conditions and likelihood of an election or referendum victory involving thousands of workers, billions of dollars of infrastructure and communications, and millions of people deliberating over their vote or refusal to vote is a massively complicated process. But it is still an empirical process and can be achieved to varying levels of success and failure. In the two most radical reversals of the West’s (neo)liberal democratic political programs in decades, the press as an institution failed to understand what is and is not possible.

Not only that, these organizations know they have failed, and know that their failure harms their reputation as sources of trustworthy knowledge about the world. Their knowledge of their real inadequacy can be seen in their steps to repair their knowledge production processes. These efforts are not a submission to the propagandistic demands of the Trump Presidency, but an attempt to rebuild real research capacities after the internet era’s disastrous collapse of the traditional newspaper industry. Through most of the 20th century, the news media ecology of the United States consisted of a hierarchy of local, regional, and inter/national newspapers. Community papers reported on local matters, these reports were among the sources for content at regional papers, and those regional papers in turn provided source material for America’s internationally-known newsrooms in the country’s major urban centres. This information ecology was the primary route not only for content, but for general knowledge of cultural developments beyond those few urban centres.

With the 21st century, it became customary to read local and national news online for free, causing sales and advertising revenue for those smaller newspapers to collapse. The ensuing decades saw most entry-level journalism work become casual and precarious, cutting off entry to the profession from those who did not have the inherited wealth to subsidize their first money-losing working years. So most poor and middle class people were cut off from work in journalism, removing their perspectives and positionality from the field’s knowledge production. The dominant newspaper culture that centred all content production in and around a local newsroom persisted into the internet era, forcing journalists to focus their home base in major cities. So investigation outside major cities rarely took place beyond parachute journalism, visits by reporters with little to no cultural familiarity with the region. This is a real failure of empirical knowledge gathering processes. Facing this failure, major metropolitan news organizations like the New York Times and Mic have begun building a network of regional bureaus throughout the now-neglected regions of America, where local independent journalists are hired as contractual workers to bring their lived experiences to national audiences.

America’s Democratic Party suffered a similar failure of knowledge, having been certain that the Trump campaign could never have breached the midwestern regions – Michigan, Wisconsin, Pennsylvania – that for decades have been strongholds of their support in Presidential elections. I leave aside the critical issue of voter suppression in these states to concentrate on a more epistemic aspect of Trump’s victory. This was the campaign’s unprecedented ability to craft messages with nuanced detail. Cambridge Analytica, the data analysis firm that worked for both the Trump campaign and the pro-Brexit campaign, provided the power to understand and target voter outreach with almost individual specificity. This firm derives incredibly complex and nuanced data sets from the Facebook behaviour of hundreds of millions of people, and is the most advanced microtargeting analytics company operating today. They were able to craft messages intricately tailored to individual viewers and deliver them through Facebook advertising. So the Trump campaign has a legitimate claim to have won based on superior knowledge of the details of the electorate and how best to reach and influence them.

Battles Over the Right to Truth

With this essay, I have attempted an investigation that is a blend of philosophy and journalism, an examination of epistemological aspects of dangerous and important contemporary political and social phenomena and trends. After such a mediation, I feel confident in proposing the following conclusions.

1) Trumpist propaganda justifies itself with an exclusive and correct claim to reliability as a source of knowledge: that the Trump campaign was the only major information source covering the American election that was always certain of the possibility that they could win. That all other media institutions at some point did not understand or accept the truth of Trump’s victory being possible makes them less reliable than the Trump team and Trump personally.

2) The denial of a claim’s legitimacy as truth, and of an institution’s fidelity to informing people of truths, has become such a powerful weapon of political rhetoric that it has ended all cross-partisan agreement on what sources of information about the wider world are reliable.

3) Because of the second conclusion, journalism has become an unreliable set of knowledge production techniques. The most reliable source of knowledge about that election was the mass data mining and analysis of Facebook profiles, the ground of all Trump’s public outreach communications. Donald Trump became President of the United States with the most powerful quantitative sociology research program in human history.

4) This is Trumpism’s most powerful claim to the mantle of the true subversives of society, the virtuous rebels overthrowing a corrupt mainstream. Trumpism’s victory, which no one but Trumpists themselves thought possible, is the greatest achievement of any troll. Trumpism has argued its opponent into submission, humiliated them for having lost, then turned out to be right anyway.

The statistical analysis and mass data mining of Cambridge Analytica made Trump’s knowledge superior to that of the entire journalistic profession. So the best contribution that social epistemology as a field can make to understanding our moment is bringing all its cognitive and conceptual resources to an intense analysis of statistical knowledge production itself. We must understand its strengths and weaknesses – what statistical knowledge production emphasizes in the world and what escapes its ability to comprehend. Social epistemologists must ask themselves and each other: What does qualitative knowledge discover and allow us to do that quantitative knowledge cannot? How can the qualitative form of knowledge uncover a truth of the same profundity and power to popularly shock an entire population as Trump’s election itself?

Author Information: Frank Scalambrino


Editor’s Note: As we near the end of an eventful 2016, the SERRC will publish reflections considering broadly the immediate future of social epistemology as an intellectual and political endeavor.

Please refer to:


Image credit: Walt Jabsco, via flickr

Presently my interest in social epistemology is primarily related to policy development. Though I continue to be interested in the ways technology influences the formation of social identities, I also want to examine corporate agency. On the one hand, this relates to the notion of persona ficta and the idea that, beyond the persons comprising a group, a group itself may be considered a “person.” Take, for example, search committees for tenure-track professor positions. There is a sense in which the committee is supposed to represent the interests of the persona ficta of some group, be it the department, the university, etc. Otherwise, it would simply be the case that the committees were representing their own desires, or merely applying a merit-based template, and though the former characterization may often be true, the latter is clearly not the case. Moreover, because the decision-making is supposed to be in the name of, and based on the authority of, the persona ficta, the members of the search committee are supposedly not personally responsible for the decisions made. The questions raised by such a situation, in which a persona ficta may be seen as a kind of mask covering the true social relations within the group that determine the group’s decisions, I contextualize in terms of social epistemology.

On the other hand, I am interested in thinking about corporate agency and its efficacy in social environments. This is not unrelated to the question of the relation between the interests, knowledge, and actions of the corporate members which in some sense condition and sustain different types of (persona ficta) corporate agents. In other words, it is as if the collective interests, knowledge, and actions of members of a group constitute a kind of collective agent back to which changes in the world may be traced. I am interested in what I consider to be the ethical questions, which to some degree should factor into the various organizations of knowledge and power that sustain such corporate agents. To put it more narrowly and concretely: social epistemology may help us locate the points at which constitutive group members may be held accountable for contributions otherwise masked by some persona ficta. Subsequently, such accountability may be worked into policy development.

Author Information: Robert Frodeman, University of North Texas,


Please refer to:


Image credit: valkrye131, via flickr

As we do every holiday season, last night we watched the 1951 version of Dickens’ A Christmas Carol. It was deeply comforting, and deeply troubling. It’s great because the director (Brian Desmond Hurst) treats the subject matter with the gravity and modesty it deserves. This is the version that haunted my childhood: how Marley’s face on the door knocker frightened me, as did his banging of chains. Ditto the hand that juts out from the black figure of the ghost of Christmas Future.

But what frightens me now is what the story portends for our future. The movie declares that it’s a story of redemption, or as it says, of (individual) reclamation. But it is about something more fundamental than that. It assumes the existence of a moral and metaphysical order. The accounts always balance: Marley wears the chains he forged in life, and if Scrooge is to avoid the same fate he must come to his senses. Of course, terrible injustices exist in Dickens’ London, but there is a stability to the world that is intensely consoling. Now, however, it’s this stability and consolation that’s been lost.

I feel that the greatest task of the philosopher—I mean the term in a generic sense, which includes STSers and many others—is to try to identify the deepest, most profound, and most significant problem of his or her time and think it through. Of course, people will differ in their evaluation of what this is. But that’s ok. In fact it’s good, for it increases the chances that someone will get lucky and hit upon the right problem. This is what led me to environmental philosophy, and then to interdisciplinarity, and most recently to what might be called policy studies but which is really about thinking through the problem of the mismatch between the supply and the demand for knowledge.

Now, all these issues remain central. But I am increasingly gripped by the sense that it is our loss of a moral and metaphysical order that is the chief problem of our time—an instability that is being driven by science and technology. It’s a point that Ted Kaczynski spotted early, though I reject his methods. When I read about the latest developments in AI and DIYbiology I feel a world spinning out of control—and feel that it is this feeling, misinterpreted, that has led us to Trump. It’s spawned a wildness that expresses itself in Trump’s statements and behavior, and in those of some who support him, a feeling that things have been spinning out of control (MAGA); but rather than trying to react in a conservative or Burkean manner to reestablish order, the urge has now become nihilistic, expressing itself as authoritarianism and irrationality—Bannon’s ‘let’s blow up the entire system’ and the GOP’s ‘who cares if Putin threw the election, our guy won’.

So it is that here, teaching in Texas, I find myself saying repeatedly to my classes: you guys say you are Christian; you picket abortion clinics; but why aren’t you picketing the biology building, which represents a much greater threat to your world order? In this sense I think Fuller is correct that our political choices are reorienting themselves from left-right to what might be called black-green—that the real debate before us is between those who seek deification via technoscience and those hoary old metaphysicians who declaim the folly of that path and call for the observance of some type of larger order and limit.

It’s a battle that I fear I am on the losing side of. Which goes a long way to explain my love of old movies like A Christmas Carol, where I can (for all the Jim Crow or sexism or other stupidities) for an hour or two find a moral and metaphysical order that offers me solace.

Post-Truth Blues?


Author Information: Adam Briggle, University of North Texas,


Editor’s Note: As we near the end of an eventful 2016, the SERRC will publish reflections considering broadly the immediate future of social epistemology as an intellectual and political endeavor.

Please refer to:


Image credit: Tim, via flickr

I think that 2017 might find social epistemologists busy reckoning with the fallout from the word of the year in 2016: post-truth. The definition for post-truth is: “Relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” The Oxford English Dictionary online gives this example: “in this era of post-truth politics, it’s easy to cherry-pick data and come to whatever conclusion you desire.”

Bruno Latour might snidely conclude that “we have always been post-truth,” because there never was such a thing as objectivity and cherry-picking data is a game as old as data. Steve Fuller wrote something similar in a recent column. Daniel Sarewitz might as well just say “No duh! We have long suffered from an ‘excess of objectivity’!”

Finally, the world has bought what we have been selling! Oh…hmmm …

Now, maybe it is just my weak stomach, but I am feeling queasy with sellers’ remorse. If all expertise is just institutionalized power, then forget the fourth branch of government—CIA, DOE, EPA, Council of Economic Advisers, Department of Education—all of it is suspect and subject to revision. It strikes me as eerily similar to the conditions in Soviet Russia and Nazi Germany that prompted Robert K. Merton to articulate the normative structure of science. Or maybe it is better thought of as “the problem of extension”: perhaps someone other than a nuclear physicist can run the DOE, given that it is tangled up in all sorts of non-technical aspects of society, but Rick Perry?

I wonder if some of us might whistle a guilty tune under our breath, turn around and start re-assembling some of the structures we had earlier pulled apart.

Deconstructing such woolly myths as ‘objective facts’, I wonder if the social epistemology crowd might feel a bit of sellers’ remorse on this score.

Author Information: Mark D. West, University of North Carolina, Asheville,

West, Mark D. “The Holidays and What is Given.” Social Epistemology Review and Reply Collective 5, no. 12 (2016): 17-19.

Editor’s Note: As we near the end of an eventful 2016, the SERRC will publish reflections considering broadly the immediate future of social epistemology as an intellectual and political endeavor.

The PDF of the article gives specific page numbers. Shortlink:

Please refer to:


Image credit: geir tønnessen, via flickr

We have reached the holidays, and for some of us, these are happy times. The media, at least, treat these days as if the merriment and cheer are givens; decorations festoon stores and public places, and music about Christmas cheer permeates any space: where two or more are gathered, there “Jingle Bell Rock” is in their midst.

In the Jewish tradition, the winter season means a hanukkiah will make its yearly appearance, with the story of how one’s family came to own it. A normal menorah has seven branches, each with a candle holder; a hanukkiah has nine, the ninth a helper candle that stands out of line with the others. The hanukkiah is used only on Hanukkah, with its light serving no function other than to recall the miracle of Hanukkah.

Every hanukkiah brings with it a story, and every hanukkiah is itself a gift of memory. Our hanukkiah was carried by my cousin through the streets of Jerusalem, down the crowded streets, and across the United States, finally coming to rest in our home, a gift after many years of travel. Other families tell stories of hanukkiah smuggled from foreign countries under the glare of repressive regimes, carried in suitcases through customs at Ellis Island, bought for pennies in shtetls in lands long fled. The hanukkiah is a given of the holiday, and is, often, itself, a given. Like a menorah, it gives light; but the light is for only one purpose—a ‘given’ purpose.

Gift and Given

Considering that the root of both ‘gift’ and ‘given’ is the Proto-Indo-European root *ghabh-, “to give or receive”, I don’t think it is too far afield, in this season of giving and receiving, to consider not only gifts but givens, which, after all, to be givens must have been given by someone or something. As such, we might ask ourselves as social epistemologists what are the givens of our field, and what does it mean, in Jean-Luc Marion’s pregnant formulation, to exist in the realm of the “étant donné,” the “being given?”

What I mean by that is that we (the rational ‘cogita’ who operate as the members of the SERRC) take ourselves as ‘givens,’ as ‘données.’ From our own existence, we bootstrap the existence of groups (if I can exist, then I must, as a good agent of the Enlightenment, grant such agency to others, who, as aggregates, are groups). Once we assume our own existence as a ‘given,’ we can take as our ‘given’ the group; and our ‘gift’ to the world of the philosophical is the notion of group epistemology. Particularly in this age of the Internet, and of electronic publications and forums, the disembodied res cogitans of Descartes is closer to our felt sense of what we are, as a group, than we might wish.

The cogito, and various discussions of it such as Hintikka’s (1962, reprinted in 1967), are familiar to all. But, as Williams (2014) suggests, the Cartesian argument (“cogito, ergo sum”) is posed in a more complex manner than the familiar formulation has it. Descartes imagines first the existence of a deity, then (implicitly) a self thinking of that deity and of that deity’s qualities, including benevolence; he then imagines that some malicious entity might cause him to perceive the world and its qualities in some way that does not accurately reflect the real. But, reasons Descartes, he himself is thinking, and from that he bootstraps the conclusion that he exists. Hence “cogito, ergo sum” is the endpoint, not the beginning, of a thought process, and that thought process is more akin to an intuition than to a proof, one which Stone (1993) argues is best understood as an enthymeme. Boos (1983) argues that the cogito’s ‘thoughtless thinking’ must be about something, and that the Cartesian formulation ends up as a metalogical claim, something like “If I doubt that I am, I am,” with the “I am” serving as the “point ferme” of Gueroult (1953) and as the Archimedean fixed point of the cogito’s Gödelian diagonal lemma.
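
For readers who want the formal result behind Boos’s analogy, the diagonal (fixed-point) lemma is standardly stated as below. This is only a textbook sketch, not Boos’s own notation, and the cogito gloss in the comments is merely an illustrative reading.

\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Diagonal (fixed-point) lemma, standard form: for any formula
% $\varphi(x)$ with one free variable, a theory $T$ that interprets
% enough arithmetic proves some sentence $\psi$ such that
\[
  T \vdash \psi \leftrightarrow \varphi(\ulcorner \psi \urcorner)
\]
% i.e. $\psi$ ``says of itself'' that it has the property $\varphi$.
% On Boos's reading, ``If I doubt that I am, I am'' plays an analogous
% fixed-point role: entertaining the doubt already secures the ``I am.''

\end{document}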

As Boos suggests, the implication of this is clear: the formulation sounds suspiciously like a variant of Hintikka’s Positive Introspection Axiom (the KK-thesis), which holds that if an agent knows something, she knows that she knows it. The debate concerning this thesis is substantial (see, for example, Williamson 2000; Ginet 1970; Carrier 1974). But our theorizing must begin somewhere; we must accept some sort of metatheoretic notion if we are to devise theories at all. In our case, if we are to speak of groups, there must be individuals, and the first individual of all is “I.” That is our given, if we are to avoid the endless cycle of “no more this than that” of the Pyrrhonian skeptics.
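
In the notation of epistemic logic, the KK-thesis is the positive-introspection axiom (often called axiom 4); the rendering below is the standard one and is offered only as a sketch of what is at stake in the debate just cited.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Positive Introspection (the KK-thesis, axiom 4 of epistemic logic):
% if agent $a$ knows $\varphi$, then $a$ knows that she knows $\varphi$.
\[
  K_a \varphi \rightarrow K_a K_a \varphi
\]
% The literature cited above (Williamson 2000; Ginet 1970; Carrier 1974)
% debates whether this axiom holds; the essay needs only the weaker point
% that theorizing must start from some such metatheoretic given.

\end{document}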

Assumptions and Limitations

This is not to say that a domain of study cannot function with a fully negative conceptualization of its object of study. Jean-Luc Marion, in his book God Without Being (1995), considers the limiting case of an apophatic theology: if we can, as Maimonides (Benor 1995) argued, make only negative assertions about the attributes of a divine entity, are we not at some point forced to suggest that even being is an attribute the divine entity does not possess?

As Marion (2002) suggests, the givenness of the existence of a divine entity is not the predicate of theology; the existence of those searching for the divine entity is. As Kaplan (2010) argued, it is possible to have Judaism without a deity, but not without Jews. In a philosophical vein, how does one privilege Husserl’s Gegebenheit (Leask 2003) without merely assuming it as a given? How do we understand Being without taking it as given, and without somehow making that ‘given’ into a ‘Given,’ complete with a transcendental ‘Giver’?

We, as social epistemologists, are in an interesting position with respect to such questions. We are, at some level, can-kickers par excellence; in our struggle to explain knowledge structures as arising from groups, we are indeed situated in a local struggle, with its own give and take. But sometimes, perhaps, we should look up from our regional debates and consider the larger issues afield: the “not yet” of Hegel’s “tarrying with the negative” (Foshay 2002), the limits of the Given, and the gifts we receive, and give, as a result of this struggle.


References

Benor, Ehud Z. “Meaning and Reference in Maimonides’ Negative Theology.” Harvard Theological Review 88, no. 3 (1995): 339-360.

Boos, William. “A Self-Referential ‘Cogito’.” Philosophical Studies 44, no. 2 (1983): 269-290.

Carrier, L. S. “Skepticism Made Certain.” The Journal of Philosophy 71, no. 5 (1974): 140-150.

Foshay, Raphael. “‘Tarrying with the Negative’: Bataille and Derrida’s Reading of Negation in Hegel’s Phenomenology.” The Heythrop Journal 43, no. 3 (2002): 295-310.

Ginet, Carl. “What Must be Added to Knowing to Obtain Knowing That One Knows?” Synthese 21, no. 2 (1970): 163-186.

Gueroult, Martial. Descartes selon l’ordre des raisons (Descartes’ Philosophy Interpreted According to the Order of Reasons). 2 vols. Paris: Aubier, 1953.

Hintikka, Jaakko. “Cogito, Ergo Sum: Inference or Performance?” In Descartes: A Collection of Critical Essays, edited by Willis Doney, 108-139. Palgrave Macmillan UK, 1967.

Kaplan, Mordecai M. Judaism as a Civilization: Toward a Reconstruction of American-Jewish Life. Jewish Publication Society, 2010.

Leask, Ian. “Husserl, Givenness, and the Priority of the Self.” International Journal of Philosophical Studies 11, no. 2 (2003): 141-156.

Marion, Jean-Luc. God Without Being: Hors-Texte. University of Chicago Press, 1995.

Marion, Jean-Luc. Being Given: Toward a Phenomenology of Givenness. Stanford University Press, 2002.

Stone, Jim. “Cogito Ergo Sum.” The Journal of Philosophy 90, no. 9 (1993): 462-468.

Williams, Bernard. Descartes: The Project of Pure Enquiry. New York: Routledge, 2014.

Williamson, Timothy. Knowledge and Its Limits. Oxford University Press, 2000.