
Author Information: Eric Kerr, National University of Singapore,

Kerr, Eric. “The Social Epistemology of Book Reviews.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 48-52.

The pdf of the article gives specific page references. Shortlink:

Image by Joel Gallagher via Flickr / Creative Commons


As 2018 draws to a close, marking the end of my first full year as Book Reviews editor at SERRC, I want to take this opportunity to reflect on what we’ve done to promote conversation and criticism around new books in social epistemology, and on how we can apply insights from social epistemology to our own book reviews.

The Place of Reviews

Social epistemology has, nominally, a close connection to the book review. As many readers of this journal will know, the term “social epistemology” was coined in the 1950s by the librarian and information scientist Jesse Shera to mean “the study of knowledge in society” (Shera 1970, p. 86). Shera developed his work with his colleague Margaret Egan and in the footsteps of fellow librarian Douglas Waples, concerned with the ways in which society reads: broadly, how it accesses, interprets, categorizes, indexes, and disseminates the written word, and the role that librarianship, bibliography, and new methods of documentation could play (Zandonade 2004).

A library is a very particular filter of knowledge production. The Web may be seen as another, or as a collection of many; an academic journal as yet another. These filters organize knowledge in society in their own ways, and we can, and do, evaluate them and make judgments about when they work well and when they do not. Today, our access to information occurs within a wider ecosystem of filters, which have flourished in the contemporary period in tandem with a technological infrastructure that radically multiplies and variegates them.

For educators, reviews (from our students or our colleagues) are sometimes the primary means by which our performance and success are judged. Customer reviews – typically performed by the “uncredentialled curator” – are available on almost any website with something to sell, and new companies have formed whose sole purpose is to provide customer reviews. Facebook, Instagram, Pinterest, and so on, use human and non-human filters to sift through vast tranches of information. I don’t need to belabour the point – it’s familiar to all of us.

Alongside the idea of the filter has emerged a renewed prominence of the curator, influenced by the curator’s powerful position in the art world. This curation comes with its own culture, its own beliefs, and its own language, a language that functions to exclude alternatives and police boundaries. And while an art curator’s job may once have been to select what art was worth your attention, now, in an attention economy, a curator’s job may be just as much to provide the means to deal with information overload.

To complicate things still further, we now perform much more personal curation – keeping tabs, messages, snippets, and screenshots as well as cultivating all kinds of algorithms that learn from our past behaviour and deliver to us more of what we saw before.

Thomas Frank calls this expansion of curation, not just into reviewing almost anything we consume but into the very language we use and the ways we think, “curatolatry”. He discusses how, responding to the newly coined “fake news” (Faulkner 2018; Fuller 2016; Levy 2017), Barack Obama said:

We are going to have to rebuild within this wild-wild-west-of-information flow some sort of curating function that people agree to.

While Obama, like other liberals in Frank’s view, tends towards curation, Donald Trump is associated with the “refusal of curation. Trump does not reform or organize the chaos of the world…”

Frank warns at the end of his article:

“What they don’t agree upon, meanwhile, is simply ignored. It is outside the conversation. It is excluded. A world without fake news might really be awesome. So might a shop where every bottle of wine is excellent. So might an electoral system in which everyone heeds the urging of the professional consensus. But in any such system, reader, people like you and me can be assured with almost perfect confidence that our voices will be curated out.”

A Social Epistemological Interpretation of the Book Review

Would, or should, SERRC perform a kind of curating function “that people agree to” in order to filter new books in social epistemology? I don’t think it does perform this function, and I’m not sure that it should.

It is often alleged that book reviews tend towards mediocrity and nepotism, a product of the publishing industry and, in academia, of entrenched structures and metrics of hierarchy, prestige, and social status. To add to the miserable plight of the book review, reviews are not treated as prestigious publications or emphasized as lines on CVs (if listed at all).

They do not rank as highly as research articles or chapters in books or, indeed, books themselves. They do not generally rank at all on any metric that is used by academic institutions or funding bodies. Book reviews tend, therefore, to fall into the category of ‘service’ – gifts one is obliged to offer largely out of a sense of duty, responsibility, and morality.

This is lamentable. The first thing we are asked to do as students is review books. For many of us, the first thing we do when writing, or preparing to write, a paper is to review books – to perform a literature review. Book reviews are not, primarily, a service to the author but to a wider audience. (If they were merely the former, one could simply email the review to the author and avoid the hassle of formal publication.)

They do not simply repeat knowledge contained in the book but provide new knowledge, as evidenced, I believe, by all of the book reviews we have published in the last two years. Sometimes a review is taken to be an appraisal by an expert, but I think that social epistemology gives us reason to take a second glance at this intuitive idea (Social Epistemology 32(6), a special issue on Expertise and Expert Knowledge; Watson 2018).

We should be critical of the encroachment of curation and of the perceived need to curate. In wider culture, many of the most well-known critics were not themselves trained in the fields they reviewed. This is often held against them by artists and writers, but if we do not see their purpose as being about expert appraisal, that criticism loses some of its force.

One reason for this may be that reviews tell us as much about the reviewer as the reviewee. Reviews, as Oscar Wilde observed, are autobiographical. Ambrose Bierce echoed this sentiment in his Devil’s Dictionary. The entry for “review” reads simply:

To set your wisdom (holding not a doubt of it,
Although in truth there’s neither bone nor skin to it)
At work upon a book, and so read out of it
The qualities that you have first read into it.

This view seems to suit us at SERRC. We are, as our name suggests, a collective, and rather than curate, we read and write about whatever happens to take our interest at the time. We think, often, out loud. If that interest spreads through the community, it is likely to be picked up and turned into a symposium or extended dialogue. Or perhaps not. Others are welcome to join our community if they are interested in contributing to these conversations.

18 Months, More or Less

Nevertheless, and undeniably, book review editors have a role to play in organizing knowledge in society. My approach to editing book reviews since I took over has not been to gatekeep. “Is this interesting?” – usual caveats aside about the word ‘interesting’ – has been the benchmark rather than “Is this proper social epistemology?”

I took over as Book Review Editor part-way through 2017. In this short period, we have published 64 reviews (and replies to reviews, and replies to reviews of reviews). Many of these have taken the form of book review “symposiums” where several authors take on one book, often featuring replies from the book’s author. Soliciting a range of views allows us to present a book from the perspective of scholars with different expertise and focus.

It encourages more in-depth and richer discussions of a book, and its surrounding intellectual milieu, and extends the conversation sometimes over a period of months. I believe that, in a small way, this facilitates a new ordering of knowledge around new books and, so, contributes to a new social epistemology.

It’s hard to focus on specific books given this long list, but I can hint at some trends that we have been pushing, and will continue to push, in the new year. One concerns diversity and internationalization. When two of my National University of Singapore colleagues, Jay Garfield and Bryan Van Norden, published an opinion piece in The New York Times’ The Stone that argued for a greater role for “less commonly taught philosophies” (such as, but not limited to, Chinese or Indian philosophy) in the US curriculum, it caused a stir in the profession, and more widely.

A great deal has been written about the subsequent book Van Norden published on the theme, Taking Back Philosophy, but I would argue that our symposium, featuring seven scholars, including me, has added quite a bit to that conversation. A personal highlight for me was Steve Fuller’s visit to the Asia Research Institute at the National University of Singapore to speak on the subject. The full lecture can be heard here. Another important intervention in internationalizing our catalogue has been the symposium on African philosophy and I intend to continue this global perspective in 2019.

One innovation of SERRC is that we encourage authors to respond. I often write to authors to give them what the media call a right of reply. I believe this is quite unusual in the academic reviewsphere. It’s a method that is fraught with pitfalls and potential catastrophe but, I think, valuable for the ideas that frequently come out of it. Traditionally, a review is left hanging; the reviewer has the last laugh. Allowing authors a chance to respond can correct perceived inaccuracies but, more importantly, can lead to new shared understandings.

As we enter 2019 under the deluge of our own personal tsundoku let’s embrace a multitude of reviews and reviews of reviews.

Best wishes for the new year. As always, if you wish to review a book, or propose a symposium, for SERRC you may write to me at the address below.

Contact details:


Briggle, Adam; and Robert Frodeman. “Thinking À La Carte.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 8-11.

Dusek, Val. “Antidotes to Provincialism.” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 5-11.

Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective (December 25, 2016).

Fuller, Steve. “‘China’ As the West’s Other in World Philosophy.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 1-11.

Graness, Anke. “African Philosophy and History.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 45-54.

Jain, Pankaj. “Taking Philosophy Back: A Call From the Great Wall of China.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 60-64.

Janz, Bruce. “The Problem of Method in African Philosophy.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 1-7.

Kerr, Eric. “A Hermeneutic of Non-Western Philosophy.” Social Epistemology Review and Reply Collective 7, no. 4 (2018): 1-6.

Lauer, Helen. “Scientific Consensus and the Discursive Dilemma.” Social Epistemology Review and Reply Collective 7, no. 9 (2018): 33-44.

Faulkner, Paul. “Fake Barns, Fake News.” Social Epistemology Review and Reply Collective 7, no. 6 (2018): 16-21.

Levy, Neil. “The Bad News About Fake News.” Social Epistemology Review and Reply Collective 6, no. 8 (2017): 20-36.

Martini, Carlo and Maria Baghramian, eds. Special Issue on Expertise and Expert Knowledge. Social Epistemology 32, no. 6 (2018).

Riggio, Adam. “Action in Harmony with a Global World.” Social Epistemology Review and Reply Collective 7, no. 3 (2018): 20-26.

Shera, J.H. Sociological Foundations of Librarianship. New York: Asia Publishing House, 1970.

Watson, J.C. “What Experts Could Not Be.” Social Epistemology (2018). DOI: 10.1080/02691728.2018.1551437.

Zandonade, T. “Social Epistemology from Jesse Shera to Steve Fuller.” Library Trends 52, no. 4 (2004): 810-832.

Author Information: Steve Fuller, University of Warwick,

Fuller, Steve. “Staying Human in the 21st Century Is Harder Than You Might Think.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 39-42.

The pdf of the article gives specific page references. Shortlink:

The Main Street Bridge in Columbus, Ohio, the largest major city near Ashland University.
Image by Bill Koontz via Flickr / Creative Commons


Let me start by saying that it’s a great honour to address you today.[1] It turns out that nearly forty years ago, I was the ‘salutatorian’ of the Class of 1979 at Columbia College in New York. That means I was the number two guy in terms of overall grade point average across all subjects. And that guy gives the introductory speech, whereas the number one guy gives the closing speech, the so-called ‘valedictory’ address, which literally means saying goodbye.

It seems that once again I am the ‘warm-up act’ for a graduation ceremony, in that once I finish speaking, you’ll actually get your degrees! And that’s exactly how it worked in the old days.

I am someone who thinks that if I have anything interesting to say, it will be to those who are more oriented to the future than to the past – or even the present. In any case, this is how I would wish you to interpret me.

There are many challenges to our sense of humanity today. I want to start with a long-term challenge that you will increasingly face in the coming years. It has to do with privileging ‘humanity’, understood as a kind of upright ape that has consolidated its place on Earth by monopolizing control over the planet’s resources. This is what geologists are beginning to call the ‘Anthropocene’. It probably began with the Industrial Revolution in the late eighteenth century, and it marks the first time a single species has dictated the terms of engagement on Earth.

This has led to considerable metaphysical soul-searching about the human condition. And put bluntly, much of this soul-searching has resulted in self-loathing for humanity as a species. We are to blame for the unprecedented levels of mass extinctions and climate change over the past two or more centuries.

All the while, our species has come to range over the entire planet in a manner that reminds some of these scientifically informed misanthropes of cockroaches. However, the difference between us and cockroaches is that cockroaches don’t seem to exhibit the strong sense of inequality among their members that we have historically insisted upon among ourselves.

So given our evolutionary track record, in what sense are humans worth promoting, let alone all of us, as Thomas Jefferson said, ‘created equal’? Of course, it has long been recognized that there is an enormous spread in the capacities of human beings. Modern biological science has given this informal observation an empirical basis.

Originally it was presented as a demonstration of natural inequality, and the phrase ‘scientific racism’ remains a legacy of that line of thought. However, nowadays biologists prefer to speak of the diversity of life-forms, which together constitute ecologies, the Earth itself being the ultimate ecosystem.

But against this general current of thought, egalitarianism has been advanced by the Abrahamic religions – Judaism, Christianity and Islam – simply on the grounds that we are all children of the same God in some broadly ‘privileged’ sense.

In the modern era, this fundamental intuition was given focus by the classical idea of republicanism, namely, that a society should be constituted only by those who regard each other as equals. And what makes and keeps people equal are the standards by which they are judged – and this is determined by the people themselves. And that was what was meant by the res publica – the ‘public thing’, in Latin.

It’s worth recalling that prior to the US Constitution, republics had been small enclaves of the few who regarded each other as equals. Think about Athens, Rome, Venice and the Netherlands in their republican phases. Basically, they were places for rich migrants.

So What Happens If The Migrants Aren’t Rich?

In law, there are two general ways in which people can become citizens. One is called jus soli, and refers to the land in which you yourself were born, and the other is called jus sanguinis, and refers to the birth of your parents. If you look at a map of the world today, you’ll see that jus soli dominates the Western hemisphere and jus sanguinis dominates the Eastern hemisphere. And that’s because the Western hemisphere – this hemisphere of North and South America – has been seen as a natural place for migrants.

However, candidates for citizenship in a republic typically have to demonstrate their fitness to be treated as equal with regard to the res publica. And then once accepted, they would be obliged to participate in public life. Providing evidence of wealth was historically crucial because it showed both management skills and a desire to pool one’s resources with an alien society. The duty to vote in elections – in which each vote counts as one — is simply a remnant of what had been a much stronger civic expectation to engage in society.

Many philosophers have believed that republicanism cannot be scaled up, because it seemed unreasonable to expect that people with quite diverse backgrounds and interests could treat each other as ‘equals’ in some politically sustainable sense. It’s quite clear that even the American Founding Fathers had their doubts, since they counted slaves as only three-fifths of a person for purposes of Congressional representation.

Notice that I haven’t yet mentioned ‘democracy’. That’s because democracy has historically meant ‘majority rule’, on whatever terms it’s established. For example, 51% could license the execution of the remaining 49% in a democracy. Indeed, people may start equal in a democracy but that equality could soon evaporate after the first collective decision is taken. Think Animal Farm and Lord of the Flies, two classic mid-20th century English novels.

Here one can begin to appreciate the abiding importance of the Abrahamic religions in upholding a metaphysical conception of human equality that cuts against what had been traditionally seen as the eventual descent of democracy into ‘mob rule’.

That metaphysical idea – the fundamental equality of all humans — was first made incarnate in the practice of debt forgiveness among the Jews on each sabbatical year. To cut a long story short, since I don’t want to bore you with religious history, the fundamental equality of people was ritualistically demonstrated by the redistribution of wealth from the ‘winners’ to the ‘losers’ in society, which in turn provided an opportunity for everyone to be ‘born again’: The rich as somewhat poorer and the poor as somewhat richer. Thus, society is periodically remade as a level playing field.

Until the losers are regarded as always the equals of the winners of society, democracy is not an especially egalitarian political movement. This helps to explain why such great defenders of liberalism as John Stuart Mill regarded democracy with considerable suspicion. He believed that given the chance, the great unwashed might permanently silence the enlightened few, who throughout history have often been on the losing side of many of society’s great arguments – especially on matters concerning the future.

In What Sense, Then, Are All People ‘Created Equal’?

I would like to propose that our equality is ultimately about possessing a wide degree of freedom. And I mean a freedom that gives you the right to be wrong and the right to fail. This is only possible if you’re allowed to express yourself in the first place — and be allowed a second chance. This is to do with the range of opportunities available to you.

It’s easy to see that someone with a track record of managing their own wealth successfully would already be in the business of allowing themselves second chances – say, when an investment sours – and so would be fit for republican citizenship. However, the ancient Jewish practice of debt atonement was the original policy for allowing everyone to acquire that enviable status. It was always in the back of the minds of those who designed the welfare state.

Every human is entitled to be free in how they dispose of their lives, regardless of their likelihood of success. Freedom is the capacity to take risks, and universities are for the development of that capacity. There is nothing natural about how people come to want what they want. It is all a matter of training, and the only question is where and how it happens. And you have come to Ashland for that.

If you graduate from Ashland with a clearer sense of purpose than when you entered, then this university will have done its job and you will be able to go forward as an exemplary human being. I say this as a matter of principle – regardless of what you take your purpose to be, and even if your sense of clarity arises from revolting against what you have encountered here.

The bottom line is that you can’t have a sense of purpose unless you have faced serious alternatives – that is to say, ‘opportunity costs’, as the economists like to put it.

You’re not free unless you have had the opportunity to reject alternatives presented to you. And in that respect, the value of your education amounts to increasing your capacity for rejection – you can afford to let go. And that means more than simply saying no because of what you have been taught, but rather saying yes because you can identify with a certain way of being in the world.

We live in a time when those of you before me can self-identify in a wider range of ways than ever before. When I was your age, all we had was class and national mobility at our disposal, but now you also have gender and even race mobility added on to it – at both a social and a biological level, in case one is worried by pedigree.

I am by no means suggesting that you need to think about any of these sorts of migrations, but they are there for the asking, and if you have been trained properly here, you will at least have heard of them and have adopted a reasoned response to them.

Whatever else one can say about humanity in the future, it is bound to be a moveable feast. And you will be among the movers and shakers!

Contact details:

[1] What follows is the commencement address to the Winter 2018 graduating class of Ashland University, Ohio. It was delivered on 15 December.

Author Information: Jim Butcher, Canterbury Christ Church University,

Butcher, Jim. “Questioning the Epistemology of Decolonise: The Case of Geography.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 12-24.

The pdf of the article gives specific page references. Shortlink:

Maori dancers about to perform at the 2017 Turangawaewae Regatta in New Zealand.
Image by Hone Tho via Flickr / Creative Commons


This paper was prompted by the prominence of new arguments in favour of ‘decolonising’ geography. Decolonisation was taken by the 2017 Royal Geographical Society–Institute of British Geographers (RGS-IBG) annual conference as its theme, with many preparatory papers in Area and Transactions and sessions organised around it. In both journals, to ‘decolonise’ was presented as an imperative for geography as a field of study, and for all geographers within it, to address urgently (Daigle and Sundberg, 2017; Jazeel, 2017).

In the USA, the 2018 annual American Association of Geographers (AAG) conference in New Orleans also featured a number of well-attended sessions that took the same perspective. The number of journal articles advocating decolonialism has also increased sharply in the last two years.

The spirit in which this paper is written is supportive of new debates in the academy, and supportive of the equality goals of decolonise. However, it takes issue with important assumptions that, it is argued, will advance neither the cause of the marginalised nor geography as a discipline.

The paper is in three related parts, each written in the spirit of raising debate. First it considers the principal knowledge claim of decolonise: that a distinctly Western epistemology presents itself as a universal way of knowing, and that this is complicit in colonialism of the past and coloniality of the present through its undermining of a pluriverse of ontologies and consequent diversity of epistemologies (Sundberg, 2014; Grosfoguel, 2007; Mignolo, 2007). The paper also illustrates further how this principle of decolonialism is articulated in some key geographical debates. It then highlights a number of contradictions in and questions with this epistemological claim.

Second, decolonialism’s critique of universalist epistemology is effectively, and often explicitly, a critique of the Enlightenment, as Enlightenment humanism established knowledge as a product of universal rationality rather than of varied cultures or deities (Pagden, 2015; Malik, 2014). The paper argues that decolonialism marks a retreat from what was positive about the Enlightenment tradition: the capacity of (geographical) knowledge to transcend time and place, and hence act as universal knowledge.

In conclusion, I briefly broach the value of decolonising geography in terms of its claim to be challenging injustice. I suggest that a truly humanist and universalist approach to knowledge has more to offer geographers seeking ways to tackle inequality and differential access to the process of producing knowledge than has the epistemic relativism of decolonise.

The Epistemological Claim of Decolonise

One of the claims made prominently at the conference and elsewhere by advocates of decolonisation is that geographical knowledge can be ‘Western’ (Radcliffe, 2017), ‘Eurocentric’ (Jazeel, 2017), ‘colonial’ (Baldwin, 2017; Noxolo, 2017) or ‘imperial’ (Tolia-Kelly, 2017; Connell, 2007 & 2017). This is not just a question of a close link between geographical knowledge and Western interests per se – it is well established that geographical understanding has developed through, and been utilised for, partial, often brutal, interests. For example, one of the principal figures in the history of UK geography, Halford Mackinder, regarded geography as central to Britain’s colonial mission (Livingstone, 1992).

The issue here is an epistemological one: do the ideas, theories and techniques that today’s geographers have inherited constitute a universal geographical tradition of human knowledge to be passed on, built upon and critiqued; or are the ideas, theories and techniques themselves ‘saturated in colonialism’ (Radcliffe, 2017: 329), and hence part of a particular system of knowledge in urgent need of decolonisation?

In his advocacy of decolonialism, Grosfoguel (2007: 212) argues that it is wrong to say that ‘there is one sole epistemic tradition from which to achieve truth and universality’. Rather, he and other decolonial theorists argue for a pluriverse – a variety of ways of knowing corresponding to different historical experience and culture (Sundberg, 2014; Mignolo, 2013).

Decolonialism holds that systems of knowledge existing in colonised societies were effectively undermined by the false universal claims of the West, claims that were in turn inextricably bound up with colonialism itself. Hence in this formulation the persistence of the ‘sole epistemic tradition’ of ‘the West’ well after formal decolonisation has taken place ensures the continuation of a discriminatory culture of ‘coloniality’ (Grosfoguel, ibid.).

As a result it is not deemed sufficient to oppose colonialism or its legacy within the parameters of contemporary (geographical) thought, as that thought is itself the product of a Western epistemology complicit in colonialism and the denial of other ways of knowing. Jazeel quotes Audre Lorde to accentuate this: ‘the master’s tools will never demolish the master’s house’ (2017: 335).

This leads decolonial theory to argue that there needs to be a delinking from Western colonial epistemology (Mignolo, 2007). Here decolonial theorists part company with many post-colonial, liberal and Left arguments against colonialism and racism, and for national independence and equal rights. These latter perspectives are viewed as unable to demolish the ‘master’s house’, because they use the ‘master’s tools’.

For Grosfoguel, rights – the basis around which almost all liberation struggles have been fought for the last 250 years – are ‘ … articulated to the simultaneous production and reproduction of an international division of labour of core / periphery that overlaps with the global racial / ethnic hierarchy of Europeans / non-Europeans’ (2007: 214). Rights discourse, as with ‘Western’ knowledge, is regarded as part of a Cartesian ‘Western global design’ (ibid.).

The relationship to the Enlightenment, then, is key. Enlightenment ideas are associated with modernity: the mastery of nature by people, as well as notions of rights and the social contract that influenced the development of the modern state. But for decolonial thinkers, modernity itself is inextricably tied to colonialism (Grosfoguel, 2007; Mignolo, 2007). Hence the challenge for decolonisation is to oppose not just colonialism and inequality, but also the Enlightenment universalism that shapes academic disciplines and fields including geography (ibid.).

Decolonial theory proposes in its stead the pluriverse of ways of knowing (Sundberg, 2014). For example, Blaser (2012: 7) writes of a ‘pluriverse with multiple and distinct ontologies or worlds’ that ‘bring themselves into being and sustain themselves even as they interact, interfere and mingle with each other’ under asymmetrical circumstances (my italics). Effectively this answers the philosopher Ernest Gellner’s rhetorical question ‘Is there but one world or are there many?’ (Gellner, 1987: 83) with the clear answer ‘many’.

It is important at this point to distinguish between a plurality of ideas, influences and cultures on the one hand, and a pluriverse of ontologies, that is, different worlds, on the other. The former is uncontentious: openness to ideas from other societies has to be progressive, and this is evident throughout history, if not self-evident.

Cities and ports have played an important role in the mixing of cultures and ideas, and often have proved to be the drivers of scientific and social advance. Scientists have learned much from traditional practices, and have been able to systematise and apply that knowledge in other contexts. Equally, reviewing curricula to consider the case for the inclusion of different concepts, theories and techniques is a worthwhile exercise.

A pluriverse of ways of knowing has much greater implications, as it posits diverse systems of knowledge as opposed to a diversity of viewpoints per se.

The Debate in Geography

The RGS-IBG 2017 Annual Conference call for sessions set out the aim of decolonising geographical knowledges as being ‘to query implicitly universal claims to knowledges associated with the west, and further interrogate how such knowledges continue to marginalise and discount places, people, knowledges across the world’ (RGS-IBG, 2017).

Recent papers advocating decolonise argue in a similar vein. Radcliffe argues that: ‘Decolonial writers argue that the modern episteme is always and intrinsically saturated with coloniality’ (2017: 329), hence the need to be alert to ‘multiple, diverse epistemic and ethical projects’ and to ‘delink’ from ‘Euro-American frameworks’ (ibid. 330). She goes on to argue that decoloniality should cover all aspects of geographical education: ‘racism and colonial modern epistemic privileging are often found in student selection and progress; course design; curriculum content; pedagogies; staff recruitment; resource allocation; and research priorities and debates’ (ibid. 331).

This challenge to the development of knowledge as a universal human endeavour, across history and culture, is often regarded not only as an issue for geographers, but is posed as a moral and political imperative (Elliott-Cooper, 2017; Jazeel, 2017). For Elliott-Cooper:

Geographers sit at a historical crossroads in academia, and there is no middle, benevolent way forward. We can either attempt to ignore, and implicitly reproduce the imperial logics that have influenced the shape of British geography since its inception, or actively rethink and dismantle imperialism’s afterlife by unlearning the unjust global hierarchies of knowledge production on which much of the Empire’s legitimacy was based. (2017: 334)

To see contemporary geography as an expression of ‘imperialism’s afterlife’ serves to dramatically reinforce a sense of geographical knowledge – knowledge itself, not its origin or application – as ‘colonial’ or ‘imperial’. This approach often involves eschewing one’s own, or ‘Western’, knowledge in favour of that of marginalised people. Two academics, reflecting on their teaching, state: ‘Our efforts do not even begin to live up to decolonial land based pedagogies being implemented across indigenous communities’ (Daigle and Sundberg, 2017: 339).

This deference to ‘land based pedagogies’ speaks to an eschewal of modern geographical knowledge and method in favour of a plurality of knowledges, but with authority granted on the basis of indigeneity. Noxolo makes a similar case, arguing that ‘[t]here are material conditions of experience out of which both postcolonial and, crucially decolonial, writings emerge’ (2017: 342). Emphasis is placed on the intellectual authority of the lived experience of the marginalised.

We may well want to read something because of the experience of the writer, or to consider how a society gathers information, precisely in order to begin to understand the perspectives and conditions of others whose lives may be very different from our own. But these writings enter into a world of ideas, theories and techniques in which individual geographers can judge their usefulness, veracity and explanatory power. The extent to which they are judged favourably as knowledge may well depend upon how far they transcend the conditions in which they were produced rather than their capacity to represent varied experience.

This is not at all to denigrate accounts based more directly upon lived experience and the diverse techniques and ideas that arise out of that, but simply to recognise the importance of generalisation, systematisation and abstraction in the production of knowledge that can have a universal veracity and capacity to help people in any context to understand and act upon the world we collectively inhabit.

Contradictions: Geography’s History and Darwin

There is a strong case against the epistemic relativism of decolonialism. Geographical thought is premised upon no more and no less than the impulse to understand the world around us in order to act upon it, whether we seek to conserve, harness or transform. Geographical knowledge qua knowledge is not tied to place, person or context in the way decolonial theory assumes – it is better understood not as the product of a pluriverse of ways of knowing the world, but of a diverse universe of experience.

From ancient Greece onwards, and indeed prior to that, human societies have developed the capacity to act upon the world in pursuit of their ends, and to reflect upon their role in doing that. Geography – ‘earth writing’ – a term first used by scholars in Alexandria in the third century BC, is part of that humanistic tradition. From Herodotus mapping the Nile and considering its flow in 450 BC, up to today’s sophisticated Geographical Information Systems, knowledge confers the capacity to act.

How elites act is shaped by their societies and by what they consider to be their political and economic goals. But the knowledge and techniques developed provide the basis for subsequent developments in knowledge, often in quite different societies. Knowledge and technique cross boundaries – the greater the capacity to travel and trade, the greater too the exchange of ideas on map making, agriculture, navigation and much else.

The 15th century explorer Prince Henry the Navigator acted in the interests of the Portuguese crown and instigated the slave trade, but was also a midwife to modern science. He was intrigued by the myth of Prester John, yet he also helped to see off the myths of sea monsters. His discoveries fuelled a questioning of the notion that knowledge came from the external authority of a god, and a growing scientific spirit began to decentre mysticism and religion, a process that was later consolidated in the Enlightenment (Livingstone, 1992). Geographical knowledge – including that you were not going to sail off the end of the world, and that sea monsters are not real – stands as knowledge useful for any society or any individual, irrespective of Portugal’s leading role in the slave trade at this time.

So whilst of course it is important to consider and study the people, the society and the interests involved in the production of knowledge, it is also important to see knowledge’s universal potential. This is something downplayed by the calls to decolonise – knowledge and even technique seem at times to be tainted by the times in which they were developed and by the individuals who did the developing.

Deciding what is the best of this, always a worthy pursuit, may involve re-evaluating contributions from a variety of sources. Involvement in these sources, in the production of knowledge, may be shaped by national or racial oppression, poverty and access to resources, but it has little to do with epistemic oppression (Fricker, 1999).

Take, for example, Charles Darwin’s The Origin of Species (1998, original 1859). Darwin’s research involved all of the features regarded as ‘imperial’ by Connell (2007) and by other advocates of decolonialism: an association with the military (the Beagle was a military ship) and the use of others’ societies for data gathering without their consent or involvement. The voyage was funded by the British state, which was engaged in colonial domination. Geography and scientific voyages were closely linked with imperial ambition (Livingstone, 1992).

Yet Darwin’s theory marked a major breakthrough in the understanding of evolution regardless of this context. As an explorer sponsored by the British imperialist state, and having benefitted from a good education, Darwin as an individual was clearly better placed to make this breakthrough than native inhabitants of Britain’s colonies or the Galapagos Islands – he had ‘privilege’ and he was ‘white’, two terms often used by decolonial activists to qualify or deny the authority of truth claims. Yet The Origin of Species stands regardless of context as a groundbreaking step forward in human understanding.

Darwinism has another link to colonialism. Social Darwinism was to provide the pseudo-scientific justification for the racism that in turn legitimised the imperialist Scramble for Africa and attendant racial extermination (Malik, 1996). Yet the veracity of Darwin’s theory is not diminished by the horrors justified through its bastardisation as Social Darwinism. Contrary to the view key to decolonialism, geographical knowledge can be sound and an advance on previous thinking regardless of the uses and misuses to which it is put. That is in no way to legitimise those uses, but simply to recognise that ideas that have a universal veracity emerge from particular, contradictory and often (especially from the perspective of today) reactionary contexts.

Geographical knowledge can be (mis)understood and (mis)used to further particular politics. Darwin’s ideas received a cool reception amongst those in the American South who believed that God had created wholly separate races with a differential capacity for intellect and reason. In New Zealand the same ideas were welcomed as a basis for an assumed superior group of colonisers taking over from an assumed less evolved, inferior group. This was in the context of struggle between Māori and land-hungry colonialists.

For Marx, Darwinism provided a metaphor for class struggle. For economic liberals, social Darwinism buttressed the notion of laissez-faire free trade. The anarchist geographer Kropotkin advocated small-scale cooperative societies – survival of those who cooperate, since they are best fitted for survival (Livingstone, 1992). So as well as being produced in contexts of power and inequality, knowledge is also mobilised in such contexts.

However Darwin’s theory as the highest expression of human understanding of its time in its field stands regardless of these interpretations and mobilisations, to be accepted or criticised according to reason and scientific evidence alone. Geographical and scientific theory clearly does have the potential to constitute universal knowledge, and its capacity to do so is not limited by the context within which it emerged, or the interests of those who developed it. We cannot decolonise knowledge that is not, itself, colonial.

Decolonialism’s Critique of Enlightenment Universalism

It is clear that the epistemology of decolonialism is based, often explicitly, upon a critique of the Enlightenment and its orientation towards knowledge and truth. Emejulu states this clearly in a piece titled Another University is Possible (2017). She accepts that the Enlightenment viewed all men as endowed with rationality and logic and with inalienable rights, and that human authority was replacing that of the church – all the positive, humanist claims that defenders of the Enlightenment would cite.

However, she questions who is included in ‘Man’ – who counts as human in Enlightenment humanism? How universal is Enlightenment universalism? Who can be part of European modernity? She argues that the restriction of the category of those who are to be free was intrinsic to Enlightenment thought – i.e. it was a Western Enlightenment, not only geographically, but in essence. Knowledge, ideas themselves, can be ‘Eurocentric,’ ‘Western’ or even (increasingly) ‘white’ in the eyes of advocates of decoloniality.

Emejulu quotes Mills from his book The Racial Contract (1999):

The contemporary interpretation of the Enlightenment obscures its exclusion of women, ‘savages’, slaves and indigenous peoples through the prevailing racial science as inherently irrational beings. Savages – or the colonial other: the Native or Aboriginal peoples, the African, the Indian, the slave – were constructed as subhuman, incapable of logical reasoning and thus not subject to the equality or liberty enjoyed by ‘men’. It is here, in the hierarchies of modernity that we can understand the central role of racism in shaping the Enlightenment. The Enlightenment is brought into being by Europe’s colonial entanglements and is wholly dependent on its particular patriarchal relations – which Europe, in turn, imposed on its colonial subjects.

So these authors argue that the Enlightenment established neither universal freedoms, rights and knowledge, nor even the potential for them; rather, it stemmed from particular interests and experiences, and played the role of enforcing the domination of those interests. Humanistic notions of the pursuit of knowledge are considered partial, a false universalist flag raised in the service of Western colonialism.

Matthew Arnold’s 19th century liberal humanist vision of knowledge (in schools) referring to ‘the best which has been thought and said in the world, and, through this knowledge, turning a stream of fresh and free thought upon our stock notions and habits’ (Arnold, 1869: viii) is rejected in favour of a view of knowledge itself as relative to incommensurate diverse human experience. This perspectival view of knowledge is central to the advocacy of decolonialism.

Sundberg (2014: 38), citing Blaser (2009), claims that the concept of the universal is itself ‘inherently colonial’, and can only exist through ‘performances’ that ‘tend to suppress and / or contain the enactment of other possible worlds’. This is a striking rejection of universality. Whilst universal claims can logically undermine different ways of thinking about the world, assuming that this is inherent in universal thinking calls into question geographical thought from any source that aspires to transcend diverse experience and be judged as part of a global geographical conversation across time and space.

Whilst this point is made by Sundberg to deny the wider veracity of Western thinking, logically it would apply to others too – it suggests Southern scholars, too, should not aspire to speak too far outside of their assumed ontological and epistemological identities in search of universal truths.

Saigon Opera House in Ho Chi Minh City.
Image by David McKelvey via Flickr / Creative Commons


In Defence of the Enlightenment Legacy

The view as set out by Emejulu (2017), and implicit or explicit through much of the literature, is both one-sided and a misreading of the Enlightenment. Many Enlightenment thinkers articulated ideas that were new and revolutionary in that they posited two things: first, the centrality of humanity in making the world in which we live (through reason and scientific understanding replacing religious and mystical views of one’s place and possibilities); and second, the possibility and moral desirability of universal freedoms from subjection by others – natural, universal rights applicable to all. Both the study of the world, and the idea that people within the world were equal and free, were central to the Enlightenment (Pagden, 2015; Malik, 2014).

However, these ideas emerged within and through a world of interests, prejudices and limitations. So there is a dialectical relationship: the new ideas that point to the possibility and desirability of human equality and freedom, and the world as it was which, as Emejulu rightly says, was far from free or equal and far from becoming so.

Consider the American Declaration of Independence of 1776 – a document shaped by the new ideas of the Enlightenment, and associated with freedom and rights subsequently. Some of its signatories and drafters, including Thomas Jefferson, were slaveholders or had a stake in the slave trade. Yet the Declaration served as an emblem for opponents of slavery and inequality for the next 200 years.

The most famous clause in the Declaration states: ‘We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness’ (US Congress, 1776). At the time principled abolitionists played on the contradiction between the grand ideas and the practice of men like Jefferson. Some even argued that the clause relating to the ‘right of revolution’ (which was there to justify fighting for independence from the British) could apply to slaves who were not being treated equally.

Martin Luther King referenced the Declaration in his famous ‘I Have A Dream’ speech at the March on Washington for Jobs and Freedom of August 28, 1963: ‘When the architects of our republic wrote the magnificent words of the Constitution and the Declaration of Independence, they were signing a promissory note to which every American was to fall heir. This note was a promise that all men, yes, black men as well as white men, would be guaranteed the unalienable rights of life, liberty, and the pursuit of happiness’ (King, 1991: 217). King’s speech, holding society to account by its own highest, universal moral standards, was in a long and noble tradition.

In the same vein, the French Revolution’s Declaration of the Rights of Man and of the Citizen (1789) also states: ‘All men are born free and with equal rights, and must always remain free and have equal rights.’ The dialectical tension between the ideas that informed the French Revolution and the reality of the society is well illustrated by CLR James in The Black Jacobins (2001, original 1938). James writes of the Haitian revolution, a revolution in revolutionary France’s colony, in which slaves and their leaders took the ideas of the revolutionaries at their word. They directly confronted the limits of the revolution by insisting that its demand for liberty, fraternity and equality be made truly universal and applied to themselves, the slaves in the colonies.

The force of these Enlightenment-influenced universalist conceptions of humanity, central to both Declarations, features throughout the history of anti-colonialism and anti-imperialism. For example, Ho Chi Minh’s Vietnamese Declaration of Independence in 1945 cites both the famous ‘all men are created equal’ clause from the American Declaration, and its equivalent in the French Declaration, to accuse both of these imperialist countries of denying these ‘undeniable truths’ (Ho Chi Minh, 1945). In the Vietnamese Declaration it was assumed that the denial of Enlightenment ideals, not their assertion, characterised colonialism and imperialism. This is reversed in decolonial theory.

Equally, colonialism involved the denial of the fruits of modern geographical knowledge and technique, not an imposition of ‘colonial’ ideas. Just as geographic technique and knowledge developed in the imperialist West no doubt played a dark role in the war in Vietnam – not least cartography in charting bombing missions – so those same tools (or more advanced versions) in mapping, agriculture and much else are utilised today to enable a sovereign Vietnam to look to a better future.

Enlightenment ideas, expressed in the American Declaration of Independence and France’s Declaration of the Rights of Man, were drafted by people complicit in slavery and yet formed a rational and moral basis for equality. The former does not contradict the latter. In similar vein, geographical knowledge was harnessed to oppress, and provided the basis for post-colonial governments to progress. The Declarations were both of their time and transcendent of their time, as is good geographical knowledge. It is in the latter sense that we judge their worth as knowledge to help us understand and act upon the world today.

There is much else to be said about the Enlightenment, of course. It contained great diversity and contradictions. What the Enlightenment scholar Jonathan Israel (2009) terms the Radical Enlightenment consisted of thinkers who pushed at the contradiction between the potential in Enlightenment thought and some of the backward beliefs prevalent amongst their contemporaries. They went well beyond the limiting assumption of humanity characteristic of their time: that some were capable of citizenship rights, and others were not.

Thomas Paine argued against slavery on the grounds that it infringed the universal (natural) right to human freedom. He did not restrict his category of ‘Man’ to western Man. He criticised colonialism too. He argued that Africans were productive, peaceful citizens in their own countries, until the English enslaved them (Paine, 1774). Diderot, Raynal, d’Holbach and others contributed to a 1770 volume titled Histoire Philosophique des Deux Indes (The Philosophical History of the Two Indies). The book asserts that ‘natural liberty is the right which nature has given to everyone to dispose of himself according to his will’. It prophesied and defended the revolutionary overthrow of slavery: ‘The negroes only want a chief, sufficiently courageous to lead them to vengeance and slaughter… Where is the new Spartacus?’ (cited in Malik, 2017).

So Emejulu’s account, and the assumption of decolonialism, are wrong. The issue is not that the Enlightenment is racist and partial, and the intellectual traditions that draw upon its legacy comprise ‘imperial’ or ‘colonial’ knowledge. Rather, the Enlightenment put reason and rationality, scientific method and the potential for liberty and equality at the centre of intellectual and political life. It provided a basis for common, human pursuit of knowledge.

The growth of scientific method associated with the Enlightenment, as an orientation towards knowledge, was not linked to any particular culture or deity, but to universal reason (Malik, 2014). The implication of this is that theories should be judged for their capacity to explain and predict, concepts for their capacity to illuminate and techniques for their efficacy. That they should be judged with consideration for (or even deference towards) the identity, political or social, of their originator, or with regard to context or contemporary use – all key to decolonialism – undermines the pursuit of truth as a universal, human project.

Knowledge, theories and techniques are better seen as having the capacity to transcend place and power. The veracity of a theory, the usefulness of a concept or the efficacy of a technique are remarkably unaffected by their origin and their context. Audre Lorde’s idiom, ‘The master’s tools will never dismantle the master’s house’, invoked by Jazeel (2017: 335) to argue that the traditions of knowledge and rights associated with the West cannot be the basis for the liberation of the non-West, is simply untrue in this context. The anti-colonial and anti-racist movements of the past achieved a massive amount through struggles that explicitly drew upon iconic assertions of the ‘Western’ Enlightenment. There is clearly some way to go.

Concluding Thoughts: Decolonialism and Liberation

To decolonise has been presented as a moral imperative connected to liberation (Jazeel, 2017; Elliot-Cooper, 2017). I think it is better regarded as one approach, premised upon particular political views and assumptions such as critical race theory and the intersectional politics of identity. In its advocacy of an ontological pluriverse and of diverse systems of knowledge, there is one knowledge claim that cannot be allowed – the claim that knowledge, from any source, ultimately, can aspire to be universal. In addition, presenting decolonialism as a moral and political imperative leaves little room for alternatives which become, a priori, immoral.

By contrast, Brenda Wingfield, Vice President of the Academy of Science of South Africa, argues that: ‘What’s really important is that South African teachers, lecturers and professors must develop curricula that build on the best knowledge, skills, values, beliefs and habits from around the world’ (2017) (my italics). She fears that the rhetoric of decolonialism will effectively delink South Africa from science’s cutting edge. She points out that this in turn reduces the opportunity for young black South African scholars to be involved with the most advanced knowledge, whatever its source, and also the opportunity to adapt and utilise that knowledge to address local issues and conditions. In other words, decolonialism could damage the potential for material liberation from poverty, and for promoting a more equal involvement in the global production of knowledge about our shared world.

In the spirit of the Radical Enlightenment, I would argue that the best of geographical knowledge and technique should be made available for the benefit of all, on the terms of the beneficiaries. In judging ‘the best’, origin and context, whilst important and enlightening areas of study in themselves, are secondary.

Academics and universities could certainly more effectively challenge the marginalisation of parts of the world in academic life and the production of geographical knowledge. Suggestions would include: truly reciprocal academic exchanges, funded by Western universities who can better afford it, where budding academics from the South can choose freely from the curriculum around their own priorities; greater joint projects to understand and find solutions to problems as they are defined by Southern governments; and increased funding for twinning with under-resourced universities in the South, with a “no strings attached” undertaking to share knowledge, training and resources as they are demanded by academics based in the South.

In other words, we should prioritise a relationship between knowledge and resources from the best universities in the world (wherever they are located), and the sovereignty of the South.

None of this necessitates the decolonisation of geographical knowledge. Rather, it requires us to think afresh about how the promissory note of the Enlightenment – the ideals of liberty, fraternity and equality (and, I would add, of the potential to understand the world in order to change it) – can be cashed.

Contact details:


Arnold, Matthew. (1869). Culture and anarchy: An essay in political and social criticism. Oxford: Project Gutenberg.

Baldwin, A. (2017) Decolonising geographical knowledges: the incommensurable, the university and democracy. Area, 49, 3, 329-331. DOI:10.1111/area.12374

Blaser, M. (2012). Ontology and indigeneity: on the political ontology of heterogenous assemblages. Cultural Geographies, 21, 1, 7 DOI:10.1177/1474474012462534.

Connell, R. (2007). Southern theory: Social science and the global dynamics of knowledge. London: Polity.

Connell, R. (2017) Decolonising the curriculum. Retrieved from: .

Daigle, M. and Sundberg, J. (2017). From where we stand: unsettling geographical knowledge in the classroom. Transactions, 42, 338-341. DOI: 10.1111/tran.12195

Darwin, C. (1998, original 1859). The origin of species (Classics of world literature). London: Wordsworth.

Elliott-Cooper, A. (2017). ‘Free, decolonised education’: a lesson from the south African student struggle. Area, 49, 3, 332-334. DOI: 10.1111/area.12375

Emejulu, A. (2017). Another university is possible. Verso Books blog. January 12. Retrieved from: .

Esson, J., Noxolo, P., Baxter, R., Daley, P. and Byron, M. (2017). The 2017 RGS-IGB chair’s theme: decolonising geographical knowledges, or reproducing coloniality? Area, 49, 3, 384-388. DOI: 10.1111/area.12371

Fricker, M. (1999) Epistemic oppression and epistemic privilege, Canadian Journal of Philosophy, 29: sup1, 191-210. DOI: 10.1080/00455091.1999.10716836

Gellner, E. (1987). Relativism and the social sciences. Cambridge: Cambridge University Press.

Grosfoguel, R. (2007). The epistemic decolonial turn. Cultural Studies, 21:2-3, 211-223. DOI:10.1080/09502380601162514

Ho Chi Minh. (1945) Declaration of independence, democratic republic of Vietnam. Retrieved from: .

Israel, J. (2009) A revolution of the mind: Radical enlightenment and the Intellectual origins of modern democracy. Princeton University Press.

James, C.L.R. (2001, original 1938). The black Jacobins: Toussaint L’Ouverture and the San Domingo revolution. London: Penguin.

Jazeel, T. (2017). Mainstreaming geography’s decolonial imperative. Transactions, 42, 334-337. DOI: 10.1111/tran.12200

King, Martin Luther. (1991). A testament of hope: The essential writings of Martin Luther King. New York: Harper Collins.

Livingstone, David. N. (1992). The geographical tradition: Episodes in the history of a contested enterprise. London: Wiley

Malik, K. (1996). The meaning of race: Race, history and culture in Western society. London: Palgrave.

Malik, K. (2014). The quest for a moral compass: a global history of ethics. London: Atlantic.

Malik, K. (2017). Are SOAS students right to ‘decolonise’ their minds from western philosophers? The Observer. February 19. Retrieved from: .

Mignolo, W. (2007). Delinking. Cultural Studies, 21,2-3, 449-514. DOI: 10.1080/09502380601162647

Mignolo, W. (2013). On pluriversality. Retrieved from: .

Mills, C.W. (1999). The racial contract. Cornell University Press.

Noxolo, P. (2017). Decolonial theory in a time of the recolonization of UK research. Transactions, 42, 342-344. DOI:10.1111/tran.12202

Pagden, A. (2015). The Enlightenment: And why it still matters. Oxford: Oxford University Press.

Paine, T. (1774). Essay on slavery. In Foot, M. and Kramnick, I. (eds) (1987). Thomas Paine Reader. London: Penguin: 52-56.

Radcliffe, Sarah A. (2017). Decolonising geographical knowledges. Transactions, 42, 329-333. DOI: 10.1111/tran.12195

RGS-IGB (2017). Annual Conference, conference theme. Retrieved from: .

Sundberg, J. (2014). Decolonising posthumanist geographies. Cultural Geographies, 21, 1, 33-47. DOI: 10.1177/1474474013486067

Tolia-Kelly, Divya-P. (2017). A day in the life of a geographer: ‘lone’, black, female. Area, 49, 3, 324-328. DOI:10.1111/area.12373

US Congress (1776). The American Declaration of Independence. Retrieved from: .

Wingfield, B. (2017) What “decolonised education” should and shouldn’t mean. The Conversation. February 14. Retrieved from: .

Author Information: Steve Fuller, University of Warwick,

Fuller, Steve. “‘China’ As the West’s Other in World Philosophy.” Social Epistemology Review and Reply Collective 7, no. 11 (2018): 1-11.

The pdf of the article gives specific page references. Shortlink:

A man practices Taijiquan at the Kongzi Temple in Nanjing.
Image by Slices of Light via Flickr / Creative Commons


This essay was previously published in the Journal of World Philosophy, their Summer 2018 issue.

Bryan Van Norden’s Taking Back Philosophy: A Multicultural Manifesto draws on his expertise in Chinese philosophy to launch a comprehensive and often scathing critique of contemporary Anglo-American philosophy. I focus on the sense in which “China” figures as a “non-Western culture” in Van Norden’s argument. Here I identify an equivocation between what I call a “functional” and a “substantive” account of culture.

I argue that Van Norden, like perhaps most others who have discussed Chinese philosophy, presupposes a “functional” conception, whereby the relevant sense in which “China” matters is exactly as “non-Western,” which ends up incorporating some exogenous influences such as Indian Buddhism but not any of the Western philosophies that made major inroads in the twentieth century. I explore the implications of the functional/substantive distinction for the understanding of cross-cultural philosophy generally.

Dragging the West Into the World

I first ran across Bryan Van Norden’s understanding of philosophy from a very provocative piece entitled “Why the Western Philosophical Canon Is Xenophobic and Racist,”[1] which trailed the book now under review. I was especially eager to review it because I had recently participated in a symposium in the Journal of World Philosophies that discussed Chinese philosophy—Van Norden’s own area of expertise—as a basis for launching a general understanding of world philosophy.[2]

However, as it turns out, most of the book is preoccupied with various denigrations of philosophy in contemporary America, from both inside and outside the discipline. The only thing I will say about this aspect of the book is that, even granting the legitimacy of Van Norden’s complaints, I don’t think that arguments around some “ontological” conception of what philosophy “really is” will resolve the matter because these can always be dismissed as self-serving and question-begging.

What could make a difference is showing that a broader philosophical palette would actually make philosophy graduates more employable in an increasingly globalized world. Those like Van Norden who oppose the “Anglo-analytic hegemony” in contemporary philosophy need to argue explicitly that it results in philosophy punching below its weight in terms of potential impact. That philosophy departments of the most analytic sort continue to survive and even flourish, and that their students continue to be employed, should be presented as setting a very low standard of achievement.

After all, philosophy departments tend to recruit students with better than average qualifications, while the costs for maintaining those departments remain relatively low. In contrast, another recent book that raises similar concerns to Van Norden’s, Socrates Tenured (Frodeman and Briggle 2016),[3] is more successful in pointing to extramural strategies for philosophy to pursue a more ambitious vision of general societal relevance.

Challenging How We Understand Culture Itself

But at its best, Taking Back Philosophy forces us to ask: what exactly does “culture” mean in “multicultural” or “cross-cultural” philosophy? For Van Norden, the culture he calls “China” is the exemplar of a non-Western philosophical culture. It refers primarily—if not exclusively—to those strands of Chinese thought associated with its ancient traditions. To be sure, this arguably covers everything that Chinese scholars and intellectuals wrote about prior to the late nineteenth century, when Western ideas started to be regularly discussed. This would then suggest that “China” refers to the totality of its indigenous thought and culture.

But this is not quite right, since Van Norden certainly includes the various intellectually productive engagements that Buddhism as an alien (Indian) philosophy has had with the native Confucian and especially Daoist world-views. Yet he does not seem to want to include the twentieth-century encounters between Confucianism and, say, European liberalism and American pragmatism in the Republican period or Marxism in the Communist period. Here he differs from Leigh Jenco (2010),[4] who draws on the Republican Chinese encounter with various Western philosophies to ground a more general cross-cultural understanding of philosophy.

It would appear that Van Norden is operating with a functional rather than substantive conception of “China” as a philosophical culture. In other words, he is less concerned with all the philosophy that has happened within China than with simply the philosophy in China that makes it “non-Western.” Now some may conclude that this makes Van Norden as ethnocentric as the philosophers he criticizes.

I am happy to let readers judge for themselves on that score. However, functional conceptions of culture are quite pervasive, especially in the worlds of politics and business, whereby culture is treated as a strategic resource to provide a geographic region with what the classical political economist David Ricardo famously called “comparative advantage” in trade.

But equally, Benedict Anderson’s (1983) influential account of nationalism as the construction of “imagined communities” in the context of extricating local collective identities from otherwise homogenizing imperial tendencies would fall in this category. Basically, your culture is what you do that nobody else does—or at least does not do as well as you. However, your culture is not the totality of all that you do, perhaps not even what you do most of the time.

To be sure, this is not the classical anthropological conception of culture, which is “substantive” in the sense of providing a systematic inventory of what people living in a given region actually think and do, regardless of any overlap with what others outside the culture think and do. Indeed, anthropologists in the nineteenth and most of the twentieth centuries expected that most of the items in the inventory would come from the outside, the so-called doctrine of “diffusionism.”

Thus, they have tended to stress the idiosyncratic mix of elements that go into the formation of any culture over any dominant principle. This helps explain why nowadays every culture seems to be depicted as a “hybrid.” I would include Jenco’s conception of Chinese culture in this “substantive” conception.

However, what distinguished, say, Victorians like Edward Tylor from today’s “hybrid anthropologists” was that the overlap of elements across cultures was used by the former as a basis for cross-cultural comparisons, albeit often to the detriment of the non-Western cultures involved. This fuelled ambitions that anthropology could be made into a “science” sporting general laws of progress, etc.

My point here is not to replay the history of the struggle for anthropology’s soul, which continues to this day, but simply to highlight a common assumption of the contesting parties—namely, that a “culture” is defined exclusively in terms of matters happening inside a given geographical region, in which case things happening outside the region must be somehow represented inside the region in order to count as part of a given culture. In contrast, the “functional” conception defines “culture” in purely relational terms, perhaps even with primary reference to what is presumed to lie outside a given culture.

Matters of Substance and Function

Both the substantive and the functional conception derive from the modern core understanding of culture, as articulated by Johann Gottfried Herder and the German Idealists, which assumed that each culture possesses an “essence” or “spirit.” On the substantive conception, which was Herder’s own, each culture is distinguished by virtue of having come from a given region, as per the etymological root of “culture” in “agriculture.” In that sense, a culture’s “essence” or “spirit” is like a seed that can develop in various ways depending on the soil in which it is planted.

Indeed, Herder’s teacher, Kant, had already used the German Keime (“seeds”) in a book of lectures whose title is often credited with having coined “anthropology” (Wilson 2014).[5] This is the sense of culture that morphs into racialist ideologies. While such racialism can be found in Kant, it is worth stressing that his conception of race does not depend on the sense of genetic fixity that would become the hallmark of twentieth-century “scientific racism.” Rather, Kant appeared to treat “race” as a diagnostic category for environments that hold people back, to varying degrees, from realizing humanity’s full potential.

Here Kant was probably influenced by the Biblical dispersal of humanity, first with Adam’s Fall and then the Noachian flood, which implied that the very presence of different races or cultures marks our species’ decline from its common divine source. Put another way, Kant was committed to what Lamarck called the “inheritance of acquired traits,” though Lamarck lacked Kant’s Biblical declinist backdrop. Nevertheless, they agreed that a sustainably radical change to the environment could decisively change the character of its inhabitants. This marks them both as heirs to the Enlightenment.

To be sure, this reading of Kant is unlikely to assuage either today’s racists or, for that matter, anti-racists or multiculturalists, since it doesn’t assume that the preservation of racial or cultural identity possesses intrinsic (positive or negative) value. In this respect, Kant’s musings on race should be regarded as “merely historical,” based on his fallible second-hand knowledge of how peoples in different parts of the world have conducted their lives.

In fact, the only sense of difference that the German Idealists unequivocally valued was self-individuation, which is ultimately tied to the functional conception of culture, whereby my identity is directly tied to my difference from you. It follows that the boundaries of culture—or the self, for that matter—are moveable feasts. In effect, as your identity changes, mine does as well—and vice versa.

Justifying a New World Order

This is the metaphysics underwriting imperialism’s original liberal capitalist self-understanding as a global free-trade zone. In its ideal form, independent nation-states would generate worldwide prosperity by continually reorienting themselves to each other in response to market pressures. Even if the physical boundaries between nation-states do not change, their relationship to each other would, through the spontaneous generation and diffusion of innovations.

The result would be an ever-changing global division of labor. Of course, imperialism in practice fostered a much more rigid—even racialized—division of labor, as Marxists from Lenin onward decried. Those who nevertheless remain hopeful in the post-imperial era that the matter can ultimately be resolved diagnose the problem as one of “uneven development,” a phrase that leaves a sour aftertaste in the mouths of “post-colonialists.”

But more generally, “functionalism” as a movement in twentieth-century anthropology and sociology tended towards a relatively static vision of social order. And perhaps something similar could be said about Van Norden’s stereotyping of “China.” However, he would hardly be alone. In his magisterial The Sociology of Philosophies: A Global Theory of Intellectual Change, a book which Van Norden does not mention, Randall Collins (1998)[6] adopts a similarly functionalist stance. There it leads to a quite striking result, one with interesting social epistemological consequences.

Although Collins incorporates virtually every thinker that Chinese philosophy experts normally talk about, carefully identifying their doctrinal nuances and scholastic lineages, he ends his treatment of China at the historical moment that happens to coincide with what he marks as a sea change in the fortunes of Western philosophy, which occurs in Europe’s early modern period.

I put the point this way because Collins scrupulously avoids making any of the sorts of ethnocentric judgements that Van Norden rightly castigates throughout his book, whereby China is seen as un- or pre-philosophical. However, there is a difference in attitude to philosophy that emerges in Europe, less in terms of philosophy’s overall purpose than its modus operandi. Collins calls it rapid discovery science.

Rapid discovery science is the idea that standardization in the expression and validation of knowledge claims—both quantitatively and qualitatively—expedites the ascent to higher levels of abstraction and reflexivity by making it easier to record and reproduce contributions in the ongoing discourse. Collins means here not only the rise of mathematical notation to calculate and measure, but also “technical languages,” the mastery of which became the mark of “expertise” in a sense more associated with domain competence than with “wisdom.” In the latter case, the evolution of “peer review” out of the editorial regimentation of scientific correspondence in the early journals played a decisive role (Bazerman 1987).[7]

Citation conventions, from footnotes to bibliographies, were further efficiency measures. Collins rightly stresses the long-term role of universities in institutionalizing these innovations, but of more immediate import was the greater interconnectivity within Europe that was afforded by the printing press and an improved postal system. The overall result, so I believe, was that collective intellectual memory was consolidated to such an extent that intellectual texts could be treated as capital, something to both build upon and radically redeploy—once one has received the right training to access them. These correspond to the phases that Thomas Kuhn called “normal” and “revolutionary” science, respectively.

To be sure, Collins realizes that China had its own stretches in which competing philosophical schools pursued higher levels of abstraction and reflexivity, sometimes with impressive results. But these were maintained solely by the emotional energy of the participants who often dealt with each other directly. Once external events dispersed that energy, then the successors had to go back to a discursive “ground zero” of referring to original texts and reinventing arguments.

Can There Be More Than One Zero Point?

Of course, the West has not been immune to this dynamic. Indeed, it has even been romanticized. A popular conception of philosophy that continues to flourish at the undergraduate level is that there can be no genuine escape from origins, no genuine sense of progress. It is here that Alfred North Whitehead’s remark that all philosophy is footnotes to Plato gets taken a bit too seriously.

In any case, Collins’ rapid discovery science was specifically designed to escape just this situation, which Christian Europe had interpreted as the result of humanity’s fallen state, a product of Adam’s “Original Sin.” This insight figured centrally in the Augustinian theology that gradually—especially after the existential challenge that Islam posed to Christendom in the thirteenth century—began to color how Christians viewed their relationship to God, the source of all knowing and being. The Protestant Reformation marked a high watermark in this turn of thought, which became the crucible in which rapid discovery science was forged in the seventeenth century. Since the 1930s, this period has been called the “Scientific Revolution” (Harrison 2007).[8]

In the wake of the Protestant Reformation, all appeals to authority potentially became not sources of wisdom but objects of suspicion. They had to undergo severe scrutiny of a kind often characterized at the time as “trials of faith.” Francis Bacon, the personal lawyer to England’s King James I, is a pivotal figure because he clearly saw continuity from the Inquisition in Catholic Europe (which he admired, even though it ensnared his intellectual ally Galileo), through the “witch trials” pursued by his fellow Protestants on both sides of the Atlantic, to his own innovation—the “crucial experiment”—which would be subsequently enshrined as the hallmark of the scientific method, most energetically by Karl Popper.

Bacon famously developed his own “hermeneutic of suspicion” as proscriptions against what he called “idols of the mind,” that is, lazy habits of thought that are born of too much reliance on authority, tradition, and surface appearances generally. For Bacon and his fellow early modern Christians, including such Catholics as Rene Descartes, these habits bore the mark of Original Sin because they traded on animal passions—and the whole point of the human project is to rise above our fallen animal natures to recover our divine birthright.

The cultural specificity of this point is often lost, even on Westerners for whom the original theological backdrop seems no longer compelling. What is cross-culturally striking about the radical critique of authority posed by the likes of Bacon and Descartes is that it did not descend into skepticism, even though—especially in the case of Descartes—the skeptical challenge was explicitly confronted. What provided the stopgap was faith, specifically in the idea that once we recognize our fallen nature, redemption becomes possible by finding a clearing on which to build truly secure foundations for knowledge and thereby to redeem the human condition, God willing.

For Descartes, this was “cogito ergo sum.” To be sure, the “God willing” clause, which was based on the doctrine of Divine Grace, became attenuated in the eighteenth century as “Providence” and then historicized as “Progress,” finally disappearing altogether with the rising tide of secularism in the nineteenth century (Löwith 1949; Fuller 2010: chap. 8).[9]

But its legacy was a peculiar turn of mind that continually seeks a clearing to chart a path to the source of all meaning, be it called “God” or “Truth.” This is what makes three otherwise quite temperamentally different philosophers—Husserl, Wittgenstein, and Heidegger—equally followers in Descartes’ footsteps. They all prioritized clearing a space from which to proceed over getting clear about the end state of the process.

Thus, the branches of modern Western philosophy concerned with knowledge—epistemology and the philosophy of science—have been focused more on methodology than axiology, that is, the means rather than the ends of knowledge. While this sense of detachment resonates with, say, the Buddhist disciplined abandonment of our default settings to become open to a higher state of being, the intellectual infrastructure provided by rapid discovery science allows for an archive to be generated that can be extended and reflected upon indefinitely by successive inquirers.

Common Themes Across Continents

A good way to see this point is that in principle the Buddhist and, for that matter, the Socratic quest for ultimate being could be achieved in one’s own lifetime with sufficient dedication, which includes taking seriously the inevitability of one’s own physical death. In contrast, the modern Western quest for knowledge—as exemplified by science—is understood as a potentially endless intergenerational journey in which today’s scientists effectively lead vicarious lives for the sake of how their successors will regard them.

Indeed, this is perhaps the core ethic promoted in Max Weber’s famous “Science as a Vocation” lecture (Fuller 2015: chap. 3).[10] Death as such enters, not to remind scientists that they must eventually end their inquiries but that whatever they will have achieved by the end of their lives will help pave the way for others to follow.

Heidegger appears as such a “deep” philosopher in the West because he questioned the metaphysical sustainability of the intellectual infrastructure of rapid discovery science, which the Weberian way of death presupposes. Here we need to recall that Heidegger’s popular reception was originally mediated by the postwar Existentialist movement, which was fixated on the paradoxes of the human condition thrown up by Hiroshima, whereby the most advanced science managed to end the biggest war in history by producing a weapon with the greatest chance of destroying humanity altogether in the future. Not surprisingly, Heidegger has proved a convenient vehicle for Westerners to discover Buddhism.

Early Outreach? Or Appropriation?

Finally, it is telling that the Western philosopher whom Van Norden credits with holding China in high esteem, Leibniz, himself had a functional understanding of China. To be sure, Leibniz was duly impressed by China’s long track record of imperial rule at the political, economic, and cultural levels, all of which were the envy of Europe. But Leibniz homed in on one feature of Chinese culture—what he took to be its “ideographic” script—which he believed could provide the intellectual infrastructure for a global project of organizing and codifying all knowledge so as to expedite its progress.

This was where he thought China had a decisive “comparative advantage” over the West. Clearly Leibniz was a devotee of rapid discovery science, and his project—shared by many contemporaries across Europe—would be pursued again to much greater effect two hundred years later by Paul Otlet, the founder of modern library and information science, and Otto Neurath, a founding member of the logical positivist movement.

While the Chinese regarded their written characters as simply a medium for people in a far-flung empire to communicate easily with each other, Leibniz saw in them the potential for collaboration on a universal scale, given that each character amounted to a picture of an abstraction, the metaphorical rendered literal, a message that was not simply conveyed but embedded in the medium. It seemed to satisfy the classical idea of nous, or “intellectual intuition,” as a kind of perception, which survives in the phrase, “seeing with the mind’s eye.”

However, the Chinese refused to take Leibniz’s bait, which led him to begin a train of thought that culminated in the so-called Needham Thesis, which turns on why Earth’s most advanced civilization, China, failed to have a “Scientific Revolution” (Needham 1969; Fuller 1997: chap. 5).[11] Whereas Leibniz was quick to relate Chinese unreceptiveness to his proposal to their polite but firm rejection of the solicitations of Christian missionaries, Joseph Needham, a committed Marxist, pointed to the formal elements of the distinctive cosmology promoted by the Abrahamic religions, especially Christianity, that China lacked—while stopping short of labelling the Chinese “heathens.”

An interesting feature of Leibniz’s modus operandi is that he saw cross-cultural encounters as continuous with commerce (Perkins 2004).[12]  No doubt his conception was influenced by living at a time when the only way a European could get a message to China was through traders and missionaries, who typically travelled together. But he also clearly imagined the resulting exchange as a negotiation in which each side could persuade the other to shift their default positions to potential mutual benefit.

This mentality would come to be crucial to the dynamism of capitalist political economy, on which Ricardo’s theory of comparative advantage was based. However, the Chinese responded to their European counterparts with hospitality but only selective engagement with their various intellectual and material wares, implying an unwillingness to be fluid with what I earlier called “self-individuation.”

Consequently, Europeans only came to properly understand Chinese characters in the mid-nineteenth century, by which time they were treated as a cultural idiosyncrasy, not a platform for pursuing universal knowledge. That world-historic moment for productive engagement had passed—for reasons that Marxist political economy adequately explains—and all subsequent attempts at a “universal language of thought” have been based on Indo-European languages and Western mathematical notation.

China is not part of this story at all, and continues to suffer from that fact, notwithstanding its steady ascendancy on the world stage over the past century. How this particular matter is remedied should focus minds interested in a productive future for cross-cultural philosophy and multiculturalism more generally. But depending on what we take the exact problem to be, the burden of credit and blame across cultures will be apportioned accordingly.

Based on the narrative that I have told here, I am inclined to conclude that the Chinese underestimated just how seriously Europeans like Leibniz took their own ideas. This in turn raises some rather deep questions about the role that a shift in the balance of plausibility away from “seeing with one’s own eyes” and towards “seeing with the mind’s eye” has played in the West’s ascendancy.


I began this piece by distinguishing a “substantive” and a “functional” approach to culture because even theorists as culturally sensitive as Van Norden and Collins adopt a “functional” rather than a “substantive” approach. They defend and elaborate China as a philosophical culture in purely relational terms, based on its “non-Western” character.

This leads them to include, say, Chinese Buddhism but not Chinese Republicanism or Chinese Communism—even though the first is no less exogenous than the second two to “China,” understood as the land mass on which Chinese culture has been built over several millennia. Of course, this is not to take away from Van Norden’s or Collins’ achievements in reminding us of the continued relevance of Chinese philosophical culture.

Yet theirs remains a strategically limited conception designed mainly to advance an argument about Western philosophy. Here Collins follows the path laid down by Leibniz and Needham, whereas Van Norden takes that argument and flips it against the West—or, rather, contemporary Western philosophy. The result in both cases is that “China” is instrumentalized for essentially Western purposes.

I have no problem whatsoever with this approach (which is my own), as long as one is fully aware of its conceptual implications, which I’m not sure that Van Norden is. For example, he may think that his understanding of Chinese philosophical culture is “purer” than, say, Leigh Jenco’s, which focuses on a period with significant Western influence. However, this is “purity” only in the sense of an “ideal type” of the sort the German Idealists would have recognized as a functionally differentiated category within an overarching system.

In Van Norden’s case, that system is governed by the West/non-West binary. Thus, for him there are various ways to be “Western” and various ways to be “non-Western,” though he is not sufficiently explicit about this logic. The alternative conceptual strategy would be to adopt a “substantive” approach to China that takes seriously everything that happens within its physical borders, regardless of origin. The result would be the more diffuse, laundry-list approach to culture that was championed by the classical anthropologists, for which “hybrid” is now the politically correct term.

To be sure, this approach is not without its own difficulties, ranging from a desire to return to origins (“racialism”) to forced comparisons between innovator and adopter cultures. But whichever way one goes on this matter, “China” remains a contested concept in the context of world philosophy.

Contact details:


Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. London: Verso, 1983.

Bazerman, Charles. Shaping Written Knowledge. Madison WI: University of Wisconsin Press, 1987.

Collins, Randall. The Sociology of Philosophies: A Global Theory of Intellectual Change. Cambridge MA: Harvard University Press, 1998.

Frodeman, Robert; Adam Briggle. Socrates Tenured. Lanham MD: Rowman and Littlefield, 2016.

Fuller, Steve. Science: Concepts in the Social Sciences. Milton Keynes UK: Open University Press, 1997.

Fuller, Steve. Science: The Art of Living. Durham UK: Acumen, 2010.

Fuller, Steve. Knowledge: The Philosophical Quest in History. London: Routledge, 2015.

Harrison, Peter. The Fall of Man and the Foundations of Science. Cambridge UK: Cambridge University Press, 2007.

Jenco, Leigh. Making the Political: Founding and Action in the Political Theory of Zhang Shizhao. Cambridge UK: Cambridge University Press, 2010.

Jenco, Leigh; Steve Fuller, David Haekwon Kim, Thaddeus Metz, and Miljana Milojevic, “Symposium: Are Certain Knowledge Frameworks More Congenial to the Aims of Cross-Cultural Philosophy?” Journal of World Philosophies 2, no. 2 (2017): 82-145.

Löwith, Karl. Meaning in History: The Theological Implications of Philosophy of History. Chicago: University of Chicago Press, 1949.

Needham, Joseph. The Grand Titration: Science and Society in East and West. London: George Allen and Unwin, 1969.

Perkins, Franklin. Leibniz and China: A Commerce of Light. Cambridge UK: Cambridge University Press, 2004.

Van Norden, Bryan. Taking Back Philosophy: A Multicultural Manifesto. New York: Columbia University Press, 2017.

Wilson, Catherine. “Kant on Civilization, Culture and Moralization,” in Kant’s Lectures on Anthropology: A Critical Guide. Edited by A. Cohen. Cambridge UK: Cambridge University Press, 2014: 191-210.

[1] Bryan Van Norden, “Western Philosophy is Racist,” (; last accessed on May 10, 2018).

[2] See: Leigh Jenco, Steve Fuller, David Haekwon Kim, Thaddeus Metz, and Miljana Milojevic, “Symposium: Are Certain Knowledge Frameworks More Congenial to the Aims of Cross-Cultural Philosophy?” Journal of World Philosophies 2, no. 2 (2017): 82-145 (; last accessed on May 10, 2018).

[3] Robert Frodeman, and Adam Briggle, Socrates Tenured (Lanham MD: Rowman and Littlefield, 2016).

[4] Leigh Jenco, Making the Political: Founding and Action in the Political Theory of Zhang Shizhao (Cambridge UK: Cambridge University Press, 2010).

[5] Catherine Wilson, “Kant on Civilization, Culture and Moralization,” in Kant’s Lectures on Anthropology: A Critical Guide, ed. A. Cohen (Cambridge UK: Cambridge University Press, 2014), 191-210.

[6] Randall Collins, The Sociology of Philosophies: A Global Theory of Intellectual Change (Cambridge MA: Harvard University Press, 1998).

[7] Charles Bazerman, Shaping Written Knowledge (Madison WI: University of Wisconsin Press, 1987).

[8] Peter Harrison, The Fall of Man and the Foundations of Science (Cambridge UK: Cambridge University Press, 2007).

[9] Karl Löwith, Meaning in History: The Theological Implications of Philosophy of History (Chicago: University of Chicago Press, 1949); Steve Fuller, Science: The Art of Living (Durham UK: Acumen, 2010).

[10] Steve Fuller, Knowledge: The Philosophical Quest in History (London: Routledge, 2015).

[11] Joseph Needham, The Grand Titration: Science and Society in East and West (London: George Allen and Unwin, 1969); Steve Fuller, Science: Concepts in the Social Sciences (Milton Keynes UK: Open University Press, 1997).

[12] Franklin Perkins, Leibniz and China: A Commerce of Light (Cambridge UK: Cambridge University Press, 2004).

Author Information: Joshua Earle, Virginia Tech,

Earle, Joshua. “Deleting the Instrument Clause: Technology as Praxis.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 59-62.

The pdf of the article gives specific page references. Shortlink:

Image by Tambako the Jaguar via Flickr / Creative Commons


Damien Williams, in his review of Dr. Ashley Shew’s new book Animal Constructions and Technical Knowledge (2017), foregrounds in his title what is probably the most important thesis in Shew’s work: namely, that in our definition of technology we focus too much on the human, and in doing so we miss a lot of things that should count as technological use and knowledge. Williams calls this “Deleting the Human Clause” (Williams, 2018).

I agree with Shew (and Williams), for all the reasons they state (and potentially some more as well), but I think we ought to go further. I believe we should also delete the instrument clause.

Beginning With Definitions

There are two sets of definitions that I want to work with here. One is the set of definitions argued over by philosophers (and referenced by both Shew and Williams). The other is a more generic, “common-sense” definition that sits, mostly unexamined, in the back of our minds. Both generally invoke both the human clause (obviously with the exception of Shew) and the instrument clause.

Taking the “common-sense” definition first, we, generally speaking, think of technology as the things that humans make and use. The computer on which I write this article, and on which you, ostensibly, read it, is a technology. So is the book, or the airplane, or the hammer. In fact, the more advanced the object is, the more technological it is. So while the hammer might be a technology, it generally gets relegated to a mere “tool” while the computer or the airplane seems to be more than “just” a tool, and becomes more purely technological.

Peeling apart the layers therein would be interesting, but it is beyond the scope of this article; you get the idea. Our technologies are what give us functionalities we might not have otherwise. The more functionalities an object gives us, the more technological it is.

The academic definitions of technology are a bit more abstract. Joe Pitt calls technology “humanity at work,” foregrounding the production of artefacts and the iteration of old into new (2000, pg 11). Georges Canguilhem called technology “the extension of human faculties” (2009, pg 94). Philip Brey, referencing Canguilhem (but also Marshall McLuhan, Ernst Kapp, and David Rothenberg) takes this definition up as well, but extending it to include not just action, but intent, and refining some various ways of considering extension and what counts as a technical artefact (sometimes, like Soylent Green, it’s people) (Brey, 2000).

Both the common-sense and the academic definitions of technology use the human clause, which Shew troubles. But even if we alter instances of “human” to “human or non-human agents,” there is still something that chafes. What if we think about things that do work for us in the world but are not reliant on artefacts or tools: are those things still technology?

While each definition focuses on objects, none talks about what form those objects must take, or what function they must perform, in order to count as technologies. Brey, hewing close to Heidegger, even talks about how using people as objects, as means to an end, would put them within the definition of technology (Ibid, pg. 12). But this also puts people in problematic power arrangements and elides the agency of the people being used toward an end. It also raises the question: can we use ourselves to an end? Does that make us our own technology?

This may be the ultimate danger that Heidegger warned us about, but I think it’s a category mistake. Instead of objectifying agents into technical objects, if, instead we look at the exercise of agency itself as what is key to the definition of technology, things shift. Technology no longer becomes about the objects, but about the actions, and how those actions affect the world. Technology becomes praxis.

Technology as Action

Let’s think through some liminal cases that first inspired this line of thought: language and agriculture. It is certainly arguable whether either of these fits any definition of technology other than mine (praxis). Don Ihde would definitely disagree with me, as he explicitly states that one needs a tool or an instrument for technology, though he hews close to my definition in other ways (Ihde, 2012; 2018). If Pitt’s definition, “humanity at work,” holds, then agriculture is indeed a technology . . . even without the various artifactual apparati that normally surround it.

Agriculture can be done entirely by hand, without any tools whatsoever; it is iterative and produces a tangible output: food, in greater quantity/efficiency than would normally exist. By Brey’s and Canguilhem’s definitions, it should fit as well, as agriculture extends our intent (for greater amounts of food more locally available) into action and the production of something not otherwise existing in nature. Agriculture is basically (and I’m being too cute by half with this, I know) the intensification of nature. It is, in essence, moving things rather than creating or building them.

Language is a slightly harder case, but one I want to explicitly include in my definition; I would also say it fits Pitt’s and Brey’s definitions, IF we delete or ignore the instrument clause. While language does not produce any tangible artefacts directly (one might point to the book or the written word, but most languages have never been written at all), it is the single most fundamental way in which we extend our intent into the world.

It is work, it moves people and things, it is constantly iterative. It is often the very first thing that is used when attempting to affect the world, and the only way by which more than one agent is able to cooperate on any task (I am using the broadest possible definition of language, here). Language could be argued to be the technology by which culture itself is made possible.

There is another way in which focusing on the artefact or the tool or the instrument is problematic. Allow me to illustrate with a favorite philosophical example: the hammer. A question: is a hammer that is built but never used technology?[1] If it is, then all of the definitions above no longer hold. An unused hammer is not “at work,” nor does it iterate, as Pitt’s definition requires. An unused hammer extends nothing, contra Canguilhem and Brey, unless we count the potential for use, the potential for extension.

But if we do, what potential uses count and which do not? A stick used by an ape (or a person, I suppose) to tease out some tasty termites from their dirt-mound home is, I would argue (and so does Shew), a technological use of a tool. But is the stick, before it is picked up by the ape, or after it is discarded, still a technology or a tool? It always already had the potential to be used, and can be used again after it is discarded. But such a definition requires that we count anything and everything as technology, which renders the definition meaningless. So the potential for use cannot be enough to make something technology.

Perhaps instead the unused hammer is just a tool? But again, the stick example renders this definition of “tool” meaningless. Only while in use can we consider a hammer a tool. Certainly the hammer, even unused, is an artefact. The being of an artefact is not reliant on use, merely on being fashioned by an external agent. Thus if we can imagine actions without artefacts that count as technology, and artefacts that do not count as technology, then including artefacts in one’s definition of technology seems logically unsound.

Theory of Technology

I believe we should separate our terms: tool, instrument, artefact, and technology. Too often these get conflated. Central, to me, is the idea that technology is an active thing, a production. Via Pitt, technology requires/consists in work. Via Canguilhem and Brey, it is extension. Both of these are verbs: “work” and “extend.” Techné, the root of the word technology, is about craft, making and doing; it is about action and intent.

It is about bringing-forth, or poiesis (à la Heidegger, 2003; Haraway, 2016). To this end, I propose that we define “technology” as praxis, as the mechanisms or techniques used to address problems. “Tools” are artefacts in use, toward the realizing of technological ends. “Instruments” are specific arrangements of artefacts and tools used to bring about particular effects, particularly inscriptions which signify or make meaning of the artefacts’ work (à la Latour, 1987; Barad, 2007).

One critique I can foresee is that almost any action taken could thus be considered technology. Eating, by itself, could be considered a mechanism by which the problem of hunger is addressed. I answer this by maintaining that there must be at least one step between the problem and the solution. There needs to be the putting together of theory (not just desire, but a plan) and action.

So, while I do not consider eating, in and of itself, (a) technology, producing a meal — via gathering, cooking, hunting, or otherwise — would be. This opens up some non-human uses of technology that even Shew didn’t consider, like a wolf pack’s coordinated hunting, or dolphins’ various clever ways of getting rewards from their handlers.

So, does treating technology as praxis help? Does extracting the confounding definitions of artefact, tool, and instrument from the definition of technology help? Does this definition include too many things, and thus lose meaning and usefulness? I posit this definition as a provocation, and I look forward to any discussion the readers of SERRC might have.

Contact details:


Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Brey, P. (2000). Theories of Technology as Extension of Human Faculties. Metaphysics, Epistemology, and Technology. Research in Philosophy and Technology, 19, 1–20.

Canguilhem, G. (2009). Knowledge of Life. Fordham University Press.

Haraway, D. J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press.

Heidegger, M. (2003). The Question Concerning Technology. In D. Kaplan (Ed.), Readings in the Philosophy of Technology. Rowan & Littlefield.

Ihde, D. (2012). Technics and praxis: A philosophy of technology (Vol. 24). Springer Science & Business Media.

Ihde, D., & Malafouris, L. (2018). Homo faber Revisited: Postphenomenology and Material Engagement Theory. Philosophy & Technology, 1–20.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard university press.

Pitt, J. C. (2000). Thinking about Technology. Seven Bridges Press.

Shew, A. (2017). Animal Constructions and Technological Knowledge. Lexington Books.

Williams, D. (2018). “Deleting the Human Clause: A Review of Ashley Shew’s Animal Constructions and Technological Knowledge.” Social Epistemology Review and Reply Collective 7, no. 2: 42-44.

[1] This is the philosophical version of “For sale: Baby shoes. Never worn.”

Author Information: Kamili Posey, Kingsborough College,

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-15.

Kamili Posey’s article was posted over two instalments. You can read the first here, but the pdf of the article includes the entire piece, and gives specific page references. Shortlink:

Image by Rigoberto Garcia via Flickr / Creative Commons


In the previous piece, I outlined some concerns with philosophers, and particularly philosophers of social science, assuming the success of implicit interventions into implicit bias. Motivated by a pointed note by Jennifer Saul (2017), I aimed to briefly go through some of the models lauded as offering successful interventions and, in essence, “get out of the armchair.”

(IAT) Models and Egalitarian Goal Models

In this final piece, I go through the last two models, Glaser and Knowles’ (2007) and Blair et al.’s (2001) (IAT) models and Moskowitz and Li’s (2011) egalitarian goal model. I reiterate that this is not an exhaustive analysis of such models nor is it intended as a criticism of experiments pertaining to implicit bias. Mostly, I am concerned that the science is interesting but that the scientism – the application of tentative results to philosophical projects – is less so. It is from this point that I proceed.

Like Mendoza et al.’s (2010) implementation intentions, Glaser and Knowles’ (2007) (IMCP) aims to capture implicit motivations that are capable of inhibiting automatic stereotype activation. Glaser and Knowles measure (IMCP) in terms of an implicit negative attitude toward prejudice, or (NAP), and an implicit belief that oneself is prejudiced, or (BOP). This is done by retooling the (IAT) to fit both (NAP) and (BOP): “To measure NAP we constructed an IAT that pairs the categories ‘prejudice’ and ‘tolerance’ with the categories ‘bad’ and ‘good.’ BOP was assessed with an IAT pairing ‘prejudiced’ and ‘tolerant’ with ‘me’ and ‘not me.’”[1]

Study participants were then administered the Shooter Task, the (IMCP) measures, and the Race Prejudice (IAT) and Race-Weapons Stereotype (RWS) tests in a fixed order. They predicted that (IMCP) as an implicit goal for those high in (IMCP) “should be able to short-circuit the effect of implicit anti-Black stereotypes on automatic anti-Black behavior.”[2] The results seemed to suggest that this was the case. Glaser and Knowles found that study participants who viewed prejudice as particularly bad “[showed] no relationship between implicit stereotypes and spontaneous behavior.”[3]

There are a few considerations missing from the evaluation of the study results. First, with regard to the Shooter Task, Glaser and Knowles (2007) found that “the interaction of target race by object type, reflecting the Shooter Bias, was not statistically significant.”[4] That is, the strength of the relationship that Correll et al. (2002) found between study participants and the (high) likelihood that they would “shoot” at black targets was not found in the present study. Additionally, they note that they “eliminated time pressure” from the task itself. Although it was not suggested that this impacted the usefulness of the measure of Shooter Bias, it is difficult to imagine that it did not do so. To this, they footnote the following caveat:

Variance in the degree and direction of the stereotype endorsement points to one reason for our failure to replicate Correll et. al’s (2002) typically robust Shooter Bias effect. That is, our sample appears to have held stereotypes linking Blacks and weapons/aggression/danger to a lesser extent than did Correll and colleagues’ participants. In Correll et al. (2002, 2003), participants one SD below the mean on the stereotype measure reported an anti-Black stereotype, whereas similarly low scorers on our RWS IAT evidenced a stronger association between Whites and weapons. Further, the adaptation of the Shooter Task reported here may have been less sensitive than the procedure developed by Correll and colleagues. In the service of shortening and simplifying the task, we used fewer trials, eliminated time pressure and rewards for speed and accuracy, and presented only one background per trial.[5]

Glaser and Knowles claimed that the interaction of the (RWS) with the Shooter Task results proved “significant”; however, if the Shooter Bias failed to materialize (in the standard Correll et al. way) with study participants, it is difficult to see how the (RWS) was measuring anything except itself, generally speaking. This is further complicated by the fact that the interaction between the Shooter Bias and the (RWS) revealed “a mild reverse stereotype associating Whites with weapons (d = -0.15) and a strong stereotype associating Blacks with weapons (d = 0.83), respectively.”[6]

Recall that Glaser and Knowles (2007) aimed to show that participants high in (IMCP) would be able to inhibit implicit anti-black stereotypes and thus inhibit automatic anti-black behaviors. Using (NAP) and (BOP) as proxies for implicit control, participants high in (NAP) and moderate in (BOP) – as those with moderate (BOP) will be motivated to avoid bias – should show the weakest association between (RWS) and Shooter Bias. Instead, the lowest levels of Shooter Bias were seen in “low NAP, high BOP, and low RWS” study participants, or those who do not disapprove of prejudice, would describe themselves as prejudiced, and also showed lowest levels of (RWS).[7]

They noted that neither “NAP nor BOP alone was significantly related to the Shooter Bias,” but “the influence of RWS on Shooter Bias remained significant.”[8] In fact, greater bias was actually found at higher (NAP) and (BOP) levels.[9] This bias seemed to map onto the initial Shooter Task results. It is most likely that (RWS) was the most important measure in this study for assessing implicit bias, not, as the study claimed, for assessing implicit motivation to control prejudice.

What Kind of Bias?

It is also not clear that the (RWS) was capturing implicit rather than explicit bias in this study. By the time study participants were tasked with the (RWS), automatic stereotype activation may have been inhibited simply in virtue of participants’ involvement in the Shooter Task and the (IAT) assessments regarding race-related prejudice. That is, race-sensitivity was brought to consciousness in the sequencing of the test process.

Although we cannot get into the heads of the study participants, this counter-explanation seems a compelling possibility: the sequential tasks involved in the study captured participants’ ability to increase focus and conscious attention on the race-related (IAT) test. Additionally, it is possible that some participants could both cue and follow their own conscious internal commands: “If I see a black face, I won’t judge!” Consider that this is exactly how implementation intentions work.

Consider that this is also how Armageddon chess and other speed strategy games work. In their (2008) follow-up study on (IMCP) and cognitive depletion, Park et al. retreat somewhat from the initial claims about the implicit nature of (IMCP):

We cannot state for certain that our measure of IMCP reflects a purely nonconscious construct, nor that differential speed to “shoot” Black armed men vs. White armed men in a computer simulation reflects purely automatic processes. Most likely, the underlying stereotypes, goals, and behavioral responses represent a blend of conscious and nonconscious influences…Based on the results of the present study and those of Glaser and Knowles (2008), it would be premature to conclude that IMCP is a purely and wholly automatic construct, meeting the “four horsemen” criteria (Bargh, 1990). Specifically, it is not yet clear whether high IMCP participants initiate control of prejudice without intention; whether implicit control of prejudice can itself be inhibited, if for some reason someone wanted to; nor whether IMCP-instigated control of spontaneous bias occurs without awareness.[10]

If the (IMCP) potentially measures low-level conscious attention, this makes the question of what implicit measurements actually measure in the context of sequential tasks all the more important. In the two final examples, Blair et al.’s (2001) study on the use of counterstereotype imagery and Moskowitz and Li’s (2011) study on the use of counterstereotype egalitarian goals, we are again confronted with the issue of sequencing. In the study by Moskowitz and Li, study participants were asked to write down an example of a time when “they failed to live up to the ideal specified by an egalitarian goal, and to do so by relaying an event relating to African American men.”[11]

They were then given a series of computerized LDTs (lexical decision tasks) and primes involving photographs of black and white faces and stereotypical and non-stereotypical attributes of black people (crime, lazy, stupid, nervous, indifferent, nosy). Over a series of four experiments, Moskowitz and Li found that when egalitarian goals were “accessible,” study participants were able to successfully generate stereotype inhibition. Blair et al. asked study participants to use counterstereotypical (CS) gender imagery over a series of five experiments, e.g., “Think of a strong, capable woman,” and then administered a series of implicit measures, including the (IAT).

Similar to Moskowitz and Li (2011), Blair et al. (2001) found that (CS) gender imagery was successful in reducing implicit gender stereotypes leaving “little doubt that the CS mental imagery per se was responsible for diminishing implicit stereotypes.”[12] In both cases, the study participants were explicitly called upon to focus their attention on experiences and imagery pertaining to negative stereotypes before the implicit measures, i.e., tasks, were administered. Again it is not clear that the implicit measures measured the supposed target.

In the case of Moskowitz and Li’s (2011) experiment, the study participants began by relating moments in their lives where they failed to live up to their goals. However, those goals can only be understood within a particular social and political framework where holding negatively prejudicial beliefs about African-American men is often explicitly judged harshly, even if not implicitly so. Given this, we might assume that the study participants were compelled into a negative affective state. But does this matter? As suggested by Monteith’s (1993) study, and a later study by Amodio et al. (2007), guilt can be a powerful tool.[13]

Questions of Guilt

If guilt was produced during the early stages of the experiment, it may have also participated in the inhibition of stereotype activation. Moskowitz and Li (2011) noted that “during targeted questioning in the debriefing, no participants expressed any conscious intent to inhibit stereotypes on the task, nor saw any of the tasks performed during the computerized portion of the experiment as related to the egalitarian goals they had undermined earlier in the session.”[14]

But guilt does not have to be conscious for it to produce effects. The guilt produced by recalling a moment of negative bias could be part and parcel of a larger feeling of moral failure. Moskowitz and Li needed to adequately disambiguate competing implicit motivations for stereotype inhibition before arriving at a definitive conclusion. This, I think, is a limitation of the study.

However, the same case could be made for (CS) imagery. Blair et al. (2001) noted that it is, in fact, possible that they too have missed competing motivations and competing explanations for stereotype inhibition. In particular, they suggested that by emphasizing counterstereotyping the researchers “may have communicated the importance of avoiding stereotypes and increased their motivation to do so.”[15] Still, the researchers dismissed the idea that this would lead to better (faster, more accurate) performance on the (IAT); but that is merely asserting that the (IAT) must measure exactly what the (IAT) claims that it does. Fast, accurate, and conscious measures are excluded from that claim. Complicated internal motivations are excluded from that claim.

But on what grounds? Consider Fiedler et al.’s (2006) argument that the (IAT) is susceptible to faking and strategic processing, or Brendl et al.’s (2001) argument that it is not possible to infer a single cause from (IAT) results, or Fazio and Olson’s (2003) claim that “the IAT has little to do with what is automatically activated in response to a given stimulus.”[16]

These studies call into question the claim that implicit measures like the (IAT) can measure implicit bias in the clear, problem-free manner that is often suggested in the literature. Implicit interventions into implicit bias that utilize the (IAT) are difficult to support for this reason. Implicit interventions that utilize sequential (IAT) tasks are also difficult to support for this reason. Of course, this is also a live debate, and the problems I have discussed here are far from the only ones that plague this type of research.[17]

That said, when it comes to this research we are too often left wondering if the measure itself is measuring the right thing. Are we capturing implicit bias or some other socially generated phenomenon? Do the measured changes we see in study results reflect the validity of the instrument or the cognitive maneuverings of study participants? These are all critical questions that need sussing out. In the meantime, the target conclusion, that implicit interventions will lead to reductions in real-world discrimination, moves further away.[18] We find evidence of this in Forscher et al.’s (2018) meta-analysis of 492 implicit interventions:

We found little evidence that changes in implicit measures translated into changes in explicit measures and behavior, and we observed limitations in the evidence base for implicit malleability and change. These results produce a challenge for practitioners who seek to address problems that are presumed to be caused by automatically retrieved associations, as there was little evidence showing that change in implicit measures will result in changes for explicit measures or behavior…Our results suggest that current interventions that attempt to change implicit measures will not consistently change behavior in these domains. These results also produce a challenge for researchers who seek to understand the nature of human cognition because they raise new questions about the causal role of automatically retrieved associations…To better understand what the results mean, future research should innovate with more reliable and valid implicit, explicit, and behavioral tasks, intensive manipulations, longitudinal measurement of outcomes, heterogeneous samples, and diverse topics of study.[19]

Finally, what I take to be behind Alcoff’s (2010) critical question at the beginning of this piece is a kind of skepticism about how individuals can successfully tackle implicit bias through either explicit or implicit practices without the support of the social spaces, communities, and institutions that give shape to our social lives. Implicit bias is related to the culture one is in and the stereotypes it produces. So instead of insisting on changing people to reduce stereotyping, what if we insisted on changing the culture?

As Alcoff notes: “We must be willing to explore more mechanisms for redress, such as extensive educational reform, more serious projects of affirmative action, and curricular mandates that would help to correct the identity prejudices built up out of faulty narratives of history.”[20] This is an important point. It is a point that philosophers who work on implicit bias would do well to take seriously.

Science may not give us the way out of racism, sexism, and gender discrimination. At the moment, it may only give us tools for seeing ourselves a bit more clearly. Further claims about implicit interventions appear as willful scientism. They reinforce the belief that science can cure all of our social and political ills. But this is magical thinking.

Contact details:


Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, p. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

[2] Glaser, Jack and Knowles, Eric D. (2007), p. 167.

[3] Glaser, Jack and Knowles, Eric D. (2007), p. 170.

[4] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[5] Glaser, Jack and Knowles, Eric D. (2007), p. 168.

[6] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[7] Glaser, Jack and Knowles, Eric D. (2007), p. 169. Of this “rogue” group, Glaser and Knowles note: “This group had, on average, a negative RWS (i.e., rather than just a low bias toward Blacks, they tended to associate Whites more than Blacks with weapons; see footnote 4). If these reversed stereotypes are also uninhibited, they should yield reversed Shooter Bias, as observed here” (169).

[8] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[9] Glaser, Jack and Knowles, Eric D. (2007), p. 169.

[10] Sang Hee Park, Jack Glaser, and Eric D. Knowles. (2008). “Implicit Motivation to Control Prejudice Moderates the Effect of Cognitive Depletion on Unintended Discrimination,” in Social Cognition, Vol. 26, No. 4, p. 416.

[11] Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

[12] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

[13] Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30

[14] Moskowitz, Gordon and Li, Peizhong (2011), p. 108.

[15] Blair, I. V., Ma, J. E., & Lenton, A. P. (2001), p. 838.

[16] Fiedler, Klaus, Messner, Claude, and Bluemke, Matthias. (2006). “Unresolved Problems with the ‘I’, the ‘A’, and the ‘T’: A Logical and Psychometric Critique of the Implicit Association Test (IAT),” in European Review of Social Psychology, 12, pp. 74-147. Brendl, C. M., Markman, A. B., & Messner, C. (2001). “How Do Indirect Measures of Evaluation Work? Evaluating the Inference of Prejudice in the Implicit Association Test,” in Journal of Personality and Social Psychology, 81(5), pp. 760-773. Fazio, R. H., and Olson, M. A. (2003). “Implicit Measures in Social Cognition Research: Their Meaning and Uses,” in Annual Review of Psychology 54, pp. 297-327.

[17] There is significant debate over the issue of whether the implicit bias that (IAT) tests measure translate into real-world discriminatory behavior. This is a complex and compelling issue. It is also an issue that could render moot the (IAT) as an implicit measure of anything full stop. Anthony G. Greenwald, Mahzarin R. Banaji, and Brian A. Nosek (2015) write: “IAT measures have two properties that render them problematic to use to classify persons as likely to engage in discrimination. Those two properties are modest test–retest reliability (for the IAT, typically between r = .5 and r = .6; cf., Nosek et al., 2007) and small to moderate predictive validity effect sizes. Therefore, attempts to diagnostically use such measures for individuals risk undesirably high rates of erroneous classifications. These problems of limited test-retest reliability and small effect sizes are maximal when the sample consists of a single person (i.e., for individual diagnostic use), but they diminish substantially as sample size increases. Therefore, limited reliability and small to moderate effect sizes are not problematic in diagnosing system-level discrimination, for which analyses often involve large samples” (557). However, Oswald et al. (2013) argue that “IAT scores correlated strongly with measures of brain activity but relatively weakly with all other criterion measures in the race domain and weakly with all criterion measures in the ethnicity domain. IATs, whether they were designed to tap into implicit prejudice or implicit stereotypes, were typically poor predictors of the types of behavior, judgments, or decisions that have been studied as instances of discrimination, regardless of how subtle, spontaneous, controlled, or deliberate they were. 
Explicit measures of bias were also, on average, weak predictors of criteria in the studies covered by this meta-analysis, but explicit measures performed no worse than, and sometimes better than, the IATs for predictions of policy preferences, interpersonal behavior, person perceptions, reaction times, and microbehavior. Only for brain activity were correlations higher for IATs than for explicit measures…but few studies examined prediction of brain activity using explicit measures. Any distinction between the IATs and explicit measures is a distinction that makes little difference, because both of these means of measuring attitudes resulted in poor prediction of racial and ethnic discrimination” (182-183). For further details about this debate, see: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192 and Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

[18] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

[19] Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from

[20] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Author Information: Kamili Posey, Kingsborough College,

Posey, Kamili. “Scientism in the Philosophy of Implicit Bias Research.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 1-16.

Kamili Posey’s article will be posted over two instalments. The pdf of the article gives specific page references, and includes the entire essay. Shortlink:

Image by Walt Stoneburner via Flickr / Creative Commons


If you consider the recent philosophical literature on implicit bias research, then you would be forgiven for thinking that the problem of successful interventions into implicit bias falls into the category of things that are resolved. If you consider the recent social psychological literature on interventions into implicit bias, then you would come away with a similar impression. The claim is that implicit bias is epistemically harmful because we profess to believe one thing while our implicit attitudes tell a different story.

Strategy Models and Discrepancy Models

Implicit bias is socially harmful because it maps onto our real-world discriminatory practices, e.g., workplace discrimination, health disparities, racist police shootings, and identity-prejudicial public policies. Consider the results of Greenwald et al.’s (1998) Implicit Association Test. Consider also the results of Correll et al.’s (2002) “Shooter Bias.” If cognitive interventions are possible, and specifically implicit cognitive interventions, then they can help knowers implicitly manage automatic stereotype activation. Do these interventions lead to real-world reductions of bias?

Linda Alcoff (2010) notes that it is difficult to see how implicit, nonvolitional biases (e.g., those at the root of social and epistemic ills like race-based police shootings) can be remedied by explicit epistemic practices.[1] I would follow this by noting that it is equally difficult to see how nonvolitional biases can be remedied by implicit epistemic practices as well.

Jennifer Saul (2017) responds to Alcoff’s (2010) query by pointing to social psychological experiments conducted by Margo Monteith (1993), Jack Glaser and Eric D. Knowles (2007), Gordon B. Moskowitz and Peizhong Li (2011), Saaid A. Mendoza et al. (2010), Irene V. Blair et al. (2001), and Kerry Kawakami et al. (2005).[2] These studies suggest that implicit self-regulation of implicit bias is possible. Saul notes that philosophers with objections like Alcoff’s, and presumably like mine, ought “not just to reflect upon the problem from the armchair – at the very least, one should use one’s laptop to explore the internet for effective interventions.”[3]

But I think this recrimination rings rather hollow. How entitled are we to extrapolate from social psychological studies in the manner that Saul advocates? How entitled are we to assume the epistemic superiority of scientific research on racism, sexism, etc. over the phenomenological reporting of marginalized knowers? Lastly, how entitled are we to claims about the real-world applicability of these study results?[4] My guess is that the devil is in the details. My guess is also that social psychologists have not found the silver bullet for remedying implicit bias. But let’s follow Saul’s suggestion and not just reflect from the armchair.

A caveat: the following analysis is not intended to be an exhaustive or thorough refutation of what is ultimately a large body of social psychological literature. Instead, it is intended to cast a bit of doubt on how these models are used by philosophers as successful remedies for implicit bias. It is intended to cast a bit of doubt on the idea that remedies for racist, sexist, homophobic, and transphobic discrimination are merely a training session or reflective exercise away.

This type of thinking devalues the very real experiences of those who live through racism, sexism, homophobia, and transphobia. It devalues how pervasive these experiences are in American society and the myriad ways in which the effects of discrimination seep into the marrow of marginalized bodies and marginalized communities. Worse still, it implies that marginalized knowers who claim, “You don’t understand my experiences!” are compelled to contend with the hegemonic role of “Science” that continues to speak over their own voices and about their own lives.[5] But again, back to the studies.

Four Methods of Remedy

I break up the above studies into four intuitive model types: (1) strategy models, (2) discrepancy models, (3) (IAT) models, and (4) egalitarian goal models. (I am not a social scientist, so the operative word here is “intuitive.”) Let’s first consider Kawakami et al. (2005) and Mendoza et al. (2010) as examples of strategy models. Kawakami et al. used Devine and Monteith’s (1993) notion of a negative stereotype as a “bad habit” that a knower needs to “kick” to model strategies that aid in the inhibition of automatic stereotype activation, or the inhibition of “increased cognitive accessibility of characteristics associated with a particular group.”[6]

In a previous study, Kawakami et al. (2000) asked research participants presented with photographs of black individuals and white individuals with stereotypical traits and non-stereotypical traits listed under each photograph to respond “No” to stereotypical traits and “Yes” to non-stereotypical traits.[7] The study found that “participants who were extensively trained to negate racial stereotypes initially also demonstrated stereotype activation, [but] this effect was eliminated by the extensive training.

Furthermore, Kawakami et al. found that practice effects of this type lasted up to 24 h following the training.”[8] Kawakami et al. (2005) used this training model to ground an experiment aimed at strategies for reducing stereotype activation in the preference of men over women for leadership roles in managerial positions. Despite the training, they found that there was “no difference between Nonstereotypic Association Training and No Training conditions…participants were indeed attempting to choose the best candidate overall, in these conditions there was an overall pattern of discrimination against women relative to men in recommended hiring for a managerial position (Glick, 1991; Rudman & Glick, 1999)” [emphasis mine].[9]

Substantive conclusions are difficult to draw from a single study, but one critical point is that learning occurred in the training while improved stereotype inhibition did not. What, exactly, are we to make of this result? Kawakami et al. (2005) claimed that “similar levels of bias in both the Training and No Training conditions implicates the influence of correction processes that limit the effectiveness of training.”[10] That is, they attributed the lack of influence of corrective processes to a variety of contributing factors that limited the effectiveness of the strategy itself.

Notice, however, that this does not implicate the strategy as a failed one. Most notably Kawakami et al. found that “when people have the time and opportunity to control their responses [they] may be strongly shaped by personal values and temporary motivations, strategies aimed at changing the automatic activation of stereotypes will not [necessarily] result in reduced discrimination.”[11]

This suggests that although the strategies failed to reduce stereotype activation they may still be helpful in limited circumstances “when impressions are more deliberative.”[12] One wonders under what conditions such impressions can be more deliberative. More than that, how useful are such limited-condition strategies for dealing with everyday life and everyday automatic stereotype activation?

Mendoza et al. (2010) tested the effectiveness of “implementation intentions” as a strategy to reduce the activation or expression of implicit stereotypes using the Shooter Task.[13] They tested both “distraction-inhibiting” implementation intentions and “response-facilitating” implementation intentions. Distraction-inhibiting intentions are strategies “designed to engage inhibitory control,” such as inhibiting the perception of distracting or biasing information, while “response-facilitating” intentions are strategies designed to enhance goal attainment by focusing on specific goal-directed actions.[14]

In the first study, Mendoza et al. asked participants to repeat the on-screen phrase, “If I see a person, then I will ignore his race!” in their heads and then type the phrase into the computer. This resulted in study participants having a reduced number of errors in the Shooter Task. But let’s come back to whether and how we might be able to extrapolate from these results. The second study compared a simple-goal strategy with an implementation intention strategy.

Study participants in the simple-goal strategy group were asked to follow the strategy, “I will always shoot a person I see with a gun!” and “I will never shoot a person I see with an object!” Study participants in the implementation intention strategy group were asked to use a conditional, if-then, strategy instead: “If I see a person with an object, then I will not shoot!” Mendoza et al. found that a response-facilitating implementation intention “enhanced controlled processing but did not affect automatic stereotyping processing,” while a distraction-inhibiting implementation intention “was associated with an increase in controlled processing and a decrease in automatic stereotyping processes.”[15]

How to Change Both Action and Thought

Notice that if the goal is to reduce automatic stereotype activation through reflexive control, only a distraction-inhibiting strategy achieved the desired effect. Notice also how the successful use of a distraction-inhibiting strategy may require a type of “non-messy” social environment unachievable outside of a laboratory experiment.[16] Or, as Mendoza et al. (2010) rightly note: “The current findings suggest that the quick interventions typically used in psychological experiments may be more effective in modulating behavioral responses or the temporary accessibility of stereotypes than in undoing highly edified knowledge structures.”[17]

The hope, of course, is that distraction-inhibiting strategies can help dominant knowers reduce automatic stereotype activation and response-facilitated strategies can help dominant knowers internalize controlled processing such that negative bias and stereotyping can be (one day) reflexively controlled as well. But these are only hopes. The only thing that we can rightly conclude from these results is that if we ask a dominant knower to focus on an internal command, they will do so. The result is that the activation of negative bias fails to occur.

This does not mean that the knower has reduced their internalized negative biases and prejudices or that they can continue to act on the internal commands in the future (in fact, subsequent studies reveal the effects are short-lived[18]). As Mendoza et al. also note: “In psychometric terms, these strategies are designed to enhance accuracy without necessarily affecting bias. That is, a person may still have a tendency to associate Black people with violence and thus be more likely to shoot unarmed Blacks than to shoot unarmed Whites.”[19] Despite hope for these strategies, there is very little to support their real-world applicability.

Hunting for Intuitive Hypocrisies

I would extend a similar critique to Margo Monteith’s (1993) discrepancy model. Monteith’s (1993) often-cited study uses two experiments to investigate prejudice-related discrepancies in the behaviors of low-prejudice (LP) and high-prejudice (HP) individuals and the ability to engage in self-regulated prejudice reduction. In the first experiment, (LP) and (HP) heterosexual study participants were asked to evaluate two law school applications, one for an implied gay applicant and one for an implied heterosexual applicant. Study participants “were led to believe that they had evaluated a gay law school applicant negatively because of his sexual orientation;” they were tricked into a “discrepancy-activated condition” or a condition that was at odds with their believed prejudicial state.[20]

All of the study participants were then told that the applications were identical and that those who had rejected the gay applicant had done so because of the applicant’s sexual orientation. It is important to note that the applicants’ qualifications were not, in fact, identical. The gay applicant’s application materials were made to look worse than the heterosexual applicant’s materials. This was done to compel the rejection of the applicant.

Study participants were then provided a follow-up questionnaire and essay allegedly written by a professor who wanted to know (a) “why people often have difficulty avoiding negative responses toward gay men,” and (b) “how people can eliminate their negative responses toward gay men.”[21] Researchers asked study participants to record their reactions to the faculty essay and write down as much as they could remember about what they read. They were then told about the deception in the experiment and told why such deception was incorporated into the study.

Monteith (1993) found that “low and high prejudiced subjects alike experienced discomfort after violating their personal standards for responding to a gay man, but only low prejudiced subjects experienced negative self-directed affect.”[22] Low prejudiced, (LP), “discrepancy-activated subjects,” also spent more time reading the faculty essay and “showed superior recall for the portion of the essay concerning why prejudice-related discrepancies arise.”[23]

The “discrepancy experience” generated negative self-directed affect, or guilt, for (LP) study participants with the hope that the guilt would (a) “motivate discrepancy reduction (e.g., Rokeach, 1973)” and (b) “serve to establish strong cues for punishment (cf. Gray, 1982).”[24] The idea here is that the experiment results point to the existence of a self-regulatory mechanism that can replace automatic stereotype activation with “belief-based responses;” however, “it is important to note that the initiation of self-regulatory mechanisms is dependent on recognizing and interpreting one’s responses as discrepant from one’s personal beliefs.”[25]

The discrepancy between what one is shown to believe and what one professes to believe (whether real or manufactured, as in the experiment) is aimed at getting knowers to engage in heightened self-focus due to negative self-directed affect. The goal of Monteith’s (1993) study was to show that self-directed affect would lead to a kind of corrective belief-making process that is both less prejudicial and future-directed.

But if it’s guilt that’s doing the psychological work in these cases, then it’s not clear that knowers wouldn’t find other means of assuaging such feelings. Why wouldn’t it be the case that generating negative self-directed affect would point a knower toward anything they deem necessary to restore a more positive sense of self? To this, Monteith made the following concession:

Steele (1988; Steele & Liu, 1983) contended that restoration of one’s self-image after a discrepancy experience may not entail discrepancy reduction if other opportunities for self-affirmation are available. For example, Steele (1988) suggested that a smoker who wants to quit might spend more time with his or her children to resolve the threat to the self-concept engendered by the psychological inconsistency created by smoking. Similarly, Tesser and Cornell (1991) found that different behaviors appeared to feed into a general “self-evaluation reservoir.” It follows that prejudice-related discrepancy experiences may not facilitate the self-regulation of prejudiced responses if other means to restoring one’s self-regard are available [emphasis mine].[26]

Additionally, she noted that even if individuals are committed to reducing or “unlearning” automatic stereotyping, they “may become frustrated and disengage from the self-regulatory cycle, abandoning their goal to eliminate prejudice-like responses.”[27] Cognitive exhaustion, or cognitive depletion, can occur after intergroup exchanges as well. This may make it even less likely that a knower will continue to feel guilty, and to use that guilt to inhibit the activation of negative stereotypes when they find themselves struggling cognitively. Conversely, there is also the issue of a kind of lab-based, or experiment-based, cognitive priming. I pick up with this idea along with the final two models of implicit interventions in the next part.

Contact details:


Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

Amodio, David M., Devine, Patricia G., and Harmon-Jones, Eddie. (2007). “A Dynamic Model of Guilt: Implications for Motivation and Self-Regulation in the Context of Prejudice,” in Psychological Science 18(6), pp. 524-30.

Blair, I. V., Ma, J. E., & Lenton, A. P. (2001). “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes Through Mental Imagery,” in Journal of Personality and Social Psychology, 81:5, p. 837.

Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

Forscher, Patrick S., Lai, Calvin K., Axt, Jordan R., Ebersole, Charles R., Herman, Michelle, Devine, Patricia G., and Nosek, Brian A. (August 13, 2018). “A Meta-Analysis of Procedures to Change Implicit Measures.” [Preprint]. Retrieved from

Glaser, Jack and Knowles, Eric D. (2007). “Implicit Motivation to Control Prejudice,” in Journal of Experimental Social Psychology 44, p. 165.

Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69.

Greenwald, Anthony G., Banaji, Mahzarin R., and Nosek, Brian A. (2015). “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 553-561.

Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

Moskowitz, Gordon and Li, Peizhong. (2011). “Egalitarian Goals Trigger Stereotype Inhibition,” in Journal of Experimental Social Psychology 47, p. 106.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2015). “Using the IAT to Predict Ethnic and Racial Discrimination: Small Effect Sizes of Unknown Societal Significance,” in Journal of Personality and Social Psychology, Vol. 108, No. 4, pp. 562-571.

Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[1] Alcoff, Linda. (2010). “Epistemic Identities,” in Episteme 7 (2), p. 132.

[2] Saul, Jennifer. (2017). “Implicit Bias, Stereotype Threat, and Epistemic Injustice,” in The Routledge Handbook of Epistemic Injustice, eds. Ian James Kidd, José Medina, and Gaile Pohlhaus, Jr. [Google Books Edition] New York: Routledge.

[3] Saul, Jennifer (2017), p. 466.

[4] See: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P. E. (2013). “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies,” in Journal of Personality and Social Psychology, Vol. 105, pp. 171-192.

[5] I owe this critical point in its entirety to the work of Lacey Davidson and her presentation, “When Testimony Isn’t Enough: Implicit Bias Research as Epistemic Injustice” at the Feminist Epistemologies, Methodologies, Metaphysics, and Science Studies (FEMMSS) conference in Corvallis, Oregon in 2018. Davidson notes that the work of philosophers of race and critical race theorists often takes a backseat to the projects of philosophers of social science who engage with the science of racialized attitudes as opposed to the narratives and/or testimonies of those with lived experiences of racism. Davidson describes this as a type of epistemic injustice against philosophers of race and critical race theorists. She also notes that philosophers of race and critical race theorists are often people of color while the philosophers of social science are often white. This dimension of analysis is important but unexplored. Davidson’s work highlights how epistemic injustice operates within the academy to perpetuate systems of racism and oppression under the guise of “good science.” Her argument was inspired by the work of Jeanine Weekes Schroer on the problematic nature of current research on stereotype threat and implicit bias in “Giving Them Something They Can Feel: On the Strategy of Scientizing the Phenomenology of Race and Racism,” Knowledge Cultures 3(1), 2015.

[6] Kawakami, K., Dovidio, J. F., and van Kamp, S. (2005). “Kicking the Habit: Effects of Nonstereotypic Association Training and Correction Processes on Hiring Decisions,” in Journal of Experimental Social Psychology 41:1, pp. 68-69. See also: Devine, P. G., & Monteith, M. J. (1993). “The Role of Discrepancy-Associated Affect in Prejudice Reduction,” in Affect, Cognition and Stereotyping: Interactive Processes in Group Perception, eds., D. M. Mackie & D. L. Hamilton. San Diego: Academic Press, pp. 317–344.

[7] Kawakami et al. (2005), p. 69. See also: Kawakami, K., Dovidio, J. F., Moll, J., Hermsen, S., & Russin, A. (2000). “Just Say No (To Stereotyping): Effects Of Training in Negation of Stereotypic Associations on Stereotype Activation,” in Journal of Personality and Social Psychology, 78, 871–888.

[8] Kawakami et al. (2005), p. 69.

[9] Kawakami et al. (2005), p. 73.

[10] Kawakami et al. (2005), p. 73.

[11] Kawakami et al. (2005), p. 74.

[12] Kawakami et al. (2005), p. 74.

[13] The Shooter Task refers to a computer simulation experiment where images of black and white males appear on a screen holding a gun or a non-gun object. Study participants are given a short response time and tasked with pressing a button, or “shooting” armed images versus unarmed images. Psychological studies have revealed a “shooter bias” in the tendency to shoot black, unarmed males more often than unarmed white males. See: Correll, Joshua, Bernadette Park, Bernd Wittenbrink, and Charles M. Judd. (2002). “The Police Officer’s Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals,” in Journal of Personality and Social Psychology, Vol. 83, No. 6, 1314–1329.

[14] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David. (2010). “Reducing the Expression of Implicit Stereotypes: Reflexive Control through Implementation Intentions,” in Personality and Social Psychology Bulletin 36:4, pp. 513-514.

[15] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[16] A “messy environment” presents additional challenges to studies like the one discussed here. As Kees Keizer, Siegwart Lindenberg, and Linda Steg (2008) claim in “The Spreading of Disorder,” people are more likely to violate social rules when they see that others are violating the rules as well. I can only imagine that this is applicable to epistemic rules as well. I mention this here to suggest that the “cleanliness” of the social environment of social psychological studies such as the one by Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010) presents an additional obstacle in extrapolating the resulting behaviors of research participants to the public-at-large. Short of mass hypnosis, how could the strategies used in these experiments, strategies that are predicated on the noninterference of other destabilizing factors, be meaningfully applied to everyday life? There is a tendency in the philosophical literature on implicit bias and stereotype threat to outright ignore the limited applicability of much of this research in order to make critical claims about interventions into racist, sexist, homophobic, and transphobic behaviors. Philosophers would do well to recognize the complexity of these issues and to be more cautious about the enthusiastic endorsement of experimental results.

[17] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[18] Webb, Thomas L., Sheeran, Paschal, and Pepper, John. (2012). “Gaining Control Over Responses to Implicit Attitude Tests: Implementation Intentions Engender Fast Responses on Attitude-Incongruent Trials,” in British Journal of Social Psychology 51, pp. 13-32.

[19] Mendoza, Saaid, Gollwitzer, Peter, and Amodio, David (2010), p. 520.

[20] Monteith, Margo. (1993). “Self-Regulation of Prejudiced Responses: Implications for Progress in Prejudice-Reduction Efforts,” in Journal of Personality and Social Psychology 65:3, p. 472.

[21] Monteith (1993), p. 474.

[22] Monteith (1993), p. 475.

[23] Monteith (1993), p. 477.

[24] Monteith (1993), p. 477.

[25] Monteith (1993), p. 477.

[26] Monteith (1993), p. 482.

[27] Monteith (1993), p. 483.

Author Information: Raphael Sassower, University of Colorado, Colorado Springs,

Sassower, Raphael. “Post-Truths and Inconvenient Facts.” Social Epistemology Review and Reply Collective 7, no. 8 (2018): 47-60.

The pdf of the article gives specific page references. Shortlink:

Can one truly refuse to believe facts?
Image by Oxfam International via Flickr / Creative Commons


If nothing else, Steve Fuller has his ear to the pulse of popular culture and the academics who engage in its twists and turns. Starting with Brexit and continuing into the Trump-era abyss, “post-truth” was dubbed by the OED as its word of the year in 2016. Fuller has mustered his collected publications to recast the debate over post-truth and frame it within STS in general and his own contributions to social epistemology in particular.

This could have been a public mea culpa of sorts: we, the community of sociologists (and some straggling philosophers and anthropologists and perhaps some poststructuralists) may seem to someone who isn’t reading our critiques carefully to be partially responsible for legitimating the dismissal of empirical data, evidence-based statements, and the means by which scientific claims can be deemed not only credible but true. Instead, we are dazzled by a range of topics (historically anchored) that explain how we got to Brexit and Trump—yet Fuller’s analyses of them don’t ring alarm bells. There is almost a hidden glee that indeed the privileged scientific establishment, insular scientific discourse, and some of its experts who pontificate authoritative consensus claims are all bound to be undone by the rebellion of mavericks and iconoclasts that include intelligent design promoters and neoliberal freedom fighters.

In what follows, I do not intend to summarize the book, as it is short and entertaining enough for anyone to read on their own. Instead, I wish to outline three interrelated points that one might argue need not be argued but, apparently, do: 1) certain critiques of science have contributed to the Trumpist mindset; 2) the politics of Trumpism is too dangerous to be sanguine about; 3) the post-truth condition is troublesome and insidious. Though Fuller deals with some of these issues, I hope to add some constructive clarification to them.

Part One: Critiques of Science

As Theodor Adorno reminds us, critique is essential not only for philosophy, but also for democracy. He is aware that the “critic becomes a divisive influence, with a totalitarian phrase, a subversive” (1998/1963, 283) insofar as the status quo is being challenged and sacred political institutions might have to change. The price of critique, then, can be high, and therefore critique should be managed carefully and only cautiously deployed. Should we refrain from critique, then? Not at all, continues Adorno.

But if you think that a broad, useful distinction can be offered among different critiques, think again: “[In] the division between responsible critique, namely, that practiced by those who bear public responsibility, and irresponsible critique, namely, that practiced by those who cannot be held accountable for the consequences, critique is already neutralized.” (Ibid. 285) Adorno’s worry is not only that one forgets that “the truth content of critique alone should be that authority [that decides if it’s responsible],” but that when such a criterion is “unilaterally invoked,” critique itself can lose its power and be at the service “of those who oppose the critical spirit of a democratic society.” (Ibid)

In a political setting, the charge of irresponsible critique shuts the conversation down and ensures political hegemony without disruptions. Modifying Adorno’s distinction between (politically) responsible and irresponsible critiques, responsible scientific critiques are constructive insofar as they attempt to improve methods of inquiry, data collection and analysis, and contribute to the accumulated knowledge of a community; irresponsible scientific critiques are those whose goal is to undermine the very quest for objective knowledge and the means by which such knowledge can be ascertained. Questions about the legitimacy of scientific authority are related to but not of exclusive importance for these critiques.

Have those of us committed to the critique of science missed the mark of the distinction between responsible and irresponsible critiques? Have we become so subversive and perhaps self-righteous that science itself has been threatened? Though Fuller is primarily concerned with the hegemony of the sociology of science studies and the movement he has championed under the banner of “social epistemology” since the 1980s, he does acknowledge the Popperians and their critique of scientific progress and even admires the Popperian contribution to the scientific enterprise.

But he is reluctant to recognize the contributions of Marxists, poststructuralists, and postmodernists who have been critically engaging the power of science since the 19th century. Among them, we find Jean-François Lyotard who, in The Postmodern Condition (1984/1979), follows Marxists and neo-Marxists who have regularly lumped science and scientific discourse with capitalism and power. This critical trajectory has been well rehearsed, so suffice it here to say, SSK, SE, and the Edinburgh “Strong Programme” are part of a long and rich critical tradition (whose origins are Marxist). Adorno’s Frankfurt School is part of this tradition, and as we think about science, which had come to dominate Western culture by the 20th century (in the place of religion, whose power had by then waned as the arbiter of truth), it was its privileged power and interlocking financial benefits that drew the ire of critics.

Were these critics “responsible” in Adorno’s political sense? Can they be held accountable for offering (scientific and not political) critiques that improve the scientific process of adjudication between criteria of empirical validity and logical consistency? Not always. Did they realize that their success could throw the baby out with the bathwater? Not always. While Fuller grants Karl Popper the upper hand (as compared to Thomas Kuhn) when indirectly addressing such questions, we must keep an eye on Fuller’s “baby.” It’s easy to overlook the slippage from the political to the scientific and vice versa: Popper’s claim that we never know the Truth doesn’t mean that his (and our) quest for discovering the Truth as such is abandoned; it is only made more difficult, as whatever is scientifically apprehended as truth remains putative.

Limits to Skepticism

What is precious about the baby—science in general, and scientific discourse and its community more particularly—is that it offered safeguards against frivolous skepticism. Robert Merton (1973/1942) famously outlined the four features of the scientific ethos, principles that characterized the ideal workings of the scientific community: universalism, communism (rebranded “communalism” amid Cold War anxieties), disinterestedness, and organized skepticism. It is the last principle that is relevant here, since it unequivocally demands an institutionalized mindset in which any hypothesis or theory articulated by any community member is accepted only putatively, pending scrutiny.

One detects the slippery slope that would move one from being on guard when engaged with any proposal to being so skeptical as to never accept any proposal, no matter how well documented or empirically supported. Al Gore, in his An Inconvenient Truth (2006), sounded the alarm about climate change. A dozen years later we are still plagued by climate-change deniers who refuse to look at the evidence, suggesting instead that the standards of science themselves—from data collection at the North Pole to computer simulations—have not been sufficiently met (“questions remain”) to accept human responsibility for the rise in the earth’s temperature. Incidentally, here is Fuller’s explanation of his own apparent doubt about climate change:

Consider someone like myself who was born in the midst of the Cold War. In my lifetime, scientific predictions surrounding global climate change has [sic] veered from a deep frozen to an overheated version of the apocalypse, based on a combination of improved data, models and, not least, a geopolitical paradigm shift that has come to downplay the likelihood of a total nuclear war. Why, then, should I not expect a significant, if not comparable, alteration of collective scientific judgement in the rest of my lifetime? (86)

Expecting changes in the model does not entail a) that no improved model can be offered; b) that methodological changes in themselves are a bad thing (they might be, rather, improvements); or c) that one should not take action at all based on the current model because in the future the model might change.

The Royal Society of London (1660) set the benchmark of scientific credibility low when it accepted as scientific evidence any report by two independent witnesses. As the years went by, testability (“confirmation,” for the Vienna Circle, “falsification,” for Popper) and repeatability were added as requirements for a report to be considered scientific, and by now, various other conditions have been proposed. Skepticism, organized or personal, remains at the very heart of the scientific march towards certainty (or at least high probability), but when used perniciously, it has derailed reasonable attempts to use science as a means by which to protect, for example, public health.

Both Michael Bowker (2003) and Robert Proctor (1995) chronicle cases where asbestos and cigarette lobbyists and lawyers alike were able to sow enough doubt, in the name of attenuated scientific data collection, to ward off regulators, legislators, and the courts for decades. Instead of accepting the empirical evidence attributing the failing health (and death) of workers and consumers to asbestos and nicotine, “organized skepticism” was weaponized to fight the sick and protect the interests of large corporations and their insurers.

Instead of buttressing scientific claims (that have passed the tests—in refereed professional conferences and publications, for example—of most institutional scientific skeptics), organized skepticism has been manipulated to ensure that no claim is ever scientific enough or carries the legitimacy of the scientific community. In other words, what should have remained the reasonable cautionary tale of a disinterested and communal activity (that could then be deemed universally credible) has turned into a circus of fire-breathing clowns ready to burn down the tent. The public remains confused, not realizing that just because the stakes have risen over the decades does not mean there are no standards that can ever be met. Despite lobbyists’ and lawyers’ best efforts at derailment, courts have eventually found cigarette companies and asbestos manufacturers guilty of exposing workers and consumers to deadly hazards.

Limits to Belief

If we add to this logic of doubt, which has been responsible for discrediting science and the conditions for proposing credible claims, a bit of U.S. cultural history, we may enjoy a more comprehensive picture of the unintended consequences of certain critiques of science. Citing Kurt Andersen (2017), Robert Darnton suggests that the Enlightenment’s “rational individualism interacted with the older Puritan faith in the individual’s inner knowledge of the ways of Providence, and the result was a peculiarly American conviction about everyone’s unmediated access to reality, whether in the natural world or the spiritual world. If we believe it, it must be true.” (2018, 68)

This way of thinking—unmediated experiences and beliefs, unconfirmed observations, and disregard of others’ experiences and beliefs—continues what Richard Hofstadter (1962) dubbed “anti-intellectualism.” For Americans, this predates the republic and is characterized by a hostility towards the life of the mind (which, at the time, meant religious texts), critical thinking (self-reflection and the rules of logic), and even literacy. The heart (our emotions) can more honestly lead us to the Promised Land, whether it is heaven on earth in the Americas or the Christian afterlife; any textual interference or reflective pondering is necessarily an impediment, one to be suspicious of and avoided.

This lethal combination of the life of the heart and righteous individualism brings about general ignorance and what psychologists call “confirmation bias” (the tendency to endorse what we already believe to be true regardless of countervailing evidence). The critique of science, along this trajectory, can be but one of many so-called critiques of anything said or proven by anyone whose ideology we do not endorse. But is this even critique?

Adorno would find this a charade, a pretense that poses as a critique but in reality is a simple dismissal without intellectual engagement, a dogmatic refusal to listen and observe. He definitely would be horrified by Stephen Colbert’s oft-quoted quip on “truthiness” as “the conviction that what you feel to be true must be true.” Even those who resurrect Daniel Patrick Moynihan’s phrase, “You are entitled to your own opinion, but not to your own facts,” quietly admit that his admonishment is ignored by media more popular than informed.

On Responsible Critique

But surely there is merit to responsible critiques of science. Weren’t many of these critiques meant to dethrone the unparalleled authority claimed in the name of science, as Fuller admits all along? Wasn’t Lyotard (and Marx before him), for example, correct in pointing out the conflation of power and money in the scientific vortex that could legitimate whatever profit-maximizers desire? In other words, should scientific discourse be put on par with other discourses?  Whose credibility ought to be challenged, and whose truth claims deserve scrutiny? Can we privilege or distinguish science if it is true, as Monya Baker has reported, that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments” (2016, 1)?

Fuller remains silent about these important and responsible questions about the problematics (methodologically and financially) of reproducing scientific experiments. Baker’s report cites Nature‘s survey of 1,576 researchers and reveals “sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Ibid.) So, if science relies on reproducibility as a cornerstone of its legitimacy (and superiority over other discourses), and if the results are so dismal, should it not be discredited?

One answer, given by Hans E. Plesser, suggests that there is a confusion between the notions of repeatability (“same team, same experimental setup”), replicability (“different team, same experimental setup”), and reproducibility (“different team, different experimental setup”). If understood in these terms, it stands to reason that one may not get the same results all the time and that this fact alone does not discredit the scientific enterprise as a whole. Nuanced distinctions take us down a scientific rabbit-hole most post-truth advocates refuse to follow. These nuances are lost on a public that demands to know the “bottom line” in brief sound bites: Is science scientific enough, or is it bunk? When can we trust it?

Trump excels at this kind of rhetorical device: repeat a falsehood often enough and people will believe it; and because individual critical faculties are not a prerequisite for citizenship, post-truth means no truth, or whatever the president says is true. Adorno’s distinction between responsible and irresponsible political critics comes into play here; but he innocently failed to anticipate the Trumpian move to conflate the political and the scientific and to pretend as if there is no distinction—methodological and institutional—between political and scientific discourses.

With this cultural backdrop, many critiques of science have undermined its authority and thereby lent credence to any dismissal of science (legitimately by insiders and perhaps illegitimately at times by outsiders). Sociologists and postmodernists alike forgot to put warning signs on their academic and intellectual texts: Beware of hasty generalizations! Watch out for wolves in sheep’s clothing! Don’t throw the baby out with the bathwater!

One would think such advisories unnecessary. Yet without such safeguards, internal disputes and critical investigations appear to have unintentionally discredited the entire scientific enterprise in the eyes of post-truth promoters, the Trumpists whose neoliberal spectacles filter in dollar signs and filter out pollution on the horizon. The discrediting of science has become a welcome distraction that opens the way to a radical free-market mentality, spanning from the exploitation of free speech to resource extraction to the debasement of political institutions, from courts of law to unfettered globalization. In this sense, internal (responsible) critiques of the scientific community and its internal politics unfortunately license external (irresponsible) critiques of science, the kind that obscure the original intent of responsible critiques. Post-truth claims made at the behest of corporate interests sanction a free-for-all in which the concentrated power of the few silences the concerns of the many.

Indigenous-allied protestors block the entrance to an oil facility related to the Kinder-Morgan oil pipeline in Alberta.
Image by Peg Hunter via Flickr / Creative Commons


Part Two: The Politics of Post-Truth

Fuller begins his book about the post-truth condition that permeates the British and American landscapes with a look at our ancient Greek predecessors. According to him, “Philosophers claim to be seekers of the truth but the matter is not quite so straightforward. Another way to see philosophers is as the ultimate experts in a post-truth world” (19). This means that those historically entrusted to be the guardians of truth in fact “see ‘truth’ for what it is: the name of a brand ever in need of a product which everyone is compelled to buy. This helps to explain why philosophers are most confident appealing to ‘The Truth’ when they are trying to persuade non-philosophers, be they in courtrooms or classrooms.” (Ibid.)

Instead of being the seekers of the truth, thinkers who care not about what but how we think, philosophers are ridiculed by Fuller (himself a philosopher turned sociologist turned popularizer and public relations expert) as marketing hacks in a public relations company that promotes brands. Their serious dedication to finding the criteria by which truth is ascertained is used against them: “[I]t is not simply that philosophers disagree on which propositions are ‘true’ or ‘false’ but more importantly they disagree on what it means to say that something is ‘true’ or ‘false’.” (Ibid.)

Some would argue that the criteria by which propositions are judged to be true or false are worthy of debate, rather than the cavalier dismissal of Trumpists. With criteria in place (even if only by convention), at least we know what we are arguing about, as these criteria (even if contested) offer a starting point for critical scrutiny. And this, I maintain, is a task worth performing, especially in the age of pluralism when multiple perspectives constitute our public stage.

In addition to debasing philosophers, Fuller seems to reserve a special place in purgatory for Socrates (and Plato) for their negative labeling of the rhetorical expertise of the sophists—“the local post-truth merchants in fourth century BC Athens.” (21) It becomes obvious that Fuller is “on their side” and that the presumed debate over truth and its practices is in fact nothing but a debate over “whether its access should be free or restricted.” (Ibid.) In this neoliberal reading, it is all about money: are sophists evil because they charge for their expertise? Is Socrates a martyr and saint because he refused payment for his teaching?

Fuller admits, “Indeed, I would have us see both Plato and the Sophists as post-truth merchants, concerned more with the mix of chance and skill in the construction of truth than with the truth as such.” (Ibid.) One wonders not only if Plato receives fair treatment (reminiscent of Popper’s denigration of Plato as supporting totalitarian regimes, while sparing Socrates as a promoter of democracy), but whether calling all parties to a dispute “post-truth merchants” obliterates relevant differences. In other words, have we indeed lost the desire to find the truth, even if it can never be the whole truth and nothing but the truth?

Political Indifference to Truth

One wonders how far this goes: political discourse without any claim to truth conditions would become nothing but a marketing campaign where money and power dictate the acceptance of the message. Perhaps the intended message here is that contemporary cynicism towards political discourse has its roots in ancient Greece. Regardless, one should worry that such cynicism indirectly sanctions fascism.

Can the poor and marginalized in our society afford this kind of cynicism? For them, unlike their privileged counterparts in the political arena, claims about discrimination and exploitation, about unfair treatment and barriers to voting, are true and evidence-based; they are not rhetorical flourishes by clever interlocutors.

Yet Fuller would have none of this. For him, political disputes are games:

[B]oth the Sophists and Plato saw politics as a game, which is to say, a field of play involving some measure of both chance and skill. However, the Sophists saw politics primarily as a game of chance whereas Plato saw it as a game of skill. Thus, the sophistically trained client deploys skill in [the] aid of maximizing chance occurrences, which may then be converted into opportunities, while the philosopher-king uses much the same skills to minimize or counteract the workings of chance. (23)

Fuller could be channeling here twentieth-century game theory and its application in the political arena, or the notion offered by Lyotard when describing the minimal contribution we can make to scientific knowledge (where we cannot change the rules of the game but perhaps find a novel “move” to make). Indeed, if politics is deemed a game of chance, then anything goes, and it really should not matter if an incompetent candidate like Trump ends up winning the American presidency.

But is it really a question of skill and chance? Or, as some political philosophers would argue, is it not a question of the best means by which to bring to fruition the best results for the general wellbeing of a community? The point of invoking the figure of a philosopher-king, to be sure, was not his rhetorical skill but his deep commitment to rule justly, to think critically about policies, and to treat constituents with respect and fairness. Plato’s Republic, however criticized, was supposed to be about justice, not expediency; it is an exploration of the rule of law and wisdom, not a manual of manipulation. If the recent presidential election in the US taught us anything, it’s that we should be wary of political gamesmanship and focus instead on experience and knowledge, vision and wisdom.

Out-Gaming Expertise Itself

Fuller would have none of this, either. It seems that there is virtue in being a “post-truther,” someone who can easily switch between knowledge games, unlike the “truther” whose aim is to “strengthen the distinction by making it harder to switch between knowledge games.” (34) In the post-truth realm, then, knowledge claims are lumped into games that can be played at will, that can be substituted when convenient, without a hint of the danger such capricious game-switching might engender.

It’s one thing to challenge a scientific hypothesis about astronomy because the evidence is still unclear (as Stephen Hawking has done in regard to black holes) and quite another to compare it to astrology (and give horoscope and Tarot card readers a hearing equal to that of physicists). Though we are far from the Demarcation Problem (between science and pseudo-science) of the last century, this does not mean that there is no difference at all between different discourses and their empirical bases (or that the problem itself isn’t worthy of reconsideration in the age of Fuller and Trump).

On the contrary, it’s because we assume difference between discourses (gray as they may be) that we can move on to figure out on what basis our claims can and should rest. The danger, as we see in the political logic of the Trump administration, is that friends become foes (European Union) and foes are admired (North Korea and Russia). Game-switching in this context can lead to a nuclear war.

In Fuller’s hands, though, something else is at work. Speaking of contemporary political circumstances in the UK and the US, he says: “After all, the people who tend to be demonized as ‘post-truth’ – from Brexiteers to Trumpists – have largely managed to outflank the experts at their own game, even if they have yet to succeed in dominating the entire field of play.” (39) Fuller’s celebratory tone here may be read either as a slight warning, lodged in the “yet” that precedes the success “in dominating the entire field of play,” or as a prediction that this is precisely what is about to happen soon enough.

The neoliberal bottom-line surfaces in this assessment: he who wins must be right, the rich must be smart, and more perniciously, the appeal to truth is beside the point. More specifically, Fuller continues:

My own way of dividing the ‘truthers’ and the ‘post-truthers’ is in terms of whether one plays by the rules of the current knowledge game or one tries to change the rules of the game to one’s advantage. Unlike the truthers, who play by the current rules, the post-truthers want to change the rules. They believe that what passes for truth is relative to the knowledge game one is playing, which means that depending on the game being played, certain parties are advantaged over others. Post-truth in this sense is a recognisably social constructivist position, and many of the arguments deployed to advance ‘alternative facts’ and ‘alternative science’ nowadays betray those origins. They are talking about worlds that could have been and still could be—the stuff of modal power. (Ibid.)

By now one should be terrified. This is a strong endorsement of lying as a matter of course, as a way to distract from the details (and empirical bases) of one “knowledge game” (because it may not be to one’s ideological liking) in favor of another that might be deemed more suitable (for financial or other purposes).

The political stakes here are too high to ignore, especially because there are good reasons why “certain parties are advantaged over others” (say, climate scientists “relative to” climate deniers who have no scientific background or expertise). One wonders what it means to talk about “alternative facts” and “alternative science” in this context: is it a means of obfuscation? Is it yet another license granted by the “social constructivist position” not to acknowledge the legal liability of cigarette companies for the addictive power of nicotine? Or the pollution of water sources in Flint, Michigan?

What Is the Mark of an Open Society?

If we corral the broader political logic at hand to the governance of the scientific community, as Fuller wishes us to do, then we hear the following:

In the past, under the inspiration of Karl Popper, I have argued that fundamental to the governance of science as an ‘open society’ is the right to be wrong (Fuller 2000a: chap. 1). This is an extension of the classical republican ideal that one is truly free to speak their mind only if they can speak with impunity. In the Athenian and the Roman republics, this was made possible by the speakers–that is, the citizens–possessing independent means which allowed them to continue with their private lives even if they are voted down in a public meeting. The underlying intuition of this social arrangement, which is the epistemological basis of Mill’s On Liberty, is that people who are free to speak their minds as individuals are most likely to reach the truth collectively. The entangled histories of politics, economics and knowledge reveal the difficulties in trying to implement this ideal. Nevertheless, in a post-truth world, this general line of thought is not merely endorsed but intensified. (109)

To be clear, Fuller not only asks for the “right to be wrong,” but also for the legitimacy of the claim that “people who are free to speak their minds as individuals are most likely to reach the truth collectively.” The first plea is reasonable enough, as humans are fallible (yes, Popper here), and the history of ideas has proven that killing heretics is counterproductive (and immoral). If the Brexit/Trump post-truth age would only usher in a greater encouragement of speculation or conjectures (Popper again), then Fuller’s book would be well-placed in the pantheon of intellectual pluralism; but if this endorsement fails to distinguish the silly from the informed conjecture, then we are in trouble, and the ensuing cacophony will turn us all deaf.

The second claim is at best supported by the likes of James Surowiecki (2004), who has argued that no matter how uninformed a crowd of people is, collectively it can guess the correct weight of a cow on stage (as he recounts in his TED talk). As folk wisdom, this is charming; as public policy, it is dangerous. Would you like a random group of people deciding how, and where, to store nuclear waste? Would you subject yourself to the judgment of just any collection of people to decide on taking out your appendix or performing triple-bypass surgery?

When we turn to Trump, his supporters certainly like that he speaks his mind, just as Fuller says individuals should be granted the right to speak their minds (even if in error). But speaking one’s mind can also be a proxy for saying whatever, without filters, without critical thinking, or without thinking at all (let alone consulting experts whose very existence seems to upset Fuller). Since when did “speaking your mind” turn into scientific discourse? It’s one thing to encourage dissent and offer reasoned doubt and explore second opinions (as health care professionals and insurers expect), but it’s quite another to share your feelings and demand that they count as scientific authority.

Finally, even if we endorse the view that we “collectively” reach the truth, should we not ask: by what criteria? according to what procedure? under what guidelines? Herd mentality, as Nietzsche already warned us, is problematic at best and immoral at worst. Trump rallies harken back to the fascist ones we recall from Europe prior to and during WWII. Few today would entrust the collective judgment of those enthusiasts of the Thirties to carry the day.

Unlike Fuller’s sanguine posture, I shudder at the possibility that “in a post-truth world, this general line of thought is not merely endorsed but intensified.” This is neither because I worship experts and scorn folk knowledge nor because I have low regard for individuals and their (potentially informative) opinions. Just as we warn our students that simply having an opinion is not enough, that they need to substantiate it, offer data or logical evidence for it, and even know its origins and who promoted it before they made it their own, so I worry about uninformed (even if well-meaning) individuals (and presidents) whose gut will dictate public policy.

This way of unreasonably empowering individuals is dangerous for their own well-being (no paternalism here, just common sense) as well as for the community at large (too many untrained cooks will definitely spoil the broth). For those who doubt my concern, Trump offers ample evidence: trade wars with allies and foes that cost domestic jobs (while promising to bring jobs home), nuclear-war threats that resemble a game of chicken (as if no president before him ever faced such an option), and the throwing of public policy procedures into disarray, from immigration regulations to the relaxation of emission controls (ignoring the history of these policies and their failures).

Drought and suffering in Arbajahan, Kenya in 2006.
Photo by Brendan Cox and Oxfam International via Flickr / Creative Commons


Part Three: Post-Truth Revisited

There is something appealing, even seductive, in the provocation to doubt the truth as rendered by the (scientific) establishment, even as we worry about sowing the seeds of falsehood in the political domain. The history of science is the story of authoritative theories debunked, cherished ideas proven wrong, and claims of certainty falsified. Why not, then, jump on the “post-truth” wagon? Would we not unleash the collective imagination to improve our knowledge and the future of humanity?

One of the lessons of postmodernism (at least as told by Lyotard) is that “post-“ does not mean “after,” but rather, “concurrently,” as another way of thinking all along: just because something is labeled “post-“, as in the case of postsecularism, it doesn’t mean that one way of thinking or practicing has replaced another; it has only displaced it, and both alternatives are still there in broad daylight. Under the rubric of postsecularism, for example, we find religious practices thriving (80% of Americans believe in God, according to a 2018 Pew Research survey), while the number of unaffiliated, atheists, and agnostics is on the rise. Religionists and secularists live side by side, as they always have, more or less agonistically.

In the case of “post-truth,” it seems that one must choose one orientation or the other, at least for Fuller, who claims to prefer the “post-truth world” to the allegedly hierarchical and submissive world of “truth,” where the dominant establishment shoves its truths down the throats of ignorant and repressed individuals. If post-truth meant, like postsecularism, the realization that truth and provisional or putative truth coexist and are continuously being re-examined, then no conflict would be at play. If Trump’s claims were juxtaposed with those of experts in their respective domains, we would have a lively, and hopefully intelligent, debate. False claims would be debunked, reasonable doubts could be raised, and legitimate concerns might be addressed. But Trump doesn’t consult anyone except his (post-truth) gut, and that is troublesome.

A Problematic Science and Technology Studies

Fuller admits that “STS can be fairly credited with having both routinized in its own research practice and set loose on the general public—if not outright invented—at least four common post-truth tropes”:

  1. Science is what results once a scientific paper is published, not what made it possible for the paper to be published, since the actual conduct of research is always open to multiple countervailing interpretations.
  2. What passes for the ‘truth’ in science is an institutionalised contingency, which if scientists are doing their job will be eventually overturned and replaced, not least because that may be the only way they can get ahead in their fields.
  3. Consensus is not a natural state in science but one that requires manufacture and maintenance, the work of which is easily underestimated because most of it occurs offstage in the peer review process.
  4. Key normative categories of science such as ‘competence’ and ‘expertise’ are moveable feasts, the terms of which are determined by the power dynamics that obtain between specific alignments of interested parties. (43)

In that sense, then, Fuller agrees that the positive lessons STS wished to impart to the practice of the scientific community may have inadvertently found their way into a post-truth world that abuses or exploits them in unintended ways. That is, STS challenges something like “consensus” because of how the scientific community pretends to reach it, knowing as it does that genuine consensus can hardly ever be reached and that, when reached, it may rest on the wrong reasons (leadership pressure, pharmaceutical funding of conferences and journals). But this, too, can go too far.

Just because consensus is difficult to reach (it doesn’t mean unanimity) and is susceptible to corruption or bias doesn’t mean that anything goes. Some experimental results are more acceptable than others and some data are more informative than others, and the struggle for agreement may take its political toll on the scientific community, but this need not result in silly ideas about cigarettes being good for our health or that obesity should be encouraged from early childhood.

It seems important to focus on Fuller’s conclusion because it encapsulates my concern with his version of post-truth, a condition he endorses not only as a description of humanity’s epistemological plight but as an elixir with which to cure humanity’s ills:

While some have decried recent post-truth campaigns that resulted in victory for Brexit and Trump as ‘anti-intellectual’ populism, they are better seen as the growth pains of a maturing democratic intelligence, to which the experts will need to adjust over time. Emphasis in this book has been given to the prospect that the lines of intellectual descent that have characterised disciplinary knowledge formation in the academy might come to be seen as the last stand of a political economy based on rent-seeking. (130)

Here, we are not only afforded a moralizing sermon about (and, it must be said, from) the privileged academic position, from whose heights all other positions are dismissed as anti-intellectual populism, but we are also entreated to consider the rantings of the know-nothings of the post-truth world as the "growth pains of a maturing democratic intelligence." Only an apologist would characterize the Trump administration as mature, democratic, or intelligent. Where is the evidence? What could possibly warrant such generosity?

It’s one thing to challenge “disciplinary knowledge formation” within the academy, and there are no doubt cases deserving reconsideration as to the conditions under which experts should be paid and by whom (“rent-seeking”); but how can these questions about higher education and the troubled relations between the university system and the state (and with the military-industrial complex) give cover to the Trump administration? Here is Fuller’s justification:

One need not pronounce on the specific fates of, say, Brexit or Trump to see that the post-truth condition is here to stay. The post-truth disrespect for established authority is ultimately offset by its conceptual openness to previously ignored people and their ideas. They are encouraged to come to the fore and prove themselves on this expanded field of play. (Ibid)

This, too, is a logical stretch: is disrespect for the authority of the establishment the same as, or does it logically lead to, "conceptual" openness to previously "ignored people and their ideas"? This is not a claim on behalf of the disenfranchised. Perhaps their ideas were simply bad, or outright racist or misogynist (as we see with Trump). Perhaps they were ignored in the hope that they would change for the better and become more enlightened, rather than act on their white supremacist prejudices. Should we have "encouraged" explicit anti-Semitism while we were at it?

Limits to Tolerance

We tolerate ignorance because we believe in education and hope to overcome some of it; we tolerate falsehood in the name of eventual correction. But we should never tolerate offensive ideas and beliefs that are harmful to others. Once again, it is one thing to argue about black holes, and quite another to argue about whether black lives matter. It seems reasonable, as Fuller concludes, to say that “In a post-truth utopia, both truth and error are democratised.” It is also reasonable to say that “You will neither be allowed to rest on your laurels nor rest in peace. You will always be forced to have another chance.”

But the conclusion that “Perhaps this is why some people still prefer to play the game of truth, no matter who sets the rules” (130) does not follow. Those who “play the game of truth” are always vigilant about falsehoods and post-truth claims, and to say that they are simply dupes of those in power is both incorrect and dismissive. On the contrary: Socrates was searching for the truth and fought with the sophists, as Popper fought with the logical positivists and the Kuhnians, and as scientists today are searching for the truth and continue to fight superstitions and debunked pseudoscience about vaccination causing autism in young kids.

If post-truth is like postsecularism, scientific and political discourses can inform each other. When power plays by ignoramus leaders like Trump are obvious, they can shed light on the less obvious cases of big pharma executives or those in charge of the EPA today. In these contexts, inconvenient facts and truths should prevail, and the gamesmanship of post-truthers should be exposed for what motivates it.

Contact details:

* Special thanks to Dr. Denise Davis of Brown University, whose contribution to my critical thinking about this topic has been profound.

