Martin Brian. 2019. “Constructivism Versus Clear Thinking?” Social Epistemology Review and Reply Collective 8 (11): 18-26. https://wp.me/p1Bfg0-4E0.
The PDF of the article gives specific page numbers.
- Martin, Brian. 2019. “Bad Social Science.” Social Epistemology Review and Reply Collective 8 (3): 6-16.
- Sokal, Alan. 2019. “A Dialogue on a Paradigm Case of Bad Science: Comment on Brian Martin.” Social Epistemology Review and Reply Collective 8 (5): 36-47.
- Martin, Brian. 2019. “More on Bad Social Science.” Social Epistemology Review and Reply Collective 8 (7): 13-17.
Is the constructivist analysis of science a hindrance to clear thinking, in particular clear thinking about the politics of science?
This question arose from my discussions with Alan Sokal, who expressed his view that constructivism does indeed hinder clear thinking. His perspective can be gleaned from his 2008 book Beyond the Hoax. He writes, for example, “To the extent that postmodernist ideas are widely disseminated in the culture, even in watered-down form, they create a climate in which the incentives promoting the rigorous analysis of evidence are undermined” (344). Later in the book, he refers to “The harm caused by belief based on insufficient evidence …” (451) and to “… propagandists, spin doctors, and postmodernists—all of whom, in their different ways, really do not care about the difference between what is true and what is false” (457).
Before being convinced that postmodernist ideas, in particular constructivism, hinder clear thinking, I told Alan I wanted to see some empirical evidence. However, obtaining relevant evidence isn’t straightforward.
Here, to begin addressing this issue, I look at two arenas: the political critique of science and public scientific controversies. My goal is not to make a judgement about constructivism and clear thinking but rather to point to how an investigation into their relationship might be undertaken. But first, some comments on constructivism.
In relation to science, constructivism or constructionism refers to the idea that knowledge is, to some extent, shaped or influenced by human judgements and interests. (“Interest” here refers to a stake, as in a vested interest.) In constructivism, knowledge can be influenced by the natural world, but not necessarily with any fixed or automatic outcome. Knowledge here refers to claims about reality agreed to by relevant practitioners, such as scientists.
It is important to note that there are various ways in which systems of ideas can be organised that are more-or-less compatible with observations. In other words, there may be no single best way for humans to conceptually represent nature.
There are quite a number of versions of constructivism (Hess 1997, 81–111; Sismondo 1996; 2010, 57–71). Within science studies, there have been long and involved discussions and debates, ably surveyed and assessed by Yearley (2005). To bring up constructivism in science is to hark back to disputation of decades ago.
Constructivism in science studies can be seen as a particular application of the sociology of knowledge, which is the social analysis of knowledge systems, including the study of influences on the creation, organisation and content of knowledge. Pioneering writers on the sociology of knowledge (Mannheim 1936) exempted science and mathematics from their scrutiny; exponents of the sociology of scientific knowledge (SSK) have addressed this omission (Barnes 1974; Mulkay 1979).
Bloor (1976) set out what is called the strong programme in SSK, arguing that scientific knowledge should be studied in the same way as anything else, namely using the same intellectual tools. He presented four principles: impartiality, symmetry, causality and reflexivity. Importantly, beliefs are studied using the same sociological tools without concern about whether the beliefs are considered true or false.
Another approach is the empirical programme of relativism or EPOR (Collins 1981). A key idea in EPOR is that, when investigating a scientific controversy, the facts (aka the truth) cannot be used to explain the outcome of the controversy, because agreement about the facts is a result of resolving the controversy.
The strong programme and EPOR are just two constructivist perspectives on science; other constructivists draw on social interests theory, actor-network theory, feminist analyses, the construction of scientific discourse, and studies of reflexivity (Yearley 2005).
Sokal and Bricmont (1998, x) say that a target of their book is “epistemic relativism,” which is the idea “that modern science is nothing more than a ‘myth’, a ‘narration’ or a ‘social construction’ among many others.” I can’t remember encountering anyone who subscribes to epistemic relativism, in this description or any other. Few if any constructivists would say all truth claims are equally valid (or equally invalid), or that there is no way to determine whether some knowledge claims are better than others. Therefore, in addressing Sokal’s perspective, I look more broadly at constructivism, not just at epistemic relativism. After all, it’s easy to formulate less extreme versions of constructivism than Sokal and Bricmont’s epistemic relativism, for example that science has elements of a myth, that it can be understood as a narrative, and that understanding scientific knowledge as socially constructed does not rule out a role for material reality in its creation and validation.
Another thing to take into account is that some scholars use constructivism as a method of analysis, separately from their personal beliefs about reality. This would limit the likelihood of constructivism leading to poor thinking.
For my purposes here, there is no need to decide whether constructivism is an inherently deficient approach to knowledge. Even if it is a useful and intellectually coherent and rigorous approach to knowledge, it is quite possible that it is easily misunderstood or invoked in ways that lead to poor thinking. Constructivist-inspired deficient thinking is the focus.
There is no need, at least initially, to display examples of poor thinking associated with constructivism, because first it is necessary to show that constructivist thinking is prevalent, or even just present, in arenas where the politics of science is discussed. The number of possible domains is quite large, and I make no attempt to canvass or even itemise all of them. Instead, I examine two particular domains where I have experience: the political critique of science and public scientific controversies. These are addressed in the next two sections, followed by a note about the doubt technique.
Although my thoughts here are inspired by Sokal’s comments, I do not attempt to engage with the many arguments he presents in his publications. It would be fascinating to examine differences in perspective and emphasis between science studies scholars and critics including Sokal and Bricmont (1998), Sokal (2008) and Gross and Levitt (1994). This big topic, which has been addressed by others, need not be resolved for my task here. Determining the relationship between constructivism and clear thinking is independently worthy of attention.
Epistemology and the Political Critique of Science
The radical science movement in the US and Britain began in the late 1960s, inspired by the rise of other social movements—including the second-wave feminist movement, the student movement and the environmental movement—that challenged dominant systems of power. The radical science movement was spearheaded by the British Society for Social Responsibility in Science (with its magazine Science for People) and a parallel group in the US, Science for the People. The main emphasis of these groups was the political critique of science.
A central theme was that the trajectory of science and technology has been shaped by capitalism to serve capitalist interests or, more generally, for the purposes of domination of humans and nature. This theme played out in the various issues taken up by activists. An example is industrial agriculture, heavily dependent on pesticides, artificial fertilisers and monocultures, all serving the goals of an economy imbued with capitalist values. Science enters via research and development oriented to industrial agriculture, for example R&D into pesticides as the solution to the problem of pests. Research priorities are set by the priorities of profit and control. This means that alternative research trajectories, such as into organic farming, are neglected. Put simply, money and power shape science and technology.
As a sort of spin-off or supplement to this political economy critique, there was a related critique of scientific knowledge. In the case of pesticides, it is possible to talk of the “pesticide paradigm,” drawing on Thomas Kuhn’s idea of scientific paradigms. Within the pesticide paradigm, the solution to pests is always seen as better or more pesticides. An obvious influence is that chemical corporations make a lot of money selling pesticides, whereas there is no vested interest in pursuing alternatives that promise fewer or no profits.
My recollection of the heyday of the radical science movements—the 1970s—is that epistemological analysis was usually a side issue, more a supplement to political analysis than a motivating factor, and never a foundation. To check this memory, I looked at a book (Arditti et al. 1980) edited by three long-standing members of Science for the People. It has 25 chapters, many of them reprinted from other sources, including articles that appeared in the magazine Science for the People. In their introduction to the book, the editors write “To most people of the United States it is heresy to suggest that science is not absolute truth, or that it is undertaken in such a way as to reinforce a certain set of values” (1). In concluding their introduction chapter, they write “The purpose of this book is to discuss the role of science and scientists in maintaining social oppression and to present ideas and concrete examples of the movement—such as it is—toward a new science: a science of liberation” (12). They do not say scientific knowledge is a myth or narrative. To the extent they might say knowledge is socially constructed, this is in the limited sense that particular bodies of theory and application are wrong or misdirected, and damaging.
In the book, the most extended discussion of epistemology is by Steven and Hilary Rose in an article titled “The myth of the neutrality of science,” in which they challenge the view that the “activities of science are morally and socially value-free.” They draw on Kuhn’s concept of a scientific paradigm, arguing that paradigms are never value-free. The Roses are well known for their work on the political critique of science (e.g. Rose and Rose 1976a, b). They are very far from saying science is nothing more than a social construction.
My assessment is that the political critique of science neither relies on nor has a special affinity with epistemic relativism. Arguing that science, as it is practised and used, is biased in favour of vested interests is quite different from saying science has no anchors. The political critique of science is constructivist in the general sense that scientific knowledge is likely to serve the interests of those who fund research and development. Is this poor thinking? Activists could well argue that this is better thinking than believing scientific knowledge is completely independent, in form and content, of its origins.
Postscript: after a long hiatus, Science for the People has been resurrected (https://scienceforthepeople.org). It’s now possible to inspect issues of the magazine from 1970 to 1989 (http://science-for-the-people.org/history/sftp-magazine/), which would be another way to assess the role of constructivism in the radical science movement.
Participants in Public Scientific Controversies
A scientific controversy is a dispute involving claims about scientific knowledge. Some such controversies remain largely within the scientific community, for example the debates about gravitational waves, studied in great depth by sociologist Harry Collins (2017). In contrast, my focus is on controversies involving public campaigning. Ones I’ve studied, and which inform my thinking, include nuclear power, pesticides, nuclear winter, fluoridation and vaccination.
What I am looking for is evidence that partisans in such controversies draw on constructivist perspectives, or are influenced by constructivist perspectives that have seeped into their thinking, and that in doing so they are misled, especially regarding the political, economic and social dimensions of the issues. There are two parts to this examination: finding evidence of constructivist thinking and showing that it leads people astray.
My personal experience and research provide two sorts of evidence about the thinking of participants in scientific controversies. First is my personal interaction with partisans, through face-to-face conversations, telephone conversations, emails and letters (pre-Internet), amounting to several hundred individuals, the majority from Australia but many from elsewhere. My observation is that nearly everyone believes in reality, scientific facts and so forth, with scarcely a trace of constructivist thinking, or at least not any of the varieties of epistemic relativism castigated by Sokal and Bricmont.
My impression is that nearly all of those who think about science in constructivist terms are scholars. Even among scholars, not many subscribe to this approach.
Although controversy partisans may not personally express themselves in constructivist ways, it is possible that they have absorbed constructivist perspectives that have saturated popular thinking. To get an angle on this possibility, I turned to my 2014 book The Controversy Manual, written as an attempt to summarise everything I had learned about public scientific controversies in a way convenient to campaigners (Martin 2014a). There’s a lot of material in the book, so for a more focused account, here I draw on a short article I wrote the same year, published in The Conversation (Martin 2014b). In it, I presented half a dozen insights about controversies that people ought to know, but often don’t. It is then revealing to look at the relationship of these insights to constructivism. Essentially this means looking at insights, drawn from my studies of controversies, compiled at a time when I wasn’t thinking specifically about constructivism.
My article in The Conversation is titled “Why do some controversies persist despite the evidence?” It opens with the observation that some controversies continue for decades, and “Some campaigners despair, assuming that those on the other side simply refuse to acknowledge the overwhelming evidence: ‘They must be ignorant. Or devious—they’re lying. Or they’re getting paid.’” I then note “Sociologists have been studying scientific and technological controversies for many decades, and have documented that new evidence seldom makes much of a difference.”
I next turn to factors that help explain this.
The first factor is confirmation bias, the tendency to seek out evidence that supports one’s beliefs and reject evidence that doesn’t. Participants in controversies, if they know about confirmation bias, typically assume it affects those on the other side, not themselves. Not understanding or taking into account confirmation bias might be considered poor thinking, but is not linked to constructivism in any obvious way.
The second factor concerns the burden of proof: “In a polarised controversy, the two sides usually differ over what needs to be proved.” Assigning the burden or onus to the other side gives an advantage to one’s own side. Campaigners seldom think explicitly about doing this, which might be considered poor thinking. Assumptions about the burden of proof are based on non-constructivist premises.
The third factor concerns paradigms. In scientific controversies, each side thinks in terms of a coherent set of assumptions, methods and beliefs, which can be likened to Kuhnian paradigms. For example, in the debate over adding fluoride to public water supplies to reduce tooth decay, pro-fluoridationists dismiss concerns about the link between fluoride and skeletal fluorosis: for them, evidence suggesting such a link is an anomaly and either ignored or dismissed as not relevant.
This insight into controversies has the closest connection with constructivism. My aim in raising the idea of paradigms was to help campaigners and observers understand why those on the other side seem not to behave as expected in relation to evidence: to them, opponents ignore or dismiss, seemingly without justification, evidence that should be taken into account. In other words, they subscribe to something akin to a realist understanding of evidence, assuming everyone sees the same evidence as a direct product of reality. If anything, then, in raising this factor I was pointing out that facts do not speak for themselves. This is the opposite of a concern that people have been led astray by constructivism.
The fourth factor is about group dynamics. Participants in controversies tend to interact mainly with like-minded others, and seldom interact with opponents except in hostile forums such as debates. The result is maintenance of a type of groupthink that perpetuates the controversy. This factor has no obvious connection with constructivism.
The fifth factor goes under the heading “Beware of vested interests.” For example, the tobacco industry funded sympathetic scientists and tried to discredit critics. Some industries sponsor fake citizens’ groups and use connections in the media and professional groups to try to sow seeds of doubt. Just because vested interests are involved doesn’t mean that the side backed by money and power is wrong, but it does mean that extra attention needs to be given to possible distortions in the debate.
These points might seem obvious, and indeed they are, but only when the vested interests are on the other side. It is standard for campaigners to accuse opponents of being driven by vested interests while remaining silent about those associated with their own position. It is important to remember that the presence of vested interests does not, on its own, discredit the positions associated with them.
The idea that money and power can influence the credibility of ideas in debates involving science might seem to have some affinities with constructivism, in the sense that some scientific knowledge is seen as biased, selective or otherwise tainted. However, this is far from the full-blooded claim that all knowledge claims are equal, or that all are constructed arbitrarily, much less that science is a myth. Partisans, in pointing to the role of vested interests, typically are concerned with the distortion of knowledge: they believe that independently developed scientific knowledge, freed of distorting influences, can reveal the truth.
Pointing to the role of vested interests often reflects a one-sided constructivism rather than the symmetrical approach advocated by the strong programme in SSK (Bloor 1976), in which all knowledge claims, whether considered right or wrong, are subject to sociological analysis using the same intellectual tools, namely the same explanatory processes. Campaigners in controversies believe their opponents are wrong, so when they point to the role of vested interests, they are engaging in what has been called the sociology of error, which is non-constructivist.
The sixth and final factor is about the role of values. I wrote, “Public scientific controversies are not just about the science. They invariably involve differences in values concerning ethics and social choices. Partisans will come at the issue with differing assessments of fairness, care, authority and sacredness.” The point here is that controversies persist in part because of a clash of values. Therefore, to imagine that scientific knowledge will resolve the debate misses a key driving force.
Both partisans and observers commonly think that new evidence will decisively shift the debate, nearly always in support of the position they favour. This reflects a belief that viewpoints derive from science. The fluoridation debate has continued in much the same way since it began in the 1950s, despite various studies (many of them by scientists not involved in the debate and not linked to vested interests) offering new information.
Is it poor thinking to believe that new studies will dramatically change the course of a long-running controversy? Perhaps so, but this would not be the result of constructivism. It is more likely to result from believing two things: that scientific knowledge provides privileged access to the truth and that scientific knowledge should determine people’s viewpoints.
The fluoridation controversy is a good one for the purposes here because it arose decades before constructivism became fashionable in some academic circles. Yet, by my assessment (Martin 1991), the dynamics of the fluoridation controversy have remained much the same, including the way knowledge claims are deployed. This suggests that constructivism, whatever its sins, seems not to have made inroads into the thinking of partisans in this long-running controversy.
The Doubt Technique
Oreskes and Conway (2010), in their widely discussed book Merchants of Doubt, highlighted a technique used by defenders of the tobacco industry and critics of climate science: cast doubt on the conclusions of scientific orthodoxy. With this technique, it is unnecessary to assert a correct answer; it is sufficient to get people to think there is uncertainty, so maybe none of the science should be trusted.
The relationship of this technique to constructivism is not clear. Doubting, if applied to all knowledge claims without regard to their strength, seems to have an affinity to the excesses of constructivism. However, the sort of doubting involved here is one-sided: the critics of orthodoxy assume and expect that their criticisms are taken as valid. Only their targets are accused of epistemological weakness.
Sowing seeds of doubt in scientific controversies is not new. Early proponents of fluoridation cited experimental studies comparing rates of tooth decay in children in pairs of cities, one fluoridated and the other not, as the basis for widespread introduction of fluoridation. Dental researcher Philip Sutton (1960) wrote a book systematically critiquing the research methods involved in the fluoridation studies, for example pointing out that the choices of control and experimental cities were not made randomly. Sutton did not claim that the findings of the studies were necessarily wrong, just that they were methodologically flawed and hence unreliable as the basis for policy (Martin 1991, 16–21). This was long before the rise of constructivism.
If the technique of casting doubt is not new, why does this method seem so much better known today? I think the reason lies in the configuration of power and knowledge in a few prominent debates. In the cases of tobacco and climate change, the dominant scientific view is a threat to powerful corporate interests. Tobacco and fossil-fuel companies have the resources to sponsor and publicise questioning of seemingly overwhelming scientific consensus.
It is much more common in scientific controversies for power and dominant knowledge to be aligned, in other words for vested interests and the scientific mainstream to support the same position, as in debates over pesticides, fluoridation, microwaves and genetic modification. Arguably, Sutton’s critique of fluoridation studies had no apparent impact on policy because he was not backed by any powerful group. He had knowledge but was not aligned with power.
To assess whether constructivism is leading to poor thinking, in particular about the politics of science, I’ve examined two areas. The first is the early political critique of science, which reveals few associations with constructivism. The second area is public scientific controversies, which I probed in two ways. Firstly, my personal interactions with partisans suggest nearly all have a traditional non-constructivist view of scientific knowledge. Secondly, my exposition of factors explaining why some scientific controversies persist despite new evidence suggests that poor thinking is likely to derive from an unreflective belief in the power of scientific knowledge. In this case, constructivism seems to be far from the most important cause of poor thinking.
My conclusion is tentative, because I’ve only examined two areas and only provided my own assessment. For a fuller examination of the role of constructivism in thinking, it would be useful for others, with different perspectives and expertise, to explore other domains. Based on my experience, my hypothesis is that outside of strictly scholarly arenas, few people are constructivist in their thinking about science, and poor thinking is more likely to result from unexamined assumptions about how scientific findings relate to people’s beliefs. Even if nothing conclusive comes of searching for evidence about constructivism and clear thinking, it might be useful for pointing to areas of poor thinking and ways to foster improvements.
Thanks to Alan Sokal for posing the challenge and to him, Kurtis Hagen, David Hess and Steven Yearley for valuable comments.
Contact details: Brian Martin, University of Wollongong, email@example.com
Arditti, Rita, Pat Brennan and Steve Cavrak, eds. 1980. Science and Liberation. Boston: South End Press.
Barnes, Barry. 1974. Scientific Knowledge and Sociological Theory. London: Routledge and Kegan Paul.
Bloor, David. 1976. Knowledge and Social Imagery. London: Routledge and Kegan Paul.
Collins, Harry M. 1981. “Stages in the Empirical Programme of Relativism.” Social Studies of Science 11 (1): 3–10.
Collins, Harry M. 2017. Gravity’s Kiss: The Detection of Gravitational Waves. Cambridge, MA: MIT Press.
Gross, Paul R. and Norman Levitt. 1994. Higher Superstition: The Academic Left and its Quarrels with Science. Baltimore, MD: Johns Hopkins University Press.
Hess, David J. 1997. Science Studies: An Advanced Introduction. New York: New York University Press.
Mannheim, Karl. 1936. Ideology and Utopia: An Introduction to the Sociology of Knowledge. London: Routledge & Kegan Paul.
Martin, Brian. 1991. Scientific Knowledge in Controversy: The Social Dynamics of the Fluoridation Debate. Albany, NY: State University of New York Press.
Martin, Brian. 2014a. The Controversy Manual. Sparsnäs, Sweden: Irene Publishing.
Martin, Brian. 2014b. “Why do Some Controversies Persist Despite the Evidence?” The Conversation, 4 August, http://theconversation.com/why-do-some-controversies-persist-despite-the-evidence-28954.
Mulkay, Michael. 1979. Science and the Sociology of Knowledge. London: Allen and Unwin.
Oreskes, Naomi and Erik M. Conway. 2010. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury.
Rose, Hilary and Steven Rose, eds. 1976a. The Political Economy of Science: Ideology of/in the Natural Sciences. London: Macmillan.
Rose, Hilary and Steven Rose, eds. 1976b. The Radicalisation of Science: Ideology of/in the Natural Sciences. London: Macmillan.
Sismondo, Sergio. 1996. Science without Myth: On Constructions, Reality, and Social Knowledge. Albany, NY: State University of New York Press.
Sismondo, Sergio. 2010. An Introduction to Science and Technology Studies, 2nd ed. Chichester, England: Wiley-Blackwell.
Sokal, Alan. 2008. Beyond the Hoax: Science, Philosophy and Culture. Oxford: Oxford University Press.
Sokal, Alan and Jean Bricmont. 1998. Intellectual Impostures: Postmodern Philosophers’ Abuse of Science. London: Profile.
Sutton, Philip R. N. 1960. Fluoridation: Errors and Omissions in Experimental Trials, 2nd ed. Melbourne: Melbourne University Press.
Yearley, Steven. 2005. Making Sense of Science: Understanding the Social Study of Science. London: Sage.