Li-an Yu’s recent article “On Social Robustness Checks on Science: What Climate Policymakers Can Learn from Population Control” (2022) argues that responsible policy-makers should not be informed by science alone when making science-relevant policy decisions. He makes this argument by turning to four examples from the history of population control. The singular focus on scientific evidence in these cases, Yu argues, caused policy-makers to overlook the socially relevant consequences of the policies they were advocating and the values that underpin an adequate assessment of their costs and benefits. As a result, the policy-makers in these cases enacted harmful policies that could have been avoided had they grounded their decisions in greater social reflection. Here, we engage with Yu’s work by asking how it sheds light not only on climate policy-making but also on the role that expertise played during the COVID-19 pandemic. We argue that policy-makers could have crafted better policy, and better preserved the public credibility of science, had they heeded Yu’s advice.
Frohock, Richard and Eric Winsberg. 2022. “Expert Opinion, Social Robustness, and COVID-19: A Response to Yu.” Social Epistemology Review and Reply Collective 11 (9): 1-6. https://wp.me/p1Bfg0-78e.
🔹 The PDF of the article gives specific page numbers.
❧ Yu, Li-an. 2022. “On Social Robustness Checks on Science: What Climate Policymakers Can Learn from Population Control.” Social Epistemology 36 (4): 436-448.
Searching for Appropriate Cases
Yu does not see it as inherently problematic when policy-makers turn to relevant scientific experts to aid decision-making. There are, as he notes, “appropriate cases of political reliance on science,” such as when policy-makers appeal to climate scientists when addressing climate change (436). What is important for Yu is to distinguish these appropriate cases from the inappropriate ones. This is because an inappropriate reliance on science in policy making can be a detriment not only to society but to science as well (436). If policy-makers are going to rely on science when making science-relevant policy decisions, they are morally required to check whether their decisions conflict with prevailing social values.
In the case of population control measures like the One-Child policy in China, for example, Yu observes that the Malthusian relevant science was “immediately antagonistic to the cultural value of filial piety (孝)…which had been an important social norm based on Confucian ethics.” This clash between the science-relevant policy and social values resulted in families heavily selecting male children over female children via abortion and even filicide (442). This deplorable situation arose, according to Yu, because of the role that science played in guiding policy. The scientists overlooked the social value of familial lineage in Chinese society, and how this value commitment would impact the adoption of the policies scientists were recommending. Thus, it was inappropriate for policy-makers to rely on science in such a direct and unmediated way.
For Yu, science-relevant policy making is considered appropriate when it is “enabled and not blocked in a sufficiently wide range of social value commitments” (442). Enabling seems to be an obvious condition for checking appropriateness. If a policy is supported by the values of a society and the relevant scientific community, then enacting such a policy would certainly seem justified. The more controversial claim of Yu’s is that scientifically guided policies ought to avoid social blocking or resistance. For Yu, it is not enough that social values enable a science-relevant policy; they must also not directly oppose, or block, that policy. This is why, as Yu points out, population control measures that had support from both the relevant science and the “proponents of population control” still failed to pass a social robustness check, harming the legitimacy of both science and society (443). These policies, though they had scientific and some social support, were also blocked by relevant social values. These blocking values, when weighed against enabling values, can provide solid rationale for policy-makers to reevaluate their commitment to a particular policy, even when it is supported by relevant science.
The issue with these blocking values, however, is that they can be (and often are) easy to overlook or otherwise let fall out of consideration. For example, in the instance of the One-Child policy, the Chinese government saw its decision to restrict the number of children as aligning with the new political values of the Cultural Revolution. By reducing the population, they would be able to create a society populated by “high-quality modern citizens.” But this ideal clashed with the values of the rural farmers, who saw the policy as a clampdown on their workforce (441). For these farmers, having multiple children to work the fields outweighed the desire for producing modern citizens. These values of the farmers, though, were not taken into consideration by the government, making the One-Child policy appear more likely to succeed than it actually was. This is also true, as Yu notes, in the case of immigrant Chinese workers who had no say in the enactment of anti-Chinese American border laws (442). Again, policy-makers were able to remain blind to relevant social concerns by referencing the relevant science that supported their position and ignoring the social values that put pressure on them. This is especially easy when the groups with blocking values are marginalized and lack social capital.
Active and Passive Disagreement
It would seem, then, that this condition of “not being blocked” is a difficult standard to reach, and Yu is not unaware of that fact. He notes that science-relevant policy debates should take place within a “political culture of mutual criticism” in which all relevant concerns are subject to “critical examination” by policy-makers and supported by robust systems of freedom of expression. But he also notes that “not all societies assume this social value of open debate” and that “many of the oppressed did not have a voice” (445). The standards for debate and consideration appear too steep to be legitimately viable in most (if not all) societies. To resolve this issue, Yu makes a distinction between active blocking disagreement and a more passive no disagreement. While active blocking requires participation from both the public and policy-makers, passive no disagreement requires no investment. Rather, it is simply the condition of “passive expression which may be composed of no comments, no interests, indifference, ignorance, suspension, wait-and-see, uncertainty” (445). These passive indications of non-disagreement are intended to show that a proposed policy aligns with, or at least does not clash with, relevant social values. In this way, it is able to function as a more “affordable” means of testing a policy against social values than active disagreement.
While no disagreement can indicate that a policy or action is not offensive to a society’s values, it can also point towards an epistemic uncertainty amongst scientists causing this suspension of judgement. Additionally, no disagreement could point towards a society having a large “politically neutral and indifferent” population, whose apathy, as philosopher Hannah Arendt notes, can be taken advantage of in order to push policy that is otherwise antithetical to the society’s normal values (Arendt 1951, 10-11). In the cases of epistemic uncertainty and societal apathy, then, no disagreement should not be considered a good indicator of social robustness. With no disagreement, a policy-maker might have a “relatively achievable and affordable signal of social robustness,” but, as the adage goes, you get what you pay for. A cheap indicator of robustness is going to yield a cheap product that is more likely to misconstrue social values, thereby enabling the passage of policies that are harmful to both science and society.
As a standard for gauging the social robustness of science-relevant policy, no disagreement can fail to provide a complete epistemic picture of the policy’s relation to social values. No disagreement can be produced by social conditions rather than by a lack of conflict with social values. One particular example of this is when individuals are not given a means of expressing their values, as in Yu’s example of the immigrant workers. As immigrants, these workers did not have a political outlet to express their values, nor did they possess the social capital to meaningfully disagree with the policy. Thus, they appeared to be in a state of no disagreement relative to immigration policy despite holding blocking social values.
Yu’s response to this concern is to argue that policies like the Chinese Exclusion Act were “focused on a too narrow range of social value commitments” and that considering a “sufficiently wide range” of value commitments would have given a more complete epistemic picture of the relevant social values, allowing policymakers to check their proposals more robustly (447). While expanding considerations to include the values of those who do not belong to the social majority is certainly a good move for policymakers both ethically and epistemologically, such an expansion greatly increases the cost, and decreases the achievability, of obtaining such knowledge. Identifying and collecting data from all groups whose social values could potentially conflict with a proposed policy would require a heavy investment on the part of policymakers. Additionally, such measures fail to empower groups who lack the social or political power to meaningfully disagree with policy in the first place. These groups must hope that policymakers identify their concerns as potential disagreement without having a means to promote that disagreement for themselves.
The American COVID Response
When looking at population control and eugenicist policies like Chinese Exclusion, the idea that identifying potential sources of social disagreement imposes a high cost on policymakers seems implausible. Policies like these create targeted harm towards particular groups, making these groups into easily identifiable dissenters. However, when dealing with policies that do not target groups directly, it is easier for disagreement and blocking values to be left unconsidered. We can observe this when we reflect on the American COVID response.
In the early days of the COVID pandemic, little was known about the virus, how it spread, and what its effects would be. Because of this, science, including both modeling and causal inference, operated more as a “rhetorical device” than a source of objective information (Harvard and Winsberg 2021, 8). As a result, policymakers who labeled aggressive interventions like lockdowns, mask mandates, and vaccination requirements as just ‘following the science’ were turning a blind eye to relevant, and potentially blocking, social values in order to follow scientific recommendations that were either immature or missing. The potential harms and benefits of proposed anti-COVID measures would be impossible to justify even for a mature science without careful consideration of the values people attach to different outcomes. By making these judgements in the name of a science that was yet to be settled, those advocating aggressive interventions were hiding the ideological commitments that were driving their policy endorsements, while actively silencing blocking and disagreeing concerns. Those whose social values opposed these proposed policy measures were labeled as science denialists, allowing their concerns to be easily dismissed while maintaining an appearance of epistemic responsibility.
This method of using science as rhetoric in order to artificially manufacture non-disagreement sowed distrust in both policymakers and scientific communities, since these entities were actively working to discredit relevant social values despite lacking sufficient social and scientific rationale to do so. This lag between supposedly ‘science-based’ COVID policy and a mature science that could theoretically support it was evidenced by obvious inconsistencies in expert policy advice, which often manifested along political lines. As Stephen Turner observed, “inconsistencies were apparent in relation to the idea that there were ‘superspreader events,’ that needed to be forbidden. Trump rallies were criticized… but political demonstrations on the Left were left officially uncriticized” (Turner forthcoming, 5).
If science says that mass gatherings ought to be avoided, and the experts are basing their policies on this science, why would some gatherings be deemed safe while others were not? The issue is that policymakers, and indeed some members of the public, were using science as a means of dismissing social values in some cases, while privileging certain social values in others. They treated their own values as if they were part of a value-neutral science while dismissing other values as ideological ravings that lacked genuine epistemic grounding. Trump rallies, on the one hand, were seen as an irrational denial of science because they conflicted not with the science, but with the values that were being passed off as science. Political demonstrations by the left, on the other hand, were supported since they were part of the value system being treated as science. While this appeared as an inconsistency to some, to others the ‘science’ was sufficient to dismiss or otherwise ignore the blocking values that produced the inconsistencies in the first place.
Risks, Probabilities, and Uncertainties
It did not have to be this way. Since the early days of the pandemic, policy-makers knew that all that scientific experts could offer them was “information on risks, probabilities and uncertainties” (Parviainen 2020, 6). While an early scientific consensus was able to emerge, as Schliesser and Winsberg noted in their March 2020 article for The New Statesman, this new consensus was subject to “any number of all-too-human biases.” Our understanding of COVID was (and largely still is) incomplete. In spite of this, policy had to be made to address the ongoing crisis. Science, which lagged behind the needed policy, had to be supplemented with non-epistemic rationales. As Harvard and Winsberg (2021) observe, when too much uncertainty exists for anyone to “follow the science,” policy-makers can turn to non-epistemic social considerations like “moral intuitions” to support their positions (6). These same non-scientific considerations could have been used to explain why the mass protests that happened in the wake of George Floyd’s murder were understood to be permissible, without silencing potential disagreement or blocking values. Had the permissibility of those protests been treated as a social value that outweighed relevant scientific recommendations, rather than as being in line with a value-free science, the inconsistencies could have been avoided. However, because the policy-makers and scientific experts were singularly focused on justifying their positions via science, they were unable to make such claims, producing the inconsistencies that undercut public trust in both policymakers and the relevant scientific communities.
With COVID, at least in the U.S., policies such as lockdowns were at odds with a “cultural self-understanding [that] includes a strong sense of civil liberties” (Fuller 2020, 11). Thus, they were subject to blocking values from a significant portion of the population. Instead of taking these issues into consideration, policymakers, members of the public, and even members of the scientific community were blinded to the relevant blocking values and instances of social resistance by their fixation on following the science. While policymakers in the U.S. could have benefited from using a social robustness check similar to the one described in Yu’s paper, their inability to consider relevant blocking values highlights a flaw in relying on the apparent absence of blocking values as an indicator of social robustness. In this case, science was used as a smoke screen, allowing policymakers to push their own ideological beliefs and silence dissenters as ideologues and conspiracy theorists. Thus, the common liberal mantra to “trust the science” served to further obscure and discredit disagreement and blocking values, rather than promote a socially robust policy decision.
While blocking values and social resistance can easily be overlooked in order to manufacture a “science-backed” sense of social robustness, the social robustness check can still be a useful tool for policymakers, as Yu suggests. In the case of American COVID policy, seeking out and genuinely taking into consideration relevant blocking values held the potential to produce a better response to the virus, effectively preserving public health and trust in scientific leadership. The issue, as we have attempted to show here, is getting policymakers to look past their own ideological commitments, as well as the temptations of “fast science,” in order to actively seek out these blocking values. As has been made apparent by the U.S.’s response to COVID, relying on “no blocking” to make a claim of social robustness is insufficient, since this standard allows policymakers to turn a blind eye to, or otherwise remain unaware of, relevant blocking values. Instead, policymakers should actively seek out and address these blocking values before making a scientifically informed decision. By doing this, they will be able to meet the standard for social robustness that Yu presents, and, hopefully, make policy decisions that advance both science and society.
Richard Frohock, firstname.lastname@example.org, University of South Florida; Eric Winsberg, email@example.com, University of South Florida.
Arendt, Hannah. 1951. Totalitarianism: Part Three of the Origins of Totalitarianism. San Diego: Harvest/HBJ.
Fuller, Steve. 2020. “The Emergence of Civil Libertarian Science in Pandemic Times.” Social Epistemology Review and Reply Collective 9 (12): 10-13.
Harvard, Stephanie and Eric Winsberg. 2021. “Causal Inference, Moral Intuition, and Modeling in a Pandemic.” The Philosophy of Medicine 2 (2): 1-10.
Parviainen, Jaana. 2020. “‘We’re Flying the Plane While We’re Building It’: Epistemic Humility and Non-Knowledge in Political Decision-Making on COVID-19.” Social Epistemology Review and Reply Collective 9 (7): 6-10.
Schliesser, Eric and Eric Winsberg. 2020. “Climate and Coronavirus: The Science is Not the Same.” The New Statesman. Updated 2 March. https://www.newstatesman.com/business/economics/2020/03/climate-coronavirus-science-experts-data-sceptics.
Turner, Stephen. Forthcoming. “What Became of Liberal Democracy Under COVID?” The Public Sphere (in Hebrew).
Yu, Li-an. 2022. “On Social Robustness Checks on Science: What Climate Policymakers Can Learn from Population Control.” Social Epistemology 36 (4): 436-448.