Author Information: Shira Elqayam, De Montfort University, selqayam@dmu.ac.uk
Elqayam, Shira. “Instrumental Bounded and Grounded Rationality: Comments on Lockie.” Social Epistemology Review and Reply Collective 4, no. 11 (2015): 47-51.
The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2ui
Please refer to:
- Lockie, Robert. “Perspectivism, Deontologism and Epistemic Poverty.” Social Epistemology (2015): 1-17. doi: 10.1080/02691728.2014.990281.
Image credit: seier+seier, via flickr
Abstract
I have a lot of sympathy with Lockie’s (2015) suggestions for a relativist view of normative rationality. However, the efforts to add a contextualised, relativist dimension to rationality are best viewed from a perspective of instrumental or pragmatic rationality, rather than normative rationality as Lockie suggests. From an instrumental rationality perspective, grounded and bounded rationality are more than regrettable limitations on normative standards—they are the very computations that go into agents’ epistemic costs and benefits.
Achieving Rationality: Meliorism versus Panglossianism
Consider Homo sapiens sapiens in its entirety: from the dawn of history to the present day; from locations and cultures around the globe; smart and dim; children and adults; women and men. Out of the total number of people who have ever lived (estimated at more than 100 billion)[1], how many can be considered to have behaved rationally?
The answer changes vastly depending on your take on rationality: from almost everyone if you are Panglossian, to a small minority if you prefer Meliorism (see Stanovich 1999 for the taxonomy). Meliorists such as Stanovich and West (e.g., Stanovich 1999, 2004; Stanovich and West 2000) see human rationality as fallible although amenable to education. The cognitive machinery necessary to achieve rationality is immense and complex (Stanovich 2011), with many a trap ready for the unwary: absence of the appropriate cognitive tools or presence of the wrong ones, as well as sheer cognitive laziness, are more likely than not to result in cognitive bias and error. It is small wonder, then, that even intelligent, educated people can behave irrationally, and often do (Stanovich 2009). This puts human rationality on similar grounds to, say, being a musical prodigy: a major achievement to be sure, but so very rare, and applicable to so few people that it hardly seems worth the rivers of ink spilled over it. The appropriate cognitive tools include, for example, Bayes’ theorem, whose widespread violation in lab experiments is well-documented in the literature on base rate neglect (see Barbey and Sloman 2007; Koehler 1996 for reviews). If people unfamiliar with Bayes’ theorem cannot be rational, this alone would exclude from the domain of rational human beings anyone born before 1763, when Bayes’ theorem was first published (posthumously)—including Aristotle and Newton.
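Base rate neglect can be made concrete with a short computation. The test characteristics below are hypothetical numbers chosen only to illustrate how a low base rate drags the posterior down; they are not drawn from the studies cited above:

```python
# Bayes' theorem for a binary hypothesis: P(H | E) = P(E | H) P(H) / P(E).
def posterior(prior, sensitivity, false_positive_rate):
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

# Hypothetical test: a condition with a 1% base rate, detected by a test
# that is 90% sensitive with a 9% false-positive rate.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
# The intuitive answer tracks the 90% sensitivity; the posterior is ~9%.
```

Ignoring the 1% base rate and answering "about 90%" is the classic pattern that the base rate neglect literature documents.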
So I have a lot of sympathy with Lockie (2015) when he argues against excluding humans from the fold on grounds of lacking modern university education (or any formal education at all, for that matter). Lockie draws on both bounded (Simon 1982) and grounded (Elqayam 2011; 2012) conceptions of rationality to argue that our ideas of what it means to be rational should take into account both biological, universal constraints (bounded rationality) and contextualised constraints such as cultural and individual differences (grounded rationality). I concur. I also agree with Lockie that relativism is not a ‘get out of jail free’ card (what Stich 1990 calls an ‘anything goes’) for the epistemic agent, and that a viable conception of rationality should also explain where rationality fails, and propose a hard core of non-relative absolutes to avoid Theaetetan issues.
On the other hand, the danger for any relativist position is that we can easily end up with a Panglossian ‘get out of jail free’ card. If you are Panglossian, then human rationality is something like working memory: bar major catastrophe to your nervous system, you have it by dint of being an adult human being. Panglossians (e.g., Cohen 1981; Gigerenzer 2007; Gigerenzer and Selten 2001; Gigerenzer, Todd and the ABC research group 1999; Oaksford and Chater 1998; 2007) see humans as inherently, a priori rational. The rationale is often adaptationist, the argument being that evolution equipped humans with optimal rationality functions as a survival mechanism. On this view, the vast majority of these 100 billion and more humans were and are rational, and lapses in rationality are highly uncommon. Being without rationality is like being without a leg or an eye.
While we humans are indeed uniquely smart (and in this sense Panglossians have a point), we also display consistent bias and error in lab tasks of reasoning and decision making. Evans and Over (1996) called this ‘the rationality paradox’. Meliorists neglect one side of the paradox; Panglossians neglect the other. And people are not just smart in everyday life and daft in the lab: human folly outside the lab is just as well-documented. If being rational is like having functioning eyes, there are too many people out there with metaphorical blindness: the conspiracy theorists, the UFO spotters, the compulsive gamblers. Cohen’s (1981) veteran ‘performance errors’ argument—which Lockie seems to adopt—is too weak to cover the richness and interest of human irrationality.
Avoiding the Panglossian temptation is a challenge for relativist views of rationality, but not an impossible one. Lockie does this by proposing a hard core of normative absolutes, along lines reminiscent of Cherniak’s (1986) minimal rationality. His take is that there is an objective, absolute truth to be achieved, but agents should be judged not by how closely they approximate these absolute normative standards, but by how close they get relative to the cognitive resources they have at their disposal. This is suitable for his aim, which is to defend a version of moderate relativism within a framework of normative rationality. This is also where we differ.
Instrumental Rationality and Descriptivism
Recall the veteran distinction between pragmatic and normative rationality: rationality1 and rationality2, respectively, in Evans and Over’s (1996) terminology. Pragmatic or instrumental rationality is about achieving one’s goals; normative rationality is about conforming to a normative standard. The problem with normative rationality, especially in the context of cognitive psychology, is that determining what the normative standard should be is a contentious and laborious business. The history of psychology is rife with publications striving to exonerate participants in this or that classic task from the reasoning and decision making literature as being perfectly rational—provided the appropriate normative standard is applied. What is considered to be appropriate ranges from fast and frugal heuristics (Gigerenzer 1996) to Bayesian mechanisms (Hahn and Warren 2009) to quantum probability (Pothos and Busemeyer 2013).
This might not have been such a problem, had psychologists not opted so often to back their normative claims with empirical data, thus drawing an inference from ‘is’ to ‘ought’ (Elqayam and Evans 2011; Evans and Elqayam 2011). It seems odd that, to defend normative rationality, psychologists so often draw on an inference which, if not downright fallacious, is at least highly dubious, normatively speaking (Hume 2000 / 1739-40; the jury, after all these years, is still out; see Hudson 1969; Pigden 2010, for reviews). Thus, in our open peer commentary discussion, Jonathan Evans and I preferred to eschew normativism, suggesting instead that psychology of reasoning and decision making would be better off focusing on how humans reason, instead of how they should reason: a descriptivist stance. And descriptivism works well with instrumental rationality (whose ‘oughts’ can be rephrased in indicative terms).
Both bounded and grounded rationality have two rather different interpretations. From the point of view of normative rationality, they are regrettable limitations: the finitary predicament, Cherniak (1986) called it. We (the philosophers’ ‘we’, or the psychologists’ ‘we’), so this line of argument goes, know what good thinking is: it is just that humans are too limited to achieve it, and we need to make allowances. Satisficing is the poor man’s optimising—and all of us are this ‘poor man’ (some, to be sure, more than others). This is the normative view. In contrast, from the point of view of descriptivism and instrumental rationality, satisficing is optimising. Or at least, it can be. Take norm compliance out of the equation, and satisficing becomes a matter of wise investment of whatever cognitive resources we have. The same economic cost / benefit analysis that we apply to decision making also applies to meta-cognitive application of resources (Thompson, Prowse Turner and Pennycook 2011). If an agent has, e.g., low working memory capacity, or lacks the necessary cognitive tools, satisficing is not just a sensible option: it might optimise epistemic value for a given cognitive cost. In other words, optimising is impossible without taking into account the relative epistemic cost of processing.
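Read instrumentally, ‘satisficing is optimising’ can be sketched as a toy cost / benefit calculation. The value and cost functions below are stylised assumptions, not a model from the literature; the point is only that when cognitive effort carries a per-unit cost, the effort level that maximises net epistemic value is lower for agents with scarcer resources:

```python
# Stylised assumption: accuracy shows diminishing returns on effort,
# while cognitive cost grows linearly with effort.
def net_epistemic_value(effort, unit_cost):
    accuracy_gain = 1 - (0.5 ** effort)  # approaches 1 as effort grows
    return accuracy_gain - unit_cost * effort

# The effort level that maximises net epistemic value for a given agent.
def optimal_effort(unit_cost, max_effort=20):
    return max(range(max_effort + 1),
               key=lambda e: net_epistemic_value(e, unit_cost))

low_cost_agent = optimal_effort(unit_cost=0.02)   # ample resources
high_cost_agent = optimal_effort(unit_cost=0.10)  # scarce resources
# The high-cost agent's optimum is to stop sooner: satisficing as optimising.
```

On these assumptions, stopping early is not a failure to optimise; it is the optimum for that agent’s cost structure.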
Lockie is on to something when he points to deontic thinking as a possible clue. I agree: you do not have to subscribe to a Cosmides-style massive modularity view (Cosmides 1989; Fiddick, Cosmides and Tooby 2000) to agree that deontic thinking is pretty important, possibly innate, and probably species specific. And deontic norms are strongly anchored in costs and benefits (Manktelow and Over 1991; Oaksford and Chater 1994). People create new deontic norms when an action leads to a desirable outcome or helps avoid an undesirable one (Elqayam, Thompson, Wilkinson, Evans and Over 2015); they prefer deontic speech acts where the cost / benefit balance is right—for example, they prefer promises in which benefits outweigh costs (Evans, Neilens, Handley and Over 2008). Costs play a role in metacognitive regulation, too: when participants work on difficult tasks, they tend to settle for less ambitious goals—satisfice, if you will—as time goes by and the costs of cognitive effort mount (Ackerman 2014). It all converges to ground even normative regulation in costs and benefits (epistemic ones or otherwise), let alone instrumental rationality. For me, this is what grounded rationality is all about.
Acknowledgements
I am grateful to Jonathan Evans for a critical reading and helpful comments on a previous draft of this commentary.
References
Ackerman, Rakefet. “The Diminishing Criterion Model for Metacognitive Regulation of Time Investment.” Journal of Experimental Psychology: General 143, no. 3 (2014): 1349-1368.
Barbey, Aron K. and Steven A. Sloman. “Base-Rate Respect: From Statistical Formats to Cognitive Structures.” Behavioral and Brain Sciences 30, no. 3 (2007): 287-292.
Cherniak, Christopher. Minimal Rationality. Cambridge, MA: MIT Press, 1986.
Cohen, L. Jonathan. “Can Human Irrationality be Experimentally Demonstrated?” Behavioral and Brain Sciences 4, no. 3 (1981): 317-370.
Cosmides, Leda. “The Logic of Social Exchange: Has Natural Selection Shaped How Humans Reason?” Cognition 31, no. 3 (1989): 187-276.
Curtin, Ciara. “Do Living People Outnumber the Dead?” Scientific American 297, no. 3 (2007): 126.
Elqayam, Shira. “Grounded Rationality: A Relativist Framework for Normative Rationality.” In The Science of Reason: A Festschrift in Honour of Jonathan St B.T. Evans, edited by Ken Manktelow, David Over, Shira Elqayam, 397-420. Hove, UK: Psychology Press, 2011.
Elqayam, Shira. “Grounded Rationality: Descriptivism in Epistemic Context.” Synthese 189, no. 1 (2012): 39-49.
Elqayam, Shira and Jonathan St. B. T. Evans. “Subtracting ‘Ought’ from ‘Is’: Descriptivism versus Normativism in the Study of Human Thinking.” Behavioral and Brain Sciences 34, no. 5 (2011): 233-248.
Elqayam, Shira, Valerie A. Thompson, Meredith R. Wilkinson, Jonathan St. B. T. Evans and David E. Over. “Deontic Introduction: A Theory of Inference from Is to Ought.” Journal of Experimental Psychology: Learning, Memory, and Cognition 41, no. 5 (2015): 1516-1532.
Evans, Jonathan St. B. T. and Shira Elqayam. “Towards a Descriptivist Psychology of Reasoning and Decision Making.” Behavioral and Brain Sciences 34, no. 5 (2011): 275-290.
Evans, Jonathan St. B. T., Helen Neilens, Simon J. Handley and David E. Over. “When Can We Say ‘If’?” Cognition 108, no. 1 (2008): 100-116.
Evans, Jonathan St. B. T. and David E. Over. Rationality and Reasoning. Hove, UK: Psychology Press, 1996.
Fiddick, Laurence, Leda Cosmides and John Tooby. “No Interpretation Without Representation: The Role of Domain-Specific Representations and Inferences in the Wason Selection Task.” Cognition 77, no. 1 (2000): 1-79.
Gigerenzer, Gerd. “On Narrow Norms and Vague Heuristics: A Reply to Kahneman and Tversky.” Psychological Review 103, no. 3 (1996): 592-596.
Gigerenzer, Gerd. Gut Feelings: The Intelligence of the Unconscious. New York, NY: Viking, 2007.
Gigerenzer, Gerd and Reinhard Selten, eds. Bounded Rationality: The Adaptive Toolbox. Cambridge, MA: MIT Press, 2001.
Gigerenzer, Gerd, Peter M. Todd and the ABC Research Group. Simple Heuristics That Make Us Smart. New York and Oxford: Oxford University Press, 1999.
Hahn, Ulrike and Paul A. Warren. “Perceptions of Randomness: Why Three Heads are Better than Four.” Psychological Review 116, no. 2 (2009): 454-461.
Hudson, William D., ed. The Is-Ought Question: A Collection of Papers on the Central Problem in Moral Philosophy. London: Macmillan, 1969.
Hume, David. A Treatise of Human Nature. Oxford: Clarendon Press, [1739-1740] 2000.
Koehler, Jonathan J. “The Base Rate Fallacy Reconsidered: Descriptive, Normative and Methodological Challenges.” Behavioral and Brain Sciences 19, no. 1 (1996): 1-53.
Lockie, Robert. “Perspectivism, Deontologism and Epistemic Poverty.” Social Epistemology (2015): 1-17. doi: 10.1080/02691728.2014.990281.
Manktelow, Ken I. and David E. Over. “Social Roles and Utilities in Reasoning with Deontic Conditionals.” Cognition 39, no. 2 (1991): 85-105.
Oaksford, Mike and Nick Chater. “A Rational Analysis of the Selection Task as Optimal Data Selection.” Psychological Review 101, no. 4 (1994): 608-631.
Oaksford, Mike and Nick Chater. Rationality in an Uncertain World. Hove, UK: Psychology Press, 1998.
Oaksford, Mike and Nick Chater. Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Oxford: Oxford University Press, 2007.
Pigden, Charles R. Hume on Is and Ought. New York: Palgrave Macmillan, 2010.
Pothos, Emmanuel M. and Jerome R. Busemeyer. “Can Quantum Probability Provide a New Direction for Cognitive Modeling?” Behavioral and Brain Sciences 36, no. 3 (2013): 255-274.
Simon, Herbert A. Models of Bounded Rationality. Cambridge, MA: MIT Press, 1982.
Stanovich, Keith E. Who Is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Lawrence Erlbaum Associates, 1999.
Stanovich, Keith E. The Robot’s Rebellion: Finding Meaning in the Age of Darwin. Chicago: University of Chicago Press, 2004.
Stanovich, Keith E. What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven; London: Yale University Press, 2009.
Stanovich, Keith E. Rationality and the Reflective Mind. New York: Oxford University Press, 2011.
Stanovich, Keith E. and Richard F. West. “Individual Differences in Reasoning: Implications for the Rationality Debate.” Behavioral and Brain Sciences 23, no. 5 (2000): 645-726.
Stich, Stephen P. The Fragmentation of Reason: Preface to a Pragmatic Theory of Cognitive Evaluation. Cambridge, MA: MIT Press, 1990.
Thompson, Valerie A., Jamie A. Prowse Turner and Gordon Pennycook. “Intuition, Reason, and Metacognition.” Cognitive Psychology 63, no. 3 (2011): 107-140.
[1] Urban myth notwithstanding, the dead still outnumber the living, a state of affairs unlikely ever to change (Curtin 2007).