
Author Information: Joshua Earle, Virginia Tech, jearle@vt.edu.

Earle, Joshua. “Deleting the Instrument Clause: Technology as Praxis.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 59-62.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42r

Image by Tambako the Jaguar via Flickr / Creative Commons

 

Damien Williams, in his review of Dr. Ashley Shew’s new book Animal Constructions and Technical Knowledge (2017), foregrounds in his title what is probably the most important thesis in Shew’s work: namely, that in our definition of technology, we focus too much on the human, and in doing so we miss a lot of things that should be considered technological use and knowledge. Williams calls this “Deleting the Human Clause” (Williams, 2018).

I agree with Shew (and Williams), for all the reasons they state (and potentially some more as well), but I think we ought to go further. I believe we should also delete the instrument clause.

Beginning With Definitions

There are two sets of definitions that I want to work with here. One is the set of definitions argued over by philosophers (and referenced by both Shew and Williams). The other is a more generic, “common-sense” definition that sits, mostly unexamined, in the back of our minds. Both generally invoke both the human clause (obviously with the exception of Shew) and the instrument clause.

Taking the “common-sense” definition first, we, generally speaking, think of technology as the things that humans make and use. The computer on which I write this article, and on which you, ostensibly, read it, is a technology. So is the book, or the airplane, or the hammer. In fact, the more advanced the object is, the more technological it is. So while the hammer might be a technology, it generally gets relegated to a mere “tool” while the computer or the airplane seems to be more than “just” a tool, and becomes more purely technological.

Peeling apart the layers therein would be interesting, but is beyond the scope of this article; you get the idea. Our technologies are what give us functionalities we might not otherwise have. The more functionalities an object gives us, the more technological it is.

The academic definitions of technology are a bit more abstract. Joe Pitt calls technology “humanity at work,” foregrounding the production of artefacts and the iteration of old into new (2000, pg 11). Georges Canguilhem called technology “the extension of human faculties” (2009, pg 94). Philip Brey, referencing Canguilhem (but also Marshall McLuhan, Ernst Kapp, and David Rothenberg), takes this definition up as well, but extends it to include not just action but intent, refining various ways of considering extension and what counts as a technical artefact (sometimes, like Soylent Green, it’s people) (Brey, 2000).

Both the common-sense and the academic definitions of technology use the human clause, which Shew troubles. But even if we alter instances of “human” to “human or non-human agents,” there is still something that chafes. What about things that do work for us in the world but do not rely on artefacts or tools: are those things still technology?

While each definition focuses on objects, none talks about what form those objects need to take, or what function they need to perform, in order to count as technologies. Brey, hewing close to Heidegger, even talks about how using people as objects, as means to an end, would put them within the definition of technology (Ibid, pg. 12). But this also puts people in problematic power arrangements and elides the agency of the people being used toward an end. It also raises the question: can we use ourselves to an end? Does that make us our own technology?

This may be the ultimate danger that Heidegger warned us about, but I think it’s a category mistake. If, instead of objectifying agents into technical objects, we look at the exercise of agency itself as what is key to the definition of technology, things shift. Technology is no longer about the objects, but about the actions, and how those actions affect the world. Technology becomes praxis.

Technology as Action

Let’s think through some liminal cases that first inspired this line of thought: language and agriculture. It is certainly arguable whether either of these fits any definition of technology other than mine (praxis). Don Ihde would definitely disagree with me, as he explicitly states that one needs a tool or an instrument for technology, though he hews close to my definition in other ways (Ihde, 2012; 2018). If Pitt’s definition, “humanity at work,” holds, then agriculture is, indeed, a technology . . . even without the various artifactual apparati that normally surround it.

Agriculture can be done entirely by hand, without any tools whatsoever; it is iterative and produces a tangible output: food, in greater quantity and with greater efficiency than would normally exist. By Brey’s and Canguilhem’s definitions, it should fit as well, as agriculture extends our intent (for greater amounts of food, more locally available) into action and the production of something not otherwise existing in nature. Agriculture is basically (and I’m being too cute by half with this, I know) the intensification of nature. It is, in essence, moving things rather than creating or building them.

Language is a slightly harder case, but one I want to explicitly include in my definition; I would also say it fits Pitt’s and Brey’s definitions, if we delete or ignore the instrument clause. While language does not produce any tangible artefacts directly (one might point to the book or the written word, but most languages have never been written at all), it is the single most fundamental way in which we extend our intent into the world.

It is work, it moves people and things, it is constantly iterative. It is often the very first thing that is used when attempting to affect the world, and the only way by which more than one agent is able to cooperate on any task (I am using the broadest possible definition of language, here). Language could be argued to be the technology by which culture itself is made possible.

There is another way in which focusing on the artefact or the tool or the instrument is problematic. Allow me to illustrate with that favorite philosophical example: the hammer. A question: is a hammer that is built, but never used, technology[1]? If it is, then all of the definitions above no longer hold. An unused hammer is not “at work,” per Pitt’s definition, nor does it iterate, as his definition requires. An unused hammer extends nothing, contra Canguilhem and Brey, unless we count the potential for use, the potential for extension.

But if we do, which potential uses count and which do not? A stick used by an ape (or a person, I suppose) to tease out some tasty termites from their dirt-mound home is, I would argue (and so does Shew), a technological use of a tool. But is the stick, before it is picked up by the ape, or after it is discarded, still a technology or a tool? It always already had the potential to be used, and can be used again after it is discarded. But such a definition would count anything and everything as technology, which renders the definition meaningless. So, the potential for use cannot be enough to make something technology.

Perhaps, instead, the unused hammer is just a tool? But again, the stick example renders this definition of “tool” meaningless; only while in use can we consider the hammer a tool. Certainly the hammer, even unused, is an artefact: the being of an artefact is not reliant on use, merely on being fashioned by an external agent. Thus, if we can imagine actions without artefacts that count as technology, and artefacts that do not count as technology, then including artefacts in one’s definition of technology seems logically unsound.

Theory of Technology

I believe we should separate our terms: tool, instrument, artefact, and technology. Too often these get conflated. Central, to me, is the idea that technology is an active thing, a production. Via Pitt, technology requires, and consists in, work. Via Canguilhem and Brey, it is extension. Both of these are verbs: “work” and “extend.” Techné, the root of the word technology, is about craft, making, and doing; it is about action and intent.

It is about bringing-forth, or poiesis (à la Heidegger, 2003; Haraway, 2016). To this end, I propose that we define “technology” as praxis: the mechanisms or techniques used to address problems. “Tools” are artefacts in use, toward the realizing of technological ends. “Instruments” are specific arrangements of artefacts and tools used to bring about particular effects, particularly inscriptions which signify or make meaning of the artefacts’ work (à la Latour, 1987; Barad, 2007).

One critique I can foresee is that almost any action taken could thus be considered technology. Eating, by itself, could be considered a mechanism by which the problem of hunger is addressed. I answer this by maintaining that there must be at least one step between problem and solution: the putting together of theory (not just desire, but a plan) and action.

So, while I do not consider eating, in and of itself, a technology, producing a meal — via gathering, cooking, hunting, or otherwise — would be. This opens up some things as non-human uses of technology that even Shew didn’t consider, like a wolf pack’s coordinated hunting, or dolphins’ various clever ways of getting rewards from their handlers.

So, does treating technology as praxis help? Does extracting the confounding definitions of artefact, tool, and instrument from the definition of technology help? Does this definition include too many things, and thus lose meaning and usefulness? I posit this definition as a provocation, and I look forward to any discussion the readers of SERRC might have.

Contact details: jearle@vt.edu

References

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Brey, P. (2000). Theories of Technology as Extension of Human Faculties. Metaphysics, Epistemology, and Technology. Research in Philosophy and Technology, 19, 1–20.

Canguilhem, G. (2009). Knowledge of Life. Fordham University Press.

Haraway, D. J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press.

Heidegger, M. (2003). The Question Concerning Technology. In D. Kaplan (Ed.), Readings in the Philosophy of Technology. Rowman & Littlefield.

Ihde, D. (2012). Technics and praxis: A philosophy of technology (Vol. 24). Springer Science & Business Media.

Ihde, D., & Malafouris, L. (2018). Homo faber Revisited: Postphenomenology and Material Engagement Theory. Philosophy & Technology, 1–20.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.

Pitt, J. C. (2000). Thinking about technology. Seven Bridges Press.

Shew, A. (2017). Animal Constructions and Technical Knowledge. Lexington Books.

Williams, D. (2018). “Deleting the Human Clause: A Review of Ashley Shew’s Animal Constructions and Technical Knowledge.” Social Epistemology Review and Reply Collective 7, no. 2: 42-44.

[1] This is the philosophical version of “For sale: Baby shoes. Never worn.”

Author Information: Raphael Sassower, University of Colorado, Colorado Springs, rsassowe@uccs.edu

Sassower, Raphael. “Heidegger and the Sociologists: A Forced Marriage?” Social Epistemology Review and Reply Collective 7, no. 5 (2018): 30-32.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3X8

The town of Messkirch, the hometown of Martin Heidegger.
Image by Renaud Camus via Flickr / Creative Commons

 

Jeff Kochan is upfront about not being able “to make everyone happy” in order to write “a successful book.” For him, choices had to be made, such as promoting “Martin Heidegger’s existential conception of science . . . the sociology of scientific knowledge . . . [and the view that] the accounts of science presented by SSK [sociology of scientific knowledge] and Heidegger are, in fact, largely compatible, even mutually reinforcing.” (1) This means combining the existentialist approach of Heidegger with the sociological view of science as a social endeavour.

Such a marriage is bound to be successful, according to the author, because together the two can exercise greater vitality than either would on its own. If each party were to incorporate the other’s approach and insights, they would realize how much they needed each other all along. This is not an arranged or forced marriage, according to Kochan the matchmaker, but an ideal one he has envisioned from the moment he laid eyes on each of them independently.

The Importance of Practice

Enumerating the critics of each party, Kochan hastens to suggest that “both SSK and Heidegger have much more to offer a practice-based approach to science than has been allowed by their critics.” (6) The Heideggerian deconstruction of science, in this view, is historically informed and embodies a “form of human existence.” (7) Focusing on the early works of Heidegger, Kochan presents an ideal groom who can offer his SSK bride the theoretical insights of overcoming the Cartesian-Kantian false binary of subject-object (11) while benefitting from her rendering his “theoretical position” more “concrete, interesting, and useful through combination with empirical studies and theoretical insights already extant in the SSK literature.” (8)

In this context, there seems to be a greater urgency to make Heidegger relevant to contemporary sociological studies of scientific practices than an expressed need by SSK to be grounded existentially in the Heideggerian philosophy (or for that matter, in any particular philosophical tradition). One can perceive this postmodern juxtaposition (drawing on seemingly unrelated sources in order to discover something novel and more interesting when combined) as an attempt to fill intellectual vacuums.

This marriage is advisable, even prudent, to ward off criticism levelled at either party independently: Heidegger for his abstract existential subjectivism and SSK for unwarranted objectivity. For example, we are promised, with Heidegger’s “phenomenology of the subject as ‘being-in-the-world’ . . . SSK practitioners will no longer be vulnerable to the threat of external-world scepticism.” (9-10) Together, so the argument proceeds, they will not simply adopt each other’s insights and practices but will transform themselves each into the other, shedding their misguided singularity and historical positions for the sake of this idealized research program of the future.

Without flogging this marriage metaphor to death, one may ask whether the two parties are indeed as keen to absorb the insights of their counterpart. In other words, do SSK practitioners need the Heideggerian vocabulary to make their work more integrated? Their adherents and successors have proven time and again that they can find ways to adjust their studies to remain relevant. By contrast, the Heideggerians remain fairly insulated from the studies of science, reviving “The Question Concerning Technology” (1954) whenever asked about technoscience. Is Kochan too optimistic in thinking that citing Heidegger’s earliest works will make him more, rather than less, relevant in the 21st century?

But What Can We Learn?

Kochan seems to think that reviving the Heideggerian project is worthwhile: what if we took the best from one tradition and combined it with the best of another? What if we transcended the subject-object binary and fully appreciated that “knowledge of the object [science] necessarily implicates the knowing subject [practitioner]”? (351) Under such conditions (as philosophers of science have understood for a century), the observer is an active participant in the observation, so much so (as some interpreters of quantum physics admit) that the very act of observing impacts the objects being perceived.

Add to this the social dimension of the community of observers-participants and the social dynamics to which they are institutionally subjected, and you have the contemporary landscape that has transformed the study of Science into the study of the Scientific Community and eventually into the study of the Scientific Enterprise.

But there is another objection to be made here: Even if we agree with Kochan that “the subject is no longer seen as a social substance gaining access to an external world, but an entity whose basic modes of existence include being-in-the-world and being-with-others,” (351) what about the dynamics of market capitalism and democratic political formations? What about the industrial-academic-military complex? To hope for the “subject” to be more “in-the-world” and “with-others” is already quite common among sociologists of science and social epistemologists, but does this recognition alone suffice to understand that neoliberalism has a definite view of what the scientific enterprise is supposed to accomplish?

Though Kochan nods at “conservative” and “liberal” critics, he fails to concede that theirs remain theoretical critiques divorced from the neoliberal realities that permeate every sociological study of science and that dictate the institutional conditions under which the very conception of technoscience is set.

Kochan’s appreciation of the Heideggerian oeuvre is laudable, even admirable in its Quixotic enthusiasm for Heidegger’s four-layered approach (“being-in-the-world,” “being-with-others,” “understanding,” and “affectivity”, 356), but does this amount to more than “things affect us, therefore they exist”? (357) Just like the Cartesian “I think, therefore I am,” this formulation brings the world back to us as a defining factor in how we perceive ourselves instead of integrating us into the world.

Perhaps a Spinozist approach would bridge the binary Kochan (with Heidegger’s help) wishes to overcome. Kochan wants us to agree with him that “we are compelled by the system [of science and of society?] only insofar as we, collectively, compel one another.” (374) Here, then, we are shifting ground towards SSK practices, focusing on the sociality of human existence and the ways the world and our activities within it ought to be understood. There is something quite appealing in bringing German and Scottish thinkers together, but it seems that merging them is unrealistic and perhaps too contrived. For those, like Kochan, who dream of a Hegelian Aufhebung of sorts, this is an outstanding book.

For the Marxist and sociological skeptics who worry about neoliberal trappings, this book will remain an erudite and scholarly attempt to force a merger. As we look at this as yet another arranged marriage, we should ask ourselves: would the couple ever have consented to this on their own? And if the answer is no, who are we to force this on them?

Contact details: rsassowe@uccs.edu

References

Kochan, Jeff. Science as Social Existence: Heidegger and the Sociology of Scientific Knowledge. Cambridge, UK: Open Book Publishers, 2017.

Author Information: Bonnie Talbert, Harvard University, USA, btalbert@fas.harvard.edu

Talbert, Bonnie. “Paralysis by Analysis Revisited.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 6-9.


The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Sh

Illustration by Lemuel Thomas from the 1936 Chesapeake and Ohio Railway Calendar.
Image by clotho39 via Flickr / Creative Commons

 

In his reply to my article “Overthinking and Other Minds: the Analysis Paralysis” (2017), Joshua Bergamin (2017) offers some fascinating thoughts about the nature of our knowledge of other people.

Bergamin is right in summarizing my claim that knowing another person fundamentally involves a know-how, and that knowing all the facts there are to know about a person is not enough to constitute knowing her. But, he argues, conscious deliberate thinking is useful in getting to know someone, just as it is useful in learning any type of skill.

Questions of Ability

The example he cites is that of separating an egg’s yolk from its white—expert cooks can do it almost automatically, while the novice in the kitchen needs to pay careful, conscious attention to her movements in order to get it right. This example is useful for several reasons. It highlights the fact that learning a skill requires effortful attention while engaging in an activity. It is one thing to think or read about how to separate an egg’s white from its yolk; it is quite another thing to practice it, even if it is slow going and clumsy at first. The point is that practice rather than reflection is what one has to do in order to learn how to smoothly complete the activity, even if the first attempts require effortful attention.[1]

On this point Bergamin and I are in agreement. My insistence that conscious deliberate reflection is rarely a good way to get to know someone is mostly targeted at the kinds of reflection one does “in one’s own head”. My claim is not that we never consciously think about other people, but that consciously thinking about them without their input is not a good way to get to know them. This leads to another, perhaps more important point, which is that the egg example is dissimilar to getting to know another person in some fundamental ways.

Unlike with an egg, knowing how to interact with a person requires a back-and-forth exchange of postures, gestures, words, and other such signals. It is not possible for me to figure out how to interact with you and then simply execute those actions; I have to allow for a dynamic exchange of actions originating from each of us. With the egg, or any inanimate object, I am the only agent causing the sequence of events. With another person, there are two agents, and I cannot simply decide how to make the interaction work the way I want it to; I have to have your cooperation. This makes knowing another person a different kind of enterprise than knowing other kinds of things.[2]

I maintain that most of the time, interactions with others are such that we do not need to consciously be thinking about what is going on. In fact, the behavioral, largely nonverbal signals that are sent nearly instantaneously to participants in a conversation occur so quickly that there is rarely time to reflect on them. Nevertheless, Bergamin’s point is that in learning an activity, and thus by extension, in getting to know another person as we learn to interact with her, we may be more conscious of our actions than we are once we know someone well and the interactions “flow” naturally.

Knowing Your Audience

I do not think this is necessarily at odds with my account. Learning how to pace one’s speech to a young child when one is used to speaking to adults might take some effortful attention, and the only way to get to the point where one can have a good conversation (if there is such a thing) with a youngster is to begin by paying attention to the speed at which one talks. I still think that once one no longer has to think about it, she will be better able to glean information from the child and will not have her attention divided between trying to pay attention to both what the child is doing and how she sounds herself.

It is easier to get to know someone if you are not focused on what you have to do to hold up your end of the conversation. But more than whether we are consciously or unconsciously attending to our actions in an interaction, my point is that reflection is one-sided while interaction is not, and it is interaction that is crucial for knowing another person. In interaction, whether our thought processes are unconscious or conscious, their epistemic function is such that they allow us to coordinate our behavior with another person’s. This is the crucial distinction from conscious deliberation that occurs in a non-interactive context.

Bergamin claims that “breakdowns” in flow are more than just disruptive; rather, they provide opportunities to learn how to better execute actions, both in learning a skill and in getting to know another person. And it is true that in relationships, a fight or disagreement can often shed light on the underlying dynamics that are causing tension. But unlike the way you can learn from a few misses how to crack an egg properly, you cannot easily decide how to fix your actions in a relationship without allowing for input from the other party.

Certain breakdowns in communication, or interruptions of the “flow” of a conversation, can help us know another person better insofar as they alert us to situations in which things are not going smoothly. But further thinking does not always get us out of the problem; further interacting does. You cannot sort it out in your head without input from the other person.

My central claim is that knowing another person requires interaction and that the interactive context is constitutively different from contexts that require one-sided deliberation rather than back and forth dynamic flows of behavioral signals and other information. However, I also point out that propositional knowledge of various sorts is necessary for knowing another person.

Bergamin is correct to point out that in my original essay I do not elaborate on what if anything propositional, conscious deliberative thinking can add to knowing another person. But elsewhere (2014) I have argued that part of what it means to know someone is to know various things about her and that when we know someone, we can articulate various propositions that capture features of her character.

In the essay under discussion, I focus on the claim that propositional knowledge is not sufficient for knowing another person and that we must start with the kind of knowledge that comes from direct interaction if we are to claim that we know another person. We also gain useful and crucial propositional knowledge from our interactions, as well as from other sources, and it too is part of our knowledge of others; but without the knowledge that comes only from interaction, we would ordinarily claim to know things about a person rather than to know her.

Bergamin is also right in asserting that my account implies that our interactions with others do not typically involve much thinking in the traditional sense. They are, as he speculates, “immersive, intersubjective events…such that each relationship is different for each of us and to some extent out of our control.” This is partly true. While I might share a very different relationship with Jamie than you do, chances are that we can both recognize certain features of Jamie as being part of who he is. I was struck by this point at a recent memorial service, when people with very different relationships spoke about their loved one, impersonating his accent, his frequently used turns of phrase, his general stubbornness, generosity, larger-than-life personality, and other features that everyone at the service could recognize, no matter whether the relationship was strictly professional, familial, casual, lasting decades, etc.

I have tentatively spelled out an account (2014) that suggests that with people we know, there are some things that only the people in the relationship share, such as knowledge of where they had lunch last week and what was discussed. But there is also knowledge that is shared beyond that particular relationship, which helps situate it vis-à-vis other, overlapping relationships; i.e., while I share a unique relationship with my mother, and so does my sister-in-law, we can both recognize some features of her that are the same for both of us. Further, my sister-in-law knows that I am often a better judge of what my mother wants for her birthday, since I have known my mother longer and can easily tell that she does not mean it when she says she does not want any gifts this year.

Bergamin’s concluding thoughts about the Heideggerian nature of my project are especially insightful, and I too am still working on the speculative implications of my account, which posits that (in Bergamin’s words), “If people are ‘moving targets,’ then we are not ‘things’ but ‘processes,’ systems that are in constant flux. To know such a process is not to try to nail down the ever-changing facts about it, but involves interacting with it. Yet we who interact are ourselves a similar kind of ‘process,’ and in getting to know somebody we are just as much the known as the knower. Our relationships, therefore, are a kind of identity, that involves us and yet exceeds us — growing and evolving over time.” My hope is that this is a project on which we and many other scholars will continue to make progress.

Contact details: btalbert@fas.harvard.edu

References

Bergamin, Joshua. “To Know and To Be: Second-Person Knowledge and the Intersubjective Self, A Reply to Talbert.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 43-47.

Clarey, Christopher. “Olympians Use Imagery as Mental Training.” New York Times, February 22, 2014, https://www.nytimes.com/2014/02/23/sports/olympics/olympians-use-imagery-as-mental-training.html

Talbert, Bonnie. “Knowing Other People: A Second-person Framework.” Ratio 28, no. 2 (2014): 190–206.

Talbert, Bonnie. “Overthinking and Other Minds: The Analysis Paralysis.” Social Epistemology 31, no. 6 (2017): 1-12.

[1] There is some research showing that conscious thoughtful reflection, indeed “visualization,” can help a person perform an activity better. Visualization has been used to promote success in sports, business, personal habits, and the like. Process visualization, which is sometimes used with varying degrees of success by athletes, is interesting for my purposes because it does seem to help in performing an activity, or to help with the know-how involved in some athletic endeavors. I do not know why this is the case, and I am a bit skeptical of some of the claims used in this line of reasoning. But I do not think we could use process visualization to help with our interactions with others and get the same kind of results, for the actions of another person are much more unpredictable than the final hill of the marathon or the dismount of a balance beam routine. It is also useful to note that some sports are easier than others to visualize, namely those that are most predictable. For more on this last point, and on how imagery can be used to enhance athletic performance, see Christopher Clarey’s “Olympians Use Imagery as Mental Training” (2014).

[2] This leads to another point that is not emphasized in my original essay but perhaps should have been. Insofar as I liken getting to know another person to the “flow” one can experience in certain sports, I do not sufficiently point out that “flow” in some sports, namely those that involve multiple people, involves something much more similar to the “know-how” involved in getting to know another person than does “flow” in sports where there is only one person involved. Interestingly, “team sports” and other multi-person events are not generally cited as activities whose success can be significantly improved by visualization.

Author Information: Manuel Padilla Cruz, University of Seville, mpadillacruz@us.es

Cruz, Manuel Padilla. “Conceptual Competence Injustice and Relevance Theory, A Reply to Derek Anderson.” Social Epistemology Review and Reply Collective 6, no. 12 (2017): 39-50.


The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3RS

Contestants from the 2013 Scripps National Spelling Bee. Image from Scripps National Spelling Bee, via Flickr / Creative Commons

 

Derek Anderson (2017a) has recently differentiated conceptual competence injustice and characterised it as the wrong done when, on the grounds of the vocabulary used in interaction, a person is believed not to have a sophisticated or rich conceptual repertoire. His most interesting, insightful and illuminating work induced me to propose incorporating this notion into the field of linguistic pragmatics as a way of conceptualising an undesired and unexpected perlocutionary effect: attribution of a lower level of communicative or linguistic competence. This attribution may be drawn from a perception of seemingly poor performance stemming from a lack of the words necessary to refer to specific elements of reality, or from misuse of the adequate ones (Padilla Cruz 2017a).

Relying on the cognitive pragmatic framework of relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004), I also argued that such a perlocutionary effect would be an unfortunate by-product of the constant tendency to search for the optimal relevance of intentional stimuli like single utterances or longer stretches of discourse. More specifically, while aiming for maximum cognitive gain in exchange for a reasonable amount of cognitive effort, the human mind may activate or access assumptions about a language user’s linguistic or communicative performance, and feed them as implicated premises into inferential computations.

Although those assumptions might not really have been intended by the language user, they are made manifest by her[1] behaviour and may be exploited in inference, even if at the hearer’s sole responsibility and risk. Those assumptions are weak implicated premises and their interaction with other mentally stored information yields weakly implicated conclusions (Sperber and Wilson 1986/1995; Wilson and Sperber 2004). Since their content pertains to the speaker’s behaviour, they are behavioural implicatures (Jary 2013); since they negatively impact on an individual’s reputation as a language user, they turn out to be detrimental implicatures (Jary 1998).

My proposal about the benefits of the notion of conceptual competence injustice to linguistic pragmatics prompted an immediate reply from Anderson (2017b). He considers that the intention underlying my comment on his work was “[…] to model conceptual competence injustice within relevance theory” and points out that my proposal “[…] must be tempered with the proper understanding of that phenomenon as a structural injustice” (Anderson 2017b: 36; emphasis in the original). Furthermore, he also claims that relevance theory “[…] does not intrinsically have the resources to identify instances of conceptual competence injustice” (Anderson 2017b: 36).

In what follows, I purport to clarify two issues. Firstly, my suggestion to incorporate conceptual competence injustice into linguistic pragmatics necessarily relies on a much broader, more general and loosened understanding of this notion. Even if such an understanding deprives it of some of its essential, defining conditions –namely, existence of different social identities and of matrices of domination– it may somehow capture the ontology of the unexpected effects that communicative performance may result in: an unfair appraisal of capacities.

Secondly, my intention when commenting on Anderson’s (2017a) work was not actually to model conceptual competence injustice within relevance theory, but to show that this pragmatic framework is well equipped and most appropriate to account for the cognitive processes and the reasons underlying the unfortunate negative effects that may be alluded to with the notion I am advocating for. Therefore, I will argue that relevance theory does in fact have the resources to explain why some injustices stemming from communicative performance may originate. To conclude, I will elaborate on the factors that may lead to wrong ascriptions of conceptual and lexical competence.

What Is Conceptual Competence Injustice?

As a sub-type of epistemic injustice (Fricker 2007), conceptual competence injustice arises in scenarios where there are privileged epistemic agents who (i) are prejudiced against members of specific social groups, identities or minorities, and (ii) exert power as a way of oppression. Such agents make “[…] false judgments of incompetence [which] function as part of a broader, reliable pattern of marginalization that systematically undermines the epistemic agency of members of an oppressed social identity” (Anderson 2017b: 36). Therefore, conceptual competence injustice is a way of denigrating individuals as knowers of specific domains of reality and ultimately disempowering them, discriminating against them and excluding them, so it “[…] is a form of epistemic oppression […]” (Anderson 2017b: 36).

Lack or misuse of vocabulary may result in wronging if hearers conclude that certain concepts denoting specific elements of reality –objects, animals, actions, events, etc.– are not available to particular speakers or that they have erroneously mapped those concepts onto lexical items. When this happens, speakers’ conceptualising and lexical capacities could be deemed to be below alleged or actual standards. Since lexical competence is one of the pillars of communicative competence (Hymes 1972; Canale 1983; Bachman 1990; Celce-Murcia et al. 1995), that judgement could contribute to downgrading speakers in an alleged scale of communicative competence and, consequently, to regarding them as partially or fully incompetent.

According to Medina (2011), competence is a comparative and contrastive property. On the one hand, skilfulness in some domain may be compared to that in (an)other domain(s), so a person may be very skilled in areas like languages, drawing, football, etc., but not in others like mathematics, oil painting, basketball, etc. On the other hand, knowledge of and abilities in some matters may be greater or lesser than those of other individuals. Competence, moreover, may be characterised as gradual and context-dependent. Degree of competence –i.e. its depth and width, so to say– normally increases with age, maturity, personal circumstances and experience, or with factors such as instruction and subsequent learning, needs, interests, motivation, etc. In turn, the way in which competence surfaces may be affected by a variety of intertwined factors, which include the following (Mustajoki 2012; Padilla Cruz 2017b):

Factors Affecting Competence in Communication

Internal factors –i.e. person-related– among which feature:

Relatively stable factors, such as (i) other knowledge and abilities, regardless of their actual relatedness to a particular competence, and (ii) cognitive styles –i.e. patterns of accessing and using knowledge items, among which are concepts and words used to name them.

Relatively unstable factors, such as (i) psychological states like nervousness, concentration, absent-mindedness, emotional override, or simply experiencing feelings like happiness, sadness, depression, etc.; (ii) physiological conditions like tiredness, drowsiness, drunkenness, etc., or (iii) performance of actions necessary for physiological functions like swallowing, sipping, sneezing, etc. These may facilitate or hinder access to and usage of knowledge items including concepts and words.

External –i.e. situation-related– factors, which encompass (i) the spatio-temporal circumstances where encounters take place, and (ii) the social relations with other participants in an encounter. For instance, haste, urgency or (un)familiarity with a setting may ease or impede access to and usage of knowledge items, as may experiencing social distance and/or more or less power with respect to another individual (Brown and Levinson 1987).

While ‘social distance’ refers to (un)acquaintance with other people and (dis)similarity with them as a result of perceptions of membership of a social group, ‘power’ does not simply allude to the possibility of imposing upon others and conditioning their behaviour as a consequence of differing positions in a particular hierarchy within a specific social institution. ‘Power’ also refers to the capacity to impose upon other people owing to perceived or supposed expertise in a field –i.e. expert power, like that exerted by, for instance, a professor over students– or to admiration of diverse personal attributes –i.e. referent power, like that exerted by, for example, a pop idol over fans (Spencer-Oatey 1996).

There Must Be Some Misunderstanding

Conceptualising capacities, conceptual inventories and lexical competence also partake of the four features listed above: gradualness, comparativeness, contrastiveness and context-dependence. Needless to say, all three of them obviously increase as a consequence of growth and exposure to or participation in a plethora of situations and events, among which education or training are fundamental. Conceptualising capacities and lexical competence may be more or less developed or accurate than other abilities, among which are the other sub-competences upon which communicative competence depends –i.e. phonetics, morphology, syntax and pragmatics (Hymes 1972; Canale 1983; Bachman 1990; Celce-Murcia et al. 1995).

Additionally, conceptual inventories enabling lexical performance may be rather complex in some domains but not in others –e.g. a person may store many concepts and possess a rich vocabulary pertaining to, for instance, linguistics, but lack or have rudimentary ones about sports. Finally, lexical competence may appear to be higher or lower than that of other individuals under specific spatio-temporal and social circumstances, or because of the influence of the aforesaid psychological and physiological factors, or actions performed while speaking.

Apparent knowledge and usage of general or domain-specific vocabulary may be assessed and compared to those of other people, but performance may be hindered or fail to meet expectations because of the aforementioned factors. If it were considered deficient, inferior or lower than that of other individuals, such consideration should only concern knowledge and usage of vocabulary concerning a specific domain, and be only relative to a particular moment, maybe under specific circumstances.

Unfortunately, people often extrapolate and (over)generalise, so they may take (seeming) lexical gaps at a particular time in a speaker’s life or one-off, occasional or momentary lexical infelicities to suggest or unveil more global and overarching conceptualising handicaps or lexical deficits. This leads people not only to doubt the richness and broadness of that speaker’s conceptual inventory and lexical repertoire, but also to question her conceptualising abilities and what may be labelled her conceptual accuracy –i.e. the capacity to create concepts that adequately capture nuances in elements of reality and facilitate correct reference to those elements– as well as her lexical efficiency or lexical reliability –i.e. the ability to use vocabulary appropriately.

As long as doubts are cast about the amount and accuracy of the concepts available to a speaker and her ability to verbalise them, there arises an unwarranted and unfair wronging which would count as an injustice about that speaker’s conceptualising skills, amount of concepts and expressive abilities. The loosened notion of conceptual competence injustice whose incorporation into the field of linguistic pragmatics I advocated does not necessarily presuppose a previous discrimination or prejudice negatively biasing hegemonic, privileged or empowered individuals against minorities or identities.

Wrong is done, and an epistemic injustice is therefore inflicted, because another person’s conceptual inventory, lexical repertoire and expressive skills are underestimated or negatively evaluated because of (i) perception of a communicative behaviour that is felt not to meet expectations or to be below alleged standards, (ii) tenacious adherence to those expectations or standards, and (iii) unawareness of the likely influence of various factors on performance. This wronging may nonetheless lead to subsequently downgrading that person as regards her communicative competence, discrediting her conceptual accuracy and lexical efficiency/reliability, and denigrating her as a speaker of a language, and, therefore, as an epistemic agent. Relying on all this, further discrimination on other grounds may ensue or an already existing one may be strengthened and perpetuated.

Relevance Theory and Conceptual Competence Injustice

Initially put forth in 1986, and slightly refined almost ten years later, relevance theory is a pragmatic framework that aims to explain (i) why hearers select particular interpretations out of the various possible ones that utterances may have –all of which are compatible with the linguistically encoded and communicated information– (ii) how hearers process utterances, and (iii) how and why utterances and discourse give rise to a plethora of effects (Sperber and Wilson 1986/1995). Accordingly, it concentrates on the cognitive side of communication: comprehension and the mental processes intervening in it.

Relevance theory (Sperber and Wilson 1986/1995) reacted against the so-called code model of communication, which was deeply entrenched in western linguistics. According to this model, communication merely consists of encoding thoughts or messages into utterances, and decoding these in order to arrive at speaker meaning. Since speakers cannot encode everything they intend to communicate and absolute explicitness is practically unattainable, relevance theory portrays communication as an ostensive-inferential process where speakers draw the audience’s attention by means of intentional stimuli. On some occasions these amount to direct evidence –i.e. showing– of what speakers mean, so their processing requires inference; on other occasions, intentional stimuli amount to indirect –i.e. encoded– evidence of speaker meaning, so their processing relies on decoding.

However, in most cases the stimuli produced in communication combine direct with indirect evidence, so their processing depends on both inference and decoding (Sperber and Wilson 2015). Intentional stimuli make manifest speakers’ informative intention –i.e. the intention that the audience create a mental representation of the intended message, or, in other words, a plausible interpretative hypothesis– and their communicative intention –i.e. the intention that the audience recognise that speakers do have a particular informative intention. The role of hearers, then, is to arrive at speaker meaning by means of both decoding and inference (but see below).

Relevance theory also reacted against philosopher Herbert P. Grice’s (1975) view of communication as a joint endeavour where interlocutors identify a common purpose and may abide by, disobey or flout a series of maxims pertaining to communicative behaviour –those of quantity, quality, relation and manner– which articulate the so-called cooperative principle. Although Sperber and Wilson (1986/1995) seriously question the existence of such a principle, they nevertheless rest squarely on a notion already present in Grice’s work, but which he unfortunately left undefined: relevance. This becomes the cornerstone in their framework. Relevance is claimed to be a property of intentional stimuli and characterised on the basis of two factors:

Cognitive effects, or the gains resulting from the processing of utterances: (i) strengthening of old information, (ii) contradiction and rejection of old information, and (iii) derivation of new information.

Cognitive or processing effort, which is the effort of memory to select or construct a suitable mental context for processing utterances and to carry out a series of simultaneous tasks that involve the operation of a number of mental mechanisms or modules: (i) the language module, which decodes and parses utterances; (ii) the inferential module, which relates information encoded and made manifest by utterances to already stored information; (iii) the emotion-reading module, which identifies emotional states; (iv) the mindreading module, which attributes mental states, and (v) vigilance mechanisms, which assess the reliability of informers and the believability of information (Sperber and Wilson 1986/1995; Wilson and Sperber 2004; Sperber et al. 2010).

Relevance is a scalar property that is directly proportionate to the amount of cognitive effects that an interpretation gives rise to, but inversely proportionate to the expenditure of cognitive effort required. Interpretations are relevant if they yield cognitive effects in return for the cognitive effort invested. Optimal relevance emerges when the effect-effort balance is satisfactory. If an interpretation is found to be optimally relevant, it is chosen by the hearer and thought to be the intended interpretation. Hence, optimal relevance is the property determining the selection of interpretations.

The Power of Relevance Theory

Sperber and Wilson’s (1986/1995) ideas and claims originated a whole branch in cognitive pragmatics that is now known as relevance-theoretic pragmatics. After years of intense, illuminating and fruitful work, relevance theorists have offered a plausible model for comprehension. In it, interpretative hypotheses –i.e. likely interpretations– are said to be formulated during a process of mutual parallel adjustment of the explicit and implicit content of utterances, where the said modules and mechanisms perform a series of simultaneous, incredibly fast tasks at a subconscious level (Carston 2002; Wilson and Sperber 2004).

Decoding only yields a minimally parsed chunk of concepts that is not yet fully propositional, so it cannot be truth-evaluable: the logical form. This form needs pragmatic or contextual enrichment by means of additional tasks wherein the inferential module relies on contextual information and is sometimes constrained by the procedural meaning –i.e. processing instructions– encoded by some linguistic elements.

Those tasks include (i) disambiguation of syntactic constituents; (ii) assignment of reference to words like personal pronouns, proper names, deictics, etc.; (iii) adjustment of the conceptual content encoded by words like nouns, verbs, adjectives or adverbs, and (iv) recovery of unarticulated constituents. Completion of these tasks results in the lower-level explicature of an utterance, which is a truth-evaluable propositional form amounting to the explicit content of an utterance. Construction of lower-level explicatures depends on decoding and inference, so that the more decoding involved, the more explicit or strong these explicatures are and, conversely, the more inference needed, the less explicit and weaker these explicatures are (Wilson and Sperber 2004).

A lower-level explicature may further be embedded into a conceptual schema that captures the speaker’s attitude(s) towards the proposition expressed, her emotion(s) or feeling(s) when saying what she says, or the action that she intends or expects the hearer to perform by saying what she says. This schema is the higher-level explicature and is also part of the explicit content of an utterance.

It is sometimes built through decoding some of the elements in an utterance –e.g. attitudinal adverbs like ‘happily’ or ‘unfortunately’ (Ifantidou 1992) or performative verbs like ‘order’, ‘apologise’ or ‘thank’ (Austin 1962)– and other times through inference, emotion-reading and mindreading –as in the case of, for instance, interjections, intonation or paralanguage (Wilson and Wharton 2006; Wharton 2009, 2016) or indirect speech acts (Searle 1969; Grice 1975). As in the case of lower-level explicatures, higher-level ones may also be strong or weak depending on the amount of decoding, emotion-reading and mindreading involved in their construction.

The explicit content of utterances may additionally be related to information stored in the mind or perceptible from the environment. Those information items act as implicated premises in inferential processes. If the hearer has enough evidence that the speaker intended or expected him to resort to and use those premises in inference, they are strong, but, if he does so at his own risk and responsibility, they are weak. Interaction of the explicit content with implicated premises yields implicated conclusions. Altogether, implicated premises and implicated conclusions make up the implicit content of an utterance. Arriving at the implicit content completes mutual parallel adjustment, which is a process constantly driven by expectations of relevance, in which the more plausible, less effort-demanding and more effect-yielding possibilities are normally chosen.

The Limits of Relevance Theory

As a model centred on comprehension and interpretation of ostensive stimuli, relevance theory (Sperber and Wilson 1986/1995) does not need to be able to identify instances of conceptual competence injustice, as Anderson (2017b) remarks, nor even instances of the negative consequences of communicative behaviour that may be alluded to by means of the broader, loosened notion of conceptual competence injustice I argued for. Rather, as a cognitive framework, its role is to explain why and how these originate. And, certainly, its notional apparatus and the cognitive machinery intervening in comprehension which it describes can satisfactorily account for (i) the ontology of unwarranted judgements of lexical and conceptual (in)competence, (ii) their origin and (iii) some of the reasons why they are made.

Accordingly, those judgements (i) are implicated conclusions which (ii) are derived during mutual parallel adjustment as a result of (iii) accessing some manifest assumptions and using these as implicated premises in inference. Obviously, the implicated premises that yield the negative conclusions about (in)competence might not have been intended by the speaker, who would not be interested in the hearer accessing and using them. However, her communicative performance makes manifest assumptions alluding to her lexical lacunae and mistakes and these lead the hearer to draw undesired conclusions.

Relevance theory (Sperber and Wilson 1986/1995) is powerful enough to offer a cognitive explanation of the said three issues. And this alone was what I aimed to show in my comment on Anderson’s (2017a) work. Two different issues, nevertheless, are (i) the reasons why certain prejudicial assumptions become manifest to an audience and (ii) why those assumptions end up being distributed across the members of certain wide social groups.

As Anderson (2017b) underlines, conceptual competence injustices must necessarily be contextualised in situations where privileged and empowered social groups are negatively biased or prejudiced against other identities and create patterns of marginalisation. Prejudice may be argued to bring to the fore a variety of negative assumptions about the members of the identities against whom it is held. Using Giora’s (1997) terminology, prejudice makes certain detrimental assumptions very salient or increases the saliency of those assumptions.

Consequently, they are amenable to being promptly accessed and effortlessly used as implicated premises in deductions, from which negative conclusions are straightforwardly and effortlessly derived. Those premises and conclusions spread throughout the members of the prejudiced and hegemonic group because, according to Sperber’s (1996) epidemiological model of culture, they are repeatedly transmitted or made public. This is possible thanks to two types of factors (Sperber 1996: 84):

Psychological factors, such as their relative ease of storage, the existence of other knowledge with which they can interact in order to generate cognitive effects –e.g. additional negative conclusions pertaining to the members of the marginalised identity– or the existence of compelling reasons to make the individuals in the group willing to transmit them –e.g. desire to disempower and/or marginalise the members of an unprivileged group, to exclude them from certain domains of human activity, to secure a privileged position, etc.

Ecological factors, such as the repetition of the circumstances under which those premises and conclusions result in certain actions –e.g. denigration, disempowerment, marginalisation, exclusion, etc.– availability of storage mechanisms other than the mind –e.g. written documents– or the existence of institutions that transmit and perpetuate those premises and conclusions, thus ensuring their continuity and availability.

Since the members of the dominating biased group find those premises and conclusions useful to their purposes and interests, they constantly reproduce them and, so to say, pass them on to the other members of the group or even on to individuals who do not belong to it. Using Sperber’s (1996) metaphor, repeated production and internalisation of those representations resembles the contagion of illnesses. As a result, those representations end up being part of the pool of cultural representations shared by the members of the group in question or other individuals.

The Imperative to Get Competence Correct

In social groups with an interest in denigrating and marginalising an identity, certain assumptions regarding the lexical inventories and conceptualising abilities of the epistemic agents with that identity may be very salient, or purposefully made very salient, with a view to ensuring that they are inferentially exploited as implicated premises that easily yield negative conclusions. In the case of average speakers’ lexical gaps and mistakes, assumptions concerning their performance and infelicities may also become very salient, be fed into inferential processes and result in prejudicial conclusions about their lexical and conceptual (in)competence.

Although utterance comprehension and information processing end upon completion of mutual parallel adjustment, for the informational load of utterances and the conclusions derivable from them to be added to an individual’s universe of beliefs, information must pass the filters of a series of mental mechanisms that target both informers and information itself, and check their believability and reliability. These mechanisms scrutinise various sources determining trust allocation, such as signs indicating certainty and trustworthiness –e.g. gestures, hesitation, nervousness, rephrasing, stuttering, eye contact, gaze direction, etc.–; the appropriateness, coherence and relevance of the dispensed information; (previous) assumptions about speakers’ expertise or authoritativeness in some domain; the socially distributed reputation of informers, and emotions, prejudices and biases (Origgi 2013: 227-233).

As a result, these mechanisms trigger a cautious and sceptical attitude known as epistemic vigilance, which in some cases enables individuals to avoid blind gullibility and deception (Sperber et al. 2010). In addition, these mechanisms monitor the correctness and adequateness of the interpretative steps taken and the inferential routes followed while processing utterances and information, and check for possible flaws at any of the tasks in mutual parallel adjustment –e.g. wrong assignment of reference, supply of erroneous implicated premises, etc.– which would prevent individuals from arriving at actually intended interpretations. Consequently, another cautious and sceptical attitude is triggered towards interpretations, which may be labelled hermeneutical vigilance (Padilla Cruz 2016).

If individuals do not perceive risks of malevolence or deception, or do not sense that they might have made interpretative mistakes, vigilance mechanisms are weakly or moderately activated (Michaelian 2013: 46; Sperber 2013: 64). However, their level of activation may be raised so that individuals exercise external and/or internal vigilance. While the former facilitates higher awareness of external factors determining trust allocation –e.g. cultural norms, contextual information, biases, prejudices, etc.– the latter facilitates distancing oneself from conclusions drawn at a particular moment, backtracking with a view to tracing their origin –i.e. the interpretative steps taken and the assumptions fed into inference– and assessing their potential consequences (Origgi 2013: 224-227).

Exercising weak or moderate vigilance of the conclusions drawn upon perception of lexical lacunae or mistakes may account for their unfairness and the subsequent wronging of individuals as regards their actual conceptual and lexical competence. Unawareness of the internal and external factors that may momentarily have hindered competence and ensuing performance may cause perceivers of lexical gaps and errors to unquestioningly trust assumptions that their interlocutors’ allegedly poor performance makes manifest, rely on them, supply them as implicated premises, derive conclusions that do not do any justice to their actual level of conceptual and lexical competence, and eventually trust their appropriateness, adequacy or accuracy.

A higher alertness to the potential influence of those factors on performance would block access to the detrimental assumptions made manifest by their interlocutors’ performance or make perceivers of lexical infelicities reconsider the appropriateness of using those assumptions in deductions. If this were the case, perceivers would be deploying the processing strategy labelled cautious optimism, which enables them to question the suitability of certain deductions and to make alternative ones (Sperber 1994).

Conclusion

Relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004) does not need to be able to identify cases of conceptual competence injustice, but its notional apparatus and the machinery that it describes can satisfactorily account for the cognitive processes whereby conceptual competence injustices originate. In essence, prejudice and interests in denigrating members of specific identities or minorities favour the saliency of certain assumptions about their incompetence, which, for a variety of psychological and ecological reasons, may already be part of the cultural knowledge of the members of prejudiced empowered groups. Those assumptions are subsequently supplied as implicated premises to deductions, which yield conclusions that undermine the reputation of the members of the identities or minorities in question. Ultimately, such conclusions may in turn be added to the cultural knowledge of the members of the biased hegemonic group.

The same process would apply to those cases wherein hearers unfairly wrong their interlocutors on the grounds of performance below alleged or expected standards, and are not vigilant enough of the factors that could have impeded it. That wronging may be alluded to by means of a somewhat loosened, broadened notion of ‘conceptual competence injustice’ which deprives it of one of its quintessential conditions: the existence of prejudice and interests in marginalising other individuals. Inasmuch as apparently poor performance may give rise to unfortunate unfair judgements of speakers’ overall level of competence, those judgements could count as injustices. In a nutshell, this was the reason why I advocated for the incorporation of a ‘decaffeinated’ version of Anderson’s (2017a) notion into the field of linguistic pragmatics.

Contact details: mpadillacruz@us.es

References

Anderson, Derek. “Conceptual Competence Injustice.” Social Epistemology: A Journal of Knowledge, Culture and Policy 31, no. 2 (2017a): 210-223.

Anderson, Derek. “Relevance Theory and Conceptual Competence Injustice.” Social Epistemology Review and Reply Collective 6, no. 7 (2017b): 34-39.

Austin, John L. How to Do Things with Words. Oxford: Clarendon Press, 1962.

Bachman, Lyle F. Fundamental Considerations in Language Testing. Oxford: Oxford University Press, 1990.

Brown, Penelope, and Stephen C. Levinson. Politeness: Some Universals in Language Usage. Cambridge: Cambridge University Press, 1987.

Canale, Michael. “From Communicative Competence to Communicative Language Pedagogy.” In Language and Communication, edited by Jack C. Richards and Richard W. Schmidt, 2-28. London: Longman, 1983.

Carston, Robyn. Thoughts and Utterances. The Pragmatics of Explicit Communication. Oxford: Blackwell, 2002.

Celce-Murcia, Marianne, Zoltán Dörnyei, and Sarah Thurrell. “Communicative Competence: A Pedagogically Motivated Model with Content Modifications.” Issues in Applied Linguistics 5 (1995): 5-35.

Fricker, Miranda. Epistemic Injustice. Power & the Ethics of Knowing. Oxford: Oxford University Press, 2007.

Giora, Rachel. “Understanding Figurative and Literal Language: The Graded Salience Hypothesis.” Cognitive Linguistics 8 (1997): 183-206.

Grice, Herbert P. “Logic and Conversation.” In Syntax and Semantics. Vol. 3: Speech Acts, edited by Peter Cole and Jerry L. Morgan, 41-58. New York: Academic Press, 1975.

Hymes, Dell H. “On Communicative Competence.” In Sociolinguistics. Selected Readings, edited by John B. Pride and Janet Holmes, 269-293. Baltimore: Penguin Books, 1972.

Ifantidou, Elly. “Sentential Adverbs and Relevance.” UCL Working Papers in Linguistics 4 (1992): 193-214.

Jary, Mark. “Relevance Theory and the Communication of Politeness.” Journal of Pragmatics 30 (1998): 1-19.

Jary, Mark. “Two Types of Implicature: Material and Behavioural.” Mind & Language 28, no. 5 (2013): 638-660.

Medina, José. “The Relevance of Credibility Excess in a Proportional View of Epistemic Injustice: Differential Epistemic Authority and the Social Imaginary.” Social Epistemology: A Journal of Knowledge, Culture and Policy 25, no. 1 (2011): 15-35.

Michaelian, Kourken. “The Evolution of Testimony: Receiver Vigilance, Speaker Honesty and the Reliability of Communication.” Episteme 10, no. 1 (2013): 37-59.

Mustajoki, Arto. “A Speaker-oriented Multidimensional Approach to Risks and Causes of Miscommunication.” Language and Dialogue 2, no. 2 (2012): 216-243.

Origgi, Gloria. “Epistemic Injustice and Epistemic Trust.” Social Epistemology: A Journal of Knowledge, Culture and Policy 26, no. 2 (2013): 221-235.

Padilla Cruz, Manuel. “Vigilance Mechanisms in Interpretation: Hermeneutical Vigilance.” Studia Linguistica Universitatis Iagellonicae Cracoviensis 133, no. 1 (2016): 21-29.

Padilla Cruz, Manuel. “On the Usefulness of the Notion of ‘Conceptual Competence Injustice’ to Linguistic Pragmatics.” Social Epistemology Review and Reply Collective 6, no. 4 (2017a): 12-19.

Padilla Cruz, Manuel. “Interlocutors-related and Hearer-specific Causes of Misunderstanding: Processing Strategy, Confirmation Bias and Weak Vigilance.” Research in Language 15, no. 1 (2017b): 11-36.

Searle, John. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press, 1969.

Spencer-Oatey, Helen D. “Reconsidering Power and Distance.” Journal of Pragmatics 26 (1996): 1-24.

Sperber, Dan. “Understanding Verbal Understanding.” In What Is Intelligence? edited by Jean Khalfa, 179-198. Cambridge: Cambridge University Press, 1994.

Sperber, Dan. Explaining Culture. A Naturalistic Approach. Oxford: Blackwell, 1996.

Sperber, Dan. “Speakers Are Honest because Hearers Are Vigilant. Reply to Kourken Michaelian.” Episteme 10, no. 1 (2013): 61-71.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. Oxford: Blackwell, 1986.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. 2nd edition. Oxford: Blackwell, 1995.

Sperber, Dan, and Deirdre Wilson. “Beyond Speaker’s Meaning.” Croatian Journal of Philosophy 15, no. 44 (2015): 117-149.

Sperber, Dan, Fabrice Clément, Christophe Heintz, Olivier Mascaro, Hugo Mercier, Gloria Origgi, and Deirdre Wilson. “Epistemic Vigilance.” Mind & Language 25, no. 4 (2010): 359-393.

Wharton, Tim. Pragmatics and Non-verbal Communication. Cambridge: Cambridge University Press, 2009.

Wharton, Tim. “That Bloody so-and-so Has Retired: Expressives Revisited.” Lingua 175-176 (2016): 20-35.

Wilson, Deirdre, and Dan Sperber. “Relevance Theory.” In The Handbook of Pragmatics, edited by Larry Horn and Gregory Ward, 607-632. Oxford: Blackwell, 2004.

Wilson, Deirdre, and Tim Wharton. “Relevance and Prosody.” Journal of Pragmatics 38 (2006): 1559-1579.

[1] Following a relevance-theoretic convention, reference to the speaker will be made through the feminine third person singular personal pronoun, while reference to the hearer will be made through its masculine counterpart.

Author Information: Joshua Bergamin, University of Durham, UK, joshua.bergamin@uqconnect.edu.au

Bergamin, Joshua. “To Know and To Be: Second-Person Knowledge and the Intersubjective Self, A Reply to Talbert.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 43-47.

The pdf of the article includes specific page numbers. Shortlink: http://wp.me/p1Bfg0-3Pd

Image credit: Vexthorm, via Flickr


Bonnie Talbert’s article opens up some fascinating questions stemming from the nature of our second-person knowledge, which led me on a thought-provoking journey from asking what it takes to know another person towards wondering what a person is, in itself. In this reply, I explore Talbert’s claim that getting to know a person is akin to the ‘knowing-how’ of a skill. However, I question her assertion that the propositional ‘knowing-that’ of ‘figuring someone out’ is of secondary importance, noting the important interplay of knowledge-how and knowledge-that in the process of skill-acquisition. I try to resolve this by asking what it is that we are trying to know when we come to know another person, a question that yields some intriguing (if speculative) hypotheses about the nature of our second-person relationships.

Knowing a Person

As Talbert points out, knowing a person differs from other types of knowledge especially insofar as people are “moving targets” (2). Human beings are in a constant state of change. As well as physically growing and ageing, facts about our personalities and lifestyles are also in continual flux. A disciplined child can become a scatterbrained adult, while a mid-life crisis mellows into a graceful old age. We can go from disliking scotch whisky to collecting it, from being thrilled by big city life to annoyed by it, or from being avowed pacifists to hard-headed realpolitikers. And if such changes aren’t hard enough to keep track of, we are at all stages bundles of contradictions, capable of holding and acting on conflicting views, like a chain-smoking doctor who counsels her patients to quit, or a professor of logic whose day is brightened by a fortune cookie.

Knowing someone, we feel, must be more than knowing whether they still like bourbon, or are a smoker, or a hypocrite. Knowing all the facts about what a person might do or say in a situation, Talbert argues, does not help us really get to know them, any more than knowing facts about their hair colour or their shoe size does. It’s arguable that Facebook or Amazon ‘knows’ more about your politics, your wardrobe, even who you’ve got a crush on, than all but your closest friends do. Yet nobody really thinks that such systems (or those with access to them) could actually know you better than your friends do. All the same, we’re still left with an uncertainty about just what the ‘knowing’ at stake consists of.

Talbert’s answer is that getting to know someone is something like learning a skill. Acquiring a skill, goes the argument, is not simply a matter of learning propositional facts about it, but involves the non-propositional knowledge that one gains through immersion in an activity. For example, to cook a good spaghetti carbonara, it’s not enough to know that the sauce contains only the yolk of the egg, or what the ratio of pancetta to garlic is. Really knowing how to mix the eggs and add them to the pasta requires actually practising making the dish, and in that process acquiring what Harry Collins (2010) calls the ‘tacit knowledge’ that helps us judge when to take the garlic out, or whether the pasta is al dente, and other subtleties that can’t easily be captured in propositions. Cooking a really good carbonara, moreover, requires what Collins & Evans (2007) call the “interactional expertise” that you can’t learn from a recipe book, but only through ‘hanging out’ with the right people; that is, with other practising experts who—through conversations, demonstrations, and sharing the act of cooking and eating together—socialise you into the best practices and family secrets, gradually transmitting the subtleties of an art that goes beyond what words can describe.

In this respect, I think Talbert has hit on something quite insightful. Getting to know another person is the intersubjective experience par excellence, and I suspect that the growing ease and comfort with which we interact with someone as we get to know them is more than analogous to our growing sense of confidence as we become familiar with a task. Yet I think that, in highlighting the significance of tacit knowledge, Talbert is overly dismissive of the role of explicit knowledge-that in coming to know another person, as when she argues (pp. 3-4) that trying to ‘figure someone out’ is counterproductive.

Skill Acquisition

In building her account, Talbert draws on work on skill acquisition, especially that of Hubert Dreyfus, who famously argues (2005; 2007; 2013) that too much (deliberative, propositional) thinking interferes with performance. Dreyfus’ (2007, 354) paradigm example is the baseball player Chuck Knoblauch, who at the peak of his career was plagued by rookie errors in simple situations where he had ample time to act. Curiously, however, in tighter situations, where he had no time to think, he completed difficult throws with the expertise that got him into the big leagues in the first place. Dreyfus’ conclusion is that Knoblauch was overthinking his simple throws—focusing too much on the uncountable minutiae of body movements rather than acting holistically, ‘in the flow.’

While I am sympathetic to Dreyfus’ account (and have defended it elsewhere (Bergamin 2017)), it is important to note that it applies to an expert at the peak of their game. The road to becoming an expert, on the other hand, involves quite a lot of thinking and puzzling over knowledge-that (see Dreyfus & Dreyfus 1986, 2005). Dreyfus (1991, 68-9), drawing on Heidegger’s (1962) so-called ‘theory of equipment,’ gives special emphasis to the ‘breakdown’ that occurs during smooth coping and draws our attention to the elements of our action. Separating an egg’s yolk from its white, to take a simple example, takes a bit of sticky practice to do smoothly. While your grandmother might do it almost automatically and with her attention elsewhere, on your first couple of tries it’s helpful to concentrate and think deliberatively about the steps of the task. Even once you’ve mastered it, and can do it ‘without thinking,’ your attention still gets drawn back to the objects and to deliberative thought when things go wrong—say, if the egg doesn’t crack as expected. This interruption breaks the phenomenological ‘flow’ of the task, but also provides fodder for further learning and for refining your skill.

If getting to know a person is something like learning a skill, then the kind of ‘breakdowns’ that help us learn skills should also apply to getting to know someone. A couple’s first fight, for example, is often a defining moment in their relationship. In the flurry of a new romance, it’s easy to feel like we’ve found someone who really ‘gets’ us—everything is exciting, smooth, and flowing. But sooner or later, inevitably, something goes wrong—we say or do things that are unexpected, surprising, hurtful. But then (hopefully), after the fighting and the tears, we talk about it; we step back from the situation and share why we felt the way we felt and said the things we said. We make up, and feel closer, with the feeling that we now know each other a little bit better than before.

Talbert argues that trying to ‘figure someone out’ is counterproductive. And past a certain point, it probably is—trying to psychoanalyse a new partner is like trying to write out a recipe you haven’t yet tried. Yet it doesn’t follow from this that ‘figuring out’ has no place, and indeed, I would argue that it is a necessary element of coming to know someone, with the caveat that it is only part of our ‘practice’ of getting to know them. It’s through a combination of the interactive, shared activity that Talbert recommends and direct questions and reflection that we really get to know a person—that we come to know that when he says he doesn’t mind, he actually really does, or that when she goes quiet, it’s because she’s shy, not rude.

Learning, Knowing, Practicing

Importantly—just as with a skill—as we become proficient, we stop needing to actively ‘work someone out’ and instead learn to read them intuitively. However, as Dreyfus argues, reaching this point requires a combination of reflection and lots of practice.[1] Furthermore, I suspect that we have to start more-or-less from scratch with each new individual we get to know. To hold that actively ‘figuring someone out’ is better left aside implies that getting to know someone is a skill that, once mastered, one could apply to different, particular individuals. Yet that—I believe—comes from a confusion over the object of our knowledge, and is where the ‘skills analogy’ starts to break down.

The skills analogy is confusing in that, unlike the kinds of embodied skills that Dreyfus, Collins, and others discuss, knowing another person has no clear success conditions. Talbert, rightly I believe, rejects simply predicting behaviour, since an impartial psychologist or even an algorithm can do that with some degree of reliability. All the same, knowing some facts about someone must be important, since Talbert also—again, I think, rightly—rejects simple ease-of-interaction, since a gifted psychiatrist, host, or salesperson can ‘read’ a person and make them feel ‘at home’ without knowing anything concrete about them. Furthermore, it isn’t even obvious that getting along well with someone is a sign of actually knowing them—many bad relationships are rooted in the fact that two people know one another too well. If knowing someone is like a skill, then, we must ask, what is the object of that skill? What are we doing when we know another human being?

Knowing a person is, of course, very different to cooking pasta, or to any other skill that we might learn. Most skills, like cooking carbonara or riding a bike, are what Setiya (2014, 12-3), in an interesting reading of Aristotle, calls telic—they are directed at an end (telos), such as impressing a date, or getting to work on time. As Heidegger (1962, 118-9) points out, however, we apply ourselves to such ends not as ends in themselves but ‘in order to’ perform something much more meaningful. These more meaningful, atelic activities, Heidegger argued, are ‘for the sake of’ being a certain kind of person—in an important sense, they define who we are. A good cook, an attentive lover, a punctual worker—each of these discloses an ethical value in a broad sense. They are who we want to be.

Really knowing somebody is an atelic activity—a ‘for the sake of itself.’ One can, of course, get to ‘know’ somebody ‘in order to’ do something else, such as to survey them as an anthropologist, or ‘network’ with them to advance one’s career. But that isn’t really the sense that I think is at stake in Talbert’s discussion. Getting to know somebody—whether intimately or casually, by choice or by happenstance—is something that we continually engage in as an essential part of living a human social life.

If knowing another person is an open-ended, atelic ‘for the sake of itself,’ this leads us towards the fascinating possibility that each relationship is a kind of identity, a way of being in itself. This applies not just to our significant relationships with lovers and close friends, but with casual acquaintances and interactions on the street. Talbert’s conclusion implies the Heideggerian point that our everyday interactions tend not to involve much thinking, but are immersive, intersubjective events. We act (and react) without a lot of deliberative thinking, such that each relationship is different for each of us, and to some extent, out of our control. You and I might be great friends, but while you and Jamie are always laughing together, he and I never manage more than an awkward conversation. It’s as though I’m a different person with you than I am with Jamie—or more precisely, the being-together that you-and-I share is different to the being-with of me-and-Jamie.

This conclusion is, of course, a bit speculative. But it’s an interesting hypothesis that Talbert’s work brings forth for me. If people are ‘moving targets,’ then we are not ‘things’ but ‘processes,’ systems that are in constant flux. To know such a process is not to try to nail down the ever-changing facts about it, but to interact with it. Yet we who interact are ourselves a similar kind of ‘process,’ and in getting to know somebody we are just as much the known as the knower. Our relationships, therefore, are a kind of identity that involves us and yet exceeds us—growing and evolving over time.

References

Benner, Patricia. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Reading, MA: Addison-Wesley, 1984.

Bergamin, Joshua. “Being-in-the-Flow: Expert Coping as Beyond Both Thought and Automaticity.” Phenomenology and the Cognitive Sciences 16, no. 3 (2017): 403-424.

Collins, Harry, and Robert Evans. Rethinking Expertise. Chicago: University of Chicago Press, 2007.

Collins, Harry. Tacit and Explicit Knowledge. Chicago: University of Chicago Press, 2010.

Dreyfus, Hubert. Being-in-the-World: A Commentary on Heidegger’s Being and Time, Division I. Cambridge, MA: MIT Press, 1991.

Dreyfus, Hubert. “Overcoming the Myth of the Mental: How Philosophers Can Profit from the Phenomenology of Everyday Expertise.” Proceedings and Addresses of the American Philosophical Association 79, no. 2 (2005): 47-65.

Dreyfus, Hubert. “The Return of the Myth of the Mental.” Inquiry 50, no. 4 (2007): 352-365.

Dreyfus, Hubert. “The Myth of the Pervasiveness of the Mental.” In Mind, Reason, and Being-in-the-World: The McDowell-Dreyfus Debate, edited by Joseph K. Schear, 15-40. Abingdon: Routledge, 2013.

Dreyfus, Hubert, and Stuart Dreyfus. Mind Over Machine. Oxford: Blackwell, 1986.

Dreyfus, Hubert, and Stuart Dreyfus. “Peripheral Vision: Expertise in Real World Contexts.” Organization Studies 26 (2005): 779-792.

Heidegger, Martin. Being and Time. Translated by John Macquarrie and Edward Robinson, London: SCM Press, 1962.

Setiya, Kieran. “The Midlife Crisis.” Philosophers’ Imprint 14, no. 31 (2014): 1-18.

Talbert, Bonnie. “Overthinking and Other Minds: The Analysis Paralysis.” Social Epistemology (2017): 1-12. doi: 10.1080/02691728.2017.1346933.

[1] It’s significant that Dreyfus & Dreyfus (2005) find that people who are emotionally-invested in their activities—who rejoice in their successes and are despondent at their failures—are more likely to become expert in a certain activity. In the context of getting to know someone, it follows that—as we would expect—we get closer to people that we really care about. Cf. Benner (1984).

Author Information: Zoltan Majdik, North Dakota State University, zoltan.majdik@ndsu.edu

Majdik, Zoltan. “Expertise as Practice: A Response to DeVasto.” Social Epistemology Review and Reply Collective 5, no. 11 (2016): 1-6.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-3hQ

Image credit: Chris Pirillo, via Flickr

A few years ago, Bill Keith and I wrote a paper on rethinking the concept of expertise as a kind of argument, which opened by noting that “expertise” contains an essential tension between authority and democracy.[1] DeVasto’s recent article on expertise prompted me to rethink and extend some of our ideas in light of her argument for focusing more attention on (multiple) ontological and practice-based grounds for expertise. In this response to her paper, I want to suggest that to think of expertise as ontological is to think of expertise as a tension—that the subject matter of expertise across all domains of expert engagement can productively be understood as a kind of tension, and that the practice, the doing, of expertise lies in the resolution of tension. In other words, to think of expertise as a “doing,” as an ontology, all the way down, to be concerned with “patterns of practice”[2] as the ontology of expertise, is to understand expertise as an ongoing tension—with attendant deliberative demands and opportunities toward resolution—that encompasses political (between the authority of the few and a common interest), epistemic (between knowledge that is credentialed, and knowledge that is otherwise acquired), and moral (between legitimating the norms and practices of different groups, based on their rightness in a given context) aspects.

Early in her paper, DeVasto notes that the crux of scholarship on expertise is the “unresolved question of how to determine pertinent experts.”[3] There are numerous ways of trying to resolve this question, by providing structural models of expertise that tend to catalogue expertises in taxonomies, in decision charts, in inclusion networks, or in domains or regimes associated with questions of fact and value. These can be categorized by historical waves, classified by degrees of inclusiveness, parsed by types and degrees of experience, and so on. Most models of determining expertise, including the ones DeVasto cites, do so based on criteria fixed in the past: potential experts either have or have not acquired some kind of knowledge that would grant them expertise in a given situation, either because they have studied such situations or have experienced them. These selection criteria are reasonable in many cases, as they provide some normative stability to the determination of expertise, and so help push back against the kind of democratization of expertise we saw in the second wave of science studies. But they also lack situational flexibility, because they are built on how people have understood or managed exigent problems in the past.

Problems that require expertise, however, can sometimes emerge in new and unique ways, rendering systems of expert classification built on past experience difficult to work with. L’Aquila was precisely such a problem—a fact noted in Carl Herndl’s response to DeVasto as a potential barrier to the generalizability of her argument.[4] Yet I’d argue that maybe the uniqueness of the L’Aquila case is precisely what gives strength to DeVasto’s claims. Her multiple ontologies frame can, I believe, work as a system for “determining pertinent experts” that is more nimble than a system based on past experience can be. Her use of Mol’s somewhat Wittgensteinian approach to ontology, and her argument for multiple ontologies as a guiding framework for expertise, moves toward that goal. Though I might argue with some details,[5] the new materialism approach does open a space for thinking not about legitimate classifications of expertise, but about the constitution of expertise itself in and as a practice.

DeVasto’s rejection of a “politics of who” illustrates this point. Pushing our understanding of expertise toward ontologies, hooked into practices, shifts emphasis from determining who qualifies as expert based on their past experience or knowledge, toward how expertise gets constituted in a situation—the “what” constituted by the practices emerging in a situation. But at times, DeVasto’s mapping of a multiple materialities framework onto the L’Aquila case simply recreates Collins and Evans’ classificatory model. Replacing their epistemic heuristic for classifying expertise with an ontological one—replacing the “who/how” with the “what”—ends up with the same four buckets of expertise, albeit now underwritten by ontological grounds: we have, in DeVasto’s analysis, still interactional ontology, contributory ontology, etc. She argues this point herself: “the types of expertise put forth by Collins and Evans are actually distinct ontologies.”[6] This is, of course, not wrong: these are categories of expertise we encounter, and they are well conceptualized and well mapped onto actual exigent situations. But it raises the question of what we gain by moving to new materialism, practice, and multiple ontologies—what is the upside, if all we do is recreate an existing classificatory system?

DeVasto’s circling-back to Collins and Evans’ categories obscures the fact that shifting our perspective on expertise to its ontology via practice may do more than map onto existing categories. It may give us an out from a rigid classificatory system. “Practice,” after all, isn’t just inert materiality: it is, as DeVasto recognizes, an act, by which materialities are situated, positioned, actuated relative to people and exigencies and constraints. It is how we get from Ontology to multiple ontologies, from “experiencing an earthquake” to “‘doing’ earthquakes.”[7] As DeVasto shows, the value of Mol’s work isn’t simply that it “deconstructs the expert/lay binary,”[8] which, after all, Collins & Evans and many others had already done, and done well. It is that it escapes simply reconstituting it into another expert/lay binary. Mol’s, and DeVasto’s, contributions are meaningful in drawing attention to the legitimization and enactment of expertise as a practice in its own right, not in recreating new systems of classifications. They draw attention to the selection mechanism for choosing experts as being expertise, rather than as a means toward granting legitimacy to a group of experts. “How to determine pertinent experts” is itself what expertise does, is its ontology, its practice, not merely an epistemic heuristic by which we gain knowledge of who the proper experts are. This is what I meant when I referred to expertise as ontological all the way down.

Hence, in DeVasto’s view of expertise grounded in practices, we gain something in how we theorize expertise. Enacting expertise is not to use an existing body of knowledge—static, a priori sets of facts, skills, or experiences—that either fits or does not fit an exigent situation. Enacting expertise is to choose, deconstruct, assemble, test, and legitimize what knowledge best fits a situation. It is, thus, practical knowledge, phronesis, a kind of knowledge-in-making—a grappling with and discerning of not only the epistemic dimension of knowledge as it ought properly to pertain to a problem, but also its structural and moral dimensions. It is a moral practice, in the sense in which Mary Douglas and Aaron Wildavsky talk about moral judgments and the valuing of consequences of risk as questions of social criticism and communal consent,[9] and in the sense that Jürgen Habermas distinguishes moral-practical expressions having to do with the rightness of actions from the instrumentality of discourse that aims to validate truths.[10] The enactment of expertise is to discern and engage tensions between different sets of facts, actors, norms, and skills, legitimizing and justifying their appropriateness relative to a situation. This is a long way of saying that expertise is not only located in the practices of those addressing exigent situations, but that expertise is a kind of practice itself.

DeVasto recognizes all this. As she shows in her conclusion, the fact that “people move among sites of practise” in enacting different ontologies[11] demonstrates the conceptual flexibility of a multiple ontologies framework. More importantly, she argues here for the importance of examining “how science-policy decision-making is conducted rather than remaining focused only on who should be present.” It is precisely this capacity of the theoretical framework she builds for pulling together sometimes related, sometimes divergent sets of practices into an intelligible model of expertise that makes this paper meaningful.

If DeVasto’s use of Mol’s multiple ontologies is to open new ways of recognizing legitimate and pertinent expertise in the practices of people, groups, and institutions, then a next step is to disentangle the “what” of the ontological perspective. Her move from a “politics of who” to a “politics of what” introduces a practical challenge. A practice-based view of selecting experts in situ eschews the use of simple, predetermined procedures: it is made up both of things old and new and of people from outside and within the problem at hand, and of their “thrownness” (to use Heidegger’s fitting term) into an exigent situation. It is constituted by what’s there (both in terms of the objects at hand, and the institutional norms that guide their use and the interactions of people with them) just as much as it constitutes what’s there: practices create new objects, and challenge or reaffirm existing norms, hence altering the landscape of the “there.” To use ontology and practice as a means of “recognizing pertinent experts” requires understanding how such an expertise-as-practice can function.[12]

Doing so is beyond the scope of my response paper, but in the spirit of the Review & Reply Collective’s discussion-centric format, I will make some suggestions. One is to return to the epistemological function of expertise, but consider it within the context of an ontological/practice-centric model. If expertise is a kind of knowledge-in-making, its epistemological function—the information it can furnish for how to address an exigent situation—emerges downstream from the social and linguistic practices that go into resolving tensions about facts, norms, people, and skills. Expertise, so conceived, undeniably has an epistemological function, but it is not reducible to that function. We find this kind of thinking about epistemology and practice in Giambattista Vico’s epistemology, and in particular his notion of a sensus communis. As John Schaeffer outlines, the relationship between sensus communis, language, imagination, and epistemology in Vico is complex.[13] One way to situate them is through the idea of practice, in the sense in which Vico locates concepts of language and knowledge closer to practice than to the kind of logical-deductive, Cartesian reasoning common in his time. “Eloquence,” argues Schaeffer, “does not merely mean speaking well; it means speaking the truth effectively in the public sphere.” Along with prudence, its design is to “make wisdom effective in civic life,” which is where “the community requires that concrete decisions [about matters of probability] be made in specific circumstances.”[14] And that sense of community—of the “what” of community, the “prelogical”[15] awareness of community—comes from a sensus communis that contains “conventional meanings” and “similitudes” which make “community choice possible.”[16]

We are, of course, a long way from the technical discussion of expertise in STS and related fields. But I suggest that broadening our focus to a place like Vico can be instructive, because it offers a deeply linguistic understanding of the kinds of on-the-ground practices that underwrite determinations of expertise in a multiple ontological framework. Concepts like imagination and similitude (precisely defined as in Vico’s work, or extended to places like Wittgenstein’s family resemblance) can serve to make concrete the linguistic, rhetorical structures that are at work in the complex, dynamic practice of expertise outlined by DeVasto and necessary for cases like the L’Aquila one.

A second aspect to consider is audience. Enactments of expertise have audiences and are constituted by audiences—they address an audience, and are granted legitimacy as “being expert” by that audience. Both these directions—the ontological “what” that makes expertise, and the epistemological “how” that conveys knowledge through expertise—work through the complex social contract of trust. We cannot talk about an ontology or an epistemology of expertise without considering the notion that the constitution of expertise, as well as its social/epistemological function, can exist only if experts and their audiences trust each other. And when we talk about expertise and trust, we are talking about Anthony Giddens, who sees both expertise and trust as central to the functioning of a late-modern social structure in which individuals engage with and are engaged by disembedded social institutions, whose norms about life are abstracted from local place by an “emptying out of time and space.”[17]

For Giddens, the facts and norms of social institutions “coordinate social activities without necessary reference to the particularities of place.”[18] Yet, and seemingly paradoxically at first, it is precisely this “‘lifting out’ of social relations from local contexts and their rearticulation across indefinite tracts of time-space”[19] that allows for more coordinated, more precise modes of interaction. Giddens’ theory here speaks to the “place-ness” of expertise: the notion, mentioned briefly above, that the challenge of expertise lies in the fact that it throws together highly local, idiosyncratic, particular features of a place with the generalized knowledge and practices of institutions (be they scientific, legal, economic, or otherwise) that are purposefully abstracted from the characteristics of the local. In fact, expertise for Giddens is the “disembedding mechanism” by which social institutions manage to “bracket time and space [through] technical knowledge.” And expert systems, along with a second disembedding mechanism (symbolic tokens), “depend in an essential way on trust” to function in our late modern space.[20] Hence, to see expertise in place and in practice, without abandoning the important function of institutionalized norms and knowledge as bases for determining expert knowledge, is to see it through a kind of interplay between institutional logic and local agency, mediated by trust. Here, it may be that the practice of crafting trust becomes a critical dimension of enacting expertise.

Considering audience and trust, alongside communality and the historical situatedness of language, as two possible directions for continuing a conversation on expertise may open the scope of academic inquiry on the topic beyond the commonly referenced STS-centric themes. In politics, for example, the role of expertise in the recent “Brexit” vote in the U.K. has been framed as a repudiation of experts, but it might be researched with more nuance from a vantage point that sees expertise as partly constituted by considerations of audience, trust, and community. Whichever way further discussions go, they will benefit from DeVasto’s challenge, and Herndl’s added insights in this forum, to our understandings of expertise as a social practice.

References

DeVasto, Danielle. “Being Expert: L’Aquila and Issues of Inclusion in Science-Policy Decision Making.” Social Epistemology 30, no. 4 (2016): 372–97.

Douglas, Mary, and Aaron B. Wildavsky. Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers. Berkeley: University of California Press, 1982.

Giddens, Anthony. Modernity and Self-Identity: Self and Society in the Late Modern Age. Stanford, CA: Stanford University Press, 1991.

Habermas, Jürgen. The Theory of Communicative Action. Translated by Thomas McCarthy. Vol. 1. Boston, MA: Beacon Press, 1984.

Herndl, Carl G. “Doing and Knowing in the L’Aquila Case.” Social Epistemology Review and Reply Collective 5, no. 6 (2016): 1–6.

Knorr Cetina, Karin. Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press, 1999.

Majdik, Zoltan P., and William M. Keith. “Expertise as Argument: Authority, Democracy, and Problem-Solving.” Argumentation 25, no. 3 (2011): 371-384.

Schaeffer, John D. “Vico’s Rhetorical Model of the Mind: ‘Sensus Communis’ in the ‘De Nostri Temporis Studiorum Ratione’.” Philosophy and Rhetoric 14, no. 3 (1981): 152–67.

[1]. Majdik and Keith, “Expertise as Argument.”

[2]. DeVasto, “Being Expert,” 381.

[3]. Ibid., 374.

[4]. Herndl, “Doing and Knowing in the L’Aquila Case.”

[5]. The notion, for example, that materiality can shift too far from the linguistic and perspectival. Cf., e.g., Knorr Cetina, who shows just how deep the role of language (as “imaginative terminological repertoires” in experimental physics), along with practices, can go in positioning objects in practices and enactments. Knorr Cetina, Epistemic Cultures, 112.

[6]. DeVasto, “Being Expert,” 383.

[7]. Ibid., 384.

[8]. Ibid., 377.

[9]. Douglas and Wildavsky, Risk and Culture, 5–10.

[10]. Habermas, The Theory of Communicative Action, 1:23.

[11]. DeVasto, “Being Expert,” 390.

[12]. Ibid., 374.

[13]. Schaeffer, “Vico’s Rhetorical Model of the Mind,” 152–53.

[14]. Ibid., 154.

[15]. Ibid., 163.

[16]. Ibid., 163.

[17]. Giddens, Modernity and Self-Identity, 17.

[18]. Ibid., 17.

[19]. Ibid., 18.

[20]. Ibid., 18.