Author Information: Patrick Bondy, Wichita State University, patrick.bondy@wichita.edu.

Bondy, Patrick. “Knowledge and Ignorance, Theoretical and Practical.” Social Epistemology Review and Reply Collective 7, no. 12 (2018): 9-14.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-44n


In “What Ignorance Really Is: Examining the Foundations of Epistemology of Ignorance,” Nadja El Kassar brings disparate conceptions of ignorance from recent epistemology into contact with each other, and she proposes an integrated conception of ignorance which aims to capture the important aspects of each of these conceptions. This paper is both useful and stimulating for anyone interested in the subjects of knowledge and ignorance, especially those who might be ignorant of work on ignorance conducted in other branches of epistemology.

El Kassar’s View of Ignorance

El Kassar identifies three broad approaches to ignorance in the epistemology literature which lead up to her proposed integrated conception:

(1) Propositional conception of ignorance

This is the standard approach in epistemology. On this approach, ignorance consists of a subject’s lacking either knowledge of or belief in a true proposition.

(2) Agential conception of ignorance

Agential ignorance goes beyond mere propositional ignorance, in “explicitly includ[ing] the epistemic agent as contributing to and maintaining ignorance” (p.3). Epistemic vices such as arrogance, laziness, and closed-mindedness contribute to this sort of ignorance. On this approach, the particular way in which ignorance is brought about or maintained is viewed as partly constitutive of the ignorance itself.

(3) Structural conception of ignorance

Like the agential conception, this conception of ignorance views the causes of ignorance as partly constitutive of ignorance. Unlike the agential conception, however, the structural conception takes into account belief-forming practices and social structures that go beyond the individual cognizer.

(4) Integrated conception of ignorance

El Kassar argues that each of these other conceptions of ignorance gets at something important, and that they are not reducible to each other. So she proposes her integrated conception, which aims to bring the key features of these approaches together: “Ignorance is a disposition of an epistemic agent that manifests itself in her beliefs – either she has no belief about p or a false belief – and her epistemic attitudes (doxastic attitudes, epistemic virtues, epistemic vices)” (p.7).

In the remainder of this commentary, I will do three things. First, I will briefly argue in defense of the Standard View, on the ground that we can say everything we want to say about ignorance, taking the propositional conception of ignorance as fundamental. Second, I will suggest that proponents of the Standard View of ignorance do not need to choose between viewing ignorance as a lack of knowledge and ignorance as lack of true belief. Just as there are strong and weak senses of “knowledge,” there can be corresponding weak and strong senses of “ignorance.”

Third, I will propose that we should recognize another kind of ignorance, which we might call practical ignorance, which consists of not knowing how to do things. There is a clear way in which practical ignorance is distinct from propositional ignorance, given that knowledge-how and knowledge-that appear to be different kinds of knowledge that are irreducible to each other. But there is also a sense in which practical ignorance can be partly constitutive of propositional ignorance, which is similar to how El Kassar sees agential ignorance as partly constitutive of ignorance in general. Indeed, I will suggest, El Kassar’s integrated view of ignorance might easily be extended to cover practical ignorance as well.

Propositional Ignorance as Fundamental

I want to defend the view that propositional ignorance is the most fundamental kind of ignorance. Viewing ignorance this way is intuitively plausible, and it allows us to say everything we need to say about ignorance.

The claim that propositional ignorance is most fundamental is ambiguous. On the one hand, it might mean that agential and structural ignorance are entirely reducible to it, in the sense that the crucial aspects of agential and structural ignorance as described above, such as the cognitive dispositions of an individual subject or the knowledge-producing institutions extant in a society, are themselves all forms of propositional ignorance or derive from propositional ignorance.

El Kassar notes that that kind of reductivism is implausible, and it is not the view I mean to defend here. Instead, I mean to defend the proposal that “The propositional conception is most fundamental because the second and the third conceptions are not really conceptions of ignorance but rather accounts of different causes of ignorance” (p. 4).

On this view, the only condition that constitutes ignorance is lack of knowledge or true belief, and so all ignorance is propositional ignorance. But propositional ignorance might be brought about in various ways, and it is useful to distinguish the various ways in which it can be brought about or sustained, especially when some of those ways make a person’s or a group’s ignorance particularly dangerous or resilient.

This approach does not aim to denigrate the projects pursued by proponents of agential and structural conceptions of ignorance. It does not even aim to prevent us from talking about different kinds of ignorance as differentiated by their agential or structural causes.

Just as we can categorize propositional knowledge into different kinds based on the subject matter of what is known and the methods by which knowledge in different areas is acquired, all the while acknowledging that these are still all kinds of propositional knowledge, so too we can distinguish kinds of propositional ignorance based on the subject matter and the ways in which ignorance is caused or maintained, while still recognizing these as kinds of propositional ignorance.

El Kassar objects (p. 4) that this proposal misunderstands the agential and structural conceptions of ignorance, for they aim to broaden our view of ignorance, to incorporate more than just propositional ignorance. They view certain kinds of agential or structural causes of ignorance as part of what constitutes ignorance itself. Propositional conceptions of ignorance cannot capture these aspects of ignorance; these aspects of ignorance are not propositional in nature, after all.

But it seems that propositionalists can make two replies here. First, if virtue epistemologists such as Greco (2009) are right, then knowledge itself depends on subjects possessing and exercising certain cognitive abilities. In that case, there are agential aspects to propositional knowledge—and in some cases, to propositional ignorance. So some aspects of agential ignorance can be built into propositional ignorance.

And second, it’s not clear that we need to broaden the conception of ignorance to include things beyond propositional ignorance. Granting that there are aspects of agential and structural conceptions of ignorance that are left out of the account of what ignorance is when we take propositional ignorance as fundamental, it does not follow that we cannot take those aspects of agential and structural ignorance into account at all.

Some kinds of causes of ignorance are worth dwelling on in our theories of knowledge and ignorance. We just don’t need to think of the causes of ignorance as themselves forms of ignorance, or as part of what constitutes ignorance.

So it seems to me that we can still say everything we want to say about what are here called propositional, agential, and structural ignorance, even if we only ultimately count propositional ignorance as ignorance proper, and we count the features of agential and structural ignorance as important causes of ignorance proper but not themselves constitutive of ignorance.

Propositional Ignorance: Lack of Knowledge or True Belief?

El Kassar notes that if we take the propositional conception as fundamental, then we will need to decide whether to take ignorance to consist of a lack of true belief or a lack of knowledge. But perhaps we can have it both ways. As Goldman and Olsson (2009) note, ordinarily, from the fact that S lacks knowledge that p, one may infer that S is ignorant of p. Knowledge and ignorance appear to exhaust the logical space, for a given subject S and true proposition p.

Furthermore, in ordinary English there are strong and weak senses of “knowledge,” with the weak sense meaning simply true belief, and the strong sense meaning Gettier-proof justified true belief. In the weak sense of “knowledge,” ignorance is a lack of knowledge and a lack of true belief, because knowledge and true belief are one and the same, on this conception of knowledge.

In the strong sense of knowledge, on the other hand, a lack of knowledge results from lacking true belief, or from lacking justification, or from being Gettiered. But, Goldman and Olsson argue, lacking justification or being Gettiered do not make a person ignorant of whether p is true. As long as p is true and S believes p, it is incorrect to say that S is ignorant of p.

So Goldman and Olsson plump for the view of ignorance as lack of true belief. But another option is to take their initial point about ignorance as a lack of knowledge at face value. Given that ignorance is a lack of knowledge, and given that there are strong and weak senses of “knowledge,” one would expect that there are also strong and weak senses of “ignorance.” A lack of knowledge in the weak sense would be ignorance in the strong sense, and a lack of knowledge in the strong sense would be ignorance in the weak sense. Because knowledge in the strong sense consists of more than knowledge in the weak sense, less is required to lack knowledge in the strong sense than to lack it in the weak sense.

Practical Ignorance

The proposal here is that ignorance at bottom consists of a lack of knowledge. So far, in line with the Standard View, we have only been considering propositional knowledge: ignorance consists of the existence of a true proposition p, and S’s lacking knowledge that p.

But on the assumption that knowledge-how is not reducible to knowledge-that, it seems useful to have a conception of ignorance which will apply to the lack of knowledge-how.[1] For example, it seems natural enough to say that I am ignorant of how to kick a field goal, or how to speak Mandarin, or how to build a sturdy chair. And if knowledge-how is not just a species of knowledge-that, then my ignorance of these things consists of more than a simple lack of true beliefs about how these things are done: it consists at least in part of my lacking the ability to do them. We can call this kind of ignorance practical ignorance.

Importantly, practical ignorance is not reducible to the agential kind of ignorance discussed above. Although the agential conception takes cognitive abilities and dispositions to be partly constitutive of ignorance, practical ignorance would be much broader, encompassing practical inabilities as well as cognitive inabilities. Further, the agential conception of ignorance draws our attention to ignorance that can sometimes be actively maintained by very sophisticated intellectual abilities, in which case such ignorance does not manifest practical ignorance.

For example, one might have the ability to reinterpret data to support a preferred outlook. That is not a truth-conducive ability, but it is an ability to form desired beliefs, and it is an ability at which people can become quite proficient. In cases where a subject exercises such an ability, she might successfully maintain a distorted or mistaken outlook because of the exercise of practical abilities, not because of practical ignorance.

Like propositional ignorance, practical ignorance can be partly caused or sustained by agential and structural features of a person or a society. For example, practical ignorance can be actively maintained by an individual’s interference in her own development, or by other people’s interference in her development. Social structures geared toward the oppression of segments of the population, or which simply encourage members of certain social groups to participate in some activities and not to participate in others, can also contribute to sustaining people’s practical inabilities.

And, like agential ignorance, practical ignorance can be responsible for maintaining propositional ignorance in individuals or in groups, about individual propositions or about whole domains of knowledge.

For example, the inability to speak local languages can keep victims of human trafficking from gaining knowledge of the kinds of resources that might be available to them. The inability to perform relatively simple arithmetical calculations can prevent an individual from knowing whether she is receiving the correct amount of change in a transaction. The inability to conceptualize certain kinds of behaviour as abusive can sustain a lack of understanding of one’s situation.[2] And so on.

So although practical and propositional ignorance are different kinds of ignorance, on the assumption that know-how and knowledge-that are irreducible to each other, they appear to be susceptible to being intertwined in these ways.

The nature of practical ignorance and its relation to propositional ignorance bears further investigation. One potential feature of El Kassar’s integrated conception of ignorance is that, although it has a doxastic component built in, and so it does not account for practical ignorance as I am conceiving of it, it might be straightforwardly extended to cover practical ignorance as well.

For example, theoretical and practical ignorance might be defined and brought together as follows:

Theoretical ignorance: this would remain as El Kassar formulates her integrated conception of ignorance, as “a disposition of an epistemic agent that manifests itself in her beliefs – either she has no belief about p or a false belief – and her epistemic attitudes (doxastic attitudes, epistemic virtues, epistemic vices)” (p.7).

Practical ignorance: a disposition of an agent that manifests itself in her actions – where S fails to φ, or S does not φ well or properly – and her practical attitudes (ethical and pragmatic attitudes, ethical or practical virtues and vices).

Ignorance in general: combines theoretical and practical ignorance. Ignorance in general would then be: a disposition of an agent that manifests itself in an agent’s beliefs or actions – whereby she fails to succeed in achieving the characteristic goal of the activity in question (believing truly, knowing, or successfully carrying out some practical action) – and in her epistemic and practical attitudes (doxastic attitudes, ethical attitudes, epistemic and practical virtues and vices).

Of course, this is only a suggestion about how practical ignorance could be conceptualized. I have argued in defense of the Standard View of (theoretical) ignorance, so this sort of unified integrated conception is not available to me. Nor do I mean to suggest that El Kassar is committed to developing her view of ignorance in this direction.

Still, given a commitment to El Kassar’s integrated view of ignorance, and given that we should also want to give an account of practical ignorance, this seems like a plausible way to deliver a unified treatment of ignorance.

Contact details: patrick.bondy@wichita.edu

References

El Kassar, Nadja (2018). “What Ignorance Really Is: Examining the Foundations of Epistemology of Ignorance.” Social Epistemology. DOI: 10.1080/02691728.2018.1518498.

Goldman, Alvin and Olsson, Erik (2009). “Reliabilism and the Value of Knowledge.” In: A. Haddock, A. Millar, and D. Pritchard, eds., Epistemic Value. Oxford: Oxford University Press, 19-41.

Greco, John (2009). “Knowledge and Success from Ability.” Philosophical Studies 142 (1): 17-26.

Fricker, Miranda (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.

Peels, Rik (2010). “What Is Ignorance?” Philosophia 38: 57–67.

[1] Peels (2010) briefly considers the possibility of practical ignorance, only to set it aside and focus on propositional ignorance.

[2] I have in mind here Fricker’s (2007) treatment of hermeneutical injustice.

Author Information: Joshua Earle, Virginia Tech, jearle@vt.edu.

Earle, Joshua. “Deleting the Instrument Clause: Technology as Praxis.” Social Epistemology Review and Reply Collective 7, no. 10 (2018): 59-62.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-42r


Damien Williams, in his review of Dr. Ashley Shew’s new book Animal Constructions and Technical Knowledge (2017), foregrounds in his title what is probably the most important thesis in Shew’s work. Namely that in our definition of technology, we focus too much on the human, and in doing so we miss a lot of things that should be considered technological use and knowledge. Williams calls this “Deleting the Human Clause” (Williams, 2018).

I agree with Shew (and Williams), for all the reasons they state (and potentially some more as well), but I think we ought to go further. I believe we should also delete the instrument clause.

Beginning With Definitions

There are two sets of definitions that I want to work with here. One is the set of definitions argued over by philosophers (and referenced by both Shew and Williams). The other is a more generic, “common-sense” definition that sits, mostly unexamined, in the back of our minds. Both generally invoke both the human clause (obviously with the exception of Shew) and the instrument clause.

Taking the “common-sense” definition first, we, generally speaking, think of technology as the things that humans make and use. The computer on which I write this article, and on which you, ostensibly, read it, is a technology. So is the book, or the airplane, or the hammer. In fact, the more advanced the object is, the more technological it is. So while the hammer might be a technology, it generally gets relegated to a mere “tool” while the computer or the airplane seems to be more than “just” a tool, and becomes more purely technological.

Peeling apart the layers therein would be interesting, but is beyond the scope of this article; you get the idea. Our technologies are what give us functionalities we might not have otherwise. The more functionalities an object gives us, the more technological it is.

The academic definitions of technology are a bit more abstract. Joe Pitt calls technology “humanity at work,” foregrounding the production of artefacts and the iteration of old into new (2000, pg 11). Georges Canguilhem called technology “the extension of human faculties” (2009, pg 94). Philip Brey, referencing Canguilhem (but also Marshall McLuhan, Ernst Kapp, and David Rothenberg), takes this definition up as well, extending it to include not just action but intent, and refining various ways of considering extension and what counts as a technical artefact (sometimes, like Soylent Green, it’s people) (Brey, 2000).

Both the common sense and the academic definitions of technology use the human clause, which Shew troubles. But even if we alter instances of “human” to “human or non-human agents,” there is still something that chafes. What about things that do work for us in the world but do not rely on artefacts or tools? Are those things still technology?

While each definition focuses on objects, none talks about what form those objects need to take or what function they need to perform in order to count as technologies. Brey, hewing close to Heidegger, even talks about how using people as objects, as means to an end, would put them within the definition of technology (Ibid, pg. 12). But this also puts people in problematic power arrangements and elides the agency of the people being used toward an end. It also raises the question: can we use ourselves to an end? Does that make us our own technology?

This may be the ultimate danger that Heidegger warned us about, but I think it’s a category mistake. Instead of objectifying agents into technical objects, if, instead we look at the exercise of agency itself as what is key to the definition of technology, things shift. Technology no longer becomes about the objects, but about the actions, and how those actions affect the world. Technology becomes praxis.

Technology as Action

Let’s think through some liminal cases that first inspired this line of thought: Language and Agriculture. It’s certainly arguable whether either of these things fits any definition of technology other than mine (praxis). Don Ihde would definitely disagree with me, as he explicitly states that one needs a tool or an instrument for technology, though he hews close to my definition in other ways (Ihde, 2012; 2018). If Pitt’s definition, “humanity at work,” is true, then agriculture is, indeed, a technology . . . even without the various artifactual apparatus that normally surround it.

Agriculture can be done entirely by hand, without any tools whatsoever; it is iterative and produces a tangible output: food, in greater quantity/efficiency than would normally exist. By Brey’s and Canguilhem’s definitions, it should fit as well, as agriculture extends our intent (for greater amounts of food more locally available) into action and the production of something not otherwise existing in nature. Agriculture is basically (and I’m being too cute by half with this, I know) the intensification of nature. It is, in essence, moving things rather than creating or building them.

Language is a slightly harder case, but one I want to explicitly include in my definition; I would also say it fits Pitt’s and Brey’s definitions, IF we delete or ignore the instrument clause. While language does not produce any tangible artefacts directly (one might point to the book or the written word, but most languages have never been written at all), it is the single most fundamental way in which we extend our intent into the world.

It is work, it moves people and things, it is constantly iterative. It is often the very first thing that is used when attempting to affect the world, and the only way by which more than one agent is able to cooperate on any task (I am using the broadest possible definition of language, here). Language could be argued to be the technology by which culture itself is made possible.

There is another way in which focusing on the artefact or the tool or the instrument is problematic. Allow me to illustrate with the favorite philosophical example: the hammer. A question: is a hammer built, but never used, technology[1]? If it is, then all of the definitions above no longer hold. An unused hammer is not “at work,” nor does it iterate, as Pitt’s definition requires. An unused hammer extends nothing vs. Canguilhem and Brey, unless we count the potential for use, the potential for extension.

But if we do, what potential uses count and which do not? A stick used by an ape (or a person, I suppose) to tease out some tasty termites from their dirt-mound home is, I would argue (and so does Shew), a technological use of a tool. But is the stick, before it is picked up by the ape, or after it is discarded, still a technology or a tool? It always already had the potential to be used, and can be used again after it is discarded. But such a definition would count anything and everything as technology, which renders the definition meaningless. So, the potential for use cannot be enough to make something technology.

Perhaps instead the unused hammer is just a tool? But again, the stick example renders the definition of “tool” in this way meaningless. Again, only while in use can we consider a hammer a tool. Certainly the hammer, even unused, is an artefact. The being of an artefact is not reliant on use, merely on being fashioned by an external agent. Thus if we can imagine actions without artefacts that count as technology, and artefacts that do not count as technology, then including artefacts in one’s definition of technology seems logically unsound.

Theory of Technology

I believe we should separate our terms: tool, instrument, artefact, and technology. Too often these get conflated. Central, to me, is the idea that technology is an active thing, it is a production. Via Pitt, technology requires/consists in work. Via Canguilhem and Brey it is extension. Both of these are verbs: “work” and “extend.” Techné, the root of the word technology, is about craft, making and doing; it is about action and intent.

It is about bringing-forth or poiesis (à la Heidegger, 2003; Haraway, 2016). To this end, I propose that we define “technology” as praxis, as the mechanisms or techniques used to address problems. “Tools” are artefacts in use, toward the realizing of technological ends. “Instruments” are specific arrangements of artefacts and tools used to bring about particular effects, particularly inscriptions which signify or make meaning of the artefacts’ work (à la Latour, 1987; Barad, 2007).

One critique I can foresee is that it would seem that almost any action taken could thus be considered technology. Eating, by itself, could be considered a mechanism by which the problem of hunger is addressed. I answer this by maintaining that there be at least one step between the problem and solution. There needs to be the putting together of theory (not just desire, but a plan) and action.

So, while I do not consider eating, in and of itself, (a) technology, producing a meal — via gathering, cooking, hunting, or otherwise — would be. This opens up some things as non-human uses of technology that even Shew didn’t consider, such as a wolf pack’s coordinated hunting, or dolphins’ various clever ways to get rewards from their handlers.

So, does treating technology as praxis help? Does extracting the confounding definitions of artefact, tool, and instrument from the definition of technology help? Does this definition include too many things, and thus lose meaning and usefulness? I posit this definition as a provocation, and I look forward to any discussion the readers of SERRC might have.

Contact details: jearle@vt.edu

References

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Brey, P. (2000). Theories of Technology as Extension of Human Faculties. Metaphysics, Epistemology, and Technology. Research in Philosophy and Technology, 19, 1–20.

Canguilhem, G. (2009). Knowledge of Life. Fordham University Press.

Haraway, D. J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press.

Heidegger, M. (2003). The Question Concerning Technology. In D. Kaplan (Ed.), Readings in the Philosophy of Technology. Rowan & Littlefield.

Ihde, D. (2012). Technics and praxis: A philosophy of technology (Vol. 24). Springer Science & Business Media.

Ihde, D., & Malafouris, L. (2018). Homo faber Revisited: Postphenomenology and Material Engagement Theory. Philosophy & Technology, 1–20.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.

Pitt, J. C. (2000). Thinking about technology. Seven Bridges Press.

Shew, A. (2017). Animal Constructions and Technological Knowledge. Lexington Books.

Williams, D. (2018). “Deleting the Human Clause: A Review of Ashley Shew’s Animal Constructions and Technological Knowledge.” Social Epistemology Review and Reply Collective 7, no. 2: 42-44.

[1] This is the philosophical version of “For sale: Baby shoes. Never worn.”

Author Information: Stephen John, Cambridge University, sdj22@cam.ac.uk

John, Stephen. “Transparency, Well-Ordered Science, and Paternalism.” Social Epistemology Review and Reply Collective 7, no. 7 (2018): 30-33.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Zf


Should a physician tell you that you have cancer, even if she thinks this would cause you needless distress? Of course she should! How, though, should she convey that news? Imagine three, stylised options. Dr Knowsbest is certain you should have your cancer operated on, so tells you the news in a way which vividly highlights the horrors of cancer, but downplays the risk of an operation.

Dr Neutral, by contrast, simply lists all of the facts about your cancer, your prognosis, your possible treatment options, their likely benefits and risks and so on. Finally, Dr Sensitive reports only those aspects of your condition and those risks of surgery which she judges that you, given your values and interests, would want to know about.

Many Methods to Reveal

We can, I hope, all agree that Dr Knowsbest’s communicative strategies and choices are ethically problematic, because she acts in a paternalistic manner. By contrast, Dr Neutral does not act paternalistically. In this regard, at least, Dr Neutral’s strategies are ethically preferable to Dr Knowsbest’s strategies. What about the choice between Knowsbest and Sensitive? In one sense, Dr Sensitive acts paternalistically, because she controls and structures the flow of information with the aim of improving your well-being.

However, there is an important difference between Dr Sensitive and Dr Knowsbest; the former aims solely to improve your epistemic well-being, such that you can better make a choice which aligns with your own values, whereas the latter aims to influence or override your judgment. Knowsbest’s “moral paternalism” is wrong for reasons which are absent in the case of Sensitive’s “epistemic paternalism” (Ahlstrom-Vij, 2013).

Therefore, plausibly, both the Neutral and Sensitive strategies are ethically preferable to Knowsbest’s. What, though, of the choice between these two communicative strategies? First, I am not certain that it is even possible to report all the facts in a neutral way (for more, see below). Second, even if it is possible, Dr Sensitive’s strategy seems preferable; her strategy, if successful, positively promotes – as opposed to merely failing to interfere with – your ability to make autonomous choices.

At least at an abstract, ideal level, then, we have good reason to want informants who do more than merely list facts, but who are sensitive to their audiences’ epistemic situation and abilities and their evaluative commitments; we want experts who “well-lead” us. In my recent paper in Social Epistemology, I argued that certain widely-endorsed norms for science communication are, at best, irrelevant, and, at worst, dangerous (John 2018). We should be against transparency, openness, sincerity and honesty.

It’s a Bit Provocative

One way of understanding that paper is as following from the abstract ideal of sensitive communication, combined with various broadly sociological facts (for example, about how audiences identify experts). I understand why my article put Moore in mind of a paradigm case of paternalism. However, reflection on the hypothetical example suggests we should also be against “anti-paternalism” as a norm for science communication; not because Knowsbest’s strategy is fine, but, rather, because the term “paternalism” tends to bundle together a wide range of practices, not all of which are ethically problematic, and some of which promote – rather than hinder – audiences’ autonomy.

Beyond the accusation of paternalism, Moore’s rich and provocative response focuses on my scepticism about transparency. While I argued that a “folk philosophy of science” can lead audiences to distrust experts who are, in fact, trustworthy, he uses the example of HIV-AIDS activism to point to the epistemic benefits of holding scientists to account, suggesting that “it is at least possible that the process of engaging with and responding to criticism can lead to learning on both sides and the production, ultimately, of better science”. I agree entirely that such a dynamic is possible; indeed, his example shows it does happen!

However, conceding this possibility does not show that we must endorse a norm of transparency, because, ultimately, the costs may still be greater than the benefits. Much here depends on the mechanisms by which transparency and engagement are enacted. Moore suggests one model for such engagement, via the work of “trust proxies”, such as ACT-UP. As he acknowledges, however, although proxies may be better placed than laypeople to identify when science is flawed, this creates a new problem for the non-expert: to adapt a distinction from Goldman’s work, we must decide which “putative proxies” are “true proxies” (Goldman, 2001).

Plausibly, this problem is even harder than Goldman’s problem of distinguishing the “true experts” among the “putative experts”, because in the latter case we have some sense of the credentials, and so on, which signal experthood. Again, I am tempted to say, then, that it is unclear that transparency, openness or engagement will necessarily lead to better, rather than worse, socio-epistemic outcomes.

Knowledge From Observation and Practice

Does that mean my arguments against transparency are in the clear? No. First, many of the issues here turn on the empirical details; maybe careful institutional design can allow us to identify trustworthy trust-proxies, whose work promotes good science. Second, and more importantly, the abstract model of sensitive communication is an ideal. In practice, it is easy to fail to meet this ideal, in ways which undermine, rather than respect or promote, hearers’ autonomy.

For example, rather than tailor her communication to what her audiences do care about, Dr Sensitive might tailor what she says to what she thinks they ought to care about; as a result, she might leave out information which is relevant to their choices given their values, while including information which is irrelevant. An influential strain in recent philosophy of science suggests that non-epistemic value judgments do and must run deep in practices of justification; as such, even a bald report of what a study showed may, implicitly, encode or endorse value judgments which are not shared by the audience (Douglas, 2000).

Reporting claims when, and only when, they meet a certain confidence level may, for example, implicitly rely on assumptions about the relative disvalue of false positives and false negatives; in turn, it may be difficult to justify such assumptions without appeal to non-epistemic values (John, 2015). As such, even Dr Neutral may be unable to communicate in ways which are truly neutral with respect to her audience’s values. In short, it may be hard to hand over our epistemic autonomy to experts without also handing over our moral autonomy.

This problem means that trustworthy research requires more than that the researchers’ claims are true: the claims must be, at least, neutral with respect to, and, at best, aligned with, audiences’ values. Plausibly, greater engagement and transparency may help ensure such value alignment. One might understand the example of ACT-UP along these lines: activist engagement ensured that scientists did “good science” not only in a narrow, epistemic sense of “good” – more, or more accurate, data and hypotheses were generated – but in the broader sense of being “well-ordered”, producing knowledge that better reflected the concerns and interests of the broader community (Kitcher, 2003).

Whether engagement improves epistemic outcomes narrowly construed is a contingent matter, heavily dependent on the details of the case. By contrast, engagement may be necessary for science to be “well-ordered”. In turn, transparency may be necessary for such engagement. At least, that is the possibility I would push were I to criticise my own conclusions in line with Moore’s concerns.

A Final Sting

Unfortunately, there is a sting in the tail. Developing effective frameworks for engagement and contestation may require us to accept that scientific research is not, and cannot be, fully “value free”. To the extent that such an assumption is a commitment of our “folk philosophy of science”, developing the kind of rigorous engagement which Moore wants may do as much to undermine as to promote our trust in true experts. Moore is surely right that the dynamics of trust and distrust are even more complex than my paper suggested; unfortunately, they might be more complex still than he suggests.

Contact details: sdj22@cam.ac.uk

References

Ahlstrom-Vij, K. (2013). Epistemic paternalism: A defence. Springer.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of science, 67(4), 559-579.

Goldman, A. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research, 63(1), 85–110.

John, S. (2015). Inductive risk and the contexts of communication. Synthese, 192(1), 79-96.

John, S. (2018). Epistemic trust and the ethics of science communication: against transparency, openness, sincerity and honesty. Social Epistemology, 32(2), 75-87.

Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.

Author Information: Adam Riggio, SERRC Digital Editor, serrc.digital@gmail.com

Riggio, Adam. “Action in Harmony with a Global World.” Social Epistemology Review and Reply Collective 7, no. 3 (2018): 20-26.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Vp

Image by cornie via Flickr / Creative Commons

 

Bryan Van Norden has become about as notorious as an academic philosopher can be while remaining a virtuous person. His notoriety came with a column in the New York Times that took the still-ethnocentric approach of many North American and European university philosophy departments to task. The condescending and insulting dismissal of great works of thought from cultures and civilizations beyond Europe and European-descended North America should scandalize us. That it does not is to the detriment of academic philosophy’s culture.

Anyone who cares about the future of philosophy as a tradition should read Taking Back Philosophy and take its lessons to heart, if one does not agree already with its purpose. The discipline of philosophy, as practiced in North American and European universities, must incorporate all the philosophical traditions of humanity into its curriculum and its subject matter. It is simple realism.

A Globalized World With No Absolute Hierarchies

I am not going to argue for this decision, because I consider it obvious that this must be done. Taking Back Philosophy is a quick read, an introduction to a political task that philosophers, no matter their institutional homes, must support if the tradition is going to survive beyond the walls of universities increasingly co-opted by destructive economic, management, and human resources policies.

Philosophy as a creative tradition cannot survive in an education economy built on the back of student debt, where institutions’ priorities are set by a management class yoked to capital investors and corporate partners, which prioritizes the proliferation of countless administrative-only positions while highly educated teachers and researchers compete ruthlessly for poverty wages.

With this larger context in mind, Van Norden’s call for the enlargement of departments’ curriculums to cover all traditions is one essential pillar of the vision to liberate philosophy from the institutions that are destroying it as a viable creative process. In total, those four pillars are 1) universal accessibility, economically and physically; 2) community guidance of a university’s priorities; 3) restoring power over the institution to creative and research professionals; and 4) globalizing the scope of education’s content.

Taking Back Philosophy is a substantial brick through the window of the struggle to rebuild our higher education institutions along these democratic and liberating lines. Van Norden regularly publishes work of comparative philosophy that examines many problems of ethics and ontology using texts, arguments, and concepts from Western, Chinese, and Indian philosophy. But if you come to Taking Back Philosophy expecting more than a brick through those windows, you’ll be disappointed. One chapter walks through a number of problems as examples, but the sustained conceptual engagement of a creative philosophical work is absent. Only the call to action remains.

What a slyly provocative call it is – the book’s last sentence, “Let’s discuss it . . .”

Unifying a Tradition of Traditions

I find it difficult to write a conventional review of Taking Back Philosophy, because so much of Van Norden’s polemic is common sense to me. Of course, philosophy departments must be open to primary material from all the traditions of the human world, not just the Western. I am incapable of understanding why anyone would argue against this, given how globalized human civilization is today. For the context of this discussion, I will consider a historical and a technological aspect of contemporary globalization. Respectively, these are the fall of the European military empires, and the incredible intensity with which contemporary communications and travel technology integrates people all over Earth.

We no longer live in a world dominated by European military colonial empires, so re-emerging centres of culture and economics must be taken on their own terms. The Orientalist presumption, which Edward Said spent a career mapping, that there is no serious difference among Japanese, Malay, Chinese, Hindu, Turkic, Turkish, Persian, Arab, Levantine, or Maghreb cultures is not only wrong, but outright stupid. Orientalism as an academic discipline thrived for the centuries it did only because European weaponry intentionally and persistently kept those cultures from asserting themselves.

Indigenous peoples – throughout the Americas, Australia, the Pacific, and Africa – who have been the targets of cultural and eradicative genocides for centuries now claim and agitate for their human rights, as well as inclusion in the broader human community and species. I believe most people of conscience are appalled and depressed that these claims are controversial at all, and even seen by some as a sign of civilizational decline.

The impact of contemporary technology I consider an even more important factor than the end of imperialist colonialism in the imperative to globalize the philosophical tradition. Despite the popular rhetoric of contemporary globalization, the human world has been globalized for millennia. Virtually since urban life first developed, long-distance international trade and communication began as well.

Here are some examples. Some of the first major cities of ancient Babylon achieved their greatest economic prosperity through trade with cities on the south of the Arabian Peninsula, and as far east along the Indian Ocean coast as Balochistan. From 4000 to 1000 years ago, Egyptian, Roman, Greek, Persian, Arab, Chinese, Mongol, Indian, Bantu, Malian, Inca, and Anishinaabeg peoples, among others, built trade networks and institutions stretching across continents.

Contemporary globalization is different in the speed and quantity of commerce, and diversity of goods. It is now possible to reach the opposite side of the planet in a day’s travel, a journey so ordinary that tens of millions of people take these flights each year. Real-time communication is now possible between anywhere on Earth with broadband internet connections thanks to satellite networks and undersea fibre-optic cables. In 2015, the total material value of all goods and commercial services traded internationally was US$21-trillion. That’s a drop from the previous year’s all-time (literally) high of US$24-trillion.[1]

Travel, communication, and productivity have never been so massive or intense in all of human history. The major control hubs of the global economy are no longer centralized in a small set of colonial powers, but spread across a variety of economic centres throughout the world, depending on industry: from Beijing, Moscow, Mumbai, Lagos, and Berlin to Tokyo and Washington; the oil fields of Kansas, the Dakotas, Alberta, and Iraq; and the coltan, titanium, and tantalum mines of Congo, Kazakhstan, and China.

All these proliferating lists express a simple truth – all cultures of the world now legitimately claim recognition as equals, as human communities sharing our Earth as we hollow it out. Philosophical traditions from all over the world are components of those claims to equal recognition.

The Tradition of Process Thought

So that is the situation forcing a recalcitrant and reactionary academy to widen its curricular horizons – do so, or face irrelevancy in a global civilization with multiple centres all standing as civic equals in the human community. This is where Van Norden himself leaves us. Thankfully, he understands that a polemic ending with a precise program immediately becomes empty dogma, a conclusion which taints the plausibility of an argument. His point is simple – that the academic discipline must expand its arms. He leaves open the more complex questions of how the philosophical tradition itself can develop as a genuinely global community.

Process philosophy is a relatively new philosophical tradition, which can adopt the classics of Daoist philosophy as broad frameworks and guides. By process philosophy, I mean the research community that has grown around Gilles Deleuze and Félix Guattari as primary innovators of their model of thought – a process philosophy that converges with an ecological post-humanism. The following are some essential aspects of this new school of process thinking, each principle in accord with the core concepts of the foundational texts of Daoism, Dao De Jing and Zhuang Zi.

Ecological post-humanist process philosophy is a thorough materialism, but it is an anti-reductive materialism. All that exists is bodies of matter and fields of force, whose potentials include everything for which Western philosophers have often felt obligated to postulate a separate substance over and above matter, whether calling it mind, spirit, or soul.

As process philosophy, the emphasis in any ontological analysis is on movement, change, and relationships instead of the more traditional Western focus on identity and sufficiency. If I can refer to examples from the beginning of Western philosophy in Greece, process thought is an underground movement with the voice of Heraclitus critiquing a mainstream with the voice of Parmenides. Becoming, not being, is the primary focus of ontological analysis.

Process thinking therefore is primarily concerned with potential and capacity. Knowledge, in process philosophy, as a result becomes inextricably bound with action. This unites a philosophical school identified as “Continental” in common-sense categories of academic disciplines with the concerns of pragmatist philosophy. Analytic philosophy took up many concepts from early 20th century pragmatism in the decades following the death of John Dewey. These inheritors, however, remained unable to overcome the paradoxes stymieing traditional pragmatist approaches, particularly how to reconcile truth as correspondence with knowledge having a purpose in action and achievement.

A solution to this problem of knowledge and action was developed in the works of Barry Allen during the 2000s. Allen built an account of perception rooted in contemporary research in animal behaviour, human neurology, and the theoretical interpretations of evolution in the works of Stephen Jay Gould and Richard Lewontin.

His first analysis, focussed as it was on the dynamics of how human knowledge spurs technological and civilizational development, remains humanistic. Arguing from discoveries of how profoundly the plastic human brain is shaped in childhood by environmental interaction, Allen concludes that successful or productive worldly action itself constitutes the correspondence of our knowledge and the world. Knowledge does not consist of a private reserve of information that mirrors worldly states of affairs, but the physical and mental interaction of a person with surrounding processes and bodies to constitute those states of affairs. The plasticity of the human brain and our powers of social coordination are responsible for the peculiarly human mode of civilizational technology, but the same power to constitute states of affairs through activity is common to all processes and bodies.[2]

“Water is fluid, soft, and yielding. But water will wear away rock, which is rigid and cannot yield. Whatever is soft, fluid, and yielding will overcome whatever is rigid and hard.” – Lao Zi
The Burney Falls in Shasta County, Northern California. Image by melfoody via Flickr / Creative Commons

 

Action in Phase With All Processes: Wu Wei

Movement of interaction constitutes the world. This is the core principle of pragmatist process philosophy, and as such brings this school of thought into accord with the Daoist tradition. Ontological analysis in the Dao De Jing is entirely focussed on vectors of becoming – understanding the world in terms of its changes, movements, and flows, as each of these processes integrate in the complexity of states of affairs.

Not only is the Dao De Jing a foundational text in what is primarily a process tradition of philosophy, but it is also primarily pragmatist. Its author Lao Zi frames ontological arguments in practical concerns, as when he writes, “The most supple things in the world ride roughshod over the most rigid” (Dao De Jing §43). This is a practical and ethical argument against a Parmenidean conception of identity requiring stability as a necessary condition.

What cannot change cannot continue to exist, as the turbulence of existence will overcome and erase what can exist only by never adapting to the pressures of overwhelming external forces. What can only exist by being what it now is, will eventually cease to be. That which exists in metamorphosis and transformation has a remarkable resilience, because it is able to gain power from the world’s changes. This Daoist principle, articulated in such abstract terms, is in Deleuze and Guattari’s work the interplay of the varieties of territorializations.

Knowledge in the Chinese tradition, as a concept, is determined by an ideal of achieving harmonious interaction with an actor’s environment. Knowing facts of states of affairs – including their relationships and tendencies to spontaneous and proliferating change – is an important element of comprehensive knowledge. Nonetheless, Lao Zi describes such catalogue-friendly factual knowledge as, “Those who know are not full of knowledge. Those full of knowledge do not know” (Dao De Jing §81). Knowing the facts alone is profoundly inadequate to knowing how those facts constrict and open potentials for action. Perfectly harmonious action is the model of the Daoist concept of Wu Wei – knowledge of the causal connections among all the bodies and processes constituting the world’s territories understood profoundly enough that self-conscious thought about them becomes unnecessary.[3]

Factual knowledge is only a condition of achieving the purpose of knowledge: perfectly adapting your actions to the changes of the world. All organisms’ actions change their environments, creating physically distinctive territories: places that, were it not for my action, would be different. In contrast to the dualistic Western concept of nature, the world in Daoist thought is a complex field of overlapping territories whose tensions and conflicts shape the character of places. Fulfilled knowledge in this ontological context is knowledge that directly conditions your own actions and the character of your territory to harmonize most productively with the actions and territories that are always flowing around your own.

Politics of the Harmonious Life

The Western tradition, especially in its current sub-disciplinary divisions of concepts and discourses, has treated problems of knowledge as a domain separate from ethics, morality, politics, and fundamental ontology. Social epistemology is one field of the transdisciplinary humanities that unites knowledge with political concerns, but its approaches remain controversial in much of the conservative mainstream academy. The Chinese tradition has fundamentally united knowledge, moral philosophy, and all fields of politics, especially political economy, since the popular eruption of Daoist thought in the Warring States period 2300 years ago. Philosophical writing throughout eastern Asia since then has operated in this field of thought.

As such, Dao-influenced philosophy has much to offer contemporary progressive political thought, especially the new communitarianism of contemporary social movements with their roots in Indigenous decolonization, advocacy for racial, sexual, and gender liberation, and 21st century socialist advocacy against radical economic inequality. In terms of philosophical tools and concepts for understanding and action, these movements have dense forebears, but a recent tradition.

The movement for economic equality and a just globalization draws on Antonio Gramsci’s introduction of radical historical contingency to the marxist tradition. While contemporary feminism’s phenomenological and testimonial principles and concepts are extremely powerful and viscerally rooted in the lived experience of subordinated – what Deleuze and Guattari called minoritarian – people as groups and individuals, its explicit resources are likewise a century-old storehouse of discourse. Indigenous liberation traditions draw from a variety of philosophical traditions lasting millennia, but the ongoing systematic and systematizing revival is almost entirely a 21st century practice.

Antonio Negri, Rosi Braidotti, and Isabelle Stengers’ masterworks unite an analysis of humanity’s destructive technological and ecological transformation of Earth and ourselves to develop a solution to those problems rooted in communitarian moralities and politics of seeking harmony while optimizing personal and social freedom. Daoism offers literally thousands of years of work in the most abstract metaphysics on the nature of freedom in harmony and flexibility in adaptation to contingency. Such conceptual resources are of immense value to these and related philosophical currents that are only just beginning to form explicitly in notable size in the Western tradition.

Van Norden has written a book that is, for philosophy as a university discipline, a wake-up call to this obstinate branch of the Western academy. The world around you is changing, and if you hold so fast to the contingent borders of your tradition, your territory will be overwritten, trampled, torn to bits. Live and act harmoniously with the changes that are coming. Change yourself.

It isn’t so hard to read some Lao Zi for a start.

Contact details: serrc.digital@gmail.com

References

Allen, Barry. Knowledge and Civilization. Boulder, Colorado: Westview Press, 2004.

Allen, Barry. Striking Beauty: A Philosophical Look at the Asian Martial Arts. New York: Columbia University Press, 2015.

Allen, Barry. Vanishing Into Things: Knowledge in Chinese Tradition. Cambridge: Harvard University Press, 2015.

Bennett, Jane. Vibrant Matter: A Political Ecology of Things. Durham: Duke University Press, 2010.

Betasamosake Simpson, Leanne. As We Have Always Done: Indigenous Freedom Through Radical Resistance. Minneapolis: University of Minnesota Press, 2017.

Bogost, Ian. Alien Phenomenology, Or What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press, 2012.

Braidotti, Rosi. The Posthuman. Cambridge: Polity Press, 2013.

Deleuze, Gilles. Bergsonism. Translated by Hugh Tomlinson and Barbara Habberjam. New York: Zone Books, 1988.

Chew, Sing C. World Ecological Degradation: Accumulation, Urbanization, and Deforestation, 3000 B.C. – A.D. 2000. Walnut Creek: Altamira Press, 2001.

Negri, Antonio, and Michael Hardt. Assembly. New York: Oxford University Press, 2017.

Parikka, Jussi. A Geology of Media. Minneapolis: University of Minnesota Press, 2015.

Riggio, Adam. Ecology, Ethics, and the Future of Humanity. New York: Palgrave MacMillan, 2015.

Stengers, Isabelle. Cosmopolitics I. Translated by Robert Bononno. Minneapolis: University of Minnesota Press, 2010.

Stengers, Isabelle. Cosmopolitics II. Translated by Robert Bononno. Minneapolis: University of Minnesota Press, 2011.

Van Norden, Bryan. Taking Back Philosophy: A Multicultural Manifesto. New York: Columbia University Press, 2017.

World Trade Organization. World Trade Statistical Review 2016. Retrieved from https://www.wto.org/english/res_e/statis_e/wts2016_e/wts2016_e.pdf

[1] That US$3-trillion drop in trade was largely the proliferating effect of the sudden price drop of human civilization’s most essential good, crude oil, to just less than half of its 2014 value.

[2] A student of Allen’s arrived at this conclusion in combining his scientific pragmatism with the French process ontology of Deleuze and Guattari in the context of ecological problems and eco-philosophical thinking.

[3] This concept of knowledge as perfectly harmonious but non-self-conscious action also conforms to Henri Bergson’s concept of intuition, the highest (so far) form of knowledge that unites the perfect harmony in action of brute animal instinct with the self-reflective and systematizing power of human understanding. This is a productive way for another creative contemporary philosophical path – the union of vitalist and materialist ideas in the work of thinkers like Jane Bennett – to connect with Asian philosophical traditions for centuries of philosophical resources on which to draw. But that’s a matter for another essay.

Author Information: Damien Williams, Virginia Tech, damienw7@vt.edu

Williams, Damien. “Deleting the Human Clause: A Review of Ashley Shew’s Animal Constructions and Technological Knowledge.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 42-44.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uh

Animal Constructions and Technological Knowledge is Ashley Shew’s debut monograph and in it she argues that we need to reassess and possibly even drastically change the way in which we think about and classify the categories of technology, tool use, and construction behavior. Drawing from the fields of anthropology, animal studies, and philosophy of technology and engineering, Shew demonstrates that there are several assumptions made by researchers in all of these fields—assumptions about intelligence, intentionality, creativity and the capacity for novel behavior.

Many of these assumptions, Shew says, were developed to guard against the hazard of anthropomorphizing the animals under investigation, and to prevent those researchers ascribing human-like qualities to animals that don’t have them. However, this has led to us swinging the pendulum too far in the other direction, engaging in “a kind of speciesist arrogance” which results in our not ascribing otherwise laudable characteristics to animals for the mere fact that they aren’t human.[1]

Shew says that we consciously and unconsciously appended a “human clause” to all of our definitions of technology, tool use, and intelligence, and this clause’s presumption—that it doesn’t really “count” if humans aren’t the ones doing it—is precisely what has to change.

In Animal Constructions, Shew’s tone is both light and intensely focused, weaving together extensive notes, bibliography, and index with humor, personal touches, and even poignancy, all providing a sense of weight and urgency to her project. As she lays out the pieces of her argument, she is extremely careful about highlighting and bracketing out her own biases, throughout the text; an important fact, given that the whole project is about the recognition of assumptions and bias in human behavior. In Chapter 6, when discussing whether birds can be said to understand what they’re doing, Shew says that she

[relies] greatly on quotations…because the study’s authors describe crow tool uses and manufacture using language that is very suggestive about crows’ technological understanding and behaviors—language that, given my particular philosophical research agenda, might sound biased in paraphrase.[2]

In a chapter 6 endnote, Shew continues to touch on this issue of bias and its potential to become prejudice, highlighting the difficulty of cross-species comparison, and noting that “we also compare the intelligence of culturally and economically privileged humans with that of less privileged humans, a practice that leads to oppression, exploitation, slavery, genocide, etc.”[3] In the conclusion, she elaborates on this somewhat, pointing out the ways in which biases about the “right kinds” of bodies and minds have led to embarrassments and atrocities in human history.[4] As we’ll see, this means that the question of how and why we categorize animal construction behaviors as we do has implications which are far more immediate and crucial than research projects.

The content of Animal Constructions is arranged in such a way as to make a strong case for the intelligence, creativity, and ingenuity of animals, throughout, but it also provides several contrast cases in which we see that there are several animal behaviors which might appear to be intentional, but which are the product of instinct or the extended phenotype of the species in question.[5] According to Shew, these latter cases do more than act as exceptions that test the rule; they also provide the basis for reframing the ways in which we compare the behaviors of humans and nonhuman animals.

If we can accept that construction behavior exists on a spectrum or continuum with tool use and other technological behaviors, and we can come to recognize that animals such as spiders and beavers make constructions as a part of their instinctual, DNA-based, phenotypical natures, then we can begin to interrogate whether the same might not be true for the things that humans make and do. If we can understand this, then we can grasp that “the nature of technology is not merely tied to the nature of humanity, but to humanity in our animality” (emphasis present in original).[6]

Using examples from animal studies reaching back several decades, Shew discusses experimental observations of apes, monkeys, cetaceans (dolphins and whales), and birds. Each example set moves further away from the kind of animals we see as “like us,” and details how each group possesses traits and behaviors humans tend to think only exist in ourselves.[7] Chimps and monkeys test tool-making techniques and make plans; dolphins and whales pass hunting techniques on to their children and cohort, have names, and observe social rituals; birds make complex tools for different scenarios, adapt them to novel circumstances, and learn to lie.[8]

To further discuss the similarities between humans and other animals, Shew draws on theories about the relationship between body and mind, such as embodiment and extended mind hypotheses from philosophy of mind, which say that the kind of mind we are is intimately tied to the kinds of bodies we are. She pairs this with work from disability studies which forwards the conceptual framework of “bodyminds,” saying that body and mind aren’t simply linked; they’re the same.[9] This is the culmination of her descriptions of animal behaviors and a prelude to a redefinition and reframing of the concepts of “technology” and “knowledge.”

Editor's note - My favourite part of this review roundtable is scanning through pictures of smart animals

Dyson the seal. Image by Valerie via Flickr / Creative Commons

 

In the book’s conclusion, Shew suggests placing all the products of animal construction behavior on a two-axis scale, where the x-axis is “know-how” (the knowledge it takes to accomplish a task) and the y-axis is “thing knowledge” (the information about the world that gets built into constructed objects).[10] When we do this, she says, we can see that every made thing, be it object or social construct (a passage with important implications), falls somewhere outside of the 0, 0 point.[11] This is Shew’s main thrust throughout Animal Constructions: that humans are animals and our technology is not what sets us apart or makes us special; in fact, it may be the very thing that most deeply ties us to our position within the continuum of nature.

For Shew, we need to be less concerned about the possibility of incorrectly thinking that animals are too much like us, and far more concerned that we’re missing the ways in which we’re still and always animals. Forgetting our animal nature and thinking that there is some elevating, extra special thing about humans—our language, our brains, our technologies, our culture—is arrogant in the extreme.

While Shew says that she doesn’t necessarily want to consider the moral implications of her argument in this particular book, it’s easy to see how her work could be foundational to a project about moral and social implications, especially within fields such as animal studies or STS.[12] And an extension like this would fit perfectly well with the goal she lays out in the introduction, regarding her intended audience: “I hope to induce philosophers of technology to consider animal cases and induce researchers in animal studies to think about animal tool use with the apparatus provided by philosophy of technology.”[13]

In Animal Constructions, Shew has built a toolkit filled with fine arguments and novel arrangements that should easily provide the instruments necessary for anyone looking to think differently about the nature of technology, engineering, construction, and behavior in the animal world. Shew says that “A full-bodied approach to the epistemology of technology requires that assumptions embedded in our definitions…be made clear,”[14] and Animal Constructions is most certainly a mechanism by which to deeply delve into that process of clarification.

Contact details: damienw7@vt.edu

References

Shew, Ashley. Animal Constructions and Technological Knowledge. Lanham, MD: Lexington Books, 2017.

[1] Ashley Shew, Animal Constructions and Technological Knowledge, p. 107.

[2] Ibid., p. 73.

[3] Ibid., p. 89, n. 7.

[4] Ibid., pp. 107-122.

[5] Ibid., pp. 107-122.

[6] Ibid., p. 19.

[7] On page 95, Shew makes brief mention of various instances of octopus tool use; more of these examples would really drive the point home.

[8] Shew, pp. 35-51, 53-65, 67-89.

[9] Ibid., p. 108.

[10] Ibid., pp. 110-119.

[11] Ibid., p. 118.

[12] Ibid., p. 16.

[13] Ibid., p. 11.

[14] Ibid., p. 105.

Author Information: Emma Stamm, Virginia Tech, stamm@vt.edu

Stamm, Emma. “Retooling ‘The Human.’” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 36-40.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3SW

Ashley Shew’s Animal Constructions and Technological Knowledge challenges philosophers of technology with the following provocation: What would happen if we included tools made and used by nonhuman animals in our broad definition of “technology”?

Throughout Animal Constructions, Shew makes the case that this is more than simply an interesting question. It is, she says, a necessary interrogation within a field that may well be suffering from a sort of speciesist myopia. Blending accounts from a range of animal case studies — including primates, cetaceans, crows, and more — with pragmatic theoretical analysis, Shew demonstrates that examining animal constructions through a philosophical lens not only expands our awareness of the nonhuman world, but has implications for how humans should conceive of their own relationship with technology.

At the beginning of Animal Constructions, Shew presents us with “the human clause,” her name for “the idea that human beings are the only creatures that can have or do use technology” (14). This misconception stems from the notion of homo faber, “(hu)man the maker” (14), which “sits at the center of many definitions of technology… (and) is apparent in many texts theorizing technology” (14).

It would appear that this precondition for technology, long taken as dogma by technologists and philosophers alike, is less stable than has often been assumed. Placing influential ideas from philosophers of technology in dialogue with empirical field and (to a lesser extent) laboratory studies conducted on animals, Shew argues that any thorough philosophical account of technology not only might, but must include objects made and used by nonhuman animals.

Animal Constructions and Technological Knowledge lucidly demonstrates this: by the conclusion, readers may wonder how the intricate ecosystem of animal tool-use has been so systematically excluded from philosophical treatments of the technical. Shew has accomplished much in recasting a disciplinary norm as a glaring oversight, although the oversight may be a forgivable one, considering the skill set required to achieve the book’s goals. The author’s ambitions demand not only fluency with interdisciplinary research methods, but acute sensitivity to each of the disciplines the book mobilizes.

Animal Constructions is a philosophical text wholly committed to representing science and technology on their own terms while speaking to a primarily humanities-based audience, a balance its author strikes gracefully. Indeed, Shew’s transitions from the purely descriptive to the interpretive are, for the most part, seamless. For example, in her chapter on cetaceans, she examines the case of dolphins trained to identify man-made objects of a certain size category (60), noting that the success of this initiative indicates that dolphins have the human-like capacity to think in abstract categories. This interpretation feels natural and very reasonable.

Importantly, the studies selected are neither conceptually simple nor cherry-picked to serve her argument. A chapter titled “Spiderwebs, Beaver Dams, and Other Contrast Cases” (91) explores research on animal constructions that do not entirely fit the author’s definitions of technology. Here, it is revealed that while this topic is necessarily complicated for techno-philosophers, these complexities do not foreclose the potential for the nonhuman world to provide humans with a greater awareness of technology in theory and practice.

Ambiguous Interpretations

That being said, in certain parts, the empirical observations Shew uses to make her argument seem questionable. In a chapter on ape and primate cases, readers are given the tale of Santino, a chimpanzee in a Swedish zoo with the pesky habit of storing stones specifically to throw at visitors (40). Investigators declared this behavior “the first unambiguous evidence of forward-planning in a nonhuman animal” (40), a claim that may seem spurious, since many of us have witnessed dogs burying bones to dig up in the future, or squirrels storing food for winter.

However, as with every case study in the book, the story of Santino comes from well-documented, formal research, none of which was conducted by the author herself. If claims like these were discovered to be erroneous, that would be a flaw in the underlying research, not in the book itself. Moreover, so many examples are used that the larger arguments of Animal Constructions will hold up even if parts of the science on which it relies come to be revised.

In making the case for animals so completely, Animal Constructions and Technological Knowledge is a success. The book also makes a substantial contribution with the methodological frameworks it gives to those interested in extending its project. Animal Constructions is as much conceptual cartography as it is a work of persuasion: Shew not only orients readers to her discipline (she does not assume readerly familiarity with its academic heritage) but provides a map that philosophers may use to situate the nonhuman in their own reflection on technology. This is largely why Animal Constructions is such a notable text for 21st century philosophy, as so many scholars are committed to rethinking “the human” in the wake of recent innovations in technoscience.

Animal Knowledge

Animal Constructions is of particular interest to critical and social epistemologists. Its opening chapters introduce a handful of ideas about what defines technical knowledge, concepts that bear on the author’s assessment of animal activity. Historically, Shew writes, philosophers of technology have furnished us with two types of accounts of technical knowledge. The first sees technology as constituting a unique case for philosophers (3).

In this view, the philosophical concerns of technology cannot be reduced to those of science (or, indeed, any domain of knowledge to which technology is frequently seen as subordinate). “This strain of thought represents a negative reaction to the idea that philosophy is the handmaiden of science, that technology is simply ‘applied science,’” she writes (3). It is a line of reasoning that relies on a careful distinction between “knowing how” and “knowing that,” claiming that technological knowledge is, principally, skillfulness in the former: know-how, or knowledge about “making or doing something” (3), as opposed to the latter, “textbook” knowledge. Here, philosophy of technology is demarcated from philosophy of science in that it exists outside the realm of theoretical epistemologies, i.e., knowledge bodies that have been abstracted from contextual application.

If “know-how” is indeed the foundation for a pragmatic philosophy of technology, the discipline would seem to openly embrace animal tools and constructions in its scope. After all, animals clearly “know how” to engage the material world. However, as Shew points out, most technology philosophers who abide by this dictum in fact lean heavily on the human clause. “This first type of account nearly universally insists that human beings are the sole possessors of technical knowledge” (4), she says, referencing the work of philosophers A. Rupert Hall, Edwin T. Layton, Walter Vincenti, Carl Mitcham, and Joseph C. Pitt (3) as evidence.

The human clause is also present in the second account, although in a far less deterministic form. This camp has roots in the philosophy of science (6) and “sees knowledge as embodied in the objects themselves” (6). Here, Shew draws from the theorizations of Davis Baird, whose concept “thing knowledge” — “knowledge that is encapsulated in devices or otherwise materially instantiated” (6) — recurs throughout the book’s chapters specifically devoted to animal studies (chapters 4, 5, 6 and 7).

Scientific instruments are offered as perhaps the most exemplary cases of “thing knowledge,” but specialized tools made by humans are far from the only knowledge-bearing objects. The parameters of “thing knowledge” allow for more generous interpretations: Shew offers that Baird’s ideas include “know-how that is demonstrated or instantiated by the construction of a device that can be used by people or creatures without the advanced knowledge of its creators” (6). This is a wide category indeed, one that can certainly accommodate animal artefacts.

Image from Sergey Rodovnichenko via Flickr / Creative Commons

 

The author adapts this understanding of thing knowledge, along with Davis Baird’s five general ideals for knowledge (detachment, efficacy, longevity, connection, and objectivity; 6), as a scale within which some artefacts made and used by animals may be thought of as “technologies” and others not. Positioned against “know-how,” “thing knowledge” serves as the other axis for this framework (112-113). Equally considered is the question of whether animals can set intentions and engage in purpose-driven behavior. Shew suggests that animal constructions which result from responses to stimuli, instinctive behavior, or other byproducts of evolutionary processes may not count as technology in the same way that artefacts which seem to come from purposiveness and forward-planning would (6-7).

Noting that intentionality is a tenuous issue in animal studies (because we can’t interview animals about their reasons for making and using things), Shew indicates that observations on intentionality can, at least in part, be inferred by exploring related areas, including “technology products that encode knowledge,” “problem-solving,” and “innovation” (9). These characteristics are taken up throughout each case study, albeit in different ways and to different ends.

At its core, the manner in which Animal Constructions grapples with animal cognition as a precursor to animal technology is an epistemological inquiry into the nonhuman. In the midst of revealing her aims, Shew writes: “this requires me to address questions about animal minds — whether animals set intentions and how intentionality evolved, whether animals are able to innovate, whether they can problem solve, how they learn — as well as questions about what constitutes technology and what constitutes knowledge” (9). Her answer to the animal-specific queries is a clear “yes,” although this yes comes with multiple caveats.

Throughout the text, Shew notes the propensity of research and observation to alter objects under study, clarifying that our understanding of animals is always filtered through a human lens. With a nod to Thomas Nagel’s famous essay “What Is It Like To Be A Bat?” (34), she maintains that we do not, in fact, know what it is like to be a chimpanzee, crow, spider or beaver. However, much more important to her project is the possibility that caution around perceived categorical differences, often foregrounded in the name of scholarly self-reflexivity, can hold back understanding of the nonhuman.

“In our fear of anthropomorphization and desire for a sparkle of objectivity, we can move too far in the other direction, viewing human beings as removed from the larger animal kingdom,” she declares (16).

Emphasizing kinship and closeness over remoteness and detachment, Shew’s pointed proclamations about animal life rest on the overarching “yes:” yes, animals solve problems, innovate, and set intentions. They also transmit knowledge culturally and socially. Weaving these observations together, Shew suggests that our anthropocentrism represents a form of bias (108); as with all biases, it stifles discourse and knowledge production for the fields within which it is imbricated — here, technological knowledge.

While this work explicitly pertains to technology, the lingering question of “what constitutes knowledge overall?” does not vanish in the details. Shew’s take on what constitutes animal knowledge has immediate relevance to work on knowledge made and manipulated by nonhumans. By the book’s end, it is evident that animal research can help us unhinge “the human clause” from our epistemology of the technical, facilitating a radical reinvestigation of both tool use and materially embodied knowledge.

Breaking Down Boundaries

But its approach has implications for taxonomies that divide not only humans and animals, but humans, animals, and entities outside the animal kingdom. Although it is beyond the scope of this text, the methods of Animal Constructions can easily be applied to digital “minds” and artificial general intelligence, along with plant and fungus life. (One can imagine a smooth transition from a discussion of spider web-spinning, p. 92, to the casting of spores by algae and mushrooms.) In that it excavates taxonomies and affirms the violence done by categorical delineations, Animal Constructions bears surface resemblance to the work of Michel Foucault and Donna Haraway. However, its commitment to positive knowledge places it in a tradition that more boldly supports the possibilities of knowing than do the legacies of Foucault and Haraway. That is to say, the offerings of Animal Constructions are not designed to self-deconstruct or ironically self-reflect.

In its investigation of the flaws of anthropocentrism, Animal Constructions implies a deceptively straightforward question: what work does “the human clause” do for us? In other words, what has led “the human” to become so inexorably central to our technological and philosophical consciousness? Shew does not address this head-on, but she does give readers plenty of material to begin answering it for themselves. And perhaps they should: while the text resists ethical statements, there is an ethos to this particular question.

Applied at the societal level, an investigation of the roots of “the human clause” could be leveraged toward democratic ends. If we do, in fact, include tools made and used by nonhuman animals in our definition of technology, it may mar the popular image of technological knowledge as a sort of “magic” or erudite specialization only accessible to certain types of minds. There is clear potential for this epistemological position to be advanced in the name of social inclusivity.

Whether or not readers detect a social project among the conversations engaged by Animal Constructions, its relevance to future studies is undeniable. The maps provided by Animal Constructions and Technological Knowledge do not tell readers where to go, but will certainly come in useful for anybody exploring the nonhuman territories of the 21st century. Indeed, Animal Constructions and Technological Knowledge is not only a substantive offering to philosophy of technology, but a set of tools whose true power may only be revealed in time.

Contact details: stamm@vt.edu

References

Shew, Ashley. Animal Constructions and Technological Knowledge. Lanham, MD: Lexington Books, 2017.

Author Information: Bonnie Talbert, Harvard University, USA, btalbert@fas.harvard.edu

Talbert, Bonnie. “Paralysis by Analysis Revisited.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 6-9.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Sh

Illustration by Lemuel Thomas from the 1936 Chesapeake and Ohio Railway Calendar.
Image by clotho39 via Flickr / Creative Commons

 

In his reply to my article “Overthinking and Other Minds: the Analysis Paralysis” (2017), Joshua Bergamin (2017) offers some fascinating thoughts about the nature of our knowledge of other people.

Bergamin is right in summarizing my claim that knowing another person fundamentally involves a know-how, and that knowing all the facts there are to know about a person is not enough to constitute knowing her. But, he argues, conscious deliberate thinking is useful in getting to know someone just as it is useful in learning any type of skill.

Questions of Ability

The example he cites is that of separating an egg’s yolk from its white: expert cooks can do it almost automatically, while the novice in the kitchen needs to pay careful, conscious attention to her movements in order to get it right. This example is useful for several reasons. It highlights the fact that learning a skill requires effortful attention while engaging in an activity. It is one thing to think or read about how to separate an egg’s white from its yolk; it is quite another thing to practice it, even if it is slow going and clumsy at first. The point is that practice, rather than reflection, is what one has to do in order to learn how to smoothly complete the activity, even if the first attempts require effortful attention.[1]

On this point Bergamin and I are in agreement. My insistence that conscious deliberate reflection is rarely a good way to get to know someone is mostly targeted at the kinds of reflection one does “in one’s own head”. My claim is not that we never consciously think about other people, but that consciously thinking about them without their input is not a good way to get to know them. This leads to another, perhaps more important point, which is that the egg case is dissimilar from getting to know another person in some fundamental ways.

Unlike separating an egg, knowing how to interact with a person requires a back and forth exchange of postures, gestures, words, and other such signals. It is not possible for me to figure out in advance how to interact with you and then simply execute those actions; I have to allow for a dynamic exchange of actions originating from each of us. With the egg, or any inanimate object, I am the only agent causing the sequence of events. With another person, there are two agents, and I cannot simply decide how to make the interaction work the way I want it to; I have to have your cooperation. This makes knowing another person a different kind of enterprise than knowing other kinds of things.[2]

I maintain that most of the time, interactions with others are such that we do not need to consciously be thinking about what is going on. In fact, the behavioral, largely nonverbal signals that are sent nearly instantaneously to participants in a conversation occur so quickly that there is rarely time to reflect on them. Nevertheless, Bergamin’s point is that in learning an activity, and thus by extension, in getting to know another person as we learn to interact with her, we may be more conscious of our actions than we are once we know someone well and the interactions “flow” naturally.

Knowing Your Audience

I do not think this is necessarily at odds with my account. Learning how to pace one’s speech to a young child when one is used to speaking to adults might take some effortful attention, and the only way to get to the point where one can have a good conversation (if there is such a thing) with a youngster is to begin by paying attention to the speed at which one talks. I still think that once one no longer has to think about it, she will be better able to glean information from the child, and will not have her attention divided between what the child is doing and how she herself sounds.

It is easier to get to know someone if you are not focused on what you have to do to hold up your end of the conversation. But more than whether we are consciously or unconsciously attending to our actions in an interaction, my point is that reflection is one-sided while interaction is not, and it is interaction that is crucial for knowing another person. In interaction, whether our thought processes are unconscious or conscious, their epistemic function is such that they allow us to coordinate our behavior with another person’s. This is the crucial distinction from conscious deliberation that occurs in a non-interactive context.

Bergamin claims that “breakdowns” in flow are more than just disruptive; rather, they provide opportunities to learn how to better execute actions, both in learning a skill and in getting to know another person. And it is true that in relationships, a fight or disagreement can often shed light on the underlying dynamics that are causing tension. But unlike the way you can learn from a few misses how to crack an egg properly, you cannot easily decide how to fix your actions in a relationship without allowing for input from the other party.

Certain breakdowns in communication, or interruptions of the “flow” of a conversation, can help us know another person better insofar as they alert us to situations in which things are not going smoothly. But further thinking does not always get us out of the problem; further interacting does. You cannot sort it out in your head without input from the other person.

My central claim is that knowing another person requires interaction and that the interactive context is constitutively different from contexts that require one-sided deliberation rather than back and forth dynamic flows of behavioral signals and other information. However, I also point out that propositional knowledge of various sorts is necessary for knowing another person.

Bergamin is correct to point out that in my original essay I do not elaborate on what if anything propositional, conscious deliberative thinking can add to knowing another person. But elsewhere (2014) I have argued that part of what it means to know someone is to know various things about her and that when we know someone, we can articulate various propositions that capture features of her character.

In the essay under discussion, I focus on the claim that propositional knowledge is not sufficient for knowing another person and that we must start with the kind of knowledge that comes from direct interaction if we are to claim that we know another person. We do also gain useful and crucial propositional knowledge from our interactions as well as from other sources that are also part of our knowledge of others, but without the knowledge that comes only from interaction we would ordinarily claim to know things about a person, rather than to know her.

Bergamin is also right in asserting that my account implies that our interactions with others do not typically involve much thinking in the traditional sense. They are, as he speculates, “immersive, intersubjective events…such that each relationship is different for each of us and to some extent out of our control.” This is partly true. While I might share a very different relationship to Jamie than you do, chances are that we can both recognize certain features of Jamie as being part of who he is. I was struck by this point at a recent memorial service when people with very different relationships spoke about their loved one, impersonating his accent, his frequently used turns of phrase, his general stubbornness, generosity, larger-than-life personality, and other features that everyone at the service could recognize, no matter whether the relationship was strictly professional, familial, casual, lasting decades, etc.

I have tentatively spelled out an account (2014) that suggests that with people we know, there are some things that only the people in the relationship share, such as knowledge of where they had lunch last week and what was discussed. But there is also knowledge that is shared beyond that particular relationship that helps situate that relationship vis-à-vis other, overlapping relationships; i.e., while I share a unique relationship with my mother, and so does my sister-in-law, we can both recognize some features of her that are the same for both of us. Further, my sister-in-law knows that I am often a better judge of what my mother wants for her birthday, since I have known my mother longer and can easily tell that she does not mean it when she says she does not want any gifts this year.

Bergamin’s concluding thoughts about the Heideggerian nature of my project are especially insightful, and I too am still working on the speculative implications of my account, which posits that (in Bergamin’s words), “If people are ‘moving targets,’ then we are not ‘things’ but ‘processes,’ systems that are in constant flux. To know such a process is not to try to nail down the ever-changing facts about it, but involves interacting with it. Yet we who interact are ourselves a similar kind of ‘process,’ and in getting to know somebody we are just as much the known as the knower. Our relationships, therefore, are a kind of identity, that involves us and yet exceeds us — growing and evolving over time.” My hope is that this is a project on which we and many other scholars will continue to make progress.

Contact details: btalbert@fas.harvard.edu

References

Bergamin, Joshua. “To Know and To Be: Second-Person Knowledge and the Intersubjective Self, A Reply to Talbert.” Social Epistemology Review and Reply Collective 6, no. 10 (2017): 43-47.

Clarey, Christopher. “Olympians Use Imagery as Mental Training.” New York Times, February 22, 2014, https://www.nytimes.com/2014/02/23/sports/olympics/olympians-use-imagery-as-mental-training.html

Talbert, Bonnie. “Knowing Other People: A Second-person Framework.” Ratio 28, no. 2 (2014): 190–206.

Talbert, Bonnie. “Overthinking and Other Minds: The Analysis Paralysis.” Social Epistemology 31, no. 6 (2017): 1-12.

[1] There is some research showing that conscious thoughtful reflection, indeed “visualization,” can help a person perform an activity better. Visualization has been used to help promote success in sports, business, personal habits, and the like. Process visualization, which is sometimes used with varying degrees of success in athletes, is interesting for my purposes because it does seem to help in performing an activity, or to help with the know-how involved in some athletic endeavors. I do not know why this is the case, and I am a bit skeptical of some of the claims used in this line of reasoning. But I do not think we could use process visualization to help with our interactions with others and get the same kind of results, for the actions of another person are much more unpredictable than the final hill of the marathon or the dismount of a balance beam routine. It is also useful to note that some sports are easier than others to visualize, namely those that are most predictable. For more on this last point and on how imagery can be used to enhance athletic performance, see Christopher Clarey’s “Olympians Use Imagery as Mental Training” (2014).

[2] This leads to another point that is not emphasized in my original essay but perhaps should have been. Insofar as I liken getting to know another person to the “flow” one can experience in certain sports, I do not sufficiently point out that “flow” in some sports, namely those that involve multiple people, involves something much more similar to the “know-how” involved in getting to know another person than it does in sports where there is only one person involved. Interestingly, “team sports” and other multi-person events are not generally cited as activities whose success can be significantly improved by visualization.

Author Information: Manuel Padilla Cruz, University of Seville, mpadillacruz@us.es

Cruz, Manuel Padilla. “Conceptual Competence Injustice and Relevance Theory, A Reply to Derek Anderson.” Social Epistemology Review and Reply Collective 6, no. 12 (2017): 39-50.

The pdf of the article gives specific page numbers. Shortlink: https://wp.me/p1Bfg0-3RS

Contestants from the 2013 Scripps National Spelling Bee. Image from Scripps National Spelling Bee, via Flickr / Creative Commons

 

Derek Anderson (2017a) has recently differentiated conceptual competence injustice and characterised it as the wrong done when, on the grounds of the vocabulary used in interaction, a person is believed not to have a sophisticated or rich conceptual repertoire. His most interesting, insightful and illuminating work induced me to propose incorporating this notion into the field of linguistic pragmatics as a way of conceptualising an undesired and unexpected perlocutionary effect: the attribution of a lower level of communicative or linguistic competence. Such an attribution may be drawn from a perception of seemingly poor performance stemming from a lack of the words necessary to refer to specific elements of reality, or from misuse of the adequate ones (Padilla Cruz 2017a).

Relying on the cognitive pragmatic framework of relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004), I also argued that such perlocutionary effect would be an unfortunate by-product of the constant tendency to search for the optimal relevance of intentional stimuli like single utterances or longer stretches of discourse. More specifically, while aiming for maximum cognitive gain in exchange for a reasonable amount of cognitive effort, the human mind may activate or access assumptions about a language user’s linguistic or communicative performance, and feed them as implicated premises into inferential computations.

Although those assumptions might not really have been intended by the language user, they are made manifest by her[1] behaviour and may be exploited in inference, even if at the hearer’s sole responsibility and risk. Those assumptions are weak implicated premises and their interaction with other mentally stored information yields weakly implicated conclusions (Sperber and Wilson 1986/1995; Wilson and Sperber 2004). Since their content pertains to the speaker’s behaviour, they are behavioural implicatures (Jary 2013); since they negatively impact on an individual’s reputation as a language user, they turn out to be detrimental implicatures (Jary 1998).

My proposal about the benefits of the notion of conceptual competence injustice for linguistic pragmatics drew an immediate reply from Anderson (2017b). He considers that the intention underlying my comment on his work was “[…] to model conceptual competence injustice within relevance theory” and points out that my proposal “[…] must be tempered with the proper understanding of that phenomenon as a structural injustice” (Anderson 2017b: 36; emphasis in the original). Furthermore, he also claims that relevance theory “[…] does not intrinsically have the resources to identify instances of conceptual competence injustice” (Anderson 2017b: 36).

In what follows, I purport to clarify two issues. Firstly, my suggestion to incorporate conceptual competence injustice into linguistic pragmatics necessarily relies on a much broader, more general and loosened understanding of this notion. Even if such an understanding deprives it of some of its essential, defining conditions –namely, the existence of different social identities and of matrices of domination– it may somehow capture the ontology of the unexpected effects that communicative performance may result in: an unfair appraisal of capacities.

Secondly, my intention when commenting on Anderson’s (2017a) work was not actually to model conceptual competence injustice within relevance theory, but to show that this pragmatic framework is well equipped to account for the cognitive processes and the reasons underlying the unfortunate negative effects that may be alluded to with the notion I am advocating for. Therefore, I will argue that relevance theory does in fact have the resources to explain how some injustices stemming from communicative performance originate. To conclude, I will elaborate on the factors that may lead to wrong ascriptions of conceptual and lexical competence.

What Is Conceptual Competence Injustice?

As a sub-type of epistemic injustice (Fricker 2007), conceptual competence injustice arises in scenarios where there are privileged epistemic agents who (i) are prejudiced against members of specific social groups, identities or minorities, and (ii) exert power as a means of oppression. Such agents make “[…] false judgments of incompetence [which] function as part of a broader, reliable pattern of marginalization that systematically undermines the epistemic agency of members of an oppressed social identity” (Anderson 2017b: 36). Therefore, conceptual competence injustice is a way of denigrating individuals as knowers of specific domains of reality and, ultimately, of disempowering, discriminating against and excluding them, so it “[…] is a form of epistemic oppression […]” (Anderson 2017b: 36).

Lack or misuse of vocabulary may result in wronging if hearers conclude that certain concepts denoting specific elements of reality –objects, animals, actions, events, etc.– are not available to particular speakers or that they have erroneously mapped those concepts onto lexical items. When this happens, speakers’ conceptualising and lexical capacities could be deemed to be below alleged or actual standards. Since lexical competence is one of the pillars of communicative competence (Hymes 1972; Canale 1983; Bachman 1990; Celce-Murcia et al. 1995), that judgement could contribute to downgrading speakers on an alleged scale of communicative competence and, consequently, to regarding them as partially or fully incompetent.

According to Medina (2011), competence is a comparative and contrastive property. On the one hand, skilfulness in some domain may be compared to that in (an)other domain(s), so a person may be very skilled in areas like languages, drawing, football, etc., but not in others like mathematics, oil painting, basketball, etc. On the other hand, knowledge of and abilities in some matters may be greater or lesser than those of other individuals. Competence, moreover, may be characterised as gradual and context-dependent. Degree of competence –i.e. its depth and width, so to say– normally increases because of age, maturity, personal circumstances and experience, or factors such as instruction and subsequent learning, needs, interests, motivation, etc. In turn, the way in which competence surfaces may be affected by a variety of intertwined factors (Mustajoki 2012; Padilla Cruz 2017b).

Factors Affecting Competence in Communication

Internal –i.e. person-related– factors, among which feature:

Relatively stable factors, such as (i) other knowledge and abilities, regardless of their actual relatedness to a particular competence, and (ii) cognitive styles –i.e. patterns of accessing and using knowledge items, among which are concepts and words used to name them.

Relatively unstable factors, such as (i) psychological states like nervousness, concentration, absent-mindedness, emotional override, or simply experiencing feelings like happiness, sadness, depression, etc.; (ii) physiological conditions like tiredness, drowsiness, drunkenness, etc., or (iii) performance of actions necessary for physiological functions like swallowing, sipping, sneezing, etc. These may facilitate or hinder access to and usage of knowledge items including concepts and words.

External –i.e. situation-related– factors, which encompass (i) the spatio-temporal circumstances where encounters take place, and (ii) the social relations with other participants in an encounter. For instance, haste, urgency or (un)familiarity with a setting may ease or impede access to and usage of knowledge items, as may experiencing social distance and/or more or less power with respect to another individual (Brown and Levinson 1987).

While ‘social distance’ refers to (un)acquaintance with other people and (dis)similarity with them as a result of perceptions of membership of a social group, ‘power’ does not simply allude to the possibility of imposing upon others and conditioning their behaviour as a consequence of differing positions in a particular hierarchy within a specific social institution. ‘Power’ also refers to the likelihood of imposing upon other people owing to perceived or supposed expertise in a field –i.e. expert power, like that exerted by, for instance, a professor over students– or to admiration of diverse personal attributes –i.e. referent power, like that exerted by, for example, a pop idol over fans (Spencer-Oatey 1996).

There Must Be Some Misunderstanding

Conceptualising capacities, conceptual inventories and lexical competence also partake of the four features listed above: gradualness, comparativeness, contrastiveness and context-dependence. All three of them increase as a consequence of growth and exposure to or participation in a plethora of situations and events, among which education and training are fundamental. Conceptualising capacities and lexical competence may be more or less developed or accurate than other abilities, among which are the other sub-competences upon which communicative competence depends –i.e. phonetics, morphology, syntax and pragmatics (Hymes 1972; Canale 1983; Bachman 1990; Celce-Murcia et al. 1995).

Additionally, conceptual inventories enabling lexical performance may be rather complex in some domains but not in others –e.g. a person may store many concepts and possess a rich vocabulary pertaining to, for instance, linguistics, but lack or have rudimentary ones about sports. Finally, lexical competence may appear to be higher or lower than that of other individuals under specific spatio-temporal and social circumstances, or because of the influence of the aforesaid psychological and physiological factors, or actions performed while speaking.

Apparent knowledge and usage of general or domain-specific vocabulary may be assessed and compared to those of other people, but performance may be hindered or fail to meet expectations because of the aforementioned factors. If it were considered deficient, inferior or lower than that of other individuals, such a judgement should only concern knowledge and usage of vocabulary in a specific domain, and be only relative to a particular moment, perhaps under specific circumstances.

Unfortunately, people often extrapolate and (over)generalise, so they may take (seeming) lexical gaps at a particular time in a speaker’s life, or one-off, occasional or momentary lexical infelicities, to suggest or unveil more global and overarching conceptualising handicaps or lexical deficits. This leads people not only to doubt the richness and broadness of that speaker’s conceptual inventory and lexical repertoire, but also to question her conceptualising abilities and what may be labelled her conceptual accuracy –i.e. the capacity to create concepts that adequately capture nuances in elements of reality and facilitate correct reference to those elements– as well as her lexical efficiency or lexical reliability –i.e. the ability to use vocabulary appropriately.

As long as doubts are cast about the amount and accuracy of the concepts available to a speaker and her ability to verbalise them, there arises an unwarranted and unfair wronging which would count as an injustice about that speaker’s conceptualising skills, amount of concepts and expressive abilities. The loosened notion of conceptual competence injustice whose incorporation into the field of linguistic pragmatics I advocated does not necessarily presuppose a previous discrimination or prejudice negatively biasing hegemonic, privileged or empowered individuals against minorities or identities.

Wrong is done, and an epistemic injustice is therefore inflicted, because another person’s conceptual inventory, lexical repertoire and expressive skills are underestimated or negatively evaluated because of (i) perception of a communicative behaviour that is felt not to meet expectations or to be below alleged standards, (ii) tenacious adherence to those expectations or standards, and (iii) unawareness of the likely influence of various factors on performance. This wronging may nonetheless lead to subsequently downgrading that person as regards her communicative competence, discrediting her conceptual accuracy and lexical efficiency/reliability, and denigrating her as a speaker of a language, and, therefore, as an epistemic agent. Relying on all this, further discrimination on other grounds may ensue or an already existing one may be strengthened and perpetuated.

Relevance Theory and Conceptual Competence Injustice

Initially put forth in 1986, and slightly refined almost ten years later, relevance theory is a pragmatic framework that aims to explain (i) why hearers select particular interpretations out of the various possible ones that utterances may have –all of which are compatible with the linguistically encoded and communicated information– (ii) how hearers process utterances, and (iii) how and why utterances and discourse give rise to a plethora of effects (Sperber and Wilson 1986/1995). Accordingly, it concentrates on the cognitive side of communication: comprehension and the mental processes intervening in it.

Relevance theory (Sperber and Wilson 1986/1995) reacted against the so-called code model of communication, which was deeply entrenched in western linguistics. According to this model, communication merely consists of encoding thoughts or messages into utterances, and decoding these in order to arrive at speaker meaning. Since speakers cannot encode everything they intend to communicate and absolute explicitness is practically unattainable, relevance theory portrays communication as an ostensive-inferential process where speakers draw the audience’s attention by means of intentional stimuli. On some occasions these amount to direct evidence –i.e. showing– of what speakers mean, so their processing requires inference; on other occasions, intentional stimuli amount to indirect –i.e. encoded– evidence of speaker meaning, so their processing relies on decoding.

However, in most cases the stimuli produced in communication combine direct with indirect evidence, so their processing depends on both inference and decoding (Sperber and Wilson 2015). Intentional stimuli make manifest speakers’ informative intention –i.e. the intention that the audience create a mental representation of the intended message, or, in other words, a plausible interpretative hypothesis– and their communicative intention –i.e. the intention that the audience recognise that speakers do have a particular informative intention. The role of hearers, then, is to arrive at speaker meaning by means of both decoding and inference (but see below).

Relevance theory also reacted against philosopher Herbert P. Grice’s (1975) view of communication as a joint endeavour where interlocutors identify a common purpose and may abide by, disobey or flout a series of maxims pertaining to communicative behaviour –those of quantity, quality, relation and manner– which articulate the so-called cooperative principle. Although Sperber and Wilson (1986/1995) seriously question the existence of such a principle, they nevertheless rest squarely on a notion already present in Grice’s work, but which he unfortunately left undefined: relevance. This becomes the cornerstone of their framework. Relevance is claimed to be a property of intentional stimuli and is characterised on the basis of two factors:

Cognitive effects, or the gains resulting from the processing of utterances: (i) strengthening of old information, (ii) contradiction and rejection of old information, and (iii) derivation of new information.

Cognitive or processing effort, which is the effort of memory to select or construct a suitable mental context for processing utterances and to carry out a series of simultaneous tasks that involve the operation of a number of mental mechanisms or modules: (i) the language module, which decodes and parses utterances; (ii) the inferential module, which relates information encoded and made manifest by utterances to already stored information; (iii) the emotion-reading module, which identifies emotional states; (iv) the mindreading module, which attributes mental states, and (v) vigilance mechanisms, which assess the reliability of informers and the believability of information (Sperber and Wilson 1986/1995; Wilson and Sperber 2004; Sperber et al. 2010).

Relevance is a scalar property that is directly proportional to the amount of cognitive effects that an interpretation gives rise to, but inversely proportional to the expenditure of cognitive effort required. Interpretations are relevant if they yield cognitive effects in return for the cognitive effort invested. Optimal relevance emerges when the effect-effort balance is satisfactory. If an interpretation is found to be optimally relevant, it is chosen by the hearer and thought to be the intended interpretation. Hence, optimal relevance is the property determining the selection of interpretations.

The Power of Relevance Theory

Sperber and Wilson’s (1986/1995) ideas and claims gave rise to a whole branch of cognitive pragmatics that is now known as relevance-theoretic pragmatics. After years of intense, illuminating and fruitful work, relevance theorists have offered a plausible model of comprehension. In it, interpretative hypotheses –i.e. likely interpretations– are said to be formulated during a process of mutual parallel adjustment of the explicit and implicit content of utterances, where the said modules and mechanisms perform a series of simultaneous, incredibly fast tasks at a subconscious level (Carston 2002; Wilson and Sperber 2004).

Decoding only yields a minimally parsed chunk of concepts that is not yet fully propositional, so it cannot be truth-evaluable: the logical form. This form needs pragmatic or contextual enrichment by means of additional tasks wherein the inferential module relies on contextual information and is sometimes constrained by the procedural meaning –i.e. processing instructions– encoded by some linguistic elements.

Those tasks include (i) disambiguation of syntactic constituents; (ii) assignment of reference to words like personal pronouns, proper names, deictics, etc.; (iii) adjustment of the conceptual content encoded by words like nouns, verbs, adjectives or adverbs, and (iv) recovery of unarticulated constituents. Completion of these tasks results in the lower-level explicature of an utterance, which is a truth-evaluable propositional form amounting to the explicit content of an utterance. Construction of lower-level explicatures depends on decoding and inference, so that the more decoding involved, the more explicit or strong these explicatures are and, conversely, the more inference needed, the less explicit and weaker these explicatures are (Wilson and Sperber 2004).

A lower-level explicature may further be embedded into a conceptual schema that captures the speaker’s attitude(s) towards the proposition expressed, her emotion(s) or feeling(s) when saying what she says, or the action that she intends or expects the hearer to perform by saying what she says. This schema is the higher-level explicature and is also part of the explicit content of an utterance.

It is sometimes built through decoding some of the elements in an utterance –e.g. attitudinal adverbs like ‘happily’ or ‘unfortunately’ (Ifantidou 1992) or performative verbs like ‘order’, ‘apologise’ or ‘thank’ (Austin 1962)– and other times through inference, emotion-reading and mindreading –as in the case of, for instance, interjections, intonation or paralanguage (Wilson and Wharton 2006; Wharton 2009, 2016) or indirect speech acts (Searle 1969; Grice 1975). As in the case of lower-level explicatures, higher-level ones may also be strong or weak depending on the amount of decoding, emotion-reading and mindreading involved in their construction.

The explicit content of utterances may additionally be related to information stored in the mind or perceptible from the environment. Those information items act as implicated premises in inferential processes. If the hearer has enough evidence that the speaker intended or expected him to resort to and use those premises in inference, they are strong, but, if he does so at his own risk and responsibility, they are weak. Interaction of the explicit content with implicated premises yields implicated conclusions. Altogether, implicated premises and implicated conclusions make up the implicit content of an utterance. Arriving at the implicit content completes mutual parallel adjustment, which is a process constantly driven by expectations of relevance, in which the more plausible, less effort-demanding and more effect-yielding possibilities are normally chosen.

The Limits of Relevance Theory

As a model centred on comprehension and interpretation of ostensive stimuli, relevance theory (Sperber and Wilson 1986/1995) does not need to be able to identify instances of conceptual competence injustice, as Anderson (2017b) remarks, nor even instances of the negative consequences of communicative behaviour that may be alluded to by means of the broader, loosened notion of conceptual competence injustice I argued for. Rather, as a cognitive framework, its role is to explain why and how these originate. And, certainly, its notional apparatus and the cognitive machinery intervening in comprehension which it describes can satisfactorily account for (i) the ontology of unwarranted judgements of lexical and conceptual (in)competence, (ii) their origin and (iii) some of the reasons why they are made.

Accordingly, those judgements (i) are implicated conclusions which (ii) are derived during mutual parallel adjustment as a result of (iii) accessing some manifest assumptions and using these as implicated premises in inference. Obviously, the implicated premises that yield the negative conclusions about (in)competence might not have been intended by the speaker, who would not be interested in the hearer accessing and using them. However, her communicative performance makes manifest assumptions alluding to her lexical lacunae and mistakes and these lead the hearer to draw undesired conclusions.

Relevance theory (Sperber and Wilson 1986/1995) is powerful enough to offer a cognitive explanation of the said three issues. And this alone was what I aimed to show in my comment on Anderson’s (2017a) work. Two different issues, nevertheless, are (i) the reasons why certain prejudicial assumptions become manifest to an audience and (ii) why those assumptions end up being distributed across the members of certain wide social groups.

As Anderson (2017b) underlines, conceptual competence injustices must necessarily be contextualised in situations where privileged and empowered social groups are negatively biased or prejudiced against other identities and create patterns of marginalisation. Prejudice may be argued to bring to the fore a variety of negative assumptions about the members of the identities against whom it is held. Using Giora’s (1997) terminology, prejudice makes certain detrimental assumptions very salient or increases the saliency of those assumptions.

Consequently, they are amenable to being promptly accessed and effortlessly used as implicated premises in deductions, from which negative conclusions are straightforwardly and effortlessly derived. Those premises and conclusions spread throughout the members of the prejudiced and hegemonic group because, according to Sperber’s (1996) epidemiological model of culture, they are repeatedly transmitted or made public. This is possible thanks to two types of factors (Sperber 1996: 84):

Psychological factors, such as their relative ease of storage, the existence of other knowledge with which they can interact in order to generate cognitive effects –e.g. additional negative conclusions pertaining to the members of the marginalised identity– or the existence of compelling reasons that make the individuals in the group willing to transmit them –e.g. the desire to disempower and/or marginalise the members of an unprivileged group, to exclude them from certain domains of human activity, to secure a privileged position, etc.

Ecological factors, such as the repetition of the circumstances under which those premises and conclusions result in certain actions –e.g. denigration, disempowerment, marginalisation, exclusion, etc.– the availability of storage mechanisms other than the mind –e.g. written documents– or the existence of institutions that transmit and perpetuate those premises and conclusions, thus ensuring their continuity and availability.

Since the members of the dominating biased group find those premises and conclusions useful to their purposes and interests, they constantly reproduce them and, so to say, pass them on to the other members of the group or even on to individuals who do not belong to it. Using Sperber’s (1996) metaphor, repeated production and internalisation of those representations resembles the contagion of illnesses. As a result, those representations end up being part of the pool of cultural representations shared by the members of the group in question or other individuals.

The Imperative to Get Competence Correct

In social groups with an interest in denigrating and marginalising an identity, certain assumptions regarding the lexical inventories and conceptualising abilities of the epistemic agents with that identity may be very salient, or purposefully made very salient, with a view to ensuring that they are inferentially exploited as implicated premises that easily yield negative conclusions. In the case of average speakers’ lexical gaps and mistakes, assumptions concerning their performance and infelicities may also become very salient, be fed into inferential processes and result in prejudicial conclusions about their lexical and conceptual (in)competence.

Although utterance comprehension and information processing end upon completion of mutual parallel adjustment, for the informational load of utterances and the conclusions derivable from them to be added to an individual’s universe of beliefs, information must pass the filters of a series of mental mechanisms that target both informers and information itself, and check their believability and reliability. These mechanisms scrutinise various sources determining trust allocation, such as signs indicating certainty and trustworthiness –e.g. gestures, hesitation, nervousness, rephrasing, stuttering, eye contact, gaze direction, etc.–; the appropriateness, coherence and relevance of the dispensed information; (previous) assumptions about speakers’ expertise or authoritativeness in some domain; the socially distributed reputation of informers; and emotions, prejudices and biases (Origgi 2013: 227-233).

As a result, these mechanisms trigger a cautious and sceptical attitude known as epistemic vigilance, which in some cases enables individuals to avoid blind gullibility and deception (Sperber et al. 2010). In addition, these mechanisms monitor the correctness and adequateness of the interpretative steps taken and the inferential routes followed while processing utterances and information, and check for possible flaws at any of the tasks in mutual parallel adjustment –e.g. wrong assignment of reference, supply of erroneous implicated premises, etc.– which would prevent individuals from arriving at actually intended interpretations. Consequently, another cautious and sceptical attitude is triggered towards interpretations, which may be labelled hermeneutical vigilance (Padilla Cruz 2016).

If individuals do not perceive risks of malevolence or deception, or do not sense that they might have made interpretative mistakes, vigilance mechanisms are weakly or moderately activated (Michaelian 2013: 46; Sperber 2013: 64). However, their level of activation may be raised so that individuals exercise external and/or internal vigilance. While the former facilitates higher awareness of external factors determining trust allocation –e.g. cultural norms, contextual information, biases, prejudices, etc.– the latter facilitates distancing oneself from conclusions drawn at a particular moment, backtracking with a view to tracing their origin –i.e. the interpretative steps taken and the assumptions fed into inference– and assessing their potential consequences (Origgi 2013: 224-227).

Exercising weak or moderate vigilance of the conclusions drawn upon perception of lexical lacunae or mistakes may account for their unfairness and the subsequent wronging of individuals as regards their actual conceptual and lexical competence. Unawareness of the internal and external factors that may momentarily have hindered competence and ensuing performance, may cause perceivers of lexical gaps and errors to unquestioningly trust assumptions that their interlocutors’ allegedly poor performance makes manifest, rely on them, supply them as implicated premises, derive conclusions that do not do any justice to their actual level of conceptual and lexical competence, and eventually trust their appropriateness, adequacy or accuracy.

A higher alertness to the potential influence of those factors on performance would block access to the detrimental assumptions made manifest by their interlocutors’ performance, or make perceivers of lexical infelicities reconsider the advisability of using those assumptions in deductions. If this were actually the case, perceivers would be deploying the processing strategy labelled cautious optimism, which enables them to question the suitability of certain deductions and to make alternative ones (Sperber 1994).

Conclusion

Relevance theory (Sperber and Wilson 1986/1995; Wilson and Sperber 2004) does not need to be able to identify cases of conceptual competence injustice, but its notional apparatus and the machinery that it describes can satisfactorily account for the cognitive processes whereby conceptual competence injustices originate. In essence, prejudice and interests in denigrating members of specific identities or minorities favour the saliency of certain assumptions about their incompetence, which, for a variety of psychological and ecological reasons, may already be part of the cultural knowledge of the members of prejudiced empowered groups. Those assumptions are subsequently supplied as implicated premises to deductions, which yield conclusions that undermine the reputation of the members of the identities or minorities in question. Ultimately, such conclusions may in turn be added to the cultural knowledge of the members of the biased hegemonic group.

The same process would apply to those cases wherein hearers unfairly wrong their interlocutors on the grounds of performance below alleged or expected standards, and are not vigilant enough of the factors that could have impeded it. That wronging may be alluded to by means of a somewhat loosened, broadened notion of ‘conceptual competence injustice’ which deprives it of one of its quintessential conditions: the existence of prejudice and interests in marginalising other individuals. Inasmuch as apparently poor performance may give rise to unfortunate unfair judgements of speakers’ overall level of competence, those judgements could count as injustices. In a nutshell, this was the reason why I advocated for the incorporation of a ‘decaffeinated’ version of Anderson’s (2017a) notion into the field of linguistic pragmatics.

Contact details: mpadillacruz@us.es

References

Anderson, Derek. “Conceptual Competence Injustice.” Social Epistemology: A Journal of Knowledge, Culture and Policy 31, no. 2 (2017a): 210-223.

Anderson, Derek. “Relevance Theory and Conceptual Competence Injustice.” Social Epistemology Review and Reply Collective 6, no. 7 (2017b): 34-39.

Austin, John L. How to Do Things with Words. Oxford: Clarendon Press, 1962.

Bachman, Lyle F. Fundamental Considerations in Language Testing. Oxford: Oxford University Press, 1990.

Brown, Penelope, and Stephen C. Levinson. Politeness: Some Universals in Language Usage. Cambridge: Cambridge University Press, 1987.

Canale, Michael. “From Communicative Competence to Communicative Language Pedagogy.” In Language and Communication, edited by Jack C. Richards and Richard W. Schmidt, 2-28. London: Longman, 1983.

Carston, Robyn. Thoughts and Utterances. The Pragmatics of Explicit Communication. Oxford: Blackwell, 2002.

Celce-Murcia, Marianne, Zoltán Dörnyei, and Sarah Thurrell. “Communicative Competence: A Pedagogically Motivated Model with Content Modifications.” Issues in Applied Linguistics 5 (1995): 5-35.

Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press, 2007.

Giora, Rachel. “Understanding Figurative and Literal Language: The Graded Salience Hypothesis.” Cognitive Linguistics 8 (1997): 183-206.

Grice, Herbert P. “Logic and Conversation.” In Syntax and Semantics. Vol. 3: Speech Acts, edited by Peter Cole and Jerry L. Morgan, 41-58. New York: Academic Press, 1975.

Hymes, Dell H. “On Communicative Competence.” In Sociolinguistics. Selected Readings, edited by John B. Pride and Janet Holmes, 269-293. Baltimore: Penguin Books, 1972.

Ifantidou, Elly. “Sentential Adverbs and Relevance.” UCL Working Papers in Linguistics 4 (1992): 193-214.

Jary, Mark. “Relevance Theory and the Communication of Politeness.” Journal of Pragmatics 30 (1998): 1-19.

Jary, Mark. “Two Types of Implicature: Material and Behavioural.” Mind & Language 28, no. 5 (2013): 638-660.

Medina, José. “The Relevance of Credibility Excess in a Proportional View of Epistemic Injustice: Differential Epistemic Authority and the Social Imaginary.” Social Epistemology: A Journal of Knowledge, Culture and Policy 25, no. 1 (2011): 15-35.

Michaelian, Kourken. “The Evolution of Testimony: Receiver Vigilance, Speaker Honesty and the Reliability of Communication.” Episteme 10, no. 1 (2013): 37-59.

Mustajoki, Arto. “A Speaker-oriented Multidimensional Approach to Risks and Causes of Miscommunication.” Language and Dialogue 2, no. 2 (2012): 216-243.

Origgi, Gloria. “Epistemic Injustice and Epistemic Trust.” Social Epistemology: A Journal of Knowledge, Culture and Policy 26, no. 2 (2013): 221-235.

Padilla Cruz, Manuel. “Vigilance Mechanisms in Interpretation: Hermeneutical Vigilance.” Studia Linguistica Universitatis Iagellonicae Cracoviensis 133, no. 1 (2016): 21-29.

Padilla Cruz, Manuel. “On the Usefulness of the Notion of ‘Conceptual Competence Injustice’ to Linguistic Pragmatics.” Social Epistemology Review and Reply Collective 6, no. 4 (2017a): 12-19.

Padilla Cruz, Manuel. “Interlocutors-related and Hearer-specific Causes of Misunderstanding: Processing Strategy, Confirmation Bias and Weak Vigilance.” Research in Language 15, no. 1 (2017b): 11-36.

Searle, John. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press, 1969.

Spencer-Oatey, Helen D. “Reconsidering Power and Distance.” Journal of Pragmatics 26 (1996): 1-24.

Sperber, Dan. “Understanding Verbal Understanding.” In What Is Intelligence? edited by Jean Khalfa, 179-198. Cambridge: Cambridge University Press, 1994.

Sperber, Dan. Explaining Culture. A Naturalistic Approach. Oxford: Blackwell, 1996.

Sperber, Dan. “Speakers Are Honest because Hearers Are Vigilant. Reply to Kourken Michaelian.” Episteme 10, no. 1 (2013): 61-71.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. Oxford: Blackwell, 1986.

Sperber, Dan, and Deirdre Wilson. Relevance. Communication and Cognition. 2nd edition. Oxford: Blackwell, 1995.

Sperber, Dan, and Deirdre Wilson. “Beyond Speaker’s Meaning.” Croatian Journal of Philosophy 15, no. 44 (2015): 117-149.

Sperber, Dan, Fabrice Clément, Christophe Heintz, Olivier Mascaro, Hugo Mercier, Gloria Origgi, and Deirdre Wilson. “Epistemic Vigilance.” Mind & Language 25, no. 4 (2010): 359-393.

Wharton, Tim. Pragmatics and Non-verbal Communication. Cambridge: Cambridge University Press, 2009.

Wharton, Tim. “That Bloody so-and-so Has Retired: Expressives Revisited.” Lingua 175-176 (2016): 20-35.

Wilson, Deirdre, and Dan Sperber. “Relevance Theory.” In The Handbook of Pragmatics, edited by Larry Horn and Gregory Ward, 607-632. Oxford: Blackwell, 2004.

Wilson, Deirdre, and Tim Wharton. “Relevance and Prosody.” Journal of Pragmatics 38 (2006): 1559-1579.

[1] Following a relevance-theoretic convention, reference to the speaker will be made through the feminine third person singular personal pronoun, while reference to the hearer will be made through its masculine counterpart.

Author Information: Julian Reiss, Durham University, julian.reiss@durham.ac.uk; Sarah Wieten, Durham University, wietens@gmail.com

Reiss, Julian and Sarah Wieten. “On Justin Biddle’s ‘Lessons from the Vioxx Debacle’.” Social Epistemology Review and Reply Collective 4, no. 5 (2015): 20-22.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-24M


Image credit: vistavision, via flickr

Justin Biddle’s (2007) article “Lessons from the Vioxx Debacle: What the Privatization of Science Can Teach Us About Social Epistemology” is one of the most highly regarded articles in this journal, with a high rate of citation. The article raised the alarm about the possible negative consequences of the increasing privatization of scientific research, and it issued a call for epistemologists to attend seriously to the particularities of the fields they wish to characterize. This call was leveled specifically at philosophers of science such as Kitcher and Longino who, according to Biddle, were too interested in making their claims generalizable to all scientific disciplines to say anything relevant to any particular discipline. Biddle writes of their claims, Continue Reading…