Richly Trustworthy Allies, William Tuckwell


Image credit: Historic Bremen via Flickr / Creative Commons

Article Citation:

Tuckwell, William. 2019. “Richly Trustworthy Allies.” Social Epistemology Review and Reply Collective 8 (10): 33-39.

The PDF of the article gives specific page numbers.

Here’s a plausible definition of an ally: an individual who supports a non-dominant group’s pursuit of their justice-based interests. One way to develop a more detailed theory of an ally is by specifying ‘support a non-dominant group’s pursuit of their justice-based interests’ as the function of an ally, reflecting on what properties an individual ought to have if they’re to fulfil this function, and then taking those properties to be constitutive of an ally. Using this methodology, I’m going to suggest that a form of trustworthiness that’s been theorised by Karen Jones (2012)—rich trustworthiness—is one such property. Towards the end of the post I’ll make a start at exploring what it takes for an ally to be richly trustworthy in online contexts.

I’ll start out by defending the claim that trustworthiness is a property of an ally before problematising this claim and arguing that it is in fact rich trustworthiness, and not mere trustworthiness, that’s a property of an ally.

Trustworthiness and Allyship

As I see it, the function of an ally is to support a non-dominant group’s pursuit of their justice-based interests. Call this the ally-function. I’ll take it that any property that serves the ally-function is a property of an ally. Let’s consider the possibility that trustworthiness is one such property.[1]

Karen Jones (2012, 70-71) defends the following account of trustworthiness:

Three-place trustworthiness: B is trustworthy with respect to A in domain of interaction D, if and only if she is competent with respect to that domain, and she would take the fact that A is counting on her, were A to do so in this domain, to be a compelling reason for acting as counted on.

The basic idea here is that the trustworthy agent is one who is competent in some domain and who is willing to put their competences to work in the service of another because they’re being counted on to do so.

A more detailed explanation of Jones’ account requires spelling out its various features. Firstly, the domain of interaction in which B is trustworthy is rarely going to be as narrow as some specific action-type. It’ll more often be some project that A is counting on B to help them advance (2012, 72). There are likely to be various specific competences B must have to be trustworthy in a given domain. Secondly, trustworthiness is dispositional, and so may never manifest. Rather, the trustworthy agent is one whose competences are activated in the service of another if they come to be counted on (2012, 72). Thirdly, a compelling reason is a reason that cannot be easily outweighed, but it need not be a decisive one (2012, 71). Finally, on Jones’ account it is the fact that A is counting on B that motivates B to act as counted on (2012, 69-70).

To be sure, there are other accounts of trustworthiness around. All tend to agree that the trustworthy agent is one who can be counted on. Where they differ is over what reason they take to motivate the trustworthy agent to act as counted on. For example, whereas Jones’ account has it that the trustworthy agent is motivated to act because they are being counted on to do so, on Russell Hardin’s account (2002) the trustworthy agent acts because it is in their interest to take the interests of the trustor into account. While I can’t adjudicate between the views here, it’s worth noting that Jones’ account can capture an intuitive thought about what we want from allies that Hardin’s cannot: that allies are people who act because others are counting on them to advance the cause of justice, rather than, say, to enhance their own social status, which might be thought to be antithetical to allyship.[2]

Now that we know what trustworthiness is, I’ll turn to providing an argument that trustworthiness should be included amongst the properties of an ally. The first step in the argument is to get clear on what the relevant domain of interaction is. Trustworthiness with respect to just any domain won’t do the job. Rather, we’re interested in trustworthiness with respect to non-dominant groups in the domain of advancing their justice-based interests. What does it take to be competent in this domain? A range of specific competences will be required, the details of which will vary with the needs and struggles of each allied-with group. Nevertheless, there are some competences that are required for one to be trustworthy with respect to marginalized groups (in the domain of advancing that group’s justice-based interests) across the board. Here I’ll look at two epistemic competences that seem to me to be essential to being a trustworthy ally.

Firstly, it’s important that allies are knowledgeable about their allied-with group: about the ways in which the group is unjustly disadvantaged, the harms that they sustain, and the ways in which harms might be mitigated. Such knowledge contributes towards competence in the relevant domain by providing a basis for action; allies will struggle to take actions that advance the justice-based interests of their allied-with group if they don’t know the ways in which the group are disadvantaged or what actions successfully mitigate such injustices.

Often the best source of this knowledge will be members of the allied-with group themselves. Victims are often aware that injustice is taking place, that there are people who need assistance, and of the mechanisms by which injustice is inflicted. They also tend to have an appreciation of effective strategies for resistance (see, e.g., Wylie 2003, Tanesini 2019).

Much of this knowledge can be transmitted via the testimony of the members of the allied-with group. However, if allies are to attain this knowledge by testimony, they’ll often need a second competence: what Kristie Dotson has labelled testimonial competence. The testimonially competent agent is able to understand the content of proffered testimony, to detect failures in their own understanding, and to demonstrate this ability to those to whom she is listening (Dotson 2011, 245). Testimonial competency can be demonstrated by, for example, taking the time to double-check and affirm one’s own understanding of the testimony with the speaker. Incompetency can be demonstrated by, for example, expressing incredulity in response to a speaker’s reliable testimony (Dotson 2011, 246).

Testimonial competence is often required if speakers are to offer up knowledge because some testimony contains ‘unsafe’ content; information that if shared with audiences that lack testimonial competence, can result in a speaker being harmed (Dotson 2011, 244). For example, the speaker may suffer from testimonial injustice as the incompetent hearer judges them to be less credible than they in fact are (Fricker 2007). Dotson claims that when speakers do not trust the testimonial competency of their audience, they may self-silence in order to protect themselves against potential harms. Not only does this self-silencing constitute a further harm suffered by the speaker, it also results in hearers missing out on important knowledge. Thus, if allies are to attain the required knowledge, it’ll be handy for them to be testimonially competent.

Possessing these two competences—knowledge and testimonial competence—makes it the case that a person is competent in the relevant domain. If these competences are put to work in the service of the allied-with group, then their justice-based interests are well-served. Or, equivalently, the ally-function is well-served. Thus, we have good reasons to take trustworthiness to be a property of an ally.

Not by Trustworthiness Alone

Trustworthiness appears to do the ally-function a good service. But it’d be a mistake to settle on trustworthiness as explicated above as a property of an ally. Trustworthy agents are of little use to us if we don’t know that they exist or where they are. We might occasionally get lucky in extending trust to someone who just so happens to be trustworthy despite them giving us no indication that this is the case, but this will be rare. Getting lucky is rarer still when one has good reasons to think others lack trustworthiness, or that they’re outright untrustworthy. Given that, for the most part, allies belong to social groups that have a history of marginalizing, oppressing, and exploiting non-dominant groups, the default attitude of members of non-dominant groups toward allies will often be one of distrust. Thus, if trustworthy allies are to be of use they’ll need to identify themselves. Allies will need to signal their trustworthiness so that members of non-dominant groups have reasons to turn to them for support. Allies need to be not only trustworthy, but richly trustworthy (Jones 2012, 73-78): to be trustworthy in at least one domain, and both willing and able to reliably signal those domains in which they are trustworthy.[3][4]

Rich trustworthiness isn’t easy to master. Jones points out that a range of skills are required: knowing what one’s intended audience will take to be a reliable signal, knowing how to communicate the signal, and being able to anticipate and overcome impediments to one’s signalling. Acquiring these skills is made difficult by the fact that we’re almost always already signalling. The clothes we wear, the way we speak, our qualifications, our job, and who we’re friends with all communicate information, the content of which can at times be outside of our control. What we communicate depends upon the background norms, expectations, and shared understandings that are operative in a given context. Mastering the skills required to be richly trustworthy thus requires a familiarity with these complex contextual factors, knowing when the standing social signal is enough to do the desired job, and when one needs to take steps to disrupt it.

More and more of the work done by allies is coming to be carried out online. Not all signals that are reliable in offline contexts will be reliable in online contexts. For example, whereas silence in response to a racist remark made in a face-to-face interaction can effectively communicate disapproval of the remark, and thus support for the group targeted by the remark, silence in an online conversation is closer to not being in the conversation at all. This raises the question of how an ally might signal their trustworthiness online.

A common piece of advice given to allies points towards one promising signalling mechanism available in online contexts: amplifying the voices of members of non-dominant groups. The top three articles recommended by a Google search of ‘how to be a good ally’ all offer advice to this effect (Lamont 2016; Burns 2016; Sebastian 2017). Following it has various benefits: it raises awareness of the situation of marginalized people, and it does so by utilising knowledge developed and possessed by those very people. Non-dominant people are thereby put at the forefront of struggles for justice, and their value and autonomy are affirmed in the process. When an ally amplifies the voices of their allied-with group they signal that they’ve listened to the group, have understood their concerns, and are aware that their own role is one of support, rather than being front and centre.

Online platforms, such as social media platforms equipped with repost and retweet functions, provide an easy means by which to amplify. Information is plentiful and a retweet or repost costs little. But signalling online isn’t without challenges. In the remainder of this post I’ll work through some of these challenges in order to begin to develop a picture of what the richly trustworthy ally might look like, and to get a deeper sense of just how difficult a capacity it is to master.

One challenge facing allies in signalling their trustworthiness online is that bots—social media accounts created to amplify other accounts at very high rates—and clone accounts are pervasive on social media platforms, and distinguishing a fake account from a real user isn’t always easy. You can easily be misled into thinking that you’re amplifying a non-dominant voice, when in fact you’re amplifying a fake user. Being duped by a fake user both undermines the rationale for amplifying and can also signal a poor grasp of the actual concerns of the allied-with group.

Here’s an example of this challenge: in the lead-up to the 2016 US presidential election, Russian internet agencies set up fake accounts encouraging a lack of faith in democratic institutions among black Americans. The aim was to suppress the black American vote, to the benefit of Donald Trump. One of many fake Instagram accounts, @afrokingdom, wrote: ‘Black people are smart enough to understand that Hillary doesn’t deserve our votes! DON’T VOTE!’ (Parham 2017). Voting for Hillary Clinton was clearly in the interests of most black Americans, and refraining from voting was contrary to those interests. Thus, the ally who amplifies @afrokingdom’s post would be acting in a way that both undermines and demonstrates a lack of familiarity with the interests of black Americans.

Allies can mitigate this risk by checking their sources carefully and learning how to spot bots and fake accounts. Here are three tips for spotting bots. Firstly, activity: bots are set up to post frequently, often hundreds of times a day—far more than most people post. If an account is posting at a very high rate, it’s probably a bot. Secondly, anonymity: bot accounts often lack personal identifying information such as photos. If there’s a lack of personal information, it’s probably a bot. Thirdly, amplification: if an account only reposts or likes others’ posts—this is, for the most part, what bots are created for—but doesn’t create its own, it’s probably a bot. If, after following this advice, one suspects that one has stumbled across a bot, one can report it to protect others from falling foul of the same risk.
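These three heuristics can be sketched as a simple checklist. The sketch below is illustrative only: the `Account` fields and the thresholds (for instance, 100 posts a day) are assumptions made for the example, not any platform’s actual data model or API.

```python
# A hypothetical sketch of the three bot-spotting heuristics discussed above.
# Field names and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Account:
    posts_per_day: float      # average posting rate
    has_profile_photo: bool   # personal identifying information
    has_bio: bool
    original_posts: int       # posts the account wrote itself
    reposts: int              # reposts/retweets of others' posts


def bot_signals(account: Account) -> list[str]:
    """Return the heuristics from the text that this account trips."""
    signals = []
    # 1. Activity: bots often post hundreds of times a day.
    if account.posts_per_day > 100:
        signals.append("high activity")
    # 2. Anonymity: bots often lack personal identifying information.
    if not (account.has_profile_photo or account.has_bio):
        signals.append("anonymous profile")
    # 3. Amplification only: bots mostly repost rather than create.
    if account.reposts > 0 and account.original_posts == 0:
        signals.append("repost-only")
    return signals


def probably_bot(account: Account) -> bool:
    # Treat two or more tripped heuristics as grounds for suspicion.
    return len(bot_signals(account)) >= 2
```

On this sketch, an anonymous account posting hundreds of times a day without ever creating an original post trips all three heuristics, while a typical human account trips none.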

A further worry is that attempts at amplifying the voices of some members of non-dominant groups can drown out the voices of others who have a real stake in being heard. This is a particularly important worry for those who have large social networks. When such people amplify a message that is subsequently amplified by their followers, the amplified claim appears to be a common, and therefore important, concern. Independent claims coming from people with real stakes appear less common, and thus less important, by comparison. When you amplify the voices of some members of your allied-with group in a way that drowns out the voices of other members of that group, you signal that you’re attuned only to the concerns of some members of the group, not others.

A final worry is that amplifying voices by reposting or retweeting on social media is too cheap to be perceived as a reliable signal. Robert Frank (1988) has argued that for signals to be reliable they need to be either difficult to pull off or costly. Cheap signals are not read as a reliable guide to people’s genuine commitments because they can be made flippantly or for other reasons. Reposts and retweets may be read as mere ‘slacktivism’—low-cost efforts that do little more than generate a good feeling—done for likes and the appearance of being socially conscious (see Lawford-Smith 2015, 11 for discussion). This suggests that the ally needs to up the cost of their signal.

One way to do so without abandoning the amplification strategy is not merely to retweet or repost, but to engage in argument in online forums such as Facebook or the comments sections of newspaper websites, making use of knowledge from members of the allied-with group, reiterating their claims and demands, and referring directly to them where appropriate. Will this be costly enough? My running this suggestion past various people has been met with scepticism that it constitutes a sufficiently large increase in cost. If this scepticism generalises, then we’re left with a real puzzle: despite allies following the advice of the allied-with group by amplifying their voices, this is not taken to be a reliable signal by that very group. What would be costly enough, online?

Rich trustworthiness is an important property of an ally, one that allies will have to work hard to cultivate. There is a host of further complicating factors that haven’t been touched on here. For example, what counts as a reliable signal will vary both with the group that one is allied with and the group that one is a member of. Additionally, the fact that allied-with groups will tend to be heterogeneous means that what some group members take to be a reliable signal, others might take to be unreliable. While we don’t yet have a clear strategy for allies signalling trustworthiness online—as if there could be a one size fits all solution—we’ve certainly got an important problem for allies to solve.

Contact details: William Tuckwell, University of Melbourne.


Alfano, Mark and Nicole M. A. Huijts. forthcoming. “Trust and Distrust in Institutions and Governance.” In The Routledge Handbook of Trust and Philosophy edited by Judith Simon. Routledge.

Burns, Katelyn. 2016. “So, I Want to be an Ally: Responses Section.” Medium 9 March. Accessed 22 July 2019.

Dotson, Kristie. 2011. “Tracking Epistemic Violence, Tracking Practices of Silencing.” Hypatia 26 (3): 236-257.

Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: OUP.

Frank, Robert H. 1988. Passions Within Reason. New York: Norton.

Hardin, Russell. 2002. Trust and Trustworthiness. New York, NY: Russell Sage Foundation.

Jones, Karen. 2012. “Trustworthiness.” Ethics 123 (1): 61-85.

Lamont, Amélie. 2016. Guide to Allyship. Accessed 22 July 2019.

Lawford-Smith, Holly. 2015. “Unethical Consumption & Obligations to Signal.” Ethics & International Affairs 29 (3): 315-330.

McKinnon, Rachel. 2019. “Gaslighting as Epistemic Violence: ‘Allies,’ Mobbing, and Complex Post-Traumatic Stress Disorder, Including a Case Study of Harassment of Transgender Women in Sport.” In Overcoming Epistemic Injustice edited by Lauren Freeman and Jeanine Weekes Schroer. Routledge.

Parham, Jason. 2017. “Targeting Black Americans, Russia’s IRA Exploited Racial Wounds.” Wired 17 December. Accessed 28 July 2019.

Sebastian, Hallie. 2017. “How to Tell if You’re Being a Good Ally.” Teen Vogue 15 August. Accessed 22 July 2019.

Tanesini, Alessandra. 2019. “Standpoint Then and Now.” In The Routledge Handbook of Social Epistemology edited by Miranda Fricker, Peter J. Graham, David Henderson, and Nikolaj J.L.L. Pedersen, 335-343. New York: Routledge.

Wylie, Alison. 2003. “Why Standpoint Matters.” In Science and Other Cultures: Issues in Philosophies of Science and Technology edited by Sandra Harding and Robert Figueroa, 26-48. New York: Routledge.

[1] Rachel McKinnon (2019) also briefly discusses the importance of trust and trustworthiness to allyship.

[2] Though I’m tempted to go with Jones’ account of trustworthiness, we can get the conclusion that trustworthiness is a property of an ally whichever account turns out to be preferable.

[3] Note that rich trustworthiness is compatible with all accounts of trustworthiness.

[4] Jones’ formulation of rich trustworthiness takes it to be a relation that obtains between two agents: ‘B is richly trustworthy with respect to A…’ (Jones 2012, 74). Given that I’m concerned with rich trustworthiness as a relation that obtains between an agent (an ally) and a collection of agents (an allied-with group, understood as a social group that is not itself an agent but rather a collection of agents), Jones’ formulation is not the perfect fit. It’s perhaps more accurate to say that the ally should be globally richly trustworthy: ‘For all agents, B is globally richly trustworthy to the extent that (i) B is willing and able to reliably signal to others those domains in which B is competent and willing to take the fact that others are counting on her, were they to do so, to be a compelling reason for acting as counted on and (ii) there are some domains in which she will be responsive to others’ dependency’ (Alfano & Huijts forthcoming: 2-3). Global rich trustworthiness is a generalization of rich trustworthiness that measures not only how B is disposed to some particular agent, but how B is disposed to a range of agents. In the interests of simplicity, I’ll stick with Jones’ formulation in this post as it gives a sufficient flavour of the phenomenon of rich trustworthiness required for present purposes.
