Developing a Model of Groupstrapping: A Response to Baumgaertner and Nguyen

Kenneth Boyd


Article Citation:

Boyd, Kenneth. 2019. “Developing a Model of Groupstrapping: A Response to Baumgaertner and Nguyen.” Social Epistemology Review and Reply Collective 8 (8): 32-39. https://wp.me/p1Bfg0-4k2.

The PDF of the article gives specific page numbers.

This article replies to:

  • Baumgaertner, Bert. 2019. “Groupstrapping, Bootstrapping, and Oops-strapping: A Reply to Boyd.” Social Epistemology Review and Reply Collective 8 (6): 4-7. https://wp.me/p1Bfg0-4bE.
  • Nguyen, C. Thi. 2019. “Group-Strapping, Bubble, or Echo Chamber?” Social Epistemology Review and Reply Collective 8 (6): 31-37. https://wp.me/p1Bfg0-4dr.

In their responses to my article “Epistemically Pernicious Groups and the Groupstrapping Problem” (Boyd 2018), Bert Baumgaertner (“Groupstrapping, Bootstrapping, and Oops-strapping: A Reply to Boyd”) and C. Thi Nguyen (“Group-strapping, Bubble, or Echo Chamber?”) have raised interesting questions and opened lines of inquiry regarding my discussion of what I hope is a way to make sense of how members of groups can continue to hold beliefs that are greatly outweighed by countervailing evidence (e.g. antivaxxers, climate-change deniers, etc.). To recap: the explanation I provided was that such beliefs persist at least in part because members of groups appeal to the groups themselves as sources of information, a process that can start a nasty feedback loop in which member credence is increased as a result of group testimony, and apparent group credence is increased as a result of an increase in member credence. This process—which I called “groupstrapping”—differs from other popular explanations of how epistemically pernicious groups work in that it appeals to relationships not between individual members, but between members and the group itself. As a result, groupstrapping can occur alongside or independently of the more frequently discussed effects that individuals have on one another.
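To make this feedback loop concrete, here is a minimal toy simulation; the numbers, the averaging rule, and the update rule are all illustrative assumptions of mine rather than anything argued for in the original article. The group’s apparent credence is just the average of member credences, and each member then treats that apparent credence as further testimony.

```python
# Toy model of the groupstrapping loop (an illustrative construction, not a
# formal model from the article): members read off the group's apparent
# credence and treat it as additional testimony for p.
credences = [0.55, 0.60, 0.50, 0.65]  # assumed initial member credences in p
TESTIMONY_WEIGHT = 0.1                # assumed strength of group testimony

for round_num in range(1, 11):
    # The group's apparent credence is just the members' average...
    group_credence = sum(credences) / len(credences)
    # ...which each member then treats as extra evidence, nudging their own
    # credence toward certainty in proportion to the group's confidence.
    credences = [min(1.0, c + TESTIMONY_WEIGHT * group_credence * (1 - c))
                 for c in credences]
    print(f"round {round_num}: apparent group credence = {group_credence:.3f}")
```

No outside evidence ever enters the loop, yet the group’s apparent credence climbs round after round; that runaway dynamic is what “groupstrapping” is meant to capture.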

However, with so much potentially going on inside these groups it can be difficult to determine what, exactly, the driving force is behind the persistence of the over-inflated confidences and vicious belief-forming behaviors. We already have two potential explanations in the form of epistemic bubbles—i.e. wherein group members are exposed to limited information that tends to come from the same kinds of sources—and echo chambers—i.e. wherein member confidence increases because the same beliefs are constantly reinforced by other members, with outside voices potentially being quashed—so what role does groupstrapping have to play?

Considering the Effects of Groupstrapping

Baumgaertner raises a concern along these lines when he notes that, at least when it comes to individual instances, the effects of groupstrapping will be very small. Consider, for example, a case in which a member of a very large group appeals to the testimony of that group to form a belief. Although it may very well be the case that groupstrapping occurs, since the group’s belief is determined by the contribution of so many members, the fact that only one such member increased their credence by a little bit will make very little difference overall. As Baumgaertner notes, this effect could be more noticeable if multiple members are involved, the group is small, and member and group beliefs are consistently influencing each other over time. However, the effects may initially be subtle, and difficult to detect.

Nguyen’s concern is similar: it may be difficult to determine whether groupstrapping is occurring or having much of an overall effect given everything else that’s going on inside of these epistemically pernicious groups. He also rightly notes that while there may be cases in which looking to a group that you’re a member of (what I’ve called “self-membership groups”) for information may be fishy, there will certainly be cases in which it is perfectly fine. Here Nguyen and I are in agreement: as I mentioned in my original article, there is likely nothing illicit about cases in which I appeal to, for example, the testimony of the university of which I am a member, or a large laboratory that I work in, or a national agency that I’m a part of, etc. What Nguyen very helpfully provides, however, is a list of criteria distinguishing those groups in which members appealing to group testimony might be fishy, and those in which it seems fine:

  1. I need to be part of an integrative group.
  2. I need to believe that p.
  3. My beliefs need to be partially constitutive of that group’s belief that p.
  4. I need to use the fact that the group believes that p to increase my degree of confidence in p by some amount, x.
  5. x needs to exceed the contribution made by the integrative procedure of the group.

The key condition here is 5: as Nguyen notes, group belief might be determined by a kind of integrative procedure, one that may make it the case that if I appeal to the testimony of a self-membership group then I will be completely warranted in increasing my credence in my belief that p. Nguyen provides an example in which group belief is determined by a committee, one that actively considers the beliefs of the members of a group, only deciding what to believe after some careful deliberation. If the group believes that p after all this work, then it does indeed seem like a member could appeal to the testimony of the group and increase their credence in a way that would not risk starting a groupstrapping process.
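Condition 5 can be put as a bare inequality. The following snippet is just a gloss on it (mine, not Nguyen’s own formalism):

```python
def condition_5_met(x: float, procedural_contribution: float) -> bool:
    """A gloss on Nguyen's condition 5 (not his own formalism): a member's
    credence boost x is groupstrapping-prone exactly when it exceeds whatever
    boost the group's integrative procedure actually warrants."""
    return x > procedural_contribution
```

On this reading, a careful deliberating committee licenses a larger x than, say, a raw running tally of what members already believe.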

This is a nice way to distinguish those groups that are likely to become epistemically pernicious from those that aren’t. My biggest concern, of course, is with those groups of the former category, and in trying to figure out exactly what is going on inside them. In what follows, then, I want to employ some of the suggestions from Baumgaertner and Nguyen to make more of a case for the existence of groupstrapping, and the usefulness of the concept of groupstrapping for explaining illicit belief-formation within those epistemically pernicious groups.

Before I do that, two things to keep in mind. First, I don’t think that groupstrapping is typically going to be the sole explanation for how beliefs in epistemically pernicious groups persist. Since people are complicated, the ways that they form beliefs are complicated, and groups come in many different shapes, sizes, and structures, I think it would be unlikely that there is a single factor that can explain all of the bad beliefs for every person in every different type of group. Instead, in developing an account of the current epistemic hellscape we should let a thousand weeds bloom, as it were, and look to a plurality of explanations for these bad beliefs and belief-forming procedures. The case that I want to make, then, is that groupstrapping should be counted among those weeds.

Second, while the effects of groupstrapping may be subtle or difficult to detect in individual instances, this is no different from the other pernicious belief-forming behaviors that are more commonly discussed. Consider, for example, an echo chamber in which members reinforce each other’s beliefs and actively discredit outside sources: each instance of a belief being echoed back or an outside opinion being disapproved of is not on its own going to have much of an effect on the beliefs of the members within that group. Instead, it is the continuous process of many beliefs being echoed and many instances of outside sources being ignored or discredited that results in an epistemically pernicious group. The same can be said, I think, for groupstrapping: while it is true that individual instances will likely not have much of an effect overall, it is the accumulation of many instances of groupstrapping that will have a noticeable effect.

Sources of Evidence

With that being said, I want to appeal to three main sources of evidence to further bolster my claim that groupstrapping has an important role to play in the formation of at least some kinds of epistemically pernicious groups: first, there are lots of mundane instances in which people appeal to groups as sources of information, which indicates that the relationship between member and group is something that ought to be taken into consideration when explaining pernicious belief-forming behavior; second, there are many groups in which communication between group members is minimal, or would otherwise not generally be of the type that would result in individuals actually updating their beliefs on the basis of what other individual members believed; and third, there are epistemically pernicious groups in which the way that member beliefs are updated mimics groups that have members who act as privileged sources of information, but in which no such individual member exists. I will look at each of these reasons in turn; the first two will be quick, and the last will take a bit of explaining.

First: individuals treat not only other individuals as sources of information, but groups as well. This is important because standard explanations for how epistemically pernicious groups operate deal exclusively in relationships between individual members. For example, consider how an echo chamber is supposed to work: members of a group are constantly communicating with one another, and so the fact that I hear that you think that p, and that someone else does as well, and that yet another person does too, etc., makes me much more confident than I should be that p is true. While I think that this does occur, there also seem to be cases in which members of a group will appeal to the testimony of the group much more frequently than they would appeal to the testimony of any other individual member. Consider, for example, how internet-addicted individuals like myself make basically any decision these days: if I want to know what product I should buy, which restaurant to go to, which movie to watch, or book to read, or album to listen to, etc., chances are I will not seek out the advice of any particular individual, but will instead appeal to the consensus of a group (probably in the form of a review website). This is not yet to say that there is anything epistemically illicit about appealing to groups in this way as a source of information. But it does show the need to provide an explanation for belief-formation that does not appeal solely to relationships between individual members.

Second, and relatedly, is the problem that epistemically pernicious groups persist even when members within the group have only the most tenuous of relationships with one another. Here I predominantly have in mind online groups, in which individual members may know little or nothing about anyone else in the group. Appeals to epistemic bubbles and echo chambers are supposed to be particularly adept at accounting for the belief-forming processes of these kinds of groups, but I am worried that the peculiar nature of online group membership creates problems that cannot be so easily accounted for. Specifically, I have in mind the fact that the anonymity granted by the internet means that I can know almost nothing about anyone else in my group: explanations for the perniciousness of epistemically pernicious groups that rely on communication amongst members fail to take into account the fact that, in general, I am not going to trust that some anonymous yobbo knows enough about what they’re talking about to get me to increase my credence in my belief.

Now, I may be particularly discerning when it comes to forming beliefs, or I might just be deluded into thinking that I am. And there is plenty of research to support the claim that, generally speaking, we find those who agree with us to be more reputable sources of information (see e.g. Metzger et al. (2010), Metzger and Flanagin (2015), Panagopoulos and Harrison (2016), Fazio et al. (in press)). So it is not as though the testimony of even an anonymous fellow group member will have no effect whatsoever on what I believe. But again, there seems to be something missing from a story of how epistemically pernicious groups form that relies exclusively on the relationships between members, especially when those members have no relationship with each other to speak of. Positing that it is the relationship between members and the group itself that plays a role in the epistemically pernicious nature of these groups can help bridge that gap.

The third and more complicated bit of support for the interpretive usefulness of the concept of groupstrapping comes from how epistemically pernicious groups may be structured. Again, I don’t think that there is any one structure that all such groups have. However, I think that such groups tend not to be structured in one of the ways that Nguyen describes, namely such that a group belief is determined as the result of conscientious effort in deliberating about what to believe on the basis of the views of its members, with the result that such a belief acquires additional epistemic value from that deliberative process. Epistemically pernicious groups will tend not to come in the form of conscientiously run laboratories or research groups: instead, the type of group I have in mind here is again more like a social media group, one that generally has little oversight and tends not to engage in any kind of serious deliberation at the group level.

So how should we think about the ways in which beliefs in these kinds of groups tend to get propagated amongst their members? Here is where I want to speculate a bit, and look to recent work in network epistemology that models ways in which members of groups acquire and update their beliefs. For example, one can create models of groups with different relationships among members (e.g. in which everyone talks to everyone else, or only certain people talk to certain others, or some people are taken to have more influence than others, etc.) and in which members start off in varying initial states (e.g. groups in which everyone agrees with each other, or in which there is widespread disagreement, or general agnosticism, etc.) in order to see how information might flow amongst the members. All of these models can be represented mathematically, and various interesting results can then be derived (see Zollman (2013) for an example of such interesting results).
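Purely as an illustrative sketch of what such a model can look like, here is a minimal DeGroot-style averaging model; the update rule is a simplifying choice of mine (Zollman’s cited results are built on richer, bandit-style models):

```python
# A minimal network-epistemology sketch in the DeGroot style (an illustrative
# choice of update rule; Zollman's cited results use richer bandit models).
from typing import Dict, List

def update_round(credences: Dict[str, float],
                 neighbours: Dict[str, List[str]],
                 weight: float = 0.5) -> Dict[str, float]:
    """One round of updating: each agent moves toward their neighbours' mean credence."""
    updated = {}
    for agent, credence in credences.items():
        peers = neighbours[agent]
        if peers:
            peer_mean = sum(credences[p] for p in peers) / len(peers)
            updated[agent] = (1 - weight) * credence + weight * peer_mean
        else:
            updated[agent] = credence
    return updated

# A fully connected three-member group that starts out disagreeing about p.
credences = {"a": 0.9, "b": 0.2, "c": 0.5}
neighbours = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
for _ in range(10):
    credences = update_round(credences, neighbours)
print(credences)  # opinions converge as information flows through the network
```

Changing the neighbours dictionary changes the group’s structure; the “star network” discussed below is just one such structure.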

Beyond toy sketches like the one above, I will not here be running any serious simulations or trying to derive any theorems. However, I want to draw attention to a particular group structure from work in network epistemology that I think is relevant to the discussion of groupstrapping, namely one in which a group has members that occupy positions as privileged sources of information within the group. Venkatesh Bala and Sanjeev Goyal (1998) call these members a “royal family,” which constitutes “a small set of agents who are observed by everyone” (597), and provide three examples of groups with such royal families:

  1. Agricultural groups, in which “individual farmers observe their neighbouring farmers but all the farmers observe a few large farmers and research laboratories.”
  2. The “consumer goods market,” wherein “individual consumers discuss purchase decisions with their colleagues and friends and potential customers read one or two consumer magazines which report on some experiments/consumer experiences.”
  3. Research groups, wherein “individual researchers typically keep abreast of developments in their own narrow area of specialization, and also try to keep informed about the work of the pioneers/intellectual leaders in their subject more broadly defined” (597).

In Bala and Goyal’s models, the views of the royal family tend to propagate throughout the group, such that eventually a significant majority of the members come to share the same beliefs as the royal family. James Owen Weatherall and Cailin O’Connor (2019) illustrate groups with this kind of structure in the following way (in which circles represent members, lines represent connections between members, and changing from a filled-in circle to an empty one represents a change in belief):

[Figure omitted: a star network, in which every peripheral node connects to a single central node.]

In the above model—what Weatherall and O’Connor fittingly call a “star network” (143)—we can see that all peripheral members communicate with the member(s) in the centre, although not necessarily with each other. Once the central member changes their belief, that belief tends to propagate to the remaining members. This is, of course, a very simplified version of a group which, were it to be one in the real world, would undoubtedly be much messier. However, the general lesson is that when groups have privileged members, their beliefs tend to be the ones that members of the group end up sharing.
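Purely as a toy rendering of that propagation dynamic (the adoption probability and update rule are illustrative assumptions of mine, not Weatherall and O’Connor’s model):

```python
import random

# Toy star-network sketch: one central "royal family" member is heard by
# every peripheral member, and the hub's belief spreads outward.
random.seed(1)
ADOPTION_CHANCE = 0.5    # assumed chance a member adopts the hub's view per round

hub_believes_p = True    # the central member has changed their mind about p
periphery = [False] * 8  # peripheral members start out not believing p

for round_num in range(1, 7):
    periphery = [b or (hub_believes_p and random.random() < ADOPTION_CHANCE)
                 for b in periphery]
    print(f"round {round_num}: {sum(periphery)}/8 peripheral members believe p")
# Within a few rounds the whole group tends to share the hub's belief, even
# though the peripheral members never communicate with one another.
```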

This is, of course, not to say that there is necessarily anything wrong with groups that have this star-like structure: it seems like a good thing, for example, for a group to have a royal family consisting of expert members that communicate their well-researched beliefs to other members. So what does any of this have to do with groupstrapping? Here’s the thought: I think there are a lot of epistemically pernicious groups that look like those illustrated above, but in which it is the group itself that occupies that privileged position.

Why think this? Well, consider again one of my worries with the way that epistemically pernicious groups have traditionally been analyzed, namely in terms of the relationships between members: in many epistemically pernicious groups there is little to no relationship to speak of between members, especially when members of groups are predominantly anonymous individuals online. So where are members mostly getting their information from?

Here I think we can look to one of Bala and Goyal’s examples of groups with a royal family, namely the “consumer goods market” that involves individuals appealing to “one or two consumer magazines which report on some experiments/consumer experiences.” Consider an updated version of this example, in which instead of magazines we have online groups that make posts that are approved or disapproved of by members of the group (say, in the form of “likes,” or “hearts,” or “upvotes,” or what-have-you). In consulting the group for information, then, members of the group will tend to look to those posts that are most highly endorsed and adjust their beliefs accordingly, a process that will then impact the way that future posts are endorsed. This kind of group is structured in a way similar to that illustrated above, with a central influential source that all other members observe. However, instead of it being a member, the royal family is the group itself. This kind of model can then help account for the proliferation of beliefs in epistemically pernicious groups when members are relatively disconnected from one another.
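Here is a highly simplified sketch of that loop, with post labels, seed vote counts, and the update rule all being illustrative assumptions of mine: members consult the most-endorsed post, shift their credence toward it, and then endorse posts in line with their shifted credence.

```python
# Sketch of the group itself playing the royal-family role (a toy model; the
# post labels, seed votes, and update rule are all illustrative assumptions).
posts = {"vaccines are dangerous": 12, "vaccines are safe": 10}  # assumed seed votes
member_credences = [0.5] * 6  # members' credences that vaccines are dangerous
SHIFT = 0.15                  # how far consulting the group moves a member

for round_num in range(1, 6):
    top_post = max(posts, key=posts.get)  # what "the group" appears to endorse
    target = 1.0 if top_post == "vaccines are dangerous" else 0.0
    member_credences = [c + SHIFT * (target - c) for c in member_credences]
    # Members then endorse the post matching their (freshly shifted) credence,
    # entrenching the very consensus they just consulted.
    for c in member_credences:
        posts["vaccines are dangerous" if c > 0.5 else "vaccines are safe"] += 1
    mean = sum(member_credences) / len(member_credences)
    print(f"round {round_num}: votes = {posts}, mean credence = {mean:.2f}")
```

The “consensus” members read off the group is partly their own prior behaviour fed back to them, which is exactly the structural role the royal family played above.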

This picture also coheres with Baumgaertner’s nice illustration of groups that may become epistemically pernicious only after many iterations of a groupstrapping procedure. Here again are his two models:


Figure 1: Iterative Structure of Bootstrapping [image omitted]

Figure 2: Groupstrapping and Oops-strapping [image omitted]

As Baumgaertner describes it, on the left we have a situation in which multiple individuals seek out information from a group, but do not themselves contribute to the overall position of the group after seeking out that information, while on the right we have a situation in which an individual seeks out information from a group while also contributing to the overall position of the group afterwards, changing the group as a result. Baumgaertner rightly argues that while the process on the right can result in an epistemically pernicious group, there’s nothing necessarily problematic about the one on the left. The process I am trying to illustrate here looks something like a combination of these two models, such that member beliefs are all informed by the group (which looks like the model on the left) but in which those member beliefs then determine the group’s belief (which looks like the model on the right).
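In toy form (a construction of mine, not Baumgaertner’s own models), the difference comes down to whether members’ post-consultation credences are written back into the group’s stated position:

```python
# Toy contrast of Baumgaertner's two pictures (my construction): a sequence
# of members each consult the group's stated credence in p; with write-back,
# each member's partly-deferential credence is folded back into that stated
# credence before the next member consults it.
def run(write_back: bool, n_members: int = 20) -> float:
    group = 0.55           # assumed initial group credence in p
    reports = [group]      # the reports the group position is built from
    for _ in range(n_members):
        member = 0.5 + 0.2 * group  # each member defers partly to the group
        if write_back:
            reports.append(member)
            group = sum(reports) / len(reports)
    return group

print(f"without write-back: {run(False):.3f}")  # the group position stays put
print(f"with write-back:    {run(True):.3f}")   # the group position ratchets upward
```

Only the write-back variant ratchets: each consultation shifts the very position that the next member will consult.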

So to sum up this third long, speculative bit of support for groupstrapping: when there is little communication and no personal relationship to speak of between members in a group, the members of the group might get their information predominantly from the group itself, resulting in group beliefs standing in for what would otherwise be beliefs of a privileged member or members. While there is nothing necessarily wrong with groups that have royal families, issues arise when the beliefs of the royal family are determined with little or no oversight by the beliefs of the members themselves. Hence: groupstrapping.

I’ve already said a lot, but I want to conclude by considering one final consequence of accepting the view that groupstrapping can be a significant force in the formation of epistemically pernicious groups, namely that it can explain how beliefs persist in groups that were initially formed around a royal family that has since abdicated the throne. Consider, for example, the group of antivaxxers, which was initially formed on the basis of the testimony of Andrew Wakefield, who purported to show that there is a link between vaccines and autism. Again, although it is an oversimplification to say that the group formed in so neat a way as to be represented by the star model above, it seems plausible that at first Wakefield constituted the royal family of the group—the antivaxx king, as it were—whom all other members treated as a privileged source of information. It is not surprising, then, that a belief that vaccines are dangerous should spread in such a group. What is strange, though, is that the beliefs in the group persisted even after Wakefield was thoroughly and publicly discredited. One might think that once a group’s privileged source of information is shown to have been wrong, the group would perhaps disband, or at least update its beliefs. So why didn’t this happen?

Again, I don’t think there’s one simple answer to this question. But here is perhaps part of a complex answer: the group persisted because there was a successor to the throne, namely the group itself. The picture, then, is one in which a group can take on a life of its own, such that misinformation can continue to be spread even when a formerly privileged source of information no longer plays the role of providing group members with information. This picture may also help explain why it’s so difficult to convince members of such groups to leave them: while discrediting privileged sources of information might seem like a good way to get people to change their beliefs, that the group can act as its own privileged source means that this strategy might not ultimately be terribly effective.

Contact details: Kenneth Boyd, University of Southern Denmark, kenneth.boyd@gmail.com

References

Bala, Venkatesh and Sanjeev Goyal. 1998. “Learning from Neighbours.” The Review of Economic Studies 65 (3): 595-621.

Baumgaertner, Bert. 2019. “Groupstrapping, Bootstrapping, and Oops-strapping: A Reply to Boyd.” Social Epistemology Review and Reply Collective 8 (6): 4-7. https://wp.me/p1Bfg0-4bE.

Boyd, Kenneth. 2018. “Epistemically Pernicious Groups and the Groupstrapping Problem.” Social Epistemology 33 (1): 61-73.

Fazio, Lisa, David Rand, and Gordon Pennycook. In press. “Repetition Increases Perceived Truth Equally for Plausible and Implausible Statements.” Psychonomic Bulletin & Review: 1-19.

Metzger, Miriam and Andrew Flanagin. 2015. “Psychological Approaches to Credibility Assessment Online.” In The Handbook of the Psychology of Communication Technology, edited by S. Shyam Sundar, 445-466.

Metzger, Miriam, Andrew Flanagin, and Ryan B. Medders. 2010. “Social and Heuristic Approaches to Credibility Evaluation Online.” Journal of Communication 60: 413-439.

Nguyen, C. Thi. 2019. “Group-Strapping, Bubble, or Echo Chamber?” Social Epistemology Review and Reply Collective 8 (6): 31-37. https://wp.me/p1Bfg0-4dr.

O’Connor, Cailin and James Owen Weatherall. 2019. The Misinformation Age. New Haven, CT: Yale University Press.

Panagopoulos, Costas and Brian Harrison. 2016. “Consensus Cues, Issue Salience and Policy Preferences: An Experimental Investigation.” North American Journal of Psychology 18 (2): 405-418.

Zollman, Kevin. 2013. “Network Epistemology: Communication in Epistemic Communities.” Philosophy Compass 8 (1): 15-27.


