Reply to Jeroen de Ridder’s “Online Illusions of Understanding”, Justin McBrayer

Professor de Ridder (2022) argues that while online informational environments are epistemically good in some ways, they also have an epistemic flaw: they create illusions of understanding instead of the real McCoy.

Image credit: wim goedhart via Flickr / Creative Commons

Article Citation:

McBrayer, Justin. 2023. “Reply to Jeroen de Ridder’s ‘Online Illusions of Understanding’.” Social Epistemology Review and Reply Collective 12 (5): 1-7. https://wp.me/p1Bfg0-7Le.

🔹 The PDF of the article gives specific page numbers.

This article replies to:

❧ de Ridder, Jeroen. 2022. “Online Illusions of Understanding.” Social Epistemology 1–16. doi: 10.1080/02691728.2022.2151331.

Internet-Based Knowledge

On the one hand, the epistemic goods are (at least) twofold. First, Professor de Ridder grants that online informational environments are conducive to propositional knowledge. A quick Google search will teach you that the Battle of Hastings happened in 1066. Second, online informational environments are also pretty good at delivering practical knowledge (like knowledge-how). For example, there’s a YouTube video on how to fix just about anything from a loose door handle to a leaky toilet.

Both points seem right to me. The internet is an epistemic boon. But each case also has serious limitations. Sure, the internet can provide a lot of propositional knowledge, but that gets less and less likely the more controversial the claim. In effect, there’s an inverse correlation between controversy and epistemic utility: the more controversial a claim gets, the harder it is to confirm online. That’s because the online environment is (largely) epistemically egalitarian: any idiot with a laptop can post something. So, on the one hand, the internet is a great source of propositional knowledge. On the other, its reliability slips as propositions become contested.

There’s a limitation on practical knowledge, too, though the nature of that limitation is different. It’s not controversy that limits our ability to know how to do something after watching an online tutorial. It’s our own psychology. Cognitive biases from the superiority illusion (thinking that what’s difficult for others is easy for you) to the Dunning-Kruger effect (where those with the lowest skill level are most likely to overestimate their abilities) make it clear that we are wired to be overconfident in our abilities. Professor de Ridder acknowledges as much when he cites Rozenblit and Keil’s (2002) work on the illusion of explanatory depth later in his paper. These biases don’t just hamper our understanding. They also impede our knowledge of how to do something by watching a YouTube video or reading a ten-step Wiki. How many people have watched a video on how to fix a washing machine and then gone on to confidently destroy it with their newfound “knowledge”?

But even delimited in these ways, it’s pretty obvious that the internet is a source of propositional and practical information. In these respects, our intellectual lives are far richer than they were in the pre-internet days. No complaints there from either Professor de Ridder or me.

Understanding is another thing entirely.

On Understanding

The precise nature of understanding is up for grabs, but philosophers largely agree that it goes beyond mere knowledge. To understand something is not just to know that it’s true but to know why it’s true or know how that truth relates to other important truths in the area. Those relations might be explanatory, logical, causal, etc. Sometimes philosophers call this “seeing how things hang together.”

As such, understanding is a more demanding epistemic state than knowing that or knowing how. Even in an epistemically neutral environment, we should expect humans to have a harder time understanding things than knowing them. And what’s worse, according to Professor de Ridder, the internet isn’t an epistemically neutral environment.

He convincingly argues that genuine understanding is difficult to come by online. Instead, online informational environments often produce illusions of understanding instead of the real thing. This happens for two reasons. First, the structure of information itself (e.g. hyperlinks embedded in text) can lead us to think we understand something when we don’t. Second, the organizational processes that we rely on to sort through that firehose of information (e.g. reliance on a search engine) can cause illusions of understanding.

I think Professor de Ridder is right about both points, and my own work on fake news offers substantiating evidence. I’ll sketch some of that first for structure, then for organizational processes, and close with a very basic explanation for why online informational environments create epistemic trouble of this sort.

Consider first the structure of information online, especially when it comes to the news. Start with the fact that in the news, almost all stories have a headline and a body (that’s true, of course, for in-print news as well). That structure alone causes significant misunderstanding and produces illusions of understanding.

Headlines pose two problems. First, many, many people read only the headlines of a story and assume they know what’s going on. Some surveys in the States show that about 60% of us don’t read beyond the headlines in the news. That’s bad because their brevity ensures that the information in headlines is necessarily limited. It would be difficult to really understand what’s happening in the Russia-Ukraine war or the causes of inflation by simply reading headlines. There’s not enough information there for us to see “how things hang together.” Headlines might produce propositional knowledge (“Russia invades Ukraine”) without producing an understanding of the situation. And yet if people are reading dozens of headlines a day, they are likely to come away feeling that they understand world events.

Second, headlines are routinely skewed to produce false impressions even when they are strictly speaking true. In other words, journalists tweak the phrasing of a headline to frame a story in a way that conveys or emphasizes some aspects of the story and downplays others. Political journalists obviously do these sorts of things to boost their readership, but the problem is ubiquitous.

Framing Knowledge

I think that kind of framing poses a problem for propositional knowledge. For example, suppose you read this headline in the Wall Street Journal: “Stocks slip in the first quarter as inflation grows.” That headline gives you the impression that stocks went down because inflation grew. But it doesn’t, strictly speaking, say that. The headline announces a correlation between two things, but we read it as causation between two things. Newspaper headlines do this all the time. When the headline reads “Black motorist shot by white policeman,” you can’t help but assume that the incident was racially motivated. That’s what happens when journalists include the framing of race in the headline, even if the headline doesn’t actually say that race was a relevant factor.

But even if you disagree with me that headline framing is bad for propositional knowledge, it’s pretty obviously bad for understanding. If your goal is to see how things hang together, you can’t rely on headlines that selectively highlight some aspects of a story over others. Whether orchestrated by the political right or left, following headlines will give you a partial and biased sense of how things hang together.

Combined, the informational limitations of headlines and the framing of headlines create illusions of understanding: we read headlines that convey precious little information in biased ways, skip the story, and assume we have a handle on what’s going on. We often don’t.

On Hyperlinks

Hyperlinks are another great example of a problematic structure for web-based information. Instead of footnotes or parenthetical references, many online sources use hyperlinks to cite evidence for claims made in the text. Interested readers can click on the hyperlinked word or phrase to pull up a new window that includes corroborating information. That’s a sensible way to document evidence without burdening the reader with 10-point font.

But hyperlinks provide a warm blanket of epistemic comfort even when they shouldn’t. Be honest: when was the last time you clicked a hyperlink when reading a news story about a controversial event? If you’re like most of us, the answer is almost never.

And yet seeing words in blue, underlined font made you think the story was well-evidenced and trustworthy even though you didn’t check it. The very existence of the hyperlinks produces an illusion of understanding: you think you see how the information in the article hangs together with other claims, but you don’t ferret out those connections yourself. You assume the connections are there, even when the hyperlinks would in fact have redirected you to a conspiracy theory website or worse.

Turning to organizational processes, the trouble continues. The amount of information available in the current online environment is truly staggering. And we have to orient ourselves in the midst of it. Here’s how Richard Saul Wurman describes it in his book Information Architects (1997):

There is a tsunami of data that is crashing onto the beaches of the civilized world. This is a tidal wave of unrelated, growing data formed in bits and bytes, coming in an unorganized, uncontrolled, incoherent cacophony of foam. None of it is easily related, none of it comes with any organizational methodology. It is full of flotsam and jetsam. As it washes up on our beaches, we see people in suits and ties skipping along the shoreline, men and women in fine shirts and blouses dressed for business…They nod their heads and say “Yes, this is important, this is good stuff. The person sitting next to me, sitting in the next office down the aisle, they understand it, so I will smile, making believe I understand it, too” (15).

There is no way that we can take the firehose of data provided online and fashion it into a limited set of results that we can easily comprehend, much less understand. And so we let someone—or some thing—else sort through the data for us, and we nod our heads in an illusion of understanding.

Professor de Ridder is right: the processes that we use for sorting through online information make us think we understand things when we often don’t. Search engines are a prime example.

On Search Engines

A search engine uses a query to scour the web and return results relevant to that query. But there are epistemic pitfalls both in which results it pulls and how those results are displayed. First, which results you see depends on who you are. In other words, search engine results are personalized. Google started doing this way back in 2009 by loading anonymous cookies on your computer that would tailor results to your location and other personal details. Nowadays, search engines take far more information into consideration as they determine which results to display. Given their browsing histories, an anti-vaccination activist and an MD will get very different results when they search ‘do vaccines cause autism?’ in any of the standard search engines.
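
To make the mechanism vivid, here is a minimal sketch in Python of how personalized re-ranking can split two users apart. Everything in it is invented for illustration (the ‘slant’ labels, the profile weights, the scoring rule); real search engines use far richer, proprietary signals.

```python
# A minimal sketch of personalized re-ranking. The candidate pool, the
# "slant" labels, and the profile weights are all invented; the point is
# only that one query plus two profiles yields two different result lists.

CANDIDATES = [
    {"title": "CDC: vaccines do not cause autism", "slant": "pro-vaccine"},
    {"title": "Parents report autism after shots", "slant": "anti-vaccine"},
    {"title": "Meta-analysis of vaccine safety", "slant": "pro-vaccine"},
    {"title": "The hidden dangers of vaccination", "slant": "anti-vaccine"},
]

def personalized_rank(results, affinity):
    """Order results by how well their slant matches the user's inferred profile."""
    return sorted(results, key=lambda r: affinity.get(r["slant"], 0.0), reverse=True)

# Same query, two different inferred profiles.
md_profile = {"pro-vaccine": 0.9, "anti-vaccine": 0.1}
activist_profile = {"pro-vaccine": 0.1, "anti-vaccine": 0.9}

print(personalized_rank(CANDIDATES, md_profile)[0]["title"])        # a pro-vaccine result
print(personalized_rank(CANDIDATES, activist_profile)[0]["title"])  # an anti-vaccine result
```

Each user sees a different first result for the identical query, and neither sees a representative sample of the pool.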

Some philosophers have already argued that personalized search results undermine justification (for example, see Miller and Record 2013). Whether or not that’s so, personalization also affects understanding. Understanding is seeing how things hang together. But that won’t happen when your search results are not a representative sample of all the information there is. Put another way, you’ll never see how things hang together if you’re only getting one side of the story, and that’s exactly what personalized search engines give you.

Consider a concrete example: the efficacy of COVID precautions like PPE, mask mandates, personal space requirements, etc. Truly understanding which precautions were effective and which were not was a daunting task that even our best scientists failed at various points in the pandemic. Early on, the CDC said masks weren’t important, then it reversed course and advocated for any type of mouth/nose covering, then it backtracked further to say that only masks helped but that neck gaiters actually made things worse. This was not conspiracy or incompetence: some issues are just hard to understand.

And if you were trying to understand the issue for yourself, personalized search results were sure to make the situation worse. A search engine knows your political tribe and will return results in line with your tribal thinking. If you always trusted the CDC, you’d get results that gave you that side of the story. If you were a longtime government skeptic, you’d get a different side of the story. In neither case would you have a robust set of results that would promote understanding of an evidentially complicated issue.

Second, results in a search engine are presented in a listed order. Professor de Ridder flags the role of an anchoring bias here: the results we see first are likely to have an outsized effect on our understanding of the issue. I want to raise an issue that compounds this problem: virtually every search engine on the planet hosts sponsored results and places them both at the top of and throughout the results list.

To make the point, I just pulled up the Chrome browser and searched ‘best electric vehicle 2023’. My top three hits were all sponsored: one by a government entity, another by Nissan, and a third by Toyota. Spots four and five were legitimate (meaning non-paid-for) results for Car and Driver magazine and Edmunds. After that came a list of other questions I might ask, raising all of the problems about auto-complete and suggested questions that Professor de Ridder decries in his article. The pattern continued after the questions: three more sponsored links, a handful of legitimate ones, four more sponsored links, and so forth down the page.
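
For the curious, here is a toy sketch of the kind of page layout I just described. The slot pattern is made up to loosely mimic what I saw; actual ad-placement rules are proprietary and vary by engine.

```python
# A toy model of a results page that interleaves paid and organic links.
# The link names and the ad-slot positions are invented for illustration.

organic = ["Car and Driver", "Edmunds", "MotorTrend", "Consumer Reports"]
sponsored = ["Gov entity (ad)", "Nissan (ad)", "Toyota (ad)", "Dealer (ad)"]

def build_results_page(organic, sponsored, ad_slots=(0, 1, 2, 5)):
    """Fill the numbered slots with ads where specified, organic results elsewhere."""
    page, org, ads = [], iter(organic), iter(sponsored)
    for slot in range(len(organic) + len(sponsored)):
        page.append(next(ads) if slot in ad_slots else next(org))
    return page

for rank, link in enumerate(build_results_page(organic, sponsored), start=1):
    print(rank, link)
# Ranks 1-3 are ads; the first organic result only appears at rank 4.
```

The upshot is visible in the output: whoever buys the slots controls the anchoring positions, regardless of informational quality.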

Organizing results in a list in which those with the most dollars can buy their way into prominent places in the informational landscape poses an obvious obstacle to understanding. Maybe you can still get some knowledge about the best EVs in 2023 by avoiding the sponsored links and clicking only on those with high reliability. But your overall impression of the informational landscape is marred by the fact that informationally weak results are displayed alongside (or even before) informationally better results. That makes it awfully hard to see how things hang together. And search result sponsorship is not an isolated problem: from product reviews to ad placement, online information is a web of pay-to-play opportunities.

In sum, Professor de Ridder is right that the structure and organizational processes of the internet often make us think we understand something when we really don’t. The fake news crisis is an example of that illusion run amok. Voters routinely claim to understand complicated matters of politics from the causes of inflation to solutions to international conflicts. They then queue up for elections brimming with confidence. It’s likely that online information has bolstered such illusions of understanding.

Why does the internet have this epistemic implication? The short answer is that our informational architecture incentivizes engagement over truth.

Engagement vs. Truth

Whether we’re talking about the design of city streets or the design of a university degree, design plans incentivize some behaviors over others. Wide lanes with generous shoulders incentivize higher driving speeds than narrow lanes with minimal shoulders. Speed bumps incentivize slower speeds than unencumbered pavement.

The same goes for the design of information systems. And the internet is built to reward engagement over all else. For example, search engines deliver the results they think you are most likely to click. It’s really a ranking of probable engagement more than anything else. And when it comes to satisfying our preferences, that’s a really good thing. When I search ‘pizza to go’, I don’t need a definition of this phrase, its etymology, or a list of the top restaurants in New York City, far away from my actual location. Meeting my preferences means telling me which restaurants in my immediate locale offer pizza to go. That’s the link I’m most likely to click. Personalized results for the win.

But what’s good for preferences isn’t so good for truth. When we want to know what the facts are, our preferences should be immaterial. But Google doesn’t discriminate between kinds of queries: ‘pizza to go’ gets the same treatment as ‘is climate change real?’ In both cases, search engines are delivering the results we are most likely to click.
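
Here is a minimal sketch of that uniformity, with invented click-through numbers: one engagement-first ranking rule applied identically to a preference query and a factual query.

```python
# A minimal sketch of engagement-first ranking. The candidate titles and
# click-through rates are invented; the point is that predicted engagement,
# not evidential quality, orders the results for both kinds of query.

results = {
    "pizza to go": [
        ("Pizzeria near you, open now", 0.61),
        ("Etymology of the word 'pizza'", 0.02),
    ],
    "is climate change real?": [
        ("Outraged hot take", 0.48),        # engaging, evidentially weak
        ("IPCC assessment report", 0.11),   # accurate, rarely clicked
    ],
}

def rank_by_engagement(candidates):
    """Sort candidates by predicted click-through rate, highest first."""
    return sorted(candidates, key=lambda c: c[1], reverse=True)

for query, candidates in results.items():
    print(query, "->", rank_by_engagement(candidates)[0][0])
```

The same rule serves the pizza query well and the factual query badly: it surfaces whatever gets clicked, not whatever is true.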

The obvious problem is that engagement is not a proxy for any epistemic good that you might care about: evidence, justification, or truth. And yet engagement is what drives our most basic informational architecture from search engines to news stories. MSNBC is not rewarded for how truthful or accurate its stories are. It is rewarded for how many clicks, shares, and reads it gets. The more eyes, the more advertising dollars it can command. That’s a clear incentive to follow engagement even when it comes apart from the truth.

As long as the online informational environment prioritizes engagement over evidence (or some other epistemic good), we can expect that environment to continue to produce illusions of understanding and other epistemic problems.

Author Information:

Justin McBrayer, jpmcbrayer@fortlewis.edu, Fort Lewis College.

References

de Ridder, Jeroen. 2022. “Online Illusions of Understanding.” Social Epistemology 1–16. doi: 10.1080/02691728.2022.2151331.

Miller, Boaz and Isaac Record. 2013. “Justified Belief in a Digital Age: On the Epistemic Implications of Secret Internet Technologies.” Episteme 10 (2): 117–134.

Rozenblit, Leonid and Frank Keil. 2002. “The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth.” Cognitive Science 26 (5): 521–562.

Wurman, Richard Saul. 1997. Information Architects. Graphis Inc.


