A Gathering Crowd: Collective Intelligence and Medicine, William Davis

Image credit: jfcherry via Flickr / Creative Commons

Article Citation:

Davis, William. 2019. “A Gathering Crowd: Collective Intelligence and Medicine.” Social Epistemology Review and Reply Collective 8 (11): 1-7. https://wp.me/p1Bfg0-4D7.

The PDF of the article gives specific page numbers.

In the United States, as in many other locales throughout the world, “seeing a doctor” no longer requires physically going to an office. From phone and text communication, to video consultations, twenty-first century medicine involves technological mediation—the notion that technologies shape how people interact with the world around them—and that necessitates a comprehensive reexamination of the physician-patient relationship.

From diagnosis to intervention, the practice of medicine is adapting to one of the most significant societal trends of the last two decades: nearly all inquiry begins with an internet search. We seek answers to questions and information on topics by typing into a search bar and, seemingly, trusting the results we find there. We might not be far off from a time when “seeing a doctor” only tangentially involves an actual human doctor.

Whether that is a future we should aspire to will be the focus of this series of posts for the Social Epistemology Review and Reply Collective. Below is part I.

Improving Medical Education?

A recent editorial from the open-access arm of the Journal of the American Medical Association, JAMA Network Open, delves into a topic of interest to us at SERRC: how ought medical diagnosticians in clinical settings be trained, and how ought they make decisions? Examining the methods of analysis that clinical diagnosticians employ would certainly be a place to start: do they employ the hypothetico-deductive model, or even some form of abduction (Trimble and Hamilton 2016, 343)? Such questions deserve attention, but notice what they leave out: who and what are the clinical diagnosticians consulting when making decisions? We need a more robust account of the social to assess the adequacy of intervention strategies that intend to improve diagnosis.

Medical schools in the USA have begun opening their doors to applicants who, a couple of decades ago, would have had difficulty matriculating: liberal arts graduates. The shift in emphasis, seemingly coincident with a revamped format of the MCAT (Medical College Admission Test) that promotes the biopsychosocial model of illness as opposed to the biomedical model (Engel 1977; Adler 2009), aims partly to address the concern that medical practitioners need more empathy, communicative ability, and compassion.

The thinking seems to be that if students with more variety in their undergraduate training enter colleges of medicine, they will provide perspectives that complement those of their colleagues whose undergraduate degrees focused on the life and physical sciences. Bring together students from a wider variety of disciplines and train them together; they will learn from and teach each other; they will be the new synthesis that transforms medicine in the twenty-first century.

The content of the new MCAT serves to discipline aspiring practitioners: applicants still need to know biology, chemistry, physics, statistics, and calculus, but now test-takers must also demonstrate a more expansive understanding of the psychological and sociological factors impacting health. The biomedical model has been found wanting. It too often discounts psychological and sociological factors, either by not acknowledging them as significant contributors to health and well-being or by relegating them to a separate “non-medical” sphere like psychiatry (Engel 1977; Adler 2009). Thus the test tells us (to be fair, so do other practitioners and committee findings; see below) that the medical field needs practitioners with impressive interdisciplinary training who are adept at examining more than just medical charts. Physicians must be skilled interlocutors capable of respecting patient autonomy while also applying their medical knowledge to specific cases.

Concurrent with an increasing emphasis on the psychological and sociological factors of health is the rise of technologies often found under the heading Medicine 2.0 or Health 2.0 (Montano 2016; Wazny 2016) that permit greater patient and lay public involvement in diagnoses and treatment discussions. A 2012 Pew Research Center poll estimated that 35% of adults in the U.S.A. had used the internet to understand a medical condition, and that about half of those individuals went to see a physician after looking that information up online. More broadly, 72% of those surveyed said they had looked online for health information in the past year (Fox and Duggan 2013). Given the increasing ubiquity of smartphones and internet access in the country, we might fairly imagine that even more individuals are going online now, in 2019, to find out about medical conditions than they were seven years ago.

It is remarkable that, after going online to learn about medical conditions, only about half of those people went to visit a health care professional (Fox and Duggan 2013). Though it is likely that some of those web surfers were simply curious about the symptoms of diseases they did not have, some of them (or their family members or loved ones) might have had the symptoms and still not sought health advice in a traditional medical setting.

One explanation might be a lack of bond or trust with a particular general practitioner; we do not live in an age of home visits from a doctor. Instead, patients and health professionals move around. Electronic health records, then, are important because patients want portability and care providers want flexibility: if a patient’s physician cannot see her on a particular day, her records can be parsed by another physician to provide the patient with care that day. Electronic health records, however, also mediate the physician-patient relationship in various ways, as well as promote, I argue, the idea that a patient’s file is sufficient to understand her health needs. Why should patient and provider know each other if diagnosis and treatment depend chiefly on information found in a file?

Better Health through Improved Engagement

A National Academies of Sciences, Engineering, and Medicine 2015 report recommends methods to improve medical diagnostic accuracy and reduce diagnostic error. Their first recommendation: “Facilitate more effective teamwork in the diagnostic process among health care professionals, patients, and their families” (6). The report emphasizes the significance of “leveraging the expertise, power, and influence of the patient community” to help improve diagnoses— explanations of health problems that then inform other health care choices (2015, 2-6). Health records matter, of course, but those records are only as accurate as the information gathered, and even then are not sufficient alone to improve health outcomes.

Health care practitioners appear to need engaged patients (and the families of patients) to improve diagnostic accuracy and effectiveness, but with more health information available online, patients might feel less need to speak with a health care practitioner. Further, despite recommendations to the contrary, medical students and health practitioners themselves may focus on improving skills like clinical reasoning to the detriment of skills like interpersonal communication, a topic I will return to with greater emphasis in future essays on SERRC.

Just as lay publics seek out health information on the internet, so, too, do medical professional trainees and practitioners through analysis of web-based patient vignettes (Dhaliwal 2013; Meyer et al. 2013). Recently developed software (e.g., the Human Diagnosis Project–HumanDx) provides even greater connectivity for such analysis, allowing users to compare their diagnoses with responses from other health practitioners. Is our understanding of the social beginning to thicken, or are we simply imagining individuals cogitating and diagnosing on their own without much need for interacting with other practitioners?
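To make the kind of comparison such platforms afford more concrete, here is a minimal sketch, in Python, of how ranked differentials submitted by many practitioners for the same case vignette might be pooled into a single collective ranking. The weighting scheme, the sample data, and the function name are illustrative assumptions on my part, not a description of how HumanDx actually aggregates responses.

```python
from collections import defaultdict

def aggregate_differentials(case_responses):
    """Pool ranked differentials from many practitioners into one ranking.

    case_responses: a list of lists, each an ordered differential diagnosis
    (most likely first) submitted by one practitioner for the same case.
    """
    scores = defaultdict(float)
    for differential in case_responses:
        for rank, diagnosis in enumerate(differential):
            # Simple positional weighting: diagnoses listed earlier (with more
            # confidence) contribute more to the pooled score.
            scores[diagnosis] += 1.0 / (rank + 1)
    # Return diagnoses ordered by pooled score, highest first.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical example: three clinicians respond to the same vignette.
responses = [
    ["pulmonary embolism", "pneumonia", "costochondritis"],
    ["pneumonia", "pulmonary embolism"],
    ["pulmonary embolism", "pericarditis"],
]
print(aggregate_differentials(responses))
```

An individual practitioner could then compare her own differential against the pooled ranking, which is the kind of feedback loop such software is meant to provide.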

Effective and compassionate communication on the part of the physician requires practice and desire: practitioners must believe it works, and not just because some committee or software tells them it does. When truth and knowledge are found online, mere keystrokes and clicks away, however, anything other than unidirectional communication seems both absurd and tedious.

Equating Diagnostic Accuracy with Improved Outcomes

A compelling, if untethered from reality, version of the correct approach to diagnosis involves the lone physician analyzing information from a patient’s medical history and lab results to create a detailed list of possible diagnoses. Then that same physician attempts to rule out all possible diagnoses save one: the correct diagnosis (Fihn 2019, 1). Such a cursory procedural overview does not explicitly state that physicians, especially early-career practitioners and physicians in training, should consult other physicians when making a diagnosis. The image of a solitary physician teasing out a correct diagnosis from disparate and often disconnected mounds of data might make compelling television drama, but it also presents a misleading narrative: that only one physician is required to make accurate diagnoses.

Medical errors account for tens of thousands of deaths each year in the U.S.A. (Institute of Medicine 2000). Diagnostic errors, according to a study from the 1990s, contribute to about 17% of those deaths (Leape et al. 1991). Mistakes in the medical profession might lead to loss of life or other impairments. Understandably, health practitioners—as well as patients, insurers, health providers, etc.—want to reduce errors as much as possible. A recurring theme in the literature on diagnostic accuracy points to increased critical reflection as an essential component in the reduction of diagnostic errors (Mamede, Schmidt, and Rikers 2006, 138).

Increased training in, and practice of, critical reflection and meta-reasoning would likely increase diagnostic accuracy, partly because such activities permit the physician to determine her own biases and causes of error in reasoning (Mamede et al. 2006, 143-4). If increased individual training and practice in these areas are all that is required, then software like HumanDx and teaching models like the One Minute Preceptor might be adequate resources. If improved diagnostic accuracy requires more physician self-reliance, then those same solutions again seem viable. If improved patient outcomes are the aim, and that includes reductions in diagnostic errors, then a first step ought to be increasing and improving communication amongst all health practitioners, patients and family members, insurers, etc., as the 2015 report from the National Academies of Sciences, Engineering, and Medicine recommends.

My interest in this topic involves a potential overreliance on technologies like HumanDx to train and provide continuing education for medical practitioners and what such technological mediation promotes in terms of behavior and practice. As one piece of a broader educational strategy, training software like HumanDx has a clear place. Overemphasis on such technologies, however, leads one to imagine the kinds of fanciful medical encounters found in science fiction film and television series like Star Trek: a machine performs some sort of scan on a person and makes an immediate diagnosis and/or medical intervention.

Back on Earth in the early twenty-first century, the clinical diagnostic process still involves human engagement. The four critical components of the process, namely gathering information, developing a hypothesis, testing that hypothesis, and reflecting critically on the results of the test(s), are at present carried out by people, not by software alone (Trimble and Hamilton 2016, 343). However, medical practitioners are re-examining just how they ought to go about the practice of diagnosis, given that the ability to connect with and get feedback from other physicians regarding a diagnosis continues to become, in a technical and temporal sense, easier and easier (Fihn 2019).
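To fix ideas, here is a toy sketch, in Python, of those four components rendered as a loop: gather information, develop hypotheses, test a hypothesis, reflect on the result. Every function stands in for human judgment, and the patient data are invented; this is not an implementation of any clinical system, only a schematic of the process Trimble and Hamilton describe.

```python
def gather_information(patient):
    # In practice: history-taking, physical exam, labs, patient narrative.
    return patient["findings"]

def develop_hypotheses(findings):
    # In practice: the clinician's differential diagnosis, ordered by likelihood.
    return list(findings["differential"])

def test_hypothesis(hypothesis, patient):
    # In practice: ordering and interpreting a test; here a simple lookup.
    return patient["test_results"].get(hypothesis, "negative")

def reflect(hypotheses, hypothesis, result):
    # Critical reflection: a positive result supports the hypothesis; a negative
    # result rules it out and the remaining differential is reconsidered.
    if result == "positive":
        return [hypothesis]
    return [h for h in hypotheses if h != hypothesis]

def diagnose(patient):
    hypotheses = develop_hypotheses(gather_information(patient))
    while len(hypotheses) > 1:
        candidate = hypotheses[0]
        result = test_hypothesis(candidate, patient)
        hypotheses = reflect(hypotheses, candidate, result)
    return hypotheses[0] if hypotheses else None

# Invented example data, for illustration only.
patient = {
    "findings": {"differential": ["pneumonia", "pulmonary embolism", "costochondritis"]},
    "test_results": {"pneumonia": "negative", "pulmonary embolism": "positive"},
}
print(diagnose(patient))  # -> pulmonary embolism
```

Nothing in this loop, as written, requires consulting another practitioner, the patient's family, or a wider community of diagnosticians; that omission is precisely what the lone-physician image trades on.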

The image of a lone physician making predictions (potential diagnoses) and testing them (ruling the diagnoses out one by one) does not seem to jibe with our increasingly interconnected world. More importantly, such an image is counterproductive to improving the practice of medicine, of which diagnosis certainly comprises an important part. Using collective intelligence, a kind of crowdsourcing, is not new for medical practitioners (Fihn 2019, 2); the scale at which the resources needed for collective intelligence can be marshaled, however, makes this emerging technology a subject of particular importance for social epistemologists.

Future Directions

Steve Fuller’s seminal question of social epistemology, familiar to SERRC members and readers of our work, asks how we should go about trying to gain knowledge:

How should the pursuit of knowledge be organized, given that under normal circumstances knowledge is pursued by many human beings, each working on a more or less well-defined body of knowledge and each equipped with roughly the same imperfect cognitive capacities, albeit with varying degrees of access to one another’s activities ([1988] 2002)?

Physicians certainly possess a “more or less well-defined body of knowledge” and definitely have some access to each other’s activities—indeed, their body of knowledge has been built upon accrued understandings of past cases and interventions. Training software like HumanDx ostensibly increases the access that physicians, and physicians in training, have to each other’s diagnostic decisions. The diagnostic process, however, of which a diagnostic decision is a significant but not the only part, relies on interpersonal communication and interaction (Improving Diagnosis in Health Care 2015, 2).

The use of crowdsourcing technologies like HumanDx in healthcare settings—and I am imagining a not-distant future where the software has already been “trained” by human physicians and acts as a second or co-diagnostician—could easily promote in patients and practitioners the notion that diagnostic decisions are made based on objective observations and tests, and that the results/decisions are ‘perspective-free’ in the sense that they lack bias.  If crowdsourcing technologies in healthcare become ubiquitous, including for training medical practitioners and providing them with continued feedback and practice once they are working, then health practitioners will have the opportunity to acknowledge their co-dependence on each other, on patient narrative, and on machines and software.

The following serves as an itinerary for some of the content I will be exploring through this forum in the coming months and year, but it is not exhaustive. My regular installments (roughly monthly) could, however, become rather interactive; I am hopeful that some readers will reach out to discuss collaborative opportunities and share their insights.

Guiding Questions and Topics

How, and in which contexts, should technologies like HumanDx that draw upon “collective intelligence” be deployed in healthcare settings? Responses to this normative question might rely on a number of descriptive, empirical questions about quality of patient care, diagnostic accuracy, and overall collaboration amongst practitioners when making diagnoses.

How might patients be affected when they see that their health practitioners, in making diagnoses, are drawing on banks of information collected and scrutinized by other professionals? By the by, health practitioners already search the internet during some clinical visits, though sometimes surreptitiously. What would happen if the practitioner turned the monitor toward the patient so that the patient could see what the practitioner is searching for, how the search is being conducted, and what actions follow from the results?

Could widespread adoption of “collective intelligence” technologies improve diagnostic accuracy and encourage collaboration among health practitioners, so that they feel increasingly comfortable seeking advice and help from other practitioners and devices?

Would increasing reliance on “others”—machines, patients, practitioners, software—decrease instances of epistemic injustice (Fricker 2007)? Would health practitioners be more inclined to listen to and hear their patients’ stories if they see repeated evidence that relying on “others” improves performance, diagnostic accuracy, patient surveys, etc.?

The Human Diagnosis Project serves as an opportunity to explore how the search for, and distribution of, knowledge ought to be organized. In other words, HumanDx provides social epistemologists a path to investigate and, potentially, to influence the use of medical technologies that rely on “collective intelligence.” Below are three potential lines of inquiry that consider just the software itself and that could anchor a module in an advanced undergraduate or graduate seminar:

1. What are the limits of what software can teach people? How does HumanDx approach these limits?

2. How do patients and practitioners benefit, and suffer, from implementation of a technology like HumanDx?

3. Ought there to be limits on when HumanDx and similar technologies that rely on “collective intelligence” may operate, i.e., only in certain circumstances, in any situation, etc.?

Contact details: William Davis, California Northstate University, william.davis@cnsu.edu

References

Adler, Rolf. 2009. “Engel’s Biopsychosocial Model Is Still Relevant Today.” Journal of Psychosomatic Research 67: 607-611.

Dhaliwal, Gurpreet. 2013. “Known Unknowns and Unknown Unknowns at the Point of Care.” Journal of the American Medical Association Internal Medicine 173 (21): 1959-61.

Engel, George. 1977. “The Need for a New Medical Model: A Challenge for Biomedicine.” Science 196 (4286): 129-136.

Fox, Susannah, and Maeve Duggan. 2013. “Health Online 2013.” Pew Research Center’s Internet and American Life Project. http://pewinternet.org/Reports/2013/Health-online.aspx.

Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.

Fuller, Steve. 2002 [1988]. Social Epistemology. Bloomington, IN: Indiana University Press.

Institute of Medicine. 2000. To Err Is Human: Building a Safer Health System. Washington, DC: The National Academies Press. https://doi.org/10.17226/9728.

Leape, Lucian, Troyen Brennan, Nan Laird, Ann Lawthers, et al. 1991. “The Nature of Adverse Events in Hospitalized Patients—Results of the Harvard Medical Practice Study II.” New England Journal of Medicine 324: 377-384.

Mamede, Silvia, Henk Schmidt, and Remy Rikers. 2006. “Diagnostic Errors and Reflective Practice in Medicine.” Journal of Evaluation in Clinical Practice 13: 138-145.

Meyer, Ashley, Velma Payne, Derek Meeks, Radha Rao, and Hardeep Singh. 2013. “Physicians’ Diagnostic Accuracy, Confidence, and Resource Requests: A Vignette Study.” Journal of the American Medical Association Internal Medicine 173 (21): 1952-1959.

Montano, Alexandria. 2016. “Medicine 2.0: Have We Gone Too Far?” Journal of Healthcare Law and Policy 19: 149-88.

National Academies of Sciences, Engineering, and Medicine. 2015. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press. https://doi.org/10.17226/21794.

Wazny, Kerri. 2016. “‘Crowdsourcing’ Ten Years in: A Review.” Journal of Global Health 7 (2): 1-13.


