On Justin Biddle’s ‘Lessons from the Vioxx Debacle,’ Julian Reiss and Sarah Wieten

Author Information: Julian Reiss, Durham University, julian.reiss@durham.ac.uk; Sarah Wieten, Durham University, wietens@gmail.com

Reiss, Julian and Sarah Wieten. “On Justin Biddle’s ‘Lessons from the Vioxx Debacle’.” Social Epistemology Review and Reply Collective 4, no. 5 (2015): 20-22.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-24M

Justin Biddle’s (2007) article “Lessons from the Vioxx Debacle: What the Privatization of Science Can Teach Us About Social Epistemology” is one of the most highly regarded in this journal, with a high rate of citation. The article raised the alarm about the possible negative consequences of the increasing privatization of scientific research, and issued a call for epistemologists to attend seriously to the particularities of the fields they wished to characterize. This call was leveled specifically at philosophers of science such as Kitcher and Longino who, according to Biddle, were too interested in making their claims generalizable to all scientific disciplines to say anything relevant to any particular discipline. Biddle writes of their claims,

The first problem with the proposals of both Kitcher and Longino is their overly general character. Both philosophers attempt to articulate purely general accounts of the organization of research—that is, accounts that apply to all areas of scientific research. Yet, there is little reason to believe that a completely general account can be successful… Just as philosophers of science, since Arthur Fine’s 1988 Presidential Address to the Philosophy of Science Association, have gone ‘back to the laboratory’ and begun to give up the quest for purely general explications of fundamental methodological concepts (Fine 1988), so should social epistemologists focus upon specific areas of science and attempt to determine, on a case by case basis, how these particular areas should be structured … The second problem is their overly abstract character. Both of these philosophers intend for their ideals to be relevant to actual scientific communities … neither, however, draws significantly upon concrete discussions of the ways in which scientific communities are actually organized. If philosophers of science are to provide helpful advice regarding the organization of research, then their proposals should be well-informed by examinations of how science is actually structured (23-24).

However, it may be that Biddle’s own suggestion remains somewhat general and abstract. We suggest that his solution does not solve the problem he raises any more satisfactorily than the proposals he critiques, especially given what he has to say about the importance of power relations in drug development. Describing his solution to the problem of deciding questions of drug efficacy, he writes:

Under this system, two groups of advocates would argue before a panel of judges over such questions as whether—and under what conditions—a drug should be allowed on the market and whether a drug that is already on the market should remain so. One set of advocates would consist of industry or industry-sponsored scientists who would argue on behalf of the pharmaceutical company. The other set would consist of scientists who receive no funding from pharmaceutical companies; these advocates would argue on behalf of the public that, for example, a given drug is sufficiently dangerous that it should be taken off the market. The panel of judges could consist, for example, of FDA or University scientists who are independent of any industry that might have a stake in the outcome of the proceedings (34).

While we are sympathetic in principle both to Biddle’s approach and to his specific recommendation, we want both to ask whether he takes his ‘philosophy of science in practice’ approach far enough and to challenge his proposal of an adversarial system for drug approval. We will argue that the adversarial system, while probably an improvement over the status quo, is unlikely to work well, given the realities of contemporary biomedical research and the incentive structure in which both sets of advocates would find themselves.

We agree that “more and more pharmaceutical researchers are better viewed as advocates than as disinterested evaluators of research,” but we do not think that making the distribution of interests explicit would be enough to solve the epistemic problem (34). Normally, under an adversarial system, a decision-making body does not collect any evidence on its own and bases its decision solely on the evidence submitted by the parties. Moreover, in an adversarial system the decision-making process typically takes place only after all investigations that would reasonably be undertaken have concluded.

In such a system, both parties have incentives to produce, select, and present evidence that promotes their interests, and to ignore or suppress discordant evidence. Such a system would clearly lead to adverse results unless the production, selection, and presentation of evidence were heavily regulated (as it is, for instance, in criminal law). If representatives of the pharmaceutical industry had free rein over the production, selection, and presentation of evidence, they might, for instance, (continue to?) give inadequate doses of alternative treatments to patients in the control group, (continue to?) manipulate evidence about side effects, (continue to?) present data from selected trial periods, and (continue to?) interpret results in the most favorable light. What could representatives of the public do?

Short of running their own clinical trials, they could only attempt to poke holes in the pharmaceutical industry’s body of evidence; and given the disparity in resources available to the two groups, it is unlikely that the public’s representatives would be able to run a parallel trial for each industry trial. However, no matter what the public’s representatives criticize, their criticisms would not help to reveal the truth about the safety and efficacy of the new treatment at hand. We would not learn how the treatment would have performed had adequate doses of the alternative been given, what screening for other side effects would have revealed, or what the long-term performance of the treatment would have been. At best we could hope that the pharmaceutical industry anticipates such objections in a way that precludes the worst kinds of bad practice.

Nevertheless, it remains the case that in such a system bad epistemic practice is not only encouraged but to some extent morally sanctioned: the pharmaceutical industry is not expected to behave in epistemically optimal ways, just as the accused is not expected to incriminate him- or herself in a criminal trial.

Moreover, it does not seem likely that regulation would eliminate the problem. Clinical trials are based on countless judgments, for instance concerning eligibility, the adequacy of the alternative treatment and its dosage, the length of the trial, the appropriateness of outcome measures, and so on. There is no way to formulate a priori guidelines that are both sharp enough to make a difference to practice and loose enough to accommodate case-specific differences between treatments, populations, and other aspects of the trial.

If the representatives of the public could produce their own evidence, a different set of problems would arise. If they, too, interpret their role as that of an interested advocate (for instance, one whose goal is the minimization of adverse health consequences), they would have an incentive to produce, select, and present evidence in such a way that this cause is promoted. We would end up with two incommensurable narratives, neither of which would be likely to be true. Nor would there be a straightforward process for amalgamating the discordant evidence (see, for instance, Stegenga 2012).

If, on the other hand, the representatives of the public have a mixed interest in maximizing the development of new effective therapies and minimizing adverse side effects, and design their trials accordingly, one would have to ask why the pharmaceutical industry should continue to supply evidence that is known to be likely biased. Still, a question would arise about how to finance these independently conducted trials (for some proposals see, for instance, Brown 2006 and Reiss 2010).

Biddle’s article is to be lauded for bringing such highly specific logistics to the attention of a philosophy audience. Traditionally, the philosopher is required only to give an abstract outline of a practice, and the practitioners themselves would fill in the details given the particularities of their field. But given Biddle’s success in convincing this community of the importance of investigating and reimagining systems for evaluating evidence according to the particularities of the subject matter at hand, such logistics can no longer be ignored.

References

Biddle, Justin. “Lessons from the Vioxx Debacle: What the Privatization of Science Can Teach Us About Social Epistemology.” Social Epistemology 21, no. 1 (2007): 21-39. doi: 10.1080/02691720601125472.

Brown, James R. “Regulation and Research: Approaches to Market Failures in Medicine.” Paper presented at The Commerce and Politics of Science: An International Conference, Notre Dame, September 21-24, 2006.

Reiss, Julian. “In Favour of a Millian Proposal to Reform Biomedical Research.” Synthese 177, no. 3 (2010): 427-447. doi: 10.1007/s11229-010-9790-7.

Stegenga, Jacob. “Rerum Concordia Discors: Robustness and Discordant Multimodal Evidence.” In Characterizing the Robustness of Science: After the Practice Turn in Philosophy of Science, edited by Léna Soler, Emiliano Trizio, Thomas Nickles, and William Wimsatt, 207-226. Dordrecht: Springer, 2012.


