No Cause for Epistemic Alarm: Radically Collaborative Science, Knowledge and Authorship, Søren Harnow Klausen

SERRC —  March 14, 2017

Author Information: Søren Harnow Klausen, University of Southern Denmark,

Klausen, Søren Harnow. “No Cause for Epistemic Alarm: Radically Collaborative Science, Knowledge and Authorship.” Social Epistemology Review and Reply Collective 6, no. 3 (2017): 38-61.

The PDF of the article gives specific page numbers. Shortlink:

Image credit: stop that pigeon!, via flickr


New forms of radical collaboration—notably “big science,” multi-authorship and academic ghostwriting—have brought renewed attention to the social nature of science. They have been thought to raise new and pressing epistemological problems, especially because they appear to have put in jeopardy the transparency, accountability and responsibility associated with traditional scientific practice. Against this worried stance, I argue that the new practices can be adequately accounted for within a standard epistemological framework. While radical collaboration may carry serious practical problems and risks, and requires critical attention to the way science is organized and communicated, it raises no fundamentally new epistemological problems. It may even serve as an example of a less restrained and more fruitful, albeit calculatedly risky, mode of conduct that could enhance scientific creativity.

Science is a collaborative enterprise. It is arguably becoming ever more collaborative. A number of contemporary trends seem to support such a diagnosis. There is, first, the rise of big science, that is, of large-scale, infrastructure-dependent research, epitomized by high-energy physics or the Human Genome Project. Even in fields still dominated by smaller-scale science, research has become increasingly collective. The research group has long been recognized as the fundamental unit of scientific knowledge production,[1] and global and regional research networks are gaining importance.[2]

A further significant manifestation of the trend towards increased collaboration is multi-authorship. The average number of authors per paper is growing steadily, with some fields now turning out papers with several hundreds or even thousands of names on the author list. Many of the persons named may not have done any authoring in the traditional sense, but appear on the byline due to their contributions as fundraisers, managers, project partners or engineers, and some may have been granted so-called honorary or gratuitous authorship. And although bylines are getting crowded, not all authors may actually be listed as such, as academic ghostwriting is also becoming widespread.

The recent trends towards free or forced collectivization of science have prompted a new wave of critical inquiry. It has been argued that radically collaborative research (henceforth RC) raises a new kind of epistemic problem, because it has put in jeopardy the transparency, accountability and responsibility associated with traditional scientific practice. When there is no centre of command, when epistemic labour is distributed widely over a seemingly uncoordinated mass of people, it not only gives rise to moral and political concerns or engineering challenges, but calls into doubt whether the activities in question can count as scientific knowledge production at all.

Worries like these have been raised by a group of philosophers of science and cognition whom I shall refer to as the Georgetown Alarmists, consisting of Bryce Huebner, Rebecca Kukla and Eric Winsberg.[3] In a series of papers written jointly or individually, the Georgetown Alarmists argue that we are facing not only problems of scale, but problems of a whole new quality.[4]

Against this I will argue that although multi-authorship and collectivization are obviously trends that call for critical attention, they do not give rise to any significantly new problems. In particular, they should not be a cause for epistemic alarm. There is plenty of reason for ethical and political concerns about how big science is conducted—but that is a different issue (and, again, it can be doubted whether there is anything inherently problematic about big science, even when measured by these non-epistemic standards). More traditional forms of science and scientific authorship exhibit the same basic features. So if there is a problem, it is neither new nor special to large-scale collaborative research. Moreover, I will argue that traditional mainstream epistemology has all the resources needed to handle the new cases of collaborative science.

The Case for Epistemic Alarm

The reasoning of the Georgetown Alarmists (hereafter abbreviated GA) can be summarized as follows (it should not be understood as a single argument, but rather a set of more or less interrelated theses).[5]

i) Genuine authorship requires accountability (being able to justify and vouch for the truth of the claims made in one’s publication)

ii) Genuine group authorship is possible, but requires a unified and coherent group. It requires that each author is accountable for all the claims made in the publication, or that each author knows which collaborator is responsible for which claims, or that at least one member of the group retains centralized control over the research process[6]

iii) Genuine authorship is more than a purely institutional status; it must represent a “specific form” of epistemic labor[7]

iv) Authorship in RC does not meet the criteria for genuine authorship (since neither i) nor ii) is fulfilled). Radical collaboration leads to authorless publications[8]

v) Epistemic responsibility requires accountability[9]

vi) Knowledge requires epistemic responsibility[10] or accountability[11]

vii) Radical collaborations yield a fundamental epistemic problem rather than a mere engineering problem[12]; they lead to a lack or loss of scientific knowledge

To put it very briefly: Radical collaboration leads to authorless publications, which in turn lead to a loss of knowledge. One way of construing the position of GA on authorship is to say that they accept the poststructuralist “death of the author” view, famously expounded by Barthes[13] and Foucault,[14] as an account of radically collaborative authorship, but reject it as an account of traditional authorship—and that they take traditional authorship to be the normatively superior notion of authorship, the notion of genuine authorship. Contrary to the poststructuralists, they do not welcome the death of the author.

It must be said that although the GA appear to be conservative or “traditionalists” in some of their attitudes toward science, they do have an accurate and realistic understanding of contemporary scientific practice, and they can hardly be accused of being Luddites. They take the recent trends to be far from surprising.[15] And not only do they recognize that a return to smaller-scale formats is practically impossible; they agree that it would hardly be desirable.[16] Instead they seem to call for a new framework for assessing and regulating collaborative research processes. Still, their sketchy suggestions for what has to be done do point towards a relatively tight system of control and more rigorous demands for transparency and accountability, which I fear could hamper scientific progress.

I find it difficult to render GA’s reasoning in a balanced way, since it strikes me as relying on a series of questionable assumptions. But I think at least the following can be said in favour of their conclusions: Collaborative science is an extremely messy affair. It exhibits little transparency or personal accountability. It can surely cause some initial worry to see how bits of evidence and interpretation are tossed around, and how little individual researchers understand, at least in some cases, of what their collaborators are doing or even of the overall process of which they are part.

Moreover, it is a widespread assumption in epistemology that knowledge requires a subject, and that a subject needs to be both sufficiently unified—i.e. have an integrated and coherent mental architecture—and have some kind of reflective access to its own mental states and processes. It is, furthermore, common to expect the process of scientific knowledge-production to exhibit an extraordinarily high degree of transparency, reflectivity, unity and systematic coherence. In order to rebut GA’s claims, I have to show these assumptions, which cannot be denied a certain naturalness or initial appeal, to be either wrong or irrelevant.

It may be objected that my attribution of a distinctive, and joint, ”alarmist” position to Kukla, Huebner and Winsberg is an untenable construction. Thus it could be noted that only Huebner has been directly concerned with group knowledge.[17] Now my main interest is of course not exegesis or contemporary intellectual history. It suffices that the views discussed are typical, influential and have been voiced or suggested by at least part of GA. I do, however, find ample evidence in the writings of GA that they do hold a distinctive joint position. They do make clearly located epistemic agency a central condition for scientific knowledge.[18] While the term “knowledge” may not surface in all of their writings, they see multi-authorship as the source of an epistemic problem—and since they all adopt a fairly narrow and orthodox conception of the epistemic, it seems fair to assume that this must mean a problem concerning knowledge. Moreover, while they do distinguish authorship from knowing (as one should; an author can of course be wrong!), they come very close to claiming, and clearly do suggest, that authorship is a necessary condition for knowledge in the cases of radical collaboration they consider.

For example, Kukla writes, following up immediately on her claim that the traditional author in collaborative research is dead, that “[i]n radically distributed, collaborative research, there is no one who has a cognitive state instantiating a full justification for the claims that make it to print.”[19] This sounds very much like a claim that inasmuch as there is no author in the traditional sense, the standard conditions for knowledge are not fulfilled. Moreover, the connection between accountability and scientific knowledge production is posited by Kukla, Huebner and Winsberg alike; and they analyse the alleged lack of epistemic accountability in radically collaborative research in terms of a failure to meet the conditions for group authorship.[20] At any rate, should it turn out, contrary to these strong indications, that GA do not posit any necessary connection between authorship and knowledge, then they owe us an explanation of the sort of pressing epistemic problem they do, quite persistently, claim has been raised by multi-authorship.

Dismantling the Case for Epistemic Alarm

The structure of GA’s argument makes several different lines of response possible. One might (A) accept that publications in radically collaborative science are authorless, but reject the connection between authoring and knowledge (v-vii). Or one might (B) accept this connection, but insist that the requirements for authorship can be met. This in turn can be done either by trying (B1) to show that collaborative science actually meets the requirements laid down by GA (i-iii), or by arguing (B2) that these requirements are too strong. In fact I think that both (A) and (B2) can be developed quite convincingly. (B1) appears less promising, since GA are obviously right about the empirical facts, i.e. the messiness of collaborative research (though even here there is room for debate, as we shall see).

There is also the possibility of (C) accepting the whole reasoning up until vi)—agreeing that new forms of collaboration lead to a loss of knowledge, but denying that this makes for an epistemic crisis. One might hold that knowledge is not the most relevant epistemic desideratum, arguing that the production of reliable information may be valuable enough, perhaps that such information feeds into a larger societal process that is likely to lead to a gain in significant knowledge in the long run. I am less attracted to this line of reply, which would leave intact GA’s spectacular and apparently alarming claim that large parts of contemporary science are unable to directly produce knowledge. But it provides a relevant fall-back position, because some might want to follow GA in upholding some relatively strong internalist requirements on knowledge.[21]

Now to the arguments. I will proceed in two steps, first considering the requirements for authorship and then the requirements for knowledge. I shall argue that the production of publications in radically collaborative research may still qualify as authorship, if this is understood in a less demanding and more realistic way. I shall then further argue that at least the kind of authorship favoured by GA is not a necessary condition for knowledge.

Forms and Conditions of Authorship

GA reject the possibility that the publication practices associated with radical collaboration can qualify as group authorship. This appears to fit well with the received view of such authorship. It is common to require of a group that there must be some relation of mutual recognition among its members. Group membership has also been taken to entail reflexivity—i.e. each and every member of a group must view herself as a member of the group in question.[22] Last, but not least, it has been assumed that for a group to function as an epistemic agent, it must exhibit joint attention, i.e. all the members must attend to—and take a stand on—a common target proposition or set of propositions,[23] or a common body of evidence.[24] In line with these views, Livingston has proposed that genuine joint authorship requires a significant degree of “mutual knowledge” and “reciprocal monitoring and assistance.”[25]

Cases of RC do not meet these criteria. But it should be noted that in spite of their almost axiomatic status among philosophers of collective agency, the strict conditions for group membership just outlined appear rather idiosyncratic. They seem to limit the domain of collective epistemic agency quite substantially. Many groups have a much looser structure; and it is debatable whether even the paradigmatic cases of small and tightly knit groups really meet the proposed criteria. Moreover, even the otherwise strict criteria imposed by theories of collective agency do not necessarily add up to an accountability requirement of the sort espoused by GA. With the possible exception of Mathiesen’s[26] account of groups with explicitly epistemic goals,[27] such theories do not demand that the group members should be able to justify the beliefs to which they commit themselves collectively.

Outside the narrow field of the philosophy of social agency, groups have been defined less demandingly. In organization theory groups are individuated with reference to their tasks.[28] I have myself suggested that we delimit an epistemic collective by taking it to consist of all and only those members who contribute significantly to an epistemic task—a task which does not need to be recognized as such by all, or even any, of them.[29] I am aware that such an inclusive notion of an epistemic collective is controversial. It makes it difficult to draw a clear boundary between the members of the group and those with whom the group is merely interacting; a problem that becomes especially pressing in the absence of a clear notion of what should count as a significant epistemic contribution. In any case, GA would no doubt insist that an epistemic collective in this more inclusive sense is unable to function as a genuine author.

Still, the appropriateness of the strict requirements on group authorship is put in serious doubt by the fact that even individual subjects are hardly able to meet them. It seems unlikely that even so-called individual authors retain a high degree of “centralized control” over the research processes documented in their publications.

For one thing, researchers have to depend extensively on the work of other researchers, often without knowing very much about its epistemic merits. A famous example has been given by Hardwig,[30] as part of his case for the claim that scientists generally have to rely on what he describes as blind trust. Hardwig pointed out that even though the Bieberbach conjecture is considered to have been proven by de Branges in 1985, no single mathematician, including de Branges himself, has ever had sufficient justification for each step in the proof. De Branges relied on computer verification by Gautschi, and especially on work of Askey, who had the specialized knowledge of hypergeometric functions which he himself lacked. Askey, on the other hand, did not know enough complex analysis to complete or verify the proof himself. Though de Branges’ original 1985 paper seems to be a typical case of classical authorship, and even the result of an individual research project, the knowledge formation process behind it turns out to have been highly collective, complex, and far from completely transparent.

It may be said that the broad accountability requirement laid down by GA[31] is still met by this example, since de Branges at least knew which contributor was responsible for which claims. He did retain some kind of centralized control, and probably also had a fairly clear idea of the sort of work Askey was good at, since he had some knowledge of hypergeometric functions himself. But the example shows that even paradigm examples of centralized control are often indirect and partly blind, based on a more or less superficial identification of collaborators’ competences and the relevance of their contributions. Other examples of scientific knowledge production can be expected to be further out on the continuum between completely controlled and completely uncontrolled processes. The celebrated modern evolutionary synthesis is a complex network of results and hypotheses from, inter alia, selectionism, genetics, statistics, paleontology, botany, cytology, ecology and geology. While it is again likely that the key proponents of this view have been able to gauge the overall significance and reliability of the different contributions, it is also clear that they have not been able to epistemically penetrate the whole system of interdependent assumptions. To take one of GA’s favourite themes, they have not, for example, been able to assess the underlying inductive risk-taking decisions.[32]

It is an open question to what extent centralized control and monitoring is aimed at as a regulative ideal, and to what extent it is actually achieved. Authors of literary fiction frequently employ deliberate strategies for reducing the impact of authorial metacognition, allowing for a freer play of ideas.[33] Scientific authors seldom do something like that. Standard formats for scientific papers function as a means for forcing the author to say all and only what ought to be said, to lay all her cards on the table and address the key issues directly. There may also be a relevant difference between papers in the natural sciences and the humanities, inasmuch as the former usually report the results of independent research processes, whereas in the case of at least some of the latter, the writing itself is an integral part of the research process, making it more open-ended and less subject to complete authorial control.

Yet even in the case of standard natural science papers, complete transparency and metacognitive control is at most a regulative ideal. Idioms are borrowed uncritically; references and quotations are made with incomplete knowledge and understanding of the work cited; formulas and rules of inference are applied blindly, and so on. Textbook accounts of the history of science give an impression of a smooth process of almost perfect dissemination; the original discovery was the hard bit, but afterwards it was easy to pick up for other scientists. Yet closer scrutiny reveals that in many cases, scientists assented to and built on theories they did not yet fully understand. A famous example is the reception of Newton’s Principia, which was very quickly recognized as immensely important, but appeared inaccessible and almost incomprehensible to his contemporaries, even the most capable of whom were only able to achieve a partial understanding. Locke is said to have got the gist of Principia from Huygens, who reassured him that the mathematics and mechanics were sound; and many physicists of his time likewise had to rely on authority and their overall estimation of the credibility of the work.[34]

I am not saying that we should give up the very idea of authorial control and authority, only that we should understand these notions more modestly and realistically. By doing so, we may avoid buying into the death-of-the-author-thesis even in the case of RC. Like other texts, multi-authored papers do have real authors, who are, to a certain degree, responsible for their content. In most cases, pains have been taken to establish and follow procedures that regulate the writing process. Rules are laid down as to who should and who should not be included among the authors. Approval is required from the leaders of different subgroups. Not least, measures are taken for imposing coherence on the process and ensuring that a unified result is arrived at and communicated unambiguously.[35] The individual contributors can all be assumed to know at least the general kind and rough outline of the project in which they are involved. This seems similar to cases from non-scientific text-production in which authors willingly and knowingly engage in processes of interaction that are likely to lead to results that they may not have specifically intended as such.[36]

Hence there is plenty of unifying metacognition present in cases of RC, even if it can be questioned how far it is relevant. In the case of so-called individual authorship, it apparently suffices that a text can be seen as depending on, and informed by, the intentions of authors. There is no need to require the presence of overarching, strategic intentions—though in most actual cases, some such intentions have clearly been at work. Even among conservative defenders of authorial authority, it is widely acknowledged that meaning-constitutive intentions can be unconscious, in the sense that the author need not herself be aware of having them.[37] Traditional intentionalists about literary meaning may have emphasized first-order intentions too strongly and given too little emphasis to the role of authorial metacognition.[38] Nevertheless, their view seems plausible enough to serve as a clear indication that authorship in general does not require any particularly comprehensive or effective strategic intentions. What matters is that what the author meant was successfully communicated, not whether it was her intention to communicate it thus.[39]

It may still be argued that collective authorship requires more in terms of metacognition, because in this case the unity of the author subject and the writing process is much more precarious. Individual authors are individuated as particular subjects independently of the writing process, whereas group authors have no such “natural” unity.

There are at least three things to say in reply. First, it is an open question how much “natural unity” there is to an individual subject qua author. Though she may be a distinct human being, the mental subsystems responsible for her production of meaningful text need not be closely integrated; and she may function mostly as a transmitter for external influences. Secondly, for all the undeniable messiness of RC, it is still not impossible that at least the sensible, more or less intuitive metacognition requirements are actually met. The rules and procedures followed may not satisfy the strong transparency required by GA; but they may suffice for ensuring something like the “meshing of plans” and “monitoring” required by Livingston’s analysis of collective authorship.[40] Since joint commitments are generally allowed to be merely implicit, the regulatory policies adopted may also suffice for turning even large and heterogeneous groups of collaborators into something like a plural subject (even if it must then be admitted that such a subject can, in other respects, be disintegrated, and its doings messy and intransparent).

Thirdly, it may be admitted that authorship in radically collaborative science falls short of a certain traditional notion of authorship, but denied that this notion is the relevant one. When it comes to scientific publications, there is a long tradition, much older than the trend towards large-scale collaboration, of using the notion of authorship for other purposes than to indicate the intentional creation of bits of meaningful text. Authorship is used to lay claim to scientific results and apportion credit for ideas and discoveries. This does not mean that scientific authorship is a “purely institutional status.” It functions as a means for declaring who has been predominantly responsible for generating the knowledge presented in a publication. In fact, it meets GA’s requirement (iii) that authorship should represent a “specific form of epistemic labor”; it is just that it is not merely, or primarily, writing labor. It is an open question to what extent new forms of radical collaboration still follow this practice in an epistemically innocuous way. But at least there need not be any problem merely because it departs from the traditional “literary” notion of authorship. It should be added that the “credit-apportioning” notion is not special to scientific publication practice, but is also widely applied even to literature of the more artistic kind. It is always a partly pragmatic decision whether a collaborator and/or strong source of inspiration (be it a muse, an editor or an assistant) should be listed as a co-author or merely mentioned in the acknowledgments.[41]

Loss of Knowledge?

Now to the more fundamental question of the requirements for knowledge. Again, several lines of reply are available. First, consider the assumption that knowledge requires a subject, which is implicit in vii) and connects the worries about authorship with the concern for knowledge. How exactly is this to be understood? I take it to be quite intuitive (though not obviously correct) that knowledge must be realized by a conscious being, or at least a being capable of forming mental states like beliefs.[42] But this does not by itself require any specific degree of mental integration or the presence of metacognitive states.

When thinking about the requirements for group knowledge, we should pay close attention to the kind of psychological requirements that are usually made for individual knowledge, the paradigm case of traditional epistemology. Philosophers do not generally require any general reflective awareness on the part of the subject (and in this they are clearly in line with ordinary ways of thinking about and ascribing knowledge). Nor is it required that a knowing subject have any knowledge of epistemological principles or of the justificatory power of her evidence.[43] It is also regarded as unproblematic that the ingredients of knowledge are distributed among different mental states of the subject, which do not generally embody representations of each other. We should not use a double standard and impose stricter conditions on group knowledge.[44] Hence it seems that a weak group subject could qualify as a bearer of knowledge. If a group of individuals are sufficiently well connected, if they contribute to a common epistemic task, and if they possess the necessary cognitive capacities, so that all the necessary epistemic factors are present among them, then they can be said to know. This is in line with ordinary usage, as we often say things like “biologists knew that traits were passed down from generation to generation” or “the CIA knew that the terrorist group was planning an attack.” We ascribe knowledge to groups that are loosely delineated and within which the relevant epistemic factors are likely to be distributed.[45]

There is nothing inherently externalist about the idea of a weak group subject functioning as a bearer of knowledge. We can attribute knowledge to such a subject in virtue of the reliability of the belief-forming processes it employs, but also by observing that it possesses sufficient evidence and/or acts according to certain principles, e.g. rules of inference. The only kind of internalism that is ruled out is one that requires a high degree of metacognitive access to the evidence and rule-following in question, and central control of the sub-processes. But this is a version of the theory that has little to say for it and is rejected by leading contemporary internalists. There may be good reason to prefer higher-order or centralized control in specific cases, but only inasmuch as it enhances the reliability of the first-order processes.

A sensible internalism may even be compatible with the possibility of knowledge that is not realized by a subject at all. This may sound outlandish, but once it is accepted that the subject as such contributes little to the epistemic status of its mental states, it becomes hard to say why it should be considered necessary for knowledge. Internalist criteria could be fulfilled simply by the presence of sufficient evidence. Of course this evidence would probably have to consist in, or be necessarily related to, states of conscious awareness, which arguably presuppose some minimal form of self-consciousness or subjectivity. But this is different from the notion of a cognitive centre of control that monitors and unifies the individual mental states.

GA defend the need for positing a subject in the stronger sense by arguing that epistemic responsibility is a necessary condition (vi). But this again buys into an implausibly extreme form of internalism, viz. a strongly deontological theory, according to which epistemic justification requires the fulfilment of certain epistemic duties. The implausibility of such a theory is highlighted by the fact that individuals are able to acquire knowledge in very passive ways, as when I come to know that the sun has set by seeing the light fade. This does not seem to depend on any kind of norm-fulfilment or responsibility on my part; I have come to know, regardless of whether I am prepared to defend any claims or act in any particular way. This does not render epistemic responsibility unimportant. Even though I reject group responsibility as a necessary condition for group knowledge, it is likely that well-functioning epistemic collectives do exhibit a significant degree of distributed responsibility.[46] That is, their members will be alert to risks and errors in their sub-domain, and committed to controlling and improving the sub-procedures they employ. As with authorship and subjectivity, there is also room for alternative interpretations of responsibility.

We have thus explored two ways of countering the claims of GA. One can accept the general subject requirement for knowledge, but insist that a subject does not need to meet the strong criteria for group authorship. The authors, or a significant subset of the authors, of papers in high energy physics might be said to know the results presented, even if they fail to constitute a group author. Or one can agree that there is hardly any subject in a significant sense to whom the knowledge can be attributed, but insist that there may be sufficient knowledge around anyhow.

In spite of this, I reckon that many will want to uphold the subject requirement in a relatively strong form. Fortunately, my case against GA does not hang on any controversial thesis on authorship or group knowledge. There is a much less controversial line of reply. It may be admitted that the author, whoever that might be, cannot be the subject of the putative knowledge. Instead it can be argued that knowledge is produced by the collective[47]—and that it is either possessed by some or one of the authors or will be produced in competent readers of the paper. It is natural to describe the cases of radical collaboration as typical social processes of knowledge creation, in which testimony plays a crucial part.

GA also reject this suggestion. Kukla argues that individual collaborators cannot be attributed testimony-based knowledge. For this would have to be based on an assessment of the reliability, in context, of the source of the testimony.[48] And it is precisely such an assessment that, according to GA, cannot be performed in cases of radical collaboration.

Again, the reasoning is based on controversial assumptions. First, GA only consider the—admittedly unrealistic, but also less relevant—possibility of all the authors having testimony-based confidence in all parts of the paper.[49] In contrast, I am suggesting that a network of local testimonial relationships may be sufficient to bind together the whole process and ensure that someone ends up forming the relevant item of knowledge. Secondly, it appears that GA are committed to a strongly reductionist view of testimony. They require that the recipient should have positive reasons for taking the giver of testimony to be a reliable source. Such a view has been rejected by a large number of philosophers, who have instead taken testimony to be a fundamental source of justification and so advocated a non-reductionist or “direct” view.[50] But even though non-reductionism provides an easy, and not altogether implausible, way out, I think that GA are right in requiring some sort of vindication of the testimonial practices in question. This seems especially pertinent in the case of scientific collaboration. The thought of scientists just passing on and taking up bits of putative evidence does seem discomforting. So the question is rather what, and how much, it takes for an individual collaborator to acquire knowledge through testimony, and how far this is exemplified in cases of radical collaboration.

As mentioned above, Hardwig argued that scientists have to rely on blind trust. But this may be a somewhat exaggerated description of the actual practice.[51] Trustworthiness is not assigned randomly. At least in scientific collaboration, it is indeed based on some kind of assessment, even if the assessment is often done quickly and almost instinctively. Assignments of trustworthiness are made on the basis of institutional status, known track record, indications of field of expertise, meta-knowledge about the state and potential of the research domain and approach in question, etc. They are semi-blind: Blind as to the internal epistemic merits of the data or piece of theorizing in question, inasmuch as the recipient would not generally be able to generate the knowledge herself (otherwise the testimony would be more or less redundant)—but not completely blind, as they are sensitive to external features of the contributions, the subject matter, general methodology employed and the qualifications of the contributor.

Moreover, instead of requiring of individual collaborators that they themselves assess the reliability of the source, it might be sufficient that the sources are sufficiently reliable. This could be the case in a scientific community, the members of which can generally be expected to make contributions that are competent and relevant (what Hardwig calls a “climate of trust”). GA are sceptical of such a view, because it substitutes mere reliability for accountability.[52] But even if they were right that such accountability is necessary for the collaborators to qualify as a group author or subject of collective knowledge, it is hard to see why it should be necessary for individual collaborators to acquire knowledge by testimony. Besides, a wide range of intermediate positions are available, which retain smaller or larger internalist elements. Hardwig, for one, did not opt for mere reliability. By taking trust to be a necessary requirement for testimonial knowledge, he demanded that an individual scientist must hold certain warranted beliefs about the testifier.

GA may be right that there is a special problem with interdisciplinarity.[53] It might be feared that even though a source is reliable in its own domain, its reliability when combined with, or applied to, another domain, is not sufficient, or at least something that cannot be taken for granted. I think the best answer to this worry is simply to admit that interdisciplinary science is generally more challenging and risky, but that it is justified by its potential gains.[54] It should be added, however, that much of the same risk pertains to so-called mono-disciplinary science as well, since this does also regularly involve the transfer and application of theories or findings from other domains, be it other parts of the same science or neighbouring sciences (e.g. optics in astronomy, computer or information science in cell biology or sociology in the study of religion). It is one thing to know a theory or a set of data and another to know it to be applicable or significant to some other field. Moreover, despite the lack of transparency and centralized expertise, we should not underestimate the capacity of individual collaborators to understand and even assess contributions from other fields. Goldman argues that even laymen can be in a position to evaluate the trustworthiness of experts.[55]

Collins and Evans[56] point out that there is a kind of expertise that falls short of “full-blown practical immersion” in a field, but still involves mastery of its language—what they call “interactional expertise,” and see as especially important for forging collaborative relationships. Theoretical physicists do usually have some, and sometimes considerable, understanding of the contributions of the experimental physicists, and vice versa. Even research managers, though they may be managers first and researchers second, still know considerably more than the layman about the different fields involved in radical collaboration, their compatibility and potential contributions. And fields like high-energy physics or genetics are, in spite of the huge scale of the research activities and the sophisticated technology involved, probably not the most extreme and challenging forms of interdisciplinarity, as they draw on a common pool of compatible and already partly integrated theories and results from the natural sciences. In sum, it seems likely that less is needed for producing knowledge by radical collaboration than required by GA, but also that more than they assume is actually at hand.

Risks, Costs and Benefits

Let us now finally consider the costs and benefits of attempts to restore transparency and accountability in RC. I think there is good reason to believe that the introduction of stricter control procedures will be counter-productive, though it is, admittedly, an open empirical question to what extent this is actually the case.[57]

First, we can ask if there is evidence that lack of transparency and responsibility has led to significant epistemic loss. This does not seem to be the case. There are some spectacular cases of scientific fraud that might have been prevented by stricter control regimes. But even if still more cases have gone undetected, there is no need to assume that they have led to any general loss of quality or reliability of science.[58]

There are, of course, more subtle forms of negative influence that should also be taken into account. Even relatively few cases of outright fraud may suffice to give the public an unfavourable impression of science, which could in turn lead to waning support. Lack of accountability and control may create false images of state-of-the-art research, making some fields or paradigms appear more important or robust than they actually are. This could in turn lead to false research priorities and even a skewing of subsequent research, if certain approaches or theoretical paradigms that are actually less well founded or relevant come to lead the way for others. There is also a risk that the influence of commercial interests on research might go undetected.

It is, however, debatable how much of this should actually be considered an epistemic loss. There is a fairly well-documented, suspiciously strong correlation between private funding of research and industry-supportive conclusions.[59] Yet it is doubtful how far this indicates that there is something fundamentally wrong when seen from a narrow epistemic standpoint. It is most likely that commercial interests cause researchers to ask certain questions and refrain from asking others, which might have led to less positive results. I am myself in favour of a highly inclusive notion of the epistemic, and so quite prepared to admit that a negative influence on research priorities, problem selection, relevance or uptake of research results, or even the future of science in general, could be considered an epistemic problem. But according to the more exclusive notion favoured by GA, these must be seen as problems of another, more external, kind. A researcher could be perfectly accountable (and act epistemically responsibly) while also acting badly in a wider sense, by e.g. deliberately avoiding carrying out experiments that might detect negative side-effects of drugs or food products.

In any case, I think the problems just outlined are best addressed by counter-measures on a very general level, rather than by meddling with the internal features of collaborative research processes. For example, decisions about research priorities may be subjected to democratic control (Kitcher 2001; 2011).[60] Privately funded research may be checked and counter-balanced by publicly funded research, some of which could be directed specifically at issues of societal concern, so as to serve the wider interests of citizens. Moreover, there seem to be significant risks of the same sort even if one keeps to traditional forms of knowledge production. Quite generally, it seems plausible to assume that the drive towards collaborative research makes it more likely, all things considered, that interests, biases and fraud will be detected. Surely it is very far from certain that they will actually be detected in any specific case. Surely the messy, distributed character of large-scale research may even provide researchers with special opportunities to hide less laudable aspects of their conduct. But this should be compared with traditional individual or small-scale research, where there is hardly any external control prior to publication.

Now to the risks of imposing stricter control on collaborative research processes. It goes without saying that efficient control procedures are costly in terms of time, manpower and other resources. There is a special risk that collaborative science could become exaggeratedly slow and tedious, because the control, which has otherwise been “outsourced,” i.e. left to subsequent debate and testing by other scientists, now has to be done prior to publication. Traditionally, a single scientist has been able to come up with a bold conjecture, or publish apparent findings, with relatively little critical resistance to be surmounted in the first instance, thus quickly bringing it up for consideration and making it into a publicly available source of inspiration. I do not think that a completely free and extremely diverse market of scientific ideas is epistemically superior. Gatekeeping has its merits, as false or insufficiently justified beliefs can be difficult to weed out once they have become socially transmitted.[61] Noise is also a serious problem, and a reason for not allowing too much diversity, especially in an age that is already marked by an explosion in publication output and lack of perspicuity. On the other hand, there is hardly any doubt that a relatively diversified science, and a relatively quick process of scientific communication, are important goods that might justify loosening, or at least not strengthening, some of the prior control. The scientific community will not benefit from a situation in which significant findings are not published until a tedious negotiation process has been completed, nor from an extensive mainstreaming of research output and approaches. Peer review has developed into a substantially delaying factor, and it should come as no surprise that quicker—and dirtier—publication formats are emerging (as they have in fact existed in some fields for a long time, cf. the “letters” genre), e.g. journals like PLOS ONE, which eliminate subjective assessments of significance or scope from the review process, open or “dynamic” peer review, etc.

In fact, it seems that actual large-scale collaboration is already marked by a tendency to introduce internal control mechanisms that slow down publication and make the output less diverse. The authorship protocols in high-energy physics serve not just to ensure the quality of research, but also to streamline and unify publications stemming from a particular research project.[62] Hence it seems again that actual practice may come closer to meeting the requirements of GA; but it is questionable whether this is really desirable, at least when one is concerned with epistemic efficacy. It is certainly possible that internal streamlining can be epistemically beneficial, inasmuch as it brings out the main significance of findings and confers upon them an authority that ensures ready uptake by the scientific community. There is a real risk that allowing for premature publication or ambiguous statements can hinder recognition of really important findings. Still, there is surely a limit to the amount of streamlining and prior control that can be beneficial, all things considered. In any case, it is interesting to note that the requirements of GA, though allegedly motivated by a concern for transparency, could reduce general transparency: in order to achieve mutual consent and understanding among scientific collaborators, moving towards the ideal of group authorship (in the strong sense), it can be necessary to restrict and delay communication with the outside world, including scientific peers.

Of course it is debatable whether considerations of speed of scientific progress, fecundity or optimal use of resources are epistemically relevant. On a consequentialist understanding of epistemic normativity they obviously are.[63] On a narrower deontological understanding, as appears to be favoured by GA, they are more likely not to be. But everyone must be somehow concerned with the balance between epistemic gains and other aspects of scientific practices. Feasibility conditions for science policies matter, regardless of whether they are seen as internal or external to scientific knowledge production.

The above discussion shows that the relationship between openness, transparency and responsibility is highly complex. Some of the measures that could enhance responsibility are likely to decrease openness (because results and their significance have to be negotiated internally among the collaborating scientists before being revealed to the larger scientific community). Full openness may not further transparency with regard to the main thrust and significance of joint research; it can also obscure it, causing it to drown in a multitude of statements, interpretations and less relevant details.

Openness (and accountability) is thus not always conducive to our epistemic goals. Communication on the internet is an instructive parallel case. Frost-Arnold has argued persuasively that internet anonymity can enhance both the dissemination of true beliefs and error-detection, as it serves to remove social inhibition and so to ensure that relevant knowledge is disseminated quickly.[64] Research on computer-mediated group discussion likewise indicates that anonymity in discussion increases both the quantity and novelty of ideas shared.[65] The potential value of anonymity is acknowledged by the current practice of double-blind peer review, which is partly justified by the assumption that anonymous reviewers are less likely to be overly polite and will feel free to voice all sorts of potentially relevant criticism, and that reviewers would be even harder to recruit if they knew that their identity could be revealed. This is not to say that the practice of double-blind review, in its contemporary form, is generally superior.[66] There is also evidence of less beneficial effects of anonymity,[67] e.g. loafing.[68] As noted above, the function and value of openness is a complex issue, which probably does not admit of any general answer.

More generally, the role of tacit knowledge in science has long been acknowledged. There is often good reason to make such knowledge explicit as far as it goes. But some knowledge, or parts of it, is best left tacit. Explication is costly and may impair performance.[69] It is not just that codification and communication procedures take time; they can even reduce the competence of individual, and possibly also collective, agents.[70]

Both openness and explication may be defended by an appeal to the importance of reproducibility of studies and results in science. It can be argued that this epistemically important virtue is compromised if there are any “black boxes” or instances of blind trust in a scientific process. For example, probabilistic proofs in mathematics have been criticized for not being transferable. They can be performed over and over again, and so they are, strictly speaking, reproducible—but since they rely on e.g. randomization devices, they cannot be expected to provide exactly the same justification in each instance, as the evidence, like the numbers picked, will differ from case to case.[71] But again, transferability is not an absolute value, only a desideratum that must be balanced against other concerns, and whose own value is arguably conditional on its contribution to the more fundamental goals of truth and error-avoidance. Probabilistic proofs may thus promote the epistemic goals of mathematicians, even if they fall short of being transferable.[72] Moreover, at least from a reliabilist point of view, reproducibility in principle is hardly better than non-reproducibility if it is so rare and difficult to accomplish that actual testing will seldom or never be carried out.

In sum, there is no reason to assume that a reduction in the degree of openness or accountability must necessarily constitute an epistemic problem. Besides, it is even questionable whether radically collaborative science does represent an overall reduction in the degree of openness or accountability as compared to traditional, smaller-scale research.

Scientific Creativity

Another general concern that may go against regimenting scientific collaboration is the undisputed importance of scientific creativity. Reliably identifying conditions of such creativity, which are complex and elusive, has proven highly difficult.[73] But there is ample evidence that interdisciplinarity, and, more generally, diversity and combination of methodological and theoretical approaches, are among the most pervasive features of the processes that are known to have led to significant discoveries.[74] This is further supported by studies in group psychology likewise indicating that diversity is conducive to creativity,[75] though some studies also point to its possible drawbacks, as too many different—or too widely differing—standpoints tend to make mutual understanding and cooperation difficult.[76] Kitcher has argued more abstractly that a diversity of research programs is epistemically beneficial.[77] In a similar vein, Weisberg and Muldoon have demonstrated that scientific “mavericks” are epistemically more productive than “followers.”[78] Zollman has used computer modelling to show that disconnected research teams are more likely to converge upon the right hypothesis than strongly connected networks of scientists, who are more prone to accept initial results that favour the wrong hypothesis.[79]

In spite of all these a priori and a posteriori reasons for assuming diversity, of a certain qualified kind, to be conducive to scientific truth and error avoidance, it would be too much to say that interdisciplinarity or diversity as such is epistemically beneficial tout court. There is reason to assume that more often than not interdisciplinary research, at least of the more radical type, leads to dead ends or at least very meagre results. But the relatively few cases of great success achieved by interdisciplinarity may still suffice to justify it, considering that mono-disciplinary research also yields rapidly diminishing returns and generally has a poor success rate, especially if the goal is defined as the production of significant truths. We have here a case where a process—engaging in interdisciplinary collaboration—may have a reliability significantly below 0.5, yet still count as sufficiently effective, because of its capacity for producing new and relevant truths and the poor record of the alternative processes available.

The concern for creativity might also justify a relaxed stance when it comes to compliance with established standards and methods of particular subfields and their compatibility. It is characteristic of even canonical examples of individual scientific creativity, like Maxwell’s development of the theory of the electromagnetic field, that standards, including theoretical assumptions, have been combined and altered in the process.[80] It is doubtful that typical cases of large-scale collaboration involve such creative twisting of standards. It is more likely that researchers will keep to the usual repertoire of established theories from the natural sciences (and hence the fear that big science could hamper scientific creativity, because of its conformity-enforcing role). But it is still important to remember that there is a trade-off between the conservative and the creative aspects of the scientific process. As Kuhn urged, the tendency to convergent thought must be counterbalanced by a tendency to divergent thought.[81] Additional regulation or insistence that established standards and methods be followed very closely could seriously hamper creativity.

GA may object to the above considerations that they miss the central thrust of their criticism. They do not wish to erect barriers to creativity, or limit the free play or smooth propagation of ideas. They are worried about industry-like conditions being imposed on scientific collaborators, with the risk of minimizing relevant dissent and evading responsibility, and wish to ensure a certain level of critical awareness and dialogue within the scientific collective.

There should be no doubt about GA’s good intentions. And I do see how it can seem inappropriate to associate their view with an almost reactionary attitude, or to defend big science with a concern for creativity and free flow of ideas. But I am afraid that, for all the good intentions, almost any practicable solution to the problem posed by GA is likely to have conservative implications. It is hard to see how one could ensure that channels and procedures will be used for supporting divergent, rather than convergent, thinking. In the absence of any clear idea about how one might regulate more discriminately, in a way that promotes only epistemically beneficial practices, and does not e.g. lead to group-think, holding back of important results or a general slowing down of scientific progress, we are left with the choice between a generally permissive and a more restrictive policy. I am not averse to regulatory measures in general, and even open to the suggestion that policies could be justified that are not just formal but qualitative and content-related, i.e. a kind of “epistemic affirmative action” aimed at boosting particular processes and suppressing others.[82] But I am worried that the costs of imposing general requirements will outweigh the benefits.

A final worry[83] to be considered is this. I have repeatedly implied that there may be problems similar to those highlighted by GA, only they are not genuinely epistemic. To this it could be objected that we ought not to care about the label “epistemic.” But it is not me who is obsessed with epistemic purity. Quite to the contrary, I have contended that if GA are right in their view of knowledge, considered as a conceptual analysis, then we should conclude that knowledge matters less than we have assumed, and not necessarily change our practice. I do, however, see a point in distinguishing between the epistemic aspects of collaborative science in a broad sense of the word and distinctively ethical or political issues. Some of the questions I have considered may be said to be matters of stipulation; but again, the burden is on GA to show that theirs are the relevant concepts to bring to the table—and I have provided reasons for thinking that they are not.


I have argued that the reasoning of GA relies on a whole series of implausibly, or at least controversially, strong assumptions about the nature of authorship, group knowledge, collective subjectivity, knowledge, responsibility and testimony. I have argued tentatively in favour of alternatives, some of which may admittedly be perceived as too radical, or at least equally controversial. But I have also tried to show that more mainstream, or even conservative, epistemological positions allow for a less pessimistic diagnosis of the trend towards radical collaboration, as they do not have any alarmist implications. One can stick to epistemological internalism, but adopt a distributed view of justification and responsibility, and/or acknowledge the possibility that radical collaboration terminates in the production of significant knowledge through testimonial transmission.

The upshot of my discussion is that there is nothing fundamentally or inherently problematic about large-scale collaborative research. In fact it may be seen as merely an (admittedly large and otherwise spectacular) institutional rearrangement; a new way of organizing and delimiting the same types of knowledge creation and dissemination processes that have always been characteristic of science. Experts in different fields and subfields communicate and contribute, more or less (un)knowingly, to the solution of scientific problems, trusting each other to various degrees, depending on their beliefs about the credentials of their peers, on processes of certification etc. If anything, multi-authorship and related practices have contributed to make these messy and decentralised processes more regulated and transparent, for good and for bad.

I have given some reasons for believing that the benefits of more loosely regulated, radically collaborative science may trump the inevitable risks and losses. They have, admittedly, been somewhat speculative (although, I contend, much less so than are the alarmist arguments). This is inevitable. We have very little empirical evidence for the superiority or inferiority of specific ways of organizing and conducting research; and unfortunately, such evidence is extremely hard to obtain, as we cannot carry out large-scale experiments, and too little can be gained from consulting the historical record. Many of those who lament the way science is currently organized or conducted do present their views as being based on historical evidence. Their arguments often come down to the colloquial wisdom that you should “never change a winning team.” But who knows how much the team has been winning, after all? Surely science has done much better than soothsaying or witchcraft. But we have very little basis for comparison with alternative paths of development that could still be considered developments of science. Hence we still have to rely on a priori reasoning, albeit informed by selective evidence, case-studies and the like. And there is simply no a priori reason to assume that RC should be epistemically inferior.

I must once again stress that I have not been arguing that big science is unproblematic. I have hinted at some ways in which it may, indirectly, have negative epistemic consequences, though these may be outweighed by other and more positive effects—while I have also noted that big science may inhibit creativity and knowledge production not by failing to meet, but rather by conforming too closely to, the requirements laid down by GA. More serious, perhaps, are the ethical and political issues.[84] Practices like gratuitous authorship may not matter much for gain or loss of knowledge; but they may be bad for the distribution of credit, wear on scientists’ motivation and reduce mutual trust.[85] This could also have epistemic consequences in the long run, if it threatens the meritocratic system or makes scientists more suspicious and less keen on taking part in collaborative work or even working in specific fields. It is, however, an open question whether something like this is likely to happen, and whether the negative side-effects will be balanced by the positive, e.g. the heightened visibility and impact of important research that may come from its being associated with certain persons, regardless of their actual contributions.

Let me finally note that the very notion of RC, though certainly suggestive, is actually too indiscriminate to be of much use for theoretical or empirical studies of contemporary science. It combines features like scale, distribution, decentralization and interdisciplinarity, which are in reality more loosely associated and may not be best exemplified by the favourite examples of GA. One way to move beyond pure speculation would be to carry out more detailed case studies of collaboration in specific fields and of specific types and comparing the results, which could in turn serve as a basis for the construction of more adequate concepts. In the meantime, I will allow myself to assume that for all the spectacular, indeed mind-blowing news about big-scale collaboration and multi-authorship, there is, from a philosophical point of view, really nothing new under the sun.


Adams, Jonathan. “Collaborations: The Rise of Research Networks.” Nature 490 (17 October 2012): 335–336.

Ahlstrom-Vij, Kristoffer and Jeffrey Dunn. “A Defence of Epistemic Consequentialism.” Philosophical Quarterly 64, no. 257 (2014): 541–551.

Andersen, Hanne. “The Second Essential Tension: On Tradition and Innovation in Interdisciplinary Research.” Topoi 32, no. 1 (2013): 3-8.

Barthes, Roland. “The Death of the Author.” In Image-Music-Text, translated by Stephen Heath, 142-148. London: Fontana Press, 1977.

Bird, Alexander. “Social Knowing: The Social Sense of ‘Scientific Knowledge.’” Philosophical Perspectives 24, no. 1 (2010): 23-56.

BonJour, Laurence. “A Version of Internalist Foundationalism.” In Epistemic Justification, eds. Laurence BonJour and Ernest Sosa, 3-93. Oxford: Blackwell, 2003.

Boisot, Max H. Knowledge Assets: Securing Competitive Advantage in the Information Economy. Oxford: Oxford University Press, 1998.

Chamorro-Premuzic, Tomas. “Why Brainstorming Works Better Online.” Harvard Business Review. April 02, 2015.

Christopherson, Kimberly. “The Positive and Negative Social Implications of Anonymity in Internet Interactions: ‘On the Internet, Nobody Knows You’re a Dog.’” Computers in Human Behavior 23 (2007): 3038-3056.

Collins, Harry. Tacit and Explicit Knowledge. Chicago: University of Chicago Press, 2010.

Collins, Harry and Robert Evans. Rethinking Expertise. Chicago: University of Chicago Press, 2007.

Connolly, Terry, Leonard M. Jessup and Joseph S. Valacich. “Effects of Anonymity and Evaluative Tone on Idea Generation in Computer-Mediated Groups.” Management Science 36, no. 6 (1990): 689-703.

Conee, Earl and Richard Feldman. “Internalism Defended.” In Evidentialism, eds. Earl Conee and Richard Feldman, 53-82. Oxford: Oxford University Press, 2004.

Donaldson, Lex. American Anti-Management Theories of Organization: A Critique of Paradigm Proliferation. Cambridge: Cambridge University Press, 1995.

Easwaran, Kenny. “Probabilistic Proofs and Transferability.” Philosophia Mathematica (III) 17 (2009): 341-62.

Etzkowitz, Henry. MIT and the Rise of Entrepreneurial Science. London: Routledge, 2002.

Fallis, Don. “What Do Mathematicians Want? Probabilistic Proofs and the Epistemic Goals of Mathematicians.” Logique et Analyse 45 (2002): 373-88.

Fallis, Don. “Probabilistic Proofs and the Collective Epistemic Goals of Mathematicians.” In Collective Epistemology, eds. Hans Bernard Schmid, Marcel Weber, Daniel Sirtes, 157-175. Ontos Verlag, 2011.

Foucault, Michel. “What is an Author?” In Aesthetics, Method and Epistemology, edited by J. D. Faubion. Translated by R. Hurley et al., 205-222. New York: The New Press, 1998.

Frost-Arnold, Karen. “Trustworthiness and Truth: The Epistemic Pitfalls of Internet Accountability.” Episteme 11, no. 1 (2014): 63-81.

Galison, Peter. “The Collective Author.” In Scientific Authorship: Credit and Intellectual Property in Science, edited by Peter Galison and Mario Biagioli, 325-353. New York and Oxford: Routledge, 2003.

Galison, Peter and Mario Biagioli, eds. Scientific Authorship: Credit and Intellectual Property in Science. New York and Oxford: Routledge, 2003.

Gilbert, Margaret. On Social Facts. London: Routledge, 1989.

Gilbert, Margaret. Joint Commitment: How We Make the Social World. Oxford: Oxford University Press, 2014.

Goldman, Alvin I. “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological Research 63, no. 1 (2001): 85-110.

Graham, Peter. “Liberal Fundamentalism and Its Rivals.” In The Epistemology of Testimony, eds. Jennifer Lackey and Ernest Sosa, 93-115. Oxford: Oxford University Press, 2006.

Hardwig, John. “Epistemic Dependence.” Journal of Philosophy 82, no. 7 (1985): 335-349.

Hardwig, John. “The Role of Trust in Knowledge.” Journal of Philosophy 88, no. 12 (1991): 693-708.

Hirsch, Eric D. Validity in Interpretation. New Haven, CT: Yale University Press, 1967.

Hong, Lu and Scott Page. “Groups of Diverse Problem Solvers Can Outperform Groups of High-Ability Problem Solvers.” Proceedings of the National Academy of Sciences of the United States of America 101, no. 46 (2004): 16385–16389.

Huebner, Bryce. Macrocognition: A Theory of Distributed Minds and Collective Intentionality. Oxford: Oxford University Press, 2014.

Huebner, Bryce, Rebecca Kukla, and Eric Winsberg. “Making an Author in Radically Collaborative Research.” In Scientific Collaboration and Collective Knowledge, edited by Thomas Boyer, Connor Mayo-Wilson, and Michael Weisberg. Oxford: Oxford University Press, forthcoming.

Iliffe, Robert. “Butter for Parsnips: Authorship, Audience, and the Incomprehensibility of the Principia.” In Scientific Authorship: Credit and Intellectual Property in Science, edited by Peter Galison and Mario Biagioli, 33-66. New York and Oxford: Routledge, 2003.

Juhl, Peter D. Interpretation: An Essay in the Philosophy of Literary Criticism. Princeton: Princeton University Press, 1980.

Kieser, Alfred and Peter Walgenbach. Organisation. Stuttgart: Schäffer-Poeschel, 2010.

Kitcher, Philip. “The Division of Cognitive Labor.” Journal of Philosophy 87, no. 1 (1990): 5–22.

Kitcher, Philip. The Advancement of Science. New York: Oxford University Press, 1993.

Kitcher, Philip. Science, Truth and Democracy. Oxford: Oxford University Press, 2001.

Kitcher, Philip. Science in a Democratic Society. Amherst, NY: Prometheus, 2011.

Klausen, Søren H. “Two Notions of Epistemic Normativity.” Theoria 75 (2009): 161-178.

Klausen, Søren H. “Sources and Conditions of Scientific Creativity.” In Handbook of Research on Creativity, edited by Janet Chan and Kerry Thomas, 33-47. Cheltenham: Elgar, 2013.

Klausen, Søren H. “Group Knowledge: A Real-World Approach.” Synthese 192, no. 3 (2015): 813-839.

Klausen, Søren H. “Levels of Literary Meaning.” Philosophy and Literature 41, no. 1 (2017; in press).

Koestler, Arthur. The Act of Creation. London: Penguin, 1964.

Kornblith, Hilary. On Reflection. Oxford: Oxford University Press, 2012.

Krimsky, Sheldon. “Do Financial Conflicts of Interest Bias Research? An Inquiry into the ‘Funding Effect’ Hypothesis.” Science, Technology & Human Values 38, no. 4 (2013).

Kukla, Rebecca. “‘Author TBD’: Radical Collaboration in Contemporary Biomedical Research.” Philosophy of Science 79, no. 5 (2012): 845-858.

Kuhn, Thomas. The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: University of Chicago Press, 1977.

Lesser, Lenard I., Cara B. Ebbeling, Merrill Goozner, David Wypij, and David S. Ludwig. “Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles.” PLOS Medicine 4, no. 1 e5 (2007): 0041-0046. doi:10.1371/journal.pmed.0040005.

Livingston, Paisley. Art and Intention. Oxford: Oxford University Press, 2005.

Levine, John M. et al. “Newcomer Innovation in Work Teams.” In Group Creativity: Innovation Through Collaboration, edited by Paul B. Paulus and Bernard A. Nijstad, 202-224. Oxford: Oxford University Press, 2003.

List, Christian and Philip Pettit. Group Agency. Oxford: Oxford University Press, 2011.

Mathiesen, Kay. “The Epistemic Features of Group Beliefs.” Episteme 2 (2006): 161-175.

Marušić, Ana, Lana Bošnjak and Ana Jerončić. “A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines.” PLOS ONE 6, no. 9, e23477 (2011): 1-17. doi:10.1371/journal.pone.0023477.

Milliken, Frances J. et al. “Diversity and Creativity in Work Groups.” In Group Creativity: Innovation Through Collaboration, edited by Paul B. Paulus and Bernard A. Nijstad, 32-62. Oxford: Oxford University Press, 2003.

Myers-Schulz, Blake and Eric Schwitzgebel. “Knowing that P Without Believing that P.” Nous 47, no. 2 (2013): 371-384.

Nauenberg, Michael. “The Reception of Newton’s Principia.” arXiv:1503.06861 (2015).

Nersessian, Nancy. Creating Scientific Concepts. Cambridge, MA: MIT Press, 2008.

Nestle, Marion. “Food Company Sponsorship of Nutrition Research and Professional Activities: A Conflict of Interest?” Public Health Nutrition 4, no. 5 (2001): 1015-1022.

Owens, David. Reason Without Freedom: The Problem of Epistemic Normativity. London: Routledge, 2000.

Paulus, Paul B. and Bernard A. Nijstad, eds. Group Creativity. Oxford: Oxford University Press, 2003.

Petersen, E. N. and Schaffalitzky, C. S. D. “Why Not Open the Black Box of Journal Editing in Philosophy? Make Peer Reviews of Published Papers Available.” Forthcoming.

Rowbottom, Darrell. “N-Rays and the Semantic View of Scientific Progress.” Studies in History and Philosophy of Science 39 (2008): 277–278.

Sarewitz, Daniel. Frontiers of Illusion: Science, Technology, and the Politics of Progress. Philadelphia: Temple University Press, 1996.

Scott, W. Richard. Organizations: Rational, Natural, and Open Systems, 3rd ed. Englewood Cliffs, N.J.: Prentice Hall, 1992.

Schmitt, Frederick F. “Transindividual Reasons.” In The Epistemology of Testimony, edited by Jennifer Lackey and Ernest Sosa, 193-224. Oxford: Oxford University Press, 2006.

Smolin, Lee. “Why No New Einstein?” Physics Today June 2005: 56-57.

Stanley, Jason. Know How. Oxford: Oxford University Press, 2011.

Tuomela, Raimo. The Philosophy of Sociality: The Shared Point of View. Oxford: Oxford University Press, 2007.

Winsberg, Eric, Bryce Huebner, and Rebecca Kukla. “Accountability and Values in Radically Collaborative Research.” Studies in History and Philosophy of Science Part A 46 (2014): 16-23.

Weisberg, Michael and Ryan Muldoon. “Epistemic Landscapes and the Division of Cognitive Labor.” Philosophy of Science 76, no. 2 (2009): 225–252.

Zollman, Kevin J. S. “The Epistemic Benefit of Transient Diversity.” Erkenntnis 72, no. 1 (2010): 17-35.

[1] Etzkowitz, MIT and the Rise of Entrepreneurial Science.

[2] Adams, “Collaborations.”

[3] Winsberg is not from Georgetown, but I count him among the Georgetown Alarmists because of his association with Huebner and Kukla. It is likely that not all of GA subscribe to all of the claims attributed to them in this paper, at least not with equal confidence or emphasis. Nevertheless, a common “alarmist” attitude is clearly detectable in their writings.

[4] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 13; 23.

[5] Thus, iv) follows from i) and ii), together with the empirical facts about RC. vii) follows from vi), v) and iv) (though vii) is also inferred directly from the lack of accountability in RC; it may be said that GA do not generally claim that authorship is a condition for scientific knowledge, only that the conditions they lay down for collective scientific knowledge overlap those they lay down for authorship).

[6] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 2ff.

[7] Kukla, “‘Author TBD,’” 848.

[8] Ibid., 857; Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 1.

[9] Winsberg, Huebner and Kukla, “Accountability and Values in Radically Collaborative Research,” 1.

[10] Huebner, Macrocognition, 213f.

[11] Winsberg, Huebner and Kukla, “Accountability and Values in Radically Collaborative Research,” 1.

[12] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 13.

[13] Barthes, “The Death of the Author.”

[14] Foucault, “What is an Author?”

[15] Kukla, “‘Author TBD,’” 846.

[16] Ibid., 852.

[17] As one reviewer kindly did.

[18] E.g. right at the beginning of Winsberg, Huebner, and Kukla, “Accountability and Values in Radically Collaborative Research.”

[19] Kukla, “‘Author TBD,’” 849.

[20] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research.”

[21] It is, moreover, debatable whether scientific progress should be measured in terms of knowledge production or merely truth production. See Rowbottom, “N-Rays and the Semantic View of Scientific Progress,” for a defence of the latter view.

[22] Tuomela, The Philosophy of Sociality; List and Pettit speak of a common awareness (Group Agency, 33).

[23] Gilbert, On Social Facts; Joint Commitment.

[24] Mathiesen, “The Epistemic Features of Group Beliefs.”

[25] Livingston, Art and Intention, Ch. 3.

[26] Mathiesen, “The Epistemic Features of Group Beliefs.”

[27] Mathiesen requires of the members of a group with an epistemic goal that its members must commit themselves to follow certain practices, viz. those that are seen as appropriately regulating epistemic endeavours. That still seems weaker than GA’s accountability requirement, which involves an actual ability to justify the claims in question (though it is weakened in some subsequent formulations—cf. 2) above.

[28] Scott, Organizations, 10; Donaldson, American Anti-Management Theories of Organization, 135; Kieser and Walgenbach, Organisation, 6.

[29] Klausen, “Group Knowledge.”

[30] Hardwig, “The Role of Trust in Knowledge.”

[31] Cf. 2.

[32] Nor is it likely that they have been able to point to a neighbouring scientist who could (contrary to what was suggested by a reviewer). The emergence of the evolutionary synthesis was structurally more similar to RC (as described by GA) than to a simple chain of ordered, cumulative epistemic tasks.

[33] Klausen, “Levels of Literary Meaning.”

[34] Iliffe, “Butter for Parsnips”; Nauenberg, “The Reception of Newton’s Principia.”

[35] Galison, “The Collective Author.”

[36] Cf. Livingston, Art and Intention.

[37] Hirsch, Validity in Interpretation; Juhl, Interpretation.

[38] Klausen, “Levels of Literary Meaning.”

[39] The ordinary notion of authorship may be ambiguous—or disjunctive—inasmuch as both the intentional production of first-order meaningful language (of certain specific kinds) and the selection, organization and communication of such language may suffice for authorship.

[40] Livingston, Art and Intention, Ch. 3.

[41] Ezra Pound was not recognized as co-author of Eliot’s The Waste Land, even though he appears to have acted as a kind of editor, or even metacognitive assistant, for Eliot, helping to select and arrange the vast and heterogeneous material that Eliot had compiled. There are numerous cases of works, by e.g. Wolfe, Yeats and Brecht, which appear to have been produced in a genuinely collective manner, without their co-authors having been explicitly recognized as such. Hence the scientific practice of authorship attribution can be said to be, at least in certain respects, more in line with the commonsense notion of authorship than the traditional “literary” one, which has often generated a wrong impression of solitary work.

[42] Though it has recently been suggested, not quite implausibly, that knowledge does not even require belief. See e.g. Myers-Schulz and Schwitzgebel, “Knowing that P Without Believing that P.”

[43] This is conceded even by leading internalists, e.g. evidentialists like Conee and Feldman, “Internalism Defended” or BonJour, “A Version of Internalist Foundationalism.”

[44] Klausen, “Group Knowledge.”

[45] See Bird, “Social Knowing” for a defense of a similar view.

[46] A socialized version of responsibilism has been proposed by Owens, Reason Without Freedom: “A belief is justified if every rational agent to whom responsibility for the belief applies or can pass acts responsibly with regard to the belief” (cf. Schmitt, “Transindividual Reasons,” 215).

[47] Of course, GA have not claimed that no knowledge is produced in RC, as one reviewer pointed out. But they obviously worry that the relevant kind of knowledge—the putative scientific contribution, the end result—will not really be produced.

[48] “‘Author TBD,’” 850.

[49] Ibid., 849. In fairness to GA, it should be said that they are, perhaps, merely concerned with the possibility that testimony could tie together the whole group of collaborators, in the way required for authorship, and not with the possibility of distributed testimony-based knowledge. The latter possibility should be taken seriously, however.

[50] See e.g. the overview given by Graham, “Liberal Fundamentalism and Its Rivals.”

[51] Although the trust in question is blind as defined by Hardwig himself: The recipient does not have the reasons that are necessary to (directly) justify the belief in question (Hardwig, “The Role of Trust in Knowledge,” 699). Goldman, “Experts” argues convincingly that the actual practice of acquiring knowledge by testimony is less blind (in the wider sense) than Hardwig and proponents of a direct, non-reductionist view usually assume.

[52] Winsberg, Huebner and Kukla, “Accountability and Values in Radically Collaborative Research,” 1.

[53] Huebner, Kukla, and Winsberg, “Making an Author in Radically Collaborative Research,” 13.

[54] Klausen, “Sources and Conditions of Scientific Creativity.”

[55] Goldman, “Experts.”

[56] Collins and Evans, Rethinking Expertise.

[57] From the theoretical perspective of GA, this may be somewhat different. On their view, improved accountability will automatically increase knowledge production, as it is built into their definition of knowledge.

[58] Cf. Kitcher, Science in a Democratic Society, 145.

[59] Nestle, “Food Company Sponsorship of Nutrition Research and Professional Activities”; Lesser et al., “Relationship between Funding Source and Conclusion among Nutrition-Related Scientific Articles”; Krimsky, “Do Financial Conflicts of Interest Bias Research?”

[60] Kitcher, Science, Truth and Democracy; Science in a Democratic Society.

[61] Goldman, “Experts,” 205ff.

[62] Galison, “The Collective Author.”

[63] Klausen, “Two Notions of Epistemic Normativity”; Ahlstrom-Vij and Dunn, “A Defence of Epistemic Consequentialism.”

[64] Frost-Arnold, “Trustworthiness and Truth.”

[65] Connolly, Jessup, and Valacich, “Effects of Anonymity and Evaluative Tone on Idea Generation in Computer-Mediated Groups.”

[66] For arguments that point to the relative importance of accountability, see Petersen and Schaffalitzky, “Why Not Open the Black Box of Journal Editing in Philosophy?”

[67] As one reviewer kindly pointed out.

[68] Christopherson, “The Positive and Negative Social Implications of Anonymity in Internet Interactions”; but see Chamorro-Premuzic, “Why Brainstorming Works Better Online” for a more favourable assessment.

[69] Stanley, Know How, 173f.

[70] See Boisot, Knowledge Assets, 42ff., for an instructive summary and analysis of the findings from management and organization science.

[71] Easwaran, “Probabilistic Proofs and Transferability.”

[72] See Fallis, “What Do Mathematicians Want?” and “Probabilistic Proofs and the Collective Epistemic Goals of Mathematicians.” The latter also points out that replication of experiments in science generally proceeds in this way: replications are not based on the exact same evidence, only on evidence of the same specific type.

[73] Klausen, “Sources and Conditions of Scientific Creativity.”

[74] Koestler, The Act of Creation; Nersessian, Creating Scientific Concepts; Klausen, “Sources and Conditions of Scientific Creativity.”

[75] Milliken et al., “Diversity and Creativity in Work Groups”; Hong and Page, “Groups of Diverse Problem Solvers Can Outperform Groups of High-Ability Problem Solvers.”

[76] Levine et al., “Newcomer Innovation in Work Teams.”

[77] Kitcher, “The Division of Cognitive Labor”; The Advancement of Science.

[78] Weisberg and Muldoon, “Epistemic Landscapes and the Division of Cognitive Labor.”

[79] Zollman, “The Epistemic Benefit of Transient Diversity.”

[80] Nersessian, Creating Scientific Concepts.

[81] Kuhn, The Essential Tension; see also Andersen, “The Second Essential Tension.”

[82] Cf. Goldman 1999, 210; 216.

[83] Kindly raised by a reviewer.

[84] In its present state, big science is entangled with tendencies and assumptions, e.g. about the relationship between science and society, that certainly deserve critical attention. For a critical analysis of the assumptions behind post-WW2 science policy, see Sarewitz, Frontiers of Illusion.

[85] See Marušić, Bošnjak and Jerončić, “A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines,” for an instructive but somewhat inconclusive survey, which indicates that there is indeed a serious issue, but offers no clear evidence that the situation is problematic.
