Giangiuseppe Pili (2021) has written an interesting response to my article, “Rethinking the Just Intelligence Theory of National Security Intelligence Collection and Analysis: The Principles of Discrimination, Necessity, Proportionality and Reciprocity” (Miller 2021). I agree with much of what Pili has to say. Here I want to clarify and expand on some of the points I made in my original article but do so in the light of Pili’s comments.
Miller, Seumas. 2021. “Reply to Giangiuseppe Pili’s ‘The Missing Dimension—Intelligence and Social Epistemology’.” Social Epistemology Review and Reply Collective 10 (10): 54-58. https://wp.me/p1Bfg0-6f2.
In my article I said:
Since the end of epistemic activity is (roughly speaking) knowledge and, therefore as a matter of logic, one embarks on an epistemic project from a position of ignorance—and with a set of questions to be answered, e.g. Is there a threat? What is the nature of the threat?—the content of the end is in an important sense and, by definition, unknown. Accordingly, any prior moral assessment of the contemplated epistemic action is necessarily radically incomplete, since it depends in large part on the moral costs attaching to the realization of the epistemic end, i.e. of being in possession of the answers to the questions sought—something which is, to reiterate, by definition unknown (216).
I suggested that this feature of epistemic action distinguished it from kinetic action. In the case of kinetic action, but not epistemic action, one knows what the content of one’s end is. For instance, when a combatant fires at an enemy combatant intending to kill him, the combatant knows what his own end is, i.e. the enemy combatant dead as a result of being shot by the combatant.
Epistemic Action and Kinetic Action
This distinction between epistemic action and kinetic action is consistent with the fact that the consequences of a kinetic action might be unknown to the person who performs the kinetic action (the agent of the kinetic action). For instance, unknown to the combatant who shoots dead the enemy combatant, his lethal action may have made known his presence to an officer with important dispatches who is accompanying the enemy combatant, thereby enabling the officer to escape.
This distinction between epistemic action and kinetic action is also consistent with the fact that epistemic action is, nevertheless, action and, therefore, elements of the content of the end of the epistemic action are necessarily known to its agent. For instance, an intelligence officer of nation-state A seeking to establish whether or not the military force of nation-state B poses a threat and, if so, what the nature of the threat is, knows at the very least that the source of the potential threat is B and its military force.
The necessarily radical incompleteness of the content of the ends of epistemic actions, but not of kinetic actions, has moral implications. As I said in my article, in one important respect, at least in national security settings, decisions to perform epistemic actions, e.g. conduct intelligence operations, are less morally problematic than decisions to perform kinetic actions, e.g. bomb installations, because, generally speaking, kinetic actions per se frequently are harmful whereas epistemic actions per se are not; it is only the kinetic actions based on the results of epistemic actions that frequently are harmful. However, as I also pointed out in my article, in another respect, decisions to perform epistemic actions are more problematic than are decisions to perform kinetic actions, since unlike in the case of kinetic actions, agents do not know what a successfully performed epistemic action consists in prior to its successful performance.
Notice that this is not the same thing as agents not knowing whether they will succeed or not. An epistemic agent might know that he will succeed in performing his intended epistemic action, e.g. that he will discover whether or not the potentially belligerent state B is in fact developing WMDs. But prior to the successful performance of the epistemic action he does not know whether or not B has WMDs. It might be thought that at least the epistemic agent will know what the possible state of affairs of interest to him consists in, even if he does not know whether or not it exists, e.g. the (possible) state of affairs consisting of B possessing chemical WMDs. But even this is not necessarily so. For instance, it is not so if A does not know whether B is a threat and, even if B is a threat, what the nature of that threat is likely to be. Perhaps B is developing a secret new kind of weapon (e.g. an atomic bomb during the period when atomic bombs had yet to be invented) and the characteristics of this weapon are entirely unknown to A. Accordingly, in such cases it is difficult to make ex ante conditional judgments of the form: If p then we should x but not if not p. For we do not know what the content of the proposition p (and, therefore, of not p) is.
It might be argued that while epistemic actions are indeed problematic in this respect, this does not have significant moral implications. However, this is not so.
First, epistemic action often yields moral knowledge in the sense of morally laden knowledge (as opposed to knowledge of moral theories or principles), e.g. that Donald Trump lied or that the Nazis committed genocide against Jews.
Second, even if most epistemic activity does not yield morally laden knowledge or is not in itself harmful, it may involve harmful kinetic actions as a means, e.g. coercive interrogation.
Third, those from whom secret information is being sought are not likely to be kindly disposed to those seeking to discover their secrets; hence, in most jurisdictions, enemy espionage agents are criminals.
Fourth, more generally, national security intelligence collection takes place in an adversarial setting; nation-state A is seeking knowledge about B which B does not wish A to have, and vice-versa. Accordingly, this epistemic activity involves secrecy, deception, disinformation and so on, i.e. it inevitably has moral implications even if we set aside that it is undertaken in the service of kinetic action.
Fifth, finally, and most importantly, epistemic action in national security settings has moral implications by virtue of leading to morally significant kinetic action, including harmful kinetic action; indeed, at times, extremely harmful action, e.g. waging war.
Undesirable and Desirable Possibilities
In terms of the kinetic consequences of epistemic action (including deliberate inaction) there are three general notional possibilities. Firstly, there is a continuing state of ignorance as a result of failing to engage in epistemic action. Secondly, there is epistemic action which is unsuccessful in the sense that it yields falsity. Thirdly, there is epistemic action which is successful in the sense that it yields knowledge.
The first possibility is prima facie undesirable (at least in terms of the national security interests of the nation-state in question, as it sees these interests). Ignorance does not provide a good basis for a nation-state to make national security decisions that further its national security interests. So, the associated counterfactual has the form: If we had known that p then we would have done x and, thereby, averted the bad national security outcome, O.
The second possibility is likewise prima facie undesirable (again, at least in terms of the national security interests of the nation-state in question, as it sees these interests); false beliefs are not a good basis upon which to make national security decisions and, in particular, decisions with respect to potentially very harmful kinetic actions. So, the associated counterfactual has the form: If we had not falsely believed that q, then we would not have done y and, thereby, would not have caused the bad national security consequence, C.
The third possibility is prima facie desirable (yet again, at least in terms of the national security interests of the nation-state in question, as it sees these interests). Knowledge provides a good basis upon which to make national security decisions. Accordingly, the decision-makers in question know that p and perform the kinetic action, x. They are neither ignorant nor possessed of a false belief. So, there are two associated alternative counterfactuals: If we had not known that p, then we would not have done x and, thereby, would not have averted the bad national security outcome, O; and if we had falsely believed that q, then we would have done y and, thereby, caused the bad national security consequence, C.
Notice that these counterfactuals are ex post, i.e. we know them with the benefit of hindsight. Accordingly, they are consistent with our ignorance of the ex ante conditional mentioned above, i.e. If p then we ought to x. For ex ante we might not even know what the content of the proposition p is. Hence, decisions in relation to what epistemic actions to perform are inherently blind in a manner in which decisions to undertake kinetic actions are not. A graphic illustration of the morally problematic consequences of this feature of epistemic action is research into the virulence and transmissibility of pathogens for the purpose of creating a vaccine. Researchers undertaking such research might, and indeed in some cases have, learned how to create a more virulent, more transmissible and, therefore, more dangerous pathogen than the one for which they are attempting to create a vaccine (Miller 2018, Chapter 8).
Let us now turn to our second issue. Notice that the above claims about the desirability of successful epistemic action in national security settings are cast in terms of the national security interests of the particular state in question, as it sees those interests. Naturally, it might not see those national security interests correctly or accurately. Moreover, those national security interests might themselves be morally problematic, e.g. in the case of a nation-state seeking to suppress a morally justified secessionist movement. Accordingly, from an objective standpoint, the truth of these claims about the desirability of successful epistemic action depends on the particular good or bad ends to which the resulting knowledge is put (and also on the intervening kinetic means to those ends) and, more generally, on the competence and moral character, so to speak, of the particular state engaging in the epistemic action in question. Thus, it may be morally preferable that an expansionist authoritarian state is in a state of ignorance or has false beliefs regarding its neighbor’s low level of military preparedness.
Let us, therefore, assume, firstly, that other things being equal, successful (i.e. knowledge-generating) epistemic action undertaken by national security intelligence agents is morally desirable and, secondly, that other things being equal, unsuccessful (i.e. falsehood-generating) or non-existent (i.e. ignorance-maintaining) epistemic action (or inaction) undertaken by these agents is morally undesirable. We assume this for the reason that while the national security intelligence agents, and their military, police and political masters, in question make mistakes, they are by and large competent, relatively morally virtuous, and functioning in a legitimate state. By a “legitimate state”, I mean one that is competent (e.g. it is able to provide essential services), able to maintain its authority (e.g. it is not politically unstable), and, in relation to human rights violations, is not ‘beyond the pale’ (e.g. it is not engaged in genocidal practices or slavery).
One question that now arises pertains to a legitimate state (in my above somewhat permissive sense) which is, nevertheless, an authoritarian state. This state engages in widespread violations of political rights and/or in serious human rights violations in a restricted range of cases, e.g. against those who threaten its authority. Accordingly, some of its national security interests, as it sees them, are not morally justified; indeed, they are morally repugnant. Hence members of its intelligence agencies who engage in epistemic action in the service of these national security interests are providing part of the means (knowledge) to achieve morally bad ends, e.g. repression of a minority engaged in morally justified insurrection. Their successful epistemic action is, therefore, at least to this extent, morally undesirable. Conversely, it may also be that some of their unsuccessful epistemic action (and epistemic inaction maintaining ignorance) is, by parity of reasoning, morally desirable.
Another question that arises here pertains to the ascription and degree of moral responsibility that attaches to epistemic agents engaged in national security intelligence for the harm caused by the kinetic actions whose performance relies on their epistemic actions. Naturally, if a combatant shoots dead an enemy combatant, or a police officer arrests a person who supports boycotts and other peaceful activities done in opposition to an authoritarian state, then the combatant and the police officer are morally responsible for their (respective) actions, albeit they may have diminished responsibility if their actions were both lawful and performed in compliance with the orders of their superior officers. But what of the intelligence officer who, for instance, identified the person to be arrested by the police officer, i.e. who provided a necessary means for the police officer to make the arrest? As is the case with the police officer, the intelligence officer’s action is, let us assume, legal and performed in accordance with the orders of his superiors; he is likewise ‘just doing his job’. Surely, the intelligence officer bears some degree of moral responsibility for the morally undesirable outcome, i.e. for the arrest and incarceration of the person seeking by peaceful means to realize his and others’ moral rights.
Collective Moral Responsibility
I conclude this response to Pili by making a suggestion. One way of framing this problem is in terms of collective moral responsibility. If intelligence agencies and, therefore, intelligence officers, contribute to the overall collective ends of national security, but do so by cooperating with other agencies, such as police agencies and, therefore, police officers, by providing a necessary means (i.e. knowledge) for the realization of these collective ends, then national security is a joint enterprise in which intelligence agencies and their officers are important participants. Accordingly, the various participants, be they intelligence officers, police officers, or their political masters, can be held collectively morally responsible, at least in principle, for the realization of these collective ends, including in the case of morally unacceptable national security collective ends (Miller 2015).
Miller, Seumas. 2021. “Rethinking the Just Intelligence Theory of National Security Intelligence Collection and Analysis: Principles of Discrimination, Necessity, Proportionality and Reciprocity.” Social Epistemology 35 (3): 211-231.
Miller, Seumas. 2018. Dual Use Science and Technology, Ethics and Weapons of Mass Destruction. Springer.
Miller, Seumas. 2015. “Joint Epistemic Action and Collective Moral Responsibility.” Social Epistemology 29 (3): 280-302.
Pili, Giangiuseppe. 2021. “The Missing Dimension—Intelligence and Social Epistemology: A Reply to Miller’s ‘Rethinking the Just Intelligence Theory of National Security Intelligence Collection and Analysis’.” Social Epistemology Review and Reply Collective 10 (7): 1-9.