Author Information: Steve Fuller, University of Warwick, S.W.Fuller@warwick.ac.uk
Shortlink: http://wp.me/p1Bfg0-2YQ
Editor’s Note: The SERRC thanks Symposion for permitting us to repost Steve Fuller’s reply to Bill Lynch’s review essay.
Please also refer to Fuller, Steve. “Social Epistemology for Theodicy without Deference: Response to William Lynch.” Symposion 3, no. 2 (2016): 207-218.
Image credit: Aaron, via flickr
Let me start by saying that despite the strong critique that Bill Lynch lodges against the world-view developed in Knowledge: The Philosophical Quest in History,[1] I must credit him with having set out at the start of his essay an admirably comprehensive overview of my intellectual trajectory, including a keen sense of the spirit which has animated it, as well as some of its key twists and turns. I am painfully aware that though I remain very much an engaged and productive thinker, most readers appear to encounter my work like isolated ruins of a lost civilization. The reason may be, as Lynch correctly notes, that I am drawn to bring together sensibilities that are normally seen to be at odds with one another. For this reason, I have always seen Hegel as a model for what a good philosopher should be—someone very much immersed in the differences of his time yet at the same time trying to transcend them by finding a place in the imaginary future (or “The Mind of God”) where they are each given their due.
To be sure, the very idea of “social epistemology” already pointed to such a tendency, given my original interest in recovering a strong normative philosophy of science in the face of an equally strong empirical turn in the history and sociology of science. However, for roughly the past decade, in the context of configuring the future of the human condition (“Humanity 2.0”), I have been combining a progressivist vision of science and technology—perhaps of the sort that postmodernism was supposed to have laid to rest—with an eschatological vision of our having been created “in the image and likeness of God” that is likely to disturb ordinary churchgoing Christians, who would prefer not to take that part of Genesis too literally. As Lynch also correctly observes, if there is a clear target in my book, it is the sort of naturalism—shared by Epicurus, Hume and Darwin—which inclines one to atheism and a generally diminished view of the prospects for the human condition.
I want to spend most of this response defending my recourse to theodicy as a normative horizon, since that is clearly the aspect of my world-view which Lynch finds most offensive. However, my way into that will be through Lynch’s astute observation that much of my intellectual style can be explained by my hostility to deference in all its forms.
Against Deference
Deference is the signature anti-democratic attitude. It goes beyond the call of respect, which is the recognition of someone else as your equal. Deference involves self-subordination. In officially democratic societies, expertise is the only tolerable form of deference, resulting in what I had already called “cognitive authoritarianism” in Social Epistemology.[2] Yet expertise works only because the experts have persuaded us that the knowledge they possess is exactly the knowledge we need and, moreover, that it requires just the sort of esoteric training which they have. For me, this argument is less about justifying the “cognitive division of labour” than about discouraging people from using their own resources to solve whatever problems they face to their own satisfaction.
I don’t mean to say that expert knowledge should be ignored, but it should be seen as a necessary evil—the more necessary, the more evil. It imposes structure on what would otherwise be a dynamic situation. Indeed, I believe capitalism’s instinct to seek cheaper alternatives for any product which threatens to create a bottleneck in the market—that is, a source of rent—applies no less to knowledge itself. Thus, a progressive social epistemology is dedicated to deconstructing (i.e. “creatively destroying”) expertise by making its knowledge more generally available for use, be it through teaching or technology. This is where my own version of social epistemology differs most profoundly in spirit from the sort of analytic social epistemology promoted nowadays by Alvin Goldman and Philip Kitcher.
I should also say that my hostility to deference extends to humility, which I now take to be an especially arch form of arrogance which comes from thinking that you know better than your “betters” just how bad you are. As a piece of social epistemology, humility amounts to a pre-emptive strike against others falsifying your knowledge claims, which serves to immunize you against the prospect of self-improvement.
Humility first became fashionable among followers of Donna Haraway in the late 1980s, when she popularized the idea of “nervous laughter” as an appropriate normative response to science and technology during the Cold War.[3] The idea was to make oneself appear vulnerable to critique by appearing to reveal a “dirty secret”; namely, that one continues to support science and technology despite their potential for mass destruction.
I was originally well-disposed to humility, but I saw it as a dialogical virtue, not as immunity from dialogue.[4] However, as a more generically postmodern sensibility took hold, humility morphed into invulnerability in the guise of a studied ambivalence towards whatever happened, an attitude which Latour had already canonized in the Janus-faced images of countervailing interpretations of “technoscience” peppered throughout Science in Action.[5] Here ambivalence is simply the polite face of unfalsifiability, which absolves you from having to take responsibility for anything you say.
Taken in the context of Latour’s evolution from science anthropologist to eco-friendly metaphysician over the past three decades, it would be easy to read this studied ambivalence as oracular, but in practice it has reinvented old positivist ideas of value-neutrality and instrumentalism in a more florid ontological setting. Instead of the positivist gesture of the researcher remaining silent in the manner of an epistemic ascetic, Latour’s followers in science and technology studies (on the empirical side) and “object-oriented ontology” (on the metaphysical side) have exploited the trope of systrophe to pile on descriptions from many different angles which serve to obscure any normative orientation that they might be thought to have. Those attuned to theology might see Latour’s move as the Catholic way of matching what the logical positivists had achieved by more Protestant means.
In any case, this rhetorical move is papered over in science and technology studies by a redeployment of the long-standing methodological principle of “symmetry,” whereby social explanations should not make reference to factors or events that were not operative at the time of the event in question. Thus, appeals to “truth” and “falsity,” judgements which are reached—if at all—only after the fact, are not allowed. Yet it is worth recalling that in its original formulation, the symmetry principle did not preclude the researcher from making true/false judgements as such: It simply prohibited such judgements from being included as part of the explanation of what happened. Thus, many early interpreters of Shapin and Schaffer—myself included—used the historical contingencies surrounding Hobbes’ exclusion from the Royal Society (which began the fateful separation of philosophy from science) to argue on normative grounds that it would have been better had membership been extended to him.[6]
Nowadays, however, it is more common to treat “symmetry” as something akin to the equal-time doctrine in journalism, its de facto definition of “objectivity.” In Fuller (2000), this is what I identified as the “Prig” attitude adopted by historians whose professional commitment is stronger to representing the archive than to the people and events referenced in it.[7] At one level, historiographical Priggishness seems humble, even modest, in its refraining from judgement. However, at another level, it is simply the arrogation of power through normative detachment from a situation, what Georg Simmel originally dubbed the tertius gaudens, the third party who benefits by not taking sides in a conflict—and perhaps even by promoting the conflict as unresolvable.
Such an attitude has helped to position science and technology studies researchers as prime candidates for policy-based research contracts. For this reason, and as an antidote, I have become increasingly attracted to Jean-Paul Sartre’s rather totalizing notion of responsibility, whereby we bear some direct responsibility for both what we say/do and what we don’t. Of course, once you take responsibility in this extended sense, you remain always open to criticism—and the avoidance of criticism through studied ambivalence is no longer an option.
For Theodicy
I understand Sartre’s extended conception of responsibility in terms of what Lynch treats as the bugbear of my world-view; namely, theodicy. Max Weber got the significance of theodicy right when he observed that the great world-religions can be distinguished by their differing senses of cosmic justice. In the Abrahamic religions, which posit varying degrees of similarity in kind between humans and their creative deity, theodicy aims more specifically to justify to humans God’s often seemingly inscrutable, if not perverse, modus operandi.
Nevertheless theodicy has never been a comfortable topic for either clerics or lay people to discuss. Nowadays, thanks largely to Kant, the main problem with theodicy is seen to be its self-aggrandizing assumption that we might be able to get into the mind of God. Kant’s charge became increasingly pointed once God’s existence itself could no longer be taken for granted, at which point theodicy morphed from mere blasphemy to a lightly veiled version of Nietzsche’s will to power. Lynch’s misgivings seem to be coming from this general set of considerations.
However, among the faithful, theodicy has been problematic because of the potentially alienating image of God that it implied. After all, here was a deity who seemed capable of tolerating all manner of evil and suffering as long as it could be turned towards some ultimate good. Such a God may be quite rational and efficient but not very compassionate. More to the point, would such a deity—were it to exist—be worthy of our allegiance? Darwin, for one, clearly thought not. Indeed, natural selection is basically Reverend Thomas Malthus’ population pressure model of theodicy minus the providential hand of God giving a larger meaning to the process.[8]
To put the matter crudely, but perhaps not so far from what Lynch thinks, what had previously been seen (in Malthus) as the means for realizing the Divine Plan came to be registered (in Darwin) as the unintended consequences of a complex process which exists only for its own sake—and not out of any particular concern for humanity. In other words, Darwin accepted the phenomena and even some of the modes of inference which theodicy had associated with God’s inscrutable ways—the “cunning of reason,” as Hegel semi-secularized it—but could not assign divine authorship to them.
I believe that Darwin de-authorized God in this fashion largely on moral grounds; namely, that the Malthusian theodicy (which was endorsed by William Paley, the godfather of contemporary intelligent design theory) implicated a deity with whom Darwin could not have a personal relationship, as this deity appeared to be indifferent to the fates of individual lives. Recall both Darwin’s Christian upbringing and the grief he suffered at the loss of his daughter. Christianity stresses the individuality of the human soul and the uniqueness of humanity’s saviour, Jesus, who is distinguished by his empathic capacity with each individual human. Population thinking of the sort pioneered by Malthus and generalized by Darwin is antithetical to this traditional understanding of Christianity. But here one should not underestimate the radical shift in Christianity’s cognitive and emotional centre of gravity brought about by the Protestant Reformation.
In particular, John Calvin and his followers began to explore in detail the implications of the radical difference in perspective between the ends of a transcendent and infinite deity and the experiences of a spatio-temporally bound humanity. In this “reformed” vision of Christianity, Jesus came to be seen less as the literal incarnation of God and more as a mask (persona in the original theatrical sense) which God adopts to justify his actions in a way ordinary humans can understand. Not surprisingly, as this thinking becomes more developed, “Unitarian” forms of Christianity which de-emphasize the unique personality of Jesus become more prominent. Theodicy also comes from this reformed Christian view, and its two main secular legacies are utilitarianism (via Reverend Joseph Priestley) and population thinking (via Priestley’s student, Malthus).
Accordingly, many of the traditional “humane” virtues of Jesus come to be seen in purely instrumental terms, which is to say, as virtuous only insofar as they serve some further end. Compassion would fall under this category. Compassion is not a virtue in itself, and in fact can do harm if it promotes a false sense of personal security in the face of genuine existential risk. In other words, the proper emotional terminus of compassion is not a feeling that one’s fate will improve (even if only in the next life) but that one’s plight serves a higher purpose, which should be understood rationally. Indeed, reformed Christians stress the sacrificial nature of Jesus’ death as discharging the debt incurred by Adam’s Fall. Jesus’ divinity lies specifically in his recognition and acceptance of this fate, which is something that Christians in turn should seek to emulate in their lives.
Implied here is an attitude towards the past, which from a secular standpoint can only be called “progressive,” though Calvin almost certainly did not see matters this way. In particular, the past is treated as the hereditary burden of Original Sin which each generation of humans is obliged to mitigate if not fully overcome. Admittedly only God’s Grace determines success in the matter, yet the default normative setting of the past is clearly negative, insofar as whatever misery remains in the world is a reminder of our still fallen state. On this view, while it may not be within humanity’s gift to remove the world’s abiding misery (only God can allow that to happen), the continued existence of such misery is meant to provide an incentive for humans to try to make the world better. Or, as Leibniz put it more abstractly a century after Calvin, we need evil in the world to excite (by contrast) our knowledge of what is good. Durkheim later observed that public executions performed a concrete version of the same function in reinforcing secular society’s norms.
One can also see this general train of thought in the work of Ronald Fisher, who provided the first general mathematical formulation of natural selection theory in the 1920s. He was both a Calvinist and a eugenicist, and regarded the two stances as opposite sides of the same coin.[9] He is perhaps the closest to a direct descendant of Malthus in terms of cognitive-affective orientation when it comes to population thinking. The very idea that one might need to look at the aggregate of the human condition—that is, take seriously the fate of each individual as if they all counted equally—to determine what is in humanity’s best interests is both democratic and godlike. In the case of the latter, it comports with the Christian view that God disposes of each person’s fate individually, even as it reinforces some of the scarier features of democratic elections, e.g. that simple majorities can dominate minorities, a consequence of the fact that in an election, each person’s decision contributes equally to binding everyone’s will.
From this standpoint, we can see Kant’s categorical imperative as the abstract expression of this principle, understood as the frame of mind in terms of which each person should cast their vote. In effect, for Kant, the rational moral agent internalizes the democratic voting procedure as his/her normative horizon, as opposed to simply voting his/her interests and then relying on the procedure itself to sort out the outcome. A good way to see this shift in frame of reference is as a version of the classical philosophical idea that humans can see themselves under multiple metaphysical guises. The stereotyped division in early modern philosophy between “rationalists” and “empiricists” largely turns on identifying the appropriate guise.
Rationalists stressed the overlap of human and divine being, and empiricists the overlap of human and animal being. This in turn explained the relative priority each side gave the various mental faculties. Against this backdrop, Kant can be seen as actually trying to forge a more sui generis sense of the human—hence his coinage of “anthropology”—such that humans are not merely part-divine and/or part-animal but most of all, part-each other. Now, this might be by virtue of being children of the same God or members of the same biological species. But in either case, it establishes a metaphysical standpoint from which to assert the fundamental equality of all people qua people.[10]
The style of population thinking associated with natural selection complicates this trajectory, as it effectively reinjects this democratic turn into the disposition of life itself. The sense of “democracy” that is relevant to nature understood as a “constituency” transpires at the level of the entire ecology, in which a reduction in one species’ population coexists with an increase in another’s. However, in this case, the “vote” one casts is with one’s life, more specifically, whether one lives long enough to bring offspring to fruition. In this context, genetic capacity functions as the biological correlate to the rational capacity that provides the frame of mind in which a vote should be taken in a democracy. And so, corresponding to the Kantian who internalizes the voting procedure as his/her normative horizon is the eugenicist who internalizes the laws of genetics. Just as we already ask responsible citizens to think in terms of policies that are likely to benefit the entire society, we might extend this deliberation to include the sort of people we would wish to have inhabit such a society. And of course, as it becomes easier to access biometric data, individuals will be able to make more informed choices on the matter. But then, the original eugenicists already believed that people should take it upon themselves to decide whether or not to have children, depending on what they know of their genetic capacity.
People may find this train of thought quite logical or totally scary—and here I think the Nazi atrocities do cloud our judgement. But our judgement is equally clouded by the crude conceptions of “ability” and “disability” with which even welfare state eugenicists have operated, not to mention the unfortunate policies which followed from them. Nevertheless, despite these negative lessons of history, I basically think that this is the direction of normative travel, and it is to a better place. However, there are some philosophical loose ends. The main one is that the smoothness of this narrative depends on our successfully internalizing natural selection, understood as the divine surrogate. This presupposes a specific historical trajectory, which has so far gone through two stages:
(1) Malthus (to Darwin): In the beginning, natural selection is a purely external, Calvinist godlike force which is indifferent to the fate of individual humans. Moreover, individuals simply follow their passions, based on their self-interest as understood in the relative short term (i.e. the current or, at most, the very next generation). This is an argument against both the democratic vote and munificent welfare policies.
(2) Fisher (from Galton): Natural selection comes to be internalized as part of the self-understanding of, first, legislators and, eventually, the populace. (We might think of this two-step process as going from Bentham to Kant in terms of the secularisation of the moral horizon.) Thus, people come to judge, say, whether having that extra child is likely to be to everyone’s benefit; if not, then self-selection occurs against reproduction. This line of thought is facilitated by corresponding changes in the environment from the late 19th century onward, from mass education in “civic biology,” as eugenics was often presented in high school textbooks in the early 20th century, to improvements in public hygiene. In effect, what looks from the outside like a disciplining of nature amounts to our internalization of natural selection as part of our own self-understanding. Moreover, if one has retained Malthus’ original theological disposition (as did Fisher), this process amounts to our becoming more God-like, which is the position of contemporary “transhumanism,” a term coined by the officially non-religious eugenicist, Julian Huxley.
Let me pick up on the Kantian connection, since Kant’s profoundly detached ethical attitude, one swayed neither by one’s own nor others’ passions, was part of his strategy to relocate our intuitions of the divine which he believed could not be borne out by pure reason alone. Here I would suggest that Kant retains the reading of Jesus’ parable of the Good Samaritan that was favoured in his Calvinist upbringing—namely, that the “universal love of humanity” (agape) consists in recognizing in the disadvantaged person a rational will like one’s own regardless of whatever positive or negative feelings one immediately registers about the person. After all, the sort of visceral responses that we dignify as compassion are ultimately based on our animal natures (e.g. the simple fact that we can imagine what it is to feel another’s pain), which is the source of Original Sin, which in turn can only be checked if not purged by the sort of principled “deontological” ethic that Kant proposed.
My point in all this is to suggest that the sort of abstract understanding of life’s meaning promoted by theodicy—and which Kant relocates in ethics—sets the stage for the attitude taken towards the individual in population thinking in the social and biological sciences in the 19th and 20th centuries. As Ian Hacking started to make clear forty years ago, our modern paradigms of probability and statistical reasoning originated in early modern attempts to mathematize theological claims in the wake of what is generally called the “Scientific Revolution.”[11]
Malthus’ work, written at the end of the 18th century, may be seen as the last great achievement of this movement. However, these efforts at mathematization—for which Leibniz and others had seen theodicy as providing a metaphysical foundation—had already been subject to a hermeneutical backlash in Leibniz’s day, two generations prior to Malthus. “Historico-critical” scholars of the Bible inspired by Spinoza began to question the sacred book’s literal—including mathematical—meaning, which, for example, had been used to set the date of Creation at 4004 BC.
A new phase of this anti-mathematical backlash emerged a century after Malthus, only this time in a purely secular guise—against neo-classical economics and experimental psychology, which attempted to quantify human meaning-making in terms of various decisions taken (in the market, in response to stimuli, etc.). In this version of the debate, which dominated 20th century philosophical discussion of the social sciences, the descendants of the Leibnizian literalists were the “analytic” or “positivist” school, while the mantle of their Spinozist critics fell to the “historicist” or “interpretivist” school. Thus, the positivists reproduced the arguments of the Biblical literalists of yore, only this time tied to sensory, verbal and numerical “data,” understood as “texts” deserving the sort of reverence previously reserved for Sacred Scripture. The interpretivists, for their part, denied that texts could be treated in such a literal fashion, insisting instead that they required contextualisation in the subjects’ lifeworld.
To be sure, in this second round, the interpretivists faced the additional burden of having to deal with the successful secularization of theodicy’s godlike standpoint in policymaking—first, in political economy and, later, economics and official statistics, which increasingly included psychometrics. Indeed, in retrospect the relatively seamless transition from Leibniz’s theodicy to Bentham’s legislator can be tracked in the ease with which Malthus’ own identity morphed from that of theologian to economist. In any case, the faith that reformed Christians routinely had in the literal understanding of the Bible was inherited by the faith we now invest (at least for policy purposes) in quantified generalizations of human conduct.
In neither case has the faith ever been asserted without objection. However, the conditions under which we might doubt one version of textual literalism should be seen as comparable to the conditions under which we might doubt the other. At stake is our epistemic access, respectively, to the divine mind and the human mind. Both the original Biblical literalists and today’s statisticians and psychometricians are convinced that, even granting the vicissitudes of imperfect human cognition (both at the time of expression and in its transmission over time and space), we have a sufficiently robust empirical record for orienting our conduct.
Theodicy’s Lesson to Philosophy: Epistemology as the Higher Ethics
For me, one of the most attractive features of theodicy, which was clearly recognized by Leibniz, is that all the evil in the world which we might be tempted to attribute to God turns out to be a form of ignorance on our own part. Evil becomes error, and Original Sin the recognition of our own finitude as ignorance, which then creates an endless thirst for knowledge, which, in a sense, reproduces the sin while providing the basis for overcoming it. After all, we could have remained finite creatures without ever having to recognize our finitude, in which case we would have remained in the Garden of Eden. But we would have also remained as animals, to whom this “in itself” sense of ignorance—to use the Hegelian jargon—has been traditionally attributed. Whereas animals don’t know that they don’t know, humans do. In this respect, humans are animals who can stand outside themselves in order to see beyond their epistemic limits. For Leibniz and other devotees of theodicy, such feats of the imagination constituted “rational intuition,” a faculty which overlaps with the divine mind. However, Kant notoriously debunked such feats of the imagination as no more than projective fictions.
But if evil is error, then two orientations towards it are possible: (1) We can try to prevent it. (2) We can let it happen and then try to use it. What we normally call “learning” involves doing both, first (2) and then (1) when the next opportunity for error arises. However, if the “we” is meant as a personification of natural selection, then that’s not really how it works, despite the efforts of the psychologist Donald Campbell and others to develop an “evolutionary epistemology.” Natural selection is really just about (2). In other words, according to natural selection, we live in a world in which error is intrinsic to the normal course of things (aka genetic variation and mutation). The only remaining question is who at any given moment takes most advantage of this regular error generation. “Advantage” in the context of natural selection is ultimately about reproductive achievement. But how do the non-reproducing members contribute to a stronger common gene pool in the future?
The slightly glib but not trivial answer is “simply by being there.” This answer is in the spirit of Leibniz’s view that evil is required for us to recognize good. Information economists nowadays talk about this in terms of the noise that’s necessary for the signal to be received. In other words, it is difficult to tell good from bad unless you’ve got a baseline, which is the “background noise.” This is used by economists to justify the proliferation of entrants into, say, the labour market or, for that matter, the academic research market—namely, with more entrants, the signal can be more easily distinguished from the noise.
To be sure, this begs lots of questions about the receiver’s mindset that enables it to draw such distinctions. But in any case, such distinctions are drawn. The difference between attributing this mindset to God or natural selection lies in whether there is something “principled” to be understood which we might turn to our advantage—even “game the system.” This is why the “blindness” of natural selection—its utter obliviousness to what humans might recognize as rational—has been the most irksome feature of Darwin’s specific account of evolution.
To be fair, Darwin knew nothing about genetics, let alone its basis in molecular biology, which no doubt contributed to his forthright denial of reason in nature (aka teleology). But of course, our knowledge on this score has massively improved since Darwin’s day, yet Darwin’s scepticism concerning teleology remains the default scientific sensibility. Thus, the slightest evidence of teleology is followed by Darwin-inspired accounts showing how it could have been brought about without positing teleology. To anti-Darwinists, such as intelligent design theorists, these accounts simply reveal the often counter-intuitive means by which the divine or otherwise intelligent ends were brought about.
What all this suggests is that the metaphysically interesting question about evolution is not whether it is true but whether it is something that we can understand, control and direct so as to allow us to flourish indefinitely in a way no other species has. Commitment to an answer of “yes” runs counter to what Darwin thought was possible, yet it would corroborate the Biblical idea that we are created in the image of God (imago Dei). In other words, the progression of humanity amounts to, in Popperian terms, a “bold hypothesis” as we subject our species to ever greater risk. Indeed, I have been promoting this idea as an ethic associated with the proactionary principle, the exact opposite of the better-known precautionary principle.[12]
The longer humanity succeeds at beating the odds, the greater the likelihood that we know what we’re doing, even as we take several significant hits along the way. However, this “knowledge” is not an inductive generalization from past experience but a deeper epistemic capacity, one which nowadays tends to be associated with a “causal” understanding of reality but is not so far from what Leibniz and especially modern mathematicians have characterised as “rational intuition.”
Finally, let me provide some sense of how theodicy came to play such a central role in my thinking. Early in my career I was influenced by a distinction that Jon Elster drew, based on his reading of Norbert Wiener, between “strategic” and “parametric” rationality, which I introduced in Fuller (1988).[13] The difference turns on how one deals with error. The strategic rationalist envisages error as something active, which recurs in new and perhaps more insidious forms with each effort at elimination, very much in the manner of an adversary. In contrast, the parametric rationalist sees error as a passive deficit from which one might recover through some act of completion.
Corresponding to these two epistemic notions are two ethical ones, in which “error” means “evil.” The strategic opponent is like the positive incarnation of evil in Zoroastrianism, which was given a Christian makeover as “Satan.” Parametric error is more like the privation account of human evil provided in Augustinian theology, whereby Original Sin is associated with our own freely lost divinity, which might be somehow redeemed in the future, something closer to “weakness of the will.” A shift from a strategic to a parametric orientation towards what we do not know about nature emboldened devotees of the inquisitorial (what we now call “experimental”) method in the early modern era—most notably Francis Bacon—to conclude that it might be easier to extract the secrets of nature than those of our fellow humans, who operate in a more strategic vein to evade our inquiries.[14]
In between these two positions on error sits the “deficit with a memory” (residue), or “debt,” which lingers after the deficit has been met, since “completion” in the case of debt rarely means restoring an original state but rather some equivalent level of compensation for the original disruptive act. Here too there is both an epistemic and an ethical spin: Who is able to benefit from my exposure of vulnerability—both in terms of how it was brought about and how I managed to redress it?
If I am the main beneficiary, then the debt remains in me as “conscience” or “superego” or some other self-disciplining faculty of the soul, which prompts me always to think that good is never good enough. However, if someone else is the main beneficiary, because they are also witness to my vulnerability, then they are in a position to exploit me, be it as a Mafia don or a capitalist employer. Our susceptibility to such exploitation is bound to take a new and potentially more insidious turn as such corporate information giants as Google incentivize us to reveal more and more about ourselves in return for free access to their search engines and databases. To put it somewhat more metaphysically, humans discharge the indebtedness of their being by becoming the gift that can only keep giving.
References
Fuller, Steve. Social Epistemology. Bloomington: Indiana University Press, 1988.
Fuller, Steve. Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press, 2000.
Fuller, Steve. Humanity 2.0: What It Means to Be Human Past, Present and Future. London: Palgrave Macmillan, 2011.
Fuller, Steve. Knowledge: The Philosophical Quest in History. London: Routledge, 2015.
Fuller, Steve and James H. Collier. Philosophy, Rhetoric and the End of Knowledge, 2nd ed. (orig. Fuller 1993). Mahwah NJ: Lawrence Erlbaum Associates, 2004.
Fuller, Steve and Veronika Lipinska. The Proactionary Imperative: A Foundation for Transhumanism. London: Palgrave Macmillan, 2014.
Hacking, Ian. The Emergence of Probability. Cambridge UK: Cambridge University Press, 1975.
Haraway, Donna. Simians, Cyborgs, Women. London: Free Association Books, 1991.
Harrison, Peter. The Fall of Man and the Foundations of Science. Cambridge UK: Cambridge University Press, 2007.
Latour, Bruno. Science in Action. Milton Keynes UK: Open University Press, 1987.
Passmore, John. The Perfectibility of Man. London: Duckworth, 1970.
Shapin, Steven and Simon Schaffer. Leviathan and the Air-Pump. Princeton: Princeton University Press, 1985.
[1] Steve Fuller 2015.
[2] Fuller 1988, chapter 12.
[3] Haraway 1991.
[4] cf. Steve Fuller and James H. Collier 2004, chapter 8.
[5] Bruno Latour 1987.
[6] Steven Shapin and Simon Schaffer 1985.
[7] Fuller 2000.
[8] John Passmore 1970, chapter 9.
[9] Steve Fuller and Veronika Lipinska 2014, chapter 3.
[10] Fuller 2011, chapters 1-2.
[11] Hacking 1975.
[12] Fuller and Lipinska 2014.
[13] Fuller 1988, chapter 2.
[14] Peter Harrison 2007.