Author Information: Damien Williams, Virginia Tech, damienw7@vt.edu

Williams, Damien. “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 64-69.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3US

Image by Stu Jones via CJ Sorg on Flickr / Creative Commons

 

Shannon Vallor’s most recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, takes a look at what she calls the “Acute Technosocial Opacity” of the 21st century, a state in which technological, societal, political, and human-definitional changes occur at such a rapid-yet-shallow pace that they block our ability to conceptualize and understand them.[1]

Vallor is one of the most publicly engaged technological ethicists of the past several years, and much of her work’s weight comes from its direct engagement with philosophy—both philosophy of technology and various virtue ethical traditions—and the community of technological development and innovation that is Silicon Valley. It’s from this immersive perspective that Vallor begins her work in Virtues.

Vallor contends that we need a new way of understanding the projects of human flourishing and seeking the good life, an understanding which can help us reexamine how we make, and participate through and with, the technoscientific innovations of our time. The project of this book, then, is to provide the tools to create this new understanding, tools which Vallor believes can be found in an examination and synthesis of the world’s three leading virtue ethical traditions: Aristotelian ethics, Confucian ethics, and Buddhism.

Vallor breaks the work into three parts, and takes as her subject what she considers to be the four major world-changing technologies of the 21st century. The book’s three parts are “Foundations for a Technomoral Virtue Ethic,” “Cultivating the Self: Classical Virtue Traditions as Contemporary Guide,” and “Meeting the Future with Technomoral Wisdom, or How To Live Well with Emerging Technologies.” The four world-changing technologies, considered at length in Part III, are social media, surveillance, robotics/artificial intelligence, and biomedical enhancement technologies.[2]

As Vallor moves through each of the three sections and four topics, she maintains a constant habit of returning to the questions of exactly how each one will either help us cultivate a new technomoral virtue ethic, or how said ethic would need to be cultivated, in order to address it. As both a stylistic and pedagogical choice, this works well, providing touchstones of reinforcement that mirror the process of intentional cultivation she discusses throughout the book.

Flourishing and Technology

In Part I, “Foundations,” Vallor covers both the definitions of her terms and the argument for her project. Chapter 1, “Virtue Ethics, Technology, and Human Flourishing,” begins with the notion of virtue as a continuum that gets cultivated, rather than a fixed end point of achievement. She notes that while there are many virtue traditions with their own ideas about what it means to flourish, there is a difference between recognizing multiple definitions of flourishing and a purely relativist claim that all definitions of flourishing are equal.[3] Vallor engages these different understandings of flourishing throughout the text, but she also looks at other ethical traditions to explore how they would handle the problem of technosocial opacity.

Without resorting to strawmen, Vallor examines the Kantian categorical imperative and utilitarianism in turn. She demonstrates that Kant’s ethics would result in us trying to create codes of behavior that are either always right or always wrong (“Never murder”; “Always tell the truth”), while utilitarian consequentialism would allow us to make excuses for horrible choices in the name of “the Greater Good”—to say nothing of how nebulous, variable, and incommensurate our understandings of “utility” and “good” will be with each other. Vallor says that the rigid, rules-based nature of each of these systems simply can’t account for the variety of experiences and challenges humans are likely to face in life.

Not only that, but deontological and consequentialist ethics have always been this inflexible, and this inflexibility will only be more of a problem in the face of the challenges posed by the speed and potency of the four abovementioned technologies.[4] Vallor states that the technologies of today are more likely to facilitate a “technological convergence,” in which they “merge synergistically” and become more powerful and impactful than the sum of their parts. She says that these complex, synergistic systems of technology cannot be responded to and grappled with via rigid rules.[5]

Vallor then folds in discussion of several of her predecessors in the philosophy of technology—thinkers like Hans Jonas and Albert Borgmann—giving a history of the conceptual frameworks by which philosophers have tried to deal with technological drift and lurch. From here, she decides that each of these theorists has helped to get us part of the way, but their theories all need some alterations in order to fully succeed.[6]

In Chapter 2, “The Case for a Global Technomoral Virtue Ethic,” Vallor explores the basic tenets of Aristotelian, Confucian, and Buddhist ethics, laying the groundwork for the new system she hopes to build. She explores each of their different perspectives on what constitutes The Good Life in moderate detail, clearly noting that there are some aspects of these systems that are incommensurate with “virtue” and “good” as we understand them, today.[7] Aristotle, for instance, believed that some people were naturally suited to be slaves, and that women were morally and intellectually inferior to men, and the Buddha taught that women would always have a harder time attaining the enlightenment of Nirvana.

Rather than simply attempting to repackage these ancient virtue traditions for today’s challenges, Vallor argues that they can teach us something about the shared commitments of virtue ethics more generally, and that what we learn from them will fuel the project of building a wholly new virtue tradition. To discuss their shared underpinnings, she talks about “thick” and “thin” moral concepts.[8] A thin moral concept is defined here as only the “skeleton of an idea” of morality, while a thick concept provides the rich details that make each tradition unique. If we look at the thin concepts, Vallor says, we can see that the bone structure of these traditions is made of four shared commitments:

  • To the Highest Human Good (whatever that may be);
  • That moral virtues are understood to be cultivated states of character;
  • To a practical path of moral self-cultivation; and
  • That we can have a conception of what humans are generally like.[9]

Vallor uses these commitments to build a plausible definition of “flourishing,” looking at things like intentional practice within a global community toward moral goods internal to that practice, a set of criteria from Alasdair MacIntyre which she adopts and expands on.[10] These goals are never fully realized, but always worked toward, and always with a community. All of this is meant to be supported by and to help foster goods like global community, intercultural understanding, and collective human wisdom.

We need a global technomoral virtue ethic because, while the challenges we face require ancient virtues such as courage, charity, and community, those virtues are now required to handle ethical deliberations at a scope the world has never seen.

But Vallor says that a virtue tradition, new or old, need not be universal in order to do real, lasting work; it only needs to be engaged in by enough people to move the global needle. And while there may be differences in rendering these ideas from one person or culture to the next, if we do the work of intentional cultivation of a pluralist ethics, then we can work from diverse standpoints, toward one goal.[11]

To do this, we will need to intentionally craft both ourselves and our communities and societies. This is because not everyone considers the same goods as good, and even our agreed-upon values play out in vastly different ways when they’re sought by billions of different people in complex, fluid situations.[12] Only with intention can we exclude systems which group things like intentional harm and acceleration of global conflict under the umbrella of “technomoral virtues.”

Cultivating Techno-Ethics

Part II does the work of laying out the process of technomoral cultivation. Vallor’s goal is to examine what we can learn by focusing on the similarities and crucial differences of other virtue traditions. Starting in chapter 3, Vallor once again places Aristotle, Kongzi (Confucius), and the Buddha in conceptual conversation, asking what we can come to understand from each. From there, she moves on to detailing the actual process of cultivating the technomoral self, listing seven key intentional practices that will aid in this:

  • Moral Habituation
  • Relational Understanding
  • Reflective Self-Examination
  • Intentional Self-Direction of Moral Development
  • Perceptual Attention to Moral Salience
  • Prudential Judgment
  • Appropriate Extension of Moral Concern[13]

Vallor moves through each of these in turn, taking the time to show how each step resonates with the historical virtue traditions she’s used as orientation markers, thus far, while also highlighting key areas of their divergence from those past theories.

Vallor says that the most important thing to remember is that each step is part of a continual process of training and becoming; none of them is some sort of final achievement by which we will “become moral.” Moral Habituation is the first step on this list because it is the quality at the foundation of all of the others: constant cultivation of the kind of person you want to be. And we have to remember that while all seven steps must be undertaken continually, they must also be undertaken communally. Only by working with others can we build the systems and societies necessary to sustain these values in the world.

In Chapter 6, “Technomoral Wisdom for an Uncertain Future,” Vallor provides “a taxonomy of technomoral virtues.”[14] The twelve concepts she lists—honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom—are not intended to be an exhaustive list of all possible technomoral virtues.

Rather, these twelve things together form a system by which to understand the most crucial qualities for dealing with our 21st-century lives. They’re all listed with “associated virtues,” which help provide a broader and deeper sense of the kinds of conceptual connections we can achieve via relational engagement with all virtues.[15] Each member of the list should support and be supported by not only the other members, but also any as-yet-unknown or -undiscovered virtues.

Here, Vallor continues a pattern she’s established throughout the text of grounding potentially unfamiliar concepts in a frame of real-life technological predicaments from the 20th or 21st century. Scandals such as Facebook privacy controversies, the flash crash of 2010, or even the moral stances (or lack thereof) of CEOs and engineers are discussed with a mind toward highlighting the final virtue: Technomoral Wisdom.[16] Technomoral Wisdom is a means of being able to unify the other virtues, and to understand the ways in which our challenges interweave with and reflect each other. In this way we can both cultivate virtuous responses within ourselves and our existing communities, and also begin to more intentionally create new individual, cultural, and global systems.

Applications and Transformations

In Part III, Vallor puts everything discussed so far to the test, placing all of the principles, practices, and virtues in direct, extensive conversation with the four major technologies that frame the book. She explores how new social media, surveillance cultures, robots and AI, and biomedical enhancement technologies are set to shape our world in radically new ways, and how we can develop new habits of engagement with them. Each technology gets its own chapter, the better to examine which virtues best suit which topic, which goods might be expressed by or in spite of each field, and which cultivation practices will be required within each. In this way, Vallor highlights the real dangers of failing to skillfully adapt to the requirements of each of these unprecedented challenges.

While Vallor considers nearly every aspect of this project in great detail, there are points throughout the text where she seems to fall prey to some of the same technological pessimism, utopianism, or determinism for which she rightly calls out other thinkers in earlier chapters. There is still a sense that these technologies are, of their nature, terrifying, and that all we can do is rein them in.

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, through personally grappling with these tasks, we can move the world, a stance which leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because that would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seem to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will in turn change how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she hopes will help her do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

While the implications of climate catastrophes, dystopian police states, just-dumb-enough AI, and rampant gene hacking seem real, obvious, and avoidable to many of us, many others take them as merely naysaying distractions from the good of technosocial progress and the ever-innovating free market.[17] With that in mind, we need tools with which to begin the process of helping people understand why they ought to care about technomoral virtue, even when they have such large, driving incentives not to.

Without that, we are simply presenting people who would sell everything about us for another dollar with the tools by which to make a more cultivated, compassionate, and interrelational world, and hoping that enough of them understand the virtue of those tools, before it is too late. Technology and the Virtues is a fantastic schematic for a set of these tools.

Contact details: damienw7@vt.edu

References

Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press, 2016.

[1] Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (New York: Oxford University Press, 2016), 6.

[2] Ibid., 10.

[3] Ibid., 19-21.

[4] Ibid., 22-26.

[5] Ibid., 28.

[6] Ibid., 28-32.

[7] Ibid., 35.

[8] Ibid., 43.

[9] Ibid., 44.

[10] Ibid., 45-47.

[11] Ibid., 54-55.

[12] Ibid., 51.

[13] Ibid., 64.

[14] Ibid., 119.

[15] Ibid., 120.

[16] Ibid., 122-154.

[17] Ibid., 249-254.

Author Information: Paul R. Smart, University of Southampton, ps02v@ecs.soton.ac.uk

Smart, Paul R. “(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 45-55.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uq

Image by BTC Keychain via Flickr / Creative Commons

 

Richard Heersmink’s (2018) article, A virtue epistemology of the Internet: Search engines, intellectual virtues, and education, provides an important and timely analysis of the Internet from the standpoint of virtue epistemology.[1] According to Richard, the Internet is an important epistemic resource, but it is one that comes with a range of epistemic hazards. Such hazards, he suggests, motivate a consideration of the ways in which individuals should interact with the Internet.

In particular, Richard appeals to a specific branch of virtue epistemology, known as virtue responsibilism, arguing that certain kinds of cognitive trait (e.g., curiosity and open-mindedness) are useful in helping us press maximal epistemic benefit from the Internet. Given the utility of such traits, coupled with the epistemic importance of the Internet, Richard suggests that educational policy should be adapted so as to equip would-be knowers with the cognitive wherewithal to cope with the epistemic challenges thrown up by the online environment.

There is, no doubt, something right about all this. Few would disagree with the claim that a certain level of discernment and discrimination is important when it comes to the evaluation of online content. Whether such ‘virtues’ are best understood from the perspective of virtue responsibilism or virtue reliabilism is, I think, a moot point, for I suspect that in the case of both virtue responsibilism and virtue reliabilism what matters is the way in which belief-forming informational circuits are subject to active configuration by processes that may be broadly construed as metacognitive in nature (Smart, in pressa). That, however, is a minor quibble, and it is one that is of little consequence to the issues raised in Richard’s paper.

For the most part, then, I find myself in agreement with many of the assumptions that motivate the target article. I agree that the Internet is an important epistemic resource that is unprecedented in terms of its scale, scope, and accessibility. I also agree that, at the present time, the Internet is far from an epistemically safe environment, and this raises issues regarding the epistemic standing of individual Internet users. In particular, it looks unlikely that the indiscriminate selection and endorsement of online information will do much to bolster one’s epistemic credentials.

We thus encounter something of a dilemma: As an epistemic resource, the Internet stands poised to elevate our epistemic standing, but as an open and public space the Internet provides ample opportunities for our doxastic systems to be led astray. The result is that we are obliged to divide the online informational cornucopia into a treasure trove of genuine facts and a ragbag collection of ‘false facts’ and ‘fake news.’ The information superhighway, it seems, promises to expand our epistemic power and potential, but the road ahead is one that is fraught with a dizzying array of epistemic perils, problems, and pitfalls. What ought we to do in response to such a situation?

It is at this point that I suspect my own views start to diverge with those of the target article. Richard’s response to the dilemma is to focus attention on the individual agent and consider the ways in which an agent’s cognitive character can be adapted to meet the challenges of the Internet. My own approach is somewhat different. It is borne out of three kinds of doubt: doubts about the feasibility (although not the value) of virtue-oriented educational policies, doubts about the basic validity of virtue theoretic conceptions of knowledge, and doubts about whether the aforementioned dilemma is best resolved by attempting to change the agent as opposed to the environment in which the agent is embedded. As always, space is limited and life is short, so I will restrict my discussion to issues that I deem to be of greatest interest to the epistemological community.

Reliable Technology

Inasmuch as intellectual virtues are required for online knowledge—i.e., knowledge that we possess as a result of our interactions and engagements with the Internet—they are surely only part of a much broader (and richer) story that includes details about the environment in which our cognitive systems operate. In judging the role of intellectual virtue in shielding us from the epistemic hazards of the online environment, it therefore seems important to have some understanding of the actual technologies we interact with.

This is important because it helps us understand the kinds of intellectual virtue that might be required, as well as the efficacy of specific intellectual virtues in helping us believe the truth (and thus working as virtues in the first place). Internet technologies are, of course, many and varied, and it will not be possible to assess their general relevance to epistemological debates in the present commentary. For the sake of brevity, I will therefore restrict my attention to one particular technology: blockchain.

Blockchain is perhaps best known for its role in supporting the digital cryptocurrency, Bitcoin. It provides us with a means of storing data in a secure fashion, using a combination of data encryption and data linking techniques. For present purposes, we can think of a blockchain as a connected set of data records (or data blocks), each of which contains some body of encrypted data. In the case of Bitcoin, of course, the data blocks contain data of a particular kind, namely, data pertaining to financial transactions. But this is not the only kind of data that can be stored in a blockchain. In fact, blockchains can be used to store information about pretty much anything. This includes online voting records, news reports, sensor readings, personal health records, and so on.

Once data is recorded inside a blockchain, it is very difficult to modify. In essence, the data stored within a blockchain is immutable, in the sense that it cannot be changed without ‘breaking the chain’ of data blocks, and thereby invalidating the data contained within the blockchain. This property makes blockchains of considerable epistemic significance, because it speaks to some of the issues (e.g., concerns about data tampering and malign forms of information manipulation) that are likely to animate epistemological debates in this area.
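This “chain-breaking” property can be sketched with a minimal hash-linked chain in Python. This is a toy illustration, not Bitcoin’s actual data structures: the field names, record contents, and genesis-block convention are illustrative assumptions.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's full contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Build a chain of blocks, each linked to its predecessor by hash."""
    chain = []
    prev = "0" * 64  # illustrative placeholder hash for the genesis block
    for data in records:
        block = {"data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain) -> bool:
    """Return True only if every block still links to its predecessor."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["sensor reading 14.1C", "sensor reading 14.3C"])
assert verify(chain)

chain[0]["data"] = "sensor reading 12.0C"  # tamper with an early record
assert not verify(chain)                   # the break is detected downstream
```

Changing any stored record alters that block’s hash, which no longer matches the `prev_hash` recorded by its successor: the chain is “broken” and the tampering is evident.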

This does not mean, of course, that the information stored within a blockchain is guaranteed to be factually correct, in the sense of being true and thus yielding improvements in epistemic standing. Nevertheless, there are, I think, reasons to regard blockchain as an important technology relative to efforts to make the online environment a somewhat safer place for would-be knowers. Consider, for example, the title of the present article. Suppose that we wanted to record the fact that a person known as Paul Smart—that’s me—wrote an article with the title:

(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!

We can incorporate this particular piece of information into a blockchain using something called a cryptographic hash function, which yields a unique identifier for the block and all of its contents. In the case of the aforementioned title, the cryptographic hash (as returned by the SHA256 algorithm[2]) is:

7147bd321e79a63041d9b00a937954976236289ee4de6f8c97533fb6083a8532

Now suppose that someone wants to alter the title, perhaps to garner support for an alternative argumentative position. In particular, let’s suppose they want to claim that the title of the article is:

Fake News Alert: Intellectual Virtues Required for Online Knowledge!

From an orthographic perspective, of course, not much has changed. But the subtlety of the alteration is not something that can be used to cause confusion about the actual wording of the original title—the title that I intended for the present article. (Neither can it be used to cast doubt about the provenance of the paper—the fact that the author of the paper was a person called Paul Smart.) To see this, note that the hash generated for the ‘fake’ title looks nothing like the original:

cc05baf2fa7a439674916fe56611eaacc55d31f25aa6458b255f8290a831ddc4

It is this property that, at least in part, makes blockchains useful for recording information that might otherwise be prone to epistemically malign forms of information manipulation. Imagine, for the sake of argument, that climatological data, as recorded by globally distributed sensors, was stored in a blockchain. The immutability of such data makes it extremely difficult for anyone to manipulate the data in such a way as to confirm or deny the reality of year-on-year changes in global temperature. Neither is it easy to alter information pertaining to the provenance of existing data records, i.e., information about when, where, and how such data was generated.
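The digest comparison above can be reproduced with Python’s standard hashlib library. This is a sketch: the exact hexdigests reported in the article depend on the precise byte sequence hashed (punctuation, whitespace, encoding), so only the general properties are asserted here.

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Return the SHA-256 digest of a UTF-8 string as 64 hex characters."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "(Fake?) News Alert: Intellectual Virtues Required for Online Knowledge!"
altered = "Fake News Alert: Intellectual Virtues Required for Online Knowledge!"

# A fixed-length identifier in both cases...
assert len(sha256_hex(original)) == 64
# ...but even a two-character edit to the input yields an unrelated digest.
assert sha256_hex(original) != sha256_hex(altered)
```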

None of this should delude us into thinking that blockchain technology is a panacea for Internet-related epistemic problems—it isn’t! Neither does blockchain obviate the need for agents to exercise at least some degree of intellectual virtue when it comes to the selection and evaluation of competing data streams. Nevertheless, there is, I think, something that is of crucial epistemological interest and relevance here—something that makes blockchain and other cybersecurity technologies deserving of further epistemological attention. In particular, such technologies may be seen as enhancing the epistemic safety of the online environment, and thus perhaps reducing the need for intellectual virtue.

In this sense, the epistemological analysis of Internet technologies may be best approached from some variant of modal epistemology—e.g., epistemological approaches that emphasize the modal stability of true beliefs across close possible worlds (Pritchard, 2009, chap. 2). But even if we choose to countenance an approach that appeals to issues of intellectual virtue, there is still, I suggest, a need to broaden the analytic net to include technologies that (for the time being at least) lie beyond the bounds of the individual cognitive agent.

Safety in Numbers

“From an epistemic perspective,” Richard writes, “the most salient dimension of the Internet is that it is an information space” (Heersmink, 2018, p. 5). Somewhat surprisingly, I disagree. Although it is obviously true that the Internet is an information space, it is not clear that this is its most salient feature, at least from an epistemological standpoint. In particular, there is, I suggest, a sense in which the Internet is more than just an information space. As is clear from the explosive growth in all things social—social media, social networks, social bots, and so on—the Internet functions as a social technology, yielding all manner of opportunities for people to create, share and process information in a collaborative fashion. The result, I suggest, is that we should not simply think of the Internet as an information space (although it is surely that), we should also view it as a social space.

Viewing the Internet as a social space is important because it changes the way we think about the epistemic impact of the Internet, relative to the discovery, production, representation, acquisition, processing and utilization of knowledge. Smart (in pressb), for example, suggests that some online systems function as knowledge machines, which are systems in which some form of knowledge-relevant processing is realized by a socio-technical mechanism, i.e., a mechanism whose component elements are drawn from either the social (human) or the technological realm.

An interesting feature of many of these systems is the way in which the reliability (or truth-conducive) nature of the realized process is rooted in the socio-technical nature of the underlying (realizing) mechanism. When it comes to human computation or citizen science systems, for example, user contributions are typically solicited from multiple independent users as a means of improving the reliability of specific epistemic outputs (Smart, in pressb; Smart and Shadbolt, in press; Watson and Floridi, 2018). Such insights highlight the socially-distributed character of at least some forms of online knowledge production, thereby moving us beyond the realms of individual, agent-centric analyses.

On a not altogether unrelated note, it is important to appreciate the way in which social participation can itself be used to safeguard online systems from various forms of malign intervention. One example is provided by the Google PageRank algorithm. In this case, any attempt to ‘artificially’ elevate the ranking assigned to specific contributions (e.g., a user’s website) is offset by the globally-distributed nature of the linking effort, coupled with the fact that links to a specific resource are themselves weighted by the ranking of the resource from which the link originates. This makes it difficult for any single agent to subvert the operation of the PageRank algorithm.
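A toy power-iteration sketch makes this concrete. The node names, link structure, and damping factor below are illustrative assumptions, not Google’s production algorithm, but they show how rank flows along links weighted by the rank of the linking page.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict mapping each node to its outlinks."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outgoing in links.items():
            for target in outgoing:
                # A link's weight depends on the rank of the page it comes from,
                # so pages that nothing reputable links to confer little authority.
                new[target] += damping * rank[n] / len(outgoing)
        rank = new
    return rank

# 'spam' links out aggressively, but no other page links back to it.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "spam": ["c"],  # a lone agent trying to piggyback on c's authority
}
rank = pagerank(links)
assert abs(sum(rank.values()) - 1.0) < 1e-6  # ranks form a distribution
assert rank["spam"] < rank["c"]              # the unlinked-to page ranks lowest
```

Because a page’s rank is earned from the ranks of the pages linking to it, a single agent’s self-promotion adds little; it would take coordinated endorsement across the globally distributed link graph to move a ranking, which is the point made above.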

Even ostensibly non-social technologies can be seen to rely on the distributed and decentralized nature of the Internet. In the case of blockchain, for example, multiple elements of a peer-to-peer network participate in the computational processes that make blockchain work. In this way, the integrity of the larger system is founded on the collaborative efforts of an array of otherwise independent computational elements. And it is this that (perhaps) allows us to think of blockchain’s epistemically-desirable features as being rooted in something of a ‘social’ substrate.

All of this, I suggest, speaks in favor of an approach that moves beyond a preoccupation with the properties of individual Internet users. In particular, there seems to be considerable merit in approaching the Internet from a more socially-oriented epistemological perspective. It is easy to see the social aspects of the Internet as lying at the root of a panoply of epistemic concerns, especially when it comes to the opportunities for misinformation, deception, and manipulation. But in light of the above discussion, perhaps an alternative, more positive, take on the Internet (qua social space) starts to come into sharper focus. This is a view that highlights the way in which certain kinds of online system can work to transform a ‘vice’ into a ‘virtue,’ exploiting the social properties of the Internet for the purposes of dealing with reliability-related concerns.

Image by Dariorug via Flickr / Creative Commons

 

Filter Bubblicious

Search engines form one of the focal points of Richard’s analysis, and, as with previous work in this area, Richard finds at least some aspects of their operation to be highly problematic. A particular issue surfaces in respect of personalized search. Here, Richard’s analysis echoes the sentiments expressed by other epistemologists who regard personalized search algorithms as of dubious epistemic value.

In fact, I suspect the consensus that has emerged in this area fails to tell the whole story about the epistemic consequences of personalized search. Indeed, from a virtue epistemological position, I worry that epistemologists are in danger of failing to heed their own advice—prematurely converging on a particular view without proper consideration of competing positions. In my new-found role as the virtue epistemologist’s guardian angel (or should that be devil’s advocate?), I will attempt to highlight a couple of reasons why I think more empirical research is required before we can say anything useful about the epistemological impact of personalized search algorithms.

My first worry is that our understanding of the extent to which search results and subsequent user behavior are affected by personalization is surprisingly poor. Consider, for example, the results of one study that attempted to quantify the effect of personalization on search results (Hannak et al., 2013). Using an empirical approach, Hannak et al. (2013) report a demonstrable personalization effect, with 11.7% of search results exhibiting differences due to personalization. Interestingly, however, the effect of personalization appeared to be greater for search results with lower rankings; highly ranked results (i.e., those appearing at the top of a list of search results) appeared to be much less affected by personalization.

This result is interesting given the observation that college students “prefer to click on links in higher positions even when the abstracts are less relevant to the task at hand” (Heersmink, 2018, p. 6). From one perspective, of course, this tendency looks like a vice that jeopardizes the epistemic standing of the individual user. And yet, from another perspective, it looks like the preference for higher ranked search results is poised to negate (or at least reduce) the negative epistemological effects of personalized search. What we seem to have here, in essence, is a situation in which one kind of ‘intellectual vice’ (i.e., a tendency to select highly-ranked search results) is playing something of a more positive (virtuous?) role in mitigating the negative epistemological sequelae of a seemingly vicious technology (i.e., personalized search).

None of this means that the epistemic effects of personalized search are to the overall benefit of individual users; nevertheless, the aforementioned results do call for a more nuanced and empirically informed approach when considering the veritistic value of search engines, as well as other kinds of Internet-related technology.

A second worry relates to the scope of the epistemological analysis upon which judgements about the veritistic value of search engines are based. In this case, it is unclear whether analyses that focus their attention on individual agents are best placed to reveal the full gamut of epistemic costs and benefits associated with a particular technology, especially one that operates in the socio-technical ecology of the Internet. To help us understand this worry in a little more detail, it will be useful to introduce the notion of mandevillian intelligence (Smart, in pressc; Smart, in pressd).

Mandevillian intelligence is a specific form of collective intelligence in which the cognitive shortcomings and epistemic vices of the individual agent are seen to yield cognitive benefits and epistemic virtues at the collective or social level of analysis, e.g., at the level of collective doxastic agents (see Palermos, 2015) or socio-epistemic systems (see Goldman, 2011). According to this idea, personalized search systems may play a productive role in serving the collective cognitive good, providing a means by which individual vices (e.g., a tendency for confirmation bias) are translated into something that more closely resembles an epistemic virtue (e.g., greater cognitive coverage of a complex space of thoughts, ideas, opinions, and so on). Consider, for example, the way in which personalized search may help to focus individual attention on particular bodies of information, thereby restricting access to a larger space of ideas, opinions, and other information.

While such forms of ‘restricted access’ or ‘selective information exposure’ are unlikely to yield much in the way of an epistemic benefit for the individual agent, it is possible that by exploiting (and, indeed, accentuating!) an existing cognitive bias (e.g., confirmation bias), personalized search may work to promote cognitive diversity, helping to prevent precipitant forms of cognitive convergence (see Zollman, 2010) and assisting with the epistemically optimal division of cognitive labor (see Muldoon, 2013). This possibility reveals something of a tension in how we interpret or evaluate the veritistic value of a particular technology or epistemic practice. In particular, it seems that assessments of veritistic value may vary according to whether our epistemological gaze is directed towards individual epistemic agents or the collective ensembles in which those agents are situated.

The Necessity of Virtue

As Richard notes, virtue epistemology is characterized by a shift in emphasis, away from the traditional targets of epistemological analysis (e.g., truth, justification and belief) and towards the cognitive properties of would-be knowers. “Virtue epistemology,” Richard writes, “is less concerned with the nature of truth and more concerned with the cognitive character of agents” (Heersmink, 2018, p. 2). This is, no doubt, a refreshing change, relative to the intellectual orientation of traditional philosophical debates.

Nevertheless, I assume that virtue epistemologists still recognize the value and priority of truth when it comes to issues of epistemic evaluation. Someone who holds false beliefs is not the possessor of knowledge, and this remains the case irrespective of whatever vices and virtues the agent has. In other words, it does not matter how careful, attentive and assiduous an agent is in selecting and evaluating information, if what the agent believes is false, they simply do not know.

What seems to be important in the case of virtue epistemology is the role that intellectual virtue plays in securing the truth of an agent’s beliefs. In particular, the central feature of virtue epistemology (at least to my mind) is that the truth of an agent’s beliefs stems from the exercise of intellectual virtue. It is thus not the case that truth is unimportant (or less important) when it comes to issues of positive epistemic standing; rather, what matters is the role that intellectual virtue plays in establishing the truth of an agent’s beliefs. An agent is thus a bona fide knower when they believe the truth and the truth in question is attributable to some aspect of their cognitive character, specifically, a cognitive trait (virtue responsibilism) or cognitive faculty (virtue reliabilism).

What then makes something a vice or virtue seems to be tied to the reliability of token instantiations of processes that are consistent with an agent’s cognitive character. Intellectual virtues are thus “cognitive character traits that are truth-conducive and minimalise error” (Heersmink, 2018, p. 3), while intellectual vices are characterized as “cognitive character traits that are not truth-conducive and do not minimalise error” (Heersmink, 2018, p. 3). It is this feature of the intellectual virtues—the fact that they are, in general, reliable (or give rise to reliable belief-relevant processes)—that looks to be important when it comes to issues of epistemic evaluation.

So this is what I find problematic about virtue theoretic approaches to knowledge. (Note that I am not an epistemologist by training, so this will require a generous—and hopefully virtue-inspiring—swig of the ole intellectual courage.) Imagine a state of affairs in which the Internet was (contrary to the present state of affairs) a perfectly safe environment—one where the factive status of online information was guaranteed as a result of advances in cyber-security techniques and intelligent fact-checking services. Next, let us imagine that we have two individuals, Paul and Sophia, who differ with respect to their cognitive character. Paul is the less virtuous of the two, unreflectively and automatically accepting whatever the Internet tells him. Sophia is more circumspect, wary of being led astray by (the now non-existent) fake news.

Inasmuch as we see the exercise of intellectual virtue as necessary for online knowledge, it looks unlikely that poor old Paul can be said to know very much. This is because the truth of Paul’s beliefs is not the result of anything that warrants the label ‘intellectual virtue.’ Paul, of course, does have a lot of true beliefs, but the truth of these beliefs does not stem from the exercise of his intellectual virtues—if, indeed, he has any. In fact, inasmuch as there is any evidence of virtue in play here, it is probably best attributed to the technologies that work to ensure the safety of the online environment. The factive status of Paul’s beliefs thus has more to do with the reliability of the Internet than it does with the elements of his cognitive character.

But is it correct to say that Paul has no online knowledge in this situation? Personally, I do not have this intuition. In other words, in a perfectly safe environment, I can see no reason why we should restrict knowledge attributions to agents whose beliefs are true specifically as the result of intellectual virtue. My sense is that even the most unreflective of agents could be credited with knowledge in a situation where there was no possibility of them being wrong. And if that is indeed the case, then why insist that it is only the exercise of intellectual virtue that underwrites positive epistemic standing?

After all, it seems perfectly possible, to my mind, that Sophia’s epistemic caution contributes no more to the minimization of error in an epistemically benign (i.e., safe) environment than does Paul’s uncritical acceptance. (In fact, given the relative efficiency of their doxastic systems, it may very well be the case that Sophia ends up with fewer true beliefs than Paul.) It might be claimed that this case is invalidated by a failure to consider the modal stability of an agent’s beliefs relative to close possible worlds, as well as perhaps their sensitivity to counterfactual error possibilities. But given the way in which the case is characterized, I suggest that there are no close possible worlds that should worry us—the cybersecurity and fact checking technologies are, let us assume, sufficiently robust as to ensure the modal distance of those worrisome worlds.

One implication of all this is to raise doubts about the necessity of intellectual virtue, relative to our conceptual understanding of knowledge. If there are cases where intellectual virtue is not required for positive epistemic standing, then intellectual virtue cannot be a necessary condition for knowledge attribution. And if that is the case, then why should intellectual virtue form the basis of an approach that is intended to deal with the epistemic shortcomings of the (contemporary) Internet?

Part of the attraction of virtue epistemology, I suspect, is the way in which a suite of generally reliable processes are inextricably linked to the agent who is the ultimate target of epistemic evaluation. This linkage, which is established via the appeal to cognitive character, helps to ensure the portability of an agent’s truth-tracking capabilities—it helps to ensure, in other words, that wherever the agent goes their reliable truth-tracking capabilities are sure to follow.

However, in an era where our doxastic systems are more-or-less constantly plugged into a reliable and epistemically safe environment, it is not so clear that agential capabilities are relevant to epistemic standing. This, I suggest, raises doubts about the necessity of intellectual virtue in securing positive epistemic status, and it also (although this is perhaps less clear) encourages us to focus our attention on some of the engineering efforts (as opposed to agent-oriented educational programs) that might be required to make the online world an epistemically safer place.

Conclusion

What, then, should we make of the appeal to virtue epistemology in our attempt to deal with the epistemic hazards of the Internet? My main concern is that the appeal to virtue epistemology (and the emphasis placed on intellectual virtue) risks an unproductive focus on individual human agents at the expense of both the technological and social features of the online world. This certainly does not rule out the relevance of virtue theoretic approaches as part of our attempt to understand the epistemic significance of the Internet, but other approaches (e.g., modal reliabilism, process reliabilism, distributed reliabilism, and systems-oriented social epistemology) also look to be important.

Personally, I remain agnostic with regard to the relevance of different epistemological approaches, although I worry about the extent to which virtue epistemology is best placed to inform policy-related decisions (e.g., those relating to education). In particular, I fear that by focusing our attention on individual agents and issues of intellectual virtue, we risk overlooking some of the socio-epistemic benefits of the Internet, denigrating a particular technology (e.g., personalized search) on account of its failure to enhance individual knowledge, while ignoring the way a technology contributes to more collective forms of epistemic success.

In concluding his thought-provoking paper on virtue epistemology and the Internet, Richard suggests that “there is an important role for educators to teach and assess [intellectual] virtues as part of formal school and university curricula, perhaps as part of critical thinking courses” (Heersmink, 2018, p. 10). I have said relatively little about this particular issue in the present paper. For what it’s worth, however, I can see no reason to object to the general idea of Internet-oriented educational policies. The only caveat, perhaps, concerns the relative emphasis that might be placed on the instillation of intellectual virtue as opposed to the inculcation of technical skills, especially those that enable future generations to make the online world a safer place.

No doubt there is room for both kinds of pedagogical program (assuming they can even be dissociated). At the very least, it seems to me that the effort to resolve a problem (i.e., engineer a safer Internet) is just as important as the effort to merely cope with it (i.e., acquire a virtuous cognitive character). But, in any case, when it comes to education and learning, we should not lose sight of the fact that the Internet is itself something that is used for educational purposes. Perhaps, then, the more important point about education and the Internet is not so much the precise details of what gets taught, so much as the issue of whether the Internet (with all its epistemic foibles) is really the best place to learn.

Contact details: ps02v@ecs.soton.ac.uk

References

Goldman, A. I. (2011). A guide to social epistemology. In A. I. Goldman and D. Whitcomb (Eds.), Social Epistemology: Essential Readings, pp. 11–37. New York, New York, USA: Oxford University Press.

Hannak, A., P. Sapiezynski, A. Molavi Kakhki, B. Krishnamurthy, D. Lazer, A. Mislove, and C. Wilson (2013). Measuring personalization of Web search. In D. Schwabe, V. Almeida, H. Glaser, R. Baeza-Yates, and S. Moon (Eds.), Proceedings of the 22nd International Conference  on World Wide Web, Rio  de Janeiro, Brazil, pp. 527–538. ACM.

Heersmink, R. (2018). A virtue epistemology of the Internet: Search engines, intellectual virtues, and education. Social Epistemology 32 (1), 1–12.

Muldoon, R. (2013). Diversity and the division of cognitive labor. Philosophy Compass 8 (2), 117–125.

Palermos, S. O. (2015). Active externalism, virtue reliabilism and scientific knowledge. Synthese 192 (9), 2955–2986.

Pritchard, D. (2009). Knowledge. Basingstoke, England, UK: Palgrave Macmillan.

Smart, P. R. (in pressa). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. (in pressb). Knowledge machines. The Knowledge Engineering Review.

Smart, P. R. (in pressc). Mandevillian intelligence. Synthese.

Smart, P. R. (in pressd). Mandevillian intelligence: From individual vice to collective virtue. In A. J. Carter, A. Clark, J. Kallestrup, O. S. Palermos, and D. Pritchard (Eds.), Socially Extended Epistemology. Oxford, UK: Oxford University Press.

Smart, P. R. and N. R. Shadbolt (in press). The World Wide Web. In J. Chase and D. Coady (Eds.), The Routledge Handbook of Applied Epistemology. New York, New York, USA: Routledge.

Watson, D. and L. Floridi (2018). Crowdsourced science: Sociotechnical epistemology in the e-research paradigm. Synthese 195 (2), 741–764.

Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis 72 (1), 17–35.

[1] This work is supported under SOCIAM: The Theory and Practice of Social Machines. The SOCIAM Project is funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J017728/1 and comprises the Universities of Southampton, Oxford and Edinburgh.

[2] See http://www.xorbin.com/tools/sha256-hash-calculator [accessed: 30th  January 2018].

Author Information: Damien Williams, Virginia Tech, damienw7@vt.edu

Williams, Damien. “Deleting the Human Clause: A Review of Ashley Shew’s Animal Constructions and Technological Knowledge.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 42-44.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uh

Animal Constructions and Technological Knowledge is Ashley Shew’s debut monograph, and in it she argues that we need to reassess and possibly even drastically change the way in which we think about and classify the categories of technology, tool use, and construction behavior. Drawing from the fields of anthropology, animal studies, and philosophy of technology and engineering, Shew demonstrates that researchers in all of these fields share several assumptions—about intelligence, intentionality, creativity, and the capacity for novel behavior.

Many of these assumptions, Shew says, were developed to guard against the hazard of anthropomorphizing the animals under investigation, and to prevent those researchers from ascribing human-like qualities to animals that don’t have them. However, this has led us to swing the pendulum too far in the other direction, engaging in “a kind of speciesist arrogance” which results in our not ascribing otherwise laudable characteristics to animals for the mere fact that they aren’t human.[1]

Shew says that we consciously and unconsciously appended a “human clause” to all of our definitions of technology, tool use, and intelligence, and this clause’s presumption—that it doesn’t really “count” if humans aren’t the ones doing it—is precisely what has to change.

In Animal Constructions, Shew’s tone is both light and intensely focused, weaving together extensive notes, bibliography, and index with humor, personal touches, and even poignancy, all providing a sense of weight and urgency to her project. As she lays out the pieces of her argument, she is extremely careful to highlight and bracket out her own biases throughout the text; this care matters, given that the whole project is about the recognition of assumptions and bias in human behavior. In Chapter 6, when discussing whether birds can be said to understand what they’re doing, Shew says that she

[relies] greatly on quotations…because the study’s authors describe crow tool uses and manufacture using language that is very suggestive about crows’ technological understanding and behaviors—language that, given my particular philosophical research agenda, might sound biased in paraphrase.[2]

In a chapter 6 endnote, Shew continues to touch on this issue of bias and its potential to become prejudice, highlighting the difficulty of cross-species comparison, and noting that “we also compare the intelligence of culturally and economically privileged humans with that of less privileged humans, a practice that leads to oppression, exploitation, slavery, genocide, etc.”[3] In the conclusion, she elaborates on this somewhat, pointing out the ways in which biases about the “right kinds” of bodies and minds have led to embarrassments and atrocities in human history.[4] As we’ll see, this means that the question of how and why we categorize animal construction behaviors as we do has implications which are far more immediate and crucial than research projects.

The content of Animal Constructions is arranged in such a way as to make a strong case for the intelligence, creativity, and ingenuity of animals, throughout, but it also provides several contrast cases in which we see that there are several animal behaviors which might appear to be intentional, but which are the product of instinct or the extended phenotype of the species in question.[5] According to Shew, these latter cases do more than act as exceptions that test the rule; they also provide the basis for reframing the ways in which we compare the behaviors of humans and nonhuman animals.

If we can accept that construction behavior exists on a spectrum or continuum with tool use and other technological behaviors, and we can come to recognize that animals such as spiders and beavers make constructions as a part of their instinctual, DNA-based, phenotypical natures, then we can begin to interrogate whether the same might not be true for the things that humans make and do. If we can understand this, then we can grasp that “the nature of technology is not merely tied to the nature of humanity, but to humanity in our animality” (emphasis present in original).[6]

Using examples from animal studies reaching back several decades, Shew discusses experimental observations of apes, monkeys, cetaceans (dolphins and whales), and birds. Each example set moves further away from the kinds of animals we see as “like us,” and details how each group possesses traits and behaviors humans tend to think exist only in ourselves.[7] Chimps and monkeys test tool-making techniques and make plans; dolphins and whales pass hunting techniques on to their children and cohorts, have names, and observe social rituals; birds make complex tools for different scenarios, adapt them to novel circumstances, and learn to lie.[8]

To further discuss the similarities between humans and other animals, Shew draws on theories about the relationship between body and mind, such as embodiment and extended mind hypotheses from philosophy of mind, which hold that the kind of mind we have is intimately tied to the kind of body we have. She pairs this with work from disability studies that forwards the conceptual framework of “bodyminds,” saying that body and mind aren’t simply linked; they’re the same.[9] This is the culmination of her descriptions of animal behaviors and a prelude to a redefinition and reframing of the concepts of “technology” and “knowledge.”

Editor's note - My favourite part of this review roundtable is scanning through pictures of smart animals

Dyson the seal. Image by Valerie via Flickr / Creative Commons

 

In the book’s conclusion, Shew suggests placing all the products of animal construction behavior on a two-axis scale, where the x-axis is “know-how” (the knowledge it takes to accomplish a task) and the y-axis is “thing knowledge” (the information about the world that gets built into constructed objects).[10] When we do this, she says, we can see that every made thing, be it object or social construct (a passage with important implications) falls somewhere outside of the 0, 0 point.[11] This is Shew’s main thrust throughout Animal Constructions: That humans are animals and our technology is not what sets us apart or makes us special; in fact, it may be the very thing that most deeply ties us to our position within the continuum of nature.

For Shew, we need to be less concerned about the possibility of incorrectly thinking that animals are too much like us, and far more concerned that we’re missing the ways in which we’re still and always animals. Forgetting our animal nature and thinking that there is some elevating, extra special thing about humans—our language, our brains, our technologies, our culture—is arrogant in the extreme.

While Shew says that she doesn’t necessarily want to consider the moral implications of her argument in this particular book, it’s easy to see how her work could be foundational to a project about moral and social implications, especially within fields such as animal studies or STS.[12] And an extension like this would fit perfectly well with the goal she lays out in the introduction, regarding her intended audience: “I hope to induce philosophers of technology to consider animal cases and induce researchers in animal studies to think about animal tool use with the apparatus provided by philosophy of technology.”[13]

In Animal Constructions, Shew has built a toolkit filled with fine arguments and novel arrangements that should easily provide the instruments necessary for anyone looking to think differently about the nature of technology, engineering, construction, and behavior, in the animal world. Shew says that “A full-bodied approach to the epistemology of technology requires that assumptions embedded in our definitions…be made clear,”[14] and Animal Constructions is most certainly a mechanism by which to deeply delve into that process of clarification.

Contact details: damienw7@vt.edu

References

Shew, Ashley. Animal Constructions and Technological Knowledge. Lanham, MD: Lexington Books, 2017.

[1] Ashley Shew, Animal Constructions and Technological Knowledge p. 107

[2] Ibid., p. 73

[3] Ibid., p. 89, n. 7

[4] Ibid., pp. 107–122

[5] Ibid., pp. 107–122

[6] Ibid., p. 19

[7] On page 95, Shew makes brief mention of various instances of octopus tool use; more of these examples would really drive the point home.

[8] Shew, pp. 35–51; 53–65; 67–89

[9] Ibid., p. 108

[10] Ibid., pg. 110—119

[11] Ibid., p. 118

[12] Ibid., p. 16

[13] Ibid., p. 11

[14] Ibid., p. 105

Author Information: Steve Fuller, University of Warwick, UK, S.W.Fuller@warwick.ac.uk

Fuller, Steve. “Against Virtue and For Modernity: Rebooting the Modern Left.” Social Epistemology Review and Reply Collective 6, no. 12 (2017): 51-53.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3S9

Toby Ziegler’s “The Liberals: 3rd Version.” Photo by Matt via Flickr / Creative Commons

 

My holiday message for the coming year is a call to re-boot the modern left. When I was completing my doctoral studies, just as the Cold War was beginning to wind down, the main threat to the modern left was seen as coming largely from within. ‘Postmodernism’ was the name normally given to that threat, and it fuelled various culture, canon and science wars in the 1980s and 1990s.

Indeed, even I was – and, in some circles, continue to be – seen as just such an ‘enemy of reason’, to recall the name of Richard Dawkins’ television show in which I figured as one of the accused. However, in retrospect, postmodernism was at most a harbinger for a more serious threat, which today comes from both the ‘populist’ supporters of Trump, Brexit et al. and their equally self-righteous academic critics.

Academic commentators on Trump, Brexit and the other populist turns around the world seem unable to avoid passing moral judgement on the voters who brought about these uniformly unexpected outcomes, the vast majority of which the commentators have found unwelcome. In this context, an unholy alliance of virtue theorists and evolutionary psychologists has thrived as diagnosticians of our predicament. I say ‘unholy’ because Aristotle and Darwin suddenly find themselves on the same side of an argument, now pitched against the minds of ‘ordinary’ people. This anti-democratic place is not one in which any self-respecting modern leftist wishes to be.

To be sure, virtue theorists and evolutionary psychologists come to the matter from rather different premises – the one metaphysical if not religious and the other naturalistic if not atheistic. Nevertheless, they both regard humanity’s prospects as fundamentally constrained by our mental makeup. This makeup reflects our collective past and may even be rooted in our animal nature. Under the circumstances, so they believe, the best we can hope for is to become self-conscious of our biases and limitations in processing information so that we don’t fall prey to the base political appeals that have resulted in the current wave of populism.

These diagnosticians conspicuously offer little of the positive vision or ambition that characterised ‘progressive’ politics of both liberal and socialist persuasions in the nineteenth and twentieth centuries. But truth be told, these learned pessimists already have form. They are best seen as the culmination of a current of thought that has been percolating since the end of the Cold War effectively brought to a halt Marxism as a world-historic project of human emancipation.

In this context, the relatively upbeat message advanced by Francis Fukuyama in The End of History and the Last Man that captivated much of the 1990s was premature. Fukuyama was cautiously celebrating the triumph of liberalism over socialism in the progressivist sweepstakes. But others were plotting a different course, one in which the very terms on which the Cold War had been fought would be superseded altogether. Gone would be the days when liberals and socialists vied over who could design a political economy that would benefit the most people worldwide. In its place would be a much more precarious sense of the world order, in which overweening ambition itself turned out to be humanity’s Achilles Heel, if not Original Sin.

Here the trail of books published by Alasdair MacIntyre and his philosophical and theological admirers in the wake of After Virtue ploughed a parallel field to such avowedly secular and scientifically minded works as Peter Singer’s A Darwinian Left and Steven Pinker’s The Blank Slate. These two intellectual streams, both pointing to our species’ inveterate shortcomings, gained increasing plausibility in light of 9/11’s blindsiding of the post-Cold War neo-liberal consensus.

9/11 tore up the Cold War playbook once and for all, side-lining both the liberals and the socialists who had depended on it. Gone was the state-based politics, the strategy of mutual containment, the agreed fields of play epitomized in such phrases as ‘arms race’ and ‘space race’. In short, gone was the game-theoretic rationality of managed global conflict. Thus began the ongoing war on ‘Islamic terror’. Against this backdrop, the Iraq War proved to be colossally ill-judged, though no surprise given that its mastermind was one of the Cold War’s keenest understudies, Donald Rumsfeld.

For the virtue theorists and evolutionary psychologists, the Cold War represented the farthest that human rationality could go in pushing back and channelling our default irrationality, albeit in the hope of lifting humanity to a ‘higher’ level of being. Indeed, once the USSR lost the Cold War to the US on largely financial grounds, the victorious Americans had to contend with the ‘blowback’ from third parties who suffered ‘collateral damage’ at many different levels during the Cold War. After all, the Cold War, for all its success in averting nuclear confrontation, nevertheless turned the world into a playing field for elite powers. ‘First world’, ‘second world’ and ‘third world’ were basically the names of the various teams in contention on the Cold War’s global playing field.

So today we see an ideological struggle whose main players are those resentful (i.e. the ‘populists’) and those regretful (i.e. the ‘anti-populists’) of the entire Cold War dynamic. The only thing that these antagonists appear to agree on is the folly of ‘progressivist’ politics, the calling card of both modern liberalism and socialism. Indeed, both the populists and their critics are fairly characterised as somehow wanting to turn back the clock to a time when we were in closer contact with the proverbial ‘ground of being’, which of course the two sides define in rather different terms. But make no mistake of the underlying metaphysical premise: We are ultimately where we came from.

Notwithstanding the errors of thought and deed committed in their names, liberalism and socialism rightly denied this premise, which placed both of them in the vanguard – and eventually made them world-historic rivals – in modernist politics. Modernity raised humanity’s self-regard and expectations to levels that motivated people to build a literal Heaven on Earth, in which technology would replace theology as the master science of our being. David Noble cast a characteristically informed but jaundiced eye at this proposition in his 1997 book, The Religion of Technology: The Divinity of Man and the Spirit of Invention. Interestingly, John Passmore had covered much the same terrain just as eruditely but with greater equanimity in his 1970 book, The Perfectibility of Man. That the one was written after and the other during the Cold War is probably no accident.

I am mainly interested in resurrecting the modernist project in its spirit, not its letter. Many of modernity’s original terms of engagement are clearly no longer tenable. But I do believe that Silicon Valley is comparable to Manchester two centuries ago, namely, a crucible of a radical liberal sensibility – call it ‘Liberalism 2.0’ or simply ‘Alt-Liberalism’ – that tries to use the ascendant technological wave to leverage a new conception of the human being.

However one judges Marx’s critique of liberalism’s scientific expression (aka classical political economy), the bottom line is that his arguments for socialism would never have got off the ground had liberalism not laid the groundwork for him. As we enter 2018 and seek guidance for launching a new progressivism, we would do well to keep this historical precedent in mind.

Contact details: S.W.Fuller@warwick.ac.uk

Author Information: Ben Sherman, Brandeis University, shermanb@brandeis.edu

Sherman, Ben. “Learning How to Think Better: A Response to Davidson and Kelly.” Social Epistemology Review and Reply Collective 5, no. 3 (2016): 48-53.

The PDF of the article gives specific page numbers. Shortlink: http://wp.me/p1Bfg0-2JP


Image credit: wallsdontlie, via flickr

My thanks to Davidson and Kelly for their reply to my paper.[1] I am grateful on two counts in particular: