
Author Information: Damien Williams, Virginia Tech,

Williams, Damien. “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 64-69.

The PDF of the article gives specific page references. Shortlink:

Image by Stu Jones via CJ Sorg on Flickr / Creative Commons


Shannon Vallor’s most recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting takes a look at what she calls the “Acute Technosocial Opacity” of the 21st century, a state in which technological, societal, political, and human-definitional changes occur at such a rapid-yet-shallow pace that they block our ability to conceptualize and understand them.[1]

Vallor is one of the most publicly engaged technological ethicists of the past several years, and much of her work’s weight comes from its direct engagement with philosophy—both philosophy of technology and various virtue ethical traditions—and the community of technological development and innovation that is Silicon Valley. It’s from this immersive perspective that Vallor begins her work in Virtues.

Vallor contends that we need a new way of understanding the projects of human flourishing and seeking the good life, an understanding which can help us reexamine how we make and participate through and with the technoscientific innovations of our time. The project of this book, then, is to provide the tools to create this new understanding, tools which Vallor believes can be found in an examination and synthesis of the world’s three leading virtue ethical traditions: Aristotelian ethics, Confucian ethics, and Buddhism.

Vallor breaks the work into three parts, and takes as her subject what she considers to be the four major world-changing technologies of the 21st century. The book’s three parts are “Foundations for a Technomoral Virtue Ethic,” “Cultivating the Self: Classical Virtue Traditions as Contemporary Guide,” and “Meeting the Future with Technomoral Wisdom, or How To Live Well with Emerging Technologies.” The four world-changing technologies, considered at length in Part III, are social media, surveillance, robotics/artificial intelligence, and biomedical enhancement technologies.[2]

As Vallor moves through each of the three sections and four topics, she maintains a constant habit of returning to the questions of exactly how each one will either help us cultivate a new technomoral virtue ethic, or how said ethic would need to be cultivated, in order to address it. As both a stylistic and pedagogical choice, this works well, providing touchstones of reinforcement that mirror the process of intentional cultivation she discusses throughout the book.

Flourishing and Technology

In Part I, “Foundations,” Vallor covers both the definitions of her terms and the argument for her project. Chapter 1, “Virtue Ethics, Technology, and Human Flourishing,” begins with the notion of virtue as a continuum that gets cultivated, rather than a fixed end point of achievement. She notes that while there are many virtue traditions with their own ideas about what it means to flourish, there is a difference between recognizing multiple definitions of flourishing and the purely relativist claim that all definitions of flourishing are equal.[3] Vallor engages these different understandings of flourishing throughout the text, but she also looks at other ethical traditions to explore how they would handle the problem of technosocial opacity.

Without resorting to strawmen, Vallor examines the Kantian categorical imperative and utilitarianism in turn. She demonstrates that Kant’s ethics would have us create codes of behavior that are either always right or always wrong (“Never Murder;” “Always Tell the Truth”), while utilitarian consequentialism would allow us to make excuses for horrible choices in the name of “the Greater Good.” That is to say nothing of how nebulous, variable, and incommensurate all of our understandings of “utility” and “good” are with each other. Vallor says that the rigid, rules-based nature of each of these systems simply can’t account for the variety of experiences and challenges humans are likely to face in life.

Not only that, but deontological and consequentialist ethics have always been this inflexible, and this inflexibility will only be more of a problem in the face of the challenges posed by the speed and potency of the four abovementioned technologies.[4] Vallor states that the technologies of today are more likely to facilitate a “technological convergence,” in which they “merge synergistically” and become more powerful and impactful than the sum of their parts. She says that these complex, synergistic systems of technology cannot be responded to and grappled with via rigid rules.[5]

Vallor then folds in discussion of several of her predecessors in the philosophy of technology—thinkers like Hans Jonas and Albert Borgmann—giving a history of the conceptual frameworks by which philosophers have tried to deal with technological drift and lurch. From here, she decides that each of these theorists has helped to get us part of the way, but their theories all need some alterations in order to fully succeed.[6]

In Chapter 2, “The Case for a Global Technomoral Virtue Ethic,” Vallor explores the basic tenets of Aristotelian, Confucian, and Buddhist ethics, laying the groundwork for the new system she hopes to build. She explores each of their different perspectives on what constitutes The Good Life in moderate detail, clearly noting that there are some aspects of these systems that are incommensurate with “virtue” and “good” as we understand them, today.[7] Aristotle, for instance, believed that some people were naturally suited to be slaves, and that women were morally and intellectually inferior to men, and the Buddha taught that women would always have a harder time attaining the enlightenment of Nirvana.

Rather than simply repackaging old traditions for today’s challenges, Vallor argues that these ancient virtue traditions can teach us something about the shared commitments of virtue ethics more generally. What we learn from them, she says, will fuel the project of building a wholly new virtue tradition. To discuss their shared underpinnings, she talks about “thick” and “thin” moral concepts.[8] A thin moral concept is defined here as only the “skeleton of an idea” of morality, while a thick concept provides the rich details that make each tradition unique. If we look at the thin concepts, Vallor says, we can see that the bone structure of these traditions is made of four shared commitments:

  • To the Highest Human Good (whatever that may be);
  • That moral virtues are understood to be cultivated states of character;
  • To a practical path of moral self-cultivation; and
  • That we can have a conception of what humans are generally like.[9]

Vallor uses these commitments to build a plausible definition of “flourishing,” looking at things like intentional practice within a global community toward moral goods internal to that practice (a set of criteria from Alasdair MacIntyre which she adopts and expands on).[10] These goals are never fully realized, but always worked toward, and always with a community. All of this is meant to be supported by, and to help foster, goods like global community, intercultural understanding, and collective human wisdom.

We need a global technomoral virtue ethics because while the challenges we face require ancient virtues such as courage and charity and community, they’re now required to handle ethical deliberations at a scope the world has never seen.

But Vallor says that a virtue tradition, new or old, need not be universal in order to do real, lasting work; it only needs to be engaged in by enough people to move the global needle. And while there may be differences in rendering these ideas from one person or culture to the next, if we do the work of intentional cultivation of a pluralist ethics, then we can work from diverse standpoints, toward one goal.[11]

To do this, we will need to intentionally craft both ourselves and our communities and societies. This is because not everyone considers the same goods as good, and even our agreed-upon values play out in vastly different ways when they’re sought by billions of different people in complex, fluid situations.[12] Only with intention can we exclude systems which group things like intentional harm and acceleration of global conflict under the umbrella of “technomoral virtues.”

Cultivating Techno-Ethics

Part II does the work of laying out the process of technomoral cultivation. Vallor’s goal is to examine what we can learn by focusing on the similarities and crucial differences of other virtue traditions. Starting in chapter 3, Vallor once again places Aristotle, Kongzi (Confucius), and the Buddha in conceptual conversation, asking what we can come to understand from each. From there, she moves on to detailing the actual process of cultivating the technomoral self, listing seven key intentional practices that will aid in this:

  • Moral Habituation
  • Relational Understanding
  • Reflective Self-Examination
  • Intentional Self-Direction of Moral Development
  • Perceptual Attention to Moral Salience
  • Prudential Judgment
  • Appropriate Extension of Moral Concern[13]

Vallor moves through each of these in turn, taking the time to show how each step resonates with the historical virtue traditions she’s used as orientation markers, thus far, while also highlighting key areas of their divergence from those past theories.

Vallor says that the most important thing to remember is that each step is part of a continual process of training and becoming; none of them is some sort of final achievement by which we will “become moral,” though some are less final than others. Moral habituation is the first step on this list because it is the quality at the foundation of all the others: constant cultivation of the kind of person you want to be. And we have to remember that while all seven steps must be undertaken continually, they also have to be undertaken communally. Only by working with others can we build the systems and societies necessary to sustain these values in the world.

In Chapter 6, “Technomoral Wisdom for an Uncertain Future,” Vallor provides “a taxonomy of technomoral virtues.”[14] The twelve concepts she lists—honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom—are not intended to be an exhaustive list of all possible technomoral virtues.

Rather, these twelve things together form a system by which to understand the most crucial qualities for dealing with our 21st-century lives. They’re all listed with “associated virtues,” which help provide a broader and deeper sense of the kinds of conceptual connections we can achieve via relational engagement with all virtues.[15] Each member of the list should support and be supported by not only the other members, but also any as-yet-unknown or -undiscovered virtues.

Here, Vallor continues a pattern she’s established throughout the text of grounding potentially unfamiliar concepts in a frame of real-life technological predicaments from the 20th or 21st century. Scandals such as Facebook privacy controversies, the flash crash of 2010, or even the moral stances (or lack thereof) of CEOs and engineers are discussed with a mind toward highlighting the final virtue: technomoral wisdom.[16] Technomoral wisdom is a means of unifying the other virtues and of understanding the ways in which our challenges interweave with and reflect each other. In this way we can both cultivate virtuous responses within ourselves and our existing communities, and also begin to more intentionally create new individual, cultural, and global systems.

Applications and Transformations

In Part III, Vallor puts to the test everything that we’ve discussed so far, placing all of the principles, practices, and virtues in direct, extensive conversation with the four major technologies that frame the book. She explores how new social media, surveillance cultures, robots and AI, and biomedical enhancement technologies are set to shape our world in radically new ways, and how we can develop new habits of engagement with them. Each technology is explored in its own chapter, the better to examine which virtues best suit which topic, which goods might be expressed by or in spite of each field, and which cultivation practices will be required within each. In this way, Vallor highlights the real dangers of failing to skillfully adapt to the requirements of each of these unprecedented challenges.

While Vallor considers almost every aspect of this project in great detail, there are points throughout the text where she seems to fall prey to some of the same technological pessimism, utopianism, or determinism for which she rightly calls out other thinkers in earlier chapters. There is still a sense that these technologies are, of their nature, terrifying, and that all we can do is rein them in.

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, through our personally grappling with these tasks, we can move the world. This stance leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because such an ethos would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seem to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and speak to policy makers and entrepreneurs, who will in turn change how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she hopes will help her do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests here, and we’ll need sociopolitical mechanisms for making those valuative changes as well.

While the implications of climate catastrophes, dystopian police states, just-dumb-enough AI, and rampant gene hacking seem real, obvious, and avoidable to many of us, many others take them as merely naysaying distractions from the good of technosocial progress and the ever-innovating free market.[17] With that in mind, we need tools with which to begin the process of helping people understand why they ought to care about technomoral virtue, even when they have such large, driving incentives not to.

Without that, we are simply presenting people who would sell everything about us for another dollar with the tools by which to make a more cultivated, compassionate, and interrelational world, and hoping that enough of them understand the virtue of those tools, before it is too late. Technology and the Virtues is a fantastic schematic for a set of these tools.

Contact details:


Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press, 2016.

[1] Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (New York: Oxford University Press, 2016), 6.

[2] Ibid., 10.

[3] Ibid., 19-21.

[4] Ibid., 22-26.

[5] Ibid., 28.

[6] Ibid., 28-32.

[7] Ibid., 35.

[8] Ibid., 43.

[9] Ibid., 44.

[10] Ibid., 45-47.

[11] Ibid., 54-55.

[12] Ibid., 51.

[13] Ibid., 64.

[14] Ibid., 119.

[15] Ibid., 120.

[16] Ibid., 122-154.

[17] Ibid., 249-254.

Author Information: David Pauleen, Massey University; David Rooney, Macquarie University; and Ali Intezari, Massey University.

Pauleen, David, David Rooney and Ali Intezari. “Big Data, Little Wisdom: Trouble Brewing? Ethical Implications for the Information Systems Discipline.” Social Epistemology Review and Reply Collective 4, no. 8 (2015): 9-33.

The PDF of the article gives specific page numbers. Shortlink:


Image credit: A. Golden, via flickr


How can wisdom and its inherent drive for integration help information systems in the development of practices for responsibly and ethically managing big data, ubiquitous information, and algorithmic knowledge—particularly in their collection, integration, analysis, presentation, and use—and so make the world a better place? We use the recent financial crises to illustrate the perils of an overreliance on and misuse of data, information, and predictive knowledge when global IS are not wisely integrated. Our analysis shows that the global financial crisis was in part caused by a serious lack of integration of information with the larger context of social, cultural, economic, and political dynamics. Integration of all the variables in a global and information-hungry industry is exceptionally difficult, and so ‘exceptionality’ of some kind is needed to make sufficient integration happen. Wisdom, we suggest, is the exceptionality needed to lead successful integration. We expect that a wisdom-based shift can lead to more organisationally effective and socially responsible IS.