Author Information: Damien Williams, Virginia Tech, damienw7@vt.edu

Williams, Damien. “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 64-69.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3US

Image by Stu Jones via CJ Sorg on Flickr / Creative Commons

 

Shannon Vallor’s most recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, takes a look at what she calls the “Acute Technosocial Opacity” of the 21st century, a state in which technological, societal, political, and human-definitional changes occur at such a rapid-yet-shallow pace that they block our ability to conceptualize and understand them.[1]

Vallor is one of the most publicly engaged technological ethicists of the past several years, and much of her work’s weight comes from its direct engagement with philosophy—both philosophy of technology and various virtue ethical traditions—and the community of technological development and innovation that is Silicon Valley. It’s from this immersive perspective that Vallor begins her work in Virtues.

Vallor contends that we need a new way of understanding the projects of human flourishing and seeking the good life, an understanding which can help us reexamine how we make and participate through and with the technoscientific innovations of our time. The project of this book, then, is to provide the tools to create this new understanding, tools which Vallor believes can be found in an examination and synthesis of the world’s three leading virtue ethical traditions: Aristotelian ethics, Confucian ethics, and Buddhism.

Vallor breaks the work into three parts, and takes as her subject what she considers to be the four major world-changing technologies of the 21st century. The book’s three parts are “Foundations for a Technomoral Virtue Ethic,” “Cultivating the Self: Classical Virtue Traditions as Contemporary Guide,” and “Meeting the Future with Technomoral Wisdom, or How to Live Well with Emerging Technologies.” The four world-changing technologies, considered at length in Part III, are social media, surveillance, robotics/artificial intelligence, and biomedical enhancement technologies.[2]

As Vallor moves through each of the three sections and four topics, she maintains a constant habit of returning to the questions of exactly how each one will either help us cultivate a new technomoral virtue ethic, or how said ethic would need to be cultivated, in order to address it. As both a stylistic and pedagogical choice, this works well, providing touchstones of reinforcement that mirror the process of intentional cultivation she discusses throughout the book.

Flourishing and Technology

In Part I, “Foundations,” Vallor covers both the definitions of her terms and the argument for her project. Chapter 1, “Virtue Ethics, Technology, and Human Flourishing,” begins with the notion of virtue as a continuum that gets cultivated, rather than a fixed end point of achievement. She notes that while there are many virtue traditions with their own ideas about what it means to flourish, there is a difference between recognizing multiple definitions of flourishing and a purely relativist claim that all definitions of flourishing are equal.[3] Vallor engages these different understandings of flourishing throughout the text, but she also looks at other ethical traditions to explore how they would handle the problem of technosocial opacity.

Without resorting to strawmen, Vallor examines the Kantian categorical imperative and utilitarianism, in turn. She demonstrates that Kant’s ethics would result in us trying to create codes of behavior that are either always right, or always wrong (“Never murder;” “Always tell the truth”), and utilitarian consequentialism would allow us to make excuses for horrible choices in the name of “the Greater Good.” Which is to say nothing of how nebulous, variable, and incommensurate all of our understandings of “utility” and “good” will be with each other. Vallor says that the rigid, rules-based nature of each of these systems simply can’t account for the variety of experiences and challenges humans are likely to face in life.

Not only that, but deontological and consequentialist ethics have always been this inflexible, and this inflexibility will only be more of a problem in the face of the challenges posed by the speed and potency of the four abovementioned technologies.[4] Vallor states that the technologies of today are more likely to facilitate a “technological convergence,” in which they “merge synergistically” and become more powerful and impactful than the sum of their parts. She says that these complex, synergistic systems of technology cannot be responded to and grappled with via rigid rules.[5]

Vallor then folds in discussion of several of her predecessors in the philosophy of technology—thinkers like Hans Jonas and Albert Borgmann—giving a history of the conceptual frameworks by which philosophers have tried to deal with technological drift and lurch. From here, she decides that each of these theorists has helped to get us part of the way, but their theories all need some alterations in order to fully succeed.[6]

In Chapter 2, “The Case for a Global Technomoral Virtue Ethic,” Vallor explores the basic tenets of Aristotelian, Confucian, and Buddhist ethics, laying the groundwork for the new system she hopes to build. She explores each of their different perspectives on what constitutes The Good Life in moderate detail, clearly noting that there are some aspects of these systems that are incommensurate with “virtue” and “good” as we understand them, today.[7] Aristotle, for instance, believed that some people were naturally suited to be slaves, and that women were morally and intellectually inferior to men, and the Buddha taught that women would always have a harder time attaining the enlightenment of Nirvana.

Rather than simply repackaging old traditions for today’s challenges, Vallor argues that these ancient virtue traditions can teach us something about the shared commitments of virtue ethics, more generally. Vallor says that what we learn from them will fuel the project of building a wholly new virtue tradition. To discuss their shared underpinnings, she talks about “thick” and “thin” moral concepts.[8] A thin moral concept is defined here as only the “skeleton of an idea” of morality, while a thick concept provides the rich details that make each tradition unique. If we look at the thin concepts, Vallor says, we can see the bone structure of these traditions is made of four shared commitments:

  • To the Highest Human Good (whatever that may be);
  • That moral virtues are understood to be cultivated states of character;
  • To a practical path of moral self-cultivation; and
  • That we can have a conception of what humans are generally like.[9]

Vallor uses these commitments to build a plausible definition of “flourishing,” looking at things like intentional practice within a global community toward moral goods internal to that practice, a set of criteria from Alasdair MacIntyre which she adopts and expands on.[10] These goals are never fully realized, but always worked toward, and always with a community. All of this is meant to be supported by and to help foster goods like global community, intercultural understanding, and collective human wisdom.

We need a global technomoral virtue ethic because while the challenges we face call for ancient virtues such as courage, charity, and community, those virtues are now required to handle ethical deliberations at a scope the world has never seen.

But Vallor says that a virtue tradition, new or old, need not be universal in order to do real, lasting work; it only needs to be engaged in by enough people to move the global needle. And while there may be differences in rendering these ideas from one person or culture to the next, if we do the work of intentional cultivation of a pluralist ethics, then we can work from diverse standpoints, toward one goal.[11]

To do this, we will need to intentionally craft both ourselves and our communities and societies. This is because not everyone considers the same goods as good, and even our agreed-upon values play out in vastly different ways when they’re sought by billions of different people in complex, fluid situations.[12] Only with intention can we exclude systems which group things like intentional harm and acceleration of global conflict under the umbrella of “technomoral virtues.”

Cultivating Techno-Ethics

Part II does the work of laying out the process of technomoral cultivation. Vallor’s goal is to examine what we can learn by focusing on the similarities and crucial differences of other virtue traditions. Starting in chapter 3, Vallor once again places Aristotle, Kongzi (Confucius), and the Buddha in conceptual conversation, asking what we can come to understand from each. From there, she moves on to detailing the actual process of cultivating the technomoral self, listing seven key intentional practices that will aid in this:

  • Moral Habituation
  • Relational Understanding
  • Reflective Self-Examination
  • Intentional Self-Direction of Moral Development
  • Perceptual Attention to Moral Salience
  • Prudential Judgment
  • Appropriate Extension of Moral Concern[13]

Vallor moves through each of these in turn, taking the time to show how each step resonates with the historical virtue traditions she’s used as orientation markers, thus far, while also highlighting key areas of their divergence from those past theories.

Vallor says that the most important thing to remember is that each step is part of a continual process of training and becoming; none of them is a final achievement by which we will “become moral,” once and for all. Moral Habituation is the first step on this list because it is the quality at the foundation of all the others: constant cultivation of the kind of person you want to be. And we have to remember that while all seven steps must be undertaken continually, they also have to be undertaken communally. Only by working with others can we build the systems and societies necessary to sustain these values in the world.

In Chapter 6, “Technomoral Wisdom for an Uncertain Future,” Vallor provides “a taxonomy of technomoral virtues.”[14] The twelve concepts she lists—honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom—are not intended to be an exhaustive list of all possible technomoral virtues.

Rather, these twelve things together form a system by which to understand the most crucial qualities for dealing with our 21st-century lives. They’re all listed with “associated virtues,” which help provide a broader and deeper sense of the kinds of conceptual connections we can achieve via relational engagement with all virtues.[15] Each member of the list should support and be supported by not only the other members, but also any as-yet-unknown or -undiscovered virtues.

Here, Vallor continues a pattern she’s established throughout the text of grounding potentially unfamiliar concepts in a frame of real-life technological predicaments from the 20th or 21st century. Scandals such as Facebook privacy controversies, the flash crash of 2010, or even the moral stances (or lack thereof) of CEOs and engineers are discussed with a mind toward highlighting the final virtue: Technomoral Wisdom.[16] Technomoral Wisdom is a means of unifying the other virtues, and of understanding the ways in which our challenges interweave with and reflect each other. In this way we can both cultivate virtuous responses within ourselves and our existing communities, and also begin to more intentionally create new individual, cultural, and global systems.

Applications and Transformations

In Part III, Vallor puts everything we’ve discussed so far to the test, placing all of the principles, practices, and virtues in direct, extensive conversation with the four major technologies that frame the book. She explores how new social media, surveillance cultures, robots and AI, and biomedical enhancement technologies are set to shape our world in radically new ways, and how we can develop new habits of engagement with them. Each technology gets its own chapter, the better to examine which virtues best suit which topic, which goods might be expressed by or in spite of each field, and which cultivation practices will be required within each. In this way, Vallor highlights the real dangers of failing to skillfully adapt to the requirements of each of these unprecedented challenges.

While Vallor considers nearly every aspect of this project in great detail, there are points throughout the text where she seems to fall prey to some of the same technological pessimism, utopianism, or determinism for which she rightly calls out other thinkers in earlier chapters. There is still a sense that these technologies are, by their nature, terrifying, and that all we can do is rein them in.

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, through personally grappling with these tasks, we can move the world, a stance which leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because such an ethos would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seems to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she hopes will help her do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

While the implications of climate catastrophes, dystopian police states, just-dumb-enough AI, and rampant gene hacking seem real, obvious, and avoidable to many of us, many others take them as merely naysaying distractions from the good of technosocial progress and the ever-innovating free market.[17] With that in mind, we need tools with which to begin the process of helping people understand why they ought to care about technomoral virtue, even when they have such large, driving incentives not to.

Without that, we are simply presenting people who would sell everything about us for another dollar with the tools by which to make a more cultivated, compassionate, and interrelational world, and hoping that enough of them understand the virtue of those tools, before it is too late. Technology and the Virtues is a fantastic schematic for a set of these tools.

Contact details: damienw7@vt.edu

References

Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press, 2016.

[1] Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (New York: Oxford University Press, 2016), 6.

[2] Ibid., 10.

[3] Ibid., 19-21.

[4] Ibid., 22-26.

[5] Ibid., 28.

[6] Ibid., 28-32.

[7] Ibid., 35.

[8] Ibid., 43.

[9] Ibid., 44.

[10] Ibid., 45-47.

[11] Ibid., 54-55.

[12] Ibid., 51.

[13] Ibid., 64.

[14] Ibid., 119.

[15] Ibid., 120.

[16] Ibid., 122-154.

[17] Ibid., 249-254.

Author Information: Damien Williams, Virginia Tech, damienw7@vt.edu

Williams, Damien. “Deleting the Human Clause: A Review of Ashley Shew’s Animal Constructions and Technological Knowledge.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 42-44.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3Uh

Animal Constructions and Technological Knowledge is Ashley Shew’s debut monograph, and in it she argues that we need to reassess and possibly even drastically change the way in which we think about and classify the categories of technology, tool use, and construction behavior. Drawing from the fields of anthropology, animal studies, and philosophy of technology and engineering, Shew demonstrates that there are several assumptions made by researchers in all of these fields—assumptions about intelligence, intentionality, creativity, and the capacity for novel behavior—that need to be revisited.

Many of these assumptions, Shew says, were developed to guard against the hazard of anthropomorphizing the animals under investigation, and to prevent those researchers from ascribing human-like qualities to animals that don’t have them. However, this has led us to swing the pendulum too far in the other direction, engaging in “a kind of speciesist arrogance” which results in our not ascribing otherwise laudable characteristics to animals for the mere fact that they aren’t human.[1]

Shew says that we consciously and unconsciously appended a “human clause” to all of our definitions of technology, tool use, and intelligence, and this clause’s presumption—that it doesn’t really “count” if humans aren’t the ones doing it—is precisely what has to change.

In Animal Constructions, Shew’s tone is both light and intensely focused, weaving together extensive notes, bibliography, and index with humor, personal touches, and even poignancy, all providing a sense of weight and urgency to her project. As she lays out the pieces of her argument, she is extremely careful about highlighting and bracketing out her own biases, throughout the text; an important fact, given that the whole project is about the recognition of assumptions and bias in human behavior. In Chapter 6, when discussing whether birds can be said to understand what they’re doing, Shew says that she

[relies] greatly on quotations…because the study’s authors describe crow tool uses and manufacture using language that is very suggestive about crows’ technological understanding and behaviors—language that, given my particular philosophical research agenda, might sound biased in paraphrase.[2]

In a chapter 6 endnote, Shew continues to touch on this issue of bias and its potential to become prejudice, highlighting the difficulty of cross-species comparison, and noting that “we also compare the intelligence of culturally and economically privileged humans with that of less privileged humans, a practice that leads to oppression, exploitation, slavery, genocide, etc.”[3] In the conclusion, she elaborates on this somewhat, pointing out the ways in which biases about the “right kinds” of bodies and minds have led to embarrassments and atrocities in human history.[4] As we’ll see, this means that the question of how and why we categorize animal construction behaviors as we do has implications reaching far beyond the research projects themselves.

The content of Animal Constructions is arranged in such a way as to make a strong case for the intelligence, creativity, and ingenuity of animals, throughout, but it also provides several contrast cases in which we see that there are several animal behaviors which might appear to be intentional, but which are the product of instinct or the extended phenotype of the species in question.[5] According to Shew, these latter cases do more than act as exceptions that test the rule; they also provide the basis for reframing the ways in which we compare the behaviors of humans and nonhuman animals.

If we can accept that construction behavior exists on a spectrum or continuum with tool use and other technological behaviors, and we can come to recognize that animals such as spiders and beavers make constructions as a part of their instinctual, DNA-based, phenotypical natures, then we can begin to interrogate whether the same might not be true for the things that humans make and do. If we can understand this, then we can grasp that “the nature of technology is not merely tied to the nature of humanity, but to humanity in our animality” (emphasis present in original).[6]

Using examples from animal studies reaching back several decades, Shew discusses experimental observations of apes, monkeys, cetaceans (dolphins and whales), and birds. Each example set moves further away from the kinds of animals we see as “like us,” and details how each group possesses traits and behaviors humans tend to think exist only in ourselves.[7] Chimps and monkeys test tool-making techniques and make plans; dolphins and whales pass hunting techniques on to their children and cohorts, and have names and social rituals; birds make complex tools for different scenarios, adapt them to novel circumstances, and learn to lie.[8]

To further discuss the similarities between humans and other animals, Shew draws on theories about the relationship between body and mind, such as embodiment and extended mind hypotheses from philosophy of mind, which say that the kind of mind we are is intimately tied to the kinds of bodies we are. She pairs this with work from disability studies which forwards the conceptual framework of “bodyminds,” saying that body and mind aren’t simply linked; they’re the same.[9] This is the culmination of her descriptions of animal behaviors, and a prelude to a redefinition and reframing of the concepts of “technology” and “knowledge.”

Dyson the seal. Image by Valerie via Flickr / Creative Commons

 

In the book’s conclusion, Shew suggests placing all the products of animal construction behavior on a two-axis scale, where the x-axis is “know-how” (the knowledge it takes to accomplish a task) and the y-axis is “thing knowledge” (the information about the world that gets built into constructed objects).[10] When we do this, she says, we can see that every made thing, be it object or social construct (a passage with important implications), falls somewhere outside of the (0, 0) point.[11] This is Shew’s main thrust throughout Animal Constructions: that humans are animals and our technology is not what sets us apart or makes us special; in fact, it may be the very thing that most deeply ties us to our position within the continuum of nature.

For Shew, we need to be less concerned about the possibility of incorrectly thinking that animals are too much like us, and far more concerned that we’re missing the ways in which we’re still and always animals. Forgetting our animal nature and thinking that there is some elevating, extra special thing about humans—our language, our brains, our technologies, our culture—is arrogant in the extreme.

While Shew says that she doesn’t necessarily want to consider the moral implications of her argument in this particular book, it’s easy to see how her work could be foundational to a project about moral and social implications, especially within fields such as animal studies or STS.[12] And an extension like this would fit perfectly well with the goal she lays out in the introduction, regarding her intended audience: “I hope to induce philosophers of technology to consider animal cases and induce researchers in animal studies to think about animal tool use with the apparatus provided by philosophy of technology.”[13]

In Animal Constructions, Shew has built a toolkit filled with fine arguments and novel arrangements that should easily provide the instruments necessary for anyone looking to think differently about the nature of technology, engineering, construction, and behavior, in the animal world. Shew says that “A full-bodied approach to the epistemology of technology requires that assumptions embedded in our definitions…be made clear,”[14] and Animal Constructions is most certainly a mechanism by which to deeply delve into that process of clarification.

Contact details: damienw7@vt.edu

References

Shew, Ashley. Animal Constructions and Technological Knowledge. Lanham, MD: Lexington Books, 2017.

[1] Ashley Shew, Animal Constructions and Technological Knowledge (Lanham, MD: Lexington Books, 2017), 107.

[2] Ibid., 73.

[3] Ibid., 89, n. 7.

[4] Ibid., 107-122.

[5] Ibid., 107-122.

[6] Ibid., 19.

[7] On page 95, Shew makes brief mention of various instances of octopus tool use; more of these examples would really drive the point home.

[8] Shew, 35-51; 53-65; 67-89.

[9] Ibid., 108.

[10] Ibid., 110-119.

[11] Ibid., 118.

[12] Ibid., 16.

[13] Ibid., 11.

[14] Ibid., 105.