
Author Information: Damien Williams, Virginia Tech, damienw7@vt.edu

Williams, Damien. “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues.” Social Epistemology Review and Reply Collective 7, no. 2 (2018): 64-69.

The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3US

Image by Stu Jones via CJ Sorg on Flickr / Creative Commons

 

Shannon Vallor’s most recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, takes a look at what she calls the “Acute Technosocial Opacity” of the 21st century, a state in which technological, societal, political, and human-definitional changes occur at such a rapid-yet-shallow pace that they block our ability to conceptualize and understand them.[1]

Vallor is one of the most publicly engaged technological ethicists of the past several years, and much of her work’s weight comes from its direct engagement with philosophy—both philosophy of technology and various virtue ethical traditions—and the community of technological development and innovation that is Silicon Valley. It’s from this immersive perspective that Vallor begins her work in Virtues.

Vallor contends that we need a new way of understanding the projects of human flourishing and seeking the good life, an understanding which can help us reexamine how we make and participate through and with the technoscientific innovations of our time. The project of this book, then, is to provide the tools to create this new understanding, tools which Vallor believes can be found in an examination and synthesis of the world’s three leading virtue ethical traditions: Aristotelian ethics, Confucian ethics, and Buddhist ethics.

Vallor breaks the work into three parts, and takes as her subject what she considers to be the four major world-changing technologies of the 21st century. The book’s three parts are, “Foundations for a Technomoral Virtue Ethic,” “Cultivating the Self: Classical Virtue Traditions as Contemporary Guide,” and “Meeting the Future with Technomoral Wisdom, OR How To Live Well with Emerging Technologies.” The four world-changing technologies, considered at length in Part III, are social media, surveillance, robotics/artificial intelligence, and biomedical enhancement technologies.[2]

As Vallor moves through each of the three sections and four topics, she maintains a constant habit of returning to the questions of exactly how each one will either help us cultivate a new technomoral virtue ethic, or how said ethic would need to be cultivated, in order to address it. As both a stylistic and pedagogical choice, this works well, providing touchstones of reinforcement that mirror the process of intentional cultivation she discusses throughout the book.

Flourishing and Technology

In Part I, “Foundations,” Vallor covers both the definitions of her terms and the argument for her project. Chapter 1, “Virtue Ethics, Technology, and Human Flourishing,” begins with the notion of virtue as a continuum that gets cultivated, rather than a fixed end point of achievement. She notes that while there are many virtue traditions with their own ideas about what it means to flourish, there is a difference between recognizing multiple definitions of flourishing and a purely relativist claim that all definitions of flourishing are equal.[3] Vallor engages these different understandings of flourishing throughout the text, but she also looks at other ethical traditions to explore how they would handle the problem of technosocial opacity.

Without resorting to strawmen, Vallor examines the Kantian categorical imperative and utilitarianism, in turn. She demonstrates that Kant’s ethics would result in us trying to create codes of behavior that are either always right or always wrong (“Never Murder;” “Always Tell the Truth”), and that utilitarian consequentialism would allow us to make excuses for horrible choices in the name of “the Greater Good.” Which is to say nothing of how nebulous, variable, and incommensurate all of our understandings of “utility” and “good” will be with each other. Vallor says that the rigid, rules-based nature of each of these systems simply can’t account for the variety of experiences and challenges humans are likely to face in life.

Not only that, but deontological and consequentialist ethics have always been this inflexible, and this inflexibility will only be more of a problem in the face of the challenges posed by the speed and potency of the four abovementioned technologies.[4] Vallor states that the technologies of today are more likely to facilitate a “technological convergence,” in which they “merge synergistically” and become more powerful and impactful than the sum of their parts. She says that these complex, synergistic systems of technology cannot be responded to and grappled with via rigid rules.[5]

Vallor then folds in discussion of several of her predecessors in the philosophy of technology—thinkers like Hans Jonas and Albert Borgmann—giving a history of the conceptual frameworks by which philosophers have tried to deal with technological drift and lurch. From here, she decides that each of these theorists has helped to get us part of the way, but their theories all need some alterations in order to fully succeed.[6]

In Chapter 2, “The Case for a Global Technomoral Virtue Ethic,” Vallor explores the basic tenets of Aristotelian, Confucian, and Buddhist ethics, laying the groundwork for the new system she hopes to build. She explores each of their different perspectives on what constitutes The Good Life in moderate detail, clearly noting that there are some aspects of these systems that are incommensurate with “virtue” and “good” as we understand them, today.[7] Aristotle, for instance, believed that some people were naturally suited to be slaves, and that women were morally and intellectually inferior to men, and the Buddha taught that women would always have a harder time attaining the enlightenment of Nirvana.

Vallor argues that, rather than simply repackaging old systems for today’s challenges, we can learn from these ancient virtue traditions something about the shared commitments of virtue ethics, more generally. What we learn from them, she says, will fuel the project of building a wholly new virtue tradition. To discuss their shared underpinnings, she talks about “thick” and “thin” moral concepts.[8] A thin moral concept is defined here as only the “skeleton of an idea” of morality, while a thick concept provides the rich details that make each tradition unique. If we look at the thin concepts, Vallor says, we can see that the bone structure of these traditions is made of four shared commitments:

  • To the Highest Human Good (whatever that may be);
  • That moral virtues are cultivated states of character;
  • To a practical path of moral self-cultivation; and
  • That we can have a conception of what humans are generally like.[9]

Vallor uses these commitments to build a plausible definition of “flourishing,” looking at things like intentional practice within a global community toward moral goods internal to that practice, a set of criteria from Alasdair MacIntyre which she adopts and expands on.[10] These goals are never fully realized, but always worked toward, and always with a community. All of this is meant to be supported by and to help foster goods like global community, intercultural understanding, and collective human wisdom.

We need a global technomoral virtue ethic because, while the challenges we face call for ancient virtues such as courage, charity, and community, those virtues are now required to handle ethical deliberations at a scope the world has never seen.

But Vallor says that a virtue tradition, new or old, need not be universal in order to do real, lasting work; it only needs to be engaged in by enough people to move the global needle. And while there may be differences in rendering these ideas from one person or culture to the next, if we do the work of intentional cultivation of a pluralist ethics, then we can work from diverse standpoints, toward one goal.[11]

To do this, we will need to intentionally craft both ourselves and our communities and societies. This is because not everyone considers the same goods as good, and even our agreed-upon values play out in vastly different ways when they’re sought by billions of different people in complex, fluid situations.[12] Only with intention can we exclude systems which group things like intentional harm and acceleration of global conflict under the umbrella of “technomoral virtues.”

Cultivating Techno-Ethics

Part II does the work of laying out the process of technomoral cultivation. Vallor’s goal is to examine what we can learn by focusing on the similarities and crucial differences of other virtue traditions. Starting in chapter 3, Vallor once again places Aristotle, Kongzi (Confucius), and the Buddha in conceptual conversation, asking what we can come to understand from each. From there, she moves on to detailing the actual process of cultivating the technomoral self, listing seven key intentional practices that will aid in this:

  • Moral Habituation
  • Relational Understanding
  • Reflective Self-Examination
  • Intentional Self-Direction of Moral Development
  • Perceptual Attention to Moral Salience
  • Prudential Judgment
  • Appropriate Extension of Moral Concern[13]

Vallor moves through each of these in turn, taking the time to show how each step resonates with the historical virtue traditions she’s used as orientation markers thus far, while also highlighting key areas of their divergence from those past theories.

Vallor says that the most important thing to remember is that each step is a part of a continual process of training and becoming; none of them is a final achievement by which we will “become moral.” Moral Habituation is the first step on this list, because it is the quality at the foundation of all of the others: constant cultivation of the kind of person you want to be. And we have to remember that while all seven steps must be undertaken continually, they also have to be undertaken communally. Only by working with others can we build the systems and societies necessary to sustain these values in the world.

In Chapter 6, “Technomoral Wisdom for an Uncertain Future,” Vallor provides “a taxonomy of technomoral virtues.”[14] The twelve concepts she lists—honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom—are not intended to be an exhaustive list of all possible technomoral virtues.

Rather, these twelve things together form a system by which to understand the most crucial qualities for dealing with our 21st-century lives. They’re all listed with “associated virtues,” which help provide a broader and deeper sense of the kinds of conceptual connections we can achieve via relational engagement with all virtues.[15] Each member of the list should support and be supported by not only the other members, but also any as-yet-unknown or -undiscovered virtues.

Here, Vallor continues a pattern she’s established throughout the text of grounding potentially unfamiliar concepts in a frame of real-life technological predicaments from the 20th or 21st century. Scandals such as Facebook privacy controversies, the flash crash of 2010, or even the moral stances (or lack thereof) of CEOs and engineers are discussed with a mind toward highlighting the final virtue: Technomoral Wisdom.[16] Technomoral Wisdom is a means of unifying the other virtues, and of understanding the ways in which our challenges interweave with and reflect each other. In this way we can both cultivate virtuous responses within ourselves and our existing communities, and also begin to more intentionally create new individual, cultural, and global systems.

Applications and Transformations

In Part III, Vallor puts to the test everything that we’ve discussed so far, placing all of the principles, practices, and virtues in direct, extensive conversation with the four major technologies that frame the book. She explores how new social media, surveillance cultures, robots and AI, and biomedical enhancement technologies are set to shape our world in radically new ways, and how we can develop new habits of engagement with them. Each technology is explored in its own chapter, the better to examine which virtues best suit which topic, which goods might be expressed by or in spite of each field, and which cultivation practices each will require. In this way, Vallor highlights the real dangers of failing to skillfully adapt to the requirements of each of these unprecedented challenges.

While Vallor considers nearly every aspect of this project in great detail, there are points throughout the text where she seems to fall prey to some of the same technological pessimism, utopianism, or determinism for which she rightly calls out other thinkers in earlier chapters. There is still a sense that these technologies are, by their nature, terrifying, and that all we can do is rein them in.

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, through personally grappling with these tasks, we can move the world. This stance leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because such an ethos would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seem to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she hopes will help her do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

While the implications of climate catastrophes, dystopian police states, just-dumb-enough AI, and rampant gene hacking seem real, obvious, and avoidable to many of us, many others take them as merely naysaying distractions from the good of technosocial progress and the ever-innovating free market.[17] With that in mind, we need tools with which to begin the process of helping people understand why they ought to care about technomoral virtue, even when they have such large, driving incentives not to.

Without that, we are simply presenting people who would sell everything about us for another dollar with the tools by which to make a more cultivated, compassionate, and interrelational world, and hoping that enough of them understand the virtue of those tools, before it is too late. Technology and the Virtues is a fantastic schematic for a set of these tools.

Contact details: damienw7@vt.edu

References

Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press, 2016.

[1] Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (New York: Oxford University Press, 2016), 6.

[2] Ibid., 10.

[3] Ibid., 19-21.

[4] Ibid., 22-26.

[5] Ibid., 28.

[6] Ibid., 28-32.

[7] Ibid., 35.

[8] Ibid., 43.

[9] Ibid., 44.

[10] Ibid., 45-47.

[11] Ibid., 54-55.

[12] Ibid., 51.

[13] Ibid., 64.

[14] Ibid., 119.

[15] Ibid., 120.

[16] Ibid., 122-154.

[17] Ibid., 249-254.

Author Information: Robert Frodeman, University of North Texas, robert.frodeman@unt.edu

Frodeman, Robert. “The Politics of AI.” Social Epistemology Review and Reply Collective 7, no. 1 (2018): 48-49.

The pdf of the article provides specific page references. Shortlink: https://wp.me/p1Bfg0-3To

This robot, with its evocatively cute face, would turn its head toward the most prominent human face it could see.
Image from Jeena Paradies via Flickr / Creative Commons

 

New York Times columnist Thomas Friedman has been a cheerleader for technology for decades. He begins an early 2018 column by declaring that he wants to take a break from the wall-to-wall Trump commentary. Instead, ‘While You Were Sleeping’ consists of an account of the latest computer wizardry that’s occurring under our noses. What Friedman misses is that he is still writing about Trump after all.

His focus is on quantum computing. Friedman revisits a lab he had been to a mere two years earlier; on the earlier visit he had come away impressed, but feeling that “this was Star Wars stuff — a galaxy and many years far away.” To his surprise, however, the technology had moved quicker than anticipated: “clearly quantum computing has gone from science fiction to nonfiction faster than most anyone expected.”

Friedman hears that quantum computers will work 100,000 times faster than the fastest computers today, and will be able to solve unimaginably complex problems. Wonders await – such as the NSA’s ability to crack the hardest encryption codes. Not that there is any reason for us to worry about that; the NSA has our best interests at heart. And in any case, the Chinese are working on quantum computing, too.

Friedman does note that this increase in computing power will lead to the supplanting of “middle-skill and even high-skill work.” Which he allows could pose a problem. Fortunately, there is a solution at hand: education! Our educational system simply needs to adapt to the imperatives of technology. This means not only K-12 education, and community colleges and universities, but also lifelong worker training. Friedman reports on an interview with IBM CEO Ginni Rometty, who told him:

“Every job will require some technology, and therefore we’ll need to revamp education. The K-12 curriculum is obvious, but it’s the adult retraining — lifelong learning systems — that will be even more important…. Some jobs will be displaced, but 100 percent of jobs will be augmented by AI.”

Rometty notes that technology companies “are inventing these technologies, so we have the responsibility to help people adapt to it — and I don’t mean just giving them tablets or P.C.s, but lifelong learning systems.”

For that’s how it works: people adapt to technology, rather than the other way around. And what if our job gets outsourced or taken over by a machine? Friedman then turns to education-to-work expert Heather McGowan: workers “must reach up and learn a new skill or in some ways expand our capabilities as humans in order to fully realize our collaborative potential.” Education must become “a continuous process where the focused outcome is the ability to learn and adapt with agency as opposed to the transactional action of acquiring a set skill.” It all sounds rather rigorous: we will be frog-marched into the future for our own good.

Which should have brought Friedman back to Trump. Friedman, Rometty, and McGowan are failing to connect their prescriptions to the results of the last election. Clinton lost the crucial states of Pennsylvania, Wisconsin, and Michigan by a total of 80,000 votes. Clinton lost these states in large part because of the disaffection of white, non-college-educated voters, people who have been hurt by previous technological development, who are angry about being marginalized by the ‘system’, and who pine for the good old days, when America was Great and they had a decent paycheck. Of course, Clinton knew all this, which is why her platform, Friedman-like, proposed a whole series of worker re-education programs. But somehow the coal miners were not interested in becoming computer programmers or dental hygienists. They preferred to remain coal miners – or actually, not coal miners. And Trump rode their anger to the White House.

Commentators like Friedman might usefully spend some of their time speculating on how our politics will be affected as worker displacement moves up the socio-economic scale.

At root, Friedman and his cohorts remain children of the Enlightenment: universal education remains the solution to the political problems caused by run-amok technological advance. This, however, assumes that ‘all men are created equal’ – and not only in their ability, but also in their willingness to become educated, and then reeducated again, and once again. They do not seem to have considered the possibility that a sizeable minority of Americans—or any other nationality—will remain resistant to constant epistemic revolution, and that rather than engaging in ‘lifelong learning’ are likely to channel their displacement by artificial intelligence into angry, reactionary politics.

And as AI ascends the skills ladder, the number of the politically roused is likely to increase, helped along by the demagogue’s traditional arts, now married to the focus-group phrases of Frank Luntz. Perhaps the machinations of turning ‘estate tax’ into ‘death tax’ won’t fool the more sophisticated. It’s an experiment we are running now, with a middle-class tax cut just passed by Congress that diminishes each year until it turns into a tax increase in a few years. But how many will notice the latest scam?

The problem, however, is that even if those of us who live in non-shithole countries manage to get with the educational program, that still leaves “countries like Egypt, Pakistan, Iran, Syria, Saudi Arabia, China and India — where huge numbers of youths are already unemployed because they lack the education for even this middle-skill work THAT’S [sic] now being automated.” A large cohort of angry, displaced young men ripe for apocalyptic recruitment. I wonder what Friedman’s solution is to that.

The point that no one seems willing to raise is whether it might be time to question the cultural imperative of constant innovation.

Contact details: robert.frodeman@unt.edu

References

Friedman, Thomas. “While You Were Sleeping.” New York Times. 16 January 2018. Retrieved from https://www.nytimes.com/2018/01/16/opinion/while-you-were-sleeping.html