
24 Philosophy Professors React to ChatGPT’s Arrival, Part I, Ahmed Bouzid

For someone like me, who makes a living in the field of Human Language Technology, two dates from the past decade or so have stood as watershed moments in that field: October 4, 2011, when Apple’s Siri was launched with the release of the iPhone 4S, and November 6, 2014, when the Amazon Echo started shipping. A third date has now joined those two moments in my mind: November 30, 2022, the day ChatGPT was released.

Image credit: Alpha Photo via Flickr / Creative Commons

Article Citation:

Bouzid, Ahmed. 2023. “24 Philosophy Professors React to ChatGPT’s Arrival.” Social Epistemology Review and Reply Collective 12 (3): 50-65. https://wp.me/p1Bfg0-7Hg.

🔹 The PDF of the article gives specific page numbers.

Editor’s Note: Ahmed Bouzid’s “24 Philosophy Professors React to ChatGPT’s Arrival” is presented in two parts. Part I appears below; please refer to Part II. The PDF of the entire article is linked above in the Article Citation.

Introduction

Now, whether or not this new technology will live up to the hype being whipped up in its name is, in my opinion, neither a serious nor an interesting question, because the answer will eventually be: ‘No, of course it will not!’ No technology ever lives up to its hype. A more fruitful exercise, I believe, is capturing the initial reactions to the technology from those who are not necessarily vested in its success—or who may in fact feel threatened by it. Such an exercise often brings to the surface hidden ways of thinking, prejudices taken as given and left unspoken, and embedded biases that will, with the passage of time, fade away and dissolve as the technology becomes enmeshed in the fabric of our daily life, replaced by new hidden and undifferentiated ways of thinking, prejudices, and biases. Being aware of that starting point, as we venture away from it and toward a new way of being, can help us, the hope is, keep alive in ourselves the awareness that, even though it may feel at any given moment as if we are finally arriving, in reality we will always be in flight and will never arrive.

In the spirit of capturing such a moment, now not quite four months after the launch of ChatGPT, I asked a couple dozen philosophy professors to give me a paragraph or so outlining their high-level thoughts and gut reactions to ChatGPT’s arrival on the scene, with the pointed question posed to some (but not all): “Is ChatGPT a threat to academia?”

As to why, of all people, I asked philosophers, the answer is this: I will take and use, though never without checking and triple-checking when in doubt, output from ChatGPT in any discipline (even poetry and fiction), except for output that purports to be philosophical, which I refuse to take seriously in any capacity other than as a stimulant of sorts for my own thoughts. For philosophical “output” on questions of the moment, I need it from a fellow creature of my species, living in their version of the here and now, giving me feedback in their version of the here and now, thinking about thinking and about being, and about ourselves thinking about thinking and being, as such creatures have been trained to do. Perhaps along the way, I will be lucky enough to encounter a brand-new concept that they have created, which only they, philosophers that they are, are in a position to do, for that is the core purpose of their profession.[1]

Below are the answers of those who were able and willing to respond to my call within the deadline offered.[2]

Bryan William Van Norden, Vassar College (USA), The School of Philosophy, Wuhan University (China)
Christian Miller, Wake Forest University
Daniel Cunningham, Villanova University
Diane P. Michelfelder, Macalester College
Donovan Miyasaki, Wright State University
Eric Schwitzgebel, University of California at Riverside
Gregory R. Peterson, South Dakota State University
James M. Okapal, Missouri Western State University
James Stacey Taylor, The College of New Jersey
Jamie L. Phillips, PennWest University
Kevin Decker, Eastern Washington University

❧ ❧ ❧

Bryan William Van Norden, Vassar College (USA), The School of Philosophy, Wuhan University (China)

Historically, every change in communications technology leads to a change in the content and nature of what is communicated. This is what Marshall McLuhan (1911-1980) meant when he famously said that “the medium is the message.” Every such change has advantages and disadvantages. Plato worried that the invention of writing would lead people to be lazy about memorizing things. The printing press helped bring about democracy and modern science by making the written word more widely accessible, but also made it increasingly difficult to keep up with everything there is to read. Computers and the internet made it easier than ever before to access information, but they have also shortened our attention spans and fragmented what was left of our common culture. ChatGPT will make it easier than ever for us to produce simple compositions, but it will also make most writers less skilled and less attentive to the craft of writing.

❧ ❧ ❧

Christian Miller, Wake Forest University

I am a philosopher working at a private university in the United States. With respect to my corner of the academic world, I see a number of benefits and concerns related to ChatGPT, both with respect to research and teaching. Here I will just focus on one of the most immediate and alarming concerns, namely facilitating plagiarism. For many years students could go online, find content that they could incorporate into their writing assignments, and present it as their own, thereby plagiarizing. But this took a fair amount of work, and was often easy for professors (or Google!) to detect. That all changes with ChatGPT.

Now a student can feed it standard philosophy paper topics, a length requirement, and a requested style of writing, and typically ChatGPT will immediately supply at least a decent essay in return. It might not be an A+, but it could be a B, and for many students that’s good enough. Plus the style might not make it obvious that it is written by ChatGPT. ChatGPT detection programs won’t give you answers with 100% certainty, so it can be hard to prove that the essay was plagiarized, and university honor/ethics committees might not be convinced. To counteract all this, assignments will need to shift to more in-class work, or oral exams, or at-home paper assignments that are highly tailored to the class material, or perhaps some other options. These might work well for some professors and for some in-person courses. But I wonder what the future holds especially for online, asynchronous courses. More generally, I worry about the future of student honesty and integrity.

❧ ❧ ❧

Daniel Cunningham, Villanova University

The advance of AI technology surely portends many changes for academia, both menacing and exciting. But what worries me most about it is the threat it poses to the process of learning itself and the social and political consequences which might follow. If the most elementary procedures of writing do not need to be gradually and painstakingly learned but can instead be “skipped,” the human writer needing only to supply a framing prompt, we will lose much more than a knowledge of grammar and syntax; we will lose the ability to think. Writing is the most intensive form of engagement with a subject available to the human mind. The act of figuring out how to express a thought logically and convincingly—including how to fit it into the framework of rules governing grammar, syntax, and rhetoric—forces the writer to challenge and refine the thought, forces them to challenge their own commitment to that thought, forces them to question what other thoughts it might imply. This process is difficult and painful, but it is crucial to intellectual development.

In threatening to eliminate the difficult and painful aspects of learning, programs such as ChatGPT join the general trend of contemporary consumer culture, which—devoid of meaningful innovations which might improve society in some substantive way, such as by reducing inequality or shortening working hours—promises merely to minimize annoyance, preaching to us incessantly that it is our right as late moderns to live easy, shallow lives paid for by constant, inane work. One might reasonably accuse me of the slippery slope fallacy if such developments were not already well underway, irrespective of AI. The erosion of the expectation that learning must be gradual and difficult is evident to anyone who teaches, and its effects are visible in our social and political discourse. The enraged parents who harass teachers and school board members, or who opt for home-”schooling,” because they refuse to allow their children to confront ideas and worldviews different from their own, are people who themselves never accepted that learning and growth involve difficulty. Technologies such as ChatGPT are only the latest means by which consumer culture rewards them for their laziness.

❧ ❧ ❧

Diane P. Michelfelder, Macalester College

Does ChatGPT represent a threat to academia? Possibly, but not for the reasons generally found in media headlines. When OpenAI dropped ChatGPT onto an unsuspecting public in late November 2022, faculty quickly discovered that it could create convincing prose at breakneck speed in response to user requests, prompting a geyser of concerns that students would use it to ghostwrite assignments and researchers would draw on it to come up with text for journal publications.

A salient sign of the alarm is that Sciences Po has gone so far as to threaten to ban students who use it not only from the institution but from French higher education as a whole. That academics would be concerned about such matters makes sense—after all, how could academics not be concerned about academic integrity? At the same time, to focus on how chatbots can exacerbate plagiarism is to be distracted from deeper issues at stake.

This new, generative AI isn’t called simply ChatG. While the hype lies on the G side, the turbulence it produces comes from the other, PT side. ChatGPT is not a creative. It is pre-trained on vast oceans of Internet data using a recently developed model in order to predict what might come next in a sequence of words. This way of processing language brings about misinformation (false claims about real things) and hallucinations (make-believe facts). And while it is not always clear when to rely on what ChatGPT comes up with, no matter how plausible it sounds, we also know painfully little about what efforts are being made to align ChatGPT with longstanding academic values such as the commitment to the pursuit of truth and the free play of academic inquiry. That Microsoft has laid off its entire Ethics and Society team does not boost confidence here. In other words, looking to the PT rather than to the C dimension of ChatGPT shifts the focus of critical attention away from the product and onto the responsibility of the producer.

Do philosophers, particularly those whose work lies in the ambit of technological ethics, have a responsibility to discourage those in academic communities from using ChatGPT? Yes, but not because chatbots tempt students to plagiarize or to outsource the demanding work that goes along with research. Until there is more reason to be confident that a solution to the alignment problem will not threaten academic values, there is every reason to remain wary.

❧ ❧ ❧

Donovan Miyasaki, Wright State University

Whether ChatGPT is portrayed as a divinity or a devil, as our salvation or obsolescence, our dominant responses recall Feuerbach and Marx’s view that the gods of both religion and market are fetishes: alienated projections of our own human qualities, treated as independent agencies that we must fear and obey. But artificial intelligence is not artificial: it is our own extended mind, an abstraction drawn from real collective intellectual agency and, like any instrument, an extension rather than a diminishment of human powers. But like any false idol, it will control us to the degree that we believe it does. The danger is that we will fail to recognize and underline the difference between human intelligence and its extensions.

“Real” intelligence is not rooted in information but in the living needs, desires, and curiosity that seek it out as means to their ends. Real intelligence is individual, concrete, active, and critical—an anticipation of possible futures—while artificial intelligence is collective, abstract, passive, and positive—a mere recording and synthesis of our intellectual past. If we not only make use of but begin to conform ourselves to that mode of intelligence, we will starve, forget, and lose both our taste and ability for our own distinctive, broader and richer kind.

One task of the highest works of culture is to remind us that we are more than producers and consumers, more than transmitters of values, goods, and services: we are the determiners and source of those values. They teach us to revere not just the transmission of information but its critical and creative revaluation, a capacity rooted in the unique subjectivity, circumstances, and needs of living, breathing individuals, in contrast to the abstract agencies of the market and of an AI engineered primarily to serve it. A useful instrument is a congealed recording of past collective intelligence, minus the agency of its living members. And when our past collective intelligence is mistaken for an end rather than a means, for our authority rather than our servant, it condemns us to an eternal repetition of the same, protecting the moral and political status quo by reducing critical, creative, and curious individuals into instruments of their own instruments.

Artificial intelligence is not a new or independent intelligence: it is a copy of the intelligence of living agents trapped in undead objects: when we forget that, we forget our own humanity in ways that may one day systematically eliminate it.

❧ ❧ ❧

Eric Schwitzgebel, University of California at Riverside

Looking a bit further down the road, large language models like ChatGPT might be combined with representational models of aspects of the world, audiovisual input streams, “cognitive workspaces” where models are updated in light of input, emotionally valenced speech outputs, reward algorithms that help shape learning and further outputs, and perhaps motor outputs in robotic bodies. At that point, both ordinary users and theorists of consciousness might begin legitimately to wonder whether we have crossed over some line into creating entities with sentience and rights. Unless we have a well-justified consensus theory of consciousness, there’s likely to be substantial disagreement about the moral status of such future entities, including pressure from some groups to start granting them rights and pressure from other groups to treat them as disposable tools. I recommend against creating AI systems whose moral status is legitimately disputable in this way. Either create systems that are plainly disposable tools, and make it clear from the user interface that they are no more than that, or go all the way, if it’s ever possible, to creating systems about which there is justifiable consensus that they deserve moral consideration, and then build them with user interfaces that encourage people to treat them as is appropriate to their real moral standing.

❧ ❧ ❧

Gregory R. Peterson, South Dakota State University

The dramatic arrival of ChatGPT onto the public stage portends a new age of AI, one full of potential threats and promises. While next-generation AI may enable programmers to code better and writers to improve the pace and quality of their writing, it may also replace entire professions and sow serious doubts concerning authorship and the veracity of what one reads and views. The advances demonstrated by ChatGPT will no doubt prove disruptive in ways that we cannot fully predict, but we should not view the likelihood and severity of such disruptions as inevitable. The evolution of AI calls for a concomitant evolution of formal and informal institutions that work to harness and promote positive implementations of AI and deter the realization of negative outcomes.

Technology libertarians view institutions with skepticism, but it is precisely through the development of effective and just institutions that civilization advances. Labor laws protect worker health and safety, international nuclear nonproliferation treaties have significantly slowed the spread of nuclear weapons, and standards of peer review contribute to the reliability of scientific results. Institutions provide incentives and sanctions, embodying values that, in turn, shape the environment within which innovation occurs. Addressing the advances of next-generation AI will require a range of institutional responses, from the prosaic development and implementation of tools of detection to international agreements placing limits on the use of AI in military applications. Although institutional innovation often lags behind technological innovation, the advent of next-generation AI stands to exacerbate this problem. It is important that we work to prevent this from happening. By doing so we can harness AI’s potential while avoiding the most dangerous pitfalls.

❧ ❧ ❧

James M. Okapal, Missouri Western State University

The simple answer here is no, [I don’t believe that the emergence of AI like ChatGPT represents a threat to academia]. These types of software, by themselves, do not threaten academia. For those students who move beyond general education courses and have to create documents that require evaluation and creativity (the top tiers of Bloom’s Taxonomy), the software can’t create the relevant material with evidence, citations, and correct explanations. Students who use the software in general education courses will fail to develop the skills to remember, understand, apply, and analyze original material. When such a student enters the upper-division courses of their major, they will either have to learn those skills and develop the higher-order skills of evaluation and creativity or they will not be able to complete their degree. In other words, in the long run, such students are either delaying the development of those skills or simply failing to complete their degree programs. Surely some will get by without doing any of this, but that was the case before the existence of the software, and I don’t see much of a change due to these programs. The goals of academia, to have students who go through the system develop skills and content knowledge to become contributing members of society, will survive this software.

❧ ❧ ❧

James Stacey Taylor, The College of New Jersey

ChatGPT will be a boon, rather than a burden, to academia. ChatGPT functions by learning and replicating patterns in language. It does well in providing exegetical accounts of standard concepts and the views of well-known authors. However, it is unable to offer original evaluations of claims or arguments that go beyond those that could be predicted by anyone with sufficient exposure to the subject matter in question. It cannot present original arguments of its own for any normative claims. When asked to do so, it hedges in a very predictable fashion, writing that “some say this, some say that; ultimately, this is a difficult question on which people disagree.”

ChatGPT’s ability to provide cogent exegeses of descriptive claims should indicate to academics that their teaching must move beyond asking students merely to acquire knowledge of what others have said. Lazy and predictable essay questions (such as “Outline Descartes’ ‘Trademark Argument’ for the existence of God”) will need to be replaced. This could be through the use of questions that challenge students to develop creative answers. (“If you don’t understand Descartes’ Trademark Argument, is this your fault?”) Or it could be through requiring students to engage in a creative way with the course material that has been taught. (“How would a soufflé cook respond to Descartes’ claim that ‘nothing comes from nothing’?”) The need to set questions that ChatGPT cannot answer should push academics towards creative teaching.

ChatGPT’s limitations should also indicate to academics where the value of their endeavors lies: in facilitating the development of students’ abilities to evaluate and criticize the claims that are made and the arguments offered to support them, and to construct original arguments to support their own positions.

Since creativity and analytical ability lie at the heart of philosophy, ChatGPT should awaken universities to the centrality of philosophy to their educational mission.

❧ ❧ ❧

Jamie L. Phillips, PennWest University

I think the conversation you had with ChatGPT indicates to me the value that such AI would have for individual philosophers working on a research question. The current process for generating answers to research questions, outside of our own initial private speculations, always involves tons of reading (much of it entirely useless) along with initial presentations of one’s theories and arguments at philosophy conferences to solicit feedback (much of this also useless). If ChatGPT had access to journals and textbooks, we could very quickly begin the process of identifying cogently formulated summary answers to our research questions derived from thousands of philosophers, both speeding up research and improving its depth and comprehensiveness. Given that philosophers would all be using the same AI and the same databases, we might even begin a process whereby we better unify our philosophical answers into something closer to a worldwide consensus. Maybe this would end up being a way to finally and genuinely achieve wide reflective equilibrium on all philosophical questions and, plausibly, philosophical progress.

❧ ❧ ❧

Kevin Decker, Eastern Washington University

From my perspective in philosophy of ethics and social theory, ChatGPT is a disaster for schooling—for the same reasons that it is a disaster for all of us. With every spam text and email I get, every leaked password, every report of identity theft, all the deepfakes I see and hear, and certainly the rise of a new level of political deception and self-deception, I am becoming convinced that we live in the Great Age of Fraud (perhaps a new great age; I’m not asserting that this is the first). ChatGPT—just like online consumer algorithms and customer service outsourced throughout the world—demolishes precisely the things human beings need in order to reclaim their humanity. Face-to-face interactions, trust and accountability in the expectation of truth, and greater simplicity with fewer, better options are what ChatGPT weakens, along with the other, largely communication/information-technology/social-media-based engines of deception that I mentioned. What are we getting in return? A new lease on intelligence? Shortcuts for students? Efficiency? Convenience? For those transhumanists who are interested in the potential amalgamation of the human and the machine, good luck with that. I’m no Luddite, but it’s clear to me that ChatGPT does violence to the sustainability of human relationships. Perhaps it’s time for our own Butlerian Jihad, à la Frank Herbert’s Dune?

❧ Please refer to Part II of “24 Philosophy Professors React to ChatGPT’s Arrival.”

Author Information:

Ahmed Bouzid, ahmed.bouzid@gmail.com, Founder & CEO, Witlingo.


[1] Deleuze, Gilles and Félix Guattari. 1996. What is Philosophy? Translated by Hugh Tomlinson and Graham Burchell III. Columbia University Press.

[2] The contributions are listed alphabetically by first name.
