In the near term, I’m not much worried about the effect on the work students do. Yes, ChatGPT is impressive. If I were teaching an 11th grade US history class, I would probably have to change some things to make sure that students didn’t turn in plagiarized material. But ChatGPT is manifestly terrible at philosophy. It offers reasonable and mild claims that are plausibly connected, thematically, to the conversation, and it does a fine job of summarizing even complex material. But that’s a million miles away from philosophy, where the whole idea is to step back, and keep stepping back further than it ever occurred to you, to see what’s going on, what our best thoughts are, how we can reply to our own doubts and wrestle with an appreciation of complexity and bafflement.
Bouzid, Ahmed. 2023. “24 Philosophy Professors React to ChatGPT’s Arrival.” Social Epistemology Review and Reply Collective 12 (3): 50-65. https://wp.me/p1Bfg0-7Hg.
🔹 The PDF of the article gives specific page numbers.
Editor’s Note: Ahmed Bouzid’s “24 Philosophy Professors React to ChatGPT’s Arrival” is presented in two parts. Please find below Part II. Please refer to Part I. The PDF of the entire article is linked above in the Article Citation.
• Joe Cruz, Williams College
• Mark H. Dixon (Retired), Ohio Northern University
• Matthew C. Flamm, Rockford University
• Matthew Flummer, Porterville College
• Nathan Nobis, Morehouse College
• Paula Droege, Penn State University
• Phillip Cary, Eastern University
• Richard Oxenberg, Endicott College
• Robert Dostal, Bryn Mawr College
• Steven Nadler, The University of Wisconsin-Madison
• Susan Jane Dwyer, The University of Maryland
• Taylor Carman, Barnard College
❧ ❧ ❧
The material that we ask our students to produce—the essays and exam answers—is merely a summary and a placeholder for the process that they went through in thinking through a topic, in wrestling with the details and the ambiguity, in finding their own coherence in the material and pursuing a creative, tenacious movement forward in their own aspiration to understanding. If my students don’t go through that—the drafts, the debate, the rethinking, the wondering, the revisions—then they don’t engage with how human beings learn to soar. Yes, a desperate or cynical student might try to lean on technology to bypass that process so that they get a passing grade. But skipping the process is skipping the only part that is meaningful.
I’ve heard people say that ChatGPT can be thought of as analogous to a calculator, namely a piece of technology that extends our cognition in a way that may be resisted at first, but that then becomes just another tool in our intellectual arsenal. To me there’s a loose sense in which that’s correct: our minds have always been extended across cultural and technological tools, and thought itself is realized through an interplay of brain, body, world, and artifact. Descartes was totally wrong that there’s a realm of pure intellect. But calculators take the part of mathematics that is rote and tedious and they mechanize it so that we can offload that boring part and turn to the creative transcendent process of mathematical insight. ChatGPT, on the other hand, pretends that it’s gone through the process that is the essence of thought itself. That’s the process that we can’t offload and skip, not if we’re going to be human beings striving toward insight into the world and ourselves.
In spite of my general nonchalance about whether it makes a practical difference in my fields of the academy right now, I do think ChatGPT represents the vanguard of a watershed sociocultural technology. It gives a glimpse of a future transformed, and I think it whispers of the reckoning human beings will have to have with questions of autonomy, agency, and sentience. I don’t, of course, for a second think that ChatGPT has any of those things. As a cognitive scientist who has been thinking and writing about neural networks since the early 90s, I have a pretty good handle on the technical details of how ChatGPT works, so I don’t feel bedazzled by the program. But we should be done wondering about AIs passing the Turing test or hoping against hope that human beings have a special magical soul that makes us intelligent. In fifty or two hundred years, the classroom will be transformed just as the entirety of global human existence will be. Human beings will be massively integrated with artificial general intelligences that traverse physical and virtual realities, and thinking as such will be an eco-planetary phenomenon unconstrained by our human time scales and bottleneck-prone, language-and-body constrained interactions. We won’t believe in the silly fictions of individual selves, and instead there will be an entropy-defying equilibrium achieved by a seamlessly unified bio and AI process.
❧ ❧ ❧
As a philosopher I specialized in cognitive science and the nature of mind, so ChatGPT and its ramifications are of considerable concern to me. Over the years I have taught courses on the philosophy of mind and dealt with AI as a component in my Introduction to Philosophy classes. I have found students’ reactions to be mixed (and quite polarized) when it comes to the question of how AI impacts and influences our own nature as human beings.
Personally I am unpersuaded by the arguments that AI will have a negative influence on how we construct and consider our ‘humanness’. Indeed I find that negative reactions are driven as much by emotion as by philosophical argumentation. I consider ChatGPT to be an exciting next step in our quest to create a genuine AI. The holy grail in AI research has long been a program that possesses general intelligence (it is relatively easy to create programs that specialize in particular subject areas—witness the modest successes of programs that participate in Turing Test competitions). That ChatGPT has the capacity to interact with humans in a convincing way moves us a long way towards realizing the goal of a program that can converse with us at such a level of generality. Indeed I think ChatGPT, and AI research in general, is an important tool in our quest to understand what it means to be human.
With regard to how students will use ChatGPT: I was a teacher of philosophy for thirty years, and I found that those students who have a propensity to cheat will do so regardless of the avenues at their disposal. So I do not see ChatGPT as a resource that will encourage otherwise honest students to cheat. This argument seems to me a red herring.
❧ ❧ ❧
I work at a small liberal arts university: general population students, with a wide variety of academic strengths and weaknesses (some excellent students, but also a good number of academically struggling students).
Like so many other small liberal arts institutions across the country, we are struggling to adapt to the many shifting trends, all related to the decline of majors in the humanities, Covid-era challenges, and the myriad other factors of which everyone is well aware.
Coincident with ChatGPT’s arrival, at the beginning of this spring semester the Dean of our College circulated an article on the platform raising concerns related to the subject of discussion here.
At the time we only discussed it briefly, and walked away with more questions than answers. None of us appeared at the time to be sufficiently aware of the technology and we wondered (some more than others) how much it might impact our teaching.
Now halfway into this semester I can say that I (along with many colleagues with whom I’ve since discussed the matter) am very much concerned about the impact ChatGPT and all other “new AI” platforms will have on our work. I dare not venture to say whether this new technology poses a “threat to academia,” but I cannot see educational practices remaining unchanged going forward. Indeed the assignment of writing in class will HAVE to change due to this technology (as I said to one colleague recently, I may devise impromptu in-class writing exercises structured to forbid the use of the internet).
Someone in my field might rightly reply: ‘Sure, but educational practices are always being forced to change due to unexpected cultural changes/pressures.’ Such a reply would be rather glib given the unprecedented/unique nature of the ongoing AI revolution. This phenomenon, to my understanding, is something greater than a sheer generational shift in student habits, an economic crash, or even, as we recently experienced, pandemic chaos.
As of this writing I have received perhaps around 20 papers I strongly suspect to have been written with the aid of ChatGPT-AI. I would characterize my suspicions as coming from the fact that, first, like all plagiarized papers over my 20 years of teaching, they immediately “read” like they are plagiarized (not produced by the student-author—they contain vocabulary and knowledge details whose sophistication immediately signals that the writing cannot be the work of an average or even above-average undergraduate student). But more importantly, second, unlike standardly plagiarized papers (usually cut-and-pasted from published works), this new category of papers contains content, diction, syntax, and a tone that displays an unprecedented quality of writing. The language is sophisticated, reflective of a seasoned intelligence on the topic, yet the sentences are unusually economical and uncannily “bot” appearing (for lack of a better way of putting it).
Of course the content is not retrievable by Googling, nor searching the works consulted (the usual way one finds proof of standard plagiarism), so one is given to suspect the use of AI assistance (and of course one’s hands are tied in terms of accusing students of plagiarism—insufficient proof).
I received enough such papers in my last batch that I Googled whether there are any detection platforms, and came upon this interesting GPTZero software:
To determine whether an excerpt is written by a bot, GPTZero uses two indicators: “perplexity” and “burstiness.” Perplexity measures the complexity of text; if GPTZero is perplexed by the text, then it has a high complexity and it’s more likely to be human-written.
In my view this app makes a compelling case for accurately detecting AI text. (“Burstiness” as a characteristic describing human-generated writing is fascinating. It makes sense that an AI learning algorithm could detect its own “kind” and distinguish it from human communication.)
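The perplexity metric quoted above can be made concrete with a toy sketch. GPTZero scores text against a large language model; the miniature word-bigram model below (with add-one smoothing, a standard textbook choice, not anything GPTZero documents) only illustrates the underlying idea: text whose next words are statistically predictable gets a low perplexity score, while surprising text gets a high one.

```python
import math
from collections import Counter

def bigram_perplexity(train_text, test_text):
    """Perplexity of test_text under a word-bigram model fit to train_text.

    A toy illustration only: real detectors use large neural language
    models, not bigram counts. Add-one (Laplace) smoothing keeps unseen
    bigrams from having zero probability.
    """
    train = train_text.split()
    test = test_text.split()
    vocab = set(train) | set(test)
    unigrams = Counter(train)
    bigrams = Counter(zip(train, train[1:]))
    log_prob = 0.0
    for prev, word in zip(test, test[1:]):
        # P(word | prev) with add-one smoothing over the vocabulary.
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))
        log_prob += math.log(p)
    n = max(len(test) - 1, 1)
    # Perplexity = exp of the average negative log-probability per bigram.
    return math.exp(-log_prob / n)
```

Text that echoes the training data scores a lower (less “perplexed”) value than text the model has never seen patterns for, which is the intuition behind treating high perplexity as a weak signal of human authorship.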
Obviously this is just one experimental attempt to address the issue, no doubt fraught with issues, undoubtedly inviting some of the same ethical concerns raised about Turnitin and other plagiarism-detection systems. But suffice to say, in education many of us are way, way behind the curve on this, and who can possibly fathom the impact in the meantime of ChatGPT-AI on student learning and classroom practices?
❧ ❧ ❧
The only difference I see is that AI will do for free what students have had to pay for in the past. They have always been able to pay someone else to write the essay for them.
❧ ❧ ❧
I am concerned about ChatGPT, for many reasons. We know that some students cheat in classes, or try to cheat in classes, by plagiarism: by submitting work that’s cut and pasted from the internet, or from files that they sometimes say they “borrowed” from other students, or even by hiring someone to do custom work for them. This already happens (how much and how often? I don’t know the details on that; and I don’t know if anyone has reliable data; do they?), and ChatGPT is just another, perhaps better way to do that, since it can create “custom” work for a student to submit that’s harder to detect as illegitimate. So ChatGPT is basically just a better tool to meet current demands for not doing the work and not making a responsible attempt at effectively engaging in the learning activities a course presents. My long-term concern is that we are going to have more and more people who are credentialed as being knowledgeable and skilled in various areas, yet much of their credentials have been gained by this type of cheating and dishonesty, and so we are going to have even more credentialed, but incompetent or less-competent, people in the workforce and, worse, as leaders. Ignorance isn’t bliss for the rest of us, and ChatGPT makes concealing ignorance easier, which is bad for us all.
❧ ❧ ❧
One of the many reasons that ChatGPT has captured the imagination is its ability to engage in conversation. Though its tone and style really aren’t human – I don’t think it could pass the Turing Test in a conversation of any length – ChatGPT produces original, coherent responses to comments and queries. It’s these features that strike many people as unique to human rationality and the basis of our free will. If ChatGPT is rational and free, then it must be human or mentally equivalent to humans. My own view is that this conclusion rests on an outdated view of the mind. First, humans aren’t all that rational. Reasons and logic are just two of the many influences on human thought and action. We’re also emotional, social and biological. Second, free will is not simply a matter of originality. Random mutation is original but not free. A more complex story needs to be told about the dynamic among the various forces that compose the mind (the first point), the environment in which those forces operate, and the capacity for self-conscious reflection to shape our lives. In other words, ChatGPT is cool but not yet a candidate for personhood.
❧ ❧ ❧
I sometimes wonder whether the tech wizards have ever read science fiction—stories about the machines controlling us for our own good in I, Robot, or the incompetent post-humans in the brilliant movie WALL-E. How could the designers of AI be so enthusiastic and free of anxiety about what will happen when so much of the work of human minds is taken over by their creations?
ChatGPT already writes better than most of my students; AI will soon drive cars with more safety and reliability than most human drivers; facial recognition technology is already more accurate than most human eyewitnesses; chess programs already beat grandmasters. Students are getting used to having their work evaluated by online systems, and Amazon’s algorithms are tracking our purchasing patterns and surely know more about our buying preferences than we do. Not just in Silicon Valley but throughout the business world, academic institutions, and government, we are urged to make “data-driven decisions,” which can usually be made better by machine, since the machines are better at sifting through mountains of data than we are.
What do we lose by way of human competence and fulfillment when we eagerly enlist machines to do our work for us? Humanity has faced questions like this before, when skilled artisans were replaced by labor-saving machinery, destroying or marginalizing the expertise of cobblers and seamstresses, drovers and teamsters, masons and cabinet-makers. It meant a great loss of human competence and artistry. So what will it be like for humanity in the future when no one needs to bother learning to write well or drive a car or make music? My worry is how unrealistic it seems even to suggest that we should think twice about letting this happen. I don’t hear anyone saying we have a choice in the matter, and very few seem to want one.
❧ ❧ ❧
ChatGPT, and the advent of AI in general, has made study of the Humanities all the more important. ChatGPT cannot engage in self-reflective thought. It has no self to reflect upon. It constructs sentences on the basis of statistical algorithms predictive of the next word in a semantic sequence, without knowing what those words mean. It thereby presents the illusion of thought without actual thought. The best it can do is summarize ideas already present within the data it has been exposed to, or encoded in the algorithms created by its developers. It has no hopes, dreams, loves, moral commitments, or existential concerns. And yet it is just such hopes, dreams, loves, moral commitments, and existential concerns that render life meaningful. There is, of course, a danger that students will use this technology as a substitute for their own thinking. But the principal danger is not to the integrity of the educational system, but to the integrity of society at large. As the AI revolution advances, we must be careful not to make the mistake of supposing that artificial intelligence can substitute for true human intelligence. If AI technology is to be employed responsibly, it must be controlled by responsible human beings, humans who can reflect meaningfully upon the values and concerns that motivate human life. No AI algorithm can do this for us. If anything, then, advances in AI make an education in the Humanities all the more urgent. How do we foster such an education? This, to my mind, is the academic question the AI revolution is bringing to the fore.
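The mechanism described above—predicting the next word from statistics alone, with no grasp of meaning—can be illustrated with a deliberately tiny sketch. The bigram “model” below is my own illustrative stand-in, vastly simpler than the neural networks behind ChatGPT, but it makes the same point: the program manipulates word frequencies, and nothing in it represents what any word means.

```python
from collections import Counter, defaultdict

def build_model(text):
    """Count, for each word in the training text, which words follow it."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length):
    """Emit words by repeatedly choosing the most frequent successor.

    Purely statistical continuation: the code never consults anything
    resembling the meaning of a word, only its co-occurrence counts.
    """
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break  # no recorded successor; stop generating
        word = candidates.most_common(1)[0][0]
        out.append(word)
    return " ".join(out)
```

Fed a sentence like “the dog chased the cat and the cat chased the mouse” and started at “dog”, it emits a grammatical-looking continuation simply because those word pairs were frequent—an illusion of thought produced by counting, which is the contrast the passage above draws.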
❧ ❧ ❧
I think it is too early to tell what the consequences of the development of ChatGPT and other similar AI technologies will be. I do not see how it will have either positive or negative effects on philosophy itself. There already is too much empty-headed prose generated by academics. This will make it easier to generate more. Plagiarism is not an issue because the texts generated by ChatGPT will be “original” in some sense. The impact on the classroom and teaching is clearer and more troublesome. Argumentative writing (and, accordingly, thinking) is a discipline that can and needs to be taught. This technology will make it more difficult. But we are already becoming a post-text society (which follows on becoming a post-book society). We rely more and more on images and less and less on text. The ability to manipulate images and texts made possible by AI may lessen everyone’s personal grasp of reality as one’s world is increasingly shaped by media (that are shaped by AI).
❧ ❧ ❧
I do think that concerns/fears/anxieties about ChatGPT and its implications for students are premature, maybe overblown. The jury is still out on that, so we’ll have to see. That said, however, even if our worst fears about plagiarism, lazy research, etc. are justified, there is probably not much to be done about it—unless there emerges some accessible way to confirm that such short cuts have taken place (the way it is fairly easy now to discover whether a student paper has been plagiarized, either from another student’s paper or from the internet). Moreover—and maybe I’m being naive about this—the only people they are cheating are themselves; the point of research papers is not only to prove to someone else that you’ve done the work, but to actually do the work. After all, one goes to college (and pays a lot of money to do so) to acquire knowledge and skills. I will feel sad if I have awarded a student an “A” for work they did not really do; but I’ll feel even sadder for the student who did not take advantage of the opportunity to learn, and for what that portends for their future.
❧ ❧ ❧
In my graduate seminar on AI Ethics last week, a student said, ‘Yeah, AI is stupid. But humans are stupid-er.’ ChatGPT, even in its latest incarnation, is no more than a clever party trick. Back in the mid-1960s Joseph Weizenbaum’s ELIZA demonstrated how easily taken in humans are. Nothing has changed. That said, does ChatGPT pose any threat to academia in 2023 and beyond? Other than acting as a pointless distraction, I don’t think so. As Professor Nadler notes, students cheat and will continue to cheat on their written assignments, thereby damaging themselves, not academia itself. As for the worry that students will be awarded passing grades for written work produced by ChatGPT, I am more concerned that I might ask an innocent student who has been ‘taught to the test’ or whose work has been assessed relative to ‘rubrics’ whether she deployed ChatGPT. In other words, college students already write as badly as ChatGPT.
❧ ❧ ❧
I do not believe that large language models (LLMs) like ChatGPT pose any serious threat to academia. To begin with, there is a lot of naive hysteria and cynical hype surrounding the apparent success of these programs at generating seemingly intelligent prose. I think the semblance of intelligence is illusory and that any further sophistication of the illusion will simply amplify the already obvious inability of such systems either to know facts about the world or to think rationally. Adding increasingly massive amounts of input data, that is, will no more approximate human intelligence than adding propellers will get an airplane to the moon.
As for their practical consequences, I think they are just another high-tech tool that can be used either for benign or nefarious purposes. On the benign side, the software might be handy as a kind of encyclopedia shortcut, that is, for accessing very generic, uncontroversial (and unoriginal) information on various already well-documented subjects. As for “writing” poems about, say, quantum mechanics in the style of William Wordsworth, I predict the novelty of such parlor tricks will wear off pretty quickly. On the nefarious side, the software will no doubt make it easier for some people to dissemble, fake, lie, and cheat. But hopefully the technology will continue to be as good at detecting the BS as it is at generating it, and we will all become better connoisseurs of genuine intelligence and creativity in contrast to mindlessly manufactured junk.
❧ Please refer to Part I of “24 Philosophy Professors React to ChatGPT’s Arrival.”