It was a welcome relief from the seemingly endless procession of redundant “critical” takes on AI to encounter Professor David Gunkel’s essay, “Deconstruction to the Rescue.” Finally, a perspective on Artificial Intelligence, Generative or otherwise, that does not smuggle through its criticism the freight train of pedestrian logocentrism. It took a while, but here we finally have it: something more than the usual wailing about “inaccuracy,” “falsehood,” “distortion,” “fakeness,” and so on.
Bouzid, Ahmed. 2023. “Deleuze to the Rescue.” Social Epistemology Review and Reply Collective 12 (7): 7–12. https://wp.me/p1Bfg0-7Vq.
🔹 The PDF of the article gives specific page numbers.
❦ Bouzid, Ahmed. 2023. “24 Philosophy Professors React to ChatGPT’s Arrival.” Social Epistemology Review and Reply Collective 12 (3): 50–65.
In this reaction, I wish to engage Professor Gunkel’s essay by embracing its general deconstructivist approach but do so by unfolding a Deleuzian perspective on some of the key points that Professor Gunkel offers.
The main sin of AI, writ large and in its manifold manifestations, consists, so we are told by the contrabandistas whom Professor Gunkel calls out, in the “faulty” and therefore “dangerous” representations that it delivers of some pure—or at least, purer than what is offered by this AI—truth. We must therefore be careful, wary, and at least cautious when engaging such an AI, and until this AI has converged to where it needs to be, we must remain vigilant. So much so, indeed, that a pause may be needed so that we may take stock of how to proceed cautiously, assess what we have wrought, and at least begin to think about doing something (we are never told who that ‘we’ is supposed to include, how to ‘begin,’ or what exactly that ‘something’ could be).
Puzzlingly, however, we are assured that there is no need to panic right now, for this threat will come to fruition only down the line and not just yet, if left unchecked. For now, we may carry on, business as usual. Although “we” (that is, the experts among us who know) should, out of an abundance of caution, at least begin investigating, talking and thinking about the monster that is lumbering towards us, intent on our collective destruction as humans. And so forth.
The glaring problem, obviously, is that these “experts” upon whom we are expected to rely for this investigation are the very same proud parents of the creature they promise to carefully inspect and tame. Hence the rub. These experts don’t roam this earth to protect their fellow humans from harm, they roam this earth to create wealth for themselves and, once they have done so, they rove the skies and the heavens to play God.
But these experts also understand that the other side—the multitudes under threat—are not an undifferentiated mass of gullible consumers: Among them are sentient citizens and thinkers and activists who feel that the threat presented by the AI merchants is real and who wish to mobilize the energies of civil society and the institutions that a healthy democracy relies upon to maintain its health, to counter such a threat.
So, what is an alert, unflappable AI merchant to do? The answer: Adopt a strategy that frames the perceived problem as one of optimization. There is a delta between where we are and where we need to be, yes, and that delta is too wide as things stand, yes, and the task at hand is therefore to narrow that delta as fast as possible, and before it’s too late. So, let’s get on with it, this optimization. (Crucial to note, our intrepid merchants wish to remind us, is that they too, the merchants of AI, are our sisters and brothers, our fellow humans, on our side in the divide between human and robot, between flesh-and-blood animals and silicon-and-digits artifacts, and are therefore as much under threat as we are.)
Now, I am not here to pretend that we have nothing to be alarmed about or that we should not begin to talk or plan or even to act in preparation—I wish we had done a lot of that when social media began infiltrating our world and lives. But I am here to side with Professor Gunkel in pointing out that the most common framing of our current challenge is, if I may put it in my own words, at the very least self-defeating. Indeed, it is not because of some deviation from “the truth,” or because of the possibility and even reality of “disinformation,” that we should be alarmed. No, the matter is far more concrete than that. What should alarm us is that these technologies (Generative AI, conversational interactive chatbots like ChatGPT, Big Data), in some of their uses, take the digital intrusion that has been ongoing for more than a decade now to a whole new level.
What we should be afraid of is the depth of micro-manipulation. Micro-manipulation made possible by the dynamic and ceaseless gathering of micro-data about human users, and by the coupling of this micro-data, collected through interactions between humans and smart, useful, and compelling chatbots, with powerful algorithms perpetually rendered more powerful by the big micro-data being collected. The fact that such a world of worlds of data is continually being amassed, and is privately owned by powerful interests, should be the main starting point of our alarm.
A critique of these technologies that focuses on the failure of these algorithms to represent some putatively objective truth that exists in some putatively platonic form—this logocentrism—not only misses the nature of the danger but in fact, as a result of its misguided gaze upon what is, as I will argue, not a threat but an opportunity, enables those who see in these technologies potent avenues for gathering enormous additional wealth to build up their machinery now, while we all discuss how to consider the dangerous future later, down the line. (Foremost among the vociferous logocentrists is Professor Gary Marcus, who is on a mission to help build “an AI that we can trust,” and who has been methodically compiling, in exquisite detail, the myriad ways in which ChatGPT and similar technologies have been failing.)
So, let us be clear about the nature of the authority that we need not only to question (as per Professor Gunkel) but also, and first, to expose—and expose in clear terms that describe not only their easy-to-pinpoint bottom lines but also what animates them as agents of the Leviathan to which they owe their fortunes—a Leviathan that does not understand good vs. evil, only good vs. bad; a sober Leviathan that must be shown, every three months, charts that had better display lines pointing in the direction of the top right quadrant, or else. It is this “else” that keeps our merchants awake at night, an “else” that one can accurately describe as nothing much more than this: “a slower than planned accumulation of the personal wealth of the troubled insomniac,” and not the health of the Leviathan as such, let alone some looming cataclysm that places “humanity as we know it” on the path of grave peril—but again, only down the line.
Immanence and Becoming
So then: if the logocentric take on and critique of AI is not much more than a useful distraction (useful to those who are focused on the monetization of everything), innocently adopted by those who do sincerely feel (and rightly so) that a real threat is lurking and looming, but who find themselves scuttled and neutralized by the tight embrace of clear-eyed merchants who cynically exploit this logocentric framing and turn it into an effective delay tactic (and, given its expected repetition, a strategy in effect), how is one to counter the logocentric take that these merchants wish us to embrace?
Enter Gilles Deleuze.
Logocentrism, by its unspoken insistence that intensional content flows from an intentional authority, indeed “has everything backwards and upside down,” as Professor Gunkel notes, casting the reader and the listener as mere receivers, passive consumers of prêt-à-manger edibles, when in reality they should be (and are) acting more akin to chefs of their own meals, or artisans of their own goods, with the artist—i.e., the creator of the content—a “post-production medium.” This logocentrism presumes a posited transcendence, available as a starting source, from which meaning flows downward into passive receptacles.
The key problematics within this framework then are reduced to ensuring two things: The purity of that which flows and the readiness of the receptacles to receive what flows to them. Logocentrism’s twin calls to action are to set up purity filters and to toughen up the receptacles to ensure that should impurities manage to seep through, the corrosive damage is minimal. Hence the various censorship moves and the continuous red flashing of “danger” and the call upon the information consumer to remain in a state of heightened vigilance.
Deleuze rejects this view of the reader as a mere, passive receptacle, and instead insists on a far more active, far more resilient and crucially, essentially creative creature. Instead of transactions (you give me this in exchange for that), Deleuze prefers encounters, and not between a superior, complete origin and an inferior incomplete destination striving for greater completeness through its acquisitive transactions, but between a sweltering body without organs that is ever changing and a self-sufficient immanence that encounters such a body and does as it pleases with it through a process of continuous productions and becomings.
In the case of, say, a technology such as ChatGPT, the matter plays itself out as follows. The logocentrist’s eyes are trained to spot fact-failures (for instance, that so-and-so was the CEO of a company when they never were), then to cry foul, and then to cast doubt on the integrity of the AI, with a call to action for “improving” such a system and setting up guardrails against willful “disinformation,” so that we may one day get ourselves to “trust” such an AI. In contrast, the Deleuzian is more interested in the ways that this new machine, this ChatGPT truc, can help one engage in novel and freeing ways with the heterogeneous forces embodied in this machine. The Deleuzian ethos would be to poke at this thing, to see what one gets, and then to do what one may want to do with it. For, remember, unlike Plato, Deleuze is not interested in how one ought to be or what one ought to do, but in what one may do and how one may be.
Actions and Lines of Flight
Framed in this way, then, ChatGPT should not be engaged with as if it were a dispenser of truths, but as a machine that may help one get at useful information, insights, and perspectives that in turn open new opportunities for different types of engagement. “Let No One Enter Who Is Seeking ‘Truth’” would be the Deleuzian slogan—but not one announced in a spirit of aspersion casting (as in: “This thing will misinform you”); rather, one of clearer characterization as to what sort of a machine one should expect to be working with. ChatGPT in this sense is not a destination but a jumping-off point, a place where you are triggered, where you can mash things up as you please, where you delight in the differences between what you expected and what you got, between what you know and what ChatGPT says you should know, and even between fact and fiction.
Indeed, some fruitful projects and investigations have already been enabled by Generative AI machines. If a robot gives me coherent answers every time I ask it something (even if factually faulty here and there, and sometimes answers that are not trivial at all and that, had they been given by a human being, would lead one to say that such a human being understood what was asked of them), does this mean that the robot understands what I am asking it? To most, the answer is a clear “No” or an “I don’t know”; to some (such as myself), it is a clear “Yes.” Either way, the follow-up questions of “Why not?” and “Why?” are now available for us to ask in ways that, without a bot that performs consistently and against which we can prime and sharpen our questioning, we would not be asking with the same playful, experimental spirit.
Equally, old questions are now open to new reframings. Take the notion of “respect”: should we respect the robot? As we delve into that question, we will have to ask what it means to be respectful, and whether focusing on the object of respect is the way to tackle the question, or whether it is perhaps the respecter who needs to be the locus of our attention. Similarly, the old Cartesian mind-body question finds a new lease on life, as do the questions of “sentience,” “identity,” “authorship,” and more. In fact, one could learn quite a bit about how one is being perceived, or how one is marketing oneself, from how ChatGPT fails, as for instance when it hallucinates.
Create and Do Differently
One would think that it should be obvious by now (and probably from the very start) that ChatGPT is not engaged in the business of truth delivery but rather in the project of enabling users to create textual artifacts. The most glaring indicator is the basic fact that ChatGPT does not provide any references when it delivers responses. It gives you a narrative and it steps aside. It is up to you to decide what to do next: accept as gospel truth what you read, or question it, or reject it, or use some other machine to investigate the veracity of some of its content. It is up to you and to what you are engaging the ChatGPT machine for.
But more than that: If you wish to summarize a one-thousand-word essay of yours into a short paragraph, it will do it, and it is up to you to decide whether the summary is good or not, or at least useful enough for your purposes to serve as a starting point. Perhaps you want the machine to translate your essay into Spanish. It does it, but you are not sure how good the translation is, and so you ask a friend who is a native speaker to go over it and make sure that it got it more or less right. Or perhaps it’s a tedious job description that you want help with, or maybe you wish it to help you rewrite an awkward-sounding sentence or soften the tone of an email. In none of these encounters are you expected to accept an alleged “truth” that may or may not be. All of these offer you jumping-off points and lines of flight.
As for differences: ChatGPT is a wonderful challenge to those writers who are now deemed easily replaceable by this new machine—marketers, PR agents, HR managers and even scholars who churn out their papers almost mechanically. Why are they deemed replaceable? Simply because they decided to make sameness, not difference, and certainly not Derrida’s différance, the backbone of how they produce their work. And so, those who are not differentiating, who are not stretching their humanity to deliver the new, who have tragically stopped becoming, are going to disappear (or hopefully find a new line of work). Unless, of course, they take their very nemesis by the horns and use it as the tool that it may be for them to help them deliver novelty that is singularly their own creative contribution.
In any case, lest I fall into another trap—let’s call it the enchantment trap of Deleuzian play—while sidestepping the logocentrism trap of stern judgment, I reiterate my bottom line call to action: We need to keep our critical eye, when we take a critical stand, on the basic fact that if you want to explain what motivates the merchants of AI and what the subtext of their pronouncements may be, don’t overcomplicate it. Simply do as Deep Throat advised: Follow the money.
Benjamin, Walter. 1969. Illuminations: Essays and Reflections. Edited by Hannah Arendt. Translated by Harry Zohn. New York: Schocken Books.
Claypool, Rick and Cheyenne Hunt. 2023. “‘Sorry in Advance!’ Rapid Rush to Deploy Generative A.I. Risks a Wide Array of Automated Harms.” Public Citizen 18 April. https://www.citizen.org/article/sorry-in-advance-generative-ai-artificial-intellligence-chatgpt-report/.
Conroy, J Oliver. 2022. “Power-Hungry Robots, Space Colonization, Cyborgs: Inside the Bizarre World of ‘Longtermism’.” The Guardian 20 November. https://www.theguardian.com/technology/2022/nov/20/sam-bankman-fried-longtermism-effective-altruism-future-fund.
Deleuze, Gilles. 1995 (1968). Difference and Repetition. Translated by Paul Patton. New York: Columbia University Press.
Deleuze, Gilles and Felix Guattari. 1987 (1980). A Thousand Plateaus: Capitalism and Schizophrenia. Translated by Brian Massumi. Minneapolis, MN: University of Minnesota Press.
Derrida, Jacques. 1967. De La Grammatologie. Les Editions de Minuit.
Fuller, Steve. 2018. Post-Truth: Knowledge as a Power Game. London and New York: Anthem Press.
Gunkel, David. 2021. Deconstruction. Cambridge, MA: MIT Press.
O’Brien, Matt. 2023. “AI Warning: Human Extinction Threat Should Be a ‘Global Priority,’ Say Experts.” Fast Company 30 May. https://www.fastcompany.com/90902749/ai-warning-human-extinction-threat-should-be-a-global-priority-say-experts.
Sullivan, Mark. 2023. “What Is the Real Point of All These Letters Warning about AI?” Fast Company 31 May. https://www.fastcompany.com/90902786/what-is-the-real-point-of-all-these-letters-warning-about-ai.
Challies, Tim. 2022. “Why Do Billionaires Want to Live Forever?” The Challies Blog 25 April. https://www.challies.com/articles/why-do-billionaires-want-to-live-forever/.
O’Brien, Matt. 2023. “Scientists Warn of AI Dangers but Don’t Agree on Solutions.” Associated Press 3 May. https://www.usnews.com/news/business/articles/2023-05-03/scientists-warn-of-ai-dangers-but-dont-agree-on-solutions.
Alombert, Anne. 2023. Schizophrénie Numérique: La Crise de L’esprit à l’Ère des Nouvelles Technologies. Editions Allia.
Center on Privacy and Technology. 2023. “Sam’s Plan to Too-Late Regulate.” Medium 1 June. https://medium.com/center-on-privacy-technology/sams-plan-to-too-late-regulate-2e91516369f9.
Tucker, Emily. 2023. “Deliberate Disorder: How Policing Algorithms Make Thinking About Policing Harder.” N.Y.U. Review of Law & Social Change 46: 86–116.
Hao, Karen. 2019. “Why AI Is a Threat to Democracy—And What We Can Do to Stop It.” MIT Technology Review 26 February. https://www.technologyreview.com/2019/02/26/66043/why-ai-is-a-threat-to-democracyand-what-we-can-do-to-stop-it/.
Barba, Paul. 2023. “Does ChatGPT Understand Language?” VentureBeat 25 February. https://venturebeat.com/ai/does-chatgpt-understand-language/.
Tigar, Daniel W. 2023. “On Respect for Robots.” Robonomics: The Journal of the Automated Economy 4: 37. https://journal.robonomics.science/index.php/rj/article/view/37.
Bouzid, Ahmed. 2021. “On Voice AI Politeness.” Voicebot.ai 29 May. https://voicebot.ai/2021/05/29/on-voice-ai-politness/.
Whang, Oliver. 2023. “Can Intelligence Be Separated from the Body?” The New York Times 11 April. https://www.nytimes.com/2023/04/11/science/artificial-intelligence-body-robots.html.
Metz, Cade. 2022. “A.I. Is Not Sentient. Why Do People Say It Is?” The New York Times 5 August. https://www.nytimes.com/2022/08/05/technology/ai-sentient-google.html.
Coeckelbergh, Mark and David J. Gunkel. 2023. “ChatGPT: Deconstructing the Debate and Moving Forward.” AI & Society 5 June: 1–11.
Lebow, Sara. 2023. “When ChatGPT Hallucinations Are ‘A Feature, Not a Bug’ for Marketers.” Insider Intelligence 23 May. https://www.insiderintelligence.com/content/chatgpt-hallucinations-feature-bug-marketers.