On Generative AI

Jentery Sayers (he / him)
17 March 2023 | UVic
“Generative AI” Panel | The Matrix Institute
with Callum Curtis, Yun Lu, Valerie Irvine, and George Tzanetakis
Moderated by Neil Ernst (Computer Science)

Generative AI

Prompt for the Panel: Prepare a brief opening statement (2-4 minutes) to engage discussion about DALL-E, ChatGPT, and academia. “What is the current state of these tools? What do we anticipate happening, and how should academia and industry prepare?”

Opening Statement (2-4 minutes)

This morning I wish to respond to Neil’s prompt, which asked how we might prepare for generative AI, with two suggestions I believe are actionable.

1) My first suggestion is that we enrich our language models and even entertain the possibility that language cannot be modeled. A starting point here may be academic debates during the 1980s about the question of intent. Many argued, and I agree, that speech is not the same as language. I may mean what I say, but I cannot say exactly what I mean. Why? Because language is fundamentally social, and — stronger — language and history speak through me when I communicate, hence phenomena such as semantic drift and polysemy, which I cannot control. As Yale English professor Paul Fry reminds us, “[l]anguage is an unintentional speech” (2012). Thinking of language this way, as something incredibly difficult if not impossible to model, prepares us to bypass the proverbial game of “Was it written by a human or a bot?” The most recent GPT-4 technical report suggests, for instance, that the model could be power-seeking (yikes!), and it of course produces content that can be circulated, edited, recontextualized, and so on. What matters most, then, is that AI is a relation, not whether a bot says exactly what it means or even means what it says.

2) Following this point is my second and final one: we can prepare for generative AI and engage it by keeping it social: by organizing around it and negotiating with it, if only by slowing it down a bit. The reality is that most of us, and arguably all academics, have practically no control over the technologies foisted upon us. When I teach media studies at UVic, I like to remind students that technologies do not fall suddenly from the sky and produce ripple effects. They have complex cultural histories before they congeal into the gadgets we use. Yet, as ChatGPT demonstrated, technologies may nevertheless feel as if they’ve arrived from nowhere and may thus put us in a state of alarm attended by DIY QA testing. Does it work? What does it do? How do I recognize it? AI-cloaking initiatives such as the Glaze Project at the University of Chicago interest me for this reason, as do ongoing unionization efforts involving new technologies and platforms. When we talk about generative AI, we are also talking about deskilling and precarity: about labour developments that will disproportionately affect those who are already vulnerable. Filmmaker Hito Steyerl recently said that generative AI is really just an onboarding tool for tech conglomerates (2023). I agree, and I imagine many of us will be onboarded into an occupational dependency whether we like it or not. Perhaps what I’m about to say simply suggests it’s a Friday and I’m naively optimistic; still, I maintain hope that a critical stance on AI means not only asking how it works, what it values, by whom it’s built, and whom it benefits most but also reminding each other that it’s a lived experience and not merely a tool.

References and Further Reading

Bajohr, Hannes. “On Artificial and Post-Artificial Texts.” Self-published, 2023. (For more on intent, including communicative intent.)

Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT ’21, 3-10 March 2021. (For more on language models and their risks.)

Brown, Kate. “Hito Steyerl on Why NFTs and A.I. Image Generators Are Really Just ‘Onboarding Tools’ for Tech Conglomerates.” Artnet, 10 March 2023. (Context for Steyerl’s comments on AI and art. Hat tip to Justin Lincoln at Whitman College for pointing me to this one.)

Fry, Paul. Theory of Literature. Yale University Press, 2012.

Kirschenbaum, Matthew. “Prepare for the Textpocalypse.” The Atlantic, 8 March 2023. (For more on text as a medium and ChatGPT output as “content.” Hat tip to Matt for sharing this piece with me.)

Microsoft News Center. “Introducing Microsoft 365 Copilot: Your Copilot for Work.” Microsoft, 16 March 2023. (“Harnessing the power of AI, Microsoft 365 Copilot turns your words into the most powerful productivity tool on the planet.”)

OpenAI. “GPT-4 Technical Report.” 2023. (For more on the GPT-4 model. Also note: “GPT-4 was used in the following ways: to help us iterate on LaTeX formatting; for text summarization; and as a copyediting tool.”)

Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Duke University Press, 2003. (For more, including lots of history, on why technology doesn’t fall from the sky.)

University of Chicago SAND Lab. The Glaze Project. Self-published, 2023. (For more on AI-cloaking. Hat tip to Miriam Posner at UCLA for pointing me to this project.)

Wilkins, Alex. “AI Trained on YouTube and Podcasts Speaks with Ums and Ahs.” New Scientist, 9 March 2023. (For more on AI mimicking unintentional speech. Hat tip to Sounding Out! for pointing me to this one.)

Wolfram, Stephen. “What Is ChatGPT Doing . . . and Why Does It Work?” Self-published, 14 February 2023. (For more on how ChatGPT works. Hat tip to Alex Gil at Yale for pointing me to this one.)