Inside the Mind of an AI: How To Read Machine Generated Literature

On Thursday the 9th of February, the Trinity Long Room Hub was at capacity as it hosted Prof N. Katherine Hayles, literary critic and research professor at the University of California, Los Angeles. The talk was supported by a Wellcome Trust ISSF Award for Neurohumanities at Trinity College Dublin and hosted by the Trinity Long Room Hub Arts & Humanities Research Institute. Organised by TCD’s Department of History of Art and Architecture in the School of Histories and Humanities as part of the Neurohumanities Lecture Series, Prof Hayles’ talk “Inside the Mind of an AI: How To Read Machine Generated Literature” focused on GPT-3 (Generative Pretrained Transformer-3), the first AI able to produce human-competitive texts. To an attentive and engaged audience, she shared what GPT-3 means for literary criticism and the creation of literature.

Prof Hayles began by considering what it means to read in a world with AI: alongside the human ability to read, there are now neural networks that can process natural languages. This means we have two “significantly different ways to read.” After establishing the pathways that AI uses to address the long-distance dependencies of natural languages, she asked the important questions of how literary critics should read and respond to GPT-3, at a time when machines are writing articles, novels, and poems, and their output is likely to increase exponentially. How should we interpret machine-generated texts? What does it mean if a machine talks about human experiences?

The answers to these questions will come from the arts and humanities. The fact that GPT-3 can detect and reproduce genre and style, Prof Hayles argued, has “wide-ranging philosophical and political implications” that should be analysed. She also made the important observation that uncertainty regarding machine intentionality is “precisely the kind of problem that literary critical techniques were invented to solve.”

These fundamental issues were addressed as she detailed various strategies that explore the differences and similarities between human and AI language generation. This started with discussion of the Null Strategy, based on the scientific “null hypothesis,” which disregards data differences between populations. This would mean that differences between the experiences of humans and AI could be disregarded when it comes to creating meaning. This strategy, Prof Hayles argued, is supported by postmodern theory such as Roland Barthes’ “Death of the Author,” which ignores a text’s origin and focuses on meaning created within the text itself. However, in Prof Hayles’ own view, “knowing something about the author enables a more precise and insightful reading.” The differences between humans and AI are too big to be ignored.

Following this, Prof Hayles illustrated arguments for evidence of sentience and creativity in the processes of literature-generating AI. She gave examples such as the chatbot LaMDA, one of whose engineers was fired from Google after claiming it had achieved sentience, and science fiction author Kathryn Cramer, who uses GPT-3 in her work to detect patterns in the programme’s responses, such as the different voices that emerge from the AI’s output. In doing so, she highlighted the importance of identifying the creative processes of AI.

Instead of the Null Strategy, then, Prof Hayles proposed various other strategies for analysing AI-generated literature, which instead acknowledge the literature’s origin and sources of creativity. These include identifying the relationship of an AI’s output to its source texts and other influences, and discerning the ideological biases of its output. She invited the audience to consider whether a literature-generating AI can, by building upon the source text and adding nuance to the instructions it is given, “develop its own kind of tacit knowledge through its own networks of indexical relationships.”

Prof Hayles concluded with her argument that “literary critical techniques are well suited to distinguish between the machine’s programmes and the meanings that people project onto the text.” The talk was followed by a lively Q&A session which addressed issues such as GPT-3 and intellectual property, AI’s potential for curiosity, and literary genre. Prof N. Katherine Hayles certainly highlighted how important critical theory is, and how much more important it will become, as technology continues to advance.

Human+ is a five-year international, inter- and transdisciplinary fellowship programme conducting groundbreaking research addressing human-centric approaches to technology development. Human+ is led by the Trinity Long Room Hub Arts and Humanities Research Institute and ADAPT, the Science Foundation Ireland Centre for Digital Content Innovation, at Trinity College Dublin. The programme is further supported by unique relationships with HUMAN+ Enterprise Partners.

The HUMAN+ project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 945447.