The Future of Intelligence and/or the Future of Unintelligibility

The following is an excerpt of my talk from the Locarno Film Festival, at the “Long Night of Dreaming about the Future of Intelligence” held August 9-10, 2023. (Animated imagery created with ModelScope Text to Video Synthesis demo, using text drawn from the talk itself.)

Thanks to Rafael Dernbach for organizing and inviting me to this event, and thanks to Francesco de Biasi and Bernadette Klausberger for help with logistics and other support. And thanks to everyone for coming out tonight. I’m really excited to be here with you, especially during this twilight hour, in this in-between space, between day and night, like some hypnagogic state between waking existence and a sleep of dreams. 

For over a century this liminal space of twilight has been central to thinking and theorizing the cinema and its shadowy realm of dreams, but I think it can be equally useful for thinking about the media transitions we are experiencing today towards what I and others have called “post-cinematic” media.

In the context of a film festival, the very occurrence of which testifies to the continued persistence and liveliness of cinema today, I should clarify that “post-cinema,” as I use the term, is not meant to suggest that cinema is over or dead. Far from it.

Rather, the “post” in post-cinema points to a kind of futurity that is being integrated into, while also transforming and pointing beyond, what we have traditionally known as the cinema.

That is, a shift is taking place from cinema’s traditional modes of recording and reproducing past events to a new mode of predicting, anticipating, and shaping mediated futures—something that we see in everything from autocorrect on our phones to the use of AI to generate trippy, hypnagogic spectacles. 

Tonight, I hope to use this twilight time to prime us all for a long night of dreaming, and thinking, maybe even hallucinating, about the future of intelligence. The act of priming is an act that sets the stage and prepares for a future operation.

We prime water pumps, for example, removing air from the line to ensure adequate suction and thus delivery of water from the well. We also speak of priming engines, distributing oil throughout the system to avoid damage on initial startup. Interestingly, when we move from mechanical, hydraulic, and thermodynamic systems to cybernetic and more broadly informatic ones, this notion of priming tends to be replaced by the concept of “training,” as we say of AI models. 

Large language models like ChatGPT are not primed but instead trained. The implication seems to be that (dumb) mechanical systems are merely primed, prepared, for operations that are guided or supervised by human users, while AI models need to be trained, perhaps even educated, for an operation that is largely autonomous and intelligent. But let’s not forget that artificial intelligence was something of a marketing term proposed in the 1950s (Dartmouth workshop 1956) as an alternative to, and in order to compete with, the dominance of cybernetics. Clearly, AI won that competition, and so while we still speak of computer engineers, we don’t speak of computer engines in need of priming, but AI models in need of training.

In the following, I want to take a step back from this language, and the way of thinking that it primes us for, because it also encodes a specific way of imagining the future—and the future of intelligence in particular—that I think is still up for grabs, suspended in a sort of liminal twilight state. My point is not that these technologies are neutral, or that they might turn out not to affect human intelligence and agency. Rather, I am confident in saying that the future of intelligence will be significantly different from intelligence’s past. There will be some sort of redistribution, at least, if not a major transformation, in the intellective powers that exist and are exercised in the world.

I am reminded of Plato’s Phaedrus, in which Socrates recounts the mythical origins of writing, and the debate that it engendered: would this new inscription technology extend human memory by externalizing it and making it durable, or would it endanger memory by the same mechanisms? If people could write things down, so the worry went, they wouldn’t need to remember them anymore, and the exercise of active, conscious memory would suffer as a result.

Certainly, the advent of writing was a watershed moment in the history of human intelligence, and perhaps the advent of AI will be regarded similarly. This remains to be seen. In any case, we see the same polarizing tendencies: some think that AI will radically expand our powers of intelligence, while others worry that it will displace or eclipse our powers of reason. So there is a similar ambivalence, but we shouldn’t overlook a major difference, which is one of temporality (and this brings us back to the question of post-cinema).

Plato’s question concerned memory and memorial technologies (which includes writing as well as, later, photography, phonography, and cinema), but if we ask the question of intelligence’s future today, it is complicated by the way that futurity itself is centrally at stake now: first by the predictive algorithms and future-oriented technologies of artificial intelligence, and second by the potential foreclosure of the future altogether via climate catastrophe, possible extinction, or worse—all of which is inextricably tied up with the technological developments that have led from hydraulic to thermodynamic to informatic systems. To ask about the future of intelligence is therefore to ask both about the futurity of intelligence as well as its environmentality—dimensions that I have sought to think together under the concept of post-cinema.

In my book Discorrelated Images, I assert that the nature of digital images does not correspond to the phenomenological assumptions on which classical film theory was built. While film theory is based on past film techniques that rely on human perception to relate frames across time, computer generated images use information to render images as moving themselves. Consequently, cinema studies and new media theory are no longer separable, and the aesthetic and epistemological consequences of shifts in technology must be accounted for in film theory and cinema studies more broadly as computer-generated images are now able to exceed our perceptual grasp. I introduce discorrelation as a conceptual tool for understanding not only the historical, but also the technological specificity, of how films are actively and affectively perceived as computer generated images. This is a kind of hyperinformatic cinema – with figures intended to overload and exceed our perceptual grasp, enabled by algorithmic processing. In the final chapter of the book, I consider how these computer-generated images have exceeded spectacle, and are arguably not for human perception at all, thus serving as harbingers of human extinction, and the end of the environment as defined by human habitation.

At least, that is what you will read about my book if you search for it on Google Books — above, I have only slightly modified and excerpted the summary included there. Note that this is not the summary provided by my publisher, even though that is what Google claims. I strongly suspect that a computer, and not a human, wrote this summary, as the text kind of makes sense and kind of doesn’t. I do indeed argue that computer-generated images exceed our perceptual grasp, that their real-time algorithmic rendering and futural or predictive dimensions put them, at least partially, outside of conscious awareness and turn them into potent vectors of subjectivation and environmental change. But I honestly don’t know what it means to say that “computer generated images use information to render images as moving themselves.” The repetition of the word images makes this sentence confusing, and the final words are ambiguous: are these supposed to be “self-moving images,” or images that, themselves, are moving? Or do the images use information to render themselves as moving images? What would that mean? The images are self-rendering? There is a multilayered problem of intelligibility involved, despite the fact that the sentences are more or less grammatical. The semantic ambiguities, the strange repetitions, and the feeling that something is just a little off are tell-tale signs of AI-generated text. This is not full-blown “hallucination,” as they say when AI just makes things up, but instead a kind of twilight recursion, suspended between the past of the training data and the future of the predictive algorithm, generating a sleepy, hypnagogic loop or a quasi-lucid, semi-waking dream.

But that summary was generated back in 2020. Since then, with GPT and other tools proliferating, we have witnessed a quantum leap in the intelligibility of AI-generated texts. In preparation for this event, I asked ChatGPT to summarize several of my books and to explain key concepts and arguments I made in them. The results were much better than what I just discussed (even though I was using the basic version that runs on GPT-3.5, not the more advanced GPT-4). Asked to explain my theory that “media are the originary correlators of experience,” the algorithm responded: “In this context, ‘originary’ suggests that media have been present from the beginning of human existence and have continuously evolved alongside our species. They are ingrained in our social and cultural development and have become integral to how we make sense of the world. […] Whether it’s language, art, writing, photography, film, or digital technology, each medium influences and organizes our experiences, constructing the framework through which we navigate reality.” That’s not bad, and it gets at what I’m calling the environmentality of media, including the medium or milieu of intelligence. 

We could say, then, that artificial intelligence technology functions as a contemporary manifestation of the correlation between media and human experience. ChatGPT represents a significant leap in the relationship between humans and technology in the digital age. As a sophisticated language model, it mediates human interaction with information, communication, and even decision-making processes. ChatGPT is an intermediary that transforms the way we engage with knowledge and ideas, redefining the boundaries between human and machine. As an AI language model, ChatGPT embodies the fusion of the organic (human intelligence) and the artificial (machine intelligence). This fusion blurs the lines between human creativity and algorithmic generation, questioning traditional notions of authorship and creativity.

The only problem, though, is that everything I just said about ChatGPT was written by ChatGPT, which I asked to speculate, on the basis of my books, about what I would say about large language model AIs. The impersonation is competent, and even clarifying, as it brings out implications of my previous thinking in transferring them to the new case. Significantly, it points the way out of the impasse I described earlier with reference to Plato’s Phaedrus: AI will neither simply empower nor simply imperil human intelligence but will fundamentally alter it by transforming the parameters or environment of its operation. 

The fact that ChatGPT could write this text, and that I could speak it aloud without any noticeable change in my voice, style, or even logical commitments, offers a perfect example of the aforementioned leap in the intelligibility of AI-generated contents. Intelligibility is of course not the same as intelligence, but neither is it easily separated from the latter. Nevertheless, or as a result, I want to suggest that perhaps the future of intelligence depends on the survival of unintelligibility. This can be taken in several ways. Generally, noise is a necessary condition, substrate, or environment for the construction of signals, messages, or meanings. Without the background of unintelligible noise, meaningful figures could hardly stand out as, well, meaningful. In the face of the increasingly pervasive—and increasingly intelligible—AI-generated text circulating on the Internet (and beyond), Matthew Kirschenbaum speaks of a coming Textpocalypse: “a tsunami of text swept into a self-perpetuating cataract of content that makes it functionally impossible to reliably communicate in any digital setting.” Kirschenbaum observes: “It is easy now to imagine a setup wherein machines could prompt other machines to put out text ad infinitum, flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word.” 

Universal intelligibility, in effect, threatens intelligence, for if all text (or other media) becomes intelligible, how can we intelligently discriminate, and how can we cultivate intelligence? Cultivating intelligence, in such an environment, requires exposure to the unintelligible, that which resists intellective parsing: e.g. glitches, errors, and aesthetic deformations that both expose the computational infrastructures and emphasize our own situated, embodied processing. Such embodied processing precedes and resists capture by higher-order cognition. The body is not dumb; it has its own sort of intelligence, which is modified by way of interfacing with computation and its own sub-intellective processes. In this interface, a microtemporal collision takes place that, for better or for worse, transforms us and our powers of intelligence. If I emphasize the necessary role of unintelligibility, this is not (just) about protecting ourselves from being duped and dumbed by all-too-intelligible deepfakes or the textpocalypse, for example; it is also about recognizing and caring for the grounds of intelligence itself, both now and in the future.

And here is where art comes in. Some of the most intelligent contemporary AI-powered or algorithmic art actively resists easy and uncomplicated intelligibility, instead foregrounding unintelligibility as a necessary substrate or condition of possibility. Remix artist Mark Amerika’s playful/philosophical use of GPT for self-exploration (or “critique” in a quasi-Kantian sense) is a good example; in his book My Life as an Artificial Creative Intelligence, coauthored with GPT-2, and in the larger project of which it is a part, language operates beyond intention as the algorithm learns from the artist, and the artist from the algorithm, increasingly blurring the lines that nevertheless reveal themselves as seamful cracks in digital systems and human subjectivities alike. The self-deconstructive performance reveals the machinic substrate even of human meaning. In her forthcoming book Malicious Deceivers, theater and performance scholar Ioana Jucan offers another example, focusing on the question of intelligibility in Annie Dorsen’s algorithmic theater. For example, Dorsen’s play A Piece of Work (2013) uses Markov chains and other algorithms to perform real-time analyses of Shakespeare’s Hamlet and generate a new play, different in each performance, in which human and machinic actors interface on stage, often getting caught in unintelligible loops that disrupt conventions of theatrical and psychological/semantic coherence alike. 
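The Markov-chain technique behind such algorithmic remixing can be glossed with a toy example. A word-level Markov chain records which words follow which in a source text, then walks those transitions to generate new, often semi-coherent sequences from the old material. The sketch below is my own minimal illustration in Python (not Dorsen’s actual code), trained on a snippet of Hamlet’s soliloquy:

```python
import random

def build_chain(text, order=1):
    """Map each sequence of `order` words to the list of words that follow it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=15, seed=0):
    """Walk the chain from a random starting key, emitting one word per step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        options = chain.get(tuple(out[-len(key):]))
        if not options:  # dead end: no recorded continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

hamlet = (
    "to be or not to be that is the question "
    "whether tis nobler in the mind to suffer "
    "the slings and arrows of outrageous fortune "
    "or to take arms against a sea of troubles"
)
chain = build_chain(hamlet, order=1)
print(generate(chain, length=15))
```

With a low order like this, the chain frequently loops back through shared words (“to,” “the,” “of”), producing exactly the kind of recursive, semi-intelligible remix of the source that disrupts semantic coherence while remaining recognizably Shakespearean in vocabulary.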

Moreover, a wide range of AI-generated visual art foregrounds embodied encounters that point to the limits of intellect as the ground of intelligence: as I have discussed in a recent essay in Outland magazine, artists like Refik Anadol channel the sublime as a pre- or post-intellective mode of aesthetic encounter with algorithms; Ian Cheng uses AI to create self-playing videogame scenarios that, because they offer no point of interface, leave the viewer feeling sidelined and disoriented; and Jon Rafman channels cringe and the uncomfortable underbellies of online life, using diffusion models like Midjourney or DALL-E 2 to illustrate weird copypasta tales from the Internet that point us toward a visual equivalent of the gray goo that Kirschenbaum identifies with the textpocalypse. These examples are wildly divergent in their aesthetic and political concerns, but they are all united, I contend, in a shared understanding of environmentality and noise as a condition of perceptual engagement; they offer important challenges to intelligibility that might help us to navigate the future of intelligence.

To be continued…

A Long Night of Dreaming about the Future of Intelligence — at Locarno Film Festival, August 9, 2023

On August 9, I will be speaking at the Long Night of Dreaming about the Future of Intelligence, which is taking place from dusk to dawn (8:44pm to 6:17am) at the Locarno Film Festival in Switzerland. I was asked to give a pithy statement of my contribution, and I settled on this:

“The future of intelligence depends crucially on the survival of unintelligibility.”

I’m still working out what this means, and if (and how) it’s even correct, but it’s prompted by some thoughts about the quantum leap forward that generative AI has recently made in terms of producing “intelligible” text (and other contents). Intelligibility is of course not the same as intelligence. Meanwhile, some of the most intelligent art using these new technologies works against the grain of “innovation,” foregrounding instead the unintelligible noise upon which these algorithms depend.

Here’s more info about the Long Night of Dreaming from their website:

On Wednesday, August 9th, “A Long Night of Dreaming about The Future of Intelligence” takes place at the Locarno Film Festival. From sunset to sunrise, Festival guests and visitors are invited to learn and dream together about possible futures of intelligence. Guided by researchers, artists, and cinephiles, these questions will be addressed: how do different forms of artificial and ecological intelligence manifest today? How might intelligence change in the future? And what is the role of cinema in shaping intelligence and rendering it visible? For the duration of an entire night, emerging forms of intelligence and their impact on society can be discussed and experienced in talks, workshops and performances.

The Long Night is a collaboration between the Locarno Film Festival, BaseCamp and the Università della Svizzera italiana (USI). It is supported by Stiftung Mercator Schweiz. The event is a successor to “The 24h long conversation on The Future of Attention” at Locarno75. Like last year, it is curated by researcher and futurist Rafael Dernbach.

“Our image of intelligence has become a feverish dream, lately. Generative Artificial Intelligence has opened up a world of wondrous pictures, sounds and texts. We are astonished, amused, or disturbed by these creations. And by their loud promises of a radically different future. At the same time, ecological critique and its images of devastated landscapes, anticipating forests and networking fungi challenge our concept of intelligent behavior: Have we neglected non-human forms of intelligence for too long? Might fungi be more capable of solving certain problems than human minds? Cinema, with its deep relation to dreams, has a strong influence on what we perceive as intelligence.”

During the Long Night, leading researchers in the field of cinema and intelligence such as Shane Denson (Stanford University) and Kevin B. Lee (USI) will share their research. Filmmakers such as Gala Hernández López will give insights into their work with emerging technologies. And designers such as Fabian Frey and Laura Papke will create intimate learning encounters to experience different forms of intelligence and explore its futures.

Inspired by cinema’s deep relation with dreams – but going far beyond the world of moving images – this night creates a unique opportunity for exchange about intelligence from artistic as well as scientific perspectives. It offers the chance for unexpected and memorable encounters with guests of the Locarno Film Festival. The exploratory journey starts on August 9th at sunset, 20:44 – and ends nine hours later on August 10th at sunrise, 6:17. Every full hour a new encounter, talk, performance or experience will take the lead, and visitors can join throughout the night.

The Long Night of Dreaming is open to anyone who is interested (free admission) and will take place at BaseCamp Istituto Sant’Eugenio (Via al Sasso 1, Locarno). The detailed program will soon be available here.

Notes toward a Phenomenology of AI Art

Jon Rafman, Counterfeit Poast, 2022. 4K stereo video, 23:39 min. MSPM JRA 49270. Film still.

Today I have a short piece in Outland on AI art and its embodied processing, as part of a larger suite of articles curated by Mark Amerika.

The essay offers a first taste of something I’m developing at the moment on the phenomenology of AI and the role of aesthetics as first philosophy in the contemporary world — or, AI aesthetics as the necessary foundation of AI ethics.

OUT NOW: Post-Cinematic Bodies (+ pics from book launch event)

My book Post-Cinematic Bodies is now out!

An open-access version of the book can be downloaded for free from meson press (here), and a limited number of print copies are available for purchase at Hopscotch Reading Room in Berlin, where the book launch was held on July 3. Paperbacks will be more widely available soon; the book is currently listed on Amazon Germany, and it should be appearing for other regions in the coming weeks. Stay tuned!

Here are some pictures from the book launch, where I was in conversation with Mark Hansen. Turnout was great, and it was lots of fun!

In conversation with Mark Hansen
Trying to answer one of Mark’s tough questions!
Reading from the book
Enjoying a cool beverage
How many famous media theorists can you spot?
Mark B. N. Hansen
Francesco Casetti
Eylül İşcen and Karin Denson
Selfie with friends
Dinner afterwards: Karin Denson, Shane Denson, Mark Hansen, Francesco Casetti, Bernard Geoghegan, and Siddhartha Lokanandi

BOOK LAUNCH UPDATE: New Date (July 3) and Location! In conversation with Mark B. N. Hansen

Please note: Due to factors outside of my control, the book launch event for Post-Cinematic Bodies, originally scheduled for this Thursday June 29, has been postponed to next Monday, July 3 at 7pm.

I am happy to announce that I will be in conversation with Mark B. N. Hansen!

Please also note the change of venue, to the Kurfürstenstraße location of Hopscotch Reading Room!

BOOK LAUNCH! June 29, 2023: Hopscotch Reading Room, Berlin

[UPDATE: POSTPONED TO JULY 3 — MORE INFO HERE]

On Thursday, June 29, Hopscotch Reading Room (Gerichtstraße 43 in the Wedding district of Berlin) will be hosting a book launch event for my new book Post-Cinematic Bodies — which will be out both in print and open-access digital formats from meson press. There will be paperbacks available for purchase at the launch, and they’ll be more widely available soon afterwards. If you’re in town, come out around 7pm for a short reading, discussion, and drinks!

[UPDATE: POSTPONED TO JULY 3 — MORE INFO HERE]

“AI Art as Tactile-Specular Filter” at Film-Philosophy Conference 2023

Artwork by Agnieszka Polska

On Wednesday, June 14, I’ll be presenting a paper called “AI Art as Tactile-Specular Filter” at the Film-Philosophy Conference at Chapman University (in Orange County, CA). It’s the first time I’ll be attending the conference, which is usually held in the UK, and I am excited to get to know the association, meet up with old and new friends, and hear their papers. The abstract for my paper is below:

AI Art as Tactile-Specular Filter

Though often judged by its spectacular images, AI art needs also to be regarded in terms of its materiality, its temporality, and its relation to embodied existence. Towards this end, I look at AI art through the lens of corporeal phenomenology. Merleau-Ponty writes in Phenomenology of Perception: “Prior to stimuli and sensory contents, we must recognize a kind of inner diaphragm which determines, infinitely more than they do, what our reflexes and perceptions will be able to aim at in the world, the area of our possible operations, the scope of our life.” This bodily “diaphragm” serves like a filtering medium out of which stimulus and response, subject and object emerge in relation to one another. The diaphragm corresponds to Bergson’s conception of affect, which is similarly located prior to perception and action as “that part or aspect of the inside of our bodies which mix with the image of external bodies.” For Bergson, too, the living body is a kind of filter, sifting impulses in a microtemporal interval prior to subjective awareness. In his later work, Merleau-Ponty adds another dimension with his conception of a presubjective écart or fission between tactility and specularity, thus complexifying the filtering operation of the body. With both an interiorizing function (tactility) and an exteriorizing one (specularity), the écart lays the groundwork for what I call the “originary mediality” of flesh—and a view of mediality itself which is always tactile in addition to any visual, image-oriented aspects. This is especially important for visual art produced with AI, as the underlying algorithms operate similarly to the body’s internal diaphragm: as a microtemporal filter that sifts inputs and outputs without regard for any integral conception of subjective or objective form. 
At the level of its pre-imagistic processing, AI’s external diaphragm thus works on the body’s internal diaphragm and actively modulates the parameters of tactility-specularity, recoding the fleshly mediality from whence images arise as a secondary, precipitate form.

Media Aesthetics V || Rhetoric, Media, & Publics Summer Institute 2023

This July, I am excited to be one of the faculty at the Media Aesthetics Summer Institute at Northwestern University, along with Nico Baumbach, Dahye Kim, Hannah Zeavin, and Chenshu Zhou. Please consider applying if this intensive, interdisciplinary workshop could benefit your work. The call for applications and further info are below:

Call for Applications

2023 Summer Institute in Rhetoric, Media, and Publics

Northwestern University, Evanston, IL 60208 
In Person, July 17–21, 2023 
The deadline for applications is Tuesday June 6, 2023 

Media Aesthetics V
The annual Rhetoric, Media, and Publics Summer Institute at Northwestern University is scheduled to be held on July 17-21, 2023 (with arrival July 16 and departure July 22). 

Institute conveners are Dilip Gaonkar (Rhetoric, Media, and Publics, Northwestern University) and James J. Hodge (English, Northwestern University).

The theorization of media often begins with a story about the history of the senses and the sensorium and how that history might be understood in terms of the ways new technologies transform our individual and collective abilities to see, hear, and communicate. The 21st-century computational saturation of culture by mediated forms as the infrastructure of ordinary life poses new challenges to this project. While many projects emphasize the algorithmic and technical dimensions of the internet age, the media aesthetics project (now in its 5th year) is devoted to exploring ordinary experience. How, for instance, does the rise of internet culture into culture as such bring into being new forms of social belonging, personhood, and collective desire? What aesthetic forms — new or old — grant the most critical traction on grasping our historical present? What critical/interpretive languages do we need to devise to respond constructively to the politically vexed and culturally fragmented ethos of the present? In this project, we hope to explore and interrogate the mediated experience of the present as it mutates, propelled by the rapidly shifting dynamics of capitalist modernity, and while mutating both discloses and conceals the possibilities and perils before us.

Institute Format and Application Process 

The institute will consist of five days of presentations and discussions led by visiting scholars and Northwestern faculty. This year’s visiting scholars include: Nico Baumbach (Columbia University), Shane Denson (Stanford University), Hannah Zeavin (Indiana University), and Chenshu Zhou (University of Pennsylvania). This year’s contributing Northwestern University faculty includes Dahye Kim (Asian Languages and Cultures).

The institute is sponsored by the Center for Global Culture and Communication (CGCC), an interdisciplinary initiative of Northwestern University’s School of Communication. The CGCC will subsidize transportation (up to $250), lodging (double-occupancy), and some meals (breakfast and lunch every day and two group dinners) for admitted students. Applicants should send a brief letter of nomination from their academic advisor, along with a one-page statement explaining their interest in participating in this year’s institute, to the summer institute coordinator Bipin Sebastian (bipinsebastian@u.northwestern.edu). We will adopt a policy of rolling admissions. Priority will therefore be granted to strong applications that are submitted in a timely fashion, preferably by June 6, 2023. All inquiries should be directed to Bipin Sebastian. 

Summer Institute Schedule (tentative):

Monday 7/17

 Welcome and Introductions (am): Dilip Gaonkar & James J. Hodge

 Shane Denson talk (pm): “Of Algorithms, Aesthetics, and Embodied Existences” 

Tuesday 7/18 

 Denson workshop (am) 

 Chenshu Zhou talk (pm): “The Boredom and Excitement of Live Streaming”  

 Dahye Kim talk (pm): “Korean Writing in the Age of Multilingual Word Processing: Reterritorialization of Scripts and the Cultural Technique of Writing” 

Wednesday 7/19 

 Zhou workshop (am) 

 Nico Baumbach talk (pm): “Conspiracy as Theory, Theory as Conspiracy” 

Thursday 7/20

 Baumbach workshop (am) 

 Hannah Zeavin talk (pm): “Screening Mother, Coding Baby: Attachment, Deprivation, and the American Prison”

Friday 7/21 

 Zeavin workshop (am)

Faculty Bios:

Nico Baumbach is Associate Professor of Film and Media Studies at Columbia University. His research and teaching focus on critical theory, film and media theory, documentary, and the intersection of aesthetic and political philosophy. He is the author of Cinema/Politics/Philosophy (Columbia University Press, 2019) and The Anonymous Image: Cinema Against Control (Columbia University Press, forthcoming). He is currently working on a book on the relationship between critical theory and conspiracy theory.

Shane Denson is Associate Professor of Film and Media Studies and, by Courtesy, of German Studies and of Communication at Stanford University, where he also serves as Director of the PhD Program in Modern Thought & Literature. His research interests span a variety of media and historical periods, including phenomenological and media-philosophical approaches to film, digital media, and serialized popular forms. He is the author of Post-Cinematic Bodies (2023), Discorrelated Images (2020), and Postnaturalism: Frankenstein, Film, and the Anthropotechnical Interface (2014). See shanedenson.com for more information.

Dahye Kim is an Assistant Professor of Asian Languages and Cultures at Northwestern University. Her research and teaching focus on modern Korean literature and culture, critical approaches to media history, and the cultural dimensions of communication technologies in East Asia. Dahye is particularly interested in exploring the evolving significance and signification of literature and literacy in the digital age. Her current project, tentatively titled “Techno-fiction: Science Fictional Dreams of Linguistic Metamorphosis and the Informatization of Korean Writing,” delves into the radical transformation of writing and literature in the new technological environment of the 1980s and 1990s South Korea.

Hannah Zeavin is a scholar, writer, and editor. She is an Assistant Professor of the History of Science at the University of California at Berkeley (Department of History & The Berkeley Center for New Media). Zeavin is the author of The Distance Cure: A History of Teletherapy (2021) and Mother’s Little Helpers: Technology in the American Family (forthcoming), both from MIT Press. She is the Founding Editor of Parapraxis.

Chenshu Zhou (she/her) is Assistant Professor of Cinema Studies in the History of Art Department and the Cinema and Media Studies Program at the University of Pennsylvania. She received her PhD from Stanford University. Zhou's research explores a variety of questions related to the moving image, in particular spectatorship, exhibition, and temporality. She is the author of Cinema Off Screen: Moviegoing in Socialist China (University of California Press, 2021), which received the 2022 Best First Book Award from the Society for Cinema and Media Studies. Her second, ongoing book project investigates the relationship between work and screen media consumption amid China's transition from socialism to neoliberal authoritarianism.

“Acting Algorithms” — Mihaela Mihailova at Digital Aesthetics Workshop, May 26, 2023

Please join the Digital Aesthetics Workshop for our last event of the year with Mihaela Mihailova, who will present "Acting Algorithms: Animated Deepfake Performances in Contemporary Media" on Friday, May 26, from 1-3PM PT. The event will take place in McMurtry 007, and lunch will be served.

Zoom link for those unable to join in-person: https://tinyurl.com/3nnj32et

Abstract:

From the moving Mona Lisa deepfake created by the Moscow Samsung AI Center to the (re)animated life-size digital avatar of Salvador Dalí who greets visitors at the Dalí Museum in St. Petersburg, Florida, algorithmically generated performances are becoming integral to emerging media forms. As products of the collaboration between tech researchers, coders, animators, digital artists, and actors, as well as the labor of the (often deceased) makers of the original works, such amalgamated, multi-modal performances challenge existing definitions and conceptualizations of acting in/for the animated medium, along with notions of authorship and authenticity. Additionally, they expand the disciplinary reach and relevance of the subject, highlighting the necessity of thinking through contemporary digital animation’s relationship with data science and machine learning in order to better understand its ever-growing variety of non-filmic permutations.  

At the same time, fan-made deepfakes, ranging from movie mashups to unauthorized pornographic edits, further complicate the aesthetic and legal landscape of animated algorithmic performance. Juxtaposing these amateur, free, often low-quality videos and images with the commissioned, well-funded works described above reveals fascinating tensions between the institutional implementations of deepfakes and their popular use on online platforms.   

This talk explores the application, dissemination, and ontological status of deepfake performances across a variety of contexts, including digital artworks, viral videos, museum initiatives, and tech demos. It interrogates the practical, ideological, and ethical implications of their means of creation, including the digital "resurrection" of deceased individuals, the repurposing and rebranding of centuries-old artwork, and the superimposition of actors' faces onto footage of other performers' roles. It asks the following questions: to whom (or what) do these animated performances belong? What new terms and approaches might be necessary in order to fully evaluate and account for their complicated relationship with existing theories of acting? How are they shaping – and being shaped by – contemporary animated media?

Bio:

Mihaela Mihailova is Assistant Professor in the School of Cinema at San Francisco State University. She is the editor of Coraline: A Closer Look at Studio LAIKA’s Stop-Motion Witchcraft (Bloomsbury, 2021). She has published in Journal of Cinema and Media Studies, [in]Transition, Convergence: The International Journal of Research into New Media Technologies, Feminist Media Studies, animation: an interdisciplinary journal, and Studies in Russian and Soviet Cinema.  

This event is generously co-sponsored by the Stanford McCoy Family Center for Ethics in Society and Feminist, Gender, and Sexuality Studies. Image credit: The Zizi Show, A Deepfake Drag Cabaret.