GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)

BOVINE! (2024)
Karin & Shane Denson

Bovine! is part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.

Bovine-Video-Only.app removes the text and text-to-speech elements and features only generative audio and video, assembled at random from five cut-up versions of a single video and composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has worked with for years, originally found in an online glitch tutorial that has since gone offline), while trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted in acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods in Audacity. The soundtrack was produced by importing a JPG of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with both VLC and QuickTime, each of which interpreted the video differently. The two captures were composited together, revealing delays, hesitations, and lapses of synchronization.
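For the curious, the raw-data trick can be approximated in a few lines of Node.js. This is a hedged sketch, not the actual process: the piece relied on Audacity’s raw import with the GSM codec, whereas this version simply wraps the bytes of a JPG in a WAV header as unsigned 8-bit PCM so that they play back as sound. The filenames are hypothetical placeholders.

    // Minimal databending sketch (Node.js): treat an image file's bytes as
    // audio samples by wrapping them in a standard 44-byte PCM WAV header.
    const fs = require('fs');

    const IN = 'bovine.jpg';       // hypothetical stand-in for the painting JPG
    const OUT = 'bovine-raw.wav';
    const RATE = 8000;             // GSM's native sample rate, borrowed for flavor

    const bytes = fs.readFileSync(IN);  // JPEG bytes, about to become "samples"
    const header = Buffer.alloc(44);

    header.write('RIFF', 0);
    header.writeUInt32LE(36 + bytes.length, 4);
    header.write('WAVEfmt ', 8);
    header.writeUInt32LE(16, 16);       // fmt chunk size
    header.writeUInt16LE(1, 20);        // audio format: linear PCM
    header.writeUInt16LE(1, 22);        // mono
    header.writeUInt32LE(RATE, 24);     // sample rate
    header.writeUInt32LE(RATE, 28);     // byte rate (mono, 8-bit)
    header.writeUInt16LE(1, 32);        // block align
    header.writeUInt16LE(8, 34);        // bits per sample
    header.write('data', 36);
    header.writeUInt32LE(bytes.length, 40);

    fs.writeFileSync(OUT, Buffer.concat([header, bytes]));

Played back, the header and compressed image data become clicks, buzzes, and tones; which codec one pretends the bytes belong to is the main aesthetic decision.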

The full video was then cropped to produce five vertical strips. The audio of each strip was panned accordingly in stereo space: the left-most strip’s audio is panned hard left, the next one over sits halfway between left and center, the middle one sits dead center, and so on. For each strip, the Max app randomly chooses a playback start point from a set of predetermined cue points, keeping the overall image more or less in sync.
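In outline (a hedged sketch of the combinatory logic just described, not the actual Max patch, and with hypothetical cue points), the per-strip choices look something like this:

    // Five strips, each cued independently from a shared set of start points,
    // with stereo pan fixed by strip position.
    const STRIPS = 5;
    const CUE_POINTS = [0, 12.5, 25, 37.5, 50];  // hypothetical, in seconds

    const randomChoice = (arr) => arr[Math.floor(Math.random() * arr.length)];

    const layout = Array.from({ length: STRIPS }, (_, i) => ({
      strip: i,
      start: randomChoice(CUE_POINTS),  // each strip cues independently...
      pan: (i / (STRIPS - 1)) * 2 - 1,  // ...but pan is fixed: -1 left, 0 center, +1 right
    }));

    console.table(layout);

Because the start points are drawn from a shared set rather than chosen arbitrarily, the five strips drift against one another without the composite image ever falling entirely out of step.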

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html), which interfaces Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s shell external for Max, version 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.
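Outside of Max, the same text pipeline can be approximated in a few lines of JavaScript, assuming RiTa’s Markov API and the macOS say command (the corpus filename and model order are hypothetical stand-ins; the piece itself routes everything through the MaxAndP5js bridge and the shell external rather than Node):

    // Hedged sketch of the text pipeline: a RiTa Markov model generates
    // sentences, which are handed to the OS speech synthesizer.
    import { RiTa } from 'rita';
    import { execFile } from 'child_process';
    import { readFileSync } from 'fs';

    // Hypothetical file standing in for the text of Discorrelated Images.
    const corpus = readFileSync('discorrelated-images.txt', 'utf8');

    const markov = RiTa.markov(3);         // n=3 is an assumption, not the piece's setting
    markov.addText(corpus);

    const sentences = markov.generate(2);  // two generated sentences
    console.log(sentences.join(' '));

    // macOS only: `say` performs the text-to-speech step.
    execFile('say', [sentences.join(' ')]);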

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)

Congratulations, Dr. Hank Gerba!

On Sunday, June 16, it was my honor to “hood” Hank Gerba, who earned their doctoral degree in Art History with a concentration in Film & Media Studies, and to make the following remarks at our departmental commencement ceremony:

Hank Gerba’s dissertation, “Digital Disruptions: Moiré, Aliasing, and the Stroboscopic Effect,” is exceptional in a number of ways. While it is a work primarily in media theory, or even media philosophy (itself an exceptional or eccentric subfield within film and media studies), the dissertation stages its argument by way of close engagement not only with a range of digital processes and devices but also with a number of analog artworks. Accordingly, it is exceptionally well suited for an interdisciplinary graduate program like ours, where students like Hank graduate with a PhD in Art History but are also able to specialize in Film & Media Studies, and where all of our students are expected to gain some acquaintance with both fields.

Still, it is rare for a dissertation to be so agnostic about traditional disciplinary divisions and to speak across boundaries in a way that goes straight to their root: in this case, straight to the root of aesthetic and technical processes or forms. The division between technology or the technical, and art or the aesthetic, is a fairly recent invention, just a little over 200 years old. It is only in the wake of this split that we can speak of the supposedly disparate fields of art history and media studies. Exceptionally, Hank’s dissertation challenges that split, along with a number of other more or less conventional categorizations. And it does so by foregrounding a number of exceptional phenomena: the shimmer of moiré silk, pixely appearances on computer screens, and the stroboscopic flickers of film—digital disruptions, in the language of Hank’s title, that occur when digital and analog logics come into conflict with one another: when blocky grids clash with smooth contours, or when a film’s discrete frames line up with the movement of wagon wheels or helicopter blades to make it seem as though they were standing still.

In framing the project this way, however, Hank not only challenges disciplinary boundaries but significantly relocates the digital/analog divide. The digital is not just about computers but applies also to the clashing grids of watered silk, which give rise to the analog shimmer that we see in the fabric and in artworks made with it. Digital and analog come to name not particular types of technologies or media, but fundamental modes of organizing aesthetic experience itself. This is an important media-philosophical argument, and it lays important groundwork for thinking about the ways that contemporary media, such as AI, are actively transforming our visual and aesthetic worlds.

I’ll just mention, finally, that Hank is the first student for whom I served as primary advisor, and the first PhD student whose progress I have accompanied from admission to the program all the way to the defense of their dissertation. So it kind of feels like I’m graduating today as well. But it’s Hank who did the work and in many respects surpassed their mentor. I am grateful to have learned from Hank, both through their scholarship and through their work across the university, including at the Digital Aesthetics Workshop at the Stanford Humanities Center, where we have collaborated for several years. Hank is graduating with the Christopher Meyer Prize, one of the highest honors that we can bestow on graduating PhD students, in recognition not only of excellent scholarship but also of outstanding service to the departmental and university community.

Please join me in congratulating Hank Gerba.

Video: Leonardo Art Science Evening Rendezvous (LASER) Talks, June 10, 2024

Thanks to Piero Scaruffi for inviting me to present at the Leonardo Art Science Evening Rendezvous (LASER) series here at Stanford last night, alongside Virginia San Fratello, Fiorenza Micheli, and Tom Mullaney. It was a great conversation, with lots of unexpected resonances!

“Artificial Imagination” — Out now in new issue of Cinephile (open access)

The new issue of Cinephile, the University of British Columbia’s film and media journal, is just out. The theme of the issue is “(Un)Recovering the Future,” and it’s all about nostalgia, malaise, history, and (endangered) futurities.

In this context, I am happy to have contributed a piece called “Artificial Imagination” on the relation between AI and (visual) imagination. The essay lays some of the groundwork for a larger exploration of AI and its significance for aesthetics in both broad and narrow senses of the word. It follows the emphasis on embodiment of my essay “From Sublime Awe to Abject Cringe: On the Embodied Processing of AI Art,” recently published in the Journal of Visual Culture, and belongs to a larger book project tentatively titled Art & Artificiality, or: What AI Means for Aesthetics.

Thanks very much to editors Will Riley and Liam Riley for the invitation to contribute to this issue!

Announcing the New Co-Chair of SCMS Philosophy & Theory SIG: Bernard Dionysius Geoghegan

Congratulations to Bernard Dionysius Geoghegan, who has been elected the new Co-Chair of the SCMS Philosophy & Theory SIG! And thanks to Will Brown for running in the election. It was a very tight race, and we were extremely proud to present the SIG with two amazing candidates.

It has been an honor to serve in the role of Co-Chair for the past three years, first alongside Co-Chair Victor Fan and Secretary John Winn, and more recently alongside Co-Chair Deborah Levitt and Secretary Hank Gerba. It has been a pleasure working with them all.

I look forward to seeing where Bernie, Deborah, and Hank take the SIG in the coming years!

Don’t Look Now: From Flawed Experiment in Videographic Interactivity to New Open-Source Tool — Interactive Video Grid

Back in 2016, my experimental video essay “Don’t Look Now: Paradoxes of Suture” was published in the open access journal [in]Transition: Journal of Videographic Film and Moving Image Studies. This was an experiment with the limits of the “video essay” form, and a test of whether it could accommodate non-linear and interactive forms (produced with some very basic JavaScript and HTML/CSS so as to remain accessible and viewable even as web infrastructures change). Seeing as the interactive video essay was accepted and published in a peer-reviewed journal devoted, for the most part, to more conventional linear video essays, I considered the test passed. (However, since the journal has recently moved to a new hosting platform with the Open Library of Humanities, the interactive version is no longer included directly on the site, which instead links to my own self-hosted version here.)

But even if the test was passed in terms of publication, the peer reviewers noted that the experiment was not altogether successful. Richard Misek called the piece “flawed,” though he qualified that “the work’s limitations are integral to its innovation.” The innovation, according to Misek, was to point to a new way of looking and of doing close analysis:

“Perhaps one should see it not as a self-contained video essay but as a walk-through of an early beta of an app for viewing and manipulating video clips spatially. Imagine, for example… The user imports a scene. The app then splits it into clips and linearly spatializes it, perhaps like in Denson’s video. Each clip can then be individually played, looped, or paused. For example, the user can scroll to, and then pause, the in points or out points for each clip; or just play two particular shots simultaneously and pause everything else. Exactly how the user utilizes this app depends on the film and what they hope to discover from it. The very process of doing this, of course, may then also reveal previously unnoticed themes, patterns, or equivalences. Such a platform for analyzing moving images could hugely facilitate close formal analysis. I imagine a moving image version of Warburg’s Mnemosyne Atlas – a wall (/ screen) full of images, all existing in spatial relation with each other, and all in motion; a field of connections waiting to be made.

“In short, I think this video points towards new methods of conducting close analysis rather than new methods of presenting it. In my view, the ideal final product would not be a tidied-up video essay but an app. I realize that, technically and conceptually, this is asking a lot. It would be a very different, and much larger project. For now, though, this video provides an inspiring demo of what such an app could help film analysts achieve.”

Fast-forward eight years, to a short article on “Five Video Essays to Close Out May,” published on May 28, 2024 in Hyperallergic. Here, author Dan Schindel includes a note about an open-source and open-access tool, the Interactive Video Grid by Quan Zhang, that is inspired by my video essay and aims to realize a large part of the vision laid out by Misek in his review. As one of two demos of the tool, which allows users to create interactive grids of video clips for close and synchronous analysis, Zhang even includes “Don’t Look Now: Paradoxes of Suture. A Reconfiguration of Shane Denson’s Interactive Video Essay.”

I’m excited to experiment with this in classrooms and as an aid in my own research. And I can imagine that additional development might point to further innovations in modes of looking. For example, what if the grid were made dynamic, such that clips could be dragged and rearranged? Or added and removed, resized, slowed down or sped up, maybe even superimposed on one another? Of course, many such transformations are already possible within nonlinear digital editing platforms, but there it is only the editing process that is nonlinear; the operations imagined here become visible only in the outputted products, which are, alas, still linear videos.
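To make the underlying idea concrete, here is a minimal browser sketch of such a grid (my own illustration, not Zhang’s code, with hypothetical clip filenames): a row of videos, each of which can be played or paused independently with a click.

    // Minimal interactive video grid: each clip toggles between play and
    // pause on click, so any subset of clips can run simultaneously.
    const CLIPS = ['shot-01.mp4', 'shot-02.mp4', 'shot-03.mp4'];  // hypothetical

    const grid = document.createElement('div');
    grid.style.display = 'grid';
    grid.style.gridTemplateColumns = `repeat(${CLIPS.length}, 1fr)`;
    document.body.appendChild(grid);

    for (const src of CLIPS) {
      const video = document.createElement('video');
      video.src = src;
      video.loop = true;
      video.muted = true;  // muted video can play without a user gesture
      video.style.width = '100%';
      video.addEventListener('click', () =>
        video.paused ? video.play() : video.pause()
      );
      grid.appendChild(video);
    }

Dragging, resizing, and superimposition would be natural extensions, but even this bare version supports the kind of side-by-side, selectively paused viewing that Misek describes.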

Like my original video, Zhang’s new tool may be “flawed” and in need of further development, but it succeeds in pointing to new ways of looking that go beyond linear forms of film and video and that take fuller advantage of the underlying nonlinearity of digital media. Digital media, I would suggest, are in any case transforming our modes of visual attention, so it seems only right that we should experiment self-reflexively and probe the limits of these new ways of looking.

Six years of Digital Aesthetics Workshop

This past week marked the conclusion of our sixth year of the Digital Aesthetics Workshop at the Stanford Humanities Center, which we celebrated with a graduate symposium — the appropriately titled Digital Aesthetics Workshop-Workshop!

With nine events a year, six years is a lot of events! Here’s what we’ve done so far:

2017-2018 Events: 

    • Mark B. N. Hansen, “The Ontology of Media Operations, or, Where is the Technics in Cultural Techniques,” 10 October 2017
    • Claus Pias, “Computer Game Worlds,” 24 October 2017
    • Allison de Fren, “Post-Cinema and Videographic Criticism,” 14 November 2017
    • Bonnie Ruberg, “Video Games Have Always Been Queer,” 23 January 2018
    • Jacob Gaboury, “Techniques for Secondary Mediation: On the Screenshot as Image-Object,” 6 February 2018
    • Shane Denson, “Discorrelated Images,” 3 April 2018
    • Elizabeth Kessler, “Psychedelic Space and Anachronic Time: Photography and the Voyager’s Tour of the Solar System,” 10 April 2018
    • Jonathan Sterne, “Machine Learning, ‘AI,’ and the Politics of Media Aesthetics: Why Online Music Mastering (Sort of) Works,” 24 April 2018
    • Matthew Wilson Smith, “The Nostalgia of Virtual Reality,” 15 May 2018

2018-2019 Events: 

    • Carolyn L. Kane, “Chroma Glitch: Data as Style,” 9 October 2018
    • Camille Utterback, “Embodied Interactions & Material Screens,” 27 November 2018
    • Miryam Sas, “Plastic Dialectics: Community and Collectivity in Japanese Contemporary Art,” 4 December 2018
    • Stephanie Boluk and Patrick LeMieux, “Skin in the Game: Greymarket Gambling in the Virtual Economies of Counter-Strike,” 14 January 2019
    • N. Katherine Hayles, “Can Computers Create Meaning? A Cyber-Bio-Semiotic Perspective,” 12 February 2019
    • Kevin B. Lee, “Dreams and Terrors of Desktop Documentary,” 27 February 2019
    • Marion Fourcade, “A Maussian Bargain: The Give and Take of the Personal Data Economy,” 23 April 2019
    • Digital Aesthetics Symposium, featuring Stanford graduate students and faculty, 14-15 May 2019
    • Miyako Inoue, “Writing at the Speed of Thinking: The Japanese Kana Typewriter and the Rehabilitation of the Male Hand,” 28 May 2019

2019-2020 Events:

    • Jenny Odell, “Killing Time,” 23 October 2019
    • Scott Bukatman, “We Are Ant-Man,” 5 November 2019
    • Ben Peters, “Declining Russian Media Theory,” 21 November 2019
    • Rachel Plotnick, “Unclean Interface: Computation as a Cleanliness Problem,” 11 February 2020
    • Jean Ma, “At the Edges of Sleep,” 9 March 2020 [cancelled due to COVID-19]
    • Melissa Gregg, Title TBA, 7 April 2020 [cancelled due to COVID-19]
    • Sarah T. Roberts, “Behind the Screen: Content Moderation in the Shadows of Social Media,” 21 April 2020
    • Kris Cohen, “Bit Field Black,” 19 May 2020
    • Xiaochang Li, “How Language Became Data: Speech Recognition between Likeness and Likelihood,” 26 May 2020

2020-2021 Events:

    • Vivian Sobchack, in conversation with Scott Bukatman and Shane Denson, 29 September 2020 (additional follow-up event for Stanford graduate students, 14 October 2020)
    • “New Regimes of Imaging.” Roundtable discussion with Ranjodh Singh Dhaliwal, Deborah Levitt, Bernard Geoghegan, and Shane Denson, 23 October 2020
    • libi rose striegl and the Media Archaeology Lab at the University of Colorado at Boulder, 10 November 2020
    • Shaka McGlotten, “Racial Chain of Being,” 8 December 2020
    • James J. Hodge and Shane Denson, “Dialogue in Digital Aesthetics: Sensations of History and Discorrelated Images,” 2 April 2021
    • Melissa Gregg, “The Great Watercooler in the Cloud: Distributed Work, Collegial Presence, and Mindful Labor Post-COVID,” 6 April 2021
    • Adrian Daub, “What Tech Calls Thinking,” 11 May 2021
    • Legacy Russell, “Cyberpublics, Monuments, and Participation,” 20 May 2021
    • Fred Turner and Mary Beth Meehan, “Seeing Silicon Valley – Life Inside a Fraying America,” 2 June 2021

2022-2023 Events:

    • Erich Hörl, “The Disruptive Condition,” 5 October 2022
    • Mark Algee-Hewitt, “Patterns of Text/Patterns of Analysis,” 15 November 2022
    • Jean Ma and Tung-Hui Hu, “In Conversation” (joint book event), 2 December 2022
    • Bernard Dionysius Geoghegan, “The Violent Forensics of Digital Imagery: Abu Ghraib, Ukraine, and Cat Videos,” 17 January 2023
    • Melissa Gilliam and Patrick Jagoda, “Game Changer Lab” (co-sponsored with the Critical Making Collaborative), 26 January 2023
    • M. Beatrice Fazi, “On Digital Theory,” 28 February 2023
    • Alexander Galloway, “‘No Deconstruction without Computers’: Learning to Code with Derrida and Kittler,” 7 March 2023
    • Neta Alexander, “The Right to Speed-Watch (or, When Netflix Discovered its Blind Viewers),” 18 April 2023
    • Damon Young, “Selfie/Portrait,” 9 May 2023
    • Mihaela Mihailova, “Acting Algorithms: Animated Deepfake Performances in Contemporary Media,” 26 May 2023

2023-2024 Events:

    • Luciana Parisi, “The Negative Aesthetic of AI,” 20 October 2023
    • Ge Wang, “Artful Design and Artificial Intelligence: What Do We (Really) Want from AI?,” 14 November 2023
    • Thomas Lamarre, “Harvesting Light,” 5 December 2023
    • Bryan Norton, “Marx After Simondon: Metabolic Rift and the Analog of Computation,” 30 January 2024
    • Yvette Granata, “Mimetic Virtualities: Rendering the Masses and/or Feminist Media Art?,” 6 February 2024
    • Akira Mizuta Lippit, “Shadowline,” 12 March 2024
    • Nicholas Baer, “The Ends of Perfection: On a Limit Concept in Global Film and Media Theory,” 5 April 2024
    • James Hodge, “Six Theses on an Aesthetics of Always-On Computing,” 30 April 2024
    • Digital Aesthetics Workshop-Workshop, graduate student symposium, with responses from Angèle Christin and Shane Denson, 24 May 2024

Thanks to all of the graduate student coordinators over the years, including Jeff Nagy, Doug Eacho, Natalie Deam, Annika Butler-Wall, and this year’s coordinators Grace Han and Hank Gerba. (And congratulations to Hank on successfully defending their dissertation last week!)

“How is Human Embodiment Transformed in an Age of Algorithms?” — Stanford LASER Talks, June 10, 2024

On June 10, 2024 (7pm at Li Ka Shing Center 120), I will be presenting an informal talk titled “How is Human Embodiment Transformed in an Age of Algorithms?” as part of a Leonardo Art Science Evening Rendezvous (LASER) Talks event.

The four talks that evening are:

– Shane Denson (Stanford/ Film and Media) on “How is Human Embodiment Transformed in an Age of Algorithms?”

– Virginia San Fratello (San Jose State Univ/ Art) on “3D Printing the Future”

– Fiorenza Micheli (Stanford/ Center for Ocean Solutions) on “Harnessing the data revolution for ocean and human health”

– Tom Mullaney (Stanford/ History) on “The Audacity of Chinese Computing”

The event is open to the public. More info is available here: https://events.stanford.edu/event/four-laser-talks-human-embodiment-3d-printing-ocean-health-chinese-computing