GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)


Bovine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.

Bovine-Video-Only.app removes the text and text-to-speech elements and features only generative audio and video, assembled randomly from five cut-up versions of a single video and composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted with acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and Quicktime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and a lack of synchronization.
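The databending step can be approximated outside of Audacity. A minimal Python sketch of the core idea, treating an image file's bytes as unsigned 8-bit PCM audio (Audacity's actual import here used the GSM codec, which involves a real decoding step; the 8-bit interpretation is a simplifying assumption, and the placeholder bytes stand in for the painting's jpg):

```python
import wave

def databend_to_wav(raw_bytes, out_path, sample_rate=8000):
    """Interpret arbitrary bytes (e.g. a JPEG file) as unsigned
    8-bit mono PCM audio and write them out as a WAV file."""
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)           # mono
        w.setsampwidth(1)           # 8-bit samples
        w.setframerate(sample_rate)
        w.writeframes(raw_bytes)

# Placeholder bytes standing in for the contents of the painting's jpg.
fake_jpeg = bytes(range(256)) * 4
databend_to_wav(fake_jpeg, "bent.wav")
```

In practice one would pass `open("painting.jpg", "rb").read()` instead of the placeholder; the point is only that the "soundtrack" is the image's raw data reinterpreted as samples.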

The full video was then cropped to produce five different strips. The audio on each was positioned accordingly in stereo space (i.e. the left-most strip has its audio panned all the way to the left, the next one over sits halfway between left and center, the middle one is centered, etc.). The Max app chooses randomly, from a set of predetermined start points, where to begin playing each strip of video, keeping the overall image more or less in sync.
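The strip logic described above can be sketched in a few lines. This is not the Max patch itself, just a Python outline of the same idea; the start-point values and strip count are assumptions for illustration:

```python
import random

NUM_STRIPS = 5
# Hypothetical shared start points (in seconds); the real patch uses
# its own predetermined set so the five strips stay roughly in sync.
START_POINTS = [0.0, 12.5, 25.0, 37.5, 50.0]

def pan_position(strip_index, num_strips=NUM_STRIPS):
    """Map strip 0..n-1 to a stereo pan value in [-1.0 (hard left),
    +1.0 (hard right)], spaced evenly across the field."""
    return -1.0 + 2.0 * strip_index / (num_strips - 1)

def choose_starts():
    """Pick an independent random start point for each strip."""
    return [random.choice(START_POINTS) for _ in range(NUM_STRIPS)]

pans = [pan_position(i) for i in range(NUM_STRIPS)]
# pans == [-1.0, -0.5, 0.0, 0.5, 1.0]
```

Because every strip draws from the same small set of start points, the composite image drifts in and out of sync without ever falling entirely apart.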

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html) to interface Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.
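On macOS, handing text to the OS for speech amounts to invoking the built-in `say` command, which is what the shell object makes possible from inside Max. A hedged sketch of the equivalent call (the voice name is an assumption; the patch may use the system default):

```python
import shlex
import subprocess

def speak(text, voice="Samantha", dry_run=True):
    """Build (and optionally run) a macOS `say` invocation like the one
    the Max shell object would pass to the OS. With dry_run=True the
    command string is returned instead of executed, so this is safe to
    run on systems without `say`."""
    cmd = ["say", "-v", voice, text]
    if dry_run:
        return shlex.join(cmd)
    subprocess.run(cmd, check=True)

print(speak("Glitches are like wild animals"))
```

Calling `speak(generated_text, dry_run=False)` on a Mac would voice each Markov-generated line as it appears onscreen.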

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)

“Why Are Things So Weird?” — Kevin Munger on Flusser’s COMMUNICOLOGY

I just stumbled upon this interesting-looking video response from Kevin Munger to Vilém Flusser’s Communicology: Mutations in Human Relations?, which appeared in the “Sensing Media” book series that I edit with Wendy Chun for Stanford University Press.

The above (posted on Twitter here) is an excerpt of a longer video accessible if you subscribe to the New Models Patreon or Substack. I haven’t subscribed, so I’m not 100% sure what to expect, but it looks provocative!

Streaming Mind, Streaming Body

A short text of mine titled “Streaming Mind, Streaming Body” was recently published online at In Media Res as part of a theme week on “The Contemporary Streaming Style II.” The piece connects reflections stemming from Bernard Stiegler’s philosophy of media to recent body-oriented streaming platforms like the Peloton.

You can find my piece here and the rest of the theme week (with contributions from Neta Alexander, Ethan Tussey, Carol Vernallis, and Jennifer Barker) here.

Video: Sensations of History and Discorrelated Images: James Hodge and Shane Denson in Conversation

Above, the complete video from the conversation on April 2, 2021 between James Hodge and myself about our new books, Sensations of History and Discorrelated Images. Co-sponsored by the Center for Global Culture and Communication at Northwestern University and the Linda Randall Meier Research Workshop on Digital Aesthetics at Stanford University.

Videos of Two Recent Book-Related Talks

Discorrelation, or: Images between Algorithms and Aesthetics — Nov. 3, 2020 at CESTA, Stanford University

Here are videos of two recent talks related to my book Discorrelated Images. Above, a talk titled “Discorrelation, or: Images between Algorithms and Aesthetics,” delivered at Stanford’s Center for Spatial and Textual Analysis (CESTA) on November 3, 2020. And below, a talk titled “Discorrelated Images” from October 26, 2020 at UC Santa Barbara’s Media Arts and Technology Seminar Series.

Discorrelated Images — October 26, 2020 at MAT Seminar Series, UCSB

Complete Video of Rendered Worlds: New Regimes of Imaging

Here is the complete video of the event Rendered Worlds: New Regimes of Imaging from October 23, 2020. Featuring Deborah Levitt (The New School), Ranjodh Singh Dhaliwal (UC Davis and Universität Siegen), Bernard Dionysius Geoghegan (King’s College London), and Shane Denson (Stanford) discussing their recent work, with Hank Gerba (Stanford) and Jacob Hagelberg (UC Davis) co-moderating the round-table.

Sponsored by the Linda Randall Meier workshop on Digital Aesthetics (Stanford) and the Technocultural Futures Research Cluster (UC Davis), with support from the Mellon Foundation and the National Endowment for the Humanities.

Complete Video: Vivian Sobchack in Conversation with Scott Bukatman and Shane Denson

Here is the complete video of the Digital Aesthetics Workshop event from September 29, 2020: Vivian Sobchack in conversation with Scott Bukatman and myself. This was a lively and far-ranging discussion, which we were honored to host. Please enjoy!

CFP Post-Cinema: Practices of Research and Creation


I am happy to share the CFP for a special issue of Images Secondes on the topic of “Post-Cinema: Practices of Research and Creation,” edited by Chloé Galibert-Lainé and Gala Hernández López.

The special issue, for which I am serving on the comité scientifique (which sounds a lot cooler than “review board”), will collect traditional scholarly articles as well as contributions in other media (such as videographic criticism and experimental digital forms). Proposals are due April 20, 2020, with final submissions due September 30.

Please spread the word to anyone who might be interested in contributing to what is sure to be an exciting publication!

Amalgamate: An Exhibition of Video Works

I am happy to announce Amalgamate, an exhibition of videos made by students in my course on “The Video Essay” (Fall 2019). Works range from analytical to experimental, with activist impulses and cinephilic sensitivities sprinkled throughout. The show runs from January 10-31, 2020 in the Gunn Foyer, McMurtry Building, at Stanford.

Discorrelation and the Post-Perceptual Image (Bogotá, video en español)

Video (in Spanish) of my talk, “Discorrelation and the Post-Perceptual Image,” from September 12, 2019 at the Universidad Nacional de Colombia in Bogotá is now online.

Minutes before the talk, I was whisked away to give two separate interviews — one for an article that is now online, and one for a local television station (!), which I have not yet seen…