GlitchesAreLikeWildAnimalsInLatentSpace! CANINE! — Karin + Shane Denson

CANINE! (2024)

Karin & Shane Denson

Canine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making — including the mental “schematisms” theorized by Kant and now embodied in algorithmic stereotypes.

This is a screen recording of a real-time, generative/combinatory video.

Canine! is a sort of “forest of forking paths,” consisting of 64 branching and looping pathways, with alternate pathways displayed in tandem, along with generative text, all composited in real time. It is mathematically possible, but vanishingly unlikely, that the same combination of image, sound, and text will ever repeat.
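The branching-and-looping structure can be pictured as a random walk over a graph of clip segments. The actual piece is patched in Max, not Python; the segment names and transitions below are purely hypothetical, a minimal sketch of the combinatory idea:

```python
import random

# Hypothetical clip graph: each segment lists the segments that may
# follow it; loops arise whenever a path returns to an earlier segment.
BRANCHES = {
    "seed": ["run_a", "run_b"],
    "run_a": ["howl", "seed"],
    "run_b": ["howl", "run_a"],
    "howl": ["seed"],
}

def walk(start="seed", steps=8, rng=random):
    """Follow one of the many possible pathways through the clip graph."""
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(BRANCHES[path[-1]]))
    return path
```

Running several such walks in tandem, each driving its own video layer, gives a rough analogue of the composited pathways described above.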

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation. Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a set of species-indeterminate canines, which Karin painted with acrylic on canvas. The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times in branching paths before looping back. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original canine painting into Audacity as raw data, interpreted with the GSM codec.
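Databending of this kind treats an image file's bytes as if they were audio samples. The soundtrack described above relies on Audacity's raw-data import with the GSM codec; the sketch below is a simplified stand-in that interprets the bytes as unsigned 8-bit PCM instead (a different, cruder decoding), just to illustrate the principle:

```python
import wave

def databend_to_wav(image_path, wav_path, rate=8000):
    """Interpret a file's raw bytes as unsigned 8-bit mono PCM audio.
    (Audacity's raw import with the GSM codec, as used for Canine!,
    decodes the same bytes differently; this is a simplified analogue.)"""
    with open(image_path, "rb") as f:
        raw = f.read()
    with wave.open(wav_path, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(1)     # 8-bit samples: one byte per sample
        w.setframerate(rate)  # GSM's native rate is also 8 kHz
        w.writeframes(raw)
```

Because every byte of the JPEG becomes one audio sample, the sound is deterministic for a given image: the painting literally is the soundtrack's source data.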

Onscreen and spoken text is generated by a Markov model trained on Shane’s article “Artificial Imagination” (https://ojs.library.ubc.ca/index.php/cinephile/article/view/199653).
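The piece generates its text with the RiTa tools; as a rough illustration of what a word-level Markov model does, here is a minimal order-2 sketch (function names and the training snippet are hypothetical, not taken from the piece):

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Build a word-level Markov chain: (w1, w2) -> possible next words."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, rng=random):
    """Emit text by repeatedly sampling a next word from the chain (order 2)."""
    state = rng.choice(list(chain))
    out = list(state)
    while len(out) < length:
        successors = chain.get(tuple(out[-2:]))
        if not successors:  # dead end: the state only occurred at the text's end
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Trained on a single article, such a model can only recombine that article's own phrases, which is what keeps the generated text tethered to its source.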

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac 14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Pavel Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html) to interface Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.

See also: Bovine! (https://vimeo.com/manage/videos/1013903632)
