GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)


Bovine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible, but vanishingly unlikely, that the same combination of image, sound, and text will ever repeat.

Bovine-Video-Only.app removes the text and text-to-speech elements, featuring only generative audio and video, assembled at random from five cut-up versions of a single video and composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted with acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended several times. The resulting video was glitched with databending methods in Audacity. The soundtrack was produced by importing a JPEG of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two captures were composited together, revealing delays, hesitations, and lapses of synchronization.
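The image-as-raw-audio trick can be sketched outside Audacity as well. Here is a minimal Python approximation: it reads a JPEG's bytes and writes them out as audio samples. (Audacity interpreted the data with the GSM codec; plain 8-bit PCM is substituted here for simplicity, and the file names are illustrative.)

```python
# Minimal databending sketch: treat a JPEG's raw bytes as audio samples.
# Audacity imported the image with the GSM codec; plain 8-bit unsigned
# PCM is used here instead, giving a similarly noisy result.
import wave

def image_to_audio(image_path: str, wav_path: str, sample_rate: int = 8000) -> int:
    with open(image_path, "rb") as f:
        raw = f.read()  # every byte of the JPEG, header included
    with wave.open(wav_path, "wb") as w:
        w.setnchannels(1)           # mono
        w.setsampwidth(1)           # one byte per sample (8-bit unsigned)
        w.setframerate(sample_rate)
        w.writeframes(raw)          # the image's bytes become the waveform
    return len(raw)                 # number of samples written
```

Calling `image_to_audio("bovine.jpg", "bovine.wav")` yields one audio sample per image byte; lowering the sample rate stretches the resulting noise out in time.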

The full video was then cropped to produce five different strips. Each strip's audio was positioned accordingly in stereo space: the left-most strip is panned hard left, the next is halfway between left and center, the middle strip is centered, and so on. The Max app chooses randomly, from a set of predetermined start points, where to begin playing each strip, keeping the overall image more or less in sync.
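The panning scheme and the random start-point selection can be sketched like so (a hypothetical Python rendering of the Max patch's logic; the actual start points live in the patch, so the cue list below is made up):

```python
import random

NUM_STRIPS = 5
# Hypothetical cue list (seconds); the real start points are set in the Max patch.
START_POINTS = [0.0, 12.5, 25.0, 37.5, 50.0]

def pan_for_strip(i: int, n: int = NUM_STRIPS) -> float:
    """Spread n strips evenly across the stereo field: strip 0 is hard
    left (-1.0), the middle strip is centered (0.0), the last strip is
    hard right (+1.0)."""
    return -1.0 + 2.0 * i / (n - 1)

def choose_starts() -> list:
    """Pick an independent random start point for each strip. Because
    every strip draws from the same cue list, the five images stay
    more or less in sync with one another."""
    return [random.choice(START_POINTS) for _ in range(NUM_STRIPS)]
```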

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.
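A word-level Markov model of the kind used here can be sketched in a few lines of Python (a generic illustration of the technique, not the RiTa implementation the piece actually uses):

```python
import random
from collections import defaultdict

def build_markov(text: str, order: int = 2) -> dict:
    """Map each n-gram of words to the list of words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model: dict, length: int = 30) -> str:
    """Walk the chain from a random starting n-gram."""
    order = len(next(iter(model)))
    out = list(random.choice(list(model)))
    while len(out) < length:
        followers = model.get(tuple(out[-order:]))
        if not followers:  # dead end: this n-gram only occurs at the text's end
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Trained on a book-length text, chains of order 2 or 3 produce the kind of almost-coherent prose heard and seen in the piece.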

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki's MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html), which interfaces Max with the p5.js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein's shell external for Max, version 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.
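On macOS, the final hop from Max to the operating system's speech engine amounts to invoking the built-in `say` command. A hypothetical Python stand-in for what the shell external does (the patch's exact arguments are not documented here, so the voice flag is an assumption):

```python
import subprocess

def tts_command(text: str, voice: str = None) -> list:
    """Build a macOS `say` invocation like the one the shell external
    passes to the OS. The -v (voice) flag is optional."""
    cmd = ["say"]
    if voice:
        cmd += ["-v", voice]
    cmd.append(text)
    return cmd

def speak(text: str, voice: str = None) -> None:
    # Executes only on macOS, where /usr/bin/say is available.
    subprocess.run(tts_command(text, voice), check=True)
```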

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)

FrankensteinsDeepDream

Creation scene and aftermath, as described in Mary Shelley’s Frankenstein (Chapter 5, 1831 edition) and interpreted by Cris Valenzuela’s text-to-image machine-learning demo (http://t2i.cvalenzuelab.com) utilizing AttnGAN (Attentional Generative Adversarial Networks).

Made for the upcoming Videographic Frankenstein exhibition at the Department of Art & Art History, Stanford University (Sept. 26 – Oct. 26, 2018). More info here: https://art.stanford.edu/exhibitions/videographic-frankenstein

Coming Soon: after.video


I just saw the official announcement for this exciting project, which I’m proud to be a part of (with a collaborative piece I made with Karin Denson).

after.video, Volume 1: Assemblages is a “video book” — a paperback book and video stored on a Raspberry Pi computer packaged in a VHS case. It will also be available as online video and book PDF download.

Edited by Oliver Lerone Schultz, Adnan Hadzi, Pablo de Soto, and Laila Shereen Sakr (VJ Um Amel), it will be published this year (2016) by Open Humanities Press.

The piece I developed with Karin is a theory/practice hybrid called “Scannable Images: Materialities of Post-Cinema after Video.” It involves digital video, databending/datamoshing, generative text, animated gifs, and augmented reality components, in addition to several paintings in acrylic (not included in the video book).

Here’s some more info about the book from the OpenMute Press site:

Theorising a World of Video

after.video realizes the world through moving images and reassembles theory after video. Extending the formats of ‘theory’, it reflects a new situation in which world and video have grown together.

This is an edited collection of assembled and annotated video essays living in two instantiations: an online version – located on the web at http://after.video/assemblages, and an offline version – stored on a server inside a VHS (Video Home System) case. This is both a digital and analog object: manifested, in a scholarly gesture, as a ‘video book’.

We hope that different tribes — from DIY hackercamps and medialabs, to unsatisfied academic visionaries, avantgarde-mesh-videographers and independent media collectives, even iTV and home-cinema addicted sofasurfers — will cherish this contribution to an ever more fragmented, ever more colorful spectrum of video-culture, consumption and appropriation…

Table of Contents

Control Societies 
Peter Woodbridge + Gary Hall + Clare Birchall
Scannable Images: Materialities of Post-Cinema after Video 
Karin + Shane Denson
Isistanbul 
Serhat Köksal
The Crying Selfie
Rózsa Zita Farkas
Guided Meditation 
Deborah Ligotrio
Contingent Feminist Tactics for Working with Machines 
Lucia Egaña Rojas
Capturing the Ephemeral and Contestational 
Eric Kluitenberg
Surveillance Assemblies 
Adnan Hadzi
You Spin me Round – Full Circle 
Andreas Treske

Editorial Collective

Oliver Lerone Schultz
Adnan Hadzi
Pablo de Soto
Laila Shereen Sakr (VJ Um Amel)

Tech Team

Jacob Friedman – Open Hypervideo Programmer
Anton Galanopoulos – Micro-Computer Programmer

Producers

Adnan Hadzi – OHP Managing Producer
Jacob Friedman – OHV Format Development & Interface Design
Joscha Jäger – OHV Format Development & Interface Design
Oliver Lerone Schultz – Coordination CDC, Video Vortex #9, OHP

Cover artwork and booklet design: Jacob Friedman
Copyright: the authors
Licence: after.video is dual licensed under the terms of the MIT license and the GPL3
http://www.gnu.org/licenses/gpl-3.0.html
Language: English
Assembly On-demand
OpenMute Press

Acknowledgements

Co-Initiated + Funded by

Art + Civic Media as part of Centre for Digital Cultures @ Leuphana University.
Art + Civic Media was funded through Innovation Incubator, a major EU project financed by the European Regional Development Fund (ERDF) and the federal state of Lower Saxony.

Thanks to

Joscha Jäger – Open Hypervideo (and making this an open licensed capsule!)
Timon Beyes – Centre for Digital Cultures, Lüneburg
Mathias Fuchs – Centre for Digital Cultures, Lüneburg
Gary Hall – School of Art and Design, Coventry University
Simon Worthington – OpenMute

http://www.metamute.org/shop/openmute-press/after.video

The Gnomes Are Back: Business cARd 2.0


Ever since our old AR platform was bought out and shut down by Apple, the “data gnomes” that Karin and I developed in conjunction with the Duke S-1: Speculative Sensation Lab’s “Manifest Data” project have been bumbling about in digital limbo, banished to 404 hell. So today I finally made the first steps in migrating our beloved creatures over to a new AR platform (Wikitude), where they’re starting to feel at home. While I was at it, I went ahead and reprogrammed my business card:

[Image: business card, front]

The QR code on the front now redirects the browser to shanedenson.com, while the AR content on the back side is made visible with the Wikitude app (free on iOS or Android) — just search for “Shane Denson” and point your phone/tablet’s camera at the image below:

[Image: business card, back (AR target)]

(In case you’re wondering what this is: it’s a “data portrait” generated from my Internet browsing behavior. You can make your own with the code included in the S-1 Lab’s Manifest Data kit.)

Audiovisualities Lab — Film Screening and Project Showcase


On April 8, 2015, I will be participating in this event, hosted by the Duke Audiovisualities Lab. During the "project showcase" portion of the event, several of the people involved in Bill Seaman and John Supko's Generative Media Authorship seminar — including Eren Gumrukcuoglu, Aaron Kutnick, and myself — will be presenting generative works. I will be showing some of the databending/glitch-video work I've been doing lately (see, for example, here and here). Refreshments and drinks will be served!

Manifest Data @ Media Arts + Sciences Rendez-Vous


This Thursday, March 5, 2015 (4:15pm, Bay 10, Smith Warehouse at Duke University), members of the S-1 Speculative Sensation Lab, including Amanda Starling Gould, Luke Caldwell, David Rambo, and myself, will be presenting our collaborative art/theory project Manifest Data. As usual, there will be drinks and light refreshments!

Emergence Lab at Duke Media Arts + Sciences Rendezvous


This Thursday, February 26, 2015, the Emergence Lab (headed by media artist Bill Seaman and composer John Supko) will be taking over the Duke Media Arts + Sciences Rendezvous. If you don’t know their work already, be sure to check out Seaman and Supko’s collaborative album s_traits (also available on iTunes and elsewhere), which has been getting a lot of attention in the media lately — including a mention in the New York Times list of top classical recordings of 2014:

‘S_TRAITS’ Bill Seaman, media artist; John Supko, composer (Cotton Goods). This hypnotic disc is derived from more than 110 hours of audio sourced from field recordings, digital noise, documentaries and piano music. A software program developed by the composer John Supko juxtaposed samples from the audio database into multitrack compositions; he and the media artist Bill Seaman then finessed the computer’s handiwork into these often eerily beautiful tracks. VIVIEN SCHWEITZER

In their Generative Media Authorship seminar, which I have been auditing this semester, we have been exploring similar (and wildly different) methods for creating generative artworks and systems in a variety of media, including text, audio, and images in both analog and digital forms. The techniques and ideas we've been developing there have dovetailed nicely with the work that Karin Denson and I have been doing lately with the S-1 Lab as well (in particular, the generative sculpture and augmented reality pieces we've been making for the lab's collaborative Manifest Data project). I have experimented with writing Markov chains in Python and JavaScript, turning text into sound, making sound out of images, and making movies out of all of the above — and I have witnessed people with far greater skills than mine do some amazing things with computers, cameras, numbers, books, and fishtanks!

On Thursday (at 4:15pm) several of us will be speaking about our generative experiments and works-in-progress. I will be talking about video glitches and post-cinema, as discussed in my two previous blog posts (here and here), while I am especially excited to see S-1 collaborator Aaron Kutnick's demonstration of his Raspberry Pi-based eidetic camera and to hear composer Eren Gumrukcuoglu's machine-based music. I also look forward to meeting Duke biology professor Sönke Johnsen and composer Vladimir Smirnov. All around, this promises to be a great event, so check it out if you're in the area!