Bovine! is part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.
The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:
Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible, but vanishingly unlikely, that the same combination of image, sound, and text will ever be repeated.
Bovine-Video-Only.app removes the text and text-to-speech elements and features only the generative audio and video, assembled randomly from five cut-up versions of a single video and composited together in real time.
The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.
Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted with acrylic on canvas:
The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and a lack of synchronization.
The full video was then cropped to produce five different strips. The audio for each strip was panned to a corresponding position in stereo space (the left-most strip is panned all the way to the left, the next one over sits halfway between left and center, the middle one is centered, and so on). For each strip, the Max app chooses randomly from a set of predetermined start points, keeping the overall image more or less in sync.
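To give a concrete sense of the randomization involved, here is a minimal Python sketch of that selection logic. It is only an illustration: the actual piece is a Max app, and the specific start points below are placeholders rather than values from the patch.

```python
import random

# Illustrative sketch only: the real piece is a Max app. The start points
# below are placeholders standing in for the patch's predetermined cues.
NUM_STRIPS = 5
START_POINTS = [0.0, 12.5, 25.0, 37.5, 50.0]  # hypothetical cue points, in seconds

def choose_playback_plan():
    """Pick a random start point for each vertical strip and pan its audio
    from hard left (strip 0) through center to hard right (strip 4)."""
    plan = []
    for i in range(NUM_STRIPS):
        start = random.choice(START_POINTS)
        pan = -1.0 + 2.0 * i / (NUM_STRIPS - 1)  # -1.0 = left, 0.0 = center, 1.0 = right
        plan.append({"strip": i, "start_sec": start, "pan": pan})
    return plan

if __name__ == "__main__":
    for cue in choose_playback_plan():
        print(cue)
```

Even this toy version, with five strips each drawing independently from five start points, yields 5^5 = 3,125 distinct image combinations per cycle; the actual app, layering generative text and speech on top, multiplies the possibilities far beyond practical repetition.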
Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.
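The post does not specify how the Markov model is implemented, but a word-level chain of this kind can be sketched in a few lines of Python; the corpus filename below is a placeholder, not the actual training file.

```python
import random
from collections import defaultdict

def build_markov_model(text, order=2):
    """Map each sequence of `order` consecutive words to the words seen to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, order=2, length=40):
    """Walk the chain from a random starting key, producing up to `length` words."""
    output = list(random.choice(list(model.keys())))
    while len(output) < length:
        followers = model.get(tuple(output[-order:]))
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

if __name__ == "__main__":
    corpus = open("discorrelated_images.txt", encoding="utf-8").read()  # placeholder filename
    model = build_markov_model(corpus, order=2)
    print(generate(model, order=2, length=40))
```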
I’ve been traveling a lot outside of California this summer, but whenever I get the chance I like to spend time up north in Mendocino or Fort Bragg, where my wife Karin is part of the artist collective at Edgewater Gallery.
Earlier in the summer, we observed tons of California brown pelicans and common murres (which look like penguins) camped out on some small offshore islands. The assembly has attracted a lot of attention — from locals, tourists, artists, and scientists. The local newspaper, The Mendocino Voice, just put out a long piece on the birds and the possible reasons for their convergence there, and they quoted Karin and featured a glitch collage that she did a while back.
Karin has been photographing, filming, glitching, and painting pelicans and other California wildlife for several years now. Check out more of her work at karindenson.com.
The Digital Aesthetics Workshop at the Stanford Humanities Center is entering its second year, and we are pleased to announce the first event: Carolyn L. Kane will share some of her current research with us, under the title Chroma Glitch: Data as Style. The discussion will encompass Takeshi Murata, Ryan Trecartin, and datamoshing, all within Kane’s broader project, tentatively titled Precarious Beauty: Glitch, Noise, and Aesthetic Failure. A paper will be pre-circulated; we will pass it along a week before the event. We are thrilled Dr. Kane can join us – when we first came up with the idea for this workshop, her name became a touchstone for the sort of scholarship we wanted to bring in. She will launch a year already filling up with exciting speakers and a new graduate colloquium (more on that to come).
Carolyn L. Kane is the author of the award-winning Chromatic Algorithms: Synthetic Color, Computer Art, and Aesthetics after Code (U Chicago, 2014). [You can learn more about this fascinating project through this interview in Theory, Culture & Society.] She earned her Ph.D. from New York University’s Dept. of Media, Culture, and Communication in 2011, and was awarded the Nancy L. Buc Postdoctoral Fellowship in “Aesthetics and the Question of Beauty” at Brown University in 2014. From 2011 to 2014 she taught at Hunter College; she is now Associate Professor of Communication and Design at Ryerson University in Toronto.
This event will be held from 5 to 7 pm on Tuesday, October 9, 2018, at the Roble Arts Gym Lounge (TAPS department). Drinks and snacks will be served. Please RSVP to Doug Eacho (email in image above) if you can, and share widely.
I just saw the official announcement for this exciting project, which I’m proud to be a part of (with a collaborative piece I made with Karin Denson).
after.video, Volume 1: Assemblages is a “video book” — a paperback book and video stored on a Raspberry Pi computer packaged in a VHS case. It will also be available as online video and book PDF download.
Edited by Oliver Lerone Schultz, Adnan Hadzi, Pablo de Soto, and Laila Shereen Sakr (VJ Um Amel), it will be published this year (2016) by Open Humanities Press.
The piece I developed with Karin is a theory/practice hybrid called “Scannable Images: Materialities of Post-Cinema after Video.” It involves digital video, databending/datamoshing, generative text, animated gifs, and augmented reality components, in addition to several paintings in acrylic (not included in the video book).
after.video realizes the world through moving images and reassembles theory after video. Extending the formats of ‘theory’, it reflects a new situation in which world and video have grown together.
This is an edited collection of assembled and annotated video essays living in two instantiations: an online version – located on the web at http://after.video/assemblages, and an offline version – stored on a server inside a VHS (Video Home System) case. This is both a digital and analog object: manifested, in a scholarly gesture, as a ‘video book’.
We hope that different tribes — from DIY hackercamps and medialabs, to unsatisfied academic visionaries, avantgarde-mesh-videographers and independent media collectives, even iTV and home-cinema addicted sofasurfers — will cherish this contribution to an ever more fragmented, ever more colorful spectrum of video-culture, consumption and appropriation…
Table of Contents
Control Societies – Peter Woodbridge + Gary Hall + Clare Birchall
Scannable Images: Materialities of Post-Cinema after Video – Karin + Shane Denson
Isistanbul – Serhat Köksal
The Crying Selfie – Rózsa Zita Farkas
Guided Meditation – Deborah Ligotrio
Contingent Feminist Tacticks for Working with Machines – Lucia Egaña Rojas
Capturing the Ephemeral and Contestational – Eric Kluitenberg
Surveillance Assemblies – Adnan Hadzi
You Spin me Round – Full Circle – Andreas Treske
Editorial Collective
Oliver Lerone Schultz
Adnan Hadzi
Pablo de Soto
Laila Shereen Sakr (VJ Um Amel)
Tech Team
Jacob Friedman – Open Hypervideo Programmer
Anton Galanopoulos – Micro-Computer Programmer
Producers
Adnan Hadzi – OHP Managing Producer
Jacob Friedman – OHV Format Development & Interface Design
Joscha Jäger – OHV Format Development & Interface Design
Oliver Lerone Schultz – Coordination CDC, Video Vortex #9, OHP
Cover artwork and booklet design: Jacob Friedman
Copyright: the authors
Licence: after.video is dual licensed under the terms of the MIT license and the GPL3 http://www.gnu.org/licenses/gpl-3.0.html
Language: English
Assembly On-demand
OpenMute Press
Acknowledgements
Co-Initiated + Funded by
Art + Civic Media as part of Centre for Digital Cultures @ Leuphana University.
Art + Civic Media was funded through Innovation Incubator, a major EU project financed by the European Regional Development Fund (ERDF) and the federal state of Lower Saxony.
Thanks to
Joscha Jäger – Open Hypervideo (and making this an open licensed capsule!)
Timon Beyes – Centre for Digital Cultures, Lüneburg
Mathias Fuchs – Centre for Digital Cultures, Lüneburg
Gary Hall – School of Art and Design, Coventry University
Simon Worthington – OpenMute
As Karin posted yesterday (and as I reblogged this morning), our collaborative artwork Post-Cinema: 24fps@44100Hz will be on display (and on sale) from January 15-23 at The Carrack Modern Art gallery in Durham, NC, as part of their annual Winter Community Show.
Exhibiting augmented reality pieces always brings with it a variety of challenges — including technical ones and, above all, the need to inform viewers about how to use the work. So, for this occasion, I’ve put together this brief demo video explaining the piece and how to view it. The video will be displayed on a digital picture frame mounted on the wall below the painting. Hopefully it will be eye-catching enough to attract passersby and will effectively communicate the essential information about the process and use of the work.
On April 8, 2015, I will be participating in this event, hosted by the Duke Audiovisualities Lab. During the “project showcase” portion of the event, several of the people involved in Bill Seaman and John Supko‘s Generative Media Authorship seminar — including Eren Gumrukcuoglu, Aaron Kutnick, and myself — will be presenting generative works. I will be showing some of the databending/glitch-video work I’ve been doing lately (see, for example, here and here). Refreshments and drinks will be served!
Video meditation inspired by the final paragraph of my book Postnaturalism:
Recoding our perceptions of the Frankenstein film, including even our view of Karloff’s iconic monster as the “original” of its type, Edison’s Frankenstein joins the ranks of the Frankenstein film series, now situating itself at our end rather than at the beginning of that series’ history. Now, prospering among the short clips of YouTube, where it is far more at home than any of the feature films ever could be, the Edison monster becomes capable again of articulating a “medial” narrative—a tale told from a middle altitude, from a position half-way between the diegetic story, on the one hand, of the monster’s defeat by a Frankenstein who grows up and “comes to his senses” and, on the other hand, a non-diegetic, media-historical metanarrative that, in contrast to the story of medial maturation it encoded in 1910, now articulates a tale of visual media’s currently conflicted state, caught between historical specificity and an eternal recurrence of the same. The monster’s medial narrative communicates with our own medial position, mediates possible transactions in a realm of experimentation, in which human and nonhuman agencies negotiate the terms of their changing relations. With its digitally scarred body, pocked by pixels and compression “artifacts,” the century-old monster opens a line of flight that, if we follow it, might bring us face to face with the molecular becoming of our own postnatural future.
Sketch for a multi-screen video installation, which I’ll be presenting and discussing alongside some people doing amazing work in connection with John Supko & Bill Seaman’s Emergence Lab and their Generative Media seminar — next Thursday, February 26, 2015 at the Duke Media Arts + Sciences Rendezvous.
In some ways, the digital glitch might be seen as the paradigmatic art form of our convergence culture — where “convergence” is understood more in the sense theorized by Friedrich Kittler than that proposed by Henry Jenkins. That is, glitches speak directly to the interchangeability of media channels in a digital media ecology, where all phenomenal forms float atop an infrastructural stream of zeroes and ones. They thrive upon this interchangeability, while also pointing out its limits to us. Indeed, such glitches are most commonly generated by feeding a given data format into the “wrong” system — into a piece of software that wasn’t designed to handle it, for example — and observing the results. Thus, such “databending” practices (knowledge of which circulates among networks of actors constituting a highly “participatory culture” of their own) expose the incompleteness of convergence, the instability of apparently “fixed” data infrastructures as they migrate between various programs and systems for making that data manifest.
As a result, the practice of making glitches provides an excellent praxis-based propaedeutic to a materialist understanding of post-cinematic affect. Glitches magnify the “discorrelations” that I have suggested constitute the heart of post-cinematic moving images, providing a hands-on approach to phenomena that must otherwise seem abstract and theoretical. For example, I have claimed:
CGI and digital cameras do not just sever the ties of indexicality that characterized analogue cinematography (an epistemological or phenomenological claim); they also render images themselves fundamentally processual, thus displacing the film-as-object-of-perception and uprooting the spectator-as-perceiving-subject – in effect, enveloping both in an epistemologically indeterminate but materially quite real and concrete field of affective relation. Mediation, I suggest, can no longer be situated neatly between the poles of subject and object, as it swells with processual affectivity to engulf both.
Now, I still stand behind this description, but I acknowledge that it can be hard to get one’s head around it and to understand why such a claim makes sense (or makes a difference). It probably doesn’t help (unless you’re already into that sort of thing) that I have had recourse to Bergsonian metaphysics to explain the idea:
The mediating technology itself becomes an active locus of molecular change: a Bergsonian body qua center of indetermination, a gap of affectivity between passive receptivity and its passage into action. The camera imitates the process by which our own pre-personal bodies synthesize the passage from molecular to molar, replicating the very process by which signal patterns are selected from the flux and made to coalesce into determinate images that can be incorporated into an emergent subjectivity. This dilation of affect, which characterizes not only video but also computational processes like the rendering of digital images (which is always done on the fly), marks the basic condition of the post-cinematic camera, the positive underside of what presents itself externally as a discorrelating incommensurability with respect to molar perception. As Mark Hansen has argued, the microtemporal scale at which computational media operate enables them to modulate the temporal and affective flows of life and to affect us directly at the level of our pre-personal embodiment. In this respect, properly post-cinematic cameras, which include video and digital imaging devices of all sorts, have a direct line to our innermost processes of becoming-in-time […].
I have, to be sure, pointed to examples (such as the Paranormal Activity and Transformers series of films) that illustrate or embody these ideas in a more palpable, accessible form. And I have indicated some of the concrete spaces of transformation — for example, in the so-called “smart TV”:
today the conception of the camera should perhaps be expanded: consider how all processes of digital image rendering, whether in digital film production or simply in computer-based playback, are involved in the same on-the-fly molecular processes through which the video camera can be seen to trace the affective synthesis of images from flux. Unhinged from traditional conceptions and instantiations, post-cinematic cameras are defined precisely by the confusion or indistinction of recording, rendering, and screening devices or instances. In this respect, the “smart TV” becomes the exemplary post-cinematic camera (an uncanny domestic “room” composed of smooth, computational space): it executes microtemporal processes ranging from compression/decompression, artifact suppression, resolution upscaling, aspect-ratio transformation, motion-smoothing image interpolation, and on-the-fly 2D to 3D conversion. Marking a further expansion of the video camera’s artificial affect-gap, the smart TV and the computational processes of image modulation that it performs bring the perceptual and actional capacities of cinema – its receptive camera and projective screening apparatuses – back together in a post-cinematic counterpart to the early Cinématographe, equipped now with an affective density that uncannily parallels our own. We don’t usually think of our screens as cameras, but that’s precisely what smart TVs and computational display devices in fact are: each screening of a (digital or digitized) “film” becomes in fact a re-filming of it, as the smart TV generates millions of original images, more than the original film itself – images unanticipated by the filmmaker and not contained in the source material. To “render” the film computationally is in fact to offer an original rendition of it, never before performed, and hence to re-produce the film through a decidedly post-cinematic camera. This production of unanticipated and unanticipatable images renders such devices strangely vibrant, uncanny […].
Recent news about Samsung’s smart TVs eavesdropping on our conversations may have made those devices seem even more uncanny than when I first wrote the lines above, but this, I have to admit, is still a long way from impressing the theory of post-cinematic transformation on my readers in anything like a materially robust or embodied manner — though I am supposedly describing changes in the affective, embodied parameters of life itself.
Hence my recourse to the glitch, and to the practice of making glitches as a means for gaining first-hand knowledge of the transformations I associate with post-cinema. In lieu of another argument, then, I will simply describe the process of making the video at the top of this blog post. It is my belief that going through this process gave me a deeper understanding of what, exactly, I was pointing to in those arguments; by way of extension, furthermore, I suggest that following these steps on your own will similarly provide insight into the mechanisms and materialities of what, following Steven Shaviro, I have come to refer to as post-cinematic affect.
The process starts with a picture — in this case, a jpeg image taken by my wife on an iPhone 4S:
Following this “Glitch Primer” on editing images with text editors, I began experimenting with ImageGlitch, a nice little program that opens the image as editable text in one pane and immediately updates visual changes to the image in another. (The changes themselves can be made with any normal plain-text editor, but ImageGlitch gives you a little more control, i.e. immediate feedback.)
I began inserting the word “postnaturalism” into the text at random places, thus modifying the image’s data infrastructure. By continually breaking and unbreaking the image, I began to get a feel for the picture’s underlying structure. Finally, when I had destroyed the image to my liking, I decided that it would be more interesting to capture the process of destruction/deformation, as opposed to a static product resulting from it. Thus, I used ScreenFlow to capture a video of my screen as I undid (using CMD-Z) all the changes I had just made.
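For anyone who wants to reproduce the gesture without ImageGlitch, the byte-level operation can be approximated in Python. This is a non-interactive stand-in, not the workflow described above: the file names are placeholders, and skipping the first couple of kilobytes simply reduces the chance of destroying the JPEG header outright.

```python
import random

WORD = b"postnaturalism"

def glitch_jpeg(src="original.jpg", dst="glitched.jpg", insertions=10, header_skip=2048):
    """Insert WORD into a JPEG's byte stream at random offsets past the header region."""
    data = bytearray(open(src, "rb").read())
    for _ in range(insertions):
        offset = random.randint(header_skip, len(data) - 1)
        data[offset:offset] = WORD  # insert the word, shifting the remaining bytes
    open(dst, "wb").write(bytes(data))

if __name__ == "__main__":
    glitch_jpeg()
```

Each run produces a differently broken image, which is part of the point: the damage becomes legible only when a decoder tries to render the altered bytes.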
Because I had made an inordinately large number of edits, this step-wise process of reversing the edits took eight and a half minutes, resulting in a rather long and boring video. So, in Final Cut Pro, I decided to speed things up a little — by 2000%, to be exact. (I also cropped the frame to show only the image, not the text.) I then copied the resulting 24-second video, pasted it back in after the original, and set it to play in reverse (so that the visible image goes from a deformed to a restored state and back again).
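The same speed-up and palindrome structure could be approximated outside of Final Cut Pro. Here is a rough sketch that swaps in ffmpeg (called from Python), with placeholder file names and without the cropping step; it is an illustration of the idea, not the workflow actually used.

```python
import subprocess

def speed_and_palindrome(src="screen_capture.mov", dst="palindrome.mov"):
    """Speed the capture up 20x (2000%), then append a reversed copy of the result."""
    # 20x speed: compress every frame's presentation timestamp; audio is dropped here
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vf", "setpts=PTS/20", "-an", "fast.mov"],
                   check=True)
    # Reversed copy (ffmpeg buffers the whole clip, which is fine for a ~24-second video)
    subprocess.run(["ffmpeg", "-y", "-i", "fast.mov", "-vf", "reverse", "reversed.mov"],
                   check=True)
    # Concatenate forward + reversed so the image goes from deformed to restored and back
    with open("list.txt", "w") as f:
        f.write("file 'fast.mov'\nfile 'reversed.mov'\n")
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "list.txt",
                    "-c", "copy", dst], check=True)

if __name__ == "__main__":
    speed_and_palindrome()
```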
This was a little better, but still a bit boring. What else could I do with it? One thing that was clearly missing was a soundtrack, so I next considered how I might generate one with databending techniques.
Through blog posts by Paul Hertz and Antonio Roberts, I became aware of the possibility of using the open-source audio editing program Audacity to open image files as raw data, thereby converting them into sound files for the purposes of further transformation. Instead of going through with this process of glitching, however, I experimented with opening my original jpeg image in a format that would produce recognizable sound (and not just static). The answer was to open the file with GSM encoding, which gave me an almost musical soundtrack — but a little high-pitched for my taste. (To be honest, it sounded pretty cool for about 2 seconds, and then it was annoying to the point of almost hurting.) So I exported the sound as an mp3 file, which I imported into my Final Cut Pro project, where I applied a pitch-shifting filter (turning it down 2400 cents, or 2 octaves).
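Audacity's raw-data import has no exact one-line equivalent in Python, and the GSM interpretation in particular is not reproduced here; but the basic gesture, treating an image's bytes as audio samples, can be sketched as follows, with placeholder file names and an arbitrary sample rate.

```python
import wave

def sonify(src="original.jpg", dst="soundtrack.wav", sample_rate=8000):
    """Interpret a JPEG's raw bytes as unsigned 8-bit mono PCM and write them as a WAV."""
    raw = open(src, "rb").read()
    with wave.open(dst, "wb") as out:
        out.setnchannels(1)        # mono
        out.setsampwidth(1)        # one byte per sample = unsigned 8-bit PCM
        out.setframerate(sample_rate)
        out.writeframes(raw)       # the image's bytes become the waveform

if __name__ == "__main__":
    sonify()
```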
At this point, I could have exported the video and been done with it, but while discovering the wonders of image databending, I ran across some people doing wonderful things with Audacity and video files as well. A tutorial at quart-avant-poing.com was especially helpful, while videos like the following demonstrate the range of possibilities:
So after exporting my video, complete with soundtrack, from Final Cut Pro, I imported the whole thing into Audacity (using A-Law encoding) and exported it back out (again using A-Law encoding), thereby glitching the video further — simply by the act of importing and exporting, i.e. without any intentional act of modification!
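A rough analogue of that import/export round trip can be scripted: push the file's bytes through lossy A-law compression and back, and the quantization error alone corrupts the stream. This is a sketch under assumptions rather than a reconstruction of what Audacity does internally; it uses the standard-library audioop module (available through Python 3.12), and the file names are placeholders.

```python
import audioop  # standard library through Python 3.12 (removed in 3.13)

def alaw_round_trip(src="video.mov", dst="video_glitched.mov"):
    """Treat the file's bytes as 16-bit PCM, encode to A-law, decode back, and save."""
    data = open(src, "rb").read()
    data = data[: len(data) - (len(data) % 2)]  # 16-bit samples need an even byte count
    encoded = audioop.lin2alaw(data, 2)         # 16-bit linear PCM -> 8-bit A-law (lossy)
    decoded = audioop.alaw2lin(encoded, 2)      # back to 16-bit PCM, now subtly damaged
    open(dst, "wb").write(decoded)

if __name__ == "__main__":
    alaw_round_trip()
```

Depending on the container, the damaged file may only open in forgiving players like VLC, which is exactly the situation described next.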
I opened the video in VLC and was relatively happy with the results; but then I noticed that other video players, such as QuickTime, QuickTime Player 7, and video editing software like Final Cut and Premiere Pro were all showing something different in their rendering of “the same” data! It was at this point that the connection to my theoretical musings on post-cinematic cameras, smart TVs, and the “fundamentally processual” nature of on-the-fly computational playback began to hit home in a very practical way.
As the author of the quart-avant-poing tutorial put it:
For some reasons (cause players work in different ways) you’ll get sometimes differents results while opening your glitched file into VLC or MPC etc… so If you like what you get into VLC and not what you see in MPC, then export it again directly from VLC for example, which will give a solid video file of what you saw in it, and if VLC can open it but crash while re-exporting it in a solid file, don’t hesitate to use video capture program like FRAPS to record what VLC is showing, because sometimes, capturing a glitch in clean file can be seen as the main part of the job cause glitches are like wild animals in a certain way, you can see them, but putting them into a clean video file structure is a mess.
Thus, I experimented with a variety of ways (and codecs) of exporting (or “capturing”) the video I had seen, which proved elusive to my attempts to make it repeatable (and hence visible to others). I went through several iterations of video and audio tracks until I was able to approximate what I thought I had seen and heard. At the end of the process, when I had arrived at the version embedded at the top of this post, I felt like I had more thoroughly probed (though without fully “knowing”) the relations between the data infrastructure and the manifest images — relations that I now saw as more thoroughly material than before. And I came, particularly, to appreciate the idea that “glitches are like wild animals.”
Strange beasts indeed! And when you consider that all digital video files are something like latent glitches — or temporarily domesticated animals — you begin to understand what I mean about the instability and revisability of post-cinematic images: in effect, glitches merely show us the truth about digital video as an essentially generative system, magnifying the interstitial spaces that post-cinematic machineries fill in with their own affective materialities, so that though a string of zeroes and ones remains unchanged as it streams through these systems, we can yet never cross the same stream twice…