“The Environmental Data Stack” — Jussi Parikka at Digital Aesthetics Workshop, January 7, 2025

We’re pleased to announce our first event of 2025! Please join us in welcoming Jussi Parikka, who will present on “The Environmental Data Stack” on Tuesday, Jan 7, 5-7pm PT. The event will take place in the Stanford Humanities Center Board Room, where refreshments will be served. We look forward to seeing you there after the holiday break!

Zoom link for those unable to join in-person: tinyurl.com/ykhvtu63

Abstract:

This talk tests the notion of the “environmental data stack” as a particular kind of methodological problem space (Lury 2021). The term defines the multiple levels of “problematics” involved in grounding environmental data in alternating scales of reference, in different technological forms of data capture, and in various interacting registers of sensing. The environmental data stack builds on existing work in critical data studies, where situated, even spatialized notions of data are developed, and it also lends itself to a sense of the politics and aesthetics of data, where aesthetics is not necessarily about art (though it can be) but about the wider context of materials, sensing, and modeling. This work relates to my interest in cultural techniques of data and software studies, including the intersection of ecomedia and computational practices. The talk will thus feature some examples from recent and ongoing work in different projects, such as Design and Aesthetics for Environmental Data (https://cc.au.dk/en/dafed/).

Bio:

Jussi Parikka is Professor of Digital Aesthetics and Culture at Aarhus University, where he leads the Digital Aesthetics Research Centre (DARC) and is the founding co-director of the Environmental Media and Aesthetics research program. He also holds a visiting professorship at Winchester School of Art (University of Southampton). His books have addressed media archaeology, the ecological underpinnings of discourses of digital culture from animals to geology, and, most recently, transformations of visual culture. His latest books include Operational Images (2023) and the co-authored Living Surfaces: Images, Plants, and Environments of Media (2024, with Abelardo Gil-Fournier), both available as open access. His books have been translated into 12 languages. He is currently developing a new project on the datafication of agriculture.

This event is generously co-sponsored by The Europe Center.

GlitchesAreLikeWildAnimalsInLatentSpace! CANINE! — Karin + Shane Denson

CANINE! (2024)

Karin & Shane Denson

Canine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making — including the mental “schematisms” theorized by Kant and now embodied in algorithmic stereotypes.

This is a screen recording of a real-time, generative/combinatory video.

Canine! is a sort of “forest of forking paths,” consisting of 64 branching and looping pathways, with alternate pathways displayed in tandem, along with generative text, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.
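For readers curious about the combinatory mechanics, here is a minimal sketch of how a branching, looping clip graph might be traversed. The clip names and topology below are invented for illustration only; the actual piece runs 64 pathways inside Max rather than Python.

```python
import random

# Hypothetical clip graph: each clip lists the clips that may follow it.
# Loops back to earlier clips are allowed, as in the piece's looping pathways.
CLIP_GRAPH = {
    "intro": ["run_a", "run_b"],
    "run_a": ["howl", "run_b", "intro"],
    "run_b": ["howl", "intro"],
    "howl":  ["intro"],
}

def traverse(start="intro", steps=8):
    """Walk the clip graph, choosing a random successor at each branch."""
    clip = start
    path = [clip]
    for _ in range(steps):
        clip = random.choice(CLIP_GRAPH[clip])
        path.append(clip)
    return path

print(" -> ".join(traverse()))
```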

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation. Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a set of species-indeterminate canines, which Karin painted with acrylic on canvas. The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times in branching paths before looping back. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original canine painting into Audacity as raw data, interpreted with the GSM codec.
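As a rough illustration of the databending step, the following sketch reads an image file's raw bytes and wraps them as uncompressed audio. The filename is a placeholder, and the sketch substitutes plain 8-bit PCM for Audacity's GSM-codec interpretation, which Python's standard library cannot reproduce.

```python
import wave

# Databending sketch: treat an image's bytes as audio samples.
# "canine.jpg" is a placeholder filename, not the actual source file.
with open("canine.jpg", "rb") as f:
    raw = f.read()

with wave.open("canine_soundtrack.wav", "wb") as out:
    out.setnchannels(1)      # mono
    out.setsampwidth(1)      # 8-bit samples: one byte of image per sample
    out.setframerate(8000)   # GSM's native sample rate, kept for flavor
    out.writeframes(raw)     # the image data becomes the waveform
```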

Onscreen and spoken text is generated by a Markov model trained on Shane’s article “Artificial Imagination” (https://ojs.library.ubc.ca/index.php/cinephile/article/view/199653).
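For those unfamiliar with Markov text generation, here is a bare-bones, word-level sketch of the principle. The piece itself uses the RiTa tools inside Max/p5js (see below) rather than this code; the order, output length, and corpus filename here are arbitrary placeholders.

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=30):
    """Start from a random key and repeatedly sample a likely next word."""
    key = random.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        successors = model.get(tuple(out[-len(key):]))
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = open("artificial_imagination.txt").read()  # placeholder source text
print(generate(train(corpus)))
```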

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html), which interfaces Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.
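Outside of Max, the same OS-level text-to-speech can be triggered on macOS with the built-in say command; this one-liner is only an analogy for what the shell object does inside the patch.

```python
import subprocess

# macOS only: hand a string to the system's text-to-speech engine,
# roughly what the shell object lets the Max patch do directly.
subprocess.run(["say", "Glitches are like wild animals"], check=True)
```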

See also: Bovine! (https://vimeo.com/1013903632)

“Rise of the Machines” — Spiral Film and Philosophy Conference, May 23-24, 2025

I am excited to announce that I will be giving the keynote lecture at the Spiral Film & Philosophy conference in May. I attended Spiral back before the pandemic, when Deborah Levitt gave the keynote, and I have been wanting to return ever since.

They’ve put together an excellent theme this year — please share the CFP widely!

“Digital Orreries: Meditations on Material and Media Cosmologies” — Aileen Robinson at Digital Aesthetics Workshop, Dec. 3, 2024

We’re pleased to announce our next event of the year. Please join us in welcoming Aileen Robinson, who will present on “Digital Orreries: Meditations on Material and Media Cosmologies” on Tuesday, Dec 3, 6:30-8:30pm PT. The event will take place in the Stanford Humanities Center Board Room, where refreshments (and dinner!) will be served.

Zoom link for those unable to join in-person: tinyurl.com/4mnk7wmn

Bio:

Aileen Robinson is a historian of performance and technology with specializations in eighteenth- and nineteenth-century technological performance and Black cultural performances. Working across the history of science, technology, and theatre, Robinson explores how systems of knowledge, connected to the body and the object, overlapped to produce practices of research, dissemination, and valuation. Robinson’s current book manuscript, Instruments of Illusion, explores intersections between technological, scientific, and theatrical knowledge in early nineteenth-century interactive science museums. She teaches across the history of science and performance, magic and technology, eighteenth- and nineteenth-century stagecraft, and nineteenth- and twentieth-century Black artistic production.

This event has been generously co-sponsored by Stanford TAPS: Department of Theater and Performance Studies.

“Democratizing Vibrations” and “Opera Machine” — Critical Making Collaborative, Nov. 22, 2024

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, Westley Montgomery and Lloyd May, who will present their ongoing work in opera and haptic art on Friday, Nov. 22 (5pm) at the CCRMA Stage (3rd floor).

Democratizing Vibrations – Lloyd May (Music Technology)

What would it mean to put vibration and touch at the center of a musical experience? What should the devices used to create and experience vibration-based art (haptic instruments) look and feel like? These questions are at the core of the Musical Haptics project, which aims to co-design haptic instruments and artworks with D/deaf and hard-of-hearing artists.

Opera Machine – Westley Montgomery (TAPS)

Opera Machine is a work-in-process exploring music, measurement, and the sedimentation of culture in the bodies of performers. How does the cultural legacy of opera reverberate in the present day? How have the histories of voice-science, race “science,” and the gendering of the body co-produced pedagogies and styles of opera performance? What might it look like (sound like) to resist these histories? 

“Forms in Motion: Elemental Effects in Contemporary Cinema” — Kartik Nair at Digital Aesthetics Workshop, Nov. 12, 2024

We’re pleased to announce our first event of the 2024-25 academic year. Please join us in welcoming Kartik Nair, who will present on “Forms in Motion: Elemental Effects in Contemporary Cinema” on Tuesday, November 12, 5:00-7:00pm PT. The event will take place in the Stanford Humanities Center Board Room, where refreshments will be served. Below you will find the abstract and bio, as well as a poster for circulation. We look forward to seeing you there!

Zoom link for those unable to join in-person: tinyurl.com/4b8e75v4

Abstract:

Motion capture is the practice of recording the movements of human bodies and using those movements to animate computer-generated bodies, thereby producing virtual character movement on the screen. Current scholarship on motion capture has critically examined the construction of this technology in trade reportage, industry journalism, and film promotion, detecting a discursive ambivalence arising from a struggle for recognition between live actors and motion capture technicians over the future of film performance. This talk will use motion capture as a heuristic for understanding the many other kinds of human movement captured in the processes of digital image-making, tracking the pipeline of atmospheric effects. Such effects are ubiquitous in contemporary blockbuster cinema: dust, fire, smoke, light, water, and other particulate matter proliferate in the mise-en-scène, helping to ground impossible worlds even as they fascinate us with their own expressive qualities. Replacing the logic of photographic capture with one in which the frame is a ‘blank canvas’ to which elements are selectively added, such atmospheric effects vividly attest to the claim that digital tools have re-linked filmmaking with painting. Yet, unlike the painted canvas, which preserves brushstrokes in frozen perpetuity, virtual effects inscribe a trace of and in motion: these are instances in which the creative and corporeal motion of visual effects artists is captured and conveyed as motion. This process unfolds along a transnational path. Even as those generating the mobile trace may remain immobilized by visa regulations, server locations, and time-zone differentials, their physical moves are eventually expropriated and assimilated into screen movement. Closely read, then, the spectacular conventions of blockbuster cinema can become legible as archives in and of motion.

Bio:

Kartik Nair is a film scholar working at the intersection of transnational cinema, film historiography, materialist media theory, and infrastructure studies, with a focus on popular genres and South Asian cinema. His first book, Seeing Things, is about the production and circulation of low-budget horror films in 1980s India. His current research explores the physical pipelines of digital cinema. He is an Assistant Professor of Film Studies at Temple University in Philadelphia, and one of the core editors of BioScope: South Asian Screen Studies.

This event has been generously co-sponsored by the Department of Art & Art History and the Stanford Center for South Asia.

GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)

BOVINE! (2024)
Karin & Shane Denson

Bovine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.

Bovine-Video-Only.app removes the text and text-to-speech elements and features only the generative audio and video, which is assembled randomly from five cut-up versions of a single video, composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted with acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and a lack of synchronization.

The full video was then cropped to produce five different strips. The audio for each strip was positioned accordingly in stereo space (i.e. the left-most strip has its audio panned all the way to the left, the next one over sits halfway between left and center, the middle one is in the center, and so on). The Max app chooses randomly, from a set of predetermined start points, where to begin playing each strip of video, keeping the overall image more or less in sync.
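The pan positions described above amount to an even spacing across the stereo field. Computed on a -1 (hard left) to +1 (hard right) scale, which is one common convention (Max patches often use 0..1 instead), they come out as follows.

```python
# Evenly spaced stereo pan positions for the five vertical strips.
N_STRIPS = 5
pans = [2 * i / (N_STRIPS - 1) - 1 for i in range(N_STRIPS)]
print(pans)  # [-1.0, -0.5, 0.0, 0.5, 1.0]
```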

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html), which interfaces Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)

Congratulations, Dr. Hank Gerba!

On Sunday, June 16, it was my honor to “hood” Hank Gerba, who earned their doctoral degree in Art History with a concentration in Film & Media Studies, and to make the following remarks at our departmental commencement ceremony:

Hank Gerba’s dissertation, “Digital Disruptions: Moiré, Aliasing, and the Stroboscopic Effect,” is exceptional in a number of ways. While it is a work primarily in media theory, or even media philosophy (itself an exceptional or eccentric subfield within film and media studies), the dissertation stages its argument by way of close engagement not only with a number of digital processes and devices but also with a number of analog artworks. Accordingly, it is exceptionally well suited to an interdisciplinary graduate program like ours, where students like Hank graduate with a PhD in Art History but are also able to specialize in Film & Media Studies, and where all of our students are expected to gain some acquaintance with both fields.

Still, it is rare for a dissertation to be so agnostic about traditional disciplinary divisions and to speak across boundaries in a way that goes straight to their root: in this case, straight to the root of aesthetic and technical processes or forms. The division between technology or the technical, and art or the aesthetic, is a fairly recent invention, just a little over 200 years old. It is only in the wake of this split that we can speak of the supposedly disparate fields of art history and media studies. Exceptionally, Hank’s dissertation challenges that split, along with a number of other more or less conventional categorizations. And it does so by foregrounding a number of exceptional phenomena: the shimmer of moiré silk, pixely appearances on computer screens, and the stroboscopic flickers of film—digital disruptions, in the words of the dissertation’s title, that arise when digital and analog logics come into conflict with one another, when blocky grids clash with smooth contours or a film’s discrete frames line up with the movement of wagon wheels or helicopter blades to make it seem as though they were standing still.

In framing the project this way, however, Hank not only challenges disciplinary boundaries but significantly relocates the digital/analog divide. The digital is not just about computers but applies also to the clashing grids of watered silk, which give rise to the analog shimmer that we see in the fabric and in artworks made with it. Digital and analog come to name not particular types of technologies or media, but fundamental modes of organizing aesthetic experience itself. This is an important media-philosophical argument, and it lays important groundwork for thinking about the ways that contemporary media, such as AI, are actively transforming our visual and aesthetic worlds.

I’ll just mention, finally, that Hank is the first student for whom I served as primary advisor, and the first PhD student whose progress I have accompanied from admission to the program all the way to the defense of their dissertation. So it kind of feels like I’m graduating today as well. But it’s Hank who did the work and in many respects surpassed their mentor. I am grateful to have learned from Hank, both through their scholarship and through their work across the university, including at the Digital Aesthetics Workshop at the Stanford Humanities Center, where we have collaborated for several years. Hank is graduating with the Christopher Meyer Prize, one of the highest honors we can bestow on a graduating PhD student, in recognition not only of excellent scholarship but also of outstanding service to the departmental and university community.

Please join me in congratulating Hank Gerba.

Video: Leonardo Art Science Evening Rendezvous (LASER) Talks, June 10, 2024

Thanks to Piero Scaruffi for inviting me to present at the Leonardo Art Science Evening Rendezvous (LASER) series here at Stanford last night, alongside Virginia San Fratello, Fiorenza Micheli, and Tom Mullaney. It was a great conversation, with lots of unexpected resonances!