“Unit Operations” and “Alloy Resonator 0.2” — Critical Making Collaborative, March 10, 2025

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, Daniel Jackson and Kimia Koochakzadeh-Yazdi, who will present their ongoing work in music and performance—Monday, March 10 (4PM) at the CCRMA Stage (3rd floor). 

Alloy Resonator 0.2 – Kimia Koochakzadeh-Yazdi (Music Composition) 

Alloy Resonator, a hybrid wearable instrument, embraces the fragility and rigidity of the body as an expressive medium for playing electronic music. It experiments with physical thresholds and explores ways to position the performer’s body at the center of the performance. The goal is to have every movement, whether subtle or exaggerated, become an amplified sonic gesture.

The Unit Operations Here Are Highly Specific – Daniel Jackson (Theater and Performance Studies)

The Unit Operations Here Are Highly Specific is a devised, movement-based work exploring the relationship between text, performance, and reception by allowing each audience member to choose from and switch between soundtracks while they watch a choreographed performance. The work playfully confronts the limits of personalization in the context of collective experience while interrogating how meaning is generated and where meaning resides in complex performance-media environments.

GlitchesAreLikeWildAnimalsInLatentSpace! CANINE! — Karin + Shane Denson

CANINE! (2024)

Karin & Shane Denson

Canine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making — including the mental “schematisms” theorized by Kant and now embodied in algorithmic stereotypes.

This is a screen recording of a real-time, generative/combinatory video.

Canine! is a sort of “forest of forking paths,” consisting of 64 branching and looping pathways, with alternate pathways displayed in tandem, along with generative text, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.
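A rough sketch of what such a branching-and-looping structure might look like in code; the actual piece is a Max 8 patch compositing video, sound, and text in real time, and the clip names and branch table below are invented purely for illustration:

```python
import random

# Hypothetical branch table: each clip lists the clips it can fork to.
# The actual piece has 64 such pathways; this toy version has six.
BRANCHES = {
    "root":    ["fork_a", "fork_b"],
    "fork_a":  ["fork_a1", "fork_a2"],
    "fork_b":  ["fork_b1", "root"],   # loops back to the start
    "fork_a1": ["root"],
    "fork_a2": ["fork_b"],
    "fork_b1": ["root"],
}

def walk(start="root", steps=8):
    """Yield a random path through the branching, looping clip graph."""
    clip = start
    for _ in range(steps):
        yield clip
        clip = random.choice(BRANCHES[clip])

print(" -> ".join(walk()))
```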

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation. Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a set of species-indeterminate canines, which Karin painted with acrylic on canvas. The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times in branching paths before looping back. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original canine painting into Audacity as raw data, interpreted with the GSM codec.
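The soundtrack technique can be approximated outside Audacity as well. Below is a minimal Python sketch that treats an image file's raw bytes as audio samples, in the spirit of Audacity's raw-data import. Note the hedges: Audacity interprets the bytes with the GSM 6.10 codec, which Python's standard library lacks, so this sketch substitutes plain 8-bit PCM, and the file names are hypothetical.

```python
import wave

def sonify(image_path, out_path, rate=8000):
    """Write an image file's raw bytes out as an audio file."""
    with open(image_path, "rb") as f:
        raw = f.read()                 # JPEG bytes, headers and all
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)              # mono
        w.setsampwidth(1)              # 8-bit samples (unsigned PCM)
        w.setframerate(rate)           # 8 kHz, GSM's native rate
        w.writeframes(raw)             # every byte becomes a sample

sonify("canine_painting.jpg", "canine_soundtrack.wav")
```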

Onscreen and spoken text is generated by a Markov model trained on Shane’s article “Artificial Imagination” (https://ojs.library.ubc.ca/index.php/cinephile/article/view/199653).
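The Markov generation runs through the RiTa tools mentioned below; as a standalone illustration of the technique itself, a word-level Markov chain can be built in a few lines. This Python sketch is not the actual patch, and the corpus filename is hypothetical:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each sequence of `order` words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=2, length=40):
    """Walk the model from a random state, emitting one word at a time."""
    out = list(random.choice(list(model)))
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("artificial_imagination.txt").read()
print(generate(build_model(corpus)))
```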

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki's MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html), which interfaces Max with the p5.js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein's external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.
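The text-to-speech hand-off is the simplest link in the chain: the shell object just lets Max call out to the operating system. On macOS the obvious target is the built-in say command; a minimal Python stand-in (assuming say is indeed the command invoked, which the post does not specify):

```python
import subprocess

def speak(text):
    # macOS built-in text-to-speech; blocks until the utterance finishes
    subprocess.run(["say", text], check=True)

speak("Glitches are like wild animals")
```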

See also: Bovine! (https://vimeo.com/manage/videos/1013903632)

“Democratizing Vibrations” and “Opera Machine” — Critical Making Collaborative, Nov. 22, 2024

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, Westley Montgomery and Lloyd May, who will present their ongoing work in opera and haptic art—Friday, Nov. 22 (5PM) at the CCRMA Stage (3rd floor).

Democratizing Vibrations – Lloyd May (Music Technology)

What would it mean to put vibration and touch at the center of a musical experience? What should devices used to create and experience vibration-based art (haptic instruments) look and feel like? These questions are at the core of the Musical Haptics project, which aims to co-design haptic instruments and artworks with D/deaf and hard-of-hearing artists.

Opera Machine – Westley Montgomery (Theater and Performance Studies)

Opera Machine is a work-in-process exploring music, measurement, and the sedimentation of culture in the bodies of performers. How does the cultural legacy of opera reverberate in the present day? How have the histories of voice-science, race “science,” and the gendering of the body co-produced pedagogies and styles of opera performance? What might it look like (sound like) to resist these histories? 

GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)

BOVINE! (2024)
Karin & Shane Denson

Bovine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible but virtually impossible that the same combination of image, sound, and text will ever be repeated.

Bovine-Video-Only.app removes the text and text-to-speech elements and features only the generative audio and video, assembled at random from five cut-up versions of a single video and composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial, now offline), and trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted with acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and lack of synchronization.

The full video was then cropped to produce five different strips. The audio on each was positioned accordingly in stereo space (i.e., the left-most strip has its audio panned all the way to the left, the next one over sits halfway between the left and the center, the middle one is in the center, etc.). For each strip of video, the Max app chooses at random from a set of predetermined start points, keeping the overall image more or less in sync.
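In code, the arithmetic described above might look like the following Python sketch (the actual compositing happens in a Max 8 patch; the start-point values here are invented):

```python
import random

N_STRIPS = 5
# Hypothetical start points (seconds); drawing from a shared, predetermined
# set keeps the five strips more or less in sync with one another.
START_POINTS = [0.0, 12.5, 25.0, 37.5]

def pan_position(i, n=N_STRIPS):
    """Map strip index 0..n-1 onto the stereo field, -1.0 (hard left) to +1.0 (hard right)."""
    return -1.0 + 2.0 * i / (n - 1)

for i in range(N_STRIPS):
    start = random.choice(START_POINTS)
    print(f"strip {i}: pan {pan_position(i):+.2f}, start {start}s")
```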

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki's MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html), which interfaces Max with the p5.js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein's external Max object, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)

“Artificial Imagination” — Out now in new issue of Cinephile (open access)

The new issue of Cinephile, the University of British Columbia’s film and media journal, is just out. The theme of the issue is “(Un)Recovering the Future,” and it’s all about nostalgia, malaise, history, and (endangered) futurities.

In this context, I am happy to have contributed a piece called "Artificial Imagination" on the relation between AI and (visual) imagination. The essay lays some of the groundwork for a larger exploration of AI and its significance for aesthetics in both broad and narrow senses of the word. It follows from the emphasis on embodiment in my essay "From Sublime Awe to Abject Cringe: On the Embodied Processing of AI Art," recently published in Journal of Visual Culture; both essays are part of a larger book project tentatively titled Art & Artificiality, or: What AI Means for Aesthetics.

Thanks very much to editors Will Riley and Liam Riley for the invitation to contribute to this issue!

OUT NOW: “From Sublime Awe to Abject Cringe: On the Embodied Processing of AI Art” in Journal of Visual Culture

The new issue of Journal of Visual Culture just dropped, and I’m excited to see my article on AI art and aesthetics alongside work by Shannon Mattern, Bryan Norton, Jussi Parikka, and others. It looks like a great issue, and I’m looking forward to digging into it!

“Mimetic Virtualities” — Yvette Granata at Digital Aesthetics Workshop, February 6, 2024

Please join us for the next Digital Aesthetics Workshop, when we will welcome Yvette Granata for her talk on "Mimetic Virtualities: Rendering the Masses and/or Feminist Media Art?" on February 6, 5-7pm PT. The event will take place in the Stanford Humanities Center Board Room, where refreshments will be served. Below you will find the abstract and bio, as well as a poster for lightweight circulation. We look forward to seeing you there!

Zoom link for those unable to join in-person: tinyurl.com/2r285898

Abstract: 

From stolen election narratives to QAnon cults, the politics of the 21st century are steeped in the mainstreaming of disinformation and the hard-core pursuit of false realities via any media necessary. Simultaneously, the 21st century marks the rise of virtual reality as a mass medium. While the spatial computing technologies behind virtual reality graphics and head-mounted displays have been in development since the middle of the 20th century, virtual reality as a mass medium is a phenomenon of the last decade. Concurrently with the development of VR as a mass medium, the tools of virtual production have proliferated, such as motion capture libraries, 3D model and animation platforms, and game engine tools. Do the pursuit of false realities and the proliferation of virtual reality technologies have anything to do with each other? Has virtual reality as a mass medium shaped the aesthetics of the digital masses differently? Looking to the manner in which virtual mimesis operates via rendering methods of the image of crowds, from 2D neural GAN generators to the recent development of neural radiance fields (NeRFs) as a form of mass 3D rendering, I analyze the politics and aesthetics of mimetic virtualities as both a process of rendering of the masses and a process of the distribution of the sensibility of virtualized bodies. Lastly, I present all of the above via feminist media art practice as a critical, creative method.

Bio:

Yvette Granata is a media artist, filmmaker, and digital media scholar. She is an Assistant Professor at the University of Michigan in the Department of Film, Television, and Media and the Digital Studies Institute. She creates immersive installations, video art, VR experiences, and interactive environments, and writes about digital culture, media art, and media theory. Her work has been exhibited nationally and internationally at film festivals and art institutions including Slamdance, CPH:DOX, the Melbourne International Film Festival, the Annecy International Animation Festival, Images Festival, the Harvard Carpenter Center for the Arts, the EYE Film Museum, the McDonough Museum of Art, and Hallwalls Contemporary Art, among others. Her most recent VR project, I Took a Lethal Dose of Herbs, premiered at CPH:DOX in 2023, won best VR film at the Cannes World Film Awards, and received an Honorable Mention at Prix Ars Electronica in Linz, Austria. Yvette has also published in Ctrl-Z: New Media Philosophy, Trace Journal, NECSUS: European Journal of Media Studies, International Journal of Cultural Studies, and AI & Society. She lives in Detroit.

“My Life as an Artificial Creative Intelligence: A Speculative Fiction” — Mark Amerika, Dept of Art & Art History, Nov. 29, 2023

Photo credit: Laura Shill

In this artist talk, Mark Amerika shares his creative process as a digital artist whose symbiotic relationship with both language and diffusion models informs his artistic and theoretical pursuits. Turning to his most recent book, My Life as an Artificial Creative Intelligence (Stanford University Press) and his just-released art project, Posthuman Cinema, Amerika will demonstrate, through personal narrative and theoretical asides, how different rhetorical uses of language can transform AI into a camera, a fiction writer, a poet and a philosopher.

Throughout the performance, Amerika will ask us to consider at what point a language artist becomes a language model and vice versa. He will also question what new skills artists will have to develop as they co-evolve in a creative work environment where one must maintain a playful and dynamic relationship with the rapid technical maneuvering of the machinic Other. Will a more robust, intuitive yet interdependent relationship with AI models require artists to fine-tune what Amerika refers to as a cosmotechnical skill, one that is at once imaginative and indeterminate, playful and profound, grounded yet otherworldly in its aesthetic becoming? And how do we teach this skill at both the undergraduate and graduate level?

Borrowing from Beatnik poets and jazz musicians alike, Amerika suggests that a continuous call-and-response improvisational jam session with AI models may unlock personal insights that reveal how one’s own unconscious neural mechanism acts (performs) like a Meta Remix Engine. Engaging with other artists and writers who have tapped into their creative spontaneity as a primary research methodology, Amerika will discuss how digital artists can train themselves to intuitively select and defamiliarize datum for aesthetic effect. In so doing, Amerika suggests that this is how an artist connects with their own alien intelligence, a mediumistic sensibility that takes them out of their anthropocentric stronghold and invites them to reimagine what it means to be creative across the human-nonhuman spectrum.

Mark Amerika has exhibited his art in many venues including the Whitney Biennial, the Denver Art Museum, ZKM, the Walker Art Center, and the American Museum of the Moving Image. His solo exhibitions have appeared all over the world including at the Institute of Contemporary Arts in London, the University of Hawaii Art Galleries, the Marlborough Gallery in Barcelona and the Norwegian Embassy in Havana.

Amerika has had five early and/or mid-career retrospectives including the first two Internet art retrospectives ever produced (Tokyo and London). In 2009-2010, The National Museum of Contemporary Art in Athens, Greece, featured Amerika’s comprehensive retrospective exhibition entitled UNREALTIME. The exhibition included his groundbreaking works of Internet art GRAMMATRON and FILMTEXT as well as his feature-length work of mobile cinema, Immobilité. In 2012, Amerika released his large-scale transmedia narrative, Museum of Glitch Aesthetics (MOGA), a multi-platform net artwork commissioned by Abandon Normal Devices in conjunction with the London 2012 Olympic and Paralympic Games. His public art project, Glitch TV, was featured at the opening of the “video towers” at Denver International Airport.

He is the author of thirteen books including My Life as an Artificial Creative Intelligence, the inaugural title in the “Sensing Media” series published in 2022 by Stanford University Press.

See here for more information.

Correlative Counter-Capture in Contemporary Art @ ASAP/14

Rafael Lozano-Hemmer, “Pulse Index”, 2010. “Recorders”, Museum of Contemporary Art, Sydney, 2011. Photo by: Antimodular Research

On Saturday, September 30, at 9am Pacific Time, I’ll be giving the following talk at ASAP/14 (online):

Correlative Counter-Capture in Contemporary Art

Computational processing takes place at speeds and scales that are categorically outside human perception, but such invisible processing nevertheless exerts significant effects on the sensory and aesthetic—as well as political—qualities of artworks that employ digital and/or algorithmic media. To account for this apparent paradox, it is necessary to rethink aesthetics itself in the light of two evidently opposing tendencies of computation: on the one hand, the invisibility of processing means that computation is phenomenologically discorrelated (in that it breaks with what Husserl calls the "fundamental correlation between noesis and noema"); on the other hand, however, when directed toward the production of sensory contents, computation relies centrally on statistical correlations that reproduce normative constructs (including those of gender, race, and dis/ability). As discorrelative, computation exceeds the perceptual bond between subject and object, intervening directly in the prepersonal flesh; as correlative, computation not only expresses "algorithmic biases" but is capable of implanting them directly in the flesh. Through this double movement, a correlative capture of the body and its metabolism is made possible: a statistical norming of subjectivity and collectivity prior to perception and representation. Political structures are thus seeded in the realm of affect and aesthesis, but because the intervention takes place in the discorrelated matter of prepersonal embodiment, a margin of indeterminacy remains from which aesthetic and political resistance might be mounted (with no guarantee of success). In this presentation, I turn to contemporary artworks combining the algorithmic (including AI, VR, or robotics) with the metabolic (including heart-rate sensors, ECGs, and EEGs) in order to imagine a practice of dis/correlative counter-capture. Works by the likes of Rashaad Newsome, Rafael Lozano-Hemmer, Hito Steyerl, or Teoma Naccarato and John MacCallum point to an aesthetic practice of counter-capture that does not elude but re-engineers mechanisms of control for potentially, but only ever locally, liberatory purposes.

Pelicans and Glitches on California’s North Coast

I’ve been traveling a lot outside of California this summer, but whenever I get the chance I like to spend time up north in Mendocino or Fort Bragg, where my wife Karin is part of the artist collective at Edgewater Gallery.

Earlier in the summer, we observed tons of California brown pelicans and common murres (which look like penguins) camped out on some small offshore islands. The assembly has attracted a lot of attention — from locals, tourists, artists, and scientists. The local newspaper, The Mendocino Voice, just put out a long piece on the birds and the possible reasons for their convergence there, and they quoted Karin and featured a glitch collage that she did a while back.

Karin has been photographing, filming, glitching, and painting pelicans and other California wildlife for several years now. Check out more of her work at karindenson.com.