“Democratizing Vibrations” and “Opera Machine” — Critical Making Collaborative, Nov. 22, 2024

The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, Westley Montgomery and Lloyd May, who will present their ongoing work in opera and haptic art on Friday, Nov. 22 (5PM), at the CCRMA Stage (3rd floor).

Democratizing Vibrations – Lloyd May (Music Technology)

What would it mean to put vibration and touch at the center of a musical experience? What should devices used to create and experience vibration-based art (haptic instruments) look and feel like? These questions are at the core of the Musical Haptics project that aims to co-design haptic instruments and artworks with D/deaf and hard-of-hearing artists. 

Opera Machine – Westley Montgomery (TAPS)

Opera Machine is a work-in-process exploring music, measurement, and the sedimentation of culture in the bodies of performers. How does the cultural legacy of opera reverberate in the present day? How have the histories of voice-science, race “science,” and the gendering of the body co-produced pedagogies and styles of opera performance? What might it look like (sound like) to resist these histories? 

GlitchesAreLikeWildAnimalsInLatentSpace! BOVINE! — Karin + Shane Denson (2024)

BOVINE! (2024)
Karin & Shane Denson

Bovine! is a part of the GlitchesAreLikeWildAnimalsInLatentSpace! series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making.

The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:

Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible, but vanishingly unlikely, that the same combination of image, sound, and text will ever recur.

Bovine-Video-Only.app removes the text and text-to-speech elements, featuring only generative audio and video, assembled at random from five cut-up versions of a single video and composited together in real time.

The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.

Karin Denson, Training Data (C-print, 36 x 24 in., 2024)

Prompting the model with terms like “Glitches are like wild animals” (a phrase she has been working with for years, originally found in an online glitch tutorial that has since gone offline), while trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted in acrylic on canvas:

Karin Denson, Bovine Form (acrylic on canvas, 36 x 24 in., 2024)

The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was extended a number of times. The resulting video was glitched with databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data, interpreted with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and a lack of synchronization.
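For readers who want to try something similar: the actual soundtrack came from Audacity’s raw-data import with the GSM codec, but the basic databending move — reinterpreting an image file’s bytes as audio — can be sketched in a few lines of Node.js. The filenames and sample rate below are illustrative assumptions, not details of our process:

const fs = require('fs');

// Read the image's raw bytes and treat each byte as an unsigned
// 8-bit PCM audio sample (a cruder stand-in for Audacity's
// GSM-codec interpretation).
const imageBytes = fs.readFileSync('cow-painting.jpg'); // hypothetical filename
const sampleRate = 8000; // GSM's native rate, chosen here for flavor

// Minimal WAV header for mono, 8-bit PCM audio.
function wavHeader(dataLength) {
  const h = Buffer.alloc(44);
  h.write('RIFF', 0);
  h.writeUInt32LE(36 + dataLength, 4);
  h.write('WAVEfmt ', 8);
  h.writeUInt32LE(16, 16);         // fmt chunk size
  h.writeUInt16LE(1, 20);          // audio format: PCM
  h.writeUInt16LE(1, 22);          // channels: mono
  h.writeUInt32LE(sampleRate, 24); // sample rate
  h.writeUInt32LE(sampleRate, 28); // byte rate (1 byte per sample)
  h.writeUInt16LE(1, 32);          // block align
  h.writeUInt16LE(8, 34);          // bits per sample
  h.write('data', 36);
  h.writeUInt32LE(dataLength, 40);
  return h;
}

fs.writeFileSync('databent.wav',
  Buffer.concat([wavHeader(imageBytes.length), imageBytes]));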

The full video was then cropped to produce five different strips. The audio of each strip was panned accordingly in stereo space (i.e., the leftmost strip’s audio is panned all the way to the left, the next one over sits halfway between left and center, the middle one sits at the center, etc.). For each strip of video, the Max app randomly chooses where to begin playback from a set of predetermined start points, keeping the overall image more or less in sync.
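The compositing itself happens in Max, but the pan-and-start logic is simple enough to sketch in JavaScript. This is a hypothetical reconstruction — the number of start points and their values are invented for illustration:

const NUM_STRIPS = 5;
const START_POINTS = [0, 12, 24, 36, 48]; // seconds; invented values

const strips = Array.from({ length: NUM_STRIPS }, (_, i) => ({
  // Evenly spaced stereo pan: -1 (hard left) through 0 (center) to +1 (hard right).
  pan: (i / (NUM_STRIPS - 1)) * 2 - 1,
  // Each strip independently picks one of the predetermined start points,
  // which keeps playback "more or less" in sync across the image.
  start: START_POINTS[Math.floor(Math.random() * START_POINTS.length)],
}));

console.log(strips);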

Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.
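Mechanically, this kind of Markov-chain generation is compact. A hypothetical standalone version using the JavaScript edition of the RiTa tools (named below) might look like this — the training-file name is invented, and in the piece itself this runs inside p5js, bridged to Max:

const fs = require('fs');
const { RiTa } = require('rita');

// Train an n-gram Markov model on the book's text...
const corpus = fs.readFileSync('discorrelated-images.txt', 'utf8'); // hypothetical file
const markov = RiTa.markov(3); // n-gram length of 3
markov.addText(corpus);

// ...and generate two sentences recombined from it.
console.log(markov.generate(2));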

Made with Max 8 (https://cycling74.com/products/max) on a 2023 Mac Studio (Mac14,14, 24-core Apple M2 Ultra, 64 GB RAM) running macOS Sonoma (14.6.1). Generative text is produced with Paweł Janicki’s MaxAndP5js Bridge (https://www.paweljanicki.jp/projects_maxandp5js_en.html), which interfaces Max with the p5js (https://p5js.org) version of the RiTa tools for natural language and generative writing (https://rednoise.org/rita/). Jeremy Bernstein’s Max external, shell 1.0b3 (https://github.com/jeremybernstein/shell/releases/tag/1.0b3), passes the text to the OS for text-to-speech.
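On macOS, the text-to-speech endpoint that the shell external presumably reaches is the system’s built-in say command. A hypothetical Node.js equivalent of that last step:

const { execFile } = require('child_process');

// Hand a generated sentence to macOS's built-in `say` command
// (a stand-in for what the Max shell external does in the piece).
function speak(text) {
  execFile('say', [text], (err) => {
    if (err) console.error('text-to-speech failed:', err);
  });
}

speak('Glitches are like wild animals in latent space.');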

Karin Denson, Bovine Space (pentaptych, acrylic on canvas, each panel 12 x 36 in., total hanging size 64 x 36 in., 2024)

Don’t Look Now: From Flawed Experiment in Videographic Interactivity to New Open-Source Tool — Interactive Video Grid

Back in 2016, my experimental video essay “Don’t Look Now: Paradoxes of Suture” was published in the open-access journal [in]Transition: Journal of Videographic Film and Moving Image Studies. The piece was an experiment with the limits of the “video essay” form, and a test to see whether the form could accommodate non-linear and interactive elements (produced with some very basic JavaScript and HTML/CSS so as to remain accessible and viewable even as web infrastructures change). Seeing as the interactive video essay was accepted and published in a peer-reviewed journal devoted, for the most part, to more conventional linear video essays, I considered the test passed. (However, since the journal recently moved to a new hosting platform with the Open Library of Humanities, the interactive version is no longer included directly on the site, which instead links to my own self-hosted version here.)

But even if the test was passed in terms of publication, the peer reviewers noted that the experiment was not altogether successful. Richard Misek noted that the piece was “flawed,” qualifying nevertheless that “the work’s limitations are integral to its innovation.” The innovation, according to Misek, was to point to a new way of looking and doing close analysis:

“Perhaps one should see it not as a self-contained video essay but as a walk-through of an early beta of an app for viewing and manipulating video clips spatially. Imagine, for example… The user imports a scene. The app then splits it into clips and linearly spatializes it, perhaps like in Denson’s video. Each clip can then be individually played, looped, or paused. For example, the user can scroll to, and then pause, the in points or out points for each clip; or just play two particular shots simultaneously and pause everything else. Exactly how the user utilizes this app depends on the film and what they hope to discover from it. The very process of doing this, of course, may then also reveal previously unnoticed themes, patterns, or equivalences. Such a platform for analyzing moving images could hugely facilitate close formal analysis. I imagine a moving image version of Warburg’s Mnemosyne Atlas – a wall (/ screen) full of images, all existing in spatial relation with each other, and all in motion; a field of connections waiting to be made.

“In short, I think this video points towards new methods of conducting close analysis rather than new methods of presenting it. In my view, the ideal final product would not be a tidied-up video essay but an app. I realize that, technically and conceptually, this is asking a lot. It would be a very different, and much larger project. For now, though, this video provides an inspiring demo of what such an app could help film analysts achieve.”

Fast-forward eight years, to a short article on “Five Video Essays to Close Out May,” published on May 28, 2024 in Hyperallergic. Here, author Dan Schindel includes a note about an open-source and open-access tool, the Interactive Video Grid by Quan Zhang, that is inspired by my video essay and aims to realize a large part of the vision laid out by Misek in his review. As one of two demos of the tool, which allows users to create interactive grids of video clips for close and synchronous analysis, Zhang even includes “Don’t Look Now: Paradoxes of Suture. A Reconfiguration of Shane Denson’s Interactive Video Essay.”
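I haven’t examined Zhang’s implementation, but the core idea — a grid of clips that can be paused and played side by side for synchronous comparison — fits in a few lines of browser JavaScript. The sketch below is my own illustration with invented clip filenames, not code from the Interactive Video Grid:

const clips = ['shot1.mp4', 'shot2.mp4', 'shot3.mp4', 'shot4.mp4']; // hypothetical files

// Lay the clips out in a two-column grid.
const grid = document.createElement('div');
grid.style.display = 'grid';
grid.style.gridTemplateColumns = 'repeat(2, 1fr)';
document.body.appendChild(grid);

const videos = clips.map((src) => {
  const v = document.createElement('video');
  v.src = src;
  v.loop = true;
  v.muted = true; // muted video satisfies browser autoplay policies
  grid.appendChild(v);
  return v;
});

// Clicking the grid plays or pauses all clips together; pausing lets
// you hold and compare frames across shots, as Misek imagined.
grid.addEventListener('click', () => {
  const paused = videos[0].paused;
  videos.forEach((v) => (paused ? v.play() : v.pause()));
});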

I’m excited to experiment with this in classrooms, or as an aid in my own research. And I can imagine that additional development might point to further innovations in modes of looking. For example, what if we made the grid dynamic, such that the clips could be dragged and rearranged? Or added and removed, resized, slowed down or sped up, maybe even superimposed on one another? Of course, many such transformations are already possible within nonlinear digital editing platforms — but there it is only the editing process that is nonlinear, while the operations imagined here become visible only in the outputted products, which are, alas, still linear videos.

Like my original video, Zhang’s new tool might also be “flawed” and in need of further development, but it succeeds in pointing to new ways of looking that go beyond linear forms of film and video and that take fuller advantage of the underlying nonlinearity of digital media. Digital media, I would suggest, are in any case transforming our modes of visual attention, so it seems only right that we should experiment self-reflexively and probe the limits of these new ways of looking.

Sunset with a Sky Background — Screening and discussion on AI Aesthetics with filmmaker J. Makary and respondent Caitlin Chan

On May 7, 2024 (4:30pm in McMurtry 115), the Critical Making Collaborative at Stanford is proud to present a screening of Sunset with a Sky Background, followed by a discussion on AI aesthetics with filmmaker J. Makary and respondent Caitlin Chan.

J. Louise Makary is a filmmaker and Ph.D. candidate in art history specializing in film studies and lens-based art practices. She is interested in using methodologies foundational to the study of cinema, such as psychoanalysis and semiotics, to interpret emergent visual forms of A.I. with film in mind. Her works have been exhibited at ICA Philadelphia, Bauhaus University, the Slought Foundation, Mana Contemporary (Jersey City and Chicago), Human Resources LA, Moore College, SPACES Cleveland, and the Spring/Break Art Show.

Caitlin Chan is a second-year Ph.D. student in art history. She is currently working on a project that historicizes the aesthetics and phenomenology of A.I.-generated images by tracing a genealogy to early 19th-century photographic practices of making and viewership.

“Weaving as Coding: Complexity and Nostalgia” — Hideo Mabuchi at Critical Making Collaborative, March 4, 2024

The Critical Making Collaborative at Stanford proudly presents Hideo Mabuchi, Professor of Applied Physics and Denning Family Director of the Stanford Arts Institute, for a presentation titled “Weaving as Coding: Complexity and Nostalgia.” The presentation will take place on Monday, March 4 (12:30-2:00pm in the McMurtry Building, room 370). All are welcome!

In Hideo’s words:

Weaving as Coding: Complexity and Nostalgia

Textiles are cultural objects that organically support nested layers of coding.  In this talk I’ll first illustrate what I mean by this with brief examples borrowed from papers in anthropology and media studies, and then discuss a small textile piece I recently wove on an eight-shaft table loom.  My piece employs a traditional block draft (Bronson spot lace) and weft-faced weaving to mimic the appearance of a seven-segment numeral display, as can be found in common LED alarm clocks, and spells out the “calculator word” h-E-L-L-0 as the upside-down view of the digit string 07734.  To complete the arc of the story I’ll offer a semantic mash-up of Boymian reflective nostalgia with the information-theoretic concept of algorithmic complexity, and argue on this basis that hand-weaving offers a rich paradigm for critical making that undermines framings of generative AI as a tool that augments human creativity.

As a quantum physicist devoted to the traditional crafts of ceramics and weaving, I live a kind of spiral between abstraction and materiality that keeps me dithering over what it means to know something.  I profess this equivocation in my teaching, which increasingly looks to the humanities for help in relativizing rigorous thought and embodied understanding.  The project I’ll discuss grew out of class prep for teaching APPPHYS100B “The Questions of Cloth: Weaving, Pattern Complexity, and Structures of Fabrics”, but I’ve only picked up on its critical making aspect as a result of things I learned while co-teaching ARTHIST284/484 “Material Metonymy: Ceramics and Asian America” with Marci Kwon.
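Incidentally, the “calculator word” trick Hideo describes is easy to verify in code: flipping a seven-segment display reverses the digit order and turns certain digits into letter shapes. A minimal JavaScript illustration:

// Upside-down readings of seven-segment digits (partial mapping).
const upsideDown = { '0': 'O', '1': 'I', '3': 'E', '4': 'h', '7': 'L' };

const word = '07734'
  .split('')
  .reverse()                 // flipping the display reverses the digit order...
  .map((d) => upsideDown[d]) // ...and turns each digit into a letter shape
  .join('');

console.log(word); // "hELLO"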

Cuerpos Post-Cinemáticos — Spanish translation of Post-Cinematic Bodies

I was pleasantly surprised to receive a copy of Cuerpos Post-Cinemáticos, a Spanish translation of Post-Cinematic Bodies, in the mail today — especially surprised since I had no idea it was being made!

Zenaida Osorio, a professor in the School of Graphic Design at the Universidad Nacional de Colombia in Bogotá, undertook the project with her students as a sort of critical making project. They are open about the fact that they used ChatGPT and DeepL to make the first pass at translating the open-access text, but a team of 11 students (Alejandro Guerrero, David Inagán, Natalia Correa, Natalia Montaña, Natalie Martin, Roxana Ayala, Selina Ojeda, Sofía Bernal, Santiago Narváez, William Camacho, and Wilmer Casallas) then revised and corrected the translation. The whole team added a glossary of technical terms, a commentary (in English and Spanish) before each chapter and at the end of the book, and a set of QR-code–activated “visual comments”: wonderfully designed objects that link the ideas of the book to the students’ lived experience in Bogotá. They also sent me printed copies of these beautiful objects. The final product is finely crafted.

My family and I had the honor to spend a week in Bogotá at Professor Osorio’s invitation back in 2019, where I saw first-hand the amazing work that she and her students are doing there. It was a truly memorable week, which I often look back on fondly, and I hope to return there again someday. Today, I am very touched by this wonderful and unexpected gift!

“Why Are Things So Weird?” — Kevin Munger on Flusser’s COMMUNICOLOGY

I just stumbled upon this interesting-looking video response from Kevin Munger to Vilém Flusser’s Communicology: Mutations in Human Relations?, which appeared in the “Sensing Media” book series that I edit with Wendy Chun for Stanford University Press.

The above (posted on Twitter here) is an excerpt of a longer video that is accessible if you subscribe to the New Models Patreon or Substack. I haven’t subscribed, so I’m not 100% sure what to expect, but it looks provocative!

Post-Cinematic Bodies book launch — write-up in the Stanford Daily (and an AI-generated knock-off?)

Yesterday, The Stanford Daily ran an article by student reporter Joshua Kim about the book launch of Post-Cinematic Bodies, which you can find here. Interestingly, it seems that the article was immediately picked up, processed with AI (I can only assume), and (re)published in machinically modified form, complete with a listicle-like FAQs section, by a certain “Simon Smith,” on a website illustrated exclusively with AI-generated images. Welcome, as Matthew Kirschenbaum writes, to the Textpocalypse!

Two Events on AI and Critical Making

I am happy to announce this year’s first two events of the Critical Making Collaborative at Stanford. Both events focus on critical and self-reflexive uses of AI at the intersection of theory and practice.

The first event, on Friday, October 13 (12-2pm in the McMurtry Building, room 360), includes a screening of Carlo Nasisse’s short film “Uncanny Earth.” In this film — which is equally about technology, ecology, and human and nonhuman agency — an AI attempts to tell a story about the earth and its inhabitants. Following the screening, we will join the filmmaker to discuss the film and the many issues it raises for working and thinking critically with AI.

Carlo Nasisse is a director and cinematographer. His work has been featured in the New Yorker and on PBS, and has screened at SXSW, Slamdance, and the New Orleans Film Festival. His most recent short film, “Direcciones,” won the Golden Gate Award for Best Documentary Short at the San Francisco Film Festival. He is currently completing his MFA at Stanford University.

RSVPs to shane.denson@stanford.edu are appreciated, though not required, so that I can get a rough headcount for refreshments.

The second event, on Friday, November 3 (4:30pm, location TBA), will feature Prof. Matt Smith and his wonderfully weird graphic novel remix of Nietzsche’s “On Truth and Lies in a Nonmoral Sense,” composed in awkward and agonistic collaboration with the AI graphics engine Midjourney — it may be humanity’s last artwork!

Matthew Wilson Smith is Professor of German Studies and of Theater and Performance Studies at Stanford. His interests include modern theatre and relations between science, technology, and the arts. His book The Nervous Stage: 19th-century Neuroscience and the Birth of Modern Theatre (Oxford, 2017) explores historical intersections between theatre and neurology and traces the construction of a “neural subject” over the course of the nineteenth century. It was a finalist for the George Freedley Memorial Award of the Theater Library Association. His previous book, The Total Work of Art: From Bayreuth to Cyberspace (Routledge, 2007), presents a history and theory of attempts to unify the arts; the book places such diverse figures as Wagner, Moholy-Nagy, Brecht, Riefenstahl, Disney, Warhol, and contemporary cyber-artists within a coherent genealogy of multimedia performance.  He is the editor of Georg Büchner: The Major Works, which appeared as a Norton Critical Edition in 2011, and the co-editor of Modernism and Opera (Johns Hopkins, 2016), which was shortlisted for an MSA Book Prize. His essays on theater, opera, film, and virtual reality have appeared widely, and his work as a playwright has appeared at the Eugene O’Neill Musical Theater Conference, Richard Foreman’s Ontological-Hysteric Theater, and other stages. He previously held professorships at Cornell University and Boston University as well as visiting positions at Columbia University and Johannes Gutenberg-Universität (Mainz).

Streaming Mind, Streaming Body

A short text of mine titled “Streaming Mind, Streaming Body” was recently published online at In Media Res as part of a theme week on “The Contemporary Streaming Style II.” The piece connects reflections stemming from Bernard Stiegler’s philosophy of media to recent body-oriented streaming platforms like Peloton.

You can find my piece here and the rest of the theme week (with contributions from Neta Alexander, Ethan Tussey, Carol Vernallis, and Jennifer Barker) here.