The Critical Making Collaborative at Stanford invites you to an evening of sharing and discussion with two recipients of the Critical Making Award, West Montgomery and Lloyd May, who will present their ongoing work in opera and haptic art on Friday, Nov. 22, at 5:00pm, at the CCRMA Stage (3rd floor).
Democratizing Vibrations – Lloyd May (Music Technology)
What would it mean to put vibration and touch at the center of a musical experience? What should devices used to create and experience vibration-based art (haptic instruments) look and feel like? These questions are at the core of the Musical Haptics project that aims to co-design haptic instruments and artworks with D/deaf and hard-of-hearing artists.
Opera Machine – Westley Montgomery (TAPS)
Opera Machine is a work-in-process exploring music, measurement, and the sedimentation of culture in the bodies of performers. How does the cultural legacy of opera reverberate in the present day? How have the histories of voice-science, race “science,” and the gendering of the body co-produced pedagogies and styles of opera performance? What might it look like (sound like) to resist these histories?
We’re pleased to announce our first event of the 24-25 academic year. Please join us in welcoming Kartik Nair, who will present “Forms in Motion: Elemental Effects in Contemporary Cinema” on Tuesday, November 12, 5:00-7:00pm PT. The event will take place in the Stanford Humanities Center Board Room, where refreshments will be served. Below you will find the abstract and bio, as well as a poster for circulation. We look forward to seeing you there!
Abstract:
Motion capture is the practice of recording the movements of human bodies and using those movements to animate computer-generated bodies, thereby producing virtual character movement on the screen. Current scholarship on motion capture has critically examined the construction of this technology in trade reportage, industry journalism, and film promotion, detecting a discursive ambivalence arising from a struggle for recognition between live actors and motion capture technicians over the future of film performance. This talk will use motion capture as a heuristic for understanding the many other kinds of human movement that are being captured in the processes of digital image-making. I will track the pipeline of atmospheric effects, which are ubiquitous in contemporary blockbuster cinema: dust, fire, smoke, light, water, and other particulates proliferate in the mise-en-scène, helping to ground impossible worlds even as they fascinate us with their own expressive qualities. Replacing the logic of photographic capture with one in which the frame is a ‘blank canvas’ to which elements are selectively added, such atmospheric effects vividly attest to the claim that digital tools have re-linked filmmaking with painting. Yet, unlike the painted canvas, which preserves brushstrokes in frozen perpetuity, virtual effects inscribe a trace of and in motion: these are instances in which the creative and corporeal motion of visual effects artists is captured and conveyed as motion. This process unfolds along a transnational path: even as those generating the mobile trace may remain immobilized by visa regulations, server locations, and time-zone differentials, their physical moves are eventually ex-propriated and assimilated into screen movement. Closely read, then, the spectacular conventions of blockbuster cinema can become legible as archives in and of motion.
Bio:
Kartik Nair is a film scholar working at the intersection of transnational cinema, film historiography, materialist media theory, and infrastructure studies, with a focus on popular genres and South Asian cinema. His first book, Seeing Things, is about the production and circulation of low-budget horror films in 1980s India. His current research explores the physical pipelines of digital cinema. He is an Assistant Professor of Film Studies at Temple University in Philadelphia, and one of the core editors of BioScope: South Asian Screen Studies.
This event has been generously co-sponsored by the Department of Art & Art History and the Stanford Center for South Asia.
Bovine! is part of GlitchesAreLikeWildAnimalsInLatentSpace!, an ongoing series of AI, generative video, and painting works. Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, the series stages an encounter between human imagination and automated image-making.
The above video is a screen recording of a real-time, generative/combinatory video. There are currently two versions:
Bovine.app displays generative text over combinatory video, all composited in real time. It is mathematically possible but vanishingly unlikely that the same combination of image, sound, and text will ever repeat.
Bovine-Video-Only.app removes the text and text-to-speech elements and features only generative audio and video, assembled at random from five cut-up versions of a single video and composited in real time.
The underlying video was generated in part with RunwayML (https://runwayml.com). Karin’s glitch paintings (https://karindenson.com) were used to train a model for image generation.
Prompting the model with terms like “Glitches are like wild animals” (a phrase she has worked with for years, originally found in an online glitch tutorial that has since gone offline), while trying to avoid the usual suspects (lions, tigers, zebras), produced a glitchy cow, which Karin painted in acrylic on canvas:
The painting was fed back into RunwayML as the seed for a video clip (using Gen-2 in spring/summer 2024), which was then extended a number of times. The resulting video was glitched using databending methods (in Audacity). The soundtrack was produced by feeding a jpg of the original cow painting into Audacity as raw data and interpreting it with the GSM codec. After audio and video were assembled, the glitchy video was played back and captured with VLC and QuickTime, each of which interpreted the video differently. The two versions were composited together, revealing delays, hesitations, and lapses of synchronization.
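For readers curious about the databending step, here is a minimal Python sketch of the general idea: an image file’s raw bytes are reinterpreted as audio samples. This is an illustration only; the piece itself used Audacity’s raw-data import with the GSM codec, and the file names below are hypothetical.

```python
# Databending sketch: read an image file's bytes and wrap them,
# unchanged, in a WAV header so they play back as 8-bit audio.
# (Illustrative stand-in for Audacity's raw-data import; the actual
# piece interpreted the bytes with the GSM codec instead.)
import wave

def sonify_image(image_path, wav_path, sample_rate=8000):
    with open(image_path, "rb") as f:
        data = f.read()
    with wave.open(wav_path, "wb") as w:
        w.setnchannels(1)            # mono
        w.setsampwidth(1)            # 8-bit unsigned PCM
        w.setframerate(sample_rate)  # playback rate changes the pitch/texture
        w.writeframes(data)          # image bytes heard as sound

sonify_image("cow_painting.jpg", "cow_painting.wav")  # hypothetical file names
```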
The full video was then cropped to produce five different strips. The audio of each strip was positioned accordingly in stereo space (i.e., the left-most strip has its audio panned all the way to the left, the next one over sits halfway between left and center, the middle one is in the center, etc.). For each strip, the Max app randomly chooses a playback start point from a set of predetermined cues, keeping the overall image more or less in sync.
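In outline, the compositing logic might look something like the following sketch (a Python stand-in, not the actual Max patch; the cue points are made-up placeholders):

```python
# Strip-compositing sketch: five strips panned evenly from left to
# right, each assigned a randomly chosen, pre-aligned start cue.
import random

NUM_STRIPS = 5
CUE_POINTS = [0.0, 12.5, 31.0, 54.2, 78.9]  # hypothetical start times (seconds)

def plan_playback():
    plan = []
    for i in range(NUM_STRIPS):
        # Evenly spaced pan positions: -1.0 (hard left) through +1.0 (hard right)
        pan = -1.0 + 2.0 * i / (NUM_STRIPS - 1)
        # Cues are pre-aligned, so randomly chosen starts stay roughly in sync
        start = random.choice(CUE_POINTS)
        plan.append({"strip": i, "pan": round(pan, 2), "start": start})
    return plan

print(plan_playback())
```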
Onscreen and spoken text is generated by a Markov model trained on Shane’s book Discorrelated Images (https://www.dukeupress.edu/discorrelated-images), the cover of which featured Karin’s original GlitchesAreLikeWildAnimals! painting.
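A word-level Markov chain of the kind described can be sketched in a few lines of Python; the order and tokenization of the actual model aren’t specified, so treat this as one plausible construction:

```python
# Minimal word-level Markov text generator: map each n-gram to the
# words observed to follow it, then sample a random walk through the map.
import random
from collections import defaultdict

def build_chain(text, order=2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=50):
    state = random.choice(list(chain))  # random starting n-gram
    out = list(state)
    for _ in range(length):
        options = chain.get(tuple(out[-order:]))
        if not options:  # dead end: no observed continuation
            break
        out.append(random.choice(options))
    return " ".join(out)

# Hypothetical usage, with corpus.txt standing in for the book's text:
# chain = build_chain(open("corpus.txt").read())
# print(generate(chain))
```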
Lever Press has an exciting new book series — [film|minutes] — edited by Bernd Herzogenrath (Goethe Universität Frankfurt am Main), and I am delighted to be writing a book on Bride of Frankenstein for it! Stay tuned!
On Sunday, June 16, it was my honor to “hood” Hank Gerba, who earned their doctoral degree in Art History with a concentration in Film & Media Studies, and to make the following remarks at our departmental commencement ceremony:
Hank Gerba’s dissertation, “Digital Disruptions: Moiré, Aliasing, and the Stroboscopic Effect,” is exceptional in a number of ways. While it is a work primarily in media theory, or even media philosophy (itself an exceptional or eccentric subfield within film and media studies), the dissertation stages its argument through close engagement with a range of digital processes and devices, as well as with analog artworks. Accordingly, it is exceptionally well suited to an interdisciplinary graduate program like ours, where students like Hank graduate with a PhD in Art History but can also specialize in Film & Media Studies, and where all of our students are expected to gain some acquaintance with both fields.
Still, it is rare for a dissertation to be so agnostic about traditional disciplinary divisions and to speak across boundaries in a way that goes straight to their root: in this case, straight to the root of aesthetic and technical processes or forms. The division between technology or the technical, on the one hand, and art or the aesthetic, on the other, is a fairly recent invention, just a little over 200 years old. It is only in the wake of this split that we can speak of the supposedly disparate fields of art history and media studies. Exceptionally, Hank’s dissertation challenges that split, along with a number of other more or less conventional categorizations. And it does so by foregrounding a number of exceptional phenomena: the shimmer of moiré silk, pixelated appearances on computer screens, and the stroboscopic flicker of film—“digital disruptions,” in the terms of the dissertation’s title, that occur when digital and analog logics come into conflict with one another: when blocky grids clash with smooth contours, or when a film’s discrete frames line up with the movement of wagon wheels or helicopter blades so that they seem to stand still.
In framing the project this way, however, Hank not only challenges disciplinary boundaries but significantly relocates the digital/analog divide. The digital is not just about computers but applies also to the clashing grids of watered silk, which give rise to the analog shimmer that we see in the fabric and in artworks made with it. Digital and analog come to name not particular types of technologies or media, but fundamental modes of organizing aesthetic experience itself. This is an important media-philosophical argument, and it lays important groundwork for thinking about the ways that contemporary media, such as AI, are actively transforming our visual and aesthetic worlds.
I’ll just mention, finally, that Hank is the first student for whom I served as primary advisor, and the first PhD student whose progress I have accompanied all the way from admission to the program to the defense of their dissertation. So it kind of feels like I’m graduating today as well. But it’s Hank who did the work and in many respects surpassed their mentor. I am grateful to have learned from Hank, both through their scholarship and through their work across the university, including at the Digital Aesthetics Workshop at the Stanford Humanities Center, where we have collaborated for several years. Hank is graduating with the Christopher Meyer Prize, one of the highest honors that we can bestow on graduating PhD students, in recognition not only of excellent scholarship but also of outstanding service to the departmental and university community.
Thanks to Piero Scaruffi for inviting me to present at the Leonardo Art Science Evening Rendezvous (LASER) series here at Stanford last night, alongside Virginia San Fratello, Fiorenza Micheli, and Tom Mullaney. It was a great conversation, with lots of unexpected resonances!
The new issue of Cinephile, the University of British Columbia’s film and media journal, is just out. The theme of the issue is “(Un)Recovering the Future,” and it’s all about nostalgia, malaise, history, and (endangered) futurities.
In this context, I am happy to have contributed a piece called “Artificial Imagination” on the relation between AI and (visual) imagination. The essay lays some of the groundwork for a larger exploration of AI and its significance for aesthetics in both the broad and narrow senses of the word. It follows from the emphasis on embodiment in my essay “From Sublime Awe to Abject Cringe: On the Embodied Processing of AI Art,” recently published in the Journal of Visual Culture, and belongs to a larger book project tentatively titled Art & Artificiality, or: What AI Means for Aesthetics.
Thanks very much to editors Will Riley and Liam Riley for the invitation to contribute to this issue!
I’m excited to report that the Digital Aesthetics Workshop at the Stanford Humanities Center has been renewed for another year — our seventh! Next year’s graduate student co-chairs are Grace Han and Rebecca Turner!
Congratulations to Bernard Dionysus Geoghegan, who has been elected new Co-Chair of the SCMS Philosophy & Theory SIG! And thanks to Will Brown for running in the election. It was a very tight race, and we were extremely proud to present the SIG with two amazing candidates.
It has been an honor to serve in the role of Co-Chair for the past three years, first alongside Co-Chair Victor Fan and Secretary John Winn, and more recently Co-Chair Deborah Levitt and Secretary Hank Gerba. It has been a pleasure working with them all.
I look forward to seeing where Bernie, Deborah, and Hank take the SIG in the coming years!
Back in 2016, my experimental video essay “Don’t Look Now: Paradoxes of Suture” was published in the open access journal [in]Transition: Journal of Videographic Film and Moving Image Studies. This was an experiment with the limits of the “video essay” form, and a test to see whether it could accommodate non-linear and interactive forms (produced with some very basic JavaScript and HTML/CSS so as to remain accessible and viewable even as web infrastructures change). Seeing as the interactive video essay was accepted and published in a peer-reviewed journal devoted, for the most part, to more conventional linear video essays, I considered the test passed. (However, since the journal recently moved to a new hosting platform with the Open Library of Humanities, the interactive version is no longer included directly on the site, which instead links to my own self-hosted version here.)
But even if the test was passed in terms of publication, the peer reviewers noted that the experiment was not altogether successful. Richard Misek called the piece “flawed,” though he qualified that “the work’s limitations are integral to its innovation.” The innovation, according to Misek, was to point toward a new way of looking and of doing close analysis:
“Perhaps one should see it not as a self-contained video essay but as a walk-through of an early beta of an app for viewing and manipulating video clips spatially. Imagine, for example… The user imports a scene. The app then splits it into clips and linearly spatializes it, perhaps like in Denson’s video. Each clip can then be individually played, looped, or paused. For example, the user can scroll to, and then pause, the in points or out points for each clip; or just play two particular shots simultaneously and pause everything else. Exactly how the user utilizes this app depends on the film and what they hope to discover from it. The very process of doing this, of course, may then also reveal previously unnoticed themes, patterns, or equivalences. Such a platform for analyzing moving images could hugely facilitate close formal analysis. I imagine a moving image version of Warburg’s Mnemosyne Atlas – a wall (/ screen) full of images, all existing in spatial relation with each other, and all in motion; a field of connections waiting to be made.
“In short, I think this video points towards new methods of conducting close analysis rather than new methods of presenting it. In my view, the ideal final product would not be a tidied-up video essay but an app. I realize that, technically and conceptually, this is asking a lot. It would be a very different, and much larger project. For now, though, this video provides an inspiring demo of what such an app could help film analysts achieve.”
Fast-forward eight years, to a short article on “Five Video Essays to Close Out May,” published on May 28, 2024 in Hyperallergic. Here, author Dan Schindel includes a note about an open-source and open-access tool, the Interactive Video Grid by Quan Zhang, that is inspired by my video essay and aims to realize a large part of the vision laid out by Misek in his review. As one of two demos of the tool, which allows users to create interactive grids of video clips for close and synchronous analysis, Zhang even includes “Don’t Look Now: Paradoxes of Suture. A Reconfiguration of Shane Denson’s Interactive Video Essay.”
I’m excited to experiment with this in classrooms, or as an aid in my own research. And I can imagine that additional development might point to further innovations in modes of looking. For example, what if we made the grid dynamic, such that the clips could be dragged and rearranged? Or added and removed, resized, slowed down or sped up, maybe even superimposed on one another? Of course, many such transformations are already possible within nonlinear digital editing platforms — but there it is only the editing process that is nonlinear, while the operations imagined here become visible only in the output products, which are, alas, still linear videos.
Like my original video, Zhang’s new tool may be “flawed” and in need of further development, but it succeeds in pointing to new ways of looking that go beyond linear forms of film and video and that take fuller advantage of the underlying nonlinearity of digital media. The latter, I would suggest, are in any case transforming our modes of visual attention, so it seems only right that we should experiment self-reflexively and probe the limits of these new ways of looking.