Video: Post-Cinema and/as Speculative Media Theory, Part 4: Adrian Ivakhiv, “Speculative Ecologies of (Post)Cinema” — #SCMS15

Above, Adrian Ivakhiv’s talk “Speculative Ecologies of (Post)Cinema, or, The Art of Morphogenesis: Cinema in & beyond the Capitalocene” — the fourth of five videos documenting the “Post-Cinema and/as Speculative Media Theory” panel I chaired on March 27, 2015 at the Society for Cinema and Media Studies annual conference in Montreal.

See here for more information and a general introduction to the panel.

Up next: Mark B. N. Hansen. Stay tuned!

Audiovisualities Lab — Film Screening and Project Showcase


On April 8, 2015, I will be participating in this event, hosted by the Duke Audiovisualities Lab. During the “project showcase” portion of the event, several of the people involved in Bill Seaman and John Supko’s Generative Media Authorship seminar — including Eren Gumrukcuoglu, Aaron Kutnick, and myself — will be presenting generative works. I will be showing some of the databending/glitch-video work I’ve been doing lately (see, for example, here and here). Refreshments and drinks will be served!

Video: Post-Cinema and/as Speculative Media Theory, Part 2: Steven Shaviro, “The Rhythm-Image” — #SCMS15

Above, Steven Shaviro’s talk “The Rhythm-Image” — the second of five videos documenting the “Post-Cinema and/as Speculative Media Theory” panel I chaired on March 27, 2015 at the Society for Cinema and Media Studies annual conference in Montreal.

See here for more information and a general introduction to the panel.

Up next: Patricia Pisters. Stay tuned!

Video: Post-Cinema and/as Speculative Media Theory, Part 1 — #SCMS15

Above, the first of five videos documenting the “Post-Cinema and/as Speculative Media Theory” panel I chaired on March 27, 2015 at the Society for Cinema and Media Studies annual conference in Montreal.

The room was jam-packed with people (as you can see in the images here), and the panel was equally jam-packed with dense theoretical discussions of post-cinema. These videos are meant to compensate for the limited space and the limited time (and cognitive capacity) to process these thinkers’ ideas on the spot, preserving and opening their presentations to a wider audience.

This, the shortest of the videos, contains my introductory remarks outlining my overall rationale for organizing the panel.

Stay tuned for videos of talks by Steven Shaviro, Patricia Pisters, Adrian Ivakhiv, and Mark B. N. Hansen, which will be appearing here in the coming weeks (if you like, you can subscribe to the blog to make sure you don’t miss them; see the link on the upper right-hand side of the screen). In the meantime, you can read their abstracts here:

Steven Shaviro, “Reversible Flesh”

Patricia Pisters, “The Filmmaker as Metallurgist: Post-Cinema’s Commitment to Radical Contingency”

Adrian Ivakhiv, “Speculative Ecologies of (Post-)Cinema”

Mark B. N. Hansen, “Speculative Protention, or, Are 21st Century Media Agents of Futurity?”

Finally, you can look forward to contributions by all four speakers (and many more as well) to the open-access collection Post-Cinema: Theorizing 21st-Century Film, which I am currently co-editing with Julia Leyda for REFRAME Books (and which will be coming out later this year).

Frankenstein 1910 Glitch Mix

Video meditation inspired by the final paragraph of my book Postnaturalism:

Recoding our perceptions of the Frankenstein film, including even our view of Karloff’s iconic monster as the “original” of its type, Edison’s Frankenstein joins the ranks of the Frankenstein film series, now situating itself at our end rather than at the beginning of that series’ history. Now, prospering among the short clips of YouTube, where it is far more at home than any of the feature films ever could be, the Edison monster becomes capable again of articulating a “medial” narrative—a tale told from a middle altitude, from a position half-way between the diegetic story, on the one hand, of the monster’s defeat by a Frankenstein who grows up and “comes to his senses” and, on the other hand, a non-diegetic, media-historical metanarrative that, in contrast to the story of medial maturation it encoded in 1910, now articulates a tale of visual media’s currently conflicted state, caught between historical specificity and an eternal recurrence of the same. The monster’s medial narrative communicates with our own medial position, mediates possible transactions in a realm of experimentation, in which human and nonhuman agencies negotiate the terms of their changing relations. With its digitally scarred body, pocked by pixels and compression “artifacts,” the century-old monster opens a line of flight that, if we follow it, might bring us face to face with the molecular becoming of our own postnatural future.

glitchesarelikewildanimals!

Sketch for a multi-screen video installation, which I’ll be presenting and discussing alongside some people doing amazing work in connection with John Supko & Bill Seaman’s Emergence Lab and their Generative Media seminar — next Thursday, February 26, 2015 at the Duke Media Arts + Sciences Rendezvous.

For more about the theory and process behind this piece, as well as the inspiration for the title, see my previous post “The Glitch as Propaedeutic to a Materialist Theory of Post-Cinema.”

The Glitch as Propaedeutic to a Materialist Theory of Post-Cinematic Affect

In some ways, the digital glitch might be seen as the paradigmatic art form of our convergence culture — where “convergence” is understood more in the sense theorized by Friedrich Kittler than that proposed by Henry Jenkins. That is, glitches speak directly to the interchangeability of media channels in a digital media ecology, where all phenomenal forms float atop an infrastructural stream of zeroes and ones. They thrive upon this interchangeability, while they also point out to us its limits. Indeed, such glitches are most commonly generated by feeding a given data format into the “wrong” system — into a piece of software that wasn’t designed to handle it, for example — and observing the results. Thus, such “databending” practices (knowledge of which circulates among networks of actors constituting a highly “participatory culture” of their own) expose the incompleteness of convergence, the instability of apparently “fixed” data infrastructures as they migrate between various programs and systems for making that data manifest.

As a result, the practice of making glitches provides an excellent praxis-based propaedeutic to a materialist understanding of post-cinematic affect. They magnify the “discorrelations” that I have suggested constitute the heart of post-cinematic moving images, providing a hands-on approach to phenomena that must seem abstract and theoretical. For example, I have claimed:

CGI and digital cameras do not just sever the ties of indexicality that characterized analogue cinematography (an epistemological or phenomenological claim); they also render images themselves fundamentally processual, thus displacing the film-as-object-of-perception and uprooting the spectator-as-perceiving-subject – in effect, enveloping both in an epistemologically indeterminate but materially quite real and concrete field of affective relation. Mediation, I suggest, can no longer be situated neatly between the poles of subject and object, as it swells with processual affectivity to engulf both.

Now, I still stand behind this description, but I acknowledge that it can be hard to get one’s head around it and to understand why such a claim makes sense (or makes a difference). It probably doesn’t help (unless you’re already into that sort of thing) that I have had recourse to Bergsonian metaphysics to explain the idea:

The mediating technology itself becomes an active locus of molecular change: a Bergsonian body qua center of indetermination, a gap of affectivity between passive receptivity and its passage into action. The camera imitates the process by which our own pre-personal bodies synthesize the passage from molecular to molar, replicating the very process by which signal patterns are selected from the flux and made to coalesce into determinate images that can be incorporated into an emergent subjectivity. This dilation of affect, which characterizes not only video but also computational processes like the rendering of digital images (which is always done on the fly), marks the basic condition of the post-cinematic camera, the positive underside of what presents itself externally as a discorrelating incommensurability with respect to molar perception. As Mark Hansen has argued, the microtemporal scale at which computational media operate enables them to modulate the temporal and affective flows of life and to affect us directly at the level of our pre-personal embodiment. In this respect, properly post-cinematic cameras, which include video and digital imaging devices of all sorts, have a direct line to our innermost processes of becoming-in-time […].

I have, to be sure, pointed to examples (such as the Paranormal Activity and Transformers series of films) that illustrate or embody these ideas in a more palpable, accessible form. And I have indicated some of the concrete spaces of transformation — for example, in the so-called “smart TV”:

today the conception of the camera should perhaps be expanded: consider how all processes of digital image rendering, whether in digital film production or simply in computer-based playback, are involved in the same on-the-fly molecular processes through which the video camera can be seen to trace the affective synthesis of images from flux. Unhinged from traditional conceptions and instantiations, post-cinematic cameras are defined precisely by the confusion or indistinction of recording, rendering, and screening devices or instances. In this respect, the “smart TV” becomes the exemplary post-cinematic camera (an uncanny domestic “room” composed of smooth, computational space): it executes microtemporal processes ranging from compression/decompression, artifact suppression, resolution upscaling, aspect-ratio transformation, motion-smoothing image interpolation, and on-the-fly 2D to 3D conversion. Marking a further expansion of the video camera’s artificial affect-gap, the smart TV and the computational processes of image modulation that it performs bring the perceptual and actional capacities of cinema – its receptive camera and projective screening apparatuses – back together in a post-cinematic counterpart to the early Cinématographe, equipped now with an affective density that uncannily parallels our own. We don’t usually think of our screens as cameras, but that’s precisely what smart TVs and computational display devices in fact are: each screening of a (digital or digitized) “film” becomes in fact a re-filming of it, as the smart TV generates millions of original images, more than the original film itself – images unanticipated by the filmmaker and not contained in the source material. To “render” the film computationally is in fact to offer an original rendition of it, never before performed, and hence to re-produce the film through a decidedly post-cinematic camera. 
This production of unanticipated and unanticipatable images renders such devices strangely vibrant, uncanny […].

Recent news about Samsung’s smart TVs eavesdropping on our conversations may have made those devices seem even more uncanny than when I first wrote the lines above, but this, I have to admit, is still a long way from impressing the theory of post-cinematic transformation on my readers in anything like a materially robust or embodied manner — though I am supposedly describing changes in the affective, embodied parameters of life itself.

Hence my recourse to the glitch, and to the practice of making glitches as a means for gaining first-hand knowledge of the transformations I associate with post-cinema. In lieu of another argument, then, I will simply describe the process of making the video at the top of this blog post. It is my belief that going through this process gave me a deeper understanding of what, exactly, I was pointing to in those arguments; by way of extension, furthermore, I suggest that following these steps on your own will similarly provide insight into the mechanisms and materialities of what, following Steven Shaviro, I have come to refer to as post-cinematic affect.

The process starts with a picture — in this case, a jpeg image taken by my wife on an iPhone 4S:


Following this “Glitch Primer” on editing images with text editors, I began experimenting with ImageGlitch, a nice little program that opens the image as editable text in one pane and immediately updates the visual changes to the image in another. (The changes themselves can be made with any ordinary plain-text editor, but ImageGlitch gives you a little more control in the form of immediate feedback.)


I began inserting the word “postnaturalism” into the text at random places, thus modifying the image’s data infrastructure. By continually breaking and unbreaking the image, I began to get a feel for the picture’s underlying structure. Finally, when I had destroyed the image to my liking, I decided that it would be more interesting to capture the process of destruction/deformation, as opposed to a static product resulting from it. Thus, I used ScreenFlow to capture a video of my screen as I undid (using CMD-Z) all the changes I had just made.
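For readers who would rather script the intervention than hand-edit, the same basic gesture can be sketched in a few lines of Python. This is a hypothetical stand-in for what ImageGlitch and a text editor do interactively, not the tool itself: skipping the first few hundred bytes is a rough heuristic for protecting the JPEG header so the file stays openable (it does not actually parse the JPEG markers), and any given insertion may still break the image entirely — which is, of course, part of the point.

```python
import random

def glitch_jpeg(data: bytes, word: bytes = b"postnaturalism",
                n_inserts: int = 5, header_safe: int = 512) -> bytes:
    """Splice `word` into the byte stream at random offsets past the
    (roughly protected) header region."""
    out = bytearray(data)
    for _ in range(n_inserts):
        pos = random.randrange(header_safe, len(out))
        out[pos:pos] = word  # insert the word's bytes mid-stream
    return bytes(out)

# Hypothetical usage with a local photo:
# with open("IMG_6643.jpg", "rb") as f:
#     glitched = glitch_jpeg(f.read())
# with open("IMG_6643_glitched.jpg", "wb") as f:
#     f.write(glitched)
```

Because each run chooses different offsets, every execution "breaks" the image differently — a scripted analogue of the continual breaking and unbreaking described above.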


Because I had made an inordinately large number of edits, this step-wise process of reversing them took eight and a half minutes, resulting in a rather long and boring video. So, in Final Cut Pro, I decided to speed things up a little — by 2000%, to be exact. (I also cropped the frame to show only the image, not the text.) I then copied the resulting 24-second video, pasted it back in after the original, and set it to play in reverse (so that the visible image goes from a deformed to a restored state and back again).

This was a little better, but still a bit boring. What else could I do with it? One thing that was clearly missing was a soundtrack, so I next considered how I might generate one with databending techniques.

Through blog posts by Paul Hertz and Antonio Roberts, I became aware of the possibility of using the open-source audio editing program Audacity to open image files as raw data, thereby converting them into sound files for the purposes of further transformation. Instead of going through with this process of glitching, however, I experimented with opening my original jpeg image in a format that would produce recognizable sound (and not just static). The answer was to open the file with GSM encoding, which gave me an almost musical soundtrack — but a little high-pitched for my taste. (To be honest, it sounded pretty cool for about two seconds, and then it was annoying to the point of almost hurting.) So I exported the sound as an mp3 file, which I imported into my Final Cut Pro project and to which I applied a pitch-shifting filter (turning it down 2400 cents, or two octaves).
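The underlying move — treating an image's bytes as audio samples — can also be sketched in Python, using only the standard library. This is a hypothetical simplification, not what Audacity does: Audacity's GSM import runs the data through an actual speech codec, whereas the sketch below just wraps the raw bytes as plain unsigned 8-bit PCM in a WAV container. Still, it makes the same point, that "sound" here is nothing but a contingent rendering of the data stream.

```python
import wave

def bytes_to_wav(raw: bytes, out_path: str, rate: int = 8000) -> None:
    """Wrap arbitrary bytes as unsigned 8-bit mono PCM inside a WAV container."""
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(1)     # one byte per sample = unsigned 8-bit PCM
        w.setframerate(rate)  # the chosen rate shapes speed and pitch
        w.writeframes(raw)    # the image's bytes become the "audio"

# Hypothetical usage: sonify the source photo
# with open("IMG_6643.jpg", "rb") as f:
#     bytes_to_wav(f.read(), "image_as_sound.wav")
```

Note that the `rate` parameter offers a crude in-script analogue of the pitch shift: quartering the sample rate plays the same bytes back two octaves lower.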

At this point, I could have exported the video and been done with it, but while discovering the wonders of image databending, I ran across some people doing wonderful things with Audacity and video files as well. A tutorial at quart-avant-poing.com was especially helpful, while videos like the following demonstrate the range of possibilities:

https://www.youtube.com/watch?v=eoCvP6mrKQw

So after exporting my video, complete with soundtrack, from Final Cut Pro, I imported the whole thing into Audacity (using A-Law encoding) and exported it back out (again using A-Law encoding), thereby glitching the video further — simply by the act of importing and exporting, i.e. without any intentional act of modification!
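The logic of this "neutral" pass-through can be illustrated without Audacity. A-law is a lossy 8-bit companding codec (ITU-T G.711), so merely decoding and re-encoding a byte stream changes it. The toy sketch below substitutes a crude bit-quantizer for the real companding math — an assumption, not the actual codec — but it demonstrates the same principle: an import/export cycle with no "intentional" edit is already a modification of the data.

```python
def lossy_roundtrip(data: bytes, bits_dropped: int = 3) -> bytes:
    """Toy stand-in for a lossy codec pass: quantize each byte by
    zeroing its low-order bits, as a codec's coarse levels would."""
    return bytes((b >> bits_dropped) << bits_dropped for b in data)

original = bytes(range(256))
passed_through = lossy_roundtrip(original)
# Most bytes come back slightly altered, none by much:
changed = sum(1 for a, b in zip(original, passed_through) if a != b)
```

One difference from real databending practice: this toy quantizer is idempotent (a second pass changes nothing further), whereas chaining real codec round-trips can keep accumulating artifacts.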

I opened the video in VLC and was relatively happy with the results; but then I noticed that other video players, such as QuickTime and QuickTime Player 7, as well as video editing software like Final Cut and Premiere Pro, were all showing something different in their renderings of “the same” data! It was at this point that the connection to my theoretical musings on post-cinematic cameras, smart TVs, and the “fundamentally processual” nature of on-the-fly computational playback began to hit home in a very practical way.

As the author of the quart-avant-poing tutorial put it:

For some reasons (cause players work in different ways) you’ll get sometimes differents results while opening your glitched file into VLC or MPC etc… so If you like what you get into VLC and not what you see in MPC, then export it again directly from VLC for example, which will give a solid video file of what you saw in it, and if VLC can open it but crash while re-exporting it in a solid file, don’t hesitate to use video capture program like FRAPS to record what VLC is showing, because sometimes, capturing a glitch in clean file can be seen as the main part of the job cause glitches are like wild animals in a certain way, you can see them, but putting them into a clean video file structure is a mess.

Thus, I experimented with a variety of ways (and codecs) of exporting (or “capturing”) the video I had seen — a video that proved elusive to my attempts to make it repeatable (and hence visible to others). I went through several iterations of video and audio tracks until I was able to approximate what I thought I had seen and heard. At the end of the process, when I had arrived at the version embedded at the top of this post, I felt like I had more thoroughly probed (though without fully “knowing”) the relations between the data infrastructure and the manifest images — relations that I now saw as more thoroughly material than before. And I came, particularly, to appreciate the idea that “glitches are like wild animals.”

Strange beasts indeed! And when you consider that all digital video files are something like latent glitches — or temporarily domesticated animals — you begin to understand what I mean about the instability and revisability of post-cinematic images: in effect, glitches merely show us the truth about digital video as an essentially generative system, magnifying the interstitial spaces that post-cinematic machineries fill in with their own affective materialities, so that though a string of zeroes and ones remains unchanged as it streams through these systems, we can yet never cross the same stream twice…

Ancillary to Another Purpose

Michel Chion describes causal listening as a mode of attending to sounds in order to identify unique objects causing them; to identify classes of causes (e.g. human, mechanical, animal sources); or to at least ascribe to them a general etiological nature (e.g. “it sounds like something mechanical,” or “something digital,” etc.). “For lack of anything more specific, we identify indices, particularly temporal ones, that we try to draw upon to discern the nature of the cause” (Audio-Vision 27). Lacking any more concrete clues, we can, in this mode, trace the “causal history of a sound” even without knowing the sound’s cause.

As is already clear, there is a complex interplay between states of knowing and non-knowing in Chion’s description of listening modes, and this epistemological problematic is intimately tied to questions of visibility and invisibility. In other words, seeing images concomitant with sounds suggests to us causal relations, but these suggestions can be highly misleading – as they usually are in the case of the highly constructed soundscapes of filmic productions. This is why reduced listening – “the listening mode that focuses on the traits of the sound itself, independent of its cause and of its meaning” (29) – has often been associated with “acousmatic listening” (in which the putative causal relations are severed by listening blind, so to speak, i.e. without any accompanying images). As a form of phenomenological bracketing, acousmatic listening seeks to place us in a state of non-knowing with relation to causes and their visual cues, thus helping us to attend to “sound—verbal, played on an instrument, noises, or whatever—as itself the object […] instead of as a vehicle for something else” (29). In such a situation, we can focus on the sound’s “own qualities of timbre and texture, to its own personal vibration” (31) – or so it would seem.

Acousmatic listening is a path to reduced listening, perhaps, but only after an initial intensification of causal listening that occurs (that is, we try even harder to “see” the causes when we can’t simply look at them). In some respect, then, knowing the causes to begin with can actually help overcome this problem, allowing us to stop focusing on the question of causality so that we can more freely “explore what the sound is like in and of itself” (33).

I describe these complexities of Chion’s listening modes because they neatly summarize the complexities of my own experience of constructing, listening to, and experiencing a sound montage. This montage, which runs 2 minutes and 21 seconds in total, is constructed from found materials, all of which were collected on YouTube. The process of collection was guided by only very loose criteria – I was interested in finding sonic materials that are related in some way to changing human-technological relations. I thought of transitions from pre-industrial to industrial to digital environments and sought to find sounds that might evoke these (along with broader contrasts, real or imagined, between nature and culture, human and nonhuman, organic and technical).

The materials collected are: an amateur video of a “1953 VAC CASE tractor running a buzz saw” (http://youtu.be/wrGdgjoUJSg); an excerpt from the creation sequence in James Whale’s 1931 Frankenstein (http://youtu.be/8H3dFh6GA-A); “Sounds of the 90’s – old computer and printer starting up,” which shows a desktop computer ca. 1993 booting Windows 95 (http://youtu.be/JpSfgusep7s); a full-album pirating of Brian Eno’s classic album Ambient 1 – Music for Airports from 1978, set against a static image of the album cover (http://youtu.be/5KGMo9yOaSU); “1 Hour of Ambient Sound From Forbidden Planet: The Great Krell Machine (1956 Mono)” (http://youtu.be/0nt7q5Rw-R8); and “Leslie S5T-R train horn & Doran-Cunningham 6-B Ship horn,” an amateur demonstration of these two industrial horns, installed in a car and blasted in an empty canyon (http://youtu.be/cjyUfV3W5zk).

The fact that these sound materials were culled from a video-sharing site had implications for the epistemological/phenomenological situation of listening. In the case of the tractor, the old computer, and the industrial horns, the amateur nature of the video emphasized a direct causal link; presumably, a viewer of the video “knows” exactly what is causing the sounds. The situation is more complicated in the other three sources. The Eno album is the only specifically musical source selected; and while it is recognizably “musical,” in that musical instruments are identifiable as causes of sounds, the ambient nature of the music is itself designed to problematize causal listening and to open the very notion of the sound object to include the chance sounds that accompany its audition. Nevertheless, finding the object on YouTube, where it is attributed to Eno as an album, and where the still video track of the album cover objectifies the sound as an encapsulated and specifically musical product, reinforces a different level of causal indexing. Similarly, the ambient sounds from Forbidden Planet might be extremely difficult to identify without the attribution and the still video image on YouTube; with them, a very general causal relation (it sounds like the rumble of a space ship, for example) is established – despite the fact that the real sound sources, the production processes involved in the studio film’s soundtrack, are obscured. The sounds from Frankenstein, from which all dialogue has been omitted, seem even more causally determinate: the video shows us lightning flashes and technical apparatuses emitting sounds. 
Especially in this case, but to varying degrees in all of them, knowing where these sounds come from makes it hard to put aside putative causal knowledge, to reduce the sounds phenomenally to their sonic textures, and not to slide farther into a form of listening that would seek to move beyond the “added value” of the sound/image relation and to locate the “real” sources of the sounds (as sound effects).

In putting together the sound montage, I was therefore concerned to blend the materials in such a way that would not only obscure these sources for an imagined listener, but that would open them up to a different sort of listening – a reduced listening that severed the indexical links articulating what I thought I knew or didn’t know about the sounds’ causes – for myself. For the reasons listed above, this was no easy task. Shuffling around the sounds still seemed like shuffling around the tractor, the thunder, the horns, etc. It proved helpful to randomly silence various tracks, fading them out and back in at a given point without any specific reason, sonically, focusing instead on the relations between the visual patterns described by the sounds on my computer screen. Especially the ambient sounds (Frankenstein, Forbidden Planet) proved easy to obscure, and the fact of severing their indexical ties to films and the themes they involved allowed for alternate hearings of the other sonic elements (horns, Windows 95 startup sound, etc.), which still retained their character of foreground objects but could be imagined in different settings, etc. (i.e. I was still caught in a form of causal listening, but I had begun imaginatively varying what the causes might be). For example, by varying the volume levels of these tracks, so that they became less distinct from one another, a clicking sound produced by turning on the old computer could be reimagined as starting a tape deck, especially as it preceded the commencement of the Eno music.
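The random silencing described above can be sketched as a simple gain envelope. The function below is a hypothetical illustration (the actual work was done by hand in an editing program, not scripted): it mutes a randomly chosen stretch of samples, ramping the gain down into the gap and back up out of it, so that a track drops out and returns "without any specific reason."

```python
import random

def random_silence(samples, fade_len=100, rng=None):
    """Mute a random stretch of `samples`, with linear fade-out and fade-in."""
    rng = rng or random.Random()
    n = len(samples)
    start = rng.randrange(0, n - 2 * fade_len)           # fade-out begins here
    end = rng.randrange(start + fade_len, n - fade_len)  # fade-in begins here
    out = list(samples)
    for i in range(fade_len):                  # ramp gain down to zero
        out[start + i] *= 1 - (i + 1) / fade_len
    for i in range(start + fade_len, end):     # full silence
        out[i] = 0.0
    for i in range(fade_len):                  # ramp gain back up to full
        out[end + i] *= (i + 1) / fade_len
    return out
```

Applied independently to each track of a montage, envelopes like this thin out the mix at arbitrary points, loosening the sounds' apparent causal anchors.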

Getting past this level of reimagining causal relations, and moving on to a reduced form of listening, was no easy task, and I doubt that it can ever be achieved fully or in purified form. In any case, I began to discover the truth of Chion’s remark, that a prior knowledge of causal relations can actually help liberate the listener from causal listening; thus, the complications described above, stemming from the fact that I found my materials on a video-sharing site, actually helped me to get past the hyper-causal listening that accompanies a purely blind, acousmatic audition. I began hearing low electrical hums rather than identifiable causes, for example, but it remained difficult to get beyond an objectifying and visualizing form of hearing with respect to the buzz saw. The latter instrument was, however, opened up to alternative scenes: perhaps it was a construction crew on a street, and the spaceship was actually a subway beneath that street. Inchoate scenes began to open up as soon as a texture was discovered behind a cause. A dreamy, perhaps hallucinatory, possibly arthouse-style but maybe more ironic or even humorous, visual landscape lay just outside this increasingly material sonic envelope. It remained, therefore, to be seen just what could be heard in these sounds, especially when they were combined with alternate visual materials.

In the process of assembling the found-footage video montage – which I did not begin doing until I had finalized the soundtrack – I discovered an agency or set of agencies in these sounds; they directed my placement of video clips, suggesting that I pull them to the left, nudge them back a bit to the right, shift them elsewhere, or cut them out completely. A sort of dance ensued between the images and the sounds, imbuing both of them with new color, new meaning, and transformed causal and material relations. The final result, which I have titled “Ancillary to Another Purpose,” still embodies many of the thematic elements that I thought it might when I began constructing the soundtrack, but not at all in the same form I anticipated.

Finally, however the results of my experiment might be judged, I highly recommend this exercise – which I undertook in the context of Shambhavi Kaul’s class on “Editing for Film and Video” at Duke University – to anyone interested in gaining a better, more practical, and more embodied understanding of sound/image relations.

Scholarship in Sound & Image


I have long been interested in “videographic criticism” — that is, scholarly (interpretive, argumentative, and sometimes more poetic) work done in the medium of sound and moving images. Such work is especially relevant for engagements with film and video, television, and video games — i.e. for the critical analysis of media that themselves operate with sound and moving images of various sorts. Until now, I have only dabbled in videographic criticism, but I regard it as an important means of gaining insight and materially grappling with moving-image media; the process of planning and executing a video essay can be literally eye-opening to students who are just coming to terms with concepts and practices of cinematographic framing and continuity editing, for example, but the experience is no less powerful for seasoned scholars who are used to engaging with moving images through the more conventional channel of written text.

Over the past few years, as a result, I have made a commitment to myself to work towards more fully integrating videographic modes and methods into my pedagogical and scholarly practice. I have encouraged students to produce video essays as seminar coursework (e.g. in my 21st-century film course) — and I have plans to expand my incorporation of such assignments in future courses. In the meantime, a great number of people have been busy developing the form, exploring best practices for conducting this type of work, and even setting up peer-reviewed journals for videographic criticism — if you haven’t seen it yet, be sure to check out the awesome journal [in]Transition. In other words, while this is still a relatively new field of scholarly publication (though it clearly draws on older forms of documentary and creative work), there is a growing community of people and a growing body of work and experimentation that can be drawn upon and learned from.

I am therefore very excited to be attending a workshop this summer, “Scholarship in Sound & Image” (June 14-27, 2015 at Middlebury College), where I look forward to meeting some of these people and learning from their experience. Co-directed by Christian Keathley and Jason Mittell, and with guest presentations by the incomparable Catherine Grant and Eric Faden, the workshop promises to be a once-in-a-lifetime learning event.

In other words: Expect to see more moving-image experiments on this blog!