This video essay explores “suture” and ideology in an excerpt from Michael Bay’s 13 Hours: The Secret Soldiers of Benghazi (2016). Moreover, the video itself is an experiment or “proof-of-concept” piece: made entirely in Keynote (Apple’s version of PowerPoint), all of the transitions and effects are automated by the software. In other words, after designing the presentation (or “programming” it), I simply clicked “play” and let the presentation run. This is a screen recording of the results.
DEMO Video: Post-Cinema: 24fps@44100Hz
As Karin posted yesterday (and as I reblogged this morning), our collaborative artwork Post-Cinema: 24fps@44100Hz will be on display (and on sale) from January 15-23 at The Carrack Modern Art gallery in Durham, NC, as part of their annual Winter Community Show.
Exhibiting augmented reality pieces always brings with it a variety of challenges — including technical ones and, above all, the need to inform viewers about how to use the work. So, for this occasion, I’ve put together this brief demo video explaining the piece and how to view it. The video will be displayed on a digital picture frame mounted on the wall below the painting. Hopefully it will be eye-catching enough to attract passersby while effectively communicating the essential information about the process and use of the work.
“Found Footage Video Aesthetics” Theme Week at In Media Res
Next week, August 17-21, 2015, I will be participating in the “Found Footage Video Aesthetics” theme week at the mediaCommons site In Media Res. I’ll be up first, on Monday, Aug. 17, with a video essay on the topic of “VHS Found Footage and the Material Horrors of Post-Cinematic Images” — a project I started this summer at the NEH-funded Middlebury College Workshop on Videographic Criticism. Stay tuned!
Complete Panel Video — Post-Cinema and/as Speculative Media Theory #SCMS15
On March 27, 2015, at the annual conference of the Society for Cinema and Media Studies in Montreal, Steven Shaviro, Patricia Pisters, Adrian Ivakhiv, and Mark B. N. Hansen participated in a panel I organized on “Post-Cinema and/as Speculative Media Theory.” It was standing room only, and many people were unable to squeeze into the room (some images are posted here). Thankfully, all of the presenters agreed to have their talks recorded on video and archived online.
(I have posted these videos here before, but for the sake of convenience I wanted to pull them together in a single post, so that the entire panel is available in one place.)
Above, you’ll find my brief general introduction to the panel, and below the four presentations:
Steven Shaviro’s proposal for a “Cinema 3.0”: the rhythm-image (following Deleuze’s movement-image and time-image)
Patricia Pisters, whose own proposal for a third image-type she calls the “neuro-image,” on the politics of post-cinema
Adrian Ivakhiv on the material, ecological dimensions of (post-)cinema in the Anthropocene and/or Capitalocene
Mark B. N. Hansen on the microtemporal and sub-perceptual dimensions of digital, post-cinematic images
Finally, you can look forward to hearing more from the panel participants, all of whom are contributing to an open-access collection titled Post-Cinema: Theorizing 21st-Century Film, which Julia Leyda and I are co-editing (forthcoming this year from REFRAME Books). More details soon, so stay tuned!
Sculpting Data — Teaser Video
Karin and Shane Denson, “Sculpting Data (& Painting Networks)” — teaser video.
glitchesarelikewildanimals!
Sketch for a multi-screen video installation, which I’ll be presenting and discussing alongside some people doing amazing work in connection with John Supko & Bill Seaman’s Emergence Lab and their Generative Media seminar — next Thursday, February 26, 2015 at the Duke Media Arts + Sciences Rendezvous.
For more about the theory and process behind this piece, as well as the inspiration for the title, see my previous post “The Glitch as Propaedeutic to a Materialist Theory of Post-Cinema.”
The Glitch as Propaedeutic to a Materialist Theory of Post-Cinematic Affect
In some ways, the digital glitch might be seen as the paradigmatic art form of our convergence culture — where “convergence” is understood more in the sense theorized by Friedrich Kittler than that proposed by Henry Jenkins. That is, glitches speak directly to the interchangeability of media channels in a digital media ecology, where all phenomenal forms float atop an infrastructural stream of zeroes and ones. They thrive upon this interchangeability, while they also point out to us its limits. Indeed, such glitches are most commonly generated by feeding a given data format into the “wrong” system — into a piece of software that wasn’t designed to handle it, for example — and observing the results. Thus, such “databending” practices (knowledge of which circulates among networks of actors constituting a highly “participatory culture” of their own) expose the incompleteness of convergence, the instability of apparently “fixed” data infrastructures as they migrate between various programs and systems for making that data manifest.
As a result, the practice of making glitches provides an excellent praxis-based propaedeutic to a materialist understanding of post-cinematic affect. Glitches magnify the “discorrelations” that I have suggested constitute the heart of post-cinematic moving images, providing a hands-on approach to phenomena that must otherwise seem abstract and theoretical. For example, I have claimed:
CGI and digital cameras do not just sever the ties of indexicality that characterized analogue cinematography (an epistemological or phenomenological claim); they also render images themselves fundamentally processual, thus displacing the film-as-object-of-perception and uprooting the spectator-as-perceiving-subject – in effect, enveloping both in an epistemologically indeterminate but materially quite real and concrete field of affective relation. Mediation, I suggest, can no longer be situated neatly between the poles of subject and object, as it swells with processual affectivity to engulf both.
Now, I still stand behind this description, but I acknowledge that it can be hard to get one’s head around it and to understand why such a claim makes sense (or makes a difference). It probably doesn’t help (unless you’re already into that sort of thing) that I have had recourse to Bergsonian metaphysics to explain the idea:
The mediating technology itself becomes an active locus of molecular change: a Bergsonian body qua center of indetermination, a gap of affectivity between passive receptivity and its passage into action. The camera imitates the process by which our own pre-personal bodies synthesize the passage from molecular to molar, replicating the very process by which signal patterns are selected from the flux and made to coalesce into determinate images that can be incorporated into an emergent subjectivity. This dilation of affect, which characterizes not only video but also computational processes like the rendering of digital images (which is always done on the fly), marks the basic condition of the post-cinematic camera, the positive underside of what presents itself externally as a discorrelating incommensurability with respect to molar perception. As Mark Hansen has argued, the microtemporal scale at which computational media operate enables them to modulate the temporal and affective flows of life and to affect us directly at the level of our pre-personal embodiment. In this respect, properly post-cinematic cameras, which include video and digital imaging devices of all sorts, have a direct line to our innermost processes of becoming-in-time […].
I have, to be sure, pointed to examples (such as the Paranormal Activity and Transformers series of films) that illustrate or embody these ideas in a more palpable, accessible form. And I have indicated some of the concrete spaces of transformation — for example, in the so-called “smart TV”:
today the conception of the camera should perhaps be expanded: consider how all processes of digital image rendering, whether in digital film production or simply in computer-based playback, are involved in the same on-the-fly molecular processes through which the video camera can be seen to trace the affective synthesis of images from flux. Unhinged from traditional conceptions and instantiations, post-cinematic cameras are defined precisely by the confusion or indistinction of recording, rendering, and screening devices or instances. In this respect, the “smart TV” becomes the exemplary post-cinematic camera (an uncanny domestic “room” composed of smooth, computational space): it executes microtemporal processes ranging from compression/decompression, artifact suppression, resolution upscaling, aspect-ratio transformation, motion-smoothing image interpolation, and on-the-fly 2D to 3D conversion. Marking a further expansion of the video camera’s artificial affect-gap, the smart TV and the computational processes of image modulation that it performs bring the perceptual and actional capacities of cinema – its receptive camera and projective screening apparatuses – back together in a post-cinematic counterpart to the early Cinématographe, equipped now with an affective density that uncannily parallels our own. We don’t usually think of our screens as cameras, but that’s precisely what smart TVs and computational display devices in fact are: each screening of a (digital or digitized) “film” becomes in fact a re-filming of it, as the smart TV generates millions of original images, more than the original film itself – images unanticipated by the filmmaker and not contained in the source material. To “render” the film computationally is in fact to offer an original rendition of it, never before performed, and hence to re-produce the film through a decidedly post-cinematic camera. This production of unanticipated and unanticipatable images renders such devices strangely vibrant, uncanny […].
Recent news about Samsung’s smart TVs eavesdropping on our conversations may have made those devices seem even more uncanny than when I first wrote the lines above, but this, I have to admit, is still a long way from impressing the theory of post-cinematic transformation on my readers in anything like a materially robust or embodied manner — though I am supposedly describing changes in the affective, embodied parameters of life itself.
Hence my recourse to the glitch, and to the practice of making glitches as a means for gaining first-hand knowledge of the transformations I associate with post-cinema. In lieu of another argument, then, I will simply describe the process of making the video at the top of this blog post. It is my belief that going through this process gave me a deeper understanding of what, exactly, I was pointing to in those arguments; by way of extension, furthermore, I suggest that following these steps on your own will similarly provide insight into the mechanisms and materialities of what, following Steven Shaviro, I have come to refer to as post-cinematic affect.
The process starts with a picture — in this case, a jpeg image taken by my wife on an iPhone 4S:
Following this “Glitch Primer” on editing images with text editors, I began experimenting with ImageGlitch, a nice little program that opens the image as editable text in one pane and immediately displays the resulting changes to the image in another. (The changes themselves can be made with any normal plain-text editor, but ImageGlitch gives you a little more control in the form of immediate visual feedback.)
I began inserting the word “postnaturalism” into the text at random places, thus modifying the image’s data infrastructure. By continually breaking and unbreaking the image, I began to get a feel for the picture’s underlying structure. Finally, when I had destroyed the image to my liking, I decided that it would be more interesting to capture the process of destruction/deformation, as opposed to a static product resulting from it. Thus, I used ScreenFlow to capture a video of my screen as I undid (using CMD-Z) all the changes I had just made.
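For anyone who would rather script this step than click through a text editor, the same databending move can be sketched in a few lines of Python. This is only a rough equivalent of what ImageGlitch does interactively; the filename, the number of insertions, and the header margin are all placeholder assumptions, and the results will vary from decoder to decoder (fittingly enough, given what follows below).

```python
import random

WORD = b"postnaturalism"
HEADER_SAFETY = 512  # leave the first bytes alone so the JPEG markers survive
NUM_INSERTIONS = 20  # arbitrary; more insertions, more destruction

with open("photo.jpg", "rb") as f:  # hypothetical input file
    data = bytearray(f.read())

for _ in range(NUM_INSERTIONS):
    # splice the word into the compressed image data at a random offset
    offset = random.randrange(HEADER_SAFETY, len(data))
    data[offset:offset] = WORD

with open("photo_glitched.jpg", "wb") as f:
    f.write(data)
```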
Because I had made an inordinately large number of edits, this step-wise process of reversing the edits took eight and a half minutes, resulting in a rather long and boring video. So, in Final Cut Pro, I decided to speed things up a little — by 2000%, to be exact. (I also cropped the frame to show only the image, not the text.) I then copied the resulting 24-second video, pasted it back in after the original, and set it to play in reverse (so that the visible image goes from a deformed to a restored state and back again).
This was a little better, but still a bit boring. What else could I do with it? One thing that was clearly missing was a soundtrack, so I next considered how I might generate one with databending techniques.
Through blog posts by Paul Hertz and Antonio Roberts, I became aware of the possibility of using the open-source audio editing program Audacity to open image files as raw data, thereby converting them into sound files for further transformation. Instead of going through with this process of glitching, however, I experimented with opening my original jpeg image in a format that would produce recognizable sound (and not just static). The answer was to open the file with GSM encoding, which gave me an almost musical soundtrack — but a little high-pitched for my taste. (To be honest, it sounded pretty cool for about two seconds, and then it was annoying to the point of almost hurting.) So I exported the sound as an mp3 file, imported it into my Final Cut Pro project, and applied a pitch-shifting filter (turning it down 2400 cents, or two octaves).
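The underlying trick, hearing a file’s bytes as sound, can also be approximated outside of Audacity. Here is a minimal Python sketch, again assuming a hypothetical “photo.jpg”: it simply wraps the image’s raw bytes in a WAV header and treats them as 8-bit PCM samples. Note that Audacity’s GSM codec interprets the data very differently, so this linear reading will sound noisier; lowering the frame rate is a crude stand-in for the pitch-shifting filter (it also stretches the duration).

```python
import wave

with open("photo.jpg", "rb") as f:  # hypothetical input file
    data = f.read()

with wave.open("photo_as_sound.wav", "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(1)      # interpret each raw byte as one 8-bit sample
    w.setframerate(44100)  # try 11025 for a rough two-octave drop
    w.writeframes(data)
```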
At this point, I could have exported the video and been done with it, but while discovering the wonders of image databending, I ran across some people doing wonderful things with Audacity and video files as well. A tutorial at quart-avant-poing.com was especially helpful, while videos like the following demonstrate the range of possibilities:
So after exporting my video, complete with soundtrack, from Final Cut Pro, I imported the whole thing into Audacity (using A-Law encoding) and exported it back out (again using A-Law encoding), thereby glitching the video further — simply by the act of importing and exporting, i.e. without any intentional act of modification!
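For the curious, the destructive part of that round trip can be sketched in Python as well, using the standard library’s audioop module (deprecated since Python 3.11 and removed in 3.13). A-Law compression is lossy, so encoding the file’s bytes and immediately decoding them corrupts the data with no further intervention: the programmatic equivalent of importing and exporting. Everything here is an assumption for illustration, including the filename and the decision to shield the first kilobyte so the container header stays parseable (Audacity’s raw import makes no such exception, which is one reason its results differ).

```python
import audioop  # removed in Python 3.13; requires a version <= 3.12

HEADER = 1024  # bytes to leave untouched so players can still open the file

with open("video.mp4", "rb") as f:  # hypothetical input file
    data = f.read()

head, body = data[:HEADER], data[HEADER:]
# treat each byte as an 8-bit linear sample, compress to A-Law, expand back:
# the lossy round trip itself is the glitch
glitched = audioop.alaw2lin(audioop.lin2alaw(body, 1), 1)

with open("video_glitched.mp4", "wb") as f:
    f.write(head + glitched)
```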
I opened the video in VLC and was relatively happy with the results; but then I noticed that other video players, such as QuickTime, QuickTime Player 7, and video editing software like Final Cut and Premiere Pro were all showing something different in their rendering of “the same” data! It was at this point that the connection to my theoretical musings on post-cinematic cameras, smart TVs, and the “fundamentally processual” nature of on-the-fly computational playback began to hit home in a very practical way.
As the author of the quart-avant-poing tutorial put it:
For some reasons (cause players work in different ways) you’ll get sometimes differents results while opening your glitched file into VLC or MPC etc… so If you like what you get into VLC and not what you see in MPC, then export it again directly from VLC for example, which will give a solid video file of what you saw in it, and if VLC can open it but crash while re-exporting it in a solid file, don’t hesitate to use video capture program like FRAPS to record what VLC is showing, because sometimes, capturing a glitch in clean file can be seen as the main part of the job cause glitches are like wild animals in a certain way, you can see them, but putting them into a clean video file structure is a mess.
Thus, I experimented with a variety of ways (and codecs) of exporting (or “capturing”) the video I had seen — a video that proved elusive to my attempts to make it repeatable (and hence visible to others). I went through several iterations of video and audio tracks until I was able to approximate what I thought I had seen and heard. At the end of the process, when I had arrived at the version embedded at the top of this post, I felt like I had more thoroughly probed (though without fully “knowing”) the relations between the data infrastructure and the manifest images — relations that I now saw as more thoroughly material than before. And I came, particularly, to appreciate the idea that “glitches are like wild animals.”
Strange beasts indeed! And when you consider that all digital video files are something like latent glitches — or temporarily domesticated animals — you begin to understand what I mean about the instability and revisability of post-cinematic images: in effect, glitches merely show us the truth about digital video as an essentially generative system, magnifying the interstitial spaces that post-cinematic machineries fill in with their own affective materialities, so that though a string of zeroes and ones remains unchanged as it streams through these systems, we can yet never cross the same stream twice…
Ancillary to Another Purpose
Michel Chion describes causal listening as a mode of attending to sounds in order to identify unique objects causing them; to identify classes of causes (e.g. human, mechanical, animal sources); or to at least ascribe to them a general etiological nature (e.g. “it sounds like something mechanical,” or “something digital,” etc.). “For lack of anything more specific, we identify indices, particularly temporal ones, that we try to draw upon to discern the nature of the cause” (Audio-Vision 27). Lacking any more concrete clues, we can, in this mode, trace the “causal history of a sound” even without knowing the sound’s cause.
As is already clear, there is a complex interplay between states of knowing and non-knowing in Chion’s description of listening modes, and this epistemological problematic is intimately tied to questions of visibility and invisibility. In other words, seeing images concomitant with sounds suggests to us causal relations, but these suggestions can be highly misleading – as they usually are in the case of the highly constructed soundscapes of filmic productions. This is why reduced listening – “the listening mode that focuses on the traits of the sound itself, independent of its cause and of its meaning” (29) – has often been associated with “acousmatic listening” (in which the putative causal relations are severed by listening blind, so to speak, i.e. without any accompanying images). As a form of phenomenological bracketing, acousmatic listening seeks to place us in a state of non-knowing with relation to causes and their visual cues, thus helping us to attend to “sound—verbal, played on an instrument, noises, or whatever—as itself the object […] instead of as a vehicle for something else” (29). In such a situation, we can focus on the sound’s “own qualities of timbre and texture, to its own personal vibration” (31) – or so it would seem.
Acousmatic listening is a path to reduced listening, perhaps, but only by way of an initial intensification of causal listening (that is, we try even harder to “see” the causes when we can’t simply look at them). In some respects, then, knowing the causes to begin with can actually help overcome this problem, allowing us to stop focusing on the question of causality so that we can more freely “explore what the sound is like in and of itself” (33).
I describe these complexities of Chion’s listening modes because they neatly summarize the complexities of my own experience of constructing, listening to, and experiencing a sound montage. This montage, which runs 2 minutes and 21 seconds in total, is constructed from found materials, all of which were collected on YouTube. The process of collection was guided by only very loose criteria – I was interested in finding sonic materials that are related in some way to changing human-technological relations. I thought of transitions from pre-industrial to industrial to digital environments and sought to find sounds that might evoke these (along with broader contrasts, real or imagined, between nature and culture, human and nonhuman, organic and technical).
The materials collected are: an amateur video of a “1953 VAC CASE tractor running a buzz saw” (http://youtu.be/wrGdgjoUJSg); an excerpt from the creation sequence in James Whale’s 1931 Frankenstein (http://youtu.be/8H3dFh6GA-A); “Sounds of the 90’s – old computer and printer starting up,” which shows a desktop computer ca. 1993 booting Windows 95 (http://youtu.be/JpSfgusep7s); a full-album pirating of Brian Eno’s classic album Ambient 1 – Music for Airports from 1978, set against a static image of the album cover (http://youtu.be/5KGMo9yOaSU); “1 Hour of Ambient Sound From Forbidden Planet: The Great Krell Machine (1956 Mono)” (http://youtu.be/0nt7q5Rw-R8); and “Leslie S5T-R train horn & Doran-Cunningham 6-B Ship horn,” an amateur demonstration of these two industrial horns, installed in a car and blasted in an empty canyon (http://youtu.be/cjyUfV3W5zk).
The fact that these sound materials were culled from a video-sharing site had implications for the epistemological/phenomenological situation of listening. In the case of the tractor, the old computer, and the industrial horns, the amateur nature of the video emphasized a direct causal link; presumably, a viewer of the video “knows” exactly what is causing the sounds. The situation is more complicated in the other three sources. The Eno album is the only specifically musical source selected; and while it is recognizably “musical,” in that musical instruments are identifiable as causes of sounds, the ambient nature of the music is itself designed to problematize causal listening and to open the very notion of the sound object to include the chance sounds that accompany its audition. Nevertheless, finding the object on YouTube, where it is attributed to Eno as an album, and where the still video track of the album cover objectifies the sound as an encapsulated and specifically musical product, reinforces a different level of causal indexing. Similarly, the ambient sounds from Forbidden Planet might be extremely difficult to identify without the attribution and the still video image on YouTube; with them, a very general causal relation (it sounds like the rumble of a space ship, for example) is established – despite the fact that the real sound sources, the production processes involved in the studio film’s soundtrack, are obscured. The sounds from Frankenstein, from which all dialogue has been omitted, seem even more causally determinate: the video shows us lightning flashes and technical apparatuses emitting sounds. Especially in this case, but to varying degrees in all of them, knowing where these sounds come from makes it hard to put aside putative causal knowledge, to reduce the sounds phenomenally to their sonic textures, and not to slide farther into a form of listening that would seek to move beyond the “added value” of the sound/image relation and to locate the “real” sources of the sounds (as sound effects).
In putting together the sound montage, I was therefore concerned to blend the materials in a way that would not only obscure these sources for an imagined listener, but would also open them up to a different sort of listening – a reduced listening that severed the indexical links articulating what I thought I knew or didn’t know about the sounds’ causes – for myself. For the reasons listed above, this was no easy task. Shuffling around the sounds still seemed like shuffling around the tractor, the thunder, the horns, etc. It proved helpful to randomly silence various tracks, fading them out and back in at a given point without any specific sonic reason, focusing instead on the relations between the visual patterns that the sounds described on my computer screen. The ambient sounds (Frankenstein, Forbidden Planet) proved especially easy to obscure, and severing their indexical ties to the films and the themes they involved allowed for alternate hearings of the other sonic elements (horns, Windows 95 startup sound, etc.), which still retained their character as foreground objects but could be imagined in different settings (i.e. I was still caught in a form of causal listening, but I had begun imaginatively varying what the causes might be). For example, by varying the volume levels of these tracks, so that they became less distinct from one another, a clicking sound produced by turning on the old computer could be reimagined as the starting of a tape deck, especially as it preceded the commencement of the Eno music.
Getting past this level of reimagining causal relations, and moving on to a reduced form of listening, was no easy task, and I doubt that it can ever be achieved fully or in purified form. In any case, I began to discover the truth of Chion’s remark, that a prior knowledge of causal relations can actually help liberate the listener from causal listening; thus, the complications described above, stemming from the fact that I found my materials on a video-sharing site, actually helped me to get past the hyper-causal listening that accompanies a purely blind, acousmatic audition. I began hearing low electrical hums rather than identifiable causes, for example, but it remained difficult to get beyond an objectifying and visualizing form of hearing with respect to the buzz saw. The latter instrument was, however, opened up to alternative scenes: perhaps it was a construction crew on a street, and the spaceship was actually a subway beneath that street. Inchoate scenes began to open up as soon as a texture was discovered behind a cause. A dreamy, perhaps hallucinatory, possibly arthouse-style but maybe more ironic or even humorous, visual landscape lay just outside this increasingly material sonic envelope. It remained, therefore, to be seen just what could be heard in these sounds, especially when they were combined with alternate visual materials.
In the process of assembling the found-footage video montage – which I did not begin doing until I had finalized the soundtrack – I discovered an agency or set of agencies in these sounds; they directed my placement of video clips, suggesting that I pull them to the left, nudge them back a bit to the right, shift them elsewhere, or cut them out completely. A sort of dance ensued between the images and the sounds, imbuing both of them with new color, new meaning, and transformed causal and material relations. The final result, which I have titled “Ancillary to Another Purpose,” still embodies many of the thematic elements that I thought it might when I began constructing the soundtrack, but not at all in the same form I anticipated.
Finally, however the results of my experiment might be judged, I highly recommend this exercise – which I undertook in the context of Shambhavi Kaul’s class on “Editing for Film and Video” at Duke University – to anyone interested in gaining a better, more practical, and more embodied understanding of sound/image relations.
Video: Post-Cinematic Interfaces with a Postnatural World
There’s something fitting about the fact that the audio recording of my “3 Theses” on postnaturalism and post-cinema — which I presented at the 2014 annual conference of the DGfA, “America After Nature” — is overrun by the nonhuman voices of nameless birds calling to one another, blissfully indifferent to my theoretical speculations. What at first presented itself to me as something of a disappointment, viz. the generally poor quality of the recording and the occasional difficulty of discerning spoken words in particular, seemed on second thought a nice illustration — or better: enactment — of some of the ideas I put forward about the distributed agency of affect’s environmental mediation: here the human voice competes with “natural” and “cultural” forces ranging from songbirds to smartphones, failing to command their attentions but contributing to an improbable concert for a sufficiently non- or posthuman ear immersed in an ecology of material interaction.
Looking at it (or listening to it) from this angle, and getting over my initial disappointment, I decided to add some video of the various postnatural landscapes I encountered while in Germany on my recent trip. The result is another of what I have begun referring to as “metabolic images” — where the computational capture and processing of moving images, along with their temporal (and microtemporal) modulation, point to the subpersonal effects (and affects) of our embodied interfaces with a post-cinematic media environment. (See here, here, or here for more…)
(For the full effect, be sure to view the video in HD on vimeo. And finally, if you happen to have a more humanly inflected interest in the discursive “contents” put forward here, you can find the full text of my presentation here.)
Post-Cinematic: Video Essays
For their final projects in my 21st-century film class, three of my students chose to make video essays, which they have now made available on a blog that they set up especially for this purpose. Over at 21stcenturycinema.wordpress.com, you will find Jesko Thiel’s exploration of transmedia storytelling, Christopher Schramm’s analysis of editing techniques in videogame-based “fragmovies,” and Andreas Merokis’s look at violence as narrative and/or spectacle in contemporary cinema. Take a look and leave them a comment!