Network Ecologies Exhibition: Sneak Preview

[image: IMG_4226]

Above, a sneak peek at some of the work that Karin and I have been preparing for the Network Ecologies exhibition at The Edge, Duke University, April 20 – May 10, 2015. The paintings you see here are functioning QR codes (but the programming has not been finalized yet, hence the oblique presentation here). When finished, they will activate a variety of content and scenarios that have to do with the theme of Network Ecologies. More info soon!

Frankenstein 1910 Glitch Mix

Video meditation inspired by the final paragraph of my book Postnaturalism:

Recoding our perceptions of the Frankenstein film, including even our view of Karloff’s iconic monster as the “original” of its type, Edison’s Frankenstein joins the ranks of the Frankenstein film series, now situating itself at our end rather than at the beginning of that series’ history. Now, prospering among the short clips of YouTube, where it is far more at home than any of the feature films ever could be, the Edison monster becomes capable again of articulating a “medial” narrative—a tale told from a middle altitude, from a position half-way between the diegetic story, on the one hand, of the monster’s defeat by a Frankenstein who grows up and “comes to his senses” and, on the other hand, a non-diegetic, media-historical metanarrative that, in contrast to the story of medial maturation it encoded in 1910, now articulates a tale of visual media’s currently conflicted state, caught between historical specificity and an eternal recurrence of the same. The monster’s medial narrative communicates with our own medial position, mediates possible transactions in a realm of experimentation, in which human and nonhuman agencies negotiate the terms of their changing relations. With its digitally scarred body, pocked by pixels and compression “artifacts,” the century-old monster opens a line of flight that, if we follow it, might bring us face to face with the molecular becoming of our own postnatural future.

Emergence Lab at Duke Media Arts + Sciences Rendezvous


This Thursday, February 26, 2015, the Emergence Lab (headed by media artist Bill Seaman and composer John Supko) will be taking over the Duke Media Arts + Sciences Rendezvous. If you don’t know their work already, be sure to check out Seaman and Supko’s collaborative album s_traits (also available on iTunes and elsewhere), which has been getting a lot of attention in the media lately — including a mention in the New York Times list of top classical recordings of 2014:

‘S_TRAITS’ Bill Seaman, media artist; John Supko, composer (Cotton Goods). This hypnotic disc is derived from more than 110 hours of audio sourced from field recordings, digital noise, documentaries and piano music. A software program developed by the composer John Supko juxtaposed samples from the audio database into multitrack compositions; he and the media artist Bill Seaman then finessed the computer’s handiwork into these often eerily beautiful tracks. VIVIEN SCHWEITZER

In their Generative Media Authorship seminar, which I have been auditing this semester, we have been exploring similar (and wildly different) methods for creating generative artworks and systems in a variety of media, including text, audio, and images in both analog and digital forms. The techniques and ideas we’ve been developing there have dovetailed nicely with the work that Karin Denson and I have been doing lately with the S-1 Lab as well (in particular, the generative sculpture and augmented reality pieces we’ve been making for the lab’s collaborative Manifest Data project). I have experimented with writing Markov chains in Python and JavaScript, turning text into sound, making sound out of images, and making movies out of all of the above — and I have witnessed people with far greater skills than mine do some amazing things with computers, cameras, numbers, books, and fishtanks!
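For readers curious what those Markov-chain experiments might look like, here is a minimal word-level sketch in Python (the toy corpus and parameters are my own placeholders, not code from the seminar):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30, seed=None):
    """Walk the chain, picking a random successor at every step."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(list(chain)))  # start from a random state
    for _ in range(length):
        successors = chain.get(tuple(out[-order:]))
        if not successors:  # dead end: jump to a random state
            successors = chain[rng.choice(list(chain))]
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the monster opens a line of flight that might bring us "
          "face to face with the monster of our own becoming")
print(generate(build_chain(corpus, order=1), length=12, seed=7))
```

The generated text hovers between sense and nonsense because each word is chosen only on the basis of its local predecessors; the shorter the order, the wilder the drift.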

On Thursday (at 4:15pm) several of us will be speaking about our generative experiments and works-in-progress. I will be talking about video glitches and post-cinema, as discussed in my two previous blog posts (here and here), and I am especially excited to see S-1 collaborator Aaron Kutnick’s demonstration of his Raspberry Pi-based eidetic camera and to hear composer Eren Gumrukcuoglu’s machine-based music. I also look forward to meeting Duke biology professor Sönke Johnsen and composer Vladimir Smirnov. All around, this promises to be a great event, so check it out if you’re in the area!

glitchesarelikewildanimals!

Sketch for a multi-screen video installation, which I’ll be presenting and discussing alongside some people doing amazing work in connection with John Supko & Bill Seaman’s Emergence Lab and their Generative Media seminar — next Thursday, February 26, 2015 at the Duke Media Arts + Sciences Rendezvous.

For more about the theory and process behind this piece, as well as the inspiration for the title, see my previous post “The Glitch as Propaedeutic to a Materialist Theory of Post-Cinematic Affect.”

The Glitch as Propaedeutic to a Materialist Theory of Post-Cinematic Affect

In some ways, the digital glitch might be seen as the paradigmatic art form of our convergence culture — where “convergence” is understood more in the sense theorized by Friedrich Kittler than that proposed by Henry Jenkins. That is, glitches speak directly to the interchangeability of media channels in a digital media ecology, where all phenomenal forms float atop an infrastructural stream of zeroes and ones. They thrive upon this interchangeability, while they also point out to us its limits. Indeed, such glitches are most commonly generated by feeding a given data format into the “wrong” system — into a piece of software that wasn’t designed to handle it, for example — and observing the results. Thus, such “databending” practices (knowledge of which circulates among networks of actors constituting a highly “participatory culture” of their own) expose the incompleteness of convergence, the instability of apparently “fixed” data infrastructures as they migrate between various programs and systems for making that data manifest.

As a result, the practice of making glitches provides an excellent praxis-based propaedeutic to a materialist understanding of post-cinematic affect. Glitches magnify the “discorrelations” that I have suggested constitute the heart of post-cinematic moving images, providing a hands-on approach to phenomena that must otherwise seem abstract and theoretical. For example, I have claimed:

CGI and digital cameras do not just sever the ties of indexicality that characterized analogue cinematography (an epistemological or phenomenological claim); they also render images themselves fundamentally processual, thus displacing the film-as-object-of-perception and uprooting the spectator-as-perceiving-subject – in effect, enveloping both in an epistemologically indeterminate but materially quite real and concrete field of affective relation. Mediation, I suggest, can no longer be situated neatly between the poles of subject and object, as it swells with processual affectivity to engulf both.

Now, I still stand behind this description, but I acknowledge that it can be hard to get one’s head around it and to understand why such a claim makes sense (or makes a difference). It probably doesn’t help (unless you’re already into that sort of thing) that I have had recourse to Bergsonian metaphysics to explain the idea:

The mediating technology itself becomes an active locus of molecular change: a Bergsonian body qua center of indetermination, a gap of affectivity between passive receptivity and its passage into action. The camera imitates the process by which our own pre-personal bodies synthesize the passage from molecular to molar, replicating the very process by which signal patterns are selected from the flux and made to coalesce into determinate images that can be incorporated into an emergent subjectivity. This dilation of affect, which characterizes not only video but also computational processes like the rendering of digital images (which is always done on the fly), marks the basic condition of the post-cinematic camera, the positive underside of what presents itself externally as a discorrelating incommensurability with respect to molar perception. As Mark Hansen has argued, the microtemporal scale at which computational media operate enables them to modulate the temporal and affective flows of life and to affect us directly at the level of our pre-personal embodiment. In this respect, properly post-cinematic cameras, which include video and digital imaging devices of all sorts, have a direct line to our innermost processes of becoming-in-time […].

I have, to be sure, pointed to examples (such as the Paranormal Activity and Transformers series of films) that illustrate or embody these ideas in a more palpable, accessible form. And I have indicated some of the concrete spaces of transformation — for example, in the so-called “smart TV”:

today the conception of the camera should perhaps be expanded: consider how all processes of digital image rendering, whether in digital film production or simply in computer-based playback, are involved in the same on-the-fly molecular processes through which the video camera can be seen to trace the affective synthesis of images from flux. Unhinged from traditional conceptions and instantiations, post-cinematic cameras are defined precisely by the confusion or indistinction of recording, rendering, and screening devices or instances. In this respect, the “smart TV” becomes the exemplary post-cinematic camera (an uncanny domestic “room” composed of smooth, computational space): it executes microtemporal processes ranging from compression/decompression, artifact suppression, resolution upscaling, aspect-ratio transformation, motion-smoothing image interpolation, and on-the-fly 2D to 3D conversion. Marking a further expansion of the video camera’s artificial affect-gap, the smart TV and the computational processes of image modulation that it performs bring the perceptual and actional capacities of cinema – its receptive camera and projective screening apparatuses – back together in a post-cinematic counterpart to the early Cinématographe, equipped now with an affective density that uncannily parallels our own. We don’t usually think of our screens as cameras, but that’s precisely what smart TVs and computational display devices in fact are: each screening of a (digital or digitized) “film” becomes in fact a re-filming of it, as the smart TV generates millions of original images, more than the original film itself – images unanticipated by the filmmaker and not contained in the source material. To “render” the film computationally is in fact to offer an original rendition of it, never before performed, and hence to re-produce the film through a decidedly post-cinematic camera. 
This production of unanticipated and unanticipatable images renders such devices strangely vibrant, uncanny […].

Recent news about Samsung’s smart TVs eavesdropping on our conversations may have made those devices seem even more uncanny than when I first wrote the lines above, but this, I have to admit, is still a long way from impressing the theory of post-cinematic transformation on my readers in anything like a materially robust or embodied manner — though I am supposedly describing changes in the affective, embodied parameters of life itself.

Hence my recourse to the glitch, and to the practice of making glitches as a means for gaining first-hand knowledge of the transformations I associate with post-cinema. In lieu of another argument, then, I will simply describe the process of making the video at the top of this blog post. It is my belief that going through this process gave me a deeper understanding of what, exactly, I was pointing to in those arguments; by way of extension, furthermore, I suggest that following these steps on your own will similarly provide insight into the mechanisms and materialities of what, following Steven Shaviro, I have come to refer to as post-cinematic affect.

The process starts with a picture — in this case, a jpeg image taken by my wife on an iPhone 4S:

[image: IMG_6643]

Following this “Glitch Primer” on editing images with text editors, I began experimenting with ImageGlitch, a nice little program that opens the image as editable text in one pane and immediately updates visual changes to the image in another. (The changes themselves can be made with any normal plain-text editor, but ImageGlitch gives you a little more control in the form of immediate visual feedback.)

[image: screenshot, 2015-02-15]

I began inserting the word “postnaturalism” into the text at random places, thus modifying the image’s data infrastructure. By continually breaking and unbreaking the image, I began to get a feel for the picture’s underlying structure. Finally, when I had destroyed the image to my liking, I decided that it would be more interesting to capture the process of destruction/deformation, as opposed to a static product resulting from it. Thus, I used ScreenFlow to capture a video of my screen as I undid (using CMD-Z) all the changes I had just made.
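The same sort of insertion can also be scripted rather than done by hand. Here is a rough Python sketch (the filenames are hypothetical; skipping the first couple of kilobytes is a crude way of leaving the JPEG header intact, and whether the result still decodes at all depends on where the inserted bytes land):

```python
import random

def databend(src, dst, word=b"postnaturalism", n=20, skip=2048, seed=None):
    """Splice `word` into the file at n random byte offsets past the header region."""
    rng = random.Random(seed)
    data = bytearray(open(src, "rb").read())
    for _ in range(n):
        pos = rng.randrange(skip, len(data))
        data[pos:pos] = word  # insert the bytes without overwriting existing ones
    open(dst, "wb").write(bytes(data))

# Hypothetical usage:
# databend("IMG_6643.jpg", "IMG_6643_glitched.jpg", seed=42)
```

Because the insertions shift all subsequent bytes, even a single inserted word can deform everything “downstream” of it in the decoded image — which is exactly the breaking-and-unbreaking behavior described above.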

[image: screenshot, 2015-02-15]

Because I had made an inordinately large number of edits, this step-wise process of reversing them took eight and a half minutes, resulting in a rather long and boring video. So, in Final Cut Pro, I decided to speed things up a little — by 2000%, to be exact. (I also cropped the frame to show only the image, not the text.) I then copied the resulting 24-second video, pasted it back in after the original, and set it to play in reverse (so that the visible image goes from a deformed to a restored state and back again).

This was a little better, but still a bit boring. What else could I do with it? One thing that was clearly missing was a soundtrack, so I next considered how I might generate one with databending techniques.

Through blog posts by Paul Hertz and Antonio Roberts, I became aware of the possibility of using the open-source audio editing program Audacity to open image files as raw data, thereby converting them into sound files for the purposes of further transformation. Instead of going through with this process of glitching, however, I experimented with opening my original jpeg image in a format that would produce recognizable sound (and not just static). The answer was to open the file with GSM encoding, which gave me an almost musical soundtrack — but a little high-pitched for my taste. (To be honest, it sounded pretty cool for about 2 seconds, and then it was annoying to the point of almost hurting.) So I exported the sound as an mp3 file, which I imported into my Final Cut Pro project, where I applied a pitch-shifting filter (turning it down 2400 cents, or 2 octaves).
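Audacity’s GSM import isn’t reproduced here, but the basic gesture of “hearing” a file can be sketched in Python: read the raw bytes of any file and wrap them in a WAV header as unsigned 8-bit samples (the filenames and sample rate are placeholders; unlike GSM decoding, this tends to produce noise rather than anything musical):

```python
import wave

def bytes_to_wav(src, dst, rate=8000):
    """Treat the raw bytes of any file as unsigned 8-bit mono PCM samples."""
    raw = open(src, "rb").read()
    with wave.open(dst, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(1)     # one byte per sample (unsigned 8-bit)
        w.setframerate(rate)  # low rate: the image data plays back slowly
        w.writeframes(raw)

# Hypothetical usage:
# bytes_to_wav("IMG_6643.jpg", "IMG_6643_raw.wav")
```

As for the pitch shift: lowering a sound by 2400 cents means multiplying its frequencies by 2^(−2400/1200) = 1/4, i.e. dropping it exactly two octaves.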

At this point, I could have exported the video and been done with it, but while discovering the wonders of image databending, I ran across some people doing wonderful things with Audacity and video files as well. A tutorial at quart-avant-poing.com was especially helpful, while videos like the following demonstrate the range of possibilities:

https://www.youtube.com/watch?v=eoCvP6mrKQw

So after exporting my video, complete with soundtrack, from Final Cut Pro, I imported the whole thing into Audacity (using A-Law encoding) and exported it back out (again using A-Law encoding), thereby glitching the video further — simply by the act of importing and exporting, i.e. without any intentional act of modification!
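I haven’t reimplemented Audacity’s A-law codec here, but the underlying principle (that a “mere” import/export pass through an 8-bit companding codec already alters the data) can be sketched in Python using the closely related μ-law curve; the filenames are hypothetical:

```python
import math

MU = 255.0  # mu-law companding parameter

def compand(b):
    """Map one byte (read as a sample in [-1, 1]) onto the mu-law curve."""
    x = b / 255 * 2 - 1
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    return int(round((y + 1) / 2 * 255))

def expand(b):
    """Inverse mapping -- inexact, because of the 8-bit quantization."""
    y = b / 255 * 2 - 1
    x = math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
    return min(255, max(0, int(round((x + 1) / 2 * 255))))

def roundtrip(src, dst):
    """'Import' and 'export' a file through the codec, byte by byte."""
    data = open(src, "rb").read()
    open(dst, "wb").write(bytes(expand(compand(b)) for b in data))

# Hypothetical usage:
# roundtrip("glitch_video.mov", "glitch_video_bent.mov")
```

Because the companding curve compresses values at its extremes, several input bytes collapse onto the same encoded byte, so the round trip is not the identity: each pass nudges the data, and that nudge is all a glitch needs.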

I opened the video in VLC and was relatively happy with the results; but then I noticed that other video players, such as QuickTime, QuickTime Player 7, and video editing software like Final Cut and Premiere Pro were all showing something different in their rendering of “the same” data! It was at this point that the connection to my theoretical musings on post-cinematic cameras, smart TVs, and the “fundamentally processual” nature of on-the-fly computational playback began to hit home in a very practical way.

As the author of the quart-avant-poing tutorial put it:

For some reasons (cause players work in different ways) you’ll get sometimes differents results while opening your glitched file into VLC or MPC etc… so If you like what you get into VLC and not what you see in MPC, then export it again directly from VLC for example, which will give a solid video file of what you saw in it, and if VLC can open it but crash while re-exporting it in a solid file, don’t hesitate to use video capture program like FRAPS to record what VLC is showing, because sometimes, capturing a glitch in clean file can be seen as the main part of the job cause glitches are like wild animals in a certain way, you can see them, but putting them into a clean video file structure is a mess.

Thus, I experimented with a variety of ways (and codecs) of exporting (or “capturing”) the video I had seen, which nonetheless proved elusive to my attempts to make it repeatable (and hence visible to others). I went through several iterations of video and audio tracks until I was able to approximate what I thought I had seen and heard. At the end of the process, when I had arrived at the version embedded at the top of this post, I felt that I had more thoroughly probed (though without fully “knowing”) the relations between the data infrastructure and the manifest images — relations that I now saw as more thoroughly material than before. And I came, particularly, to appreciate the idea that “glitches are like wild animals.”

Strange beasts indeed! And when you consider that all digital video files are something like latent glitches — or temporarily domesticated animals — you begin to understand what I mean about the instability and revisability of post-cinematic images: in effect, glitches merely show us the truth about digital video as an essentially generative system, magnifying the interstitial spaces that post-cinematic machineries fill in with their own affective materialities, so that though a string of zeroes and ones remains unchanged as it streams through these systems, we can yet never cross the same stream twice…

New Website: Duke S-1 Speculative Sensation Lab


The S-1 Speculative Sensation Lab at Duke University, with which I have had the honor of collaborating on an exciting set of art/tech/theory projects over the past couple of months, has a new website: http://s-1lab.org

It’s still under development at this point, but you can already get an idea of the kind of work that’s going on in the lab, under the direction of Mark B. N. Hansen and Mark Olson. Check it out!


Ancillary to Another Purpose

Michel Chion describes causal listening as a mode of attending to sounds in order to identify unique objects causing them; to identify classes of causes (e.g. human, mechanical, animal sources); or to at least ascribe to them a general etiological nature (e.g. “it sounds like something mechanical,” or “something digital,” etc.). “For lack of anything more specific, we identify indices, particularly temporal ones, that we try to draw upon to discern the nature of the cause” (Audio-Vision 27). Lacking any more concrete clues, we can, in this mode, trace the “causal history of a sound” even without knowing the sound’s cause.

As is already clear, there is a complex interplay between states of knowing and non-knowing in Chion’s description of listening modes, and this epistemological problematic is intimately tied to questions of visibility and invisibility. In other words, seeing images concomitant with sounds suggests to us causal relations, but these suggestions can be highly misleading – as they usually are in the case of the highly constructed soundscapes of filmic productions. This is why reduced listening – “the listening mode that focuses on the traits of the sound itself, independent of its cause and of its meaning” (29) – has often been associated with “acousmatic listening” (in which the putative causal relations are severed by listening blind, so to speak, i.e. without any accompanying images). As a form of phenomenological bracketing, acousmatic listening seeks to place us in a state of non-knowing with relation to causes and their visual cues, thus helping us to attend to “sound—verbal, played on an instrument, noises, or whatever—as itself the object […] instead of as a vehicle for something else” (29). In such a situation, we can focus on the sound’s “own qualities of timbre and texture, to its own personal vibration” (31) – or so it would seem.

Acousmatic listening is a path to reduced listening, perhaps, but only after an initial intensification of causal listening that occurs (that is, we try even harder to “see” the causes when we can’t simply look at them). In some respect, then, knowing the causes to begin with can actually help overcome this problem, allowing us to stop focusing on the question of causality so that we can more freely “explore what the sound is like in and of itself” (33).

I describe these complexities of Chion’s listening modes because they neatly summarize the complexities of my own experience of constructing, listening to, and experiencing a sound montage. This montage, which runs 2 minutes and 21 seconds in total, is constructed from found materials, all of which were collected on YouTube. The process of collection was guided by only very loose criteria – I was interested in finding sonic materials that are related in some way to changing human-technological relations. I thought of transitions from pre-industrial to industrial to digital environments and sought to find sounds that might evoke these (along with broader contrasts, real or imagined, between nature and culture, human and nonhuman, organic and technical).

The materials collected are: an amateur video of a “1953 VAC CASE tractor running a buzz saw” (http://youtu.be/wrGdgjoUJSg); an excerpt from the creation sequence in James Whale’s 1931 Frankenstein (http://youtu.be/8H3dFh6GA-A); “Sounds of the 90’s – old computer and printer starting up,” which shows a mid-1990s desktop computer booting Windows 95 (http://youtu.be/JpSfgusep7s); a full-album pirating of Brian Eno’s classic album Ambient 1 – Music for Airports from 1978, set against a static image of the album cover (http://youtu.be/5KGMo9yOaSU); “1 Hour of Ambient Sound From Forbidden Planet: The Great Krell Machine (1956 Mono)” (http://youtu.be/0nt7q5Rw-R8); and “Leslie S5T-R train horn & Doran-Cunningham 6-B Ship horn,” an amateur demonstration of these two industrial horns, installed in a car and blasted in an empty canyon (http://youtu.be/cjyUfV3W5zk).

The fact that these sound materials were culled from a video-sharing site had implications for the epistemological/phenomenological situation of listening. In the case of the tractor, the old computer, and the industrial horns, the amateur nature of the video emphasized a direct causal link; presumably, a viewer of the video “knows” exactly what is causing the sounds. The situation is more complicated in the other three sources. The Eno album is the only specifically musical source selected; and while it is recognizably “musical,” in that musical instruments are identifiable as causes of sounds, the ambient nature of the music is itself designed to problematize causal listening and to open the very notion of the sound object to include the chance sounds that accompany its audition. Nevertheless, finding the object on YouTube, where it is attributed to Eno as an album, and where the still video track of the album cover objectifies the sound as an encapsulated and specifically musical product, reinforces a different level of causal indexing. Similarly, the ambient sounds from Forbidden Planet might be extremely difficult to identify without the attribution and the still video image on YouTube; with them, a very general causal relation (it sounds like the rumble of a space ship, for example) is established – despite the fact that the real sound sources, the production processes involved in the studio film’s soundtrack, are obscured. The sounds from Frankenstein, from which all dialogue has been omitted, seem even more causally determinate: the video shows us lightning flashes and technical apparatuses emitting sounds. 
Especially in this case, but to varying degrees in all of them, knowing where these sounds come from makes it hard to put aside putative causal knowledge, to reduce the sounds phenomenally to their sonic textures, and not to slide farther into a form of listening that would seek to move beyond the “added value” of the sound/image relation and to locate the “real” sources of the sounds (as sound effects).

In putting together the sound montage, I was therefore concerned to blend the materials in a way that would not only obscure these sources for an imagined listener, but that would also open them up to a different sort of listening – a reduced listening that severed the indexical links articulating what I thought I knew or didn’t know about the sounds’ causes – for myself. For the reasons listed above, this was no easy task. Shuffling around the sounds still seemed like shuffling around the tractor, the thunder, the horns, etc. It proved helpful to silence various tracks at random, fading them out and back in at points chosen without any specific sonic reason, focusing instead on the relations between the visual waveform patterns described by the sounds on my computer screen. The ambient sounds (Frankenstein, Forbidden Planet) proved especially easy to obscure, and severing their indexical ties to films and the themes they involved allowed for alternate hearings of the other sonic elements (horns, Windows 95 startup sound, etc.), which still retained their character as foreground objects but could be imagined in different settings (i.e. I was still caught in a form of causal listening, but I had begun imaginatively varying what the causes might be). For example, by varying the volume levels of these tracks, so that they became less distinct from one another, a clicking sound produced by turning on the old computer could be reimagined as the starting of a tape deck, especially as it preceded the commencement of the Eno music.

Getting past this level of reimagining causal relations, and moving on to a reduced form of listening, was no easy task, and I doubt that it can ever be achieved fully or in purified form. In any case, I began to discover the truth of Chion’s remark, that a prior knowledge of causal relations can actually help liberate the listener from causal listening; thus, the complications described above, stemming from the fact that I found my materials on a video-sharing site, actually helped me to get past the hyper-causal listening that accompanies a purely blind, acousmatic audition. I began hearing low electrical hums rather than identifiable causes, for example, but it remained difficult to get beyond an objectifying and visualizing form of hearing with respect to the buzz saw. The latter instrument was, however, opened up to alternative scenes: perhaps it was a construction crew on a street, and the spaceship was actually a subway beneath that street. Inchoate scenes began to open up as soon as a texture was discovered behind a cause. A dreamy, perhaps hallucinatory, possibly arthouse-style but maybe more ironic or even humorous, visual landscape lay just outside this increasingly material sonic envelope. It remained, therefore, to be seen just what could be heard in these sounds, especially when they were combined with alternate visual materials.

In the process of assembling the found-footage video montage – which I did not begin doing until I had finalized the soundtrack – I discovered an agency or set of agencies in these sounds; they directed my placement of video clips, suggesting that I pull them to the left, nudge them back a bit to the right, shift them elsewhere, or cut them out completely. A sort of dance ensued between the images and the sounds, imbuing both of them with new color, new meaning, and transformed causal and material relations. The final result, which I have titled “Ancillary to Another Purpose,” still embodies many of the thematic elements that I thought it might when I began constructing the soundtrack, but not at all in the same form I anticipated.

Finally, however the results of my experiment might be judged, I highly recommend this exercise – which I undertook in the context of Shambhavi Kaul’s class on “Editing for Film and Video” at Duke University – to anyone interested in gaining a better, more practical, and more embodied understanding of sound/image relations.