Here are a few images, all taken from Twitter, from the “Post-Cinema and/as Speculative Media Theory” panel that I chaired this morning, bright and early at 9am.
Finally, if you couldn’t make it, couldn’t see the speakers, couldn’t hear them, or couldn’t follow all the intricacies of their very rich and theoretically dense papers this morning, you may be interested to know that the entire panel was videotaped and will be available here soon!
Only two weeks until the 2015 Society for Cinema and Media Studies annual conference (March 25-29 in Montreal)! In case you haven’t seen it already, the official program is now up here (warning: opens as a PDF).
As I have posted before, I will be participating in two panels this year:
Second, I will be co-presenting a paper on “Hardware Seriality” with my colleague Andreas Jahn-Sudmann in the “Digital Seriality” panel (Session Q20: Saturday, March 28, 3:00 – 4:45pm). Other panelists include Scott Higgins and Dominik Maeder (click for their abstracts). (Unfortunately, Daniela Wentz will not be able to attend the conference.)
In some ways, the digital glitch might be seen as the paradigmatic art form of our convergence culture — where “convergence” is understood more in the sense theorized by Friedrich Kittler than that proposed by Henry Jenkins. That is, glitches speak directly to the interchangeability of media channels in a digital media ecology, where all phenomenal forms float atop an infrastructural stream of zeroes and ones. Glitches thrive upon this interchangeability even as they point up its limits. Indeed, such glitches are most commonly generated by feeding a given data format into the “wrong” system — into a piece of software that wasn’t designed to handle it, for example — and observing the results. Thus, such “databending” practices (knowledge of which circulates among networks of actors constituting a highly “participatory culture” of their own) expose the incompleteness of convergence: the instability of apparently “fixed” data infrastructures as they migrate between the various programs and systems that make that data manifest.
As a result, the practice of making glitches provides an excellent praxis-based propaedeutic to a materialist understanding of post-cinematic affect. Glitches magnify the “discorrelations” that I have suggested constitute the heart of post-cinematic moving images, providing a hands-on approach to phenomena that must otherwise seem abstract and theoretical. For example, I have claimed:
CGI and digital cameras do not just sever the ties of indexicality that characterized analogue cinematography (an epistemological or phenomenological claim); they also render images themselves fundamentally processual, thus displacing the film-as-object-of-perception and uprooting the spectator-as-perceiving-subject – in effect, enveloping both in an epistemologically indeterminate but materially quite real and concrete field of affective relation. Mediation, I suggest, can no longer be situated neatly between the poles of subject and object, as it swells with processual affectivity to engulf both.
Now, I still stand behind this description, but I acknowledge that it can be hard to get one’s head around it and to understand why such a claim makes sense (or makes a difference). It probably doesn’t help (unless you’re already into that sort of thing) that I have had recourse to Bergsonian metaphysics to explain the idea:
The mediating technology itself becomes an active locus of molecular change: a Bergsonian body qua center of indetermination, a gap of affectivity between passive receptivity and its passage into action. The camera imitates the process by which our own pre-personal bodies synthesize the passage from molecular to molar, replicating the very process by which signal patterns are selected from the flux and made to coalesce into determinate images that can be incorporated into an emergent subjectivity. This dilation of affect, which characterizes not only video but also computational processes like the rendering of digital images (which is always done on the fly), marks the basic condition of the post-cinematic camera, the positive underside of what presents itself externally as a discorrelating incommensurability with respect to molar perception. As Mark Hansen has argued, the microtemporal scale at which computational media operate enables them to modulate the temporal and affective flows of life and to affect us directly at the level of our pre-personal embodiment. In this respect, properly post-cinematic cameras, which include video and digital imaging devices of all sorts, have a direct line to our innermost processes of becoming-in-time […].
I have, to be sure, pointed to examples (such as the Paranormal Activity and Transformers series of films) that illustrate or embody these ideas in a more palpable, accessible form. And I have indicated some of the concrete spaces of transformation — for example, in the so-called “smart TV”:
today the conception of the camera should perhaps be expanded: consider how all processes of digital image rendering, whether in digital film production or simply in computer-based playback, are involved in the same on-the-fly molecular processes through which the video camera can be seen to trace the affective synthesis of images from flux. Unhinged from traditional conceptions and instantiations, post-cinematic cameras are defined precisely by the confusion or indistinction of recording, rendering, and screening devices or instances. In this respect, the “smart TV” becomes the exemplary post-cinematic camera (an uncanny domestic “room” composed of smooth, computational space): it executes microtemporal processes ranging from compression/decompression, artifact suppression, resolution upscaling, aspect-ratio transformation, motion-smoothing image interpolation, and on-the-fly 2D to 3D conversion. Marking a further expansion of the video camera’s artificial affect-gap, the smart TV and the computational processes of image modulation that it performs bring the perceptual and actional capacities of cinema – its receptive camera and projective screening apparatuses – back together in a post-cinematic counterpart to the early Cinématographe, equipped now with an affective density that uncannily parallels our own. We don’t usually think of our screens as cameras, but that’s precisely what smart TVs and computational display devices in fact are: each screening of a (digital or digitized) “film” becomes in fact a re-filming of it, as the smart TV generates millions of original images, more than the original film itself – images unanticipated by the filmmaker and not contained in the source material. To “render” the film computationally is in fact to offer an original rendition of it, never before performed, and hence to re-produce the film through a decidedly post-cinematic camera. 
This production of unanticipated and unanticipatable images renders such devices strangely vibrant, uncanny […].
Recent news about Samsung’s smart TVs eavesdropping on our conversations may have made those devices seem even more uncanny than when I first wrote the lines above, but this, I have to admit, is still a long way from impressing the theory of post-cinematic transformation on my readers in anything like a materially robust or embodied manner — though I am supposedly describing changes in the affective, embodied parameters of life itself.
Hence my recourse to the glitch, and to the practice of making glitches as a means for gaining first-hand knowledge of the transformations I associate with post-cinema. In lieu of another argument, then, I will simply describe the process of making the video at the top of this blog post. It is my belief that going through this process gave me a deeper understanding of what, exactly, I was pointing to in those arguments; by way of extension, furthermore, I suggest that following these steps on your own will similarly provide insight into the mechanisms and materialities of what, following Steven Shaviro, I have come to refer to as post-cinematic affect.
The process starts with a picture — in this case, a jpeg image taken by my wife on an iPhone 4S:
Following this “Glitch Primer” on editing images with text editors, I began experimenting with ImageGlitch, a nice little program that opens the image as editable text in one pane and immediately updates visual changes to the image in another. (The changes themselves can be made with any normal plain-text editor, but ImageGlitch gives you a little more control in the form of immediate feedback.)
I began inserting the word “postnaturalism” into the text at random places, thus modifying the image’s data infrastructure. By continually breaking and unbreaking the image, I began to get a feel for the picture’s underlying structure. Finally, when I had destroyed the image to my liking, I decided that it would be more interesting to capture the process of destruction/deformation, as opposed to a static product resulting from it. Thus, I used ScreenFlow to capture a video of my screen as I undid (using CMD-Z) all the changes I had just made.
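(For readers who want to try this without a dedicated tool: the basic databending operation can be sketched in a few lines of Python. This is my illustration, not ImageGlitch's actual mechanism; the stand-in byte string below takes the place of real JPEG data, and skipping the first few hundred bytes is a rough way to avoid corrupting the header beyond decodability.)

```python
# Databending sketch: splice a text string into a JPEG's byte stream at
# pseudo-random offsets, skipping the start of the file so the header
# usually survives and the image remains (glitchily) decodable.
import random

def glitch_jpeg(data: bytes, word: bytes = b"postnaturalism",
                n_insertions: int = 5, skip_header: int = 512,
                seed: int = 0) -> bytes:
    """Return a copy of `data` with `word` inserted at random offsets."""
    rng = random.Random(seed)          # seeded, so a glitch can be reproduced
    out = bytearray(data)
    for _ in range(n_insertions):
        pos = rng.randrange(skip_header, len(out))
        out[pos:pos] = word            # insert in place, shifting later bytes
    return bytes(out)

# Stand-in for real JPEG data (an actual file would be read with open(..., "rb")):
stand_in = bytes(range(256)) * 8
glitched = glitch_jpeg(stand_in)
```

Because the generator is seeded, the same “random” destruction can be replayed exactly, which is handy when a particular accident turns out to be worth keeping.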
Because I had made an inordinately large number of edits, this step-wise process of reversing them took eight and a half minutes, resulting in a rather long and boring video. So, in Final Cut Pro, I decided to speed things up a little — by 2000%, to be exact. (I also cropped the frame to show only the image, not the text.) I then copied the resulting 24-second video, pasted it back in after the original, and set the copy to play in reverse (so that the visible image goes from a deformed to a restored state and back again).
This was a little better, but still a bit boring. What else could I do with it? One thing that was clearly missing was a soundtrack, so I next considered how I might generate one with databending techniques.
Through blog posts by Paul Hertz and Antonio Roberts, I became aware of the possibility of using the open-source audio editor Audacity to open image files as raw data, thereby converting them into sound files for the purposes of further transformation. Instead of going through with this process of glitching, however, I experimented with opening my original jpeg image in a format that would produce recognizable sound (and not just static). The answer was to open the file with GSM encoding, which gave me an almost musical soundtrack — but a little too high-pitched for my taste. (To be honest, it sounded pretty cool for about two seconds, and then it was annoying to the point of almost hurting.) So I exported the sound as an mp3 file and imported it into my Final Cut Pro project, where I applied a pitch-shifting filter (turning it down 2400 cents, or two octaves).
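(Audacity's “import raw data” trick can be approximated with Python's standard library: wrap a file's bytes in a WAV header and they become playable audio. The sketch below uses plain 8-bit PCM rather than the GSM decoding that produced the quasi-musical result, so raw PCM will typically yield static; the filename in the usage comment is hypothetical.)

```python
# Sonification sketch: reinterpret arbitrary bytes as 8-bit mono PCM audio,
# approximating Audacity's raw-data import with the stdlib wave module.
import wave

def bytes_to_wav(raw: bytes, wav_path: str, rate: int = 8000) -> float:
    """Write `raw` as a mono 8-bit unsigned PCM WAV; return duration in seconds."""
    with wave.open(wav_path, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(1)     # one byte per sample = 8-bit unsigned PCM
        w.setframerate(rate)  # 8 kHz, the sample rate GSM telephony uses
        w.writeframes(raw)
    return len(raw) / rate

# Hypothetical usage:
# duration = bytes_to_wav(open("photo.jpg", "rb").read(), "photo.wav")
```

Every byte of the image becomes one audio sample, so the picture's “data infrastructure” is heard directly, with no intentional act of composition intervening.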
At this point, I could have exported the video and been done with it, but while discovering the wonders of image databending, I ran across some people doing wonderful things with Audacity and video files as well. A tutorial at quart-avant-poing.com was especially helpful, while videos like the following demonstrate the range of possibilities:
So after exporting my video, complete with soundtrack, from Final Cut Pro, I imported the whole thing into Audacity (using A-Law encoding) and exported it back out (again using A-Law encoding), thereby glitching the video further — simply by the act of importing and exporting, i.e. without any intentional act of modification!
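(Why does a bare import/export cycle glitch the file? A-law, like GSM, is a lossy telephony codec: each sample is companded into eight bits, so decoding never quite restores the original bytes. The sketch below illustrates the principle with a coarse uniform quantizer standing in for A-law's logarithmic companding; the real G.711 curves are more involved, but the effect is the same in kind.)

```python
# Lossy round-trip sketch: a coarse quantizer standing in for A-law companding.
# Encoding then decoding returns *almost* the input, and those small,
# systematic errors are exactly what glitches the re-exported video.

def lossy_roundtrip(data: bytes, levels: int = 16) -> bytes:
    """Quantize each byte to `levels` levels, then map back into 0-255."""
    step = 256 // levels
    return bytes((b // step) * step + step // 2 for b in data)

original = bytes(range(256))             # stand-in for raw video bytes
decoded = lossy_roundtrip(original)
assert decoded != original               # the "same" data comes back altered
```

Note that a second round trip changes nothing further: once the data sits on the quantizer's grid it stays there, which is why the glitching happens on the first pass through the codec rather than accumulating indefinitely.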
I opened the video in VLC and was relatively happy with the results; but then I noticed that other video players, such as QuickTime and QuickTime Player 7, as well as video editing software like Final Cut and Premiere Pro, were all showing something different in their rendering of “the same” data! It was at this point that the connection to my theoretical musings on post-cinematic cameras, smart TVs, and the “fundamentally processual” nature of on-the-fly computational playback began to hit home in a very practical way.
As the author of the quart-avant-poing tutorial put it:
For some reasons (cause players work in different ways) you’ll get sometimes differents results while opening your glitched file into VLC or MPC etc… so If you like what you get into VLC and not what you see in MPC, then export it again directly from VLC for example, which will give a solid video file of what you saw in it, and if VLC can open it but crash while re-exporting it in a solid file, don’t hesitate to use video capture program like FRAPS to record what VLC is showing, because sometimes, capturing a glitch in clean file can be seen as the main part of the job cause glitches are like wild animals in a certain way, you can see them, but putting them into a clean video file structure is a mess.
Thus, I experimented with a variety of ways (and codecs) of exporting (or “capturing”) the video I had seen — a video that proved elusive to my attempts to make it repeatable (and hence visible to others). I went through several iterations of video and audio tracks until I was able to approximate what I thought I had seen and heard. At the end of the process, when I had arrived at the version embedded at the top of this post, I felt that I had more thoroughly probed (though without fully “knowing”) the relations between the data infrastructure and the manifest images — relations that I now saw as more thoroughly material than before. And I came, particularly, to appreciate the idea that “glitches are like wild animals.”
Strange beasts indeed! And when you consider that all digital video files are something like latent glitches — or temporarily domesticated animals — you begin to understand what I mean about the instability and revisability of post-cinematic images: in effect, glitches merely show us the truth about digital video as an essentially generative system, magnifying the interstitial spaces that post-cinematic machineries fill in with their own affective materialities, so that though a string of zeroes and ones remains unchanged as it streams through these systems, we can yet never cross the same stream twice…
The S-1 Speculative Sensation Lab at Duke University, with which I have had the honor of collaborating on an exciting set of art/tech/theory projects over the past couple of months, has a new website: http://s-1lab.org
It’s still under development at this point, but you can already get an idea of the kind of work that’s going on in the lab, under the direction of Mark B. N. Hansen and Mark Olson. Check it out!
The 2015 Annual C21 Conference (April 30 – May 2, 2015 at the Center for 21st Century Studies, University of Wisconsin-Milwaukee) will be devoted to the theme “After Extinction,” which can be thought from a variety of related perspectives. As the conference CFP put it:
C21’s conference After Extinction will pursue the question of what it means to come “after” extinction in three different but related senses.
1) In temporal terms, what comes after extinction, not only the event of extinction but also the concept? After we think extinction what comes next? Are there historical models or examples of what comes after? Can these past extinctions measure up to present day events, or do the possible scales on which extinction might operate today make such comparisons incompatible? Is extinction something that only happens belatedly, after there are already species or forms or practices in place, or does extinction work prior to the emergence of species, as generative of the evolution or emergence of any form of life or being? Is extinction terminal or can species return, à la Jurassic Park or European projects to restore the aurochs or Przewalski’s horse? Can dead or dying languages be revitalized?
2) In an epistemological sense, what does it mean for an image, graphic, text, video or film to “take after” the concept of extinction, to mediate it in such a way as to resemble or be mimetic of extinction? What is “after extinction” in the sense that a painting is “after O’Keeffe” or a child “takes after” its parent? In order to be recognized as coming after extinction an event or occasion must be seen as being related to extinction, to have been consequent or emergent from the event of extinction. Thus we mean to explore the premediation of future extinctions in a variety of formal and informal, print, audiovisual, and networked media. What forms of knowledge emerge in such anticipatory pursuits?
3) In spatial terms, what will remain physically after extinction? Extinction is not simply death or absence but a geophysical event that occurs in space. What does it mean to pursue extinction, to go “after” it with technologies and scientific techniques of making extinction legible by premediating its possible occurrence through climate change modeling or pandemic forecasting? How should one act “after extinction” in order to plan for, prevent, or preempt the end of crucial life forms, for example, by establishing seed banks or stockpiling DNA? How does the extinction of one species threaten the lifeblood of the entire biosphere (e.g., the impact of bee colony collapse on particular flora and fauna as well as on human practices like agriculture)? Have new artifacts surfaced either as sentinels or fossils of extinction (e.g., animal carcasses washed up on shore filled with plastic, or mutant plants in irradiated nuclear test fields)? Even if extinction has always been thought of as impacting a larger ecology, has the scale of risk changed in light of the accelerated networks of the 21st century?
I am very happy to have the opportunity to return to Milwaukee this year in order to pursue these questions at what promises to be another great C21 event! My own paper, which was just accepted, will focus on questions of extinction in relation to the concept of post-cinema.
Here is my abstract:
Post-Cinema after Extinction
Shane Denson
In this presentation, I argue that contemporary, digital moving-image media – what some critics have come to see as properly “post-cinematic” media – are related materially, culturally, and conceptually to extinction as their experiential horizon. Materially and technologically, post-cinema emerges as a set of aesthetic responses to the real or imagined extinction of film qua celluloid or to the death of cinema qua institution of shared reception. Significantly, however, such animating visions of technocultural transformation in the wake of the demise of a formerly dominant media regime are linked in complex ways to another experience of extinction: that of the human. That is, post-cinema is involved centrally in the (pre-)mediation of an experience of the world without us – both thematically, e.g. in films about impending or actual extinction events, and formally, in terms of a general “discorrelation” of moving images from the norms of human embodiment that governed classical cinema. Such discorrelation is evidenced in violations of classical continuity principles, for example, but it is anchored more fundamentally in a disruption of phenomenological relations established by the analogue camera. Digital cameras and algorithmic image-processing technologies confront us with images that are no longer calibrated to our embodied senses, and that therefore must partially elude or remain invisible to the human. Anticipating and intimating the eradication of human perception, post-cinema is therefore “after extinction” even before extinction takes place: it envisions and transmits affective clues about a world without us, a world beyond “correlationism,” that arises at the other end of the Anthropocene – or that we inhabit already.
Bibliography:
Denson, Shane, and Julia Leyda, eds. Post-Cinema: Theorizing 21st-Century Film. Sussex: REFRAME Books, forthcoming.
Kara, Selmin. “Beasts of the Digital Wild: Primordigital Cinema and the Question of Origins.” Sequence 1.4 (2014).
Shaviro, Steven. Post-Cinematic Affect. Winchester: Zero Books, 2010.
Sobchack, Vivian. The Address of the Eye: A Phenomenology of Film Experience. Princeton: Princeton UP, 1992.
_____. “The Scene of the Screen: Envisioning Photographic, Cinematic, and Electronic Presence.” Carnal Thoughts: Embodiment and Moving Image Culture. Berkeley and Los Angeles: U of California P, 2004. 135-162.
In case you missed it: you can watch a split-screen video presentation of my digital humanities-oriented talk, “Visualizing Digital Seriality,” which I gave last Friday, January 30, 2015, at Duke University — here (or click the image above).
Click the image above to view the slides and hear the audio track recorded at our January 21, 2015 presentation of Manifest Data, a collaborative art/theory project by the Duke University S-1 Speculative Sensation Lab (directed by Mark B. N. Hansen and Mark Olson). This is an ongoing project, with further elaborations/iterations and presentations/exhibitions in the planning (more soon!).
The presentation took place at The Edge, the new digital and interactive learning space at Duke’s Bostock Library. The presenters (in the order of their appearance) were: Amanda Starling Gould, Luke Caldwell, Shane Denson (me), and David Rambo.
For more info about the project, see here and here — and stay tuned for more!
On January 21, 2015 (3:00-4:00pm), the S-1 Speculative Sensation Lab will be presenting a collaborative artwork titled Manifest Data at The Edge, the new space in Duke University’s Bostock Library devoted to “interdisciplinary, data-driven, digitally reliant or team-based research.”
Manifest Data brings together programmers, 3-D printing specialists, sculptors, and theorists to reflect on the production of value in the digital age, the materiality of information, and the (non-)place of mediated relations.
Code written by Luke Caldwell captures data that would otherwise be leaked as we browse the web, and exploited by the likes of Google and Facebook; in a second step, this data is transformed into a coordinate system that can be mapped as a 3D object. In collaboration with other lab members, artist Libi Striegl prepares and prints out the resulting “data creatures.” Karin Denson has reimagined these forms as beautifully grotesque garden gnomes — thus reappropriating a figure that has become a symbol for 3D printing and a marketing tool for companies like MakerBot. Together, Karin and I have further translated these figures into the hybrid spaces of augmented reality, planting the gnomes strategically and in such a way as to instantiate a very personal system for creating value that — dare we hope? — is immune to corporate cooptation. Lab members David Rambo and Max Symuleski, among others, round out the project with artistic-theoretical statements connecting the project of Manifest Data with a critical questioning of contemporary manifest destiny and a new phrenology for the digital age.
The S-1 Lab is directed by Mark B. N. Hansen and Mark Olson in the Media Arts + Sciences Program at Duke. The Manifest Data project was initiated by Amanda Starling Gould, who has continued to provide it with a guiding aesthetic-theoretical vision.
More information about the presentation, which happens to be the inaugural presentation in the “What I Do With Data” series of the Digital Scholarship Services at Duke, can be found here.