After.Video at Libre Graphics 2016 in London


Recently, I posted about a project called after.video, which contains an augmented reality (AR) glitch/video/image-based theory piece that Karin Denson and I collaborated on. It has now been announced that the official launch of after.video, Volume 1: Assemblages — a “video book” consisting of a paperback book and video elements stored on a Raspberry Pi computer packaged in a VHS case, which will also be available online — will take place at the Libre Graphics Meeting 2016 in London (Sunday, April 17th at 4:20pm).

Interactivity and the “Video Essay”


Recently, I posted a video essay exploring suture, space, and vision in a scene from Nicholas Roeg’s Don’t Look Now (1973). Following my video essay hijab.key, which experimented with dirt-cheap DIY production (namely, using Apple’s Keynote presentation software instead of the much more expensive Adobe Premiere Pro or Final Cut Pro for video authoring), the more recent video, “Don’t Look Now: Paradoxes of Suture,” was also an experiment in using splitscreen techniques for deformative purposes, intended to allow the viewer to alternate between “close” (or focused) and “distant” (or scanning, scattered) viewing modes.

What I have begun to realize, however, is that these videos are pushing at the boundaries of what a “video essay” is or can be. And I think they are doing so in a way that goes beyond the more common dissatisfactions that many of us have with the term “video essay” — namely, the questionable implications of (and formal expectations invoked by) the term “essay,” which apparently fails to describe many videographic forms that are either more poetic in nature or that indeed try to do argumentative or scholarly work, but whose arguments are less linear or explicit than those of traditional essays. These are good reasons, I think, to prefer a term like “videographic criticism” (except that “criticism” is perhaps too narrow) or “videographic scholarship” over the more common “video essay.”

But these recent pieces raise issues that go beyond such concerns, I suggest. They challenge the “videographic” part of the equation as much as, or even more than, the “essayistic” part. Take my “Don’t Look Now” video, which, as I stated, is designed to give the viewer the opportunity to experiment with different ways or modes of looking. By dissecting a scene into its constituent shots and laying them out in a multiscreen format, I wanted to allow the viewer to approach the scene the way one looks at a page of comics; that is, the viewer is free to zoom into a particular panel (which in this case is a looping video clip) and become immersed in the spatiotemporal relations it describes, only then to zoom back out to regard a sequence, row, or page of such panels, before zooming back in to the next one, and so on. Thus, two segments in the recent video consist simply of fifteen looping shots, laid out side by side; they are intended to give the viewer time to experiment with this other, less linear mode of looking.

But the video format itself is linear, which raises problems for any such experiment. For example, how long should such a splitscreen configuration be allowed to run? Any answer will be arbitrary. What is the duration of a page of comics? The question is not nonsensical, as cartoonists can establish rhythms and pacing that help to determine the psychological, structural, or empirical duration of the reading experience, but this will never be an absolute and determinate value, as it is of necessity in the medium of video. That is, the linear format of video forced me to make a decision about the length of these segments, and I chose, somewhat arbitrarily, to give the viewer first a brief (30 sec.) glance at the multiscreen composition, followed later (after a more explicitly argumentative section) by a longer look (2 min. at the end of the 9-minute video). But since the whole point was to enable a non-linear viewing experience (like the non-linear experience of reading comics), any decision involving such a linearization was bound to be unsatisfactory.

One viewer commented, for example:

“I think the theory is interesting but why the lengthy stretches of multi shot with audio? Irritating to the extent that they detract from your message.”

Two important aspects come together in this comment. For one thing, the video is seen as a vehicle for a message, an argument; in short, it is regarded as an “essay.” And since the essayistic impulse combines with the video medium to impose a linear form on something intended as a non-linear and less-than-argumentative experimental setup, it is all too understandable that the “lengthy stretches” were found “irritating” and beside the point. I responded:

“For me the theory [of suture] is less interesting than the reading/viewing technique enabled by the splitscreen. I wanted to give the viewer time to make his or her own connections/alternate readings. I realize that’s at odds with the linear course of an ‘essay,’ and the length of these sections is arbitrary. In the end I may try to switch to an interactive format that would allow the viewer to decide when to move on.”

It was dawning on me, in other words, that by transforming the Keynote presentation into a video essay (using screen recording software), I had indeed found an interesting alternative to expensive video authoring software (which might be particularly valuable for students and other people lacking in funding or institutional support); at the same time, however, I was unduly amputating the interactive affordances of the medium that I was working in. If I wanted to encourage a truly experimental form of vision, then I would need to take advantage of precisely these interactive capabilities.

[Screenshot: “Don’t Look Now: Paradoxes of Suture,” interactive standalone version]

Basically, a Keynote file (like a PowerPoint presentation) is already an interactive file. Usually the interactivity is restricted to the rudimentary “click-to-proceed-to-the-next-slide” variety, but more complex and interesting forms of interaction (or automation) can be programmed in as well. In this case, I set up the presentation to time certain events (automating the move from one slide to the next to create a more “cinematic” sequence, for example), while waiting for user input for others (giving the user as long as they like to experiment with the splitscreen setup before moving on, for example). You can download the autoplaying Keynote file (124 MB) here (or by clicking on the icon below) to see for yourself.

[Icon: Keynote file download]

Of course, only users of Apple computers will be able to view the Keynote file, which is a serious limitation; ideally, an interactive video essay (or whatever we decide to call it) will not only be platform-agnostic but also accessible online. Interestingly, Keynote offers the option to export your slideshow to HTML. The export is a bit buggy (see “known bugs” below), but with some tinkering you can get some decent results. Click here to see a web-based version of the same interactive piece. (Again, however, note the “known bugs” below.)


In any case, this is just a first foray. Keynote is probably not the ultimate tool for this kind of work, and I am actively exploring alternatives at the moment. But it is interesting, to say the least, to test the limits of the software for the purposes of web authoring (hint: the limits are many, and there is a constant need for workarounds). It might especially be of interest to those without any web design experience, or to anyone who wants to quickly put together a prototype of an interactive “essay” — but ultimately, we will need to move on to more sophisticated tools and platforms.

Finally — and foremost — I am interested in getting some feedback on what is and isn’t working in this experiment: both reports of technical glitches and suggestions for making the experience of interacting with the piece more effective and engaging. In addition to the Keynote file and the online version, you can also download the complete HTML package as a single .zip file (66 MB), which will likely run more smoothly on your machine and also allow you to dissect the HTML and JavaScript if you’re so inclined.


However you access the piece, please leave a comment if you notice bugs or have any suggestions!

Known bugs, limitations, and workarounds:

  • Keynote file only runs on Mac (workaround: access the HTML version)
  • Buggy browser support for HTML version:
    • Horrible font rendering in Firefox (“Arial Black” rendered as a serif font)
    • Offline playback not supported in Google Chrome
    • Best support on Apple Safari (Internet Explorer not tested)
  • Haven’t yet found the sweet spot for video compression (loading times may be too long for smooth online playback); one possible recipe to try is sketched below
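For anyone who wants to experiment with the compression, here is one possible starting point. This is a hedged sketch, not a setting I have tested against this piece; it assumes ffmpeg is installed, and the file names are hypothetical.

```python
# Re-encode a clip as web-friendly H.264 by calling ffmpeg from Python.
# A sketch only: tune CRF and preset to taste.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "clip_original.mov",
        "-c:v", "libx264",          # widely supported video codec
        "-crf", "28",               # higher CRF = smaller file, lower quality
        "-preset", "slow",          # slower encode, better compression
        "-movflags", "+faststart",  # playback can begin before the full download
        "-c:a", "aac", "-b:a", "96k",  # modest audio bitrate for the looping clips
        "clip_web.mp4",
    ],
    check=True,
)
```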

Don’t Look Now: Paradoxes of Suture

An exploration of suture, space, and vision in Nicholas Roeg’s Don’t Look Now (1973), this video essay also experiments with close and distant modes of viewing. Made in Apple Keynote, screen recording done with ScreenFlow, and just a few finishing touches added with Final Cut Pro.

UPDATE: See here for an interactive version and for some more general reflections on interactivity and the “video essay.”

Coming Soon: after.video


I just saw the official announcement for this exciting project, which I’m proud to be a part of (with a collaborative piece I made with Karin Denson).

after.video, Volume 1: Assemblages is a “video book” — a paperback book and video stored on a Raspberry Pi computer packaged in a VHS case. It will also be available as online video and book PDF download.

Edited by Oliver Lerone Schultz, Adnan Hadzi, Pablo de Soto, and Laila Shereen Sakr (VJ Um Amel), it will be published this year (2016) by Open Humanities Press.

The piece I developed with Karin is a theory/practice hybrid called “Scannable Images: Materialities of Post-Cinema after Video.” It involves digital video, databending/datamoshing, generative text, animated gifs, and augmented reality components, in addition to several paintings in acrylic (not included in the video book).

Here’s some more info about the book from the OpenMute Press site:

Theorising a World of Video

after.video realizes the world through moving images and reassembles theory after video. Extending the formats of ‘theory’, it reflects a new situation in which world and video have grown together.

This is an edited collection of assembled and annotated video essays living in two instantiations: an online version – located on the web at http://after.video/assemblages, and an offline version – stored on a server inside a VHS (Video Home System) case. This is both a digital and analog object: manifested, in a scholarly gesture, as a ‘video book’.

We hope that different tribes — from DIY hackercamps and medialabs, to unsatisfied academic visionaries, avantgarde-mesh-videographers and independent media collectives, even iTV and home-cinema addicted sofasurfers — will cherish this contribution to an ever more fragmented, ever more colorful spectrum of video-culture, consumption and appropriation…

Table of Contents

Control Societies 
Peter Woodbridge + Gary Hall + Clare Birchall
Scannable Images: Materialities of Post-Cinema after Video 
Karin + Shane Denson
Isistanbul 
Serhat Köksal
The Crying Selfie
Rózsa Zita Farkas
Guided Meditation 
Deborah Ligorio
Contingent Feminist Tacticks for Working with Machines 
Lucia Egaña Rojas
Capturing the Ephemeral and Contestational 
Eric Kluitenberg
Surveillance Assemblies 
Adnan Hadzi
You Spin me Round – Full Circle 
Andreas Treske

Editorial Collective

Oliver Lerone Schultz
Adnan Hadzi
Pablo de Soto
Laila Shereen Sakr (VJ Um Amel)

Tech Team

Jacob Friedman – Open Hypervideo Programmer
Anton Galanopoulos – Micro-Computer Programmer

Producers

Adnan Hadzi – OHP Managing Producer
Jacob Friedman – OHV Format Development & Interface Design
Joscha Jäger – OHV Format Development & Interface Design
Oliver Lerone Schultz – Coordination CDC, Video Vortex #9, OHP

Cover artwork and booklet design: Jacob Friedman
Copyright: the authors
Licence: after.video is dual licensed under the terms of the MIT license and the GPL3
http://www.gnu.org/licenses/gpl-3.0.html
Language: English
Assembly On-demand
OpenMute Press

Acknowledgements

Co-Initiated + Funded by

Art + Civic Media as part of Centre for Digital Cultures @ Leuphana University.
Art + Civic Media was funded through Innovation Incubator, a major EU project financed by the European Regional Development Fund (ERDF) and the federal state of Lower Saxony.

Thanks to

Joscha Jäger – Open Hypervideo (and making this an open licensed capsule!)
Timon Beyes – Centre for Digital Cultures, Lüneburg
Mathias Fuchs – Centre for Digital Cultures, Lüneburg
Gary Hall – School of Art and Design, Coventry University
Simon Worthington – OpenMute

http://www.metamute.org/shop/openmute-press/after.video

Speculative Data: Full Text, MLA 2016 #WeirdDH

[Slide 1]

Below you’ll find the full text of my talk from the Weird DH panel organized by Mark Sample at the 2016 MLA conference in Austin, Texas. Other speakers on the panel included Jeremy Justus, Micki Kaufman, and Kim Knight.

***

Speculative Data: Post-Empirical Approaches to the “Datafication” of Affect and Activity

Shane Denson, Duke University

A common critique of the digital humanities questions the relevance (or propriety) of quantitative, data-based methods for the study of literature and culture; in its most extreme form, this type of criticism insinuates a complicity between DH and the neoliberal techno-culture that turns all human activity, if not all of life itself, into “big data” to be mined for profit. Now, it may sound from this description that I am simply setting up a strawman to knock down, so I should admit up front that I am not wholly unsympathetic to the critique of datafication. But I do want to complicate things a bit. Specifically, I want to draw on recent reconceptions of DH as “deformed humanities” – as an aesthetically and politically invested field of “deformance”-based practice – and describe some ways in which a decidedly “weird” DH can avail itself of data collection in order to interrogate and critique “datafication” itself.

[Slide 2]

My focus is on work conducted in and around Duke University’s S-1: Speculative Sensation Lab, where literary scholars, media theorists, artists, and “makers” of all sorts collaborate on projects that blur the boundaries between art and digital scholarship. The S-1 Lab, co-directed by Mark Hansen and Mark Olson, experiments with biometric and environmental sensing technologies to expand our access to sensory experience beyond the five senses. Much of our work involves making “things to think with,” i.e. experimental “set-ups” designed to generate theoretical and aesthetic insight and to focus our mediated sensory apparatus on the conditions of mediation itself. Harnessing digital technologies for the work of media theory, this experimentation can rightly be classed, alongside such practices as “critical making,” in the broad space of the digital humanities. But due to their emphatically self-reflexive nature, these experiments challenge borders between theory and practice, scholarship and art, and must therefore be qualified, following Mark Sample, as decidedly “weird DH.”

[Slide 3]

One such project, Manifest Data, uses a piece of “benevolent spyware” that collects and parses data about personal Internet usage in such a way as to produce 3D-printable sculptural objects, thus giving form to data and reclaiming its personal value from corporate cooptation. In a way that is both symbolic and material, this project counters the invisibility and “naturalness” of mechanisms by which companies like Google and Facebook expropriate value from the data we produce. Through a series of translations between the digital and the physical—through a multi-stage process of collecting, sculpting, resculpting, and manifesting data in virtual, physical, and augmented spaces—the project highlights the materiality of the interface between human and nonhuman agencies in an increasingly datafied field of activity. (If you’re interested in this project, which involves “data portraits” based on users’ online activity and even some weird data-driven garden gnomes designed to dispel the bad spirits of digital capital, you can read more about it in the latest issue of Hyperrhiz.)
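To give a rough sense of the pipeline, here is a schematic sketch of the general idea (illustrative only, not the lab’s actual code, which is included in the Manifest Data kit; the file names here are hypothetical): browsing data goes in, printable geometry comes out.

```python
# Tally visits per domain from a (hypothetical) CSV export of browser
# history, one full URL per row, then emit a simple OpenSCAD model:
# one stacked cylinder per domain, radius encoding visit count.
import csv
from collections import Counter
from urllib.parse import urlparse

visits = Counter()
with open("history_export.csv", newline="") as f:
    for row in csv.reader(f):
        if row:
            visits[urlparse(row[0]).netloc] += 1

with open("data_portrait.scad", "w") as out:
    for i, (domain, count) in enumerate(visits.most_common(10)):
        out.write(f"// {domain}: {count} visits\n")
        out.write(
            f"translate([0, 0, {i * 4}]) "
            f"cylinder(h=4, r={2 + count ** 0.5:.2f});\n"
        )
```

The resulting .scad file can be rendered and exported to STL for printing; the real project’s geometries are, of course, far more interesting than stacked cylinders.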

[Slide 4]

Another ongoing project, about which I will say more in a moment, uses data collected through (scientifically questionable) biofeedback devices to perform realtime collective transformations of audiovisual materials, opening theoretical notions of what Steven Shaviro calls “post-cinematic affect” to robustly material, media-archaeological, and aesthetic investigations.

[Slide 5]

These and other projects, I contend, point the way towards a truly “weird DH” that is reflexive enough to suspect its own data-driven methods but not paralyzed into inactivity.

Weird DH and/as Digital Critical (Media) Studies:

So I’m trying to position these projects as a form of weird digital critical (media) studies, designed to enact and reflect (in increasingly self-reflexive ways) on the use of digital tools and processes for the interrogation of the material, cultural, and medial parameters of life in digital environments.

[Slide 6]

Using digital techniques to reflect on the affordances and limitations of digital media and interfaces, these projects are close in spirit to new media art, but they also resonate with practices and theories of “digital rhetoric,” as described by Doug Eyman, with Gregory Ulmer’s “electracy,” or with Casey Boyle’s posthuman rhetoric of multistability, which celebrates the rhetorical affordances of digital glitches in exposing the limitations of computational media within a broader, interagential relational field that includes both humans and nonhumans. In short, these projects enact what we might call, following Stanley Cavell, the “automatisms” of digital media – the generative affordances and limitations that are constantly produced, reproduced, and potentially transformed or “deformed” in creative engagements with media. Digital tools are used in such a way as to problematize their very instrumentality, hence moving towards a post-empirical or post-positivistic form of datafication as much as towards a post-instrumental digitality.

[Slide 7]

Algorithmic Nickelodeon / Datafied Attention:

My key example is a project tentatively called the “algorithmic nickelodeon.” Here we use consumer-grade EEG headsets to interrogate the media-technical construction and capture of human attention, and thus to complicate datafication by subjecting it to self-reflexive, speculative, and media-archaeological operations. The devices in question cost about $100 and are marketed as tools for improving concentration, attention, and memory. The headset measures a variety of brainwave activity and, by means of a proprietary algorithm, computes values for “attention” and “meditation” that can be tracked and, with the help of software applications, trained and supposedly optimized. In the S-1 Lab, we have sought to tap into these processes not simply to criticize the scientifically dubious nature of these claims but to probe and better understand the automatisms and interfaces at work here and in media of attention more generally. Specifically, we have designed a film- and media-theoretical application of the apparatus, which allows us to think early and contemporary moving images together, to conceive pre- and post-cinema in terms of their common deviations from the attention economy of classical cinema, and to reflect more broadly on the technological-material reorganizations of attention involved in media change. This is an emphatically experimental (that is, speculative, post-positivistic) application, and it involves a sort of post-cinematic reenactment of early film’s viewing situations in the context of traveling shows, vaudeville theaters, and nickelodeons. With the help of a Python script written by lab member Luke Caldwell, a group of viewers wearing NeuroSky EEG devices influences the playback of video clips in real time, for example changing the speed of a video or the size of the projected image in response to changes in attention as registered through brain-wave activity.
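Luke’s script isn’t reproduced here, but the basic mapping is easy to sketch. The following is an illustrative approximation only (not the lab’s code); it assumes NeuroSky’s ThinkGear Connector is serving newline-delimited JSON on its default local socket, and it uses OpenCV for bare-bones playback. The clip name is hypothetical.

```python
# Illustrative sketch: map a NeuroSky "attention" value (0-100) to
# video playback speed. A real script would read the socket on a
# background thread; the inline read here keeps the sketch short.
import json
import socket

import cv2

sock = socket.create_connection(("127.0.0.1", 13854))  # ThinkGear Connector default
sock.sendall(b'{"enableRawOutput": false, "format": "Json"}\n')
stream = sock.makefile("r")

def read_attention():
    """Return the latest eSense attention value, or None if unparseable."""
    try:
        return json.loads(stream.readline()).get("eSense", {}).get("attention")
    except (ValueError, AttributeError):
        return None

cap = cv2.VideoCapture("clip.mp4")
attention = 50  # neutral starting value
while True:
    ok, frame = cap.read()
    if not ok:  # loop the clip, nickelodeon-style
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        continue
    value = read_attention()
    if value is not None:
        attention = value
    # Lower attention -> shorter frame delay -> faster playback, like the
    # early projectionist cranking through a plodding passage.
    delay_ms = int(10 + 40 * attention / 100)
    cv2.imshow("algorithmic nickelodeon", frame)
    if cv2.waitKey(delay_ms) & 0xFF == ord("q"):
        break
```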

At the center of the experimentation is the fact of “time-axis manipulation,” which Friedrich Kittler highlights as one of the truly novel affordances of technical media, like the phonograph and cinema, that arose around 1900 and marked, for him, a radical departure from the symbolic realms of pre-technical arts and literature. Now it became possible to inscribe “reality itself,” or to record a spectrum of frequencies (like sound and light) directly, unfiltered through alphabetic writing; and it became possible as well to manipulate the speed or even playback direction of this reality.

[Slide 9]

Recall that the cinema’s standard of 24 fps only solidified and became obligatory with the introduction of sound, as a solution to a concrete problem introduced by the addition of a sonic register to filmic images. Before the late 1920s, and especially in the first two decades of film, there was a great deal of variability in projection speed, and this was “a feature, not a bug” of the early cinematic setup. Kittler writes: “standardization is always upper management’s escape from technological possibilities. In serious matters such as test procedures or mass entertainment, TAM [time-axis manipulation] remains triumphant. […] frequency modulation is indeed the technological correlative of attention” (Gramophone, Film, Typewriter 34-35). Kittler’s pomp aside, his statement highlights a significant fact about the early film experience: early projectionists, who were simultaneously film editors and entertainers in their own right, would modulate the speed of their hand-cranked apparatuses in response to their audience’s interest and attention. If the audience was bored by a plodding bit of exposition, the projectionist could speed it up to get to a more exciting part of the movie, for example. Crucially, though, the early projectionist could only respond to the outward signs of the audience’s interest, excitement, or attention – as embodied, for example, in a yawn, a boo, or a cheer.

[Slide 10]

But with the help of an EEG, we can read human attention – or some construction of “attention” – directly, even in cases where there is no outward or voluntary expression of it, and even without its conscious registration. By correlating the speed of projection to these inward and involuntary movements of the audience’s neurological apparatus, such that low attention levels cause the images to speed up or slow down, attention is rendered visible and, to a certain extent, opened to conscious and collective efforts to manipulate it and the frequency of images now indexed to it.

According to Hugo Münsterberg, who wrote one of the first book-length works of film theory in 1916, cinema’s images already embody, externalize, and make visible the faculties of human psychology; “attention,” for example, is said to be embodied by the close-up. With our EEG setup, we can literalize Münsterberg’s claim by correlating higher attention levels with a greater zoom factor applied to the projected image. If the audience pays attention, the image grows; if attention flags, the image shrinks. But this literalization raises more questions than it answers, it would seem. On the one hand, it participates in a process of “datafication,” turning brain wave patterns into a stream of data called “attention,” whose relation to attention in any ordinary sense is altogether unclear. On the other hand, this datafication simultaneously opens up a space of affective or aesthetic experience in which the problematic nature of the experimental “set-up” announces itself to us in a self-reflexive doubling: we realize suddenly that “it’s a setup”; “we’ve been framed” – first by the cinema’s construction of attentive spectators and now by this digital apparatus that treats attention as an algorithmically computed value.
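In code, this Münsterberg-inspired mapping amounts to little more than a scaling function layered onto the same playback loop sketched above (again, an illustration rather than the lab’s implementation):

```python
# Higher "attention" scales the projected frame up; flagging attention
# shrinks it. Illustrative only; the scale bounds are arbitrary.
import cv2

def zoom_frame(frame, attention, min_scale=0.5, max_scale=1.5):
    """Scale a frame by a factor interpolated from attention (0-100)."""
    scale = min_scale + (max_scale - min_scale) * attention / 100
    h, w = frame.shape[:2]
    return cv2.resize(frame, (int(w * scale), int(h * scale)))
```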

So in a way, the apparatus is a pedagogical/didactic tool: it not only allows us to reenact (in a highly transformed manner) the experience of early cinema, but it also helps us to think about the construction of “attention” itself in technical apparatuses both then and now. In addition to this function, it also generates a lot of data that can indeed be subjected to statistical analysis, correlation, and visualization, and that might be marshaled in arguments about the comparative medial impacts or effects of various media regimes. Our point, however, remains more critical, and deeply skeptical of any positivistic understanding of this data. The technocrats of the advertising industry, the true inheritors of Münsterberg the industrial psychologist, are in any case much more effective at instrumentalizing attention and reducing it to a psychotechnical variable. With a sufficiently “weird” DH approach, we hope to stimulate a more speculative, non-positivistic, and hence post-empirical relation to such datafication. Remitting contemporary attention procedures to the early establishment of what Kittler refers to as the “link between physiology and technology” (73) upon which modern entertainment media are built, this weird DH aims not only to explore the current transformations of affect, attention, and agency – that is, to study their reconfigurations – but also potentially to empower media users to influence such configurations, if only on a small scale, rather than leave them completely up to the technocrats.

The Gnomes Are Back: Business cARd 2.0

[Image: gnome cARd]

Ever since our old AR platform was bought out and shut down by Apple, the “data gnomes” that Karin and I developed in conjunction with the Duke S-1: Speculative Sensation Lab’s “Manifest Data” project have been bumbling about in digital limbo, banished to 404 hell. So today I finally made the first steps in migrating our beloved creatures over to a new AR platform (Wikitude), where they’re starting to feel at home. While I was at it, I went ahead and reprogrammed my business card:

[Photo: business card]

The QR code on the front now redirects the browser to shanedenson.com, while the AR content on the back side is made visible with the Wikitude app (free on iOS or Android) — just search for “Shane Denson” and point your phone/tablet’s camera at the image below:

[Photo: business card, back (AR target image)]

(In case you’re wondering what this is: it’s a “data portrait” generated from my Internet browsing behavior. You can make your own with the code included in the S-1 Lab’s Manifest Data kit.)

New Website: shanedenson.com

[Screenshot: shanedenson.com]

Some of you may have noticed a new tab at the top right of this page, labeled “shanedenson.com.” Surprisingly enough, that’s also the name of my new website: shanedenson.com. In case you’re wondering, though, this blog is not going anywhere. The website, which is pretty sparse at the moment, provides a more general landing site, while the blog will continue to provide updates on current and upcoming events and issues of more topical relevance — it will continue, in other words, to do what blogs do. The website, on the other hand, offers me the chance to do some things that aren’t supported on a WordPress site. I plan to add some more interesting rubrics soon, so stop by occasionally and check it out.

DEMO Video: Post-Cinema: 24fps@44100Hz

As Karin posted yesterday (and as I reblogged this morning), our collaborative artwork Post-Cinema: 24fps@44100Hz will be on display (and on sale) from January 15–23 at The Carrack Modern Art gallery in Durham, NC, as part of their annual Winter Community Show.

Exhibiting augmented reality pieces always brings with it a variety of challenges — including technical ones and, above all, the need to inform viewers about how to use the work. So, for this occasion, I’ve put together this brief demo video explaining the piece and how to view it. The video will be displayed on a digital picture frame mounted on the wall below the painting. Hopefully it will be eye-catching enough to attract passersby and will effectively communicate the essential information about the process and use of the work.