Speculative Data: Full Text, MLA 2016 #WeirdDH

[Slide 1]

Below you’ll find the full text of my talk from the Weird DH panel organized by Mark Sample at the 2016 MLA conference in Austin, Texas. Other speakers on the panel included Jeremy Justus, Micki Kaufman, and Kim Knight.

***

Speculative Data: Post-Empirical Approaches to the “Datafication” of Affect and Activity

Shane Denson, Duke University

A common critique of the digital humanities questions the relevance (or propriety) of quantitative, data-based methods for the study of literature and culture; in its most extreme form, this type of criticism insinuates a complicity between DH and the neoliberal techno-culture that turns all human activity, if not all of life itself, into “big data” to be mined for profit. Now, it may sound from this description that I am simply setting up a strawman to knock down, so I should admit up front that I am not wholly unsympathetic to the critique of datafication. But I do want to complicate things a bit. Specifically, I want to draw on recent reconceptions of DH as “deformed humanities” – as an aesthetically and politically invested field of “deformance”-based practice – and describe some ways in which a decidedly “weird” DH can avail itself of data collection in order to interrogate and critique “datafication” itself.

[Slide 2]

My focus is on work conducted in and around Duke University’s S-1: Speculative Sensation Lab, where literary scholars, media theorists, artists, and “makers” of all sorts collaborate on projects that blur the boundaries between art and digital scholarship. The S-1 Lab, co-directed by Mark Hansen and Mark Olson, experiments with biometric and environmental sensing technologies to expand our access to sensory experience beyond the five senses. Much of our work involves making “things to think with,” i.e. experimental “set-ups” designed to generate theoretical and aesthetic insight and to focus our mediated sensory apparatus on the conditions of mediation itself. Harnessing digital technologies for the work of media theory, this experimentation can rightly be classed, alongside such practices as “critical making,” in the broad space of the digital humanities. But due to their emphatically self-reflexive nature, these experiments challenge borders between theory and practice, scholarship and art, and must therefore be qualified, following Mark Sample, as decidedly “weird DH.”

[Slide 3]

One such project, Manifest Data, uses a piece of “benevolent spyware” that collects and parses data about personal Internet usage in such a way as to produce 3D-printable sculptural objects, thus giving form to data and reclaiming its personal value from corporate cooptation. In a way that is both symbolic and material, this project counters the invisibility and “naturalness” of mechanisms by which companies like Google and Facebook expropriate value from the data we produce. Through a series of translations between the digital and the physical—through a multi-stage process of collecting, sculpting, resculpting, and manifesting data in virtual, physical, and augmented spaces—the project highlights the materiality of the interface between human and nonhuman agencies in an increasingly datafied field of activity. (If you’re interested in this project, which involves “data portraits” based on users’ online activity and even some weird data-driven garden gnomes designed to dispel the bad spirits of digital capital, you can read more about it in the latest issue of Hyperrhiz.)
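
(As a purely illustrative sketch of that kind of pipeline, and emphatically not the Manifest Data kit itself, one might imagine a short script that turns a hypothetical CSV of per-domain visit counts into an OpenSCAD model ready for 3D printing; all file names and the mapping from visits to geometry are placeholder assumptions.)

```python
# Illustrative sketch only -- not the Manifest Data kit (see Hyperrhiz 13 for
# the lab's actual scripts and documentation). Reads a hypothetical
# "history.csv" of (domain, visit_count) rows and writes an OpenSCAD model in
# which each domain becomes a column whose height reflects its share of visits.
import csv
import math


def visits_to_scad(csv_path="history.csv", scad_path="data_sculpture.scad"):
    with open(csv_path, newline="") as f:
        rows = [(domain, int(count)) for domain, count in csv.reader(f)]
    if not rows:
        raise SystemExit("no browsing data found")

    total = sum(count for _, count in rows) or 1
    shapes = ["cylinder(h=3, r=40);  // base plate"]
    for i, (domain, count) in enumerate(rows):
        height = 5 + 60 * count / total         # more visits -> taller column
        angle = 2 * math.pi * i / len(rows)     # arrange the columns in a ring
        x, y = 30 * math.cos(angle), 30 * math.sin(angle)
        shapes.append(
            f"translate([{x:.1f}, {y:.1f}, 0]) cylinder(h={height:.1f}, r=4);  // {domain}"
        )

    with open(scad_path, "w") as out:
        out.write("union() {\n")
        out.writelines(f"  {s}\n" for s in shapes)
        out.write("}\n")


if __name__ == "__main__":
    visits_to_scad()
```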

[Slide 4]

Another ongoing project, about which I will say more in a moment, uses data collected through (scientifically questionable) biofeedback devices to perform realtime collective transformations of audiovisual materials, opening theoretical notions of what Steven Shaviro calls “post-cinematic affect” to robustly material, media-archaeological, and aesthetic investigations.

[Slide 5]

These and other projects, I contend, point the way towards a truly “weird DH” that is reflexive enough to suspect its own data-driven methods but not paralyzed into inactivity.

Weird DH and/as Digital Critical (Media) Studies:

So I’m trying to position these projects as a form of weird digital critical (media) studies, designed to enact, and to reflect on (in increasingly self-reflexive ways), the use of digital tools and processes for the interrogation of the material, cultural, and medial parameters of life in digital environments.

[Slide 6]

Using digital techniques to reflect on the affordances and limitations of digital media and interfaces, these projects are close in spirit to new media art, but they are also apposite to practices and theories of “digital rhetoric,” as described by Doug Eyman, to Gregory Ulmer’s “electracy,” or to Casey Boyle’s posthuman rhetoric of multistability, which celebrates the rhetorical affordances of digital glitches in exposing the limits of computational media within a broader interagential, relational field that includes both humans and nonhumans. In short, these projects enact what we might call, following Stanley Cavell, the “automatisms” of digital media – the generative affordances and limitations that are constantly produced, reproduced, and potentially transformed or “deformed” in creative engagements with media. Digital tools are used in such a way as to problematize their very instrumentality, hence moving towards a post-empirical or post-positivistic form of datafication as much as towards a post-instrumental digitality.

[Slide 7]

Algorithmic Nickelodeon / Datafied Attention:

My key example is a project tentatively called the “algorithmic nickelodeon.” Here we use consumer-grade EEG headsets to interrogate the media-technical construction and capture of human attention, and thus to complicate datafication by subjecting it to self-reflexive, speculative, and media-archaeological operations. The devices in question cost about $100 and are marketed as tools for improving concentration, attention, and memory. The headset measures a variety of brainwave activity and, by means of a proprietary algorithm, computes values for “attention” and “meditation” that can be tracked and, with the help of software applications, trained and supposedly optimized. In the S-1 Lab, we have sought to tap into these processes not simply to criticize the scientifically dubious nature of these claims but to probe and better understand the automatisms and interfaces at work here and in media of attention more generally. Specifically, we have designed a film- and media-theoretical application of the apparatus, which allows us to think early and contemporary moving images together, to conceive pre- and post-cinema in terms of their common deviations from the attention economy of classical cinema, and to reflect more broadly on the technological-material reorganizations of attention involved in media change. This is an emphatically experimental (that is, speculative, post-positivistic) application, and it involves a sort of post-cinematic reenactment of early film’s viewing situations in the context of traveling shows, vaudeville theaters, and nickelodeons. With the help of a Python script written by lab member Luke Caldwell, a group of viewers wearing the NeuroSky EEG devices influence the playback of video clips in real time, for example changing the speed of a video or the size of the projected image in response to changes in attention as registered through brain-wave activity.
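
(To give a rough sense of how such a setup might be wired together, here is a minimal, hypothetical sketch rather than Luke Caldwell’s actual script. It assumes a NeuroSky headset paired through the ThinkGear Connector, which publishes the computed “attention” value as JSON over a local socket, and the python-vlc bindings for playback; the file name and the particular attention-to-speed and attention-to-scale mappings are placeholders, and the group configuration described above would additionally average readings across several headsets.)

```python
# Hypothetical sketch, not the S-1 Lab's actual script: couple a NeuroSky
# "attention" reading (0-100) to playback speed and image scale.
# Assumes the ThinkGear Connector is running locally (default port 13854),
# that python-vlc is installed, and that a file named "clip.mp4" exists.
import json
import socket

import vlc  # pip install python-vlc

THINKGEAR_ADDR = ("127.0.0.1", 13854)


def attention_values(sock):
    """Yield eSense 'attention' values parsed from the ThinkGear JSON stream."""
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            return
        buf += chunk
        *messages, buf = buf.split(b"\r")  # each JSON message ends with \r
        for raw in messages:
            try:
                data = json.loads(raw)
            except ValueError:
                continue
            attention = data.get("eSense", {}).get("attention")
            if attention is not None:
                yield attention


def run(video_path="clip.mp4"):
    player = vlc.MediaPlayer(video_path)
    player.play()

    sock = socket.create_connection(THINKGEAR_ADDR)
    # Ask the connector for parsed JSON output (no raw EEG samples).
    sock.sendall(b'{"enableRawOutput": false, "format": "Json"}\n')

    for attention in attention_values(sock):
        # One arbitrary mapping among many: flagging attention speeds the
        # clip up, while sustained attention enlarges ("zooms") the image.
        player.set_rate(2.0 - attention / 100.0)         # 2.0x at 0, 1.0x at 100
        player.video_set_scale(0.5 + attention / 100.0)  # image grows with attention


if __name__ == "__main__":
    run()
```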

At the center of the experimentation is the fact of “time-axis manipulation,” which Friedrich Kittler highlights as one of the truly novel affordances of technical media, like the phonograph and cinema, that arose around 1900 and marked, for him, a radical departure from the symbolic realms of pre-technical arts and literature. Now it became possible to inscribe “reality itself,” or to record a spectrum of frequencies (like sound and light) directly, unfiltered through alphabetic writing; and it became possible as well to manipulate the speed or even playback direction of this reality.

[Slide 9]

Recall that the cinema’s standard of 24 fps only solidified and became obligatory with the introduction of sound, as a solution to a concrete problem introduced by the addition of a sonic register to filmic images. Before the late 1920s, and especially in the first two decades of film, there was a great deal of variability in projection speed, and this was “a feature, not a bug” of the early cinematic setup. Kittler writes: “standardization is always upper management’s escape from technological possibilities. In serious matters such as test procedures or mass entertainment, TAM [time-axis manipulation] remains triumphant. […] frequency modulation is indeed the technological correlative of attention” (Gramophone, Film, Typewriter 34-35). Kittler’s pomp aside, his statement highlights a significant fact about the early film experience: Early projectionists, who were simultaneously film editors and entertainers in their own right, would modulate the speed of their hand-cranked apparatuses in response to their audience’s interest and attention. If the audience was bored by a plodding bit of exposition, the projectionist could speed it up to get to a more exciting part of the movie, for example. Crucially, though: the early projectionist could only respond to the outward signs of the audience’s interest, excitement, or attention – as embodied, for example, in a yawn, a boo, or a cheer.

[Slide 10]

But with the help of an EEG, we can read human attention – or some construction of “attention” – directly, even in cases where there is no outward or voluntary expression of it, and even without its conscious registration. When the speed of projection is correlated to these inward and involuntary movements of the audience’s neurological apparatus, such that low attention levels cause the images to speed up or slow down, attention is rendered visible and, to a certain extent, opened to conscious and collective efforts to manipulate it and the frequency of images now indexed to it.

According to Hugo Münsterberg, who wrote one of the first book-length works of film theory in 1916, cinema’s images already embody, externalize, and make visible the faculties of human psychology; “attention,” for example, is said to be embodied by the close-up. With our EEG setup, we can literalize Münsterberg’s claim by correlating higher attention levels with a greater zoom factor applied to the projected image. If the audience pays attention, the image grows; if attention flags, the image shrinks. But this literalization raises more questions than it answers. On the one hand, it participates in a process of “datafication,” turning brain-wave patterns into a stream of data called “attention” whose relation to attention in any ordinary sense is altogether unclear. On the other hand, this datafication simultaneously opens up a space of affective or aesthetic experience in which the problematic nature of the experimental “set-up” announces itself to us in a self-reflexive doubling: we realize suddenly that “it’s a setup”; “we’ve been framed” – first by the cinema’s construction of attentive spectators and now by a digital apparatus that treats attention as an algorithmically computed value.

So in a way, the apparatus is a pedagogical/didactic tool: it not only allows us to reenact (in a highly transformed manner) the experience of early cinema, but it also helps us to think about the construction of “attention” itself in technical apparatuses both then and now. In addition to this function, it also generates a lot of data that can indeed be subjected to statistical analysis, correlation, and visualization, and that might be marshaled in arguments about the comparative medial impacts or effects of various media regimes. Our point, however, remains more critical and deeply skeptical of any positivistic understanding of this data. The technocrats of the advertising industry, the true inheritors of Münsterberg the industrial psychologist, are in any case much more effective at instrumentalizing attention and reducing it to a psychotechnical variable. With a sufficiently “weird” DH approach, we hope to stimulate a more speculative, non-positivistic, and hence post-empirical relation to such datafication. Remitting contemporary attention procedures to the early establishment of what Kittler refers to as the “link between physiology and technology” (73) upon which modern entertainment media are built, this weird DH aims not only to explore the current transformations of affect, attention, and agency – that is, to study their reconfigurations – but also potentially to empower media users to influence such configurations, if only on a small scale, rather than leave it completely up to the technocrats.

Speculative Data #WeirdDH #MLA16 #S107

[Image: storify-mla16]

The “Weird DH” panel at MLA 2016 in Austin, chaired by Mark Sample, was great fun, and it generated quite a bit of discussion, both online and off. Luckily, Eileen Clancy preserved the Twitter discussion in a Storify (https://storify.com/clancynewyork/weird-dh), so in case you couldn’t make it out last week, you can still catch up on some of the topics and reactions to the talks by Jeremy Justus, Micki Kaufman, Kim Knight, and myself.

(If I get around to it, I will also be posting the full text of my talk very soon.)

On the ‘Parergodic’ Work of Seriality in Interactive Digital Environments

 

[Slide 2]

Here is the full text of my talk from the 2015 conference of the Society for Literature, Science, and the Arts — part of the panel on Video Games’ Extra-Ludic Echoes (organized by David Rambo, and featuring talks by Patrick LeMieux and Stephanie Boluk, David Rambo, and myself).

On the “Parergodic” Work of Seriality in Interactive Digital Environments

Shane Denson (SLSA 2015, Houston, Nov. 15, 2015)

I want to suggest that popular serialized figures function as indexes of historical and media-technical changes, helping us to assess the material and cultural transformations that such figures chart in the process of their serial unfoldings. This function becomes especially pronounced when serial figures move between and among various media. By shifting from a medial “inside” to an “outside” or an in-between, serial figures come to function as higher-order media, turning first-order apparatic media like film and television inside out and exposing them as reversible frames. But with the rise of interactive, networked, and convergent digital media environments, this outside space is called into question, and the medial logic of serial figures is transformed in significant ways. This transformation, I suggest, is not unrelated to the blurring of relations between work and play, between paid labor and the incidental work culled from our entertainment practices. In the age of transmedia, serial figures move flexibly between media much like we move between projects and contexts of consumption and production. The dynamics of border-crossing that characterized earlier serial figures have now been re-functionalized in accordance with the ergodic work of navigating computational networks—in accordance, that is, with work and network forms that frame all aspects of contemporary life.

I will come back to this argument in a moment and elaborate with reference to Batman and his movement from comics to film and video games, but first let me say a few words about “the work of seriality.” In the nineteenth century, production work became increasingly serialized as it was fragmented and mechanized in factories, culminating in the assembly line, the paradigmatic site of serialized production, and eventually leading to digital automation and process control. Ultimately, of course, this trajectory was spearheaded by capitalism’s own seriality, or its structuration as an endless series of M-C-M’ progressions. At the same time, works of culture also fell under the spell of the series. The industrial steam press churned out penny dreadfuls and dime novels, before comic strips, film serials, radio and TV series took over. The simultaneous rise of serialized work practices and serial “works” of culture is too massive, I suggest, to be a sheer coincidence. And concomitant with these two was a third form of serialization: that of cultural identity or of subjective experience itself. As Benedict Anderson and Jean-Paul Sartre before him have argued, new forms of community, identity, and perception were based in the serial work of media, such that, for example, the serialization of daily papers, consumed more or less simultaneously by an entire nation, could produce the nation itself as an “imagined community” of serialized subjects. Anderson’s conception of the serialities of nationhood or the proletariat suggests a material connection between the minute level of concrete serial media practices and the broad level of discursive, cultural, or imagined realities—a connection that I want to pursue into the realm of digital media.

[Slide 4]

I want to suggest that serialized media are able to leverage these shifts in the nature of work/works because they function according to the logic of the “parergon,” as described by Jacques Derrida. Etymologically, the term parergon is composed of the prefix “para-” (next to, or beside) and “ergon,” the Greek word for work. The parergon is thus literally “next to the work,” marginal or supplementary to it, as a frame is with respect to a painting (or an hors d’oeuvre with respect to the main course of a meal).

[Image: parergon]

But the picture frame in particular demonstrates an essential reversibility: on the one hand, the frame serves as a background for the work, as a ground for the image it frames and selects or presents. On the other hand, the frame can also be absorbed into the figure when seen against the larger background of the wall, as when we take a broad view of a row of paintings on a museum wall before selecting one to observe more closely. The frame is therefore subject to repeated figure/ground reversals, and it’s the same with serialized media, which are constituted in the flickering interplay between an ongoing sequence and its articulation into discrete segments.

[Slide 17]

A serial figure like Frankenstein’s monster embodies this interplay and mediates it as a higher-order reflection on media change. The monster is of course part of a film’s diegetic universe, for example, but it also exceeds that frame and partakes in a plurimedial series of instantiations. We never just see Frankenstein’s monster; we see an iteration of the monster that stands in extradiegetic relation to Karloff’s iconic portrayal and to a series of media and mediations of the figure. And we should not forget that Karloff’s mute monster, which contrasts sharply with the eloquent monster of Shelley’s novel, once served to foreground the transition from silent cinema to the talkies. Figure/ground reversibility is an essential precondition for plurimedial seriality as such, specifically enabling the foregrounding of mediality that allows the serial figure to serve as a figuration of media change.

But with the rise of digital media, the formerly discrete media across which serial figures were deployed come to mingle in much closer proximity. What Henry Jenkins calls our “convergence culture” responds by coming up with new ways to tell stories (and to sell commodities) that take advantage of the coming-together of media in the space of the digital. In Jenkins’s version, transmedia storytelling is inherently serial, but much less linear than a conventional television series might be, as it allows the reader/viewer/player/user to explore various facets of a story-world through movies, games, textual and other forms, allowing for a variable order of consumption that corresponds, we might say, to the database structures in which digital information is stored and (interactively) accessed. But transmedia storytelling often aims to smooth over the disjunctures between media installments; the parergonal logic of figure/ground reversals that sustained serial figures and allowed them to track and foreground media changes is thus transformed. A serial figure like Batman thrives in this new environment and traces this transformation in relation to computational mediation and the shift from a parergonal to what I call a parergodic logic.

[Slide 20]

With the term parergodics, I want to link Derrida’s notion of the parergon with Espen Aarseth’s use of the term ergodics to describe processes and structures of digital interactivity. Ergodics combines the Greek ergon (work) and hodos (path), thus positing nontrivial labor as the aesthetic mode of players’ engagement with games. For Aarseth, the arduous or laborious path of ergodic interactivity marks a fundamental difference between digital media such as video games or electronic literature on the one hand and traditional literature and narrative media on the other. For whereas the path of a narrative is fixed for the reader of a novel or the spectator of a film, it must be generated in digital media through a cooperative effort between the user and the computational system. The signs composing the text of a video game—including textual strings, visual perspectives, narrative and audiovisual events—are not (completely) predetermined but generated on the fly, in real time, as the player makes his or her way through the game. Ergodics, the path of the work or the work of the path, therefore describes the nontrivial labor at the heart of gameplay.

But to expand this beyond Aarseth’s narrower frame of reference, the concept of ergodics can also be seen to ground a wider variety of interactive and participatory potentials in contemporary culture, where computational networks are implicated virtually ubiquitously in entertainment, social life, and work. The borders between these realms are remarkably unclear (think of all the things people do on social networks and the virtual impossibility of distinguishing clearly between work activities and play), and it would seem that this has something to do with the indifference of computational media to the types of content they process. This computational indifference to the phenomenological modalities of human experience – or to the differences between the analogue media that at least partly corresponded to those modalities – leads, as Mark Hansen argues, to a divergence between mediation in its classical, perceptually oriented form and a new form of mediation that channels human affect into the process-oriented project of establishing ever greater networks of pure connectivity. This is the larger significance, I propose, of what Steven Shaviro calls “post-cinematic affect”: in contrast to the cinema, which was constituted by the storage and reproduction of perceptual objects, ergodic mediation involves acts of affective interfacing with the fundamentally post-perceptual realm of computation, which is algorithmic, distributed, and nonlocal, in contrast to the phenomenological basis of human embodiment. Clicking on a YouTube video not only delivers perceptual content to your embodied eyes and ears; it also delivers computational content – information about affective, epistemic, and monetary valuations – to the routines of network-constitutive algorithms.

In this environment, play activities not only involve the execution of nontrivial work, as Aarseth argues, but corporations and financial interests, among others, continually find clever ways to disguise work as play, to “gamify” our labor, both paid and unpaid, while mining the data generated in the process in order to profit from both dedicated and incidental work. In this environment, as Matteo Pasquinelli has argued, virtually any investment of attention or affect will also generate a surplus value for Google, Facebook, etc. – a value produced and accumulated parasitically, without regard for any significance we may attach to the contents of our digital interactions, by means of computational algorithms functioning on an altogether different level than the human concerns that feed them.

As a result, media “contents” become incidental or marginal to work, so that our so-called “participatory culture” might better be termed a “parergodic culture,” where cultural “contents” are reversibly supplemental to the nontrivial labor of interactive work. But the notion of “parergodic culture” suggests also that there might be para-ergodic margins from which to witness the shift, to take stock of it in the process of its occurrence. This is where the parergon meets ergodics, and it’s in this reversible margin of parergodicity, neither completely inside nor outside the realm of ergodics, that I’d like to situate the serial work of Batman from about the mid 1980s to the present.

[Slide 21]

The starting point is the appearance of graphic novels such as The Dark Knight Returns (Frank Miller, 1986), Arkham Asylum: A Serious House on Serious Earth (Grant Morrison, 1989), and The Killing Joke (Alan Moore, 1988), which re-envisioned Batman as a darker figure and laid the groundwork for the figure’s medial self-awareness.

In their wake, a key scene in Tim Burton’s 1989 film stages a parergonal reversal of medial spaces during the Joker’s televised address to Batman and the people of Gotham. The medium takes on an unexpected materiality as the Joker shoves the mayor’s image off the screen, and a crucial reversal is visualized as a shot of several contiguous studio monitors gives way to the various screens united in Batman’s multimedia console. It is here, with a sudden freeze-frame interaction, that Batman enacts a further parergonal reversal: while the film’s editing leads us to believe that Bruce Wayne, like all the citizens of Gotham, is viewing the Joker’s address live, he pauses the recording, in effect pausing the continuity of the film itself. And with the seemingly insignificant difference it introduces between live and recorded images, Batman’s pausing of the image announces, in effect, an entry into the interactive space of post-cinematic media. This is the first step towards the reconceptualization of images and visual media as purely processual, computational, and no longer tied to perception as its objects.

[Slide 24]

Jump ahead twenty years. Computational technologies are implemented more broadly in the actual production of visual media, for example in post-cinematic blockbusters like Christopher Nolan’s Batman trilogy. Nolan’s second film, The Dark Knight, can be seen not only as a serial continuation of Batman Begins, the first film in Nolan’s trilogy, but also as an updating of Burton’s earlier exploration of Batman’s ergodic mediality.

[Slide 27]

Most centrally, Nolan’s film updates Batman’s console and places it at the center of the caped crusader’s efforts to restore order to Gotham. The film spends a considerable amount of time showcasing this computational wonder machine, which alternates reversibly with the film itself and serves to foreground its CGI-based spectacles.

[Image: parergodic-batman]

Within the frame of the narrative, a new range of computational powers is demonstrated, including biometric facial recognition and computational forensics. Early in the film, Bruce Wayne’s tech guy Lucius Fox demonstrates a new technology to him, using a cell phone to emit an inaudibly high-frequency signal capable of mapping a remote location by means of digitally enhanced sonar. This sets the stage for the film’s climax, when the sonar program is spread, virus-like, to the cell phones of all of Gotham’s inhabitants. Through this network, which feeds into Bruce Wayne’s central console, now equipped with a giant wall of display devices, Batman is able to “see” the whole city.

[Image: sonar-vision]

This is a disembodied or nonlocalized 3D computer-graphics vision generated through a distributed, nonhuman sensory form that substitutes computational process for perceptual object. Seeing through the eyes of a machinic network, Batman is able to find the bad guys just in time for the final showdown, but at a decisive moment Batman’s “vision” machine crashes.

[Image: sonar-crash]

The event is presented to us in first-person perspective, crucially drawing attention to the mediation of our own vision through computational processes. Here the parergonal reversibility between diegetic and medial levels is thoroughly parergodic, as we are made witness to an event that challenges the perceptual frames delineating the narrative and our ability to engage or disengage with the medium.

[Slide 30]

But the scene anticipates an even more intense experience of parergodic involvement in the video game Arkham Asylum. Here, a specifically parergonal exploration of spatialized boundaries between sanity and insanity that goes back to the graphic novel of the same title is translated into a narrative that weaves back and forth between “reality” and Scarecrow-induced hallucinations. The player, who has to act in order to stay alive, can never be sure when one of these hallucinatory states has begun, and he or she therefore gets drawn into such illusions until an abrupt awakening takes place in the wake of a victory (in a boss battle) or its deferral. Even more poignantly, though, there is a total break with all narrative, perceptual, and actional involvement at one point late in the game, when the images on the screen freeze and display digital artifacts and the soundtrack begins to skip like a scratched CD.

Suddenly, the screen goes black and the game literally reboots – at least, I could swear that my PlayStation restarted at this moment, while a feeling of panic gripped me. When the game restarts, we see images reminiscent of the game’s opening scenes – thus compounding a sense of fear that either my disc or my machine is broken, and that all my progress in the game, by this time some 10 or 20 hours, is lost and will have to be repeated from the beginning. But this time things are backwards: the Joker’s in the driver’s seat, escorting Batman into Arkham Asylum. The cutscene gives way to an interactive sequence where the player controls the Joker, thus instituting a weird sort of actional identification with the villain, who then turns and points a gun directly at the player, whose vision is suddenly realigned with the perspective of Batman. There’s no chance to avoid death, and we see this “Mission failed” screen with the tip, “Use the middle stick to avoid Joker’s gun fire.” Only, there is no middle stick on the PlayStation or Xbox controller. This whole sequence therefore emphasizes the point of interface as a reversible margin where computational or ergodic media converge as both the thematic/actional “content” and the material platform for play.

[Slide 35]

And the quasi-glitch and simulated crash of the game channel this attention to reveal the significant work involved in ergodic play. The very real panic and extradiegetic fears activated here highlight the cognitive and physical labor invested by the player, as well as the precariousness of the digital platform for the storage or accumulation of such work, over which we have little individual control, even though our activities are sure to generate profit for the corporations that own intellectual properties (like Batman), proprietary software and hardware (like the console we’re operating), or the algorithms that will mine our activities for surplus value. This, I suggest, is parergodic culture.

[Slide 36]

Weird DH at MLA 2016

[Image: weird-dh-2]

On January 7, 2016, I’ll be participating in a panel, organized by Mark Sample, on “Weird DH” at the MLA Convention in Austin. Here’s the lineup:

107. Weird DH

Thursday, 7 January, 3:30–4:45 p.m., Lone Star C, JW Marriott

Program arranged by the forum TC Digital Humanities

Presiding: Mark Sample, Davidson Coll.

1. “Speculative Data: Postempirical Approaches to the ‘Datafication’ of Affect and Activity,” Shane Denson, Duke Univ.

2. “Analyzing Belligerent Erasure: Weird Digital Humanities and/in the Native,” Jeremy Justus, Univ. of Pittsburgh, Johnstown

3. “‘Weird Tales of Super-K’: A Synesthetic Journey into the National Security Archive’s Kissinger Correspondence,” Micki Kaufman, MLA

4. “Danger, Jane Roe! Using Embroidery and Electronics to Make Data Weird,” Kim Knight, Univ. of Texas, Dallas

And here’s the abstract for my talk:

Speculative Data: Post-Empirical Approaches to the “Datafication” of Affect and Activity

Shane Denson, Duke University

A common critique of the digital humanities questions the relevance of quantitative, data-based methods for the study of literature and culture; in its most extreme form, this type of criticism insinuates a complicity between DH and the neoliberal techno-culture that turns all human activity, if not all of life itself, into “big data” to be mined for profit. Drawing on recent reconceptions of DH as “deformed humanities” – as an aesthetically and politically invested field of “deformance”-based practice – this presentation describes several methods by which a decidedly “weird” DH can avail itself of data collection to interrogate and critique “datafication” itself.

The focus is on work conducted in the context of Duke University’s S-1 Speculative Sensation Lab, where literary scholars, media theorists, artists, and “makers” of all sorts collaborate to produce computational and data-driven “things to think with” that blur the boundaries between art and digital scholarship. One such project, Manifest Data, uses a piece of “benevolent spyware” that collects and parses data about personal Internet usage in such a way as to produce 3D-printable sculptural objects, thus giving form to data and reclaiming its personal value from corporate cooptation. Another ongoing project uses data collected by (scientifically questionable) biofeedback devices to perform realtime collective transformations of audiovisual materials, opening theoretical notions of “post-cinematic affect” to robustly material, media-archaeological, and aesthetic investigations. These and other projects, I contend, point the way towards a truly “weird DH” that is reflexive enough to suspect its own data-driven methods but not paralyzed into inactivity.

Bibliography:

S-1 Speculative Sensation Lab. Manifest Data. Collaborative art/theory project. Description online: <http://s-1lab.org/project/manifest-data/>.

S-1 Speculative Sensation Lab (with contributions from Luke Caldwell, Karin Denson, Shane Denson, Amanda Starling Gould, David Rambo, Libi Striegl, and Max Symuleski). “Manifest Data: A Kit to Create Personal Digital Data-Based Sculptures.” Hyperrhiz: New Media Cultures 13 (2015): <http://hyperrhiz.io/hyperrhiz13/sensors-data-bodies/manifest-data.html>. Web.

Sample, Mark. “Notes Towards a Deformed Humanities.” Samplereality (2 May 2012): <http://www.samplereality.com/2012/05/02/notes-towards-a-deformed-humanities/>. Web.

Samuels, Lisa, and Jerome McGann. “Deformance and Interpretation.” New Literary History 30.1 (1999): 25–56. Print.

Shaviro, Steven. Post-Cinematic Affect. Winchester: Zero Books, 2010. Print.

 

Conversations in the Digital Humanities at Duke

[Image: page-events-dhduke]

Today, Oct. 2, 2015, the Franklin Humanities Institute, the Wired! Lab, the PhD Lab in Digital Knowledge, and HASTAC@Duke will be presenting “Conversations in the Digital Humanities,” the inaugural event of the new Digital Humanities Initiative at Duke University. More information about the event, in which I will be participating alongside colleagues from the S-1: Speculative Sensation Lab, can be found on the FHI website.

Also, all of the 10-minute “lightning talks” will be live-streamed. The first block of sessions, from 2:15-3:45pm EST, will be streamed here, and the second block, from 4:00-5:40pm, will be viewable here. (Apparently, the videos will be archived and available after the fact as well.)

Here is the complete schedule:

2:00 – 2:15
Welcome and Introduction to Digital Humanities Initiative

2:15 – 3:45 
Session 1 (10 minutes per talk)

  1. Project Vox (Andrew Janiak and Liz Milewicz)
  2. NC Jukebox (Trudi Abel, Victoria Szabo)
  3. Visualizing Cultures: The Shiseido Project (Gennifer Weisenfeld)
  4. Going Global in Mughal India (Sumathi Ramaswamy)
  5. Israel’s Occupation in the Digital Age (Rebecca Stein)
  6. Digital Athens: Archaeology meets ArcGIS (Tim Shea, Sheila Dillon)
  7. Early Medieval Networks (J. Clare Woods)

3:45 – 4:00
Coffee Break

4:00 – 5:40 
Session 2 (10 minutes per talk)

  1. Painting the Apostles – A Case Study in “The Lives of Things” (Mark Olson, Mariano Tepper, and Caroline Bruzelius)
  2. Digital Archaeology: From the Field to Virtual Reality (Maurizio Forte)
  3. The Memory Project (Luo Zhou)
  4. Veoveo, children at play (Raquel Salvatella de Prada)
  5. “Things to Think With”: Weird DH, Data, and Experimental Media Theory (S-1 Lab)
  6. s_traits, Generative Authorship and the Emergence Lab (Bill Seaman and John Supko)
  7. Found Objects and Fireflies (Scott Lindroth)
  8. Project Provoke (Mary Caton Lingold and others)

5:40 – 6:00 
Reception

Video Games’ Extra-Ludic Echoes — SLSA 2015

[Image: machine-puzzled-them]

I am excited to be a part of the panel “Video Games’ Extra-Ludic Echoes,” which will be chaired by my colleague David Rambo at the upcoming conference of the Society for Literature, Science, and the Arts (SLSA) — hosted in Houston this year by Rice University, November 12-15, 2015. Below you will find the panel description and links to the individual abstracts.

Video Games’ Extra-Ludic Echoes

Chair: David Rambo

Each of the three presentations in this panel performs its own media-theoretical approach to comprehending how video games extend and consolidate the sociotechnical logics surrounding their conception, production, and reception. We intend to kindle a discussion about the ways certain video games order significant ideologies and activities of human life: from multimedial culture industries to the blurred division between life and labor to the concealment of racism in the techniques of 20th-century entertainment. All three share a motivation to delineate various cultural and economic inheritances of video games and the transformative ways in which video games echo those inheritances.

Shane Denson’s contribution attends to the serial manifestations of Batman across genres and media. He homes in on the subsumption of life-time under work-time that is common to the computational networks of daily life and to the crossed borders of serial figures such as Batman. In Stephanie Boluk and Patrick LeMieux’s “White Hand, Black Box,” we learn to recognize, in the maniculed pointer of various software interfaces and especially in the Master Hand of Super Smash Brothers, the now concealed tradition of blackface minstrelsy as it was embedded by Disney in the gloved hands of Mickey Mouse and other characters. David Rambo deploys a Spinozist theoretical framework to categorize the video game, Diablo III in particular, as a neoliberal enterprise that extrinsically determines the player’s desire and even will to live. Spinoza’s Ethics thereby offers a way to conceptualize the video game both as an autonomous entity marked with finite, and thus fulfillable, completeness, and as a node in a much broader regime of affections that orders capital’s socioeconomic system.

These three presentations depict the video game as an artifactual conduit eminently bound up with the cultural forces that ineluctably structure our civilization, from its marginal groups to its most powerful systemic imperatives.

Abstracts for the individual papers:

Shane Denson, “Gaming and the ‘Parergodic’ Work of Seriality in Interactive Digital Environments”

Stephanie Boluk and Patrick LeMieux, “White Hand, Black Box: The Manicule from Mickey to Mario to Mac OS”

David Rambo, “Spinoza on Completion and Authorial Forces in Video Games”

Gaming and the ‘Parergodic’ Work of Seriality in Interactive Digital Environments — Shane Denson

[Image: arkham-hack]

My abstract for the panel “Video Games’ Extra-Ludic Echoes” at SLSA 2015 in Houston:

“Gaming and the ‘Parergodic’ Work of Seriality in Interactive Digital Environments”

Shane Denson, Duke University and Leibniz University of Hannover

Twentieth-century serial figures like Tarzan, Frankenstein’s monster, or Sherlock Holmes enacted a “parergonal” logic; as plurimedial figures, they continually crossed the boundaries between print, film, radio, and televisual media, slipped in and out of their frames, and showed them – in accordance with a Derridean logic of the parergon – to be reversible. In the twenty-first century, the medial logics of serial figures have been transformed in conjunction with the rise of interactive, networked, and convergent digital media environments. A figure like Batman exemplifies this shift as the transition from a broadly “parergonal” to a specifically “parergodic” logic. The latter term builds upon Espen Aarseth’s notion of “ergodic” gameplay – where ergodics combines the Greek ergon (work) and hodos (path), thus positing nontrivial labor as the aesthetic mode of players’ engagement with games. These new, ergodic serial forms and functions, as embodied by a figure like Batman, raise questions about the blurring of relations between work and play, between paid labor and the incidental work culled from our entertainment practices. Following Batman’s transitions from comics to graphic novels, to the films of Tim Burton and Christopher Nolan, and on to the popular and critically acclaimed Arkham series of videogames, I will demonstrate that the dynamics of border-crossing which characterized earlier serial figures have now been re-functionalized in accordance with the ergodic work of navigating computational networks – in accordance, that is, with work and network forms that frame all aspects of contemporary life.

White Hand, Black Box: The Manicule from Mickey to Mario to Mac OS — Stephanie Boluk & Patrick LeMieux

[Image: mac-manicule]

Stephanie Boluk and Patrick LeMieux’s abstract for the panel “Video Games’ Extra-Ludic Echoes” at SLSA 2015 in Houston:

“White Hand, Black Box: The Manicule from Mickey to Mario to Mac OS”

Stephanie Boluk, UC Davis, and Patrick LeMieux, UC Davis

Whereas the manicule, a pointing finger directing a reader’s attention, has been used for a millennium in chirographic and print texts, in the context of twentieth-century animation and twenty-first-century computing the medieval pointer has been recontextualized, passing from the hand of the animator to a graphical user interface (GUI) element. After the popularization of the talkie in the late twenties, in Steamboat Willie (1928), Disney’s first cartoon with synchronized sound, Mickey Mouse adopts gloves and the lilting voice of Al Jolson’s Jazz Singer (1927). This process sanitizes a genre of racist comedy for mainstream consumption. Although Mickey’s gloves are easily deemed merely a contrivance of the technical limitations related to articulating fingers in early animation, Bimbo and Betty, Oswald and Ortensia, Foxy and Roxy, and, of course, Mickey and Minnie are anthropomorphic animals that whitewashed their relation to racist caricatures inspired by blackface minstrelsy. This history was further obfuscated as “Mickey’s manicules” eventually found themselves as elements within contemporary operating systems like Mac OS and as GUI elements within videogames like Mario Paint in the eighties and nineties. From the metaleptic manicule of classic animations to the metonymic manicule in the GUI, this paper ultimately performs a close reading of the figure of “Master Hand” in Super Smash Bros. (1998) in order to argue that the white hand allegorizes the ways in which “user friendly” design has black-boxed the racialized history of computation.

Spinoza on Completion and Authorial Forces in Video Games — David Rambo

[Image: spinoza]

David Rambo’s abstract for the panel “Video Games’ Extra-Ludic Echoes” at SLSA 2015 in Houston:

“Spinoza on Completion and Authorial Forces in Video Games”

David Rambo, Duke University

This talk extends the Spinozist paradigm for theorizing the medium-specificity of narrative and agency in video games I presented at SLSA 2013. Whereas Spinoza’s first and second orders of knowledge—phenomenal experience and rational systemization—map easily enough onto a single-player video game as a deterministic Natura, knowledge of the third kind would, problematically, seem to require an idealistic reduction of the video game to an operational and meaning-making Idea in abstraction from culture, political economy, and perhaps even the body of the player. Looking primarily to the changes made to Blizzard’s multiple releases of Diablo 3 (2012-2014), I propose that completion distinguishes the video game from other cultural forms and allows us to conceive of its essence. Pursuit of a game’s completion echoes, in Frédéric Lordon’s Spinozist terms, the ascription of one’s conatus to an enterprise’s regime of affects. For the notion of a game’s completion appears under the purview of the developers’ and industry’s ulterior motives. On one hand, the player’s motivation to complete a game redounds to the complex of desires that operate part and parcel with a game’s mechanics, marketing, and historical situation. On the other hand, total completion is a barrier that development studios intend to break by marketing supplemental material, exploiting customer data and feedback, issuing patches, and releasing expansion packs. Spinoza’s ontology of affection allows for a rational ordering of this tension between completion and incompletion in the individual playing and mass market consumption of video games.

Complete Panel Video — Post-Cinema and/as Speculative Media Theory #SCMS15

On March 27, 2015, at the annual conference of the Society for Cinema and Media Studies in Montreal, Steven Shaviro, Patricia Pisters, Adrian Ivakhiv, and Mark B. N. Hansen participated in a panel I organized on “Post-Cinema and/as Speculative Media Theory.” It was standing room only, and many people were unable to squeeze into the room (some images are posted here). Thankfully, all of the presenters agreed to have their talks recorded on video and archived online.

(I have posted these videos here before, but for the sake of convenience I wanted to pull them together in a single post, so that the entire panel is available in one place.)

Above, you’ll find my brief general introduction to the panel, and below the four presentations:

Steven Shaviro’s proposal for a “Cinema 3.0”: the rhythm-image (following Deleuze’s movement-image and time-image)

Patricia Pisters, whose own proposal for a third image-type she calls the “neuro-image,” on the politics of post-cinema

Adrian Ivakhiv on the material, ecological dimensions of (post-)cinema in the Anthropocene and/or Capitalocene

Mark B. N. Hansen on the microtemporal and sub-perceptual dimensions of digital, post-cinematic images

Finally, you can look forward to hearing more from the panel participants, all of whom are contributing to an open-access collection titled Post-Cinema: Theorizing 21st-Century Film, co-edited by myself and Julia Leyda (forthcoming this year from REFRAME Books). More details soon, so stay tuned!