Month: December 2014
SCMS 2015 Preliminary Schedule Online — #SCMS15
The preliminary schedule for the Society for Cinema and Media Studies 2015 conference in Montreal is now online (here). As I posted recently, I will be involved in two separate panels:
First, I will be chairing the panel on “Post-Cinema and/as Speculative Media Theory” (panel K7, Friday, March 27, 2015, 9:00-10:45am) — with presenters Steven Shaviro, Patricia Pisters, Adrian Ivakhiv, and Mark B. N. Hansen. You can find the complete panel description, as well as individual abstracts, here. Note also that all participants on this panel are contributors to the forthcoming Post-Cinema: Theorizing 21st-Century Film, which I am co-editing with Julia Leyda.
Second, I will be participating in a panel on “Digital Seriality” (panel Q20, Saturday, March 28, 2015, 3:00-4:45pm) — along with Andreas Jahn-Sudmann, Scott Higgins, Dominik Maeder, and Daniela Wentz. Panel description and abstracts can be found here. And, as with the other panel, this one too has a tie-in with a publication: all the participants on this panel were contributors to the special issue of Eludamos: Journal for Computer Game Culture that Andreas Jahn-Sudmann and I edited on the topic of “Digital Seriality.”
Out Now: Digital Seriality — Special Issue of Eludamos
The latest issue of Eludamos: Journal for Computer Game Culture, a special issue devoted to the topic of “Digital Seriality” — edited by yours truly, together with Andreas Jahn-Sudmann — is now out! Weighing in at 198 pages, this is one of the fattest issues yet of the open-access journal, and it’s jam-packed with great stuff like:
- Patrick LeMieux on the culture and technology of tool-assisted speedrunning
- Jens Bonk on the serial structure of Halo
- Scott Higgins on the ludic pre-history of gaming in serial films
- Lisa Gotto on ludic seriality and digital typography
- Tobias Winnerling on the serialization of history in “historical” games
- Till Heilmann on Flappy Bird and the seriality of digits
- David B. Nieborg on the political economy of blockbuster games
- Rikke Toft Nørgård and Claus Toft-Nielsen on LEGO as an environment for serial play
- Dominik Maeder and Daniela Wentz on serial interfaces and memes
- Maria Sulimma on cross-medium serialities in The Walking Dead!
So what are you waiting for? Do yourself a favor and check out this issue now!
Visualizing Digital Seriality, Or: All Your Mods Are Belong to Us!
In this post, I want to outline some ongoing work in progress that I’ve been pursuing as part of my postdoctoral research project on seriality as an aesthetic form and as a process of collectivization in digital games and gaming communities. The larger context, as readers of this blog will know, is a collaborative project I am conducting with Andreas Jahn-Sudmann of the Freie Universität Berlin, titled “Digital Seriality” — which in turn is part of an even larger research network, the DFG Research Unit “Popular Seriality–Aesthetics and Practice.” I’ll touch on this bigger picture here and there as necessary, but I want to concentrate more specifically in the following on some thoughts and research techniques that I’ve been developing in the context of Victoria Szabo’s “Historical & Cultural Visualization” course, which I audited this semester at Duke University. In this hands-on course, we looked at a number of techniques and technologies for conducting digital humanities-type research, including web-based and cartographic research and presentation, augmented and virtual reality, and data-intensive research and visualization. We engaged with a great variety of tools and applications, approaching them experimentally in order to evaluate their particular affordances and limitations with respect to humanities work. My own engagements were guided by the following questions: How might the tools and methods of digital humanities be adapted for my research on seriality in digital games, and to what end? What, more specifically, can visualization techniques add to the study of digital seriality?
I’ll try to offer some answers to these questions in what follows, but let me indicate briefly why I decided to pursue them in the first place. To begin with, seriality challenges the methods of single-author, oeuvre-, or work-centric approaches, as serialization processes unfold across oftentimes long temporal frames and involve collaborative production processes — including not only team-based authorship in industrial contexts but also feedback loops between producers and their audiences, which can exert considerable influence on the ongoing serial development. Moreover, such tendencies are exacerbated with the advent of digital platforms, in which these feedback loops multiply and accelerate (e.g. in Internet forums established or monitored by serial content producers and, perhaps more significantly, in real-time algorithmic monitoring of serialized consumption on platforms like Netflix), while the contents of serial media are themselves subject to unprecedented degrees of proliferation, reproduction, and remix under conditions of digitalization. Accordingly, an incredible amount of data is generated, so that it is natural to wonder whether any of the methods developed in the digital humanities might help us to approach phenomena of serialization in the digital era. In the context of digital games and game series, the objects of study — both the games themselves and the online channels of communication around which gaming communities form — are digital from the start, but there is such an overwhelming amount of data to sort through that it can be hard to see the forest for the trees. As a result, visualization techniques in particular seem like a promising route to gaining some perspective, or (to mix metaphors a bit) to establishing a first foothold in order to begin climbing what appears to be an insurmountable mountain of data. Of particular interest here are: 1) “distant reading” techniques (as famously elaborated by Franco Moretti), which might be adapted to the objects of digital games, and 2) tools for network analysis, which might be applied in order to visualize and investigate social formations that emerge around games and game series.
Before elaborating on how I have undertaken to employ these approaches, let me say a bit more about the framework of my project and the theoretical perspective on digital seriality that Andreas Jahn-Sudmann and I have developed at greater length in our jointly authored paper “Digital Seriality.” Our starting point for investigating serial forms and processes in games and gaming communities is what we call “inter-ludic seriality” — that is, the serialization processes that take place between games, establishing series such as Super Mario Bros. 1, 2, 3 etc. or Pokemon Red and Blue, Gold and Silver, Ruby and Sapphire, Black and White etc. For the most part, such inter-ludic series are constituted by fairly standard, commercially motivated practices of serialization, expressed in sequels, spin-offs, and the like; accordingly, they are a familiar part of the popular culture that has developed under capitalist modernity since the time of industrialization. Thus, there is lots of continuity with pre-digital seriality, but there are other forms of seriality involved as well.
“Intra-ludic seriality” refers to processes of repetition and variation that take place within games themselves, for example in the 8 “worlds” and 32 “levels” of Super Mario Bros. Here, a general framework is basically repeated while varying, and in some cases increasingly difficult, tasks and obstacles are introduced as Mario searches for the lost princess. Following cues from Umberto Eco and others, this formula of “repetition + variation” is taken here as the formal core of seriality; games can therefore be seen to involve an operational form of seriality that is in many ways more basic than, and often foundational to, the narrative serialization processes that they also display.
Indeed, this low-level seriality is matched by higher-level processes that encompass but go beyond the realm of narrative — beyond even the games themselves. What we call “para-ludic seriality” involves tie-ins and cross-overs with other media, including the increasingly dominant trend towards transmedia storytelling, aggressive merchandising, and the like. Clearly, this is part of an expanding commercial realm, but it is also the basis for more.
There is a social superstructure, itself highly serialized, that forms around or atop these serialized media, as fans take to the Internet to discuss (and play) their favorite games. In itself, this type of series-based community-building is nothing new. In fact, it may just be a niche form of a much more general phenomenon that is characteristic for modernity at large. Benedict Anderson and Jean-Paul Sartre before him have described modern forms of collectivity in terms of “seriality,” and they have linked these formations to serialized media consumption and those media’s serial forms — newspapers, novels, photography, and radio have effectively “serialized” community and identity throughout the nineteenth and twentieth centuries.
Interestingly, though, in the digital era, this high-level community-building seriality is sometimes folded back into an ultra low-level, “infra-ludic” seriality — a dimension that is generally invisible and that takes place at the level of code. (I have discussed this level before, with reference to the BASIC game Super Star Trek, but I have not previously identified it explicitly as “infra-ludic seriality.”) This enfolding of community into code, broadly speaking, is what motivates the enterprise of critical code studies, when it is defined (for example, by Mark Marino) as
an approach that applies critical hermeneutics to the interpretation of computer code, program architecture, and documentation within a socio-historical context. CCS holds that lines of code are not value-neutral and can be analyzed using the theoretical approaches applied to other semiotic systems in addition to particular interpretive methods developed particularly for the discussions of programs. Critical Code Studies follows the work of Critical Legal Studies, in that its practitioners apply critical theory to a functional document (legal document or computer program) to explicate meaning in excess of the document’s functionality, critiquing more than merely aesthetics and efficiency. Meaning grows out of the functioning of the code but is not limited to the literal processes the code enacts. Through CCS, practitioners may critique the larger human and computer systems, from the level of the computer to the level of the society in which these code objects circulate and exert influence.
Basically, then, the questions that I am here pursuing are concerned with the possibilities of crossing CCS with DH — and with observing the consequences for a critical investigation of digital game-based seriality. My goal in this undertaking is to find a means of correlating formations in the high-level superstructure with the infra-ludic serialization at the level of code — not only through close readings of individual texts but by way of large collections of data produced by online collectives.
As a case study, I have been looking at ROMhacking.net, a website devoted to the community of hackers and modders of games for (mostly) older platforms and consoles. “Community” is an important notion in the site’s conception of itself and its relation to its users, as evidenced in the site’s “about” page:
ROMhacking.net is the innovative new community site that aggressively aims to bring several different areas of the community together. First, it serves as a successor to, and merges content from, ROMhacking.com and The Whirlpool. Besides being a simple archive site, ROMhacking.net’s purpose is to bring the ROMhacking Community to the next level. We want to put the word ‘community’ back into the ROMhacking community.
The ROMhacking community in recent years has been scattered and stagnant. It is our goal and hope to bring people back together and breathe some new life into the community. We want to encourage new people to join the hobby and make it easier than ever for them to do so.
Among other things, the site includes a vast collection of Super Mario Bros. mods (at the time of writing, 205 different hacks, some of which include several variations). These are fan-based modifications of Nintendo’s iconic game from 1985, which substitute different characters, add new levels, change the game’s graphics, sound, or thematic elements, etc. — hence perpetuating an unofficial serialization process that runs parallel to Nintendo’s own official game series, and forming the basis of communal formations through more or less direct manipulation of computer code (in the form of assembly language, hex code, or mediated through specialized software platforms, including emulators and tools for altering the game). In other words, the social superstructure of serial collectivity gets inscribed directly into the infra-ludic level of code, leaving traces that can be studied for a better understanding of digital seriality.
But how should we study them? Even this relatively small sample is still quite large by the standards of a traditional, close reading-based criticism. What would we be looking for anyway? The various mods are distributed as patches (.ips files), which have to be applied to a ROM file of the original game; the patches are just instruction files indicating how the game’s code is to be modified by the computer. As such, the patch files can be seen, rather abstractly, as crystallizations of the serialization process: if repetition + variation is the formal core of seriality, the patches are the records of pure variation, waiting to be plugged back into the framework of the game (the repeating element). But when we do plug them back in, then what? We can play the modified games in an emulator, and certainly it would be interesting — but extremely time-consuming — to compare them all in terms of visual appearance, gameplay, and interface. Or we can open the modified game file in a hex editor, in which case we might get lucky and find an interesting trace of the serialization process, such as the following:
Similar to Super Star Trek with its REM comments documenting its own serial and collective genesis, here we find an embedded infratext in the hexcode of “Millennium Mario,” a mod by an unknown hacker reportedly dating back to January 1, 2000. Note, in particular, the reference to a fellow modder, “toma,” the self-glorifying “1337” comment, and the skewed ASCII art — all signs of a community of serialization operating at a level subterranean to gameplay. But this example also demonstrates the need for a more systematic approach — as well as the obstacles to systematicity, for at stake here is not just code but also the software we use to access it and other “parergodic” elements, including even the display window size or “view” settings of the hex editor:
In a sense, this might be seen as a first demonstration of the importance of visualization not only in the communication of results but in the constitution of research objects! In any case, it clearly establishes the need to think carefully about what it is, precisely, that we are studying: serialization is not imprinted clearly and legibly in the code, but is distributed in the interfaces of software and hardware, gameplay and modification, code and community.
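One way to make that constitution of research objects less dependent on luck (and on a particular editor’s view settings) is to pull candidate infratexts out of the binaries programmatically. Below is a minimal Python sketch along those lines; the filename is a hypothetical placeholder, and the eight-character threshold is simply an assumption to filter out noise.

```python
# Minimal sketch: surface candidate "infratexts" (embedded messages, shout-outs, ASCII art)
# in a patched ROM, rather than hoping to spot them in a hex editor window.
# The filename below is a hypothetical placeholder.

import re

def ascii_runs(path, min_len=8):
    """Yield (offset, text) for runs of printable ASCII at least min_len bytes long."""
    with open(path, "rb") as f:
        data = f.read()
    # printable ASCII (space through tilde), repeated min_len or more times
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    for match in re.finditer(pattern, data):
        yield match.start(), match.group().decode("ascii")

if __name__ == "__main__":
    for offset, text in ascii_runs("millennium_mario_patched.nes"):
        print(f"0x{offset:06x}  {text}")
```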
Again, I follow Mark Marino’s conception of critical code studies, particularly with respect to his broad understanding of the object of study:
What can be interpreted?
Everything. The code, the documentation, the comments, the structures — all will be open to interpretation. Greater understanding of (and access to) these elements will help critics build complex readings. In “A Box Darkly,” discussed below, Nick Montfort and Michael Mateas counter Cayley’s claim of the necessity for executability, by acknowledging that code can be written for programs that will never be executed. Within CCS, if code is part of the program or a paratext (understood broadly), it contributes to meaning. I would also include interpretations of markup languages and scripts, as extensions of code. Within the code, there will be the actual symbols but also, more broadly, procedures, structures, and gestures. There will be paradigmatic choices made in the construction of the program, methods chosen over others and connotations.
In addition to symbols and characters in the program files themselves, paratextual features will also be important for informed readers. The history of the program, the author, the programming language, the genre, the funding source for the research and development (be it military, industrial, entertainment, or other), all shape meaning, although any one reading might emphasize just a few of these aspects. The goal need not be code analysis for code’s sake, but analyzing code to better understand programs and the networks of other programs and humans they interact with, organize, represent, manipulate, transform, and otherwise engage.
But, especially when we’re dealing with a large set of serialized texts and paratexts, this expansion of code and the attendant proliferation of data exacerbate our methodological problems. How are we to conduct a “critical hermeneutics” of the binary files, their accompanying README files, the ROMhacking website, and its extensive database — all of which contain information relevant to an assessment of the multi-layered processes of digital seriality? It is here, I suggest, that CCS can profit from combination with DH methods.
The first step in my attempt to do so was to mine data from the ROMhacking website and paratexts distributed with the patches and to create a spreadsheet with relevant metadata (you can download the Excel file here: SMB-Hacks-Dec1). On this basis, I began trying to analyze and visualize the data with Tableau. But while this yielded some basic information that might be relevant for assessing the serial community (e.g. the number of mods produced each year, including upward and downward trends; a list of the top modders in the community; and a look at trends in the types of mods/hacks being produced), the visualizations themselves were not very interesting or informative on their own (click on the image below for an interactive version):
How could this high-level metadata be coordinated with and brought to bear on the code-level serialization processes that we saw in the hexcode above? In looking for an answer, it became clear that I would have to find a way to collect some data about the code. The mods, themselves basically just “diff” files, could be opened and compared with the “diff” function that powers a lot of DH-based textual analysis (for example, with Juxta), but the hexadecimal code that we can access here — and the sheer amount of it in each modded game, which consists of over 42000 bytes — is not particularly conducive to analysis with such tools. Many existing hex editors also include a “diff” analysis, but it occurred to me that it would be more desirable to have a graphical display of differences between the files in order to see the changes at a glance. My thinking here was inspired by hexcompare, a Linux-based program for quickly visualizing the differences between two binary files:
However, the comparison here is restricted to local use on a Linux machine, and it only considers two files at a time. If this type of analysis is to be of any use for seriality studies, it will have to assess a much larger set of files and/or automate the comparison process. This is where Eric Monson and Angela Zoss from Visualization & Information Services at Duke University came in and helped me to develop an alternative approach. Eric Monson wrote a script that analyzes the mod patch files and records the basic “diff” information they contain: the address or offset at which they instruct the computer to modify the game file, as well as the number of bytes that they instruct it to write. With this information (also recorded in the Excel file linked to above), a much more useful and interactive visualization can be created with Tableau (click for an interactive version):
Here, Gantt charts are used to represent the size and location of changes that a given mod makes to the original Mario game; it is possible to see a large number of these mods at a single glance, to filter them by year, by modder, by title, or even by size (some mods expand the original code), etc., and in this way we can begin to see patterns emerging. Thus, we bring a sort of “distant reading” to the level of code, combining DH and CCS. (Contrast this approach with Marino’s 2006 call to “make the code the text,” which, despite his broad understanding of code and acknowledgement that software/hardware and text/paratext distinctions are non-absolute, was still basically geared towards a conception of CCS that encouraged critical engagements of the “close-reading” type. As I have argued, however, researching seriality in particular requires that we oscillate between big-picture and micro-level analyses, between distant readings of larger trends and developments and detailed comparisons between individual elements or episodes in the serial chain.)
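For readers curious what the patch-analysis step might look like in practice, here is a rough Python sketch of the general idea (not Eric Monson’s actual script). It assumes the mods follow the standard IPS patch layout; the directory and output filenames are hypothetical placeholders.

```python
# Rough sketch of extracting the "diff" information from .ips patches: the offset at
# which each record modifies the game file and the number of bytes written there.
# Assumes the standard IPS layout: a "PATCH" header, then records of 3-byte offset +
# 2-byte size (a size of 0 signals an RLE record with a 2-byte run length and a single
# fill byte), terminated by "EOF". Filenames and paths are hypothetical placeholders.

import csv
import glob
import os

def ips_records(path):
    """Yield (offset, size) for each change record in an IPS patch file."""
    with open(path, "rb") as f:
        if f.read(5) != b"PATCH":
            raise ValueError(f"{path} is not an IPS file")
        while True:
            chunk = f.read(3)
            if chunk == b"EOF" or len(chunk) < 3:
                break
            offset = int.from_bytes(chunk, "big")
            size = int.from_bytes(f.read(2), "big")
            if size == 0:                    # RLE record
                size = int.from_bytes(f.read(2), "big")
                f.read(1)                    # fill byte
            else:
                f.seek(size, os.SEEK_CUR)    # skip the payload; only its extent matters here
            yield offset, size

if __name__ == "__main__":
    with open("smb_patch_records.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["patch", "offset", "size"])
        for patch in glob.glob("patches/*.ips"):
            for offset, size in ips_records(patch):
                writer.writerow([os.path.basename(patch), offset, size])
```

Each row of the resulting table (one per change record) can then be joined with the mod metadata and drawn as a bar, much like the Gantt-chart view described above.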
But to complete this approach, we still need to correlate this code-based data with the social level of online modding communities. For this purpose, I used Palladio (a tool explicitly designed for DH work by the Humanities + Design lab at Stanford) to graph networks on the basis of metadata contained in Readme.txt files.
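As a rough illustration of the kind of preprocessing behind such graphs, the sketch below scans the Readme.txt paratexts for mentions of other modders’ handles and writes a source/target edge list that Palladio can load as tabular data. The handle list, directory layout, and filenames are illustrative assumptions, with the handles drawn from modders mentioned elsewhere in this post.

```python
# Hedged sketch: build an edge list of modder "shout-outs" from Readme.txt paratexts,
# suitable for loading into Palladio as tabular data. Handles, paths, and filenames
# are illustrative assumptions.

import csv
import glob
import os
import re

KNOWN_HANDLES = ["toma", "Googie", "insectduel", "AP", "flamepanther",
                 "Raysyde", "ATA", "Krillian"]

def shoutouts(readme_path, author, handles):
    """Yield (author, handle) for every other handle named in the readme."""
    with open(readme_path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    for handle in handles:
        if handle.lower() == author.lower():
            continue
        if re.search(r"\b%s\b" % re.escape(handle), text, re.IGNORECASE):
            yield author, handle

if __name__ == "__main__":
    # assumes a layout like readmes/<modder>/<mod>/Readme.txt
    with open("shoutout_edges.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["source", "target"])
        for path in glob.glob("readmes/*/*/Readme.txt"):
            author = path.split(os.sep)[1]
            for edge in shoutouts(path, author, KNOWN_HANDLES):
                writer.writerow(edge)
```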
Here, I have mapped the references (“shout-outs,” etc) that modders made to one another in these paratexts, thus revealing a picture of digital seriality as an imagined community of modders.
Here, on the other hand, I have mapped references from paratextual materials associated with individual mods to various online communities that have come and gone over the years. We see early references to the defunct TEKhacks, by way of Zophar’s Domain, Acmlm’s and Insectduel’s boards, with more recent references to Romhacking.net, the most recent community site and the one that I am studying here.
As an example of how the social network and code-level analyses might be correlated, here I’ve filtered the network graph to show only those modders who refer in their paratexts to Super Mario Bros. 3 (hence bringing inter-ludic seriality to bear on their para- and infra-ludic interventions). The resulting graph reveals a small network of actors whose serializing activity involves mixing and referencing between SMB1 and SMB3, as well as with one another. The Tableau screenshot on the right then selects just these modders and reveals possible similarities and sites of serialization (for closer scrutiny with hexcompare or tools derived from the modding community itself). For example, we find that the modder AP’s SMB3-inspired patches from September 2005 and flamepanther’s SMB DX patches from October 2005 exhibit traces of possible overlap that deserve to be looked at in detail. The modder insectduel’s After World 8 (a mod that is referenced by many in the scene) from February 2006 shares large blocks around bytes 31000-32000 with many of the prolific modder Googie’s mods (which themselves seem to exhibit a characteristic signature) from 2004-2006. Of course, recognizing these patterns is just the beginning of inquiry, but at least it is a beginning. From here, we still have to resort to “close reading” techniques and to tools that are not conducive to a broad view; more integrated toolsets remain to be developed. Nevertheless, these methods do seem promising as a way of directing research, showing us where to look in greater depth, and revealing trends and points of contact that would otherwise remain invisible.
Finally, by way of conclusion and to demonstrate what some of this more detailed work looks like, I’d like to return to the “Millennium Mario” mod I considered briefly above. As we saw, there was an interesting infratextual shoutout and some ASCII art in the opening section of the hexcode. With Tableau, we can filter the “diff” view to display only those mods that exhibit changes in the first 500 bytes of code, and to map that section of code in greater resolution (this is done with the slider in the bottom right corner, marked “Start” — referring to the byte count at which a change in the game starts):
Here we find two distinctive (visual) matches: viz. between “Millennium Mario” and Raysyde’s “Super Mario Bros. – Remix 2” from 1999, and between ATA’s “Super Mario Bros. – Yoshi’s Quest” and Krillian’s “Mario Adventure 2,” both from 2000. The latter two mods, while clearly different from the former two, also exhibit some overlap in the changes made to the first 20 or so bytes, so it will be interesting to compare them as well.
Now we can use hexcompare for finer analysis — i.e. to determine if the content of the changed addresses is also identical (the visual match only tells us that something has been changed in the same place, not whether the same change has been made there).
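A minimal Python sketch of that kind of byte-level check, assuming the patches have already been applied to produce modified ROM files (the filenames are hypothetical placeholders), might look like this:

```python
# Sketch of the finer-grained check described above: given two patched ROMs and the
# original, report regions where both mods change the original and agree with each
# other byte for byte (the filtered Tableau view only shows that changes land in the
# same place). All filenames are hypothetical placeholders.

def read_bytes(path):
    with open(path, "rb") as f:
        return f.read()

def shared_changes(rom_a, rom_b, original):
    """Yield (offset, length) of runs where both ROMs differ from the original yet match each other."""
    a, b, orig = read_bytes(rom_a), read_bytes(rom_b), read_bytes(original)
    n = min(len(a), len(b), len(orig))
    start = None
    for i in range(n):
        same_change = a[i] == b[i] and a[i] != orig[i]
        if same_change and start is None:
            start = i
        elif not same_change and start is not None:
            yield start, i - start
            start = None
    if start is not None:
        yield start, n - start

if __name__ == "__main__":
    for offset, length in shared_changes("millennium_mario_patched.nes",
                                         "smb_remix2_patched.nes",
                                         "super_mario_bros_original.nes"):
        if length >= 8:   # ignore tiny coincidental matches
            print(f"shared change at 0x{offset:06x}, {length} bytes")
```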
Here we find that Raysyde’s “Super Mario Bros. – Remix 2” does in fact display the same changes in the opening bytes, including the reference to “toma” and the ASCII art. This then is a clear indication of infra-ludic serialization: the borrowing, repetition, and variation of code-level work between members of the modding community. This essentially serial connection (an infra-serial link) would hardly be apparent from the level of the mods’ respective interfaces, though:
When we compare “Millennium Mario” with ATA’s “Super Mario Bros. – Yoshi’s Quest,” we find the ASCII art gone, despite the visual match in Tableau’s mapping of their “diff” indications for the opening bytes:
“Yoshi’s Quest” corresponds in this respect to Krillian’s “Mario Adventure 2”:
Thus we have another clear indication of infra-ludic serialization, which would hardly have been evident other than by means of a directed filtering of the large dataset, in conjunction with a close analysis of the underlying code.
Again, however, this is just the beginning of the analysis — or more broadly of an encounter between DH and CCS. Ideally, the dataset would be expanded beyond ROMhacking.net’s database; other online communities would be mined for data; and, above all, more integrative tools would be developed for correlating social network graphs and diff-maps, for correlating community and code. Perhaps a crowdsourced approach to some of this would be appropriate; for what it’s worth, and in case anyone is inclined to contribute, my data and the interactive Tableau charts are linked above. But the real work, I suspect, lies in building the right tools for the job, and this will clearly not be an easy task. Alas, like digital seriality itself, this is work in progress, and thus it remains work “to be continued”…
Thanks finally to Eric Monson, Angela Zoss, Victoria Szabo, Patrick LeMieux, Max Symuleski, and the participants in the Fall 2014 “Historical & Cultural Visualization Proseminar 1” at Duke University for the various sorts of help, feedback, and useful tips they offered on this project!
“Post-Cinema and/as Speculative Media Theory” — Panel at #SCMS15 in Montreal
[UPDATE: Full video of the complete panel is now online: here.]
At the upcoming conference of the Society for Cinema and Media Studies (March 25-29, 2015 in Montréal), I will be chairing a panel on “Post-Cinema and/as Speculative Media Theory,” which brings together four of the most significant voices in the ongoing attempt to theorize our current media situation: Steven Shaviro, Patricia Pisters, Adrian Ivakhiv, and Mark B. N. Hansen.
(Not quite incidentally, all four speakers are also contributors to the forthcoming volume Post-Cinema: Theorizing 21st-Century Film, which I am co-editing with Julia Leyda.)
Here is the panel description, along with links (below) to the abstracts for the various papers:
Post-Cinema and/as Speculative Media Theory
Following debates over “the end” of film and/or cinema in the wake of the massive digitalization of moving-image media, recent film theory has begun considering the emergence of a new, properly “post-cinematic” media regime (cf. Shaviro 2010; Denson and Leyda, forthcoming). The notion of post-cinema takes up the problematic prefix “post-,” which debates over postmodernism and postmodernity taught us to treat not as a marker of definitive beginnings and ends, but as indicative of a more subtle shift or transformation in the realm of culturally dominant aesthetic and experiential forms (cf. Jameson 1991). In the context of post-cinema, this suggests not so much a clear-cut break with traditional media forms but a transitional movement taking place along an uncertain timeline, following an indeterminate trajectory, and characterized by juxtapositions and overlaps between the techniques, technologies, and aesthetic conventions of “old” and “new” moving-image media.
The ambiguous temporality of the “post-,” which intimates a feeling both of being “after” something and of being “in the middle of” uncertain changes – hence speaking to the closure of a certain past as much as a radical opening of futurity – necessitates a speculative form of thinking that is tuned to experiences of contingency and limited knowledge. With respect to twenty-first century media, theories of post-cinema inherit this disposition, relating it to concrete media transformations while speculating more broadly about the effects they might have on us, our cognitive and aesthetic sensibilities, our agency, or our sense of history.
Bringing together several key figures in the theoretical discussions of post-cinema, this panel seeks to explore and expand this speculative dimension. Steven Shaviro looks at a recent FKA twigs music video as an encapsulation of the post-cinematic media regime at large, theorizing the speculative theoretical work done by the video itself. Patricia Pisters argues that post-cinematic appropriations of archival materials lead to a necessarily speculative revision of history. Adrian Ivakhiv brings the discussion into contact with pressing issues of ecological change. Finally, Mark B. N. Hansen offers a media-philosophical perspective on post-cinema as a future-oriented mode of experience. Together, these interventions articulate post-cinema’s media-technical, aesthetic, ecological, and philosophical vectors in order to develop an emphatically speculative media theory.
Bibliography:
Denson, Shane, and Julia Leyda. Post-Cinema: Theorizing 21st-Century Film. Sussex: REFRAME Books, forthcoming.
Jameson, Fredric. Postmodernism, or, The Cultural Logic of Late Capitalism. Durham: Duke UP, 1991.
Shaviro, Steven. Post-Cinematic Affect. Winchester: Zero Books, 2010.
Chair Bio:
Shane Denson is a DAAD postdoctoral fellow at Duke University and a member of the research unit “Popular Seriality—Aesthetics and Practice.” He is the author of Postnaturalism: Frankenstein, Film, and the Anthropotechnical Interface (Transcript 2014) and co-editor of several collections: Transnational Perspectives on Graphic Narratives (Bloomsbury, 2013), Digital Seriality (special issue of Eludamos, forthcoming), and Post-Cinema: Theorizing 21st Century Film (REFRAME, forthcoming).
Finally, here are links to the individual abstracts:
Steven Shaviro, “Reversible Flesh”
Patricia Pisters, “The Filmmaker as Metallurgist: Post-Cinema’s Commitment to Radical Contingency”
Adrian Ivakhiv, “Speculative Ecologies of (Post-)Cinema”
Mark B. N. Hansen, “Speculative Protention, or, Are 21st Century Media Agents of Futurity?”
[UPDATE: Full video of the complete panel is now online: here.]
Steven Shaviro, “Reversible Flesh” #SCMS15
[UPDATE: Full video of the complete panel is now online: here.]
Here is the abstract for Steven Shaviro’s paper on the panel “Post-Cinema and/as Speculative Media Theory” at the 2015 SCMS conference in Montréal:
Reversible Flesh
Steven Shaviro (Wayne State University)
FKA twigs has made a series of mesmerizing music videos over the last three years. My talk will concentrate on one of these, “Papi Pacify” (directed by FKA twigs and Tom Beard). The video deals starkly, but also obliquely, with issues of intimacy, trust, sexuality, and violence. The video is shot in continually shifting black and white, with glitter and flash effects, and composed entirely of close-ups of the faces and upper bodies of the artist (often staring directly at the camera) and her partner. The video’s editing rhythms are complex and nonlinear, involving looping via animated GIF effects, together with quick inserts and apparent slow motion. The music combines trip hop and r&b; it is floating and ambient, sung in a breathless near-whisper, with periods of instrumental intensification but no tonal shift or climax. Overall, the video disconcertingly reorders human sexuality, by means of its novel articulation of spacetime relations, of the sensorium, and of the relation between viewer/listener and work. In this way, “Papi Pacify,” and FKA twigs’ audiovisual work more generally, itself functions as a speculative revision of media theory.
Bibliography:
Battan, Carrie (2013). “FKA twigs: Interview.” Pitchfork. http://pitchfork.com/features/rising/9183-rising-fka-twigs/.
Friedlander, Emily (2013). “How FKA twigs is Pushing Female Sexuality Beyond Miley Cyrus and Sinead.” The Fader. http://www.thefader.com/2013/10/14/miley-cyrus-sinead-o-connor-female-se/.
Noakes, Tim (2014). “FKA twigs: Future Shock.” Dazed Digital. http://www.dazeddigital.com/music/article/20259/1/fka-twigs-future-shock.
Author Bio:
Steven Shaviro is the DeRoy Professor of English at Wayne State University. He is the author of The Cinematic Body, Post-Cinematic Affect, and Melancholia, Or, The Romantic Anti-Sublime.
Patricia Pisters, “The Filmmaker as Metallurgist: Post-Cinema’s Commitment to Radical Contingency” #SCMS15
[UPDATE: Full video of the complete panel is now online: here.]
Here is the abstract for Patricia Pisters’s paper on the panel “Post-Cinema and/as Speculative Media Theory” at the 2015 SCMS conference in Montréal:
The Filmmaker as Metallurgist: Post-Cinema’s Commitment to Radical Contingency
Patricia Pisters (University of Amsterdam)
Contemporary film, television series, and visual arts have a particular temporal and narrative aesthetics that shows how the future, always speculative and multiple, has become the dominant time for thinking. I propose calling this aesthetic mode of the digital age “the neuro-image” (Pisters 2012). Following Gilles Deleuze’s movement-images and time-images, neuro-images increasingly present us with time as multiple feedback loops from possible futures, parallel worlds, and complex narrations where subtle differences can cause a world of (micropolitical) variations, different pasts for different futures.
This presentation will look at the ways in which contemporary artists and filmmakers are committed to the radical contingency of the audio-visual archive – committed to revealing hidden dimensions of history and/in our collective audio-visual archive, in order to open up new perspectives and reveal new versions of the past that seem necessary for the future of “a people to come.” In her project The Archival Fourth Dimension, for example, artist Sarah Pierce revisits newsreel archives and proposes to uncover “a different past” in Irish and colonial history. In his installation homage to Stuart Hall, The Unfinished Conversation (2012), John Akomfrah shows how personal and collective archival footage are in a perpetual dialogue where poetry and politics form an intractable bond and history becomes a speculative world of alternative histories. Silvia Kolbowski resurrects Ulrike Meinhof in A Few Howls Again (2010) and speaks up for her corpse, giving a voice to haunting questions of war, violence, and terrorism. And in Zandj Revolution (2013), filmmaker Tariq Teguia makes a journey from Algeria to Lebanon, Palestine, Iraq, and Greece to find inspiration not only in a past revolution – the ninth-century revolution of the Zandj slaves in Iraq – but also in a rebellious and migratory cinematographic style that captures and foreshadows the spirit of the Arab revolution.
Looking at examples such as these, the presentation aims to show how filmmakers become “metallurgists” (Deleuze and Guattari 1994) following the matter-flows of the archive, bending it in concrete forms that can escape from the mnemonic depths and take on a new life, an afterlife. As a politics and a cinematic aesthetics, this undertaking becomes a never-ending story of “trying again, failing again, failing better” with a radical and speculative commitment to the contingencies of history.
Bibliography:
Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. London: The Athlone Press, 1994.
Eisenstein, Sergei. The Film Sense. Trans. Jay Leyda. San Diego, New York & London: Harcourt Brace & Company, 1947.
Shaviro, Steven. Post-Cinematic Affect. Winchester, UK: Zero Books, 2010.
Pisters, Patricia. The Neuro-Image: A Deleuzian Film-Philosophy of Digital Screen Culture. Stanford: Stanford University Press, 2012.
Author Bio:
Patricia Pisters is professor of film studies in the department of Media Studies at the University of Amsterdam. She is one of the founding editors of Necsus: European Journal of Media Studies. Publications include The Matrix of Visual Culture: Working with Deleuze in Film Theory (Stanford University Press, 2003) and The Neuro-Image: A Deleuzian Film-Philosophy of Digital Screen Culture (Stanford University Press, 2012).
Adrian Ivakhiv, “Speculative Ecologies of (Post-)Cinema” #SCMS15
[UPDATE: Full video of the complete panel is now online: here.]
Here is the abstract for Adrian Ivakhiv’s paper on the panel “Post-Cinema and/as Speculative Media Theory” at the 2015 SCMS conference in Montréal:
Speculative Ecologies of (Post-)Cinema
Adrian Ivakhiv (University of Vermont)
Three sets of intellectual developments frame this paper: (1) debates over the “end of cinema” (and rise of “post-cinema”) in the wake of digital media; (2) recognition across diverse fields that global ecological change—especially, though not solely, impending climate change—is forcing a rearticulation of disciplinary goals and broad societal values; and (3) an upsurge in speculative philosophy, including film and media philosophy, that reconceptualizes sociality, materiality, and relationality in diverse and mutually imbricated ways.
This paper sets out to articulate these three developments together. The emergence of cinema as the “eye of the [twentieth] century” (Casetti 2008) and its subsequent mutation into something different at the beginning of the twenty-first, and the emergence of ecology as a dominant way of understanding the human-Earth relationship, have not yet been brought and thought together in a sustained way. To do this, I propose a speculative model of cinema, technology, and reality—a process-relational, semiotic-machinic, and “morphogenetic” model rooted in Whitehead, Peirce, and Deleuze/Guattari—to make sense of the ways in which digital cinema reaffirms the lively, kinematic animacy of all things cinematic and extra-cinematic.
Articulating the connections between cinema, semiosis, and materiality makes it possible to conceive of cinema (including digital cinema) as a particular political-ecological articulation of carbon-based life (or biosemiosis). But life, or the semiotic (in Peirce’s terms), exceeds the living. It is machinic (in Deleuzo-Guattarian terms), networked (in Bruno Latour’s), morphogenetic and perpetually differentiating (Deleuze/DeLanda). In this light, I consider what a “post-carbon” cinematic materiality, a materiality beyond the era of petrochemicals—the Capitalocene—might look like, and how digitality, with its proliferation of new forms and its shift to technologies of “the cloud,” affects the possibilities for reclaiming a semiotic commons.
Bibliography:
Bozak, Nadia. The Cinematic Footprint: Lights, Camera, Natural Resources. New Brunswick, NJ: Rutgers University Press, 2011.
Casetti, Francesco. Eye of the Century: Film, Experience, Modernity. Trans. E. Larkin with J. Pranolo. New York: Columbia University Press, 2008.
Ivakhiv, Adrian. Ecologies of the Moving Image: Cinema, Affect, Nature. Waterloo, Ontario: Wilfrid Laurier University Press, 2013.
Mullarkey, John. Refractions of Reality: Philosophy and the Moving Image. Basingstoke, Hampshire: Palgrave Macmillan, 2009.
Shaviro, Steven. Post-Cinematic Affect. Winchester: Zero Books, 2010.
Author Bio:
Adrian Ivakhiv is Professor of Environmental Thought and Culture at the University of Vermont. His research focuses on the intersections of ecology, culture, media, affect, and identity. His books include Ecologies of the Moving Image: Cinema, Affect, Nature (2013) and the forthcoming Why Objects Fly Out the Window: An Eventology Manifesto, in the Whiff of its Passing. He blogs at Immanence: EcoCulture, GeoPhilosophy, MediaPolitics.
Mark B. N. Hansen, “Speculative Protention, or, Are 21st Century Media Agents of Futurity?” #SCMS15
[UPDATE: Full video of the complete panel is now online: here.]
Here is the abstract for Mark Hansen’s paper on the panel “Post-Cinema and/as Speculative Media Theory” at the 2015 SCMS conference in Montréal:
Speculative Protention, or, Are 21st Century Media Agents of Futurity?
Mark B. N. Hansen (Duke University)
In his effort to develop a philosophical account of time-consciousness in the media age, Bernard Stiegler has invoked cinema (as a stand-in for global, realtime, audiovisual fluxes) as the media object par excellence, the technical temporal object that brokers, models, and operates as surrogate for the temporalization responsible for conscious life. Since the publication of the first volume of Stiegler’s Technics and Time, critics have responded to Stiegler’s project with a mix of enthusiasm and skepticism: enthusiasm for the reworking of seemingly moribund themes of deconstruction into a powerful engagement with contemporary media technologies; skepticism concerning the focus on consciousness and representation as the privileged agent and domain of media’s operationality. One particularly striking consequence of Stiegler’s focus on cinema as temporal technical object is a certain temporal bias toward the past, and a recapitulation of the impasse of protention that plagued Husserl’s account of time-consciousness. So long as protention (the “just-to-come” futurity that is part of the sensory present on the Husserlian model) is taken to be symmetrical to, and indeed is modelled on or derived from, retention (the “just-past” of the sensory present), it cannot but be restricted to something that (1) is already possible from the standpoint of the present, is a mode of possibility belonging to the present, and (2) is representational in the sense of being a “content” of consciousness.
The wide-ranging proliferation of so-called “new media” technologies (what I have called 21st century media in my recent work) affords the opportunity to expand the technical off-loading of time-consciousness that informs the core of Stiegler’s neo-Husserlian thought. Most crucially, 21st century media technologies break the correlation of media with conscious cognition, and thus expand the domain of conjunction to what I have called “worldly sensibility” (the meeting of embodied sensibility and worldly impressionality). In my paper, I shall explore two key aspects of this expansion that directly concern the operationality of “speculative media theory”: 1) how the shift from consciousness to sensibility liberates protentionality from its twin restrictions (possibility of the present and representation of consciousness); and 2) how this shift requires a speculative mode of theorization that is an immediate function of the uncertainty and unrepresentability of the future.
Bibliography:
Hansen, Mark B. N. Feed-Forward: On the Future of Twenty-First-Century Media. Chicago: U of Chicago P, 2014.
Stiegler, Bernard. Technics and Time, Vol. 3: Cinematic Time and the Question of Malaise. Stanford: Stanford UP, 2010.
Whitehead, Alfred North. Process and Reality: An Essay on Cosmology. New York: The Free Press, 1978.
Author Bio:
Mark Hansen teaches in the Literature Program and in Media Arts & Sciences at Duke University. He is author of Embodying Technesis: Technology Beyond Writing, New Philosophy for New Media, and Bodies in Code, and has co-edited The Cambridge Companion to Merleau-Ponty, Emergence and Embodiment: New Essays on Second-Order Systems Theory, and Critical Terms for Media Studies. His book Feed-Forward: On the Future of Twenty-First-Century Media will be published by Chicago in Fall 2014.
“Digital Seriality” — Panel at #SCMS15 in Montreal
At the upcoming conference of the Society for Cinema and Media Studies (March 25-29, 2015 in Montréal), I will be participating in a panel on “Digital Seriality,” co-chaired by Andreas Jahn-Sudmann and Scott Higgins, along with Dominik Maeder and Daniela Wentz.
Here is our panel description, along with links (below) to the abstracts for the various papers:
Digital Seriality
Seriality and the digital are key concepts for an understanding of many current forms, texts, and technologies of media, and they are implicated in much broader media-historical trajectories as well. Beyond the forms and functions of specific cultural artifacts, they are central to our global media ecology. Surprisingly, though, relatively few attempts have been made at thinking the digital and the serial together, as intimately connected perspectives on media. This is precisely the task of the present panel. On the one hand, the papers interrogate the serial conditions, forms, and effects of digital culture; on the other hand, they question the role of the digital as technocultural embodiment, determinant, or matrix for serialized media aesthetics and practices. The panel thus brings together heretofore isolated perspectives from studies of new media culture (cf. Manovich 2001, Jenkins 2006) and emerging scholarship on seriality (cf. Kelleter 2012, Allen and van den Berg 2014).
Seriality and digitality are understood here in terms not only of their narrative/representational manifestations but also their technical-operational impacts on our media environments. Accordingly, Shane Denson and Andreas Jahn-Sudmann’s paper looks to the case of the Xbox One in order to show how computational platforms affect the serial forms and practices emerging within, among, and around digital games (“intra-,” “inter-,” and “para-ludic” serialities; cf. Denson and Jahn-Sudmann 2013), but also how these platforms inscribe themselves – as a serialized factor in their own right – into the parameters of computational expression. Whereas video games serve here to highlight the differences between digital and pre-digital serial forms, Dominik Maeder approaches things from the opposite direction, arguing that the interfaces of Netflix, Hulu, and other digital streaming services embody a form of spatio-temporal serialization that, already anticipated by TV series, is closely related to (pre-digital) televisual seriality. As a complementary perspective, Daniela Wentz’s paper shows how certain TV series anticipate their own digital afterlives in the form of fan-made gifs and memes. Finally, Scott Higgins provides an “archeological” perspective, exploring the ludic dimensions of the operational aesthetic, which anticipates computer games in pre-digital forms, thus offering a fruitful case for rethinking digital seriality from a media-comparative perspective.
Bibliography
Allen, Robert, and Thijs van den Berg, eds. Serialization in Popular Culture. London: Routledge, 2014.
Denson, Shane, and Andreas Jahn-Sudmann. “Digital Seriality: On the Serial Aesthetics and Practice of Digital Games.” Eludamos: Journal for Computer Game Culture 7.1 (2013): 1-32.
Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006.
Kelleter, Frank, ed. Populäre Serialität: Narration – Evolution – Distinktion. Zum seriellen Erzählen seit dem 19. Jahrhundert. Bielefeld: Transcript, 2012.
Manovich, Lev. The Language of New Media. Cambridge, MA: MIT Press, 2001.
Finally, here are links to the individual abstracts:
Dominik Maeder, “Serial Interfaces: Publishing and Programming Television on Digital Platforms”
Daniela Wentz, “The Infinite Gesture: The Serial Culture of the Gif”
Scott Higgins, “Ludic Operations: Play and the Serial Action Sequence”