Discorrelated Images at UCSB Media Arts and Technology Seminar Series, Oct. 26, 2020

Next Monday, October 26, 2020 (1pm Pacific time), I’ll be speaking about my book Discorrelated Images at the Media Arts and Technology Seminar Series at University of California Santa Barbara. Of course, the event will be online via Zoom: https://ucsb.zoom.us/j/87911890791

Rendered Worlds: New Regimes of Imaging — October 23, 2020

The Digital Aesthetics Workshop is extremely excited to announce a collaborative panel with UC Davis’ Technocultural Futures Research Cluster.

‘Rendered Worlds: New Regimes of Imaging’ will take place on Friday, October 23 at 10am PDT. Co-organized by teams from Stanford University and University of California Davis, this event brings together a transatlantic group of scholars to discuss the social, historical, technical, and aesthetic entanglements of our computational images.

Talking about their latest work will be Deborah Levitt (The New School), Ranjodh Singh Dhaliwal (UC Davis and Universität Siegen), Bernard Dionysius Geoghegan (King’s College London), and Shane Denson (Stanford). Hank Gerba (Stanford) and Jacob Hagelberg (UC Davis) will co-moderate the round-table. Please register at tinyurl.com/renderedworlds for your zoom link!

We hope to see you there! If you have any questions, please direct them to Ranjodh Singh Dhaliwal (rjdhaliwal at ucdavis dot edu).

Sponsored by the Stanford Humanities Center. Made possible by support from Linda Randall Meier, the Mellon Foundation, and the National Endowment for the Humanities.

“How Language Became Data” — Xiaochang Li at Digital Aesthetics Workshop, May 26, 2020 (via Zoom)

Poster by Hank Gerba

The third Digital Aesthetics Workshop event of the Spring quarter is coming up next week: on May 26th, at 5 PM, we’ll host a workshop with Xiaochang Li, via Zoom. Please email Jeff Nagy (jsnagy at stanford dot edu) for the link by May 25th.

Professor Li will share research from her current project, How Language Became Data: Speech Recognition Between Likeness and Likelihood. In 1971, a team of researchers at IBM began to reorient the field of automatic speech recognition away from the study of human speech and language and towards a startling new mandate: “There’s no data like more data.” In the ensuing decades, speech recognition was refashioned as a problem of large-scale data acquisition and classification, one that was distinct from, if not antithetical to, explanation, interpretability, and expertise. The history of automatic speech recognition invites a glimpse into how making language into data helped make data into an imperative, opening the door for the expansion of algorithmic culture into everyday life.

Xiaochang Li is an Assistant Professor of Communication at Stanford University. Her research examines questions surrounding the relationship between information technology and knowledge production and its role in the organization of social life.

“Bit Field Black” — Kris Cohen at Digital Aesthetics Workshop/CPU, May 19, 2020 (via Zoom)

Poster by Hank Gerba

The Digital Aesthetics Workshop is excited to announce our second event of the Spring quarter: on May 19th, at 5 PM, we’ll host a workshop with Kris Cohen, via Zoom. This workshop has been co-organized with Stanford’s Critical Practices Unit (CPU), whom you can (and should!) follow for future CPU events here. Please email Jeff Nagy (jsnagy at stanford dot edu) by May 18th for the Zoom link.

Professor Cohen will discuss new research from his manuscript-in-progress, Bit Field Black. Bit Field Black accounts for how a group of Black artists working from the Sixties to the present were addressing, in ways both belied and surprisingly revealed by the language of abstraction and conceptualism, nascent configurations of the computer screen and the forms of labor and personhood associated with those configurations.

Professor Cohen is Associate Professor of Art and Humanities at Reed College. He works on the relationship between art, economy, and media technologies, focusing especially on the aesthetics of collective life. His book, Never Alone, Except for Now (Duke University Press, 2017), addresses these concerns in the context of electronic networks.

A poster with all the crucial information is attached for lightweight recirculation. 

Thank you to all of the very many of you who logged on for our first Spring workshop with Sarah T. Roberts. We hope you will also join us on the 19th, and keep an eye out for an announcement of our third Spring workshop, with Xiaochang Li, coming up on May 26th.

“Behind the Screen” — Sarah T. Roberts at Digital Aesthetics Workshop, April 21, 2020 (via Zoom)

Poster by Hank Gerba

We’re excited to announce a last-minute workshop with Sarah T. Roberts, next Tuesday, April 21st, from 5 to 7 PM. The workshop will take place via Zoom; please email Jeff Nagy (jsnagy at stanford dot edu) for the link.

Professor Roberts is the leading authority on commercial content moderation, the mostly invisible, increasingly globalized labor that keeps digital platforms free(-ish) of hate speech, pornography, and other kinds of unwanted material. Her research has become even more crucial over the last few months, as we increasingly spend the bulk of our professional and social lives online, and we hope you’ll join us to discuss it.

Behind the Screen: Content Moderation in the Shadows of Social Media

Faced with mounting pressures and repeated, very public crises, social media firms have taken a new tack since 2017: to respond to criticism of all kinds and from numerous quarters (regulators, civil society advocates, journalists, academics and others) by acknowledging their long-obfuscated human gatekeeping workforce of commercial content moderators. Additionally, these acknowledgments have often come alongside announcements of plans for exponential increases to that workforce, which now represents a global network of laborers – in distinct geographic, cultural, political, economic, labor and industrial circumstances – conservatively estimated in the several tens of thousands and likely many times that. Yet the phenomenon of content moderation in social media firms has been shrouded in mystery when acknowledged at all. In this talk, Sarah T. Roberts will discuss the fruits of her decade-long study of the commercial content moderation industry, and its concomitant people, practices and politics. Based on interviews with workers from Silicon Valley to the Philippines, at boutique firms and at major social media companies, she will offer context, history and analysis of this hidden industry, with particular attention to the emotional toll it takes on its workers. The talk will offer insights about potential futures for the commercial internet and a discussion of the future of globalized labor in the digital age.

Sarah T. Roberts is an assistant professor of Information Studies at the UCLA School of Education and Information Studies, specializing in Internet culture, social media, and the intersection of media, technology and society. She is founding co-director, along with Dr. Safiya Noble, of the forthcoming UCLA Center for Critical Internet Inquiry. Her book, Behind the Screen: Content Moderation in the Shadows of Social Media, was released in June 2019 (Yale University Press).

The Sur/Render of Perception (SCMS 2020 Coronavirus Special)

“Though the render farm’s massive energy consumption expedites climate change, perception’s surrender may be a necessary step in loosening subject-oriented individualisms in order to encounter our environment and one another in this time of urgency.”

I was set to participate in a panel on “Rendering: Times, Powers, Perceptions” with Deborah Levitt, Vivian Sobchack, and Joel McKim at the 2020 Conference of the Society for Cinema and Media Studies in Denver. Due to the novel coronavirus pandemic, the conference was cancelled. I had already written my paper, which also serves to introduce one of the topics I deal with in my book Discorrelated Images (out in October with Duke UP), so here it is:

The Sur/render of Perception

“Render farms,” or high-performance networked computer clusters designed to churn out computer-generated imagery, epitomize a transformed relation between image and perception. In these batteries of computers, images are not visual phenomena but pure information, mathematical quantities passed between black-box machines, radically discorrelated from subjective perception. Indeed, the operation of the render farm signals a fundamental surrender of perception’s primacy of place—a giving up or giving over of human sense to the post-perceptual processes of machine vision, or the regime of what Trevor Paglen calls “invisible images.” To render and to surrender are both conceptually and experientially intertwined, and because the work of rendering (which also means repeating) does not stop at the farm, neither do the consequences for perception: the rendering process is rehearsed in every computationally facilitated playback, as video codecs re-calculate images from predictive vectors, not integral photograms. Each playback is thus a new rendition (the prefix re- in “render” corresponding to the -back of “playback”); rendering as repetition never comes to rest at a final, integral object or experience. And every viewing thus involves a new surrender of perception.

Meanwhile, the prefix sur- complicates iteration with an ambiguous gesture of transcendent return (giving back over). Surrender presumes a prior state of ownership, which may not correspond to empirical facts but signifies an almost metaphysical relation of priority. The surrender of perception involves giving perceptual experience and its image-object back over to nonperceptual process, matter, metabolism—to the affective processes of pre-personal embodiment that have always subtended subjective perception, now mirrored in the black box of the computer. This act of sur/render is dangerous (possibly subjecting human agency to microtemporal control), but perhaps it holds a liberatory potential as well: opening human experience back up to cosmic time (such as Deleuze hoped from the time-image) and putting us in touch with ecological forces that transductively render perceptual subjects and objects alike. Though the render farm’s massive energy consumption expedites climate change, perception’s surrender may be a necessary step in loosening subject-oriented individualisms in order to encounter our environment and one another in this time of urgency. Could this be the multistable gift (the Proto-Indo-European *do-) at the heart of sur/render?

My forthcoming book Discorrelated Images grapples with these questions across a range of moving-image forms, genres, and media. One especially potent site for thinking about the surrender of perception is in the many contemporary images of the end of the world and of extinction—the ultimate scene of discorrelation, where the subject of perception is obliterated and the image extinguished. Ironically, such scenes are rendered in spectacular computer-generated images—thus enacting the give and take of sur/render, or the production of images that both thematize and instantiate their own discorrelation from human perception.

I want to look at one example of this thematic, material, and ultimately ethical entwinement in a recent videogame, NieR:Automata. In this interactive medium, not only perception but also action is tightly coupled with real-time image generation. That is, rendering transductively articulates action and image together: they are caught up in microtemporal circuits connecting user input and computational operations that feed forward into processual screen events that elicit further inputs and entrain players’ awareness and agency in a temporal becoming that was not pre-recorded but is happening in a precariously generated now. Here, human protention and the computer’s microtemporality commingle to the point of indistinction, raising specifically ethical questions about the determination and exercise of agency, questions that cannot be answered with an appeal to an integral, foundational subject. Against this background, NieR:Automata channels the medium’s ethical questions into an exemplary existential probing of agency, image, and world in the light of extinction, self-reflexively relating its own computational image-rendering to questions of perceptual surrender.

Released for PlayStation 4 and Microsoft Windows in 2017 and for Xbox One the following year, NieR:Automata is a third-person, open-world action role-playing game developed by PlatinumGames under the direction of acclaimed videogame director Yoko Taro. The JRPG draws strong visual and thematic influence from manga and anime, including melodramatic characters with stylized hair, giant robots, and a sword-wielding protagonist in thigh-high boots and a somewhat revealing dress. The game’s post-apocalyptic setting, with abandoned cityscapes reminiscent of the History Channel’s Life After People series, contrasts interestingly with its strangely soothing music, while gameplay alternates between a variety of generic forms and conventions: battling sentient machines in an expansive three-dimensional world, operating flying vehicles in overwhelming “bullet-hell” situations, navigating jump-and-run-style action in side-scrolling 2D platformer sequences, and hacking computers in a minimalistically rendered abstract data space reminiscent of 8-bit era games. The game also features an elaborate narrative, elements of which are uncovered over the course of twenty to thirty (or potentially many more) hours of play. The basic scenario is that thousands of years ago, Earth was invaded by aliens who brought with them hostile robotic machines, eventually forcing humans to abandon the planet and set up a base of operations on the moon; since then, the humanoid androids of the YoRHa forces have been fighting the more industrial-looking machines, waging a completely nonhuman proxy war on behalf of the Council of Humanity. Now, the player joins the ranks of the androids to retake the planet, but along the way we encounter guilt-ridden machines pondering the meaning of existence and establishing religions and novel cultural forms, thus calling into question human exceptionalism and anthropocentric notions of value. 
Later, we also find out that humanity has in fact been extinct for many millennia, that the ongoing war was pre-programmed, every battle the result of machinic directives guiding the behavior of combatants on both sides.

Driving home this thematization of determinism and free will, the game is full of heavy-handed references to philosophers and to existentialism in particular: the leader of a village of pacifist machines is named Pascal, and he reads Nietzsche. A machine named Jean-Paul Sartre lives there as well, spouting slogans like “existence precedes essence” to anyone who will listen. The android protagonist who serves as the player’s initial onscreen avatar is named 2B—provoking the question, “or not 2B?” All of this can seem rather cheesy, and frankly it is. But it sets the stage for the game’s more substantial probing of the existential parameters of the player’s interface with the computer and the question of agency in a world where extinction is not only assumed as a historical fact but linked to the material experience of discorrelation at every microsecond in the game’s real-time hyperanimated images. Thus, it is perhaps in spite of the game’s overt existentialism that NieR:Automata becomes a powerful mediator of post-extinction ethics in a world of radically machinic images.

Indeed, much of the most important ethical probing takes place in the low-level circuit of input-output that positions the image in-between the computer’s processes and the user’s actions. Upon initially starting up the game, we are presented with a black screen with the words “LOADING – BOOTING SYSTEM…” in the top left corner and a logo in the middle of the screen with the words “YoRHa” and “For the Glory of Mankind.” The results of system checks and other stats scroll down on the left, vaguely reminiscent of a Linux boot process, while the text twitches with simulated glitches. This boot screen, which will reappear throughout the game, is neither completely diegetic nor fully extra-diegetic; it will be narrativized as the android’s own boot process whenever “consciousness data” is uploaded to the server or restored to a new body, but it also serves to mask those moments when the player’s computer or gaming console has to load information from memory.

This semi-diegetic boot screen ushers in a broad problematization of phenomenological relations in the game/player system—an unsettling of relations expanded in battle sequences, which foreground the image’s thoroughgoing repositioning as the object not of perception but of cybernetic processing. Playing as the female android 2B, whose body we see in third-person perspective, we find that when we take damage from an opponent the screen as a whole glitchily shakes, the image “tears,” and we see blocky color separation effects (red and cyan layers bleeding out around the edges of objects, reminiscent of analog 3D cinema); when, on the other hand, 2B dodges an attack, the android’s body quickly splits apart, warps, and flashes as if hit by lightning. These visual effects are thus distributed across the subjective and objective poles of the image, reminding us of the computational totality of the situation—of our real situation as players as much as the fictional situation of the computer-driven characters.

As a way of “making sense” of this situation, the game stages a constant negotiation between perceptual and computational spaces, figured centrally as shifts between the embodied action of battle and open-world exploration, on the one hand, and the “hacking space” in which computers (androids and machines) interface directly with one another or with the network, on the other. This space of disembodied data, in which the player steers an abstract icon reminiscent of an early arcade-game spaceship and shoots various enemy icons in order to “hack” the opponent, offers a displaced representation (à la cyberpunk imaginations of the network) of the game’s computational system more generally and thus provides a representational and perceptual form for the discorrelated processing at the very heart of all the game’s procedurally generated images. In this sense, these too are semi-diegetic events that involve us more thoroughly—because actively—in the negotiation between perception and computation.

Later in the game, 2B’s male android partner 9S hacks into the network and contracts a computer virus that distorts our vision as mediated by the screen. He also learns the secret of humanity’s millennia-long extinction, which provokes a sort of existential crisis for him (and for us?). But it is not until the end of the third playthrough, or after some twenty hours or more of gameplay altogether, that we encounter the game’s most powerful questioning—and practical enactment—of the ethical consequences of the intertwining of agency and image in the face of extinction. By this time, all the other androids have succumbed to the virus; the last two prepare to kill each other, effectively completing the program of extinction by ending now nonhuman sentience as well. Afterwards, the credits roll, but the text starts glitching as we learn that there is “data noise present in stream” and “personal data leaking out.” Apparently, there are still traces of the androids’ “consciousness data” in the system, which is in the process of self-deletion. We are presented with an option to try to save them, which if we choose to pursue it causes a spaceship identical to the one seen in “hacking space” to appear, and we are tasked with shooting up the credits scrolling down the screen. 

The textual entries are rendered as enemy targets, emitting an astounding barrage of projectiles in all directions. Contact with one of the latter causes the player to lose a “life,” like in old arcade games, and having lost three lives the player is asked whether they would like to connect to “the network”—a somewhat ominous proposition, given the fact that countless hours have just been spent battling the enemy machines’ network. The player can click “no” and refuse defeat, repeating the battle multiple times, but victory seems impossible, and each time the player loses the computer taunts: “Do you give up?” “Is it all pointless?” “Do you admit there is no meaning to this world?”

At some point, there is no option but to connect to the network, in which case we receive a message like: “8X ‘I did my best. One thing is certain: I’m rooting for you.’ USA.” Fragments of other messages are visible in the background. We then receive a rescue offer, and multiple spaceships join ours, multiplying our firepower and our chance of survival. With each collision, one of the spaceships is destroyed, and we read messages such as: “Crs501’s data has been lost.” “minato’s data has been lost.” “Yambu’s data has been lost.” What we are witnessing, it turns out, are the traces of other players’ savefile data—the computational memory that parallels or even replaces human memory and enables players to replay or restore the game from an earlier save state—being sacrificed to help other players connected to the network.

Upon completion of the task with the assistance of these anonymous helpers, we are prompted to send a message of our own to other players. Then, another prompt: “Please respond to this query. You, X, faithful player of this title, have lost your life multiple times to make it this far. You have faced crushing hardship, and suffered greatly for it. Do you have any interest in helping the weak?” The possible answers are a simple “Yes” or “No.” And while the theatrics of these queries may be slightly off-putting, a significant ethical choice is being framed here. An affirmative answer triggers the following message: “Selecting this option enables you to save someone somewhere in the world. However, in exchange, you will lose all of your save data. Do you still wish to rescue someone—a total stranger—in spite of this?” The player is given several opportunities to reconsider, along with further warnings that really everything—all our progress in the game, items and weapons obtained, skills and intelligence unlocked, and generally all of the labor we have invested in the game—will be lost forever in an act of self-sacrifice. If we persist, the computer responds: “Very well. In exchange for all of your data, I will convey your will to this world.” Then we see the game’s configuration menus, all of the places and save points on the map disappearing one after another, followed by the options under “quests,” “items,” “weapons,” and so on. Finally, the options under “system” are deleted, all of the save states disappearing until there is nothing left but a blank slate. The image fades to white, and we are informed: “All of your data has been deleted.” After a short message thanking the player for playing, the “save” indicator appears in the top right corner of the screen. We read “Save complete” and then “Connecting to the network…” Finally, a glitchy NieR:Automata logo comes into focus, along with the message “Press any button.”

In this way, the game frames a non-trivial ethical decision, whether to sacrifice an indistinctly computational and experiential memory and pass it on to those who come after us. The significance of the choice, beyond its overt existentialist framing, lies in the player’s real investment of value in the data to be sacrificed, which seals the circuit between perception and computation. In sacrificing their data, the player also sacrifices the image, which dissolves before their very eyes. This exercise of agency at once completes the destruction of the world and enables its continuation for some unknown player elsewhere in the world. A fine balance is struck between individual identity and anonymity, neither collapsing into solipsistic solitude nor constituting a robust collectivity. The choice to sacrifice oneself for the sake of an unknown, future other frames a symbolic restoration of intergenerational continuity, or of the promise made to future generations, which is required (as a necessary but not sufficient condition) if we are to avoid climate catastrophe. Playing videogames will clearly not avert the threat of extinction, but playing in the shadow of planetary demise—and connecting the computationally rendered image with an existential surrender of perception—just might help restore the moral gravity of our situation.

Exploring Cinematic Mixed Realities

Exploring Cinematic Mixed Realities: Deformative Methods for Augmented and Virtual Film and Media Studies

Arguably, all cinema, with its projection of three-dimensional spaces onto a two-dimensional screen, is a form of mixed reality. But some forms of cinema are more emphatically interested in mixing realities—like Hale’s Tours (dating back to 1904), which staged its kinesthetic, rollercoaster-like spectacles of railway travel inside of a train car that rocked back and forth but otherwise remained stationary. Here the audience of fellow “passengers” experienced thrills that depended not so much on believing as on corporeally feeling the effects of the simulation, an embodied experience that was at once an experience of simulated travel and of the technology of simulation. Evoking what Neil Harris has called an “operational aesthetic,” attention here was split, as it is in so many of our contemporary augmented and virtual reality experiences, between the spectacle itself and its means of production. That is, audiences are asked both to marvel at the fictional scenario’s spectacular images and, as in the case of the “bullet time” popularized a century later by The Matrix, to wonder in amazement at the achievement of the spectacle by its underlying technical apparatus. The popularity of “making of” videos and VFX reels attests to a continuity across cinematic and computational (or post-cinematic) forms of mixed reality, despite very important technological differences—including most centrally the emergence of digital media operating at scales and speeds that by far exceed human perception. Seen from this angle, part of the appeal—and also the effectiveness—of contemporary AR, VR, and other mixed reality technologies lies in this outstripping of perception, whereby the spectacle mediates to us an embodied aesthetic experience of the altogether nonhuman dimensionality of computational processing. But how, beyond theorizing historical precursors and aesthetic forms, can this insight be harnessed practically for the study of film and moving-image media?

Taking a cue from Kevin L. Ferguson’s volumetric explorations of cinematic spaces with the biomedical and scientific imaging software ImageJ, I have been experimenting with mixed-reality methods of analysis and thinking about the feedback loops they initiate between embodied experience and computational processes that are at once the object and the medium of analysis. Here, for example, I have taken the famous bullet-time sequence and imported it as a stack of images into ImageJ, using the 3D Viewer plugin to transform what Gilles Deleuze called cinema’s presentation of a “bloc of space-time” into a literal block of bullet-time. This emphatically post-cinematic deformation uses transparency settings to gain computational insight into the virtual construction of a space that can be explored further in VR and AR settings as abstract traces of informational processing. Turned into a kind of monument that mixes human and computational spatiotemporal forms, this is a self-reflexive mixed reality that provides aesthetic experience of low-level human-computational interfacing—or, more pointedly, that re-constitutes aesthesis itself as mixed reality.
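The basic move described above, importing a frame sequence as a stack and treating it as a navigable volume, can be approximated outside ImageJ in a few lines of NumPy. This is a minimal sketch under stated assumptions (synthetic frames stand in for actual exported stills, and the function names are my own), not the ImageJ pipeline itself:

```python
import numpy as np

# Sketch of the "block of bullet-time" idea: stack 2D frames into a
# 3D volume (time x height x width), then reslice the volume along an
# orthogonal axis, roughly what ImageJ's stack tools and the 3D Viewer
# plugin make explorable interactively.

def frames_to_block(frames):
    """Stack a sequence of 2D grayscale frames (H x W) into a T x H x W volume."""
    return np.stack(frames, axis=0)

def temporal_slice(block, column):
    """Cut the volume along time at a fixed image column, returning an
    H x T image whose horizontal axis is time: a literal slice through
    the 'bloc of space-time'."""
    return block[:, :, column].T

# Synthetic stand-in for extracted stills: frame t is uniformly bright
# with pixel value t, so the temporal structure is easy to verify.
frames = [np.full((8, 8), t, dtype=np.uint8) for t in range(5)]
block = frames_to_block(frames)         # shape (5, 8, 8)
slice_image = temporal_slice(block, 3)  # shape (8, 5): height x time
```

In ImageJ itself the equivalent steps are File > Import > Image Sequence followed by the 3D Viewer plugin; the NumPy version simply makes the underlying data structure, a three-dimensional array of samples, explicit.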

Clearly, this is an experimental approach that is not interested in positivistic ideas of leveraging digital media to capture and reconstruct reality, but instead approaches AR and VR technologies as an opportunity to transform and re-mix reality through self-reflexively recursive technoaesthetic operations. Here, for example, I have taken the bullet-time sequence, produced with the help of photogrammetric processes along with digital smoothing and chromakeying or green-screen replacement, and fed it back into photogrammetry software in order to distill a spatial environment and figural forms that can be explored further in virtual and augmented scenarios. Doing so does not, of course, present to us a “truth” understood as a faithful reconstruction of pro-filmic reality. On the contrary, the abstraction and incoherence of these objects foreground the collision of human and informatic realities and incompatible relations to time and space. If such processes have analytical or theoretical value, it resides not in a positivistic but rather a deformative relation to data, both computational and experiential. Indeed, the payoff, as I see it, of interacting with these objects is in the emergence of a new operational aesthetic, one that transforms the original operational aesthetic of the scenario—its splitting of attention between spectacle and apparatus—and redirects it to a second-order awareness of our involvement in mixed reality as itself a volatile mixture of technoaesthetic forms. Ultimately, this approach questions the boundaries between art and technology and reimagines the “doing” of digital media theory as a form of embodied, operational, and aesthetic practice.

“Aesthetics of Discorrelation” and “Exploring Cinematic Mixed Realities” — Two Events at Duke University, Feb. 20 and Feb. 21, 2020

This coming week I will be at Duke University for two events:

First, on Thursday, February 20 (5pm, exact location to be determined), I will be giving a talk titled “Aesthetics of Discorrelation” (drawing on work from my forthcoming book Discorrelated Images).

Then, on Friday, February 21 (1-3pm in Smith Warehouse, Bay 4), I will be participating in a follow-up event to the NEH Institute for Virtual and Augmented Reality for the Digital Humanities, or V/AR-DHI. I will present work on “Exploring Cinematic Mixed Realities: Deformative Methods for Augmented and Virtual Film and Media Studies” and participate in a roundtable discussion with other members of the Institute.

Electronic Bodies, Real Selves: Agency, Identification, and Dissonance in Video Games

On February 19, 2020 (10:30-12:00 in Oshman Hall), Morgane A. Ghilardi from the University of Zurich will be giving a guest lecture in the context of my “Digital and Interactive Media” course:

Electronic Bodies, Real Selves: Agency, Identification, and Dissonance in Video Games

Vivian Sobchack asserts that technology affects the way we see ourselves and, as a consequence, the way we make sense of ourselves. She also points to a crisis of the lived body that is to be attributed to the loss of “material integrity and moral gravity.” What are we to do with such an assertion in 2020? Digital media that afford us agency in some form or other––specifically, video games––engender a special relationship between our ‘IRL’ selves and the “electronic” bodies on screen in the formation of what I call the player-character subject. Transgressive acts––such as violent acts––that take place within the system of a game––either in terms of fiction or simulation––bring the unique affective dimensions of that relationship to the fore and prompt us to reflect on ways to make sense of our selves at the intersection of real and simulated bodies.

“Unclean Interface: Computation as a Cleanliness Problem” — Rachel Plotnick at Digital Aesthetics Workshop

Announcing the Digital Aesthetics Workshop’s first event of 2020: On February 11th, at 5 PM in the Stanford Humanities Center’s Watt Common Room, we’ll be hosting Rachel Plotnick, who will share some recent research on cleanliness and computation. 

Dr. Plotnick is an Assistant Professor in the Media School at Indiana University Bloomington. Her (fantastic!) first book, Power Button: A History of Pleasure, Panic, and the Politics of Pushing, is just out from MIT Press.

Here is the abstract for her talk:

Unclean Interface: Computation as a Cleanliness Problem

Histories of computing tend to focus on particular elements of computation (such as the invention of computers, early PC use, interface design, or viruses), but this study aims to approach computing from a novel, alternative angle – mess. From the earliest advent and use of computers, mess has been a particularly thorny problem that gets defined differently in different contexts, across technologies and spaces, and through a variety of computing practices. Computing is inherently messy: screens, mice, disks and keyboards pick up dirt, dust and crumbs; messy bodies touch and handle computers day in and day out; air is full of unclean particles; and problems of humidity, temperature, and static are routine. At the level of software, too, metaphors of cleanliness and dirtiness persist in terms of “clean” design, “dirty” content or data, desktop icon organization, and fears over contagion and contamination from viruses and spam. By beginning from the vantage point of mess, it becomes possible to crystallize a very different history of computing, driven by efforts to contain, control and eliminate dirt, to valorize cleanliness, and to enforce particular protocols, habits, and behaviors. In the messy interface between bodies, environments, software, and hardware one can find persistent concerns about what it means to be “human” and what it means to be “technology.” At the same time, this approach weaves discussions of care, maintenance, and repair into computing, recognizing that innovation is not the only – or always most salient – way to understand human-technology relations, and that in fact much of our everyday interaction with computers takes place in acts of protection and cleaning. Innovation may also occur as a result of particular messiness problems, rather than the other way around.
Lest we think of mess as a computing problem of the past (given ethereal metaphors of “cloud” computing and increasingly encased computing devices), recent examples of messiness demonstrate the ongoing problem of cleanliness in computing. A few representative cases include: Apple’s continued problems with its butterfly keyboard; concerns over “dirty” databases and how to clean big data; and the booming market for cases, screen protectors, and cleaning devices for tablets, laptops and smartphones.