On Thursday, March 31 (5pm Central US time), I’ll be participating along with Jihoon Kim, John Powers, and Deborah Levitt in a panel titled “(Post)Cinematic Operations: Envisioning Cameras from the Bolex to Smart Sensors” at this year’s (virtual) SCMS conference.
My paper is titled “AI, Deep Learning, and the Aesthetic Education of the ‘Smart’ Camera.” Here’s the abstract:
The merging of “smart” technologies with imaging technologies creates a number of conceptual difficulties for the definition of the word camera. It also creates a number of aesthetic and phenomenological problems for human sensation. As I argued in my book Discorrelated Images, the microtemporal speed of computational processing inserts itself in between the production and reception of images and endows the camera with an affective density that distinguishes it from a purely mechanical reproduction of visible forms; in processes like motion prediction and motion smoothing, the distinction between camera and screen itself breaks down as images are generated on the fly during playback. This presentation extends these considerations to think about the ways that artificial intelligence further transforms inherited forms and functions of camera-mediation, both in physical apparatuses (e.g. smartphones and drones) and virtual ones (e.g. software-based image generation in videogames, DeepFake videos, AR, or VR). The analysis proceeds by looking at concrete instances such as the “Deep Fusion” technique employed on recent iPhones, which use the dedicated “Neural Engine” of the A15 Bionic processor to create a composite image combining pixels from a quick burst of digital photos. Beyond merely technical advances, I argue, such “smart” camera processes effect a subtle but significant transformation of our own aesthetic senses, insinuating computational processes in both our low-level processing of sensation and our high-level aesthetic judgments (and thus also algorithmically inserting racial and gendered biases, among other things). A techno-phenomenological analysis, which attends both to technological factors and to the embodied spatiotemporal parameters of human perception, provides the basis for a robustly cultural understanding of the “smart” camera, including its role in “re-educating” our aesthetic senses.
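For readers curious what such burst compositing involves at a basic level, here is a minimal, hypothetical sketch of the general idea (a per-pixel, sharpness-weighted merge of already aligned frames). It is emphatically not Apple’s proprietary Deep Fusion pipeline, whose details are not public; the function names and weighting scheme are my own illustration.

```python
# Minimal, illustrative sketch of burst-photo compositing (NOT Apple's
# proprietary Deep Fusion pipeline): several quick exposures are merged
# per-pixel, weighting each frame by a crude local sharpness estimate.
import numpy as np

def local_sharpness(frame):
    """Rough per-pixel sharpness proxy: magnitude of the image gradient."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return np.hypot(gx, gy) + 1e-6  # small offset avoids all-zero weights

def merge_burst(frames):
    """Sharpness-weighted per-pixel average of an already-aligned grayscale burst."""
    stack = np.stack([f.astype(np.float64) for f in frames])    # (N, H, W)
    weights = np.stack([local_sharpness(f) for f in frames])    # (N, H, W)
    composite = (stack * weights).sum(axis=0) / weights.sum(axis=0)
    return composite.clip(0, 255).astype(np.uint8)

# Example: merge a simulated burst of noisy captures of the same scene.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
burst = [np.clip(scene + rng.normal(0, 10, scene.shape), 0, 255) for _ in range(4)]
composite = merge_burst(burst)
```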
I recently gave a talk with the unwieldy title “Post-Cinematic Seriality and the Algorithmic Conditions of Identity and Difference” for the Center for Inter-American Studies at the University of Graz and the Austro-American Society for Styria in Austria (see the *somewhat creepy, but appropriately so, lol* flyer below); and on October 12, 2021 (at 6:30pm Central European time / 9:30am Pacific US time) I’ll be giving a related talk with the much more wieldy (possibly misleadingly simple) title “Seriality and Digital Cultures” at the University of Zurich’s English Department (see the flyer with registration info above).
Both of these talks are related to a larger project that I am developing, which will link seriality as a medial form (in both popular and artistic media) and as a social form (following the late Sartre, Iris Marion Young, Benedict Anderson, and others) in order to think about the ways that — with the shift from a broadly “cinematic” media regime (with its past-oriented, memorial, recording, retentional functions) to a “post-cinematic” one (with its future-oriented, anticipatory, predictive, protentional functions) — algorithmic media are poised to transform categories and lived realities of class, gender, and race.
DeepFake videos pose significant challenges to conventional modes of viewing. Indeed, the use of machine learning algorithms in these videos’ production complicates not only traditional forms of moving-image media but also deeply anchored phenomenological categories and structures. By paying close attention to the exchange of energies around these videos, including the consumption of energy in their production but especially the investment of energy on the part of the viewer struggling to discern the provenance and veracity of such images, we discover a mode of viewing that recalls pre-cinematic forms of fascination while relocating them in a decisively post-cinematic field. The human perceiver no longer stands clearly opposite the image object but instead interfaces with the spectacle at a pre-subjective level that approximates the nonhuman processing of visual information known as machine vision. While the depth referenced in the name “deep fake” is that of “deep learning,” the aesthetic engagement with these videos implicates an intervention in the depths of embodied sensibility—at the level of what Merleau-Ponty referred to as the “inner diaphragm” that precedes stimulus and response or the distinction of subject and intentional object. While the overt visual thematics of these videos is often highly gendered (their most prominent examples being so-called “involuntary synthetic pornography” targeting mostly women), viewers are also subject to affective syntheses and pre-subjective blurrings that, beyond the level of representation, open their bodies to fleshly “ungenderings” (Hortense Spillers) and re-typifications with far-reaching consequences for both race and gender.
Let me try to demonstrate these claims. To begin with, DeepFake videos are a species of what I have called discorrelated images, in that they trade crucially on the incommensurable scales and temporalities of computational processing, which altogether defies capture as the object of human perception (or the “fundamental correlation between noesis and noema,” as Husserl puts it). To be sure, DeepFakes, like many other forms of discorrelated images, still present something to us that is recognizable as an image. But in them, perception has become something of a by-product, a precipitate form or supplement to the invisible operations that occur in and through them. We can get a glimpse of such discorrelation by noticing how such images fail to conform or settle into stable forms or patterns, how they resist their own condensation into integral perceptual objects—for example, the way that they blur figure/ground distinctions.
The article widely credited with making the DeepFake phenomenon known to a wider public in December 2017 notes with regard to a fake porn video featuring Gal Gadot: “a box occasionally appeared around her face where the original image peeks through, and her mouth and eyes don’t quite line up to the words the actress is saying—but if you squint a little and suspend your belief, it might as well be Gadot.” There’s something telling about the formulation, which makes the success of the DeepFake hinge not on the suspension of disbelief—a suppression of active resistance—but on the suspension of belief—seemingly, a more casual form of affirmation—whereby the flickering reversals of figure and ground, or of subject and object, are flattened out into a smooth indifference.
In this regard, DeepFake videos are worth comparing to another type of discorrelated image: the digital lens flare, which is both to-be-looked-at (as a virtuosic display of technical achievement) and to-be-overlooked (after all, the height of their technical achievement is reached when they can appear as transparently naturalized simulations of a physical camera’s optical properties). The tension between opacity and transparency, or objecthood and invisibility, is never fully resolved, thus undermining a clear distinction between diegetic and medial or material levels of reality. Is the virtual camera that registers the simulated lens flare to be seen as part of the world represented on screen, or as part of the machinery responsible for revealing it to us? The answer, it seems, must be both. And in this, such images embody something like what Neil Harris termed the “operational aesthetic” that characterized nineteenth-century science and technology expos, magic shows, and early cinema alike; in these contexts, spectatorial attention oscillated between the surface phenomenon, the visual spectacle of a machine or a magician in motion, and the hidden operations that made the spectacle possible.
It was such a dual or split attention that powered early film as a “cinema of attractions,” where viewers came to see the Cinematographe in action, as much as or more than they came to see images of workers leaving the factory or a train arriving at the station. And it is in light of this operational aesthetic that spectators found themselves focusing on the wind rustling in the trees or the waves lapping at the rocks—phenomena supposedly marginal to the main objects of visual interest.
DeepFakes also trade essentially on an operational aesthetic, or a dispersal of attention between visual surface and the algorithmic operation of machine learning. However, I would argue that the post-cinematic processes to whose operation DeepFakes refer our attention fundamentally transform the operational aesthetic, relocating it from the oscillations of attention that we see in the cinema to a deep, pre-attentional level that computation taps into with its microtemporal speed.
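As a technical point of reference for the machine-learning operation invoked here: the classic open-source face-swap approach pairs a single shared encoder with one decoder per identity, so that faces of person A can be re-rendered through person B’s decoder. The following PyTorch sketch is only a schematic illustration of that widely documented architecture, with invented layer sizes; it is not the pipeline behind any particular video.

```python
# Schematic sketch of the shared-encoder / dual-decoder autoencoder commonly
# used for face swapping. Layer sizes are arbitrary and purely illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per face/identity

# Training reconstructs each identity through the shared encoder; at inference,
# a frame of face A routed through B's decoder yields the "swapped" image.
face_a = torch.rand(1, 3, 64, 64)             # placeholder input frame
swapped = decoder_b(encoder(face_a))
```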
Consider the way digital glitches undo figure/ground distinctions. Whereas the cinematic image offered viewers opportunities to shift their attention from one figure to another and from these figures to the ground of the screen and projector enabling them, the digital glitch refuses to settle into the role either of figure or of ground. It is, simply, both—it stands out, figurally, as the pixelated appearance of the substratal ground itself. Even more fundamentally, though, it points to the inadequacy, which is not to say dispensability, of human perception and attention with respect to algorithmic processing. While the glitch’s visual appearance effects a deformation of the spatial categories of figure and ground, it does so on the basis of a temporal mismatch between human perception and algorithmic processing. The latter, operating at a scale measured in nanoseconds, by far outstrips the window of perception and subjectivity, so that by the time the subject shows up to perceive the glitch, the “object” (so to speak) has already acted upon our presubjective sensibilities and moved on. This is why glitches, compression artifacts, and other discorrelated images are not even bound to appear to us as visual phenomena in the first place in order to exert a material force on us. Another way to account for this is to say that the visually-subjectively delineated distinction between figure and ground itself depends on the deeper ground of presubjective embodiment, and it is the latter that defines for us our spatial situations and temporal potentialities. DeepFakes, like other discorrelated images, are able to dis-integrate coherent spatial forms so radically because they undercut the temporal window within which visual perception occurs. The operation at the heart of their operational aesthetic is itself an operationalization of the flesh, prior to its delineation into subjective and objective forms of corporeality. The seamfulness of DeepFakes—their occasional glitchy appearance or just the threat or presentiment that they might announce themselves as such—points to our fleshly imbrication with technical images today, which is to say: to the recoding not only of aesthetic form but of embodied aesthesis itself.
In other words: especially as long as they still routinely fail to cohere as seamless suturings of viewing subjects together with visible objects, but instead retain their potential to fall apart at the seams and thus still require a suspension of belief, DeepFake videos are capable of calling attention to the ways that attention itself is bypassed, providing aesthetic form to the substratal interface between contemporary technics and embodied aesthesis. To be clear, and lest there be any mistake about it, I in no way wish to celebrate DeepFakes as a liberating media-technology, the way that the disruption of narrative by cinematic self-reflexivity was sometimes celebrated as opening a space where structuring ideologies gave way to an experience of materiality and the dissolution of the subject positions inscribed and interpellated by the apparatus. No amount of glitchy seamfulness will undo the gendered violence inflicted, mostly upon women, in involuntary synthetic pornography. Not only that, but the pleasure taken by viewers in their consumption of this violence seems to depend, at least in part, precisely on the failure or incompleteness of the spectacle: what such viewers desire is not to be tricked into actually believing that it is Gal Gadot or their ex-girlfriend that they are seeing on the screen, but precisely to see a fake likeness or simulation, still open to glitches, upon which the operational aesthetic depends. Nevertheless, we should not look away from the paradoxical opening signaled by these viewers’ suspension of belief. The fact that they have to “squint a little” to complete the gendered fantasy of domination also means that they have to compromise, at least to a certain degree or for a short duration, their subjective mastery of the visual object, that they have to abdicate their own subjective ownership of their bodies as the bearers of experience. Though it is hard to believe that any trace of conscious awareness of it remains, much less that viewers will be reformed as a result of the experience, it seems reasonable to believe that viewers of DeepFake videos must experience at least an inkling of their own undoing as their de-subjectivized vision interfaces with the ahuman operation of machine vision.
What I am saying, then, and I am trying to be careful about how I say it, is that DeepFake videos open the door, experientially, to a highly problematic space in which our predictive technologies participate in processes of subjectivation by outpacing the subject, anticipating the subject, and intervening materially in the pre-personal realm of the flesh, out of which subjectivized and socially “typified” bodies emerge. The late Sartre, writing in the Critique of Dialectical Reason, defined commodities and the built environment in terms of the “practico-inert,” in light of the ways that “worked matter” stored past human praxis but condensed it into inert physical form. Around these objects, increasingly standardized through industrial capitalism’s serialized production processes, are arrayed alienated and impotent social collectives of interchangeable, fungible subjects. Compellingly, feminist philosopher Iris Marion Young takes Sartre’s argument as the basis for rethinking gender as a non-essentialist formation, a nascent collectivity that is imposed on bodies materially—through architecture, clothing, and gender-specific objects that serve to enforce patriarchy and heterosexism. The practico-inert, in other words, participated in the gendered typification of the body—and we could extend the argument to racialization processes as well. But the computational infrastructures of today’s built environment are no longer adequately captured by the concept of the practico-inert. These infrastructures and objects are still the products of praxis, but they are far from inert. In their predictive and interactive operations, they are better thought of under the concept of the practico-alert—they are highly active, always on alert, and, like the viewers of DeepFake videos on the lookout for a telling glitch, we too are ever and exhaustingly on the alert. In these circuits, which are located deeper than subjective attention, the standardization and typification processes I just mentioned are more fine-grained, more “personalized” or targeted, operating directly on the presubjective flesh. In this sense, the flattening of subjectivity, the suspension of belief and depersonalization of vision in DeepFake videos, points towards the contemporary “ungendering” of the flesh, as Hortense Spillers calls it in a different context, that marks a preliminary step in the computational intensification of racialized and gendered subjectivization. This is a truly insidious aesthetics of the flesh.
Next Tuesday, October 5, 2021 (12pm Pacific), I will be giving a talk in Stanford’s German Studies Lecture Series titled “Media Philosophy in the Flesh.” See here for more information and Zoom registration.
On Saturday, October 2, 2021, at 1pm Eastern / 10am Pacific, I will be participating along with Hannah Zeavin, Casey Boyle, and Hank Gerba in a panel on “DeepFake Energies” at the Society for Literature, Science, and the Arts (SLSA) conference (via Zoom).
The panel thinks about the energies invested and expended in DeepFake phenomena: the embodied, cognitive, emotional, inventive, and other energies associated with creating and consuming machine-learning enabled media (video, text, etc.) that simulate human expression, re-create dead persons, or place living people into fake situations. Drawing on resources from phenomenology, psychoanalysis, media theory, and computational exploration, panelists trace the ways that the generative energies at the heart of these AI-powered media transform subjective and collective experiences, with significant consequences for gender, race, and other determinants of political existence in the age of DeepFakes.
Here are the abstracts:
On the Embodied Phenomenology of DeepFakes (Shane Denson, Stanford)
DeepFake videos pose significant challenges to conventional modes of viewing. Indeed, the use of machine learning algorithms in these videos’ production complicates not only traditional forms of moving-image media but also deeply anchored phenomenological categories and structures. By paying close attention to the exchange of energies around these videos, including the consumption of energy in their production but especially the investment of energy on the part of the viewer struggling to discern the provenance and veracity of such images, we discover a mode of viewing that recalls pre-cinematic forms of fascination while relocating them in a decisively post-cinematic field. The human perceiver no longer stands clearly opposite the image object but instead interfaces with the spectacle at a pre-subjective level that approximates the nonhuman processing of visual information known as machine vision. While the depth referenced in the name “DeepFake” is that of “deep learning,” the aesthetic engagement with these videos implicates an intervention in the depths of embodied sensibility—at the level of what Merleau-Ponty referred to as the “inner diaphragm” that precedes stimulus and response or the distinction of subject and intentional object. While the overt visual thematics of these videos is often highly gendered (their most prominent examples being involuntary synthetic pornography targeting mostly women), viewers are also subject to affective syntheses and pre-subjective blurrings that, beyond the level of representation, open their bodies to fleshly “ungenderings” (Hortense Spillers) and re-typifications with far-reaching consequences for both race and gender.
No More Dying (Hannah Zeavin, UC Berkeley)
“No More Dying” concerns itself with the status of DeepFakes in psychic life on the grounds of DeepFakes that reprise the dead. In order to think about whether DeepFakes as surrogates constitute an attempt at eluding pain—a psychotic technology—or are a new form of an ancient capacity to symbolize pain for oneself (Bion 1962), I will return to the status of objects as melancholic media and what this digital partial-revivification might do to and for a psyche. Is creating a virtual agent in the likeness of a lost object a new terrain (a new expression of omnipotent fantasy) or is it more akin to the wish fulfillment at the center of transitional phenomena and dreaming? Does a literal enactment and acting out lead to, as Freud would have it, a mastery and working through—or does the concrete nature of gaming trauma lead to a melancholic preservation of an internal object via an investment in the mediatized external object? Beyond the psychical implications of this form of reviving the dead, the paper troubles the assumptions and politics of this nascent practice by asking whose dead, and whose trauma, are remediated and remedied this way. More simply, which dead are eligible for reliving and, recalling Judith Butler’s question—which lives are grievable?
Low Fidelity in High Definition (Casey Boyle, UT Austin)
When thinking about DeepFakes, it is easy to also think about theorist Jean Baudrillard. It was Baudrillard who, early and often, rang alarm bells regarding the propensity of images and/as information to become unmoored from any direct referent. DeepFakes seem to render literal the general unease with the ongoing mediatization that Baudrillard traced. However, the uncertainty about a “real” is not only because of this severing of real from fake, but is also because of a prior condition of media since, as Baudrillard claims, “… a completely new species of uncertainty results not from the lack of information but from information itself and even from an excess of information” (Baudrillard, 1985). The excessive overload of mediatization enables DeepFakes to persist as a threat because the energy and effort required to validate any given piece of media is an unsustainable practice when there are so many to verify. It seems then the only response to overload is to generate…more. This presentation reports on an ongoing project to re-energize Baudrillard by computationally generating new texts. Using an instance of GPT-3 machine learning—one trained on Baudrillard’s texts—the presenter will rely on “new” primary texts to comment on the rise of DeepFakes, Post-Truth, and Fake News. Ultimately, this presentation, relying on “new” primary work from Baudrillard, argues that we are not entering an era of Post-Truth but of Post-Piety, which is an era in which we have failed to spend energy building agreement and commonplace.
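Purely by way of illustration of the kind of prompt-conditioned generation this abstract describes: GPT-3 itself is accessible only through OpenAI’s API, so the sketch below uses the freely downloadable GPT-2 (via Hugging Face’s transformers library) as a stand-in, without any fine-tuning on Baudrillard’s texts; the prompt is an invented, Baudrillard-flavored sentence, not a quotation.

```python
# Stand-in sketch of prompt-conditioned text generation. The abstract's actual
# project uses GPT-3 trained on Baudrillard's texts; GPT-3 is API-only, so this
# example uses the openly available GPT-2 model to show the general mechanism.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Invented, Baudrillard-flavored prompt (not a quotation from Baudrillard).
prompt = "The simulacrum no longer conceals the truth; in the age of the deepfake,"

outputs = generator(prompt, max_length=80, num_return_sequences=2, do_sample=True)
for out in outputs:
    print(out["generated_text"], "\n---")
```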
A Gestural Technics of Individuation as Descent (Hank Gerba)
Googling “What is a DeepFake?” returns a vertiginous list of results detailing the technical processes involved in their production. Operational images par excellence, DeepFakes have spawned an industry of verification practices meant to buttress the epistemological doubt their existence sows. It would seem then that to be concerned with DeepFakes is to be concerned with veridicality, but, as this presentation argues, this problematic is derivative of, and entangled with, an aesthetic encounter. What if we approach DeepFakes otherwise, arriving at, rather than departing from, a causal understanding of their technicity? When a DeepFake “works,” it succeeds in satisfactorily producing gestures characteristic of the person it has “learned” to perform—through these gestures it means them, and only them. The question DeepFakes pose, then, is no longer simply “Is this video a true representation of X?” but “Is this performance true to X?” Gestures therefore plunge us into the aesthetics of personhood; they are, as Vilém Flusser argues, that which mediate personhood by bringing it into the social manifold of meaning. By linking Flusser’s theory of gesture with Gilbert Simondon’s theory of individuation, this presentation concludes by arguing that DeepFakes are a gestural technics of individuation—machinic operations which enfold personhood within the topological logic of gradient descent.
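For readers unfamiliar with the term invoked in that last sentence, gradient descent is the iterative optimization routine by which such models are trained: parameters are nudged, step by step, against the gradient of a loss function until the loss stops shrinking. The following toy example (a one-dimensional quadratic, unrelated to any actual DeepFake model) shows that “descent” in its simplest form.

```python
# Minimal illustration of gradient descent: repeatedly step against the
# gradient of a loss function. Toy one-dimensional example, for intuition only.
def loss(x):
    return (x - 3.0) ** 2          # minimized at x = 3

def grad(x):
    return 2.0 * (x - 3.0)         # derivative of the loss with respect to x

x, learning_rate = 0.0, 0.1
for step in range(50):
    x -= learning_rate * grad(x)   # descend along the negative gradient

print(round(x, 4))                 # ~3.0: the parameter has "descended" to the minimum
```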
Today I presented a short paper on “Post-Cinematic Animation” as part of a roundtable discussion at the Society for Animation Studies. The roundtable, on “Expanded Animation,” was organized by Deborah Levitt and Phillip Thurtle, and also included Heather Warren-Crow, Misha Mihailova, and Thomas Lamarre—all of whom gave excellent papers. Here’s mine:
My recent book Discorrelated Images (Duke UP 2020) is not first and foremost intended as an intervention in the field of animation studies. Rather, it is an attempt to bring together some of the primarily aesthetic concerns of cinema studies and visual culture more generally with media philosophical and media archaeological interests in the invisible, or anaesthetic if not positively anti-aesthetic, dimensions of technical infrastructures in order to understand how, on the one hand, images have become unyoked from subjective perception and how, on the other hand, this post-phenomenological “discorrelation” opens new avenues of political control and subjectivation. In short, algorithmic images are processed in microtemporal intervals that elude the window of subjective perception; operating faster than us, they thus not only exceed perceptual objecthood but also anticipate our subjectivities; with their predictive or protentional, future-oriented operations, such images mark a significant departure from the past-based recording paradigm of a cinematic media regime, such that post-cinematic media become potent agencies or vectors that lead the way in shaping who we will be; and they do this by operating at or on the cusp between the visible and the invisible, the subjective and the pre-subjective, the aesthetic and the insensible.
But if, as I have said, this argument is not primarily framed in terms of animation studies, it necessarily implicates animation as both a thematic and a medial site of change. In a thread that runs through the book, the question of animation becomes a question precisely of the difference between cinema and post-cinema, one that resonates, in many ways, with Lev Manovich’s argument in the mid-1990s that the postindexical images of “digital cinema” are closer in spirit (and, in some respects, closer materially) to pre-cinematic technologies of animation—phenakistiscopes, thaumatropes, zoetropes, and the like—than to cinema in its classical form. Beyond formal and technical dimensions, I am interested in the philosophical implications, such as those foregrounded by Alan Cholodenko who, writing even earlier than Manovich, argued that “the idea of animation” should be approached “as a notion whose purchase would be transdisciplinary, transinstitutional, implicating the most profound, complex and challenging questions of our culture, questions in the areas of being and becoming, time, space, motion, change—indeed, life itself.” My approach to animation, as the locus of a media-historical transformation that also concerns a reconfiguration of subjectivation’s material parameters, therefore mediates between Manovich’s technical focus and Cholodenko’s philosophical one. I therefore follow Deborah Levitt in her recent probing of animation as “the dominant medium of our time”—by which she refers not to a specific technique but to a broad cultural and sociotechnical condition, which is related as much to moving-image technologies as to biomedical ones (from “novel developments in the biological sciences that open possibilities for producing living beings” to antidepressants and hormone therapy for transgender people); for Levitt, in short, ours is “the age of the animatic apparatus.”
Two other recent theoretical interventions, by Esther Leslie and Joel McKim (writing in a special issue of Animation) and Jim Hodge (in his book Sensations of History: Animation and New Media Art), both suggest that animation mediates between human sense and the insensible processes of computation—a suggestion that helps ground the interrelation of concrete changes in media infrastructure and the forms of subjectivity that they subtend. For example, processes like motion smoothing, in which our so-called “smart TVs” algorithmically compute new images between visible frames and engage in a real-time generative tweening operation, or DeepFake and related AI-driven imaging processes that categorically elude perception in their black boxed operation—such acts of animation in its computationally expanded field activate what Merleau-Ponty referred to as the “inner diaphragm” between subjectivity and objectivity, which, “prior to stimuli and sensory contents, […] determines, infinitely more than they do, what our reflexes and perceptions will be able to aim at in the world, the area of our possible operations, the scope of our life.” That is, algorithmic animation is situated between embodied sensation and the circuits of computational processing, and it thus sets such a pre-subjective and likewise pre-objective membrane in motion, fundamentally recomputing what counts as an image and what our relation to it is. If this means that what Husserl called “the fundamental correlation between noesis and noema,” or the relational bond between perceptual consciousness and its intentional objects, is called into question by computational processes, then animation’s central role as mediator ensures that such discorrelation is not the end but the reinvigoration of embodied sensation—indeed, a redefinition of life itself in the contemporary world.
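To make the tweening operation mentioned above a little more concrete, here is a deliberately naive sketch of frame interpolation. Actual motion smoothing on consumer televisions estimates motion vectors between frames; this simple linear blend, with invented frame data, only illustrates the principle of generating in-between images that were never captured.

```python
# Naive sketch of "tweening": synthesizing an in-between frame from two captured
# frames by linear blending. Real motion smoothing uses motion-compensated
# interpolation; this is only meant to make the basic idea concrete.
import numpy as np

def tween(frame_a, frame_b, t=0.5):
    """Return an interpolated frame at time t between frame_a (t=0) and frame_b (t=1)."""
    blended = (1.0 - t) * frame_a.astype(np.float64) + t * frame_b.astype(np.float64)
    return blended.clip(0, 255).astype(np.uint8)

# Example: insert one synthetic frame between two "captured" frames.
rng = np.random.default_rng(1)
frame_a = rng.integers(0, 256, size=(72, 128, 3)).astype(np.uint8)
frame_b = rng.integers(0, 256, size=(72, 128, 3)).astype(np.uint8)
in_between = tween(frame_a, frame_b, t=0.5)
```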
References:
Cholodenko, Alan. “Introduction.” In The Illusion of Life, edited by Alan Cholodenko, 9-36. Sydney: Power Publications, 1991.
Denson, Shane. Discorrelated Images. Durham: Duke University Press, 2020.
Hodge, James J. Sensations of History: Animation and New Media Art. Minneapolis: University of Minnesota Press, 2019.
Husserl, Edmund. The Phenomenology of Internal Time Consciousness. Translated by James Churchill. Bloomington: Indiana University Press, 1964.
Leslie, Esther, and Joel McKim. “Life Remade: Critical Animation in the Digital Age.” Animation 12.3 (2017): 207-213.
Levitt, Deborah. The Animatic Apparatus: Animation, Vitality, and the Futures of the Image. Winchester, UK: Zero Books, 2018.
Manovich, Lev. “What Is Digital Cinema?” In Post-Cinema: Theorizing 21st-Century Film, edited by Shane Denson and Julia Leyda, 20-50. Falmer, UK: REFRAME Books, 2016.
Merleau-Ponty, Maurice. Phenomenology of Perception. Translated by Colin Smith. New York: Routledge, 2002.
I have had the good fortune to be a Faculty Research Fellow at the Clayman Institute for Gender Research over the past academic year, which has given me an opportunity to work on a new project that thinks about serialization in digital cultures as a vector of change. The larger project takes off from Sartre’s concept of “seriality” (as developed in his late Critique of Dialectical Reason) and connects it to forms of serialized media in order to think about reconfigurations of class, gender, and race. Back in March, I presented some of the work pertaining to gender and embodiment to my colleagues at the Clayman, and they have now posted a short write-up about it. Here’s the (controversial) crux:
Also enjoy this image that I used to illustrate my talk!
The Digital Aesthetics Workshop invites you to join us for one final event next Wednesday, June 2 (5-7PM Pacific), for a conversation with Mary Beth Meehan & Fred Turner.
*~*~*~**~*~**~*
Join photographer Mary Beth Meehan and historian Fred Turner in a conversation about their new book, Seeing Silicon Valley: Life inside a Fraying America, and about the power of analog aesthetics in a digital era.
Mary Beth Meehan is a photographer and writer known for her large-scale, community-based portraiture centered on questions of representation, visibility, and social equity. She lives in New England, where she has lectured at Brown University, the Rhode Island School of Design, and the Massachusetts College of Art and Design.
Fred Turner is Harry and Norman Chandler Professor of Communication at Stanford University. He is the author of the award-winning history From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism, among other books.
We’re excited to announce our next event at the Digital Aesthetics Workshop, a talk by writer and curator Legacy Russell, author of Glitch Feminism, which will take place next Thursday, May 20th at 10 am Pacific and is co-sponsored by the Clayman Institute for Gender Research.
Join writer and curator Legacy Russell in a discussion about the ways in which artists engaging the digital are building new models for what monuments can be in a networked era of mechanical reproduction.
Legacy Russell is a curator and writer. Born and raised in New York City, she is the Associate Curator of Exhibitions at The Studio Museum in Harlem. Russell holds an MRes with Distinction in Art History from Goldsmiths, University of London with a focus in Visual Culture. Her academic, curatorial, and creative work focuses on gender, performance, digital selfdom, internet idolatry, and new media ritual. Russell’s written work, interviews, and essays have been published internationally. She is the recipient of the Thoma Foundation 2019 Arts Writing Award in Digital Art, a 2020 Rauschenberg Residency Fellow, and a recipient of the 2021 Creative Capital Award. Her first book Glitch Feminism: A Manifesto (2020) is published by Verso Books. Her second book, BLACK MEME, is forthcoming via Verso Books.
Sponsored by the Stanford Humanities Center. Made possible by support from Linda Randall Meier, the Mellon Foundation, and the National Endowment for the Humanities. Co-sponsored by the Michelle R. Clayman Institute for Gender Research.
The Fórum Internacional Cinemática III, organized by Giselle Gubernikoff, Edson Luiz Oliveira, and Daniel Perseguim of the Universidade de São Paulo, is taking place online from April 13-15, 2021. Dedicated this year to forms of documentary and “the real,” the conference will feature three plenary talks by Steven Shaviro (April 13), me (April 14), and Selmin Kara (April 15).
My talk, titled “Documenting the Post-Cinematic Real,” draws on a line of questioning about computational media and realism that I explore in the latter half of chapter 5 in Discorrelated Images:
“In its classical formulation, cinematic realism is based in the photographic ontology of film, or in the photograph’s indexical relation to the world, which allegedly grants to film its unique purchase on reality; upon this relation also hinged, for many realist filmmakers and theorists, the political promise of realism. Digital media, meanwhile, are widely credited with disrupting indexicality and instituting an alternative ontology of the image, but does that mean that realism as a potentially political power of connection with the world is dead? If we consider the extent to which reality itself is shaped and mediated through digital media today, the question begins to seem strange. As I will demonstrate with reference to a variety of moving-image texts dealing with drone warfare, online terrorism recruitment, and computationally mediated affects, post-cinematic media might in fact be credited with a newly intensified political relevance through their institution of a new, post-cinematic realism. As a result, the question of “documenting the post-cinematic real,” which any contemporary theory of documentary must raise, will necessarily take us beyond the documentary as it is traditionally understood; it will take us into spaces of the computer desktop, of online and offline subjectivities and collectives, and of post-indexical technologies and environments. How can these spaces, which resist traditional coordinates of cinematic realism, be documented?”
Here are the links to view the plenary talks:
Steve Shaviro, “The Ontology of Post-cinematic Images, and Examples from Music Videos,” April 13 (5pm Brazil, 4pm Eastern, 1pm Pacific) — https://youtu.be/7t6GEB6a-tI