On Saturday, October 2, 2021, at 1pm Eastern / 10am Pacific, I will be participating along with Hannah Zeavin, Casey Boyle, and Hank Gerba in a panel on “DeepFake Energies” at the Society for Literature, Science, and the Arts (SLSA) conference (via Zoom).
The panel thinks about the energies invested and expended in DeepFake phenomena: the embodied, cognitive, emotional, inventive, and other energies associated with creating and consuming machine-learning-enabled media (video, text, etc.) that simulate human expression, re-create dead persons, or place living people into fake situations. Drawing on resources from phenomenology, psychoanalysis, media theory, and computational exploration, panelists trace the ways that the generative energies at the heart of these AI-powered media transform subjective and collective experiences, with significant consequences for gender, race, and other determinants of political existence in the age of DeepFakes.
Here are the abstracts:
On the Embodied Phenomenology of DeepFakes (Shane Denson, Stanford)
DeepFake videos pose significant challenges to conventional modes of viewing. Indeed, the use of machine learning algorithms in these videos’ production complicates not only traditional forms of moving-image media but also deeply anchored phenomenological categories and structures. By paying close attention to the exchange of energies around these videos, including the consumption of energy in their production but especially the investment of energy on the part of the viewer struggling to discern the provenance and veracity of such images, we discover a mode of viewing that recalls pre-cinematic forms of fascination while relocating them in a decisively post-cinematic field. The human perceiver no longer stands clearly opposite the image object but instead interfaces with the spectacle at a pre-subjective level that approximates the nonhuman processing of visual information known as machine vision. While the depth referenced in the name “DeepFake” is that of “deep learning,” the aesthetic engagement with these videos implicates an intervention in the depths of embodied sensibility—at the level of what Merleau-Ponty referred to as the “inner diaphragm” that precedes stimulus and response or the distinction of subject and intentional object. While the overt visual thematics of these videos are often highly gendered (their most prominent examples being involuntary synthetic pornography targeting mostly women), viewers are also subject to affective syntheses and pre-subjective blurrings that, beyond the level of representation, open their bodies to fleshly “ungenderings” (Hortense Spillers) and re-typifications with far-reaching consequences for both race and gender.
No More Dying (Hannah Zeavin, UC Berkeley)
“No More Dying” concerns itself with the status of DeepFakes in psychic life on the grounds of DeepFakes that reprise the dead. In order to think about whether DeepFakes as surrogates constitute an attempt at eluding pain—a psychotic technology—or are a new form of an ancient capacity to symbolize pain for oneself (Bion 1962), I will return to the status of objects as melancholic media and what this digital partial-revivification might do to and for a psyche. Is creating a virtual agent in the likeness of a lost object a new terrain (a new expression of omnipotent fantasy) or is it more akin to the wish fulfillment at the center of transitional phenomena and dreaming? Does a literal enactment and acting out lead to, as Freud would have it, a mastery and working through—or does the concrete nature of gaming trauma lead to a melancholic preservation of an internal object via an investment in the mediatized external object? Beyond the psychical implications of this form of reviving the dead, the paper troubles the assumptions and politics of this nascent practice by asking whose dead, and whose trauma, are remediated and remedied this way. More simply, which dead are eligible for reliving and, recalling Judith Butler’s question—which lives are grievable?
Low Fidelity in High Definition (Casey Boyle, UT Austin)
When thinking about DeepFakes, it is easy to also think about theorist Jean Baudrillard. It was Baudrillard who, early and often, rang alarm bells regarding the propensity of images and/as information to become unmoored from any direct referent. DeepFakes seem to render literal the general unease with the ongoing mediatization that Baudrillard traced. However, the uncertainty about a “real” results not only from this severing of the real from the fake, but also from a prior condition of media since, as Baudrillard claims, “… a completely new species of uncertainty results not from the lack of information but from information itself and even from an excess of information” (Baudrillard, 1985). The excessive overload of mediatization enables DeepFakes to persist as a threat because the energy and effort required to validate any given piece of media become unsustainable when there are so many to verify. It seems, then, that the only response to overload is to generate…more. This presentation reports on an ongoing project to re-energize Baudrillard by computationally generating new texts. Using an instance of GPT-3 machine learning—one trained on Baudrillard’s texts—the presenter will rely on “new” primary texts to comment on the rise of DeepFakes, Post-Truth, and Fake News. Ultimately, this presentation, relying on “new” primary work from Baudrillard, argues that we are not entering an era of Post-Truth but of Post-Piety, an era in which we have failed to spend energy building agreement and commonplace.
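For readers curious what “computationally generating new texts” from a trained language model can look like in practice, here is a minimal sketch of that general approach. The abstract does not describe Boyle’s actual pipeline; GPT-3 itself is fine-tuned through OpenAI’s hosted service, so this sketch substitutes the openly available GPT-2 model via Hugging Face transformers, and the corpus file name, hyperparameters, and prompt are illustrative assumptions only.

```python
# Sketch: fine-tune a causal language model on a plain-text corpus and
# sample "new" passages from it. This is NOT the presenter's code; it is
# an illustrative stand-in using GPT-2 instead of GPT-3.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # open stand-in for GPT-3
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical plain-text file containing the source writings.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="baudrillard_corpus.txt",
                            block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="baudrillard-lm",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()

# Sample a "new" text from a prompt on the panel's theme.
prompt = "The DeepFake is not a counterfeit of the real;"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True,
                         top_p=0.95, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```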
A Gestural Technics of Individuation as Descent (Hank Gerba)
Googling “What is a DeepFake?” returns a vertiginous list of results detailing the technical processes involved in their production. Operational images par excellence, DeepFakes have spawned an industry of verification practices meant to buttress the epistemological doubt their existence sows. It would seem then that to be concerned with DeepFakes is to be concerned with veridicality, but, as this presentation argues, this problematic is derivative of, and entangled with, an aesthetic encounter. What if we approach DeepFakes otherwise, arriving at, rather than departing from, a causal understanding of their technicity? When a DeepFake “works,” it succeeds in satisfactorily producing gestures characteristic of the person it has “learned” to perform—through these gestures it means them, and only them. The question DeepFakes pose, then, is no longer simply “Is this video a true representation of X?” but “Is this performance true to X?” Gestures therefore plunge us into the aesthetics of personhood; they are, as Vilém Flusser argues, that which mediate personhood by bringing it into the social manifold of meaning. By linking Flusser’s theory of gesture with Gilbert Simondon’s theory of individuation, this presentation concludes by arguing that DeepFakes are a gestural technics of individuation—machinic operations which enfold personhood within the topological logic of gradient descent.