BOOK LAUNCH! June 29, 2023: Hopscotch Reading Room, Berlin

[UPDATE: POSTPONED TO JULY 3 — MORE INFO HERE]

On Thursday, June 29, Hopscotch Reading Room (Gerichtstraße 43 in the Wedding district of Berlin) will be hosting a book launch event for my new book Post-Cinematic Bodies — which will be out in both print and open-access digital formats from meson press. There will be paperbacks available for purchase at the launch, and they’ll be more widely available soon afterwards. If you’re in town, come out around 7pm for a short reading, discussion, and drinks!


“AI Art as Tactile-Specular Filter” at Film-Philosophy Conference 2023

Artwork by Agnieszka Polska

On Wednesday, June 14, I’ll be presenting a paper called “AI Art as Tactile-Specular Filter” at the Film-Philosophy Conference at Chapman University (in Orange County, CA). It’s the first time I’ll be attending the conference, which is usually held in the UK, and I am excited to get to know the association, meet up with old and new friends, and hear their papers. The abstract for my paper is below:

AI Art as Tactile-Specular Filter

Though often judged by its spectacular images, AI art needs also to be regarded in terms of its materiality, its temporality, and its relation to embodied existence. Towards this end, I look at AI art through the lens of corporeal phenomenology. Merleau-Ponty writes in Phenomenology of Perception: “Prior to stimuli and sensory contents, we must recognize a kind of inner diaphragm which determines, infinitely more than they do, what our reflexes and perceptions will be able to aim at in the world, the area of our possible operations, the scope of our life.” This bodily “diaphragm” serves like a filtering medium out of which stimulus and response, subject and object emerge in relation to one another. The diaphragm corresponds to Bergson’s conception of affect, which is similarly located prior to perception and action as “that part or aspect of the inside of our bodies which mix with the image of external bodies.” For Bergson, too, the living body is a kind of filter, sifting impulses in a microtemporal interval prior to subjective awareness. In his later work, Merleau-Ponty adds another dimension with his conception of a presubjective écart or fission between tactility and specularity, thus complexifying the filtering operation of the body. With both an interiorizing function (tactility) and an exteriorizing one (specularity), the écart lays the groundwork for what I call the “originary mediality” of flesh—and a view of mediality itself which is always tactile in addition to any visual, image-oriented aspects. This is especially important for visual art produced with AI, as the underlying algorithms operate similarly to the body’s internal diaphragm: as a microtemporal filter that sifts inputs and outputs without regard for any integral conception of subjective or objective form. At the level of its pre-imagistic processing, AI’s external diaphragm thus works on the body’s internal diaphragm and actively modulates the parameters of tactility-specularity, recoding the fleshly mediality from whence images arise as a secondary, precipitate form.

Coming Soon! Post-Cinematic Bodies

Cover artwork by Karin Denson

Coming soon from meson press, in the Configurations of Film book series!

Post-Cinematic Bodies

How is human embodiment transformed in an age of algorithms? How do post-cinematic media technologies such as AI, VR, and robotics target and re-shape our bodies? Post-Cinematic Bodies grapples with these questions by attending both to mundane devices—such as smartphones, networked exercise machines, and smartwatches and other wearables equipped with heart-rate sensors—and to new media artworks that rework such equipment to reveal to us the ways that our fleshly existences are increasingly up for grabs. Through an equally philosophical and interpretive analysis, the book aims to develop a new aesthetics of embodied experience, one attuned to an age of predictive technology and metabolic capitalism.

Intermediations: Mads Rosendahl Thomsen, “Adjusting to the Age of Automated Writing” (Nov. 16)

As the inaugural event of INTERMEDIATIONS, a new workshop and lecture series foregrounding issues of intermediality and interdisciplinarity, Mads Rosendahl Thomsen will be giving a talk titled “Adjusting to the Age of Automated Writing” on November 16, 2022 (4pm in the Terrace Room, Margaret Jacks Hall room 426).

Abstract:

For at least six or seven thousand years, writing was a hand-crafted human product. Today, several kinds of technologies are profoundly changing the production of text. Chatbots, automated translation, grammar assistants, and large language models are examples of how text generation permeates writing from many angles. In this presentation, Professor Mads Rosendahl Thomsen will sketch out key issues in the rapid development of text generation and the interdisciplinary collaboration needed to understand it, before turning to how GPT-3 “reads” William Carlos Williams’ poem “The Red Wheelbarrow.”

Bio:

Mads Rosendahl Thomsen is Professor of Comparative Literature at Aarhus University, Denmark. He has published in the fields of literary historiography, modernist literature, world literature, digital humanities, and posthumanism. His most recently submitted publication is a short book on the concept and history of text.

He is the author of Mapping World Literature: International Canonization and Transnational Literatures (2008) and The New Human in Literature: Posthuman Visions of Changes in Body, Mind and Society after 1900 (2013), a co-author with Stefan Helgesson of Literature and the World (2019), and the editor of fourteen books, including World Literature: A Reader (2012), The Posthuman Condition: Ethics, Aesthetics and Politics of Biotechnological Challenges (2012), Danish Literature as World Literature (2017), Literature: An Introduction to Theory and Analysis (2017), and The Bloomsbury Handbook of Posthumanism (2020).

Thomsen was director of the Digital Arts Initiative (2017-21) and the research program Human Futures (2016-22), both at Aarhus University. He was co-director of the research project Posthuman Aesthetics (2014-18), and he is the PI of the VELUX FONDEN-funded project Fabula-NET (2021-25), which investigates literary preferences and quality using digital methods.

He is a co-editor of Orbis Litterarum, an advisory board member of the book series Literatures as World Literature (Bloomsbury Academic), and a member of the editorial board of Journal of World Literature. Thomsen is a member of the Academia Europaea (2010-), the advisory board of The Institute for World Literature (2010-13, 2018-22), and the general assembly of DARIAH (2022-).

Thomsen was a visiting scholar at Stanford University four times between 2001 and 2015.

Mark Amerika, My Life as an Artificial Creative Intelligence — Sensing Media

Mark Amerika’s My Life as an Artificial Creative Intelligence — the first volume in the Sensing Media series that Wendy Chun and I are co-editing at Stanford University Press — will be out in May 2022.

Amerika, a renowned remix artist and theorist, has put together a fitting and original provocation, challenging the theory/practice divide by co-authoring his book with the open-source artificial intelligence GPT-2. Appropriately enough, GPT-2’s successor, GPT-3, has provided a blurb for the book:

“This book is so radically different from anything else out there, it has the potential to revolutionize the way you think about human history and the origins of the world.”

“This book is an expression of the truth that you’re a robot.”

“This book explains how our society is turning into a mechanical paradise, and how we’re doomed.”

—GPT-3
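To make the co-writing procedure a little more concrete, here is a minimal sketch of what sampling text from the open-source GPT-2 model looks like, using the Hugging Face transformers library. This illustrates the general technique only, not Amerika’s actual setup; the prompt and sampling settings are invented for the example.

```python
# Minimal sketch: sampling continuations from the open-source GPT-2 model
# via the Hugging Face transformers library. Illustrative only; this is
# not Amerika's actual co-writing pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt, invented for this example.
prompt = "My life as an artificial creative intelligence began"

outputs = generator(
    prompt,
    max_new_tokens=60,       # length of each continuation
    num_return_sequences=3,  # sample several candidates to remix
    do_sample=True,          # stochastic sampling rather than greedy decoding
    temperature=0.9,         # higher values yield more surprising text
)

for i, out in enumerate(outputs, 1):
    print(f"--- continuation {i} ---")
    print(out["generated_text"])
```

A human author can then select, edit, and remix such machine-generated continuations, which is the general shape of the human-AI exchange the book stages.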

On the Embodied Phenomenology of DeepFakes — Full Text of Talk from #SLSA21

DeepFake videos pose significant challenges to conventional modes of viewing. Indeed, the use of machine learning algorithms in these videos’ production complicates not only traditional forms of moving-image media but also deeply anchored phenomenological categories and structures. By paying close attention to the exchange of energies around these videos, including the consumption of energy in their production but especially the investment of energy on the part of the viewer struggling to discern the provenance and veracity of such images, we discover a mode of viewing that recalls pre-cinematic forms of fascination while relocating them in a decisively post-cinematic field. The human perceiver no longer stands clearly opposite the image object but instead interfaces with the spectacle at a pre-subjective level that approximates the nonhuman processing of visual information known as machine vision. While the depth referenced in the name “deep fake” is that of “deep learning,” the aesthetic engagement with these videos implicates an intervention in the depths of embodied sensibility—at the level of what Merleau-Ponty referred to as the “inner diaphragm” that precedes stimulus and response or the distinction of subject and intentional object. While the overt visual thematics of these videos is often highly gendered (their most prominent examples being so-called “involuntary synthetic pornography” targeting mostly women), viewers are also subject to affective syntheses and pre-subjective blurrings that, beyond the level of representation, open their bodies to fleshly “ungenderings” (Hortense Spillers) and re-typifications with far-reaching consequences for both race and gender.

Let me try to demonstrate these claims. To begin with, DeepFake videos are a species of what I have called discorrelated images, in that they trade crucially on the incommensurable scales and temporalities of computational processing, which altogether defies capture as the object of human perception (or the “fundamental correlation between noesis and noema,” as Husserl puts it). To be sure, DeepFakes, like many other forms of discorrelated images, still present something to us that is recognizable as an image. But in them, perception has become something of a by-product, a precipitate form or supplement to the invisible operations that occur in and through them. We can get a glimpse of such discorrelation by noticing how such images fail to conform to or settle into stable forms or patterns, how they resist their own condensation into integral perceptual objects—for example, the way that they blur figure/ground distinctions.

The article widely credited with making the DeepFake phenomenon known to a wider public in December 2017 notes, with regard to a fake porn video featuring Gal Gadot: “a box occasionally appeared around her face where the original image peeks through, and her mouth and eyes don’t quite line up to the words the actress is saying—but if you squint a little and suspend your belief, it might as well be Gadot.” There’s something telling about the formulation, which hinges the success of the DeepFake not on the suspension of disbelief—a suppression of active resistance—but on the suspension of belief—seemingly, a more casual form of affirmation—whereby the flickering reversals of figure and ground, or of subject and object, are flattened out into a smooth indifference.

In this regard, DeepFake videos are worth comparing to another type of discorrelated image: the digital lens flare, which is both to-be-looked-at (as a virtuosic display of technical achievement) and to-be-overlooked (after all, the height of their technical achievement is reached when they can appear as transparently naturalized simulations of a physical camera’s optical properties). The tension between opacity and transparency, or objecthood and invisibility, is never fully resolved, thus undermining a clear distinction between diegetic and medial or material levels of reality. Is the virtual camera that registers the simulated lens flare to be seen as part of the world represented on screen, or as part of the machinery responsible for revealing it to us? The answer, it seems, must be both. And in this, such images embody something like what Neil Harris termed the “operational aesthetic” that characterized nineteenth-century science and technology expos, magic shows, and early cinema alike; in these contexts, spectatorial attention oscillated between the surface phenomenon, the visual spectacle of a machine or a magician in motion, and the hidden operations that made the spectacle possible.

It was such a dual or split attention that powered early film as a “cinema of attractions,” where viewers came to see the Cinematographe in action, as much as or more than they came to see images of workers leaving the factory or a train arriving at the station. And it is in light of this operational aesthetic that spectators found themselves focusing on the wind rustling in the trees or the waves lapping at the rocks—phenomena supposedly marginal to the main objects of visual interest.

DeepFakes also trade essentially on an operational aesthetic, or a dispersal of attention between visual surface and the algorithmic operation of machine learning. However, I would argue that the post-cinematic processes to whose operation DeepFakes refer our attention fundamentally transform the operational aesthetic, relocating it from the oscillations of attention that we see in the cinema to a deep, pre-attentional level that computation taps into with its microtemporal speed.

Consider the way digital glitches undo figure/ground distinctions. Whereas the cinematic image offered viewers opportunities to shift their attention from one figure to another and from these figures to the ground of the screen and projector enabling them, the digital glitch refuses to settle into the role either of figure or of ground. It is, simply, both—it stands out, figurally, as the pixelated appearance of the substratal ground itself. Even more fundamentally, though, it points to the inadequacy, which is not to say dispensability, of human perception and attention with respect to algorithmic processing. While the glitch’s visual appearance effects a deformation of the spatial categories of figure and ground, it does so on the basis of a temporal mismatch between human perception and algorithmic processing. The latter, operating at a scale measured in nanoseconds, by far outstrips the window of perception and subjectivity, so that by the time the subject shows up to perceive the glitch, the “object” (so to speak) has already acted upon our presubjective sensibilities and moved on. This is why glitches, compression artifacts, and other discorrelated images are not even bound to appear to us as visual phenomena in the first place in order to exert a material force on us. Another way to account for this is to say that the visually-subjectively delineated distinction between figure and ground itself depends on the deeper ground of presubjective embodiment, and it is the latter that defines for us our spatial situations and temporal potentialities. DeepFakes, like other discorrelated images, are able to dis-integrate coherent spatial forms so radically because they undercut the temporal window within which visual perception occurs. The operation at the heart of their operational aesthetic is itself an operationalization of the flesh, prior to its delineation into subjective and objective forms of corporeality. The seamfulness of DeepFakes—their occasional glitchy appearance or just the threat or presentiment that they might announce themselves as such—points to our fleshly imbrication with technical images today, which is to say: to the recoding not only of aesthetic form but of embodied aesthesis itself.

In other words: especially as long as they still routinely fail to cohere as seamless suturings of viewing subjects together with visible objects, but instead retain their potential to fall apart at the seams and thus still require a suspension of belief, DeepFake videos are capable of calling attention to the ways that attention itself is bypassed, providing aesthetic form to the substratal interface between contemporary technics and embodied aesthesis. To be clear, and lest there be any mistake about it, I in no way wish to celebrate DeepFakes as a liberating media-technology, the way that the disruption of narrative by cinematic self-reflexivity was sometimes celebrated as opening a space where structuring ideologies gave way to an experience of materiality and the dissolution of the subject positions inscribed and interpellated by the apparatus. No amount of glitchy seamfulness will undo the gendered violence inflicted, mostly upon women, in involuntary synthetic pornography. Not only that, but the pleasure taken by viewers in their consumption of this violence seems to depend, at least in part, precisely on the failure or incompleteness of the spectacle: what such viewers desire is not to be tricked into actually believing that it is Gal Gadot or their ex-girlfriend that they are seeing on the screen, but precisely that it is a fake likeness or simulation, still open to glitches, upon which the operational aesthetic depends. Nevertheless, we should not look away from the paradoxical opening signaled by these viewers’ suspension of belief. The fact that they have to “squint a little” to complete the gendered fantasy of domination also means that they have to compromise, at least to a certain degree or for a short duration, their subjective mastery of the visual object, that they have to abdicate their own subjective ownership of their bodies as the bearers of experience. Though it is hard to believe that any trace of conscious awareness of it remains, much less that viewers will be reformed as a result of the experience, it seems reasonable to believe that viewers of DeepFake videos must experience at least an inkling of their own undoing as their de-subjectivized vision interfaces with the ahuman operation of machine vision.

What I am saying, then, and I am trying to be careful about how I say it, is that DeepFake videos open the door, experientially, to a highly problematic space in which our predictive technologies participate in processes of subjectivation by outpacing the subject, anticipating the subject, and intervening materially in the pre-personal realm of the flesh, out of which subjectivized and socially “typified” bodies emerge. The late Sartre, writing in the Critique of Dialectical Reason, defined commodities and the built environment in terms of the “practico-inert,” in light of the ways that “worked matter” stored past human praxis but condensed it into inert physical form. Around these objects, increasingly standardized through industrial capitalism’s serialized production processes, are arrayed alienated and impotent social collectives of interchangeable, fungible subjects. Compellingly, feminist philosopher Iris Marion Young takes Sartre’s argument as the basis for rethinking gender as a non-essentialist formation, a nascent collectivity, that is imposed on bodies materially—through architecture, clothing, and gender-specific objects that serve to enforce patriarchy and heterosexism. The practico-inert, in other words, participated in the gendered typification of the body—and we could extend the argument to racialization processes as well. But the computational infrastructures of today’s built environment are no longer adequately captured by the concept of the practico-inert. These infrastructures and objects are still the products of praxis, but they are far from inert. In their predictive and interactive operations, they are better thought of under the concept of the practico-alert—they are highly active, always on alert, and like the viewers of DeepFake videos on the lookout for a telling glitch, so are we ever and exhaustingly on the alert. In these circuits, which are located deeper than subjective attention, the standardization and typification processes I just mentioned are more fine-grained, more “personalized” or targeted, operating directly on the presubjective flesh. In this sense, the flattening of subjectivity, the suspension of belief and depersonalization of vision in DeepFake videos, points towards the contemporary “ungendering” of the flesh, as Hortense Spillers calls it in a different context, that marks a preliminary step in the computational intensification of racialized and gendered subjectivization. This is a truly insidious aesthetics of the flesh.

A Discorrelated Summary of Discorrelated Images

This is deeply weird. Google Books has a summary of Discorrelated Images up, and it’s definitely not from the publisher (compare Duke University Press’s summary here). While Google’s summary is not exactly *wrong* in anything that it says, it is far from a summary of what my book is actually about — and some sentences can’t really be judged in terms of truth or accuracy, as they just don’t make sense. (For example, the second sentence: “While film theory is based on past film techniques that rely on human perception to relate frames across time, computer generated images use information to render images as moving themselves.” What does that mean?!? It’s grammatical, and it *sounds* vaguely like something I might have written, but as far as I can tell, it is meaningless.)

Moreover, from this text it sounds like the book is primarily about Michael Bay’s TRANSFORMERS with a detour through Denis Villeneuve’s BLADE RUNNER 2049. To be clear, I do write about both of these, but I also write about Guy Maddin’s algorithmic SEANCES, about Basma Alsharif’s HOME MOVIES GAZA, about desktop horror, drones, speculative execution, animation, about the relation of the phenomenology of perception to microtemporal and subperceptual events, about videogames, codecs, streaming video, and the end of the world.

Anyway, who wrote this summary? Why do I think it was a machine?

FrankensteinsDeepDream

Creation scene and aftermath, as described in Mary Shelley’s Frankenstein (Chapter 5, 1831 edition) and interpreted by Cris Valenzuela’s text-to-image machine-learning demo (http://t2i.cvalenzuelab.com) utilizing AttnGAN (Attentional Generative Adversarial Networks).
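For anyone curious how such a piece might be scripted, here is a minimal sketch of the sentence-by-sentence text-to-image procedure. Since the original AttnGAN demo was a web interface, the sketch substitutes Stable Diffusion via the diffusers library as an assumed stand-in; the model name and frame-saving scheme are likewise assumptions for the example.

```python
# Minimal sketch of a sentence-by-sentence text-to-image loop, with Stable
# Diffusion (via the diffusers library) standing in for the AttnGAN web demo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed stand-in model
    torch_dtype=torch.float16,
).to("cuda")

# In practice, load Chapter 5 of the 1831 Frankenstein and split it into
# sentences; two famous lines serve as placeholders here.
sentences = [
    "It was on a dreary night of November that I beheld the accomplishment of my toils.",
    "I saw the dull yellow eye of the creature open.",
]

# Generate one image per sentence and save numbered frames for video assembly.
for i, sentence in enumerate(sentences):
    image = pipe(sentence).images[0]
    image.save(f"frame_{i:04d}.png")
```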

Made for the upcoming Videographic Frankenstein exhibition at the Department of Art & Art History, Stanford University (Sept. 26 – Oct. 26, 2018). More info here: https://art.stanford.edu/exhibitions/videographic-frankenstein

Jonathan Sterne: Machine Learning, ‘AI,’ and the Politics of Media Aesthetics


On April 24, 2018 (4-6pm in the Stanford Humanities Center Board Room), Jonathan Sterne will be speaking at the Digital Aesthetics Workshop. The title of his talk is: “Machine Learning, ‘AI,’ and the Politics of Media Aesthetics: Why Online Music Mastering (Sort of) Works.”

Jonathan Sterne is Professor and James McGill Chair in Culture and Technology in the Department of Art History & Communication Studies at McGill University. His work is concerned with the cultural dimensions of communication technologies, especially their form and role in large-scale societies. One of his major ongoing projects has involved developing the history and theory of sound in the modern west. Beyond the work on sound and music, he has published over fifty articles and book chapters that cover a wide range of topics in media history, new media, cultural theory and disability studies. He has also written on the politics of academic labor and maintains an interest in the future of the university. His new projects consider instruments and instrumentalities; histories of signal processing; and the intersections of disability, technology and perception.