The Future of Intelligence and/or the Future of Unintelligibility

The following is an excerpt of my talk from the Locarno Film Festival, at the “Long Night of Dreaming about the Future of Intelligence” held August 9-10, 2023. (Animated imagery created with ModelScope Text to Video Synthesis demo, using text drawn from the talk itself.)

Thanks to Rafael Dernbach for organizing and inviting me to this event, and thanks to Francesco de Biasi and Bernadette Klausberger for help with logistics and other support. And thanks to everyone for coming out tonight. I’m really excited to be here with you, especially during this twilight hour, in this in-between space, between day and night, like some hypnagogic state between waking existence and a sleep of dreams. 

For over a century this liminal space of twilight has been central to thinking and theorizing the cinema and its shadowy realm of dreams, but I think it can be equally useful for thinking about the media transitions we are experiencing today towards what I and others have called “post-cinematic” media.

In the context of a film festival, the very occurrence of which testifies to the continued persistence and liveliness of cinema today, I should clarify that “post-cinema,” as I use the term, is not meant to suggest that cinema is over or dead. Far from it.

Rather, the “post” in post-cinema points to a kind of futurity that is being integrated into, while also transforming and pointing beyond, what we have traditionally known as the cinema.

That is, a shift is taking place from cinema’s traditional modes of recording and reproducing past events to a new mode of predicting, anticipating, and shaping mediated futures—something that we see in everything from autocorrect on our phones to the use of AI to generate trippy, hypnagogic spectacles. 

Tonight, I hope to use this twilight time to prime us all for a long night of dreaming, and thinking, maybe even hallucinating, about the future of intelligence. To prime is to set the stage, to prepare for a future operation.

We prime water pumps, for example, removing air from the line to ensure adequate suction and thus delivery of water from the well. We also speak of priming engines, distributing oil throughout the system to avoid damage on initial startup. Interestingly, when we move from mechanical, hydraulic, and thermodynamic systems to cybernetic and more broadly informatic ones, this notion of priming tends to be replaced by the concept of “training,” as we say of AI models. 

Large language models like ChatGPT are not primed but instead trained. The implication seems to be that (dumb) mechanical systems are merely primed, prepared, for operations that are guided or supervised by human users, while AI models need to be trained, perhaps even educated, for an operation that is largely autonomous and intelligent. But let’s not forget that artificial intelligence was something of a marketing term, proposed in the 1950s (for the 1956 Dartmouth workshop) as an alternative to, and in order to compete with, the dominance of cybernetics. Clearly, AI won that competition, and so while we still speak of computer engineers, we don’t speak of computer engines in need of priming, but AI models in need of training.

In the following, I want to take a step back from this language, and the way of thinking that it primes us for, because it also encodes a specific way of imagining the future—and the future of intelligence in particular—that I think is still up for grabs, suspended in a sort of liminal twilight state. My point is not that these technologies are neutral, or that they might turn out not to affect human intelligence and agency. Rather, I am confident in saying that the future of intelligence will be significantly different from intelligence’s past. There will be some sort of redistribution, at least, if not a major transformation, in the intellective powers that exist and are exercised in the world.

I am reminded of Plato’s Phaedrus, in which Socrates recounts the mythical origins of writing, and the debate that it engendered: would this new inscription technology extend human memory by externalizing it and making it durable, or would it endanger memory by the same mechanisms? If people could write things down, so the worry went, they wouldn’t need to remember them anymore, and the exercise of active, conscious memory would suffer as a result.

Certainly, the advent of writing was a watershed moment in the history of human intelligence, and perhaps the advent of AI will be regarded similarly. This remains to be seen. In any case, we see the same polarizing tendencies: some think that AI will radically expand our powers of intelligence, while others worry that it will displace or eclipse our powers of reason. So there is a similar ambivalence, but we shouldn’t overlook a major difference, which is one of temporality (and this brings us back to the question of post-cinema).

Plato’s question concerned memory and memorial technologies (which include writing as well as, later, photography, phonography, and cinema), but if we ask the question of intelligence’s future today, it is complicated by the way that futurity itself is centrally at stake now: first by the predictive algorithms and future-oriented technologies of artificial intelligence, and second by the potential foreclosure of the future altogether via climate catastrophe, possible extinction, or worse—all of which is inextricably tied up with the technological developments that have led from hydraulic to thermodynamic to informatic systems. To ask about the future of intelligence is therefore to ask both about the futurity of intelligence as well as its environmentality—dimensions that I have sought to think together under the concept of post-cinema.

In my book Discorrelated Images, I assert that the nature of digital images does not correspond to the phenomenological assumptions on which classical film theory was built. While film theory is based on past film techniques that rely on human perception to relate frames across time, computer generated images use information to render images as moving themselves. Consequently, cinema studies and new media theory are no longer separable, and the aesthetic and epistemological consequences of shifts in technology must be accounted for in film theory and cinema studies more broadly as computer-generated images are now able to exceed our perceptual grasp. I introduce discorrelation as a conceptual tool for understanding not only the historical, but also the technological specificity, of how films are actively and affectively perceived as computer generated images. This is a kind of hyperinformatic cinema – with figures intended to overload and exceed our perceptual grasp, enabled by algorithmic processing. In the final chapter of the book, I consider how these computer-generated images have exceeded spectacle, and are arguably not for human perception at all, thus serving as harbingers of human extinction, and the end of the environment as defined by human habitation.

At least, that is what you will read about my book if you search for it on Google Books — above, I have only slightly modified and excerpted the summary included there. Note that this is not the summary provided by my publisher, even though that is what Google claims. I strongly suspect that a computer, and not a human, wrote this summary, as the text kind of makes sense and kind of doesn’t. I do indeed argue that computer-generated images exceed our perceptual grasp, that their real-time algorithmic rendering and futural or predictive dimensions put them, at least partially, outside of conscious awareness and turn them into potent vectors of subjectivation and environmental change. But I honestly don’t know what it means to say that “computer generated images use information to render images as moving themselves.” The repetition of the word images makes this sentence confusing, and the final words are ambiguous: are these supposed to be “self-moving images,” or images that, themselves, are moving? Or do the images use information to render themselves as moving images? What would that mean? The images are self-rendering? There is a multilayered problem of intelligibility involved, despite the fact that the sentences are more or less grammatical. The semantic ambiguities, the strange repetitions, and the feeling that something is just a little off are tell-tale signs of AI-generated text. This is not full-blown “hallucination,” as they say when AI just makes things up, but instead a kind of twilight recursion, suspended between the past of the training data and the future of the predictive algorithm, generating a sleepy, hypnagogic loop or a quasi-lucid, semi-waking dream.

But that summary was generated back in 2020. Since then, with GPT and other tools proliferating, we have witnessed a quantum leap in the intelligibility of AI-generated texts. In preparation for this event, I asked ChatGPT to summarize several of my books and to explain key concepts and arguments I made in them. The results were much better than what I just discussed (even though I was using the basic version that runs on GPT-3.5, not the more advanced GPT-4). Asked to explain my theory that “media are the originary correlators of experience,” the algorithm responded: “In this context, ‘originary’ suggests that media have been present from the beginning of human existence and have continuously evolved alongside our species. They are ingrained in our social and cultural development and have become integral to how we make sense of the world. […] Whether it’s language, art, writing, photography, film, or digital technology, each medium influences and organizes our experiences, constructing the framework through which we navigate reality.” That’s not bad, and it gets at what I’m calling the environmentality of media, including the medium or milieu of intelligence. 

We could say, then, that artificial intelligence technology functions as a contemporary manifestation of the correlation between media and human experience. ChatGPT represents a significant leap in the relationship between humans and technology in the digital age. As a sophisticated language model, it mediates human interaction with information, communication, and even decision-making processes. ChatGPT is an intermediary that transforms the way we engage with knowledge and ideas, redefining the boundaries between human and machine. As an AI language model, ChatGPT embodies the fusion of the organic (human intelligence) and the artificial (machine intelligence). This fusion blurs the lines between human creativity and algorithmic generation, questioning traditional notions of authorship and creativity.

The only problem, though, is that everything I just said about ChatGPT was written by ChatGPT, which I asked to speculate, on the basis of my books, about what I would say about large language model AIs. The impersonation is competent, and even clarifying, as it brings out implications of my previous thinking in transferring them to the new case. Significantly, it points the way out of the impasse I described earlier with reference to Plato’s Phaedrus: AI will neither simply empower nor simply imperil human intelligence but will fundamentally alter it by transforming the parameters or environment of its operation. 

The fact that ChatGPT could write this text, and that I could speak it aloud without any noticeable change in my voice, style, or even logical commitments, offers a perfect example of the aforementioned leap in the intelligibility of AI-generated contents. Intelligibility is of course not the same as intelligence, but neither is it easily separated from the latter. Nevertheless, or as a result, I want to suggest that perhaps the future of intelligence depends on the survival of unintelligibility. This can be taken in several ways. Generally, noise is a necessary condition, substrate, or environment for the construction of signals, messages, or meanings. Without the background of unintelligible noise, meaningful figures could hardly stand out as, well, meaningful. In the face of the increasingly pervasive—and increasingly intelligible—AI-generated text circulating on the Internet (and beyond), Matthew Kirschenbaum speaks of a coming Textpocalypse: “a tsunami of text swept into a self-perpetuating cataract of content that makes it functionally impossible to reliably communicate in any digital setting.” Kirschenbaum observes: “It is easy now to imagine a setup wherein machines could prompt other machines to put out text ad infinitum, flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word.” 

Universal intelligibility, in effect, threatens intelligence, for if all text (or other media) becomes intelligible, how can we intelligently discriminate, and how can we cultivate intelligence? Cultivating intelligence, in such an environment, requires exposure to the unintelligible, that which resists intellective parsing: e.g. glitches, errors, and aesthetic deformations that both expose the computational infrastructures and emphasize our own situated, embodied processing. Such embodied processing precedes and resists capture by higher-order cognition. The body is not dumb; it has its own sort of intelligence, which is modified by way of interfacing with computation and its own sub-intellective processes. In this interface, a microtemporal collision takes place that, for better or for worse, transforms us and our powers of intelligence. If I emphasize the necessary role of unintelligibility, this is not (just) about protecting ourselves from being duped and dumbed by all-too-intelligible deepfakes or the textpocalypse, for example; it is also about recognizing and caring for the grounds of intelligence itself, both now and in the future.

And here is where art comes in. Some of the most intelligent contemporary AI-powered or algorithmic art actively resists easy and uncomplicated intelligibility, instead foregrounding unintelligibility as a necessary substrate or condition of possibility. Remix artist Mark Amerika’s playful/philosophical use of GPT for self-exploration (or “critique” in a quasi-Kantian sense) is a good example; in his book My Life as an Artificial Creative Intelligence, coauthored with GPT-2, and in the larger project of which it is a part, language operates beyond intention as the algorithm learns from the artist, and the artist from the algorithm, increasingly blurring the lines that nevertheless reveal themselves as seamful cracks in digital systems and human subjectivities alike. The self-deconstructive performance reveals the machinic substrate even of human meaning. In her forthcoming book Malicious Deceivers, theater and performance scholar Ioana Jucan offers another example, focusing on the question of intelligibility in Annie Dorsen’s algorithmic theater. For example, Dorsen’s play A Piece of Work (2013) uses Markov chains and other algorithms to perform real-time analyses of Shakespeare’s Hamlet and generate a new play, different in each performance, in which human and machinic actors interface on stage, often getting caught in unintelligible loops that disrupt conventions of theatrical and psychological/semantic coherence alike.
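(A brief technical aside, for the curious: the kind of word-level Markov-chain text generation at work in a piece like A Piece of Work can be sketched in a few lines of Python. What follows is my own toy illustration under stated assumptions, not Dorsen’s implementation; the corpus filename is hypothetical. The generator simply learns which words follow which in Hamlet and then chains plausible successors into new, often uncannily looping text.)

import random
from collections import defaultdict

def build_chain(words, order=2):
    # Map each run of `order` consecutive words to the list of words
    # observed immediately after that run in the corpus.
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=60):
    # Start from a random state and repeatedly sample an observed successor.
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            # Dead end: restart from a random state (one source of the
            # disorienting loops and jumps such generated text exhibits).
            state = random.choice(list(chain.keys()))
            followers = chain[state]
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

# "hamlet.txt" is a hypothetical plain-text copy of the play.
with open("hamlet.txt") as f:
    corpus = f.read().split()

print(generate(build_chain(corpus)))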

Moreover, a wide range of AI-generated visual art foregrounds embodied encounters that point to the limits of intellect as the ground of intelligence: as I have discussed in a recent essay in Outland magazine, artists like Refik Anadol channel the sublime as a pre- or post-intellective mode of aesthetic encounter with algorithms; Ian Cheng uses AI to create self-playing videogame scenarios that, because they offer no point of interface, leave the viewer feeling sidelined and disoriented; and Jon Rafman channels cringe and the uncomfortable underbellies of online life, using diffusion models like Midjourney or DALL-E 2 to illustrate weird copypasta tales from the Internet that point us toward a visual equivalent of the gray goo that Kirschenbaum identifies with the textpocalypse. These examples are wildly divergent in their aesthetic and political concerns, but they are all united, I contend, in a shared understanding of environmentality and noise as a condition of perceptual engagement; they offer important challenges to intelligibility that might help us to navigate the future of intelligence.

To be continued…

Notes toward a Phenomenology of AI Art

Jon Rafman, Counterfeit Poast, 2022. 4K stereo video, 23:39 min. Film still.

Today I have a short piece in Outland on AI art and its embodied processing, as part of a larger suite of articles curated by Mark Amerika.

The essay offers a first taste of something I’m developing at the moment on the phenomenology of AI and the role of aesthetics as first philosophy in the contemporary world — or, AI aesthetics as the necessary foundation of AI ethics.

CFP: 2023 Berkeley-Stanford-SFMOMA Symposium


This year’s Berkeley-Stanford Symposium will again take place at SFMOMA on April 28, 2023. This is always an exciting event, open to graduate student presenters working in art history, visual culture, film and media studies, and interdisciplinary spaces. This year’s theme is “In-Between: Art and Cultural Practices from Here.”

Please see the full CFP for details. Those interested should submit an abstract no longer than 300 words and a brief bio by February 28th to berkeleystanford2023@gmail.com.

CYBERPUBLICS, MONUMENTS, AND PARTICIPATION — Legacy Russell at Digital Aesthetics Workshop, May 20

Poster by Hank Gerba

We’re excited to announce our next event at the Digital Aesthetics Workshop, a talk by writer and curator Legacy Russell, author of Glitch Feminism, which will take place next Thursday, May 20th at 10 am Pacific and is co-sponsored by the Clayman Institute for Gender Research.

Please register in advance at: tinyurl.com/GFDAW.

About the event:

“CYBERPUBLICS, MONUMENTS, AND PARTICIPATION”

Join writer and curator Legacy Russell in a discussion about the ways in which artists engaging the digital are building new models for what monuments can be in a networked era of mechanical reproduction.

Legacy Russell is a curator and writer. Born and raised in New York City, she is the Associate Curator of Exhibitions at The Studio Museum in Harlem. Russell holds an MRes with Distinction in Art History from Goldsmiths, University of London with a focus in Visual Culture. Her academic, curatorial, and creative work focuses on gender, performance, digital selfdom, internet idolatry, and new media ritual. Russell’s written work, interviews, and essays have been published internationally. She is the recipient of the Thoma Foundation 2019 Arts Writing Award in Digital Art, a 2020 Rauschenberg Residency Fellow, and a recipient of the 2021 Creative Capital Award. Her first book Glitch Feminism: A Manifesto (2020) is published by Verso Books. Her second book, BLACK MEME, is forthcoming via Verso Books.

Sponsored by the Stanford Humanities Center. Made possible by support from Linda Randall Meier, the Mellon Foundation, and the National Endowment for the Humanities. Co-sponsored by the Michelle R. Clayman Institute for Gender Research.

“Bit Field Black” — Kris Cohen at Digital Aesthetics Workshop/CPU, May 19, 2020 (via Zoom)

Poster by Hank Gerba

The Digital Aesthetics Workshop is excited to announce our second event of the Spring quarter: on May 19th, at 5 PM, we’ll host a workshop with Kris Cohen, via Zoom. This workshop has been co-organized with Stanford’s Critical Practices Unit (CPU), which you can (and should!) follow for future CPU events here. Please email Jeff Nagy (jsnagy at stanford dot edu) by May 18th for the Zoom link.

Professor Cohen will discuss new research from his manuscript-in-progress, Bit Field Black. Bit Field Black accounts for how a group of Black artists working from the Sixties to the present were addressing, in ways both belied and surprisingly revealed by the language of abstraction and conceptualism, nascent configurations of the computer screen and the forms of labor and personhood associated with those configurations.

Professor Cohen is Associate Professor of Art and Humanities at Reed College. He works on the relationship between art, economy, and media technologies, focusing especially on the aesthetics of collective life. His book, Never Alone, Except for Now (Duke University Press, 2017), addresses these concerns in the context of electronic networks.

A poster with all the crucial information is attached for lightweight recirculation. 

Thank you to all of the very many of you who logged on for our first Spring workshop with Sarah T. Roberts. We hope you will also join us on the 19th, and keep an eye out for an announcement of our third Spring workshop, with Xiaochang Li, coming up on May 26th.

Jim Campbell’s Discorrelated Images


Last evening I had the pleasure of discussing Jim Campbell’s work with him at the Anderson Collection at Stanford, where he has a wonderful exhibition of LED-based works up right now. It was a far-ranging discussion, in a packed gallery, and great fun all around. Here are my opening remarks:

Before we start our conversation, I have the honor of offering some framing thoughts about Jim Campbell’s work. I want to use this opportunity to put that work into dialogue with some of my own interests and concerns as a theorist of the intersection between computational and moving-image media. I am concerned, in other words, with the historical and phenomenological encounter between the invisible processing of digital information and the visible forms that result from it—and it is precisely this encounter that Jim’s LED-based artworks enact or perform in a variety of thought-provokingly deformative ways. This is to say that his work, by means of occluding, blocking, and de-focusing our view, ironically makes perceptible the very mismatch between perception and computational processing that lies at the heart of digital video as it circulates online, on our smartphones, on DVDs and Blu-rays, on digital cable and satellite TV, and in the digital projection systems of contemporary movie theaters. In all of those contexts, digital processing remains resolutely invisible to perception (except, that is, through exceptional moments of glitching, buffering, and the like); but, those exceptional and denigrated moments aside, the perceptual “content” of digital video is privileged, thus blinding us to the ways that the medial form of video’s computational processing is changing the very parameters of our embodied perception, or the ways that, as Canadian media theorist Marshall McLuhan put it, our “sensory ratios” are being reformed by our encounter with a new media environment.

By re-valorizing the exceptional, or that which disrupts or impedes the easy transmission of visual “content,” Jim’s work offers an oblique view of the hidden parameters of this new environment; he makes what I call the “discorrelation” between our perception and its infrastructure perceptible—if only in a necessarily incomplete and volatile form. And the volatility of these operations is key: Jim’s works keep our eyes and our bodies moving, making us move now closer and then farther away, causing us to squint and then relax our focus, in order to catch a glimpse of something figural, recognizable, the so-called “content” of the moving images. Certainly, this content is not irrelevant, but it is hardly the ultimate telos or desideratum towards which the work directs our attention. The works are not simple puzzles that are “solved” once we identify their contents. Rather, the incessant oscillation between perception and non-perception, between seeing and not seeing, would seem to be closer to the point, as it is this oscillation that keeps everything at play, unsettling basic categories and forms. We shift our focus between individual LEDs, the screen or wall upon which they reflect, and an indirect, sometimes volumetric illumination of bodies or objects in motion. Our perception doesn’t come to rest upon a stable object or meaning, and this instability infects the broader conceptual context within which our perception is situated: Jim’s work upsets and makes us question so many basic distinctions—for example, between video art and sculpture, between art and engineering, between material substrates and perceptual forms, between perception and imagination.

Through his destabilization of perception, he also re-opens the gap between art and technology, a gap created around the time of the industrial revolution, when thinkers like Immanuel Kant helped engineer a split between the aesthetic and the technical, or between the fine arts and the applied arts. Earlier, both the Greek term techne and the Latin ars referred indiscriminately to both arts and technologies. Now, the poets were to work with words while the engineers worked on steam engines; artists concerned themselves with the non-utilitarian forms of aesthetic experience while technologists made the machines that kept the factories running.

However, in the space cleared between art and technology, a third thing emerged, a common ground for aesthetic and technological production alike: namely, media in its modern sense. A medium in this sense is not reducible to its “content” in a narrow way; rather, it is something that straddles perceptual form and infrastructure. Take, for example, the way the Sunday comics capitalized on innovations in four-color printing processes, or the way cinema responded to synchronized sound with new genres like the musical or the horror film, which involves its spectator through an offscreen space of screams and bumps in the night. It is in this sense that McLuhan proclaimed that “the medium is the message”—a claim that he explained with the example of the light bulb, a content-less medium, the message of which is the electrification of the world and the resulting transformation of agency, perception, and social relation.
In order to explore the message or the meaning of more recent shifts in the media environment, Jim replaces McLuhan’s light bulb with LEDs—the same light-emitting diodes that provide backlighting for flatscreen computer monitors and television sets, that power digital projectors, or that illuminate our increasingly “smart” homes. Routing perception through these characteristically digital-era lights, and powering them by way of unseen “custom electronics,” Jim defocuses intentional perception, foregrounds the obfuscation of infrastructure, and indirectly illuminates a media environment in which computation has finally (arguably) rendered the industrial-era split between art and technology untenable.

When I recently spoke to him on the phone, Jim identified himself not as an artist but as an engineer—and certainly he holds the degrees, the patents, and the experience to justify that statement. But he is an engineer of a special sort: an engineer of perception in an age when perception teeters precariously atop invisible circuits and computational infrastructures not cut to our measure, an engineer of experience when experience is routed through ubiquitous circuits of computational processing. Occluding both the image and its digital infrastructure, Jim’s work puts our perceptual experience in motion, incessantly circulating between what we can and cannot see. The work arouses a curiosity about the conditions of this circulation, including the means by which the LEDs, and hence also our perception, have been programmed. In the context of nineteenth-century magic shows and scientific expositions, this curiosity about how the spectacle works has been called an “operational aesthetic”—an aesthetic that, fittingly for the era of industrial media, includes an enjoyment in the sight of technical operation. In the twenty-first-century context of ubiquitous computational processing and experiential engineering, Jim offers us something slightly different, I suggest: an operational aesthetic of perception itself, a questioning of our ability and the means of seeing in an age of discorrelation, when visibility is rendered ambiguously at the margins of human signs and invisible informatic signals.

Critical Practices Unit (CPU)


I am excited to announce the inaugural session of Critical Practices Unit (CPU), on November 19 at 6:30pm (in McMurtry 360).

In this interdisciplinary and practice-based group, with support from the Vice President for the Arts, we hope to stage collisions between the various epistemes and critical frameworks we all know and love through performances, art-objects, interactive media, and “critical making” projects, which, in some sense to be explored, materialize critical reflection.

In fidelity to these objects’ disobedience to any specific field, we want to stress that CPU is for those in the humanities, sciences, and arts. These conversations—spanning computation, performance, race, personhood, gesture, interaction, and more—will be made all the richer by a diversity of perspectives.

For our first event, we will be playing with haptic devices for underwater robots graciously loaned by The Stanford Robotics Lab, involving ourselves in a live performance piece / installation by Catie Cuan, and settling into a conversation about the grafting of robotics and performativity. We are overjoyed that this discussion will be situated by Sydney Skybetter, Lecturer in Theater and Performance Studies at Brown University, and Matthew Wilson Smith, Professor of German Studies and Performance Studies here at Stanford.

APPROXIMATELY 800cm3 OF PLA — Exhibition Catalog


The exhibition catalog for APPROXIMATELY 800cm3 of PLA, curated by Gabriel Menotti at last year’s Center for 21st Century Studies conference on The Ends of Cinema (May 3-5, 2018 at University of Wisconsin-Milwaukee), is now online.


Among the pieces featured was DataGnomeKD1.stl, a generative/deformative 3D-printed garden gnome that Karin Denson and I made a couple of years ago in the context of a larger project at the Duke S-1: Speculative Sensation Lab. (You can check out our publication here.)


Thanks to Gabriel Menotti for putting together this playful show!



Michael Richards: Winged


The other day I promised (or threatened) to reactivate this blog for things other than announcing talks, publications, etc. It remains to be seen how much time I will actually devote to this, but my thought, anyway, was that I should reclaim the time I waste on social media (especially Facebook). Accordingly, why not post here the pictures I would otherwise be sharing there? Of course, these are not just any pictures…

These are from the moving show Michael Richards: Winged, which is currently up at the Stanford Art Gallery (but ending this week, so hurry if you plan to see it!).

Michael Richards, whose work powerfully probes race in American culture, tragically died in the September 11, 2001 attacks on the World Trade Center, where he was working in his studio on the 92nd floor of Tower One.


The show was recently an Artforum Critics’ Pick.

Check it out if you can!


Plastic Dialectics: Community and Collectivity in Japanese Contemporary Art — Miryam Sas at Digital Aesthetics Workshop


What do we mean when we speak of “collectivity,” collaboration, and community? How have artists and theorists in Japan questioned and created experimental practices that reframe these terms, so crucial to discussions of the arts today? Sas will reflect on issues of collectivity and assemblage as manifested in Japanese contemporary art, drawing examples from 1950s art theory, late 1960s intermedia art, 1970s site-specific photography events, and post 3-11 sculptural installation. Through site-specific critique and new modes of engagement with local space, artists in each of these distinct moments engage in a subtle but powerful rethinking of the frameworks and practices of collectives past and present.

At the next meeting of the Digital Aesthetics Workshop, Miryam Sas, Professor of Comparative Literature and Film & Media at UC Berkeley, will discuss Plastic Dialectics: Community and Collectivity in Japanese Contemporary Art. As we have throughout this quarter, we will meet on Tuesday, Dec 4, from 5-7 at the Roble Gym. RSVP to deacho@stanford.edu – we expect there will be a paper that we will pre-circulate this weekend.

Sas studies Japanese literature, film, theater, and dance; 20th century literature and critical theory; and avant-garde and experimental visual and literary arts. She is the author of Experimental Arts in Postwar Japan: Moments of Encounter, Engagement, and Imagined Return (Harvard, 2010) and Fault Lines: Cultural Memory and Japanese Surrealism (Stanford, 2001). Sas is currently working on a book on media theory and contemporary art in Japan, Feeling Media: Infrastructure, Potentiality, and the Afterlife of Art in Japan, for which she was awarded a President’s Research Fellowship in the Humanities (2017-18). She has published numerous articles in English, French, and Japanese on subjects such as Japanese futurism, cross-cultural performance, intermedia art, butoh dance, pink film, and Japanese experimental animation.