Jason Mittell: “Videographic Deformations: How (and Why) to Break Your Favorite Films” — Oct. 10, 2018

 


In conjunction with the exhibition Videographic Frankenstein (Sept. 26 – Oct. 26, 2018 in The Dr Sidney & Iris Miller Discussion Space, McMurtry Building, Stanford), television scholar and video essayist Jason Mittell (Middlebury College) will deliver a public lecture titled “Videographic Deformations: How (and Why) to Break Your Favorite Films.”

The lecture, which takes place at 5:30pm on October 10, 2018 in Oshman Hall (McMurtry Building), is in conversation with Frankenstein’s Television, Mittell’s contribution to the exhibition, and with a broader set of methodological concerns around the idea of “deformative” methods:

Deformative criticism has emerged as an innovative site of critical practice within media studies and digital humanities, revealing new insights into media texts by “breaking” them in controlled or chaotic ways. Media scholars are particularly well situated to such experimentation, as many of our objects of study exist in digital forms that lend themselves to wide-ranging manipulation. Building on Jason Mittell’s experiments with Singin’ in the Rain and his “Frankenstein’s Television” video (included in Stanford’s Videographic Frankenstein exhibit), this presentation discusses a range of deformations applied to film and television, considering what we can learn by breaking a media text in creative and unexpected ways.
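
To give a concrete (if simplified) sense of what such a deformation can look like in practice, here is a minimal Python sketch that “breaks” a clip by averaging all of its frames into a single composite image. This is just one illustrative operation, not Mittell’s own method, and the file names are placeholders.

```python
# A minimal, illustrative deformation: average every frame of a clip into a
# single composite image. File names are placeholders.
import cv2          # pip install opencv-python
import numpy as np

def average_frames(video_path: str, output_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    total, count = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = frame.astype(np.float64)
        total = frame if total is None else total + frame
        count += 1
    cap.release()
    if count == 0:
        raise ValueError(f"No frames could be read from {video_path}")
    composite = (total / count).astype(np.uint8)  # per-pixel mean of all frames
    cv2.imwrite(output_path, composite)

average_frames("singin_in_the_rain_clip.mp4", "composite.png")
```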

Jason Mittell is Professor of Film & Media Culture and American Studies, and founder of the Digital Liberal Arts Initiative at Middlebury College. His books include Complex Television: The Poetics of Contemporary Television Storytelling (NYU Press, 2015) and The Videographic Essay: Criticism in Sound and Image (with Christian Keathley; caboose books, 2016), and he is co-editor of How to Watch Television (with Ethan Thompson; NYU Press, 2013). He is project manager for [in]Transition: Journal of Videographic Film & Moving Image Studies, co-director of the NEH-supported workshop series Scholarship in Sound & Image, and a Fellow at the Peabody Media Center.

See here for more information.

Frankenstein’s Deep Dream

Creation scene and aftermath, as described in Mary Shelley’s Frankenstein (Chapter 5, 1831 edition) and interpreted by Cris Valenzuela’s text-to-image machine-learning demo (http://t2i.cvalenzuelab.com) utilizing AttnGAN (Attentional Generative Adversarial Networks).

Made for the upcoming Videographic Frankenstein exhibition at the Department of Art & Art History, Stanford University (Sept. 26 – Oct. 26, 2018). More info here: https://art.stanford.edu/exhibitions/videographic-frankenstein
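
For the technically curious, the sentence-by-sentence workflow behind the video can be sketched roughly in code. The snippet below is a hypothetical reconstruction: the endpoint path and its request/response format are assumptions for illustration only, not the demo’s documented API.

```python
# Hypothetical sketch of the sentence-by-sentence workflow described above.
# The endpoint path and its request/response format are assumptions for
# illustration, not the demo's documented API.
import os
import re
import requests  # pip install requests

T2I_ENDPOINT = "http://t2i.cvalenzuelab.com/query"  # hypothetical route

text = open("frankenstein_chapter5.txt", encoding="utf-8").read()
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

os.makedirs("frames", exist_ok=True)
for i, sentence in enumerate(sentences):
    # Assumed shape: JSON {"text": ...} in, raw PNG bytes out.
    resp = requests.post(T2I_ENDPOINT, json={"text": sentence}, timeout=60)
    resp.raise_for_status()
    with open(f"frames/{i:04d}.png", "wb") as f:
        f.write(resp.content)

# The numbered frames can then be assembled into a video, e.g. with:
#   ffmpeg -framerate 1 -i frames/%04d.png -pix_fmt yuv420p deep_dream.mp4
```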

Virtual and Augmented Reality Digital (and/or Deformative?) Humanities Institute at Duke

I am excited to be participating in the NEH-funded Virtual and Augmented Reality Digital Humanities Institute — or V/AR-DHI — next month (July 23 – August 3, 2018) at Duke University. I am hoping to adapt “deformative” methods (as described by Mark Sample following a provocation from Lisa Samuels and Jerome McGann) as a means of transformatively interrogating audiovisual media such as film and digital video in the spaces opened up by virtual and augmented reality technologies. In preparation, I have been experimenting with photogrammetric methods to reconstruct the three-dimensional spaces depicted on two-dimensional screens. The results, so far, have been … modest — nothing yet in comparison to artist Claire Hentschker’s excellent Shining360 (2016) or Gregory Chatonsky’s The Kiss (2015). There is something interesting, though, about the dispersal of the character Neo’s body into an amorphous blob and the disappearance of bullet time’s eponymous bullet in this scene from The Matrix, and there’s something incredibly eerie about the hidden image behind the image in this famous scene from Frankenstein, where the monster’s face is first revealed and his head made virtually to protrude from the screen through a series of jump cuts. Certainly, these tests stand in an intriguing (if uncertain) deformative relation to these iconic moments. In any case, I look forward to seeing where (if anywhere) this leads, and to experimenting further at the Institute next month.
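
For anyone curious about the nuts and bolts, here is a rough sketch of one possible pipeline of this kind: extract still frames from a scene with ffmpeg, then hand them to a photogrammetry tool such as COLMAP. The clip name, frame rate, and choice of tool are illustrative assumptions rather than a record of the exact workflow behind the tests described above.

```python
# A rough sketch of one possible photogrammetry-from-film pipeline: sample a
# scene into still frames with ffmpeg, then let COLMAP attempt a 3D
# reconstruction. Paths, frame rate, and tool choice are illustrative
# assumptions, not the exact workflow used for the tests described above.
import os
import subprocess

SCENE = "scene_clip.mp4"        # placeholder clip name
FRAMES_DIR = "frames"
WORKSPACE = "reconstruction"

os.makedirs(FRAMES_DIR, exist_ok=True)
os.makedirs(WORKSPACE, exist_ok=True)

# 1. Sample the clip into still frames (4 per second here; tune as needed).
subprocess.run(
    ["ffmpeg", "-i", SCENE, "-vf", "fps=4", f"{FRAMES_DIR}/%04d.png"],
    check=True,
)

# 2. Ask COLMAP to estimate camera poses and build a reconstruction. Moving
#    bodies and cuts tend to confuse it, which is part of what produces the
#    blob-like, "deformed" results mentioned above.
subprocess.run(
    ["colmap", "automatic_reconstructor",
     "--workspace_path", WORKSPACE,
     "--image_path", FRAMES_DIR],
    check=True,
)
```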

Let’s Make a Monster — Exhibition at Shriram Center for Bioengineering and Chemical Engineering


Works from the course “Let’s Make a Monster: Critical Making,” which I co-taught this quarter with my art practice colleague Paul DeMarinis, are currently on display in the Shriram Center for Bioengineering and Chemical Engineering at Stanford University. The show, which officially opened today, is up through Friday, June 8.

We are particularly excited to take this work across campus and show it in the context of a space devoted to cutting-edge engineering work, where we hope that it provokes thought and discussion about the transformations of technology, experience, and life itself taking place in Silicon Valley and elsewhere. Thanks especially to Prof. Drew Endy for his help in facilitating this show and making it possible.

Here are just a few glimpses of the work on display.

Nora Wheat, Decode (2018)

Hieu Minh Pham, The Knot (2018)

Raphael Palefsky-Smith, Brick (2018) — more info here

David Zimmerman, Eigenromans I-III (2018)

Jennifer Xilo, Mirror for Our Upturned Palms (2018)

Jackie Langelier, Creepers (2018)

Fembots: From Representation to Reality


On Monday, November 13, 2017 (5:30pm in Oshman Hall, McMurtry Building), media maker/scholar Allison de Fren (Occidental College) will be on hand for a screening of her 2010 documentary The Mechanical Bride and her 2015 video essay Fembot in a Red Dress. The screening, which is free and open to the public, will be followed by a Q&A.

Sponsored by the Stanford Department of Art & Art History, the Documentary Film Program, and Stanford’s Frankenstein@200 Initiative.

What Is Monster? What Is Human? (Update)

Frankenstein@200 Opening Colloquium poster (October 2017)

This is the updated poster for the opening colloquium for Stanford’s Frankenstein@200 Initiative, October 17, 2017 (7:00-8:30pm in Cubberley Auditorium, Stanford School of Education). I’ll be speaking alongside Denise Gigante (English Department), Aleta Hayes (Theater and Performance Studies), Russ Altman (Bio-Engineering, Genetics, Medicine, Computer Science), and Hank Greely (Law and Genetics), moderated by Jane Shaw (Dean for Religious Life).

Free and open to the public: All humans, monsters, cyborgs, others welcome.

Out Now: “Visualizing Digital Seriality” in Kairos 22.1


I am excited to see my interactive piece, “Visualizing Digital Seriality, or: All Your Mods Are Belong to Us,” out now in the latest issue of Kairos: A Journal of Rhetoric, Technology, and Pedagogy. This is by far the most technically demanding piece of scholarship I have ever produced, and it underwent what is possibly the most rigorous peer-review process to which any of my published articles has ever been subject. If you’re interested in data visualization, distant reading techniques, network graphing, critical code studies, game studies, modding scenes, or Super Mario Bros. (and who doesn’t like Super Mario Bros.?), check it out!
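
As a small taste of the network-graphing side of the piece, here is a toy sketch (with made-up mod names, not the article’s actual dataset) of how one might model mods as nodes joined by shared-asset relationships and lay out the resulting graph for visualization.

```python
# Toy illustration of the kind of network graphing discussed in the piece:
# mods as nodes, edges where two mods share code or assets, and a quick
# force-directed layout. The mod names and edges are made up for illustration.
import networkx as nx               # pip install networkx
import matplotlib.pyplot as plt     # pip install matplotlib

edges = [
    ("Super Mario Bros.", "Mod A"),
    ("Super Mario Bros.", "Mod B"),
    ("Mod A", "Mod C"),
    ("Mod B", "Mod C"),
]

G = nx.Graph(edges)
pos = nx.spring_layout(G, seed=42)  # force-directed layout
nx.draw_networkx(G, pos, node_color="lightgray", font_size=8)
plt.axis("off")
plt.savefig("mod_network.png", dpi=150)
```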

Post-Cinema: Videographic Explorations


Starting May 1, I am proud to present an exhibition of video essays, including works by well-known scholar-filmmakers Allison de Fren and Kevin B. Lee, as well as students from my “Post-Cinema” seminar. Selected videos deal with a range of topics, including digital animation, Beyoncé’s Lemonade and the visual album, contemporary horror, slow cinema, transmedia franchises and post-cinematic television, and more.

The show will be on view May 1-12, 2017 in the Gunn Foyer, McMurtry Building (home of the Department of Art & Art History) at Stanford University.

#SCMS17 Deformative Criticism Workshop — Slides, Videos, Tutorials, Stuff


Click here to view the slides from today’s workshop on “Deformative Criticism & Digital Experimentations in Film & Media Studies” at the 2017 SCMS conference.

Also, see here for a Google Doc with my contribution (“Glitch Augment Scan”) — including thoughts on AR, examples, and a super-simple AR tutorial — as well as links to videos, code, experiments, and deformations by my co-panelists Stephanie Boluk, Kevin Ferguson, Virginia Kuhn, Jason Mittell, and Mark Sample.
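
And for anyone who wants to try a simple glitch of their own, here is a minimal sketch of one naive technique: randomly corrupting bytes of a JPEG past its header so that the decoder misreads the image. It is offered as an illustration in the spirit of the workshop, not as the workshop’s own tutorial code, and the file names are placeholders.

```python
# Minimal sketch of one naive glitching technique: randomly corrupt bytes of a
# JPEG past its header so the decoder misinterprets the image. File names are
# placeholders; this is not the workshop's own tutorial code.
import random

def glitch_jpeg(src: str, dst: str, n_corruptions: int = 25, seed: int = 0) -> None:
    data = bytearray(open(src, "rb").read())
    rng = random.Random(seed)
    # Skip the first few hundred bytes to leave the JPEG header mostly intact,
    # so the file still opens but renders with glitch artifacts.
    for _ in range(n_corruptions):
        i = rng.randrange(500, len(data))
        data[i] = rng.randrange(256)
    open(dst, "wb").write(bytes(data))

glitch_jpeg("still_frame.jpg", "still_frame_glitched.jpg")
```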