"Eno" might be the first quantum film

Gary Hustwit, famed creator of such design documentaries as Helvetica and Rams, is premiering his next film, Eno, at Sundance this January 18th-28th.

The film is a biography of Brian Eno, a "visionary musician and artist...known for producing David Bowie, U2, Talking Heads, among many others; pioneering the genre of ambient music; and releasing over 40 solo and collaboration albums" (Shari Frilot, Senior Programmer, Sundance Film Festival).

But what's really cool here isn't just the subject—it's that the film is generatively produced, live, each time it's viewed, so it's different every time. 🤯

Hustwit partnered with creative technologist Brendan Dawes to create something that is almost as much computer code as it is video and audio.

Frilot from Sundance continues:

"[They've] developed bespoke generative software designed to sequence scenes and create transitions out of Hustwit’s original interviews with Eno, and Eno’s rich archive of hundreds of hours of never-before-seen footage, and unreleased music. Each screening of Eno is unique, presenting different scenes, order, music, and meant to be experienced live. The generative and infinitely iterative quality of Eno poetically resonates with the artist's own creative practice, his methods of using technology to compose music, and his endless deep dive into the mercurial essence of creativity."

It's a pretty revolutionary concept in filmmaking, and in art more generally. There have been generative artworks before, but they tend to be abstract compositions of unrecognizable sound and light. It feels like another thing entirely for a photorealistic portrait of a real person (obviously, it's a live-action documentary) to be produced generatively.

There's a bit of the uncanny valley here when we consider that no human chose which shot we'll see or which sound we'll hear next. The footage was shot by Hustwit, but the story is almost inherently something else altogether, given how much "what" and "when" (that is, the order we experience things in, when we experience them, and how long we experience them for) matter to time-based media, especially something as cinéma vérité as a documentary.

In some ways, however, this format is more authentic than traditional film could ever be: when are two people's impressions of the same person, formed at different times, ever alike? The subject changes. The viewer changes. And every viewer is different. By modeling that in a film, we may be experiencing something more realistically than ever before.

Wild times. Super excited to see this.
