Seasons: a generative multimedia installation

Seasons is a meditation on our natural environment, inviting viewers to savor the passing of time over the course of a year. The work runs continuously, using a variety of computational processes to build the audio-visual output for a single large-screen display and 4-channel sound system. [Video Preview]

Generative artists strive to instantiate the dynamics of artistic practice within the structure of code and algorithm. The system for Seasons builds video sequencing and transitions based on procedural rules and video metatags. Simultaneously, the system composes and mixes music and soundscape tracks that incorporate both semantic and affective elements of the video into their own aesthetic rules.
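To make this coordination concrete, the following is a minimal sketch of the kind of per-clip record such a pipeline might share between the video, music, and soundscape subsystems. The field names and structure are illustrative assumptions, not the actual Seasons data model:

```python
from dataclasses import dataclass, field

@dataclass
class ClipMetadata:
    """Hypothetical per-clip record shared by the video, music,
    and soundscape engines (illustrative only)."""
    clip_id: str
    season: str                                     # e.g. "winter"
    tags: list[str] = field(default_factory=list)   # semantic tags: "lake", "forest", ...
    arousal: float = 0.0                            # affective coordinates in [-1.0, 1.0]
    valence: float = 0.0

# The video engine publishes the active clip's metadata; the music and
# soundscape systems read it to steer their own aesthetic rules.
current = ClipMetadata("w017", "winter", ["snow", "mountain"], arousal=-0.4, valence=0.6)
```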

The creative team behind Seasons has worked independently for many years in separate generative art domains, each member developing individual generative works to further their own artistic goals. They have now come together to explore their shared ideas in an interdisciplinary context. [Full credits]

  1. VISUALS

Seasons is conceived as an example of “Ambient Video”. Bizzocchi has been exploring and working within this form for over a decade, inspired by Brian Eno’s description of ambient music, which “must be able to accommodate many levels of listening attention without enforcing one in particular; it must be as ignorable as it is interesting.”[1] Bizzocchi’s “ambient video” art is designed not to require viewer attention, but to reward it whenever it occurs. He sees this form of video art as an appropriate medium for communicating his deep love of natural places, and an effective expression of his delight in visual pleasure and cinematic flow. In these works, he applied three creative tactics:

  • gathering high-definition moving images imbued with filmic expressivity of visual composition, cinematic patterns and textures, and the ongoing play of color, light and shadow;
  • treating time as plastic – slowing the editing pace through extended shot times, and manipulating the time base within shots to slow down certain features (such as water flow) or speed up others (such as clouds floating across the sky);
  • building a complex aesthetic of visual layering and shot transitions, both to confound our sense of the “real” and to push back against the traditional cinematic norm of simple hard cuts.

His linear videos were relatively short pieces, ranging from 8 to 20 minutes in length. Bizzocchi began exploring generative methods in order to build a system that would create an ongoing ambient video flow that never stopped, yet always presented varying visual connections and sequences. His first generative work, Re:Cycle, used simple rules to sequence shots and incorporated algorithmic visual transitions to approximate the aesthetics of his linear video art. With Seasons, he has incorporated more sophisticated sequencing rules to increase visual coherence, and refined the transition strategies to maximize visual flow. He is also excited to collaborate with his colleagues on tactics that combine the power of music and soundscape with the visual expressivity of the video.
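As a toy illustration of this kind of rule-based sequencing, the sketch below enforces a tag-overlap coherence rule and a no-recent-repeat constraint. Both rules are plausible stand-ins, not the actual Re:Cycle or Seasons logic:

```python
import random

def pick_next_shot(shots, history, min_overlap=1, no_repeat=5):
    """Choose the next clip: share at least `min_overlap` tags with the
    previous shot for coherence, but avoid anything shown recently."""
    prev_tags = set(shots[history[-1]]["tags"]) if history else set()
    recent = set(history[-no_repeat:])
    candidates = [
        cid for cid, s in shots.items()
        if cid not in recent
        and (not prev_tags or len(prev_tags & set(s["tags"])) >= min_overlap)
    ]
    # Fall back to any non-recent shot if the coherence rule over-constrains.
    if not candidates:
        candidates = [cid for cid in shots if cid not in recent]
    return random.choice(candidates)

shots = {
    "a": {"tags": ["lake", "mist"]},
    "b": {"tags": ["lake", "forest"]},
    "c": {"tags": ["clouds", "sky"]},
}
history = ["a"]
history.append(pick_next_shot(shots, history))   # "b": shares the "lake" tag
```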

Bizzocchi has written in more detail about his ongoing exploration of Ambient Video[2] and completed five works in this style.

  2. MUSIC

The music in Seasons is generated by an ensemble of musical agents, called musebots. A musebot is a “piece of software that autonomously creates music collaboratively with other musebots”.[3] Within Seasons, fourteen different musebots are combined into eight different ensembles, which rotate with each subsequent season, and are launched and coordinated by a Conductor. Musebots are designed to generate and manage specific musical functions: in each ensemble, one musebot generates a harmonic progression based upon the incoming arousal and valence values and transmits it to the other active musebots. Some musebots generate rhythmic material, while others generate harmonic material based upon the progression supplied by the harmony generator. Musebots transmit their states, as well as their intentions, allowing other musebots to coordinate their musical actions.
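This coordination pattern can be compressed into a few lines: one agent derives a progression from the affective input and broadcasts it, and the others read that shared state. Real musebots run as separate processes and exchange network messages; the in-memory bus, affect mapping, and chord choices below are all illustrative assumptions:

```python
class Bus:
    """Toy stand-in for the musebots' messaging layer."""
    def __init__(self):
        self.state = {}                       # latest broadcast state per musebot
    def broadcast(self, sender, key, value):
        self.state.setdefault(sender, {})[key] = value
    def read(self, sender, key):
        return self.state.get(sender, {}).get(key)

def harmony_bot(bus, arousal, valence):
    # Map affect to a (hypothetical) mode and harmonic rhythm.
    mode = "major" if valence >= 0 else "minor"
    if arousal < 0:
        progression = ["I", "IV"]             # calmer scene: slower harmonic rhythm
    else:
        progression = ["I", "vi", "IV", "V"]
    bus.broadcast("HarmonyBot", "progression", (mode, progression))

def pad_bot(bus):
    # A texture musebot voices whatever progression is currently shared.
    mode, progression = bus.read("HarmonyBot", "progression")
    return [f"{chord}-{mode}-pad" for chord in progression]

bus = Bus()
harmony_bot(bus, arousal=-0.3, valence=0.5)
print(pad_bot(bus))                           # ['I-major-pad', 'IV-major-pad']
```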

The ambient aesthetic within Seasons suggests slower tempi; therefore, each ensemble ranges from 40 to 60 beats per minute. Several musebots create longer sustained textures (named “Chord”, “BassDrone”, “Drone”, “Texture”, “Pad”), while others generate more rhythmic elements (named “Figurate8”, “Figurate16”, “Mallets”, “Ostinati”, “Tinkle”). One musebot uses a corpus of machine-learned melodies derived from Pat Metheny (“MethenyMelodies”), while another uses instrumental samples and embellishes neighbor tone melodies (“Neighbor”).
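One plausible shape for such an ensemble definition is sketched below; the musebot names come from the paragraph above, but the configuration structure and the Conductor stub are assumptions:

```python
import random

# Hypothetical ensemble definitions; the structure is illustrative only.
ensembles = {
    "winter": {
        "tempo_bpm": random.randint(40, 60),   # ambient aesthetic: slow tempi
        "sustained": ["Chord", "BassDrone", "Pad"],
        "rhythmic": ["Mallets", "Tinkle"],
    },
    "spring": {
        "tempo_bpm": random.randint(40, 60),
        "sustained": ["Drone", "Texture"],
        "rhythmic": ["Figurate8", "Ostinati"],
    },
}

def launch_ensemble(season):
    """Stand-in for the Conductor launching the ensemble for a season."""
    cfg = ensembles[season]
    return cfg["tempo_bpm"], cfg["sustained"] + cfg["rhythmic"]

print(launch_ensemble("winter"))
```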

Musebots use sample-based audio generation and synthesis, including granular synthesis. Many of the samples are utilitarian (for example, “Harp”, “Electric Piano”, “Celesta”), while others are quite evocative on their own (for example, “Chord” uses “Bowed Guitar” and “Bowed Vibe” samples).
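For readers unfamiliar with the technique, the following is a minimal granular-synthesis sketch using NumPy. It illustrates the general idea of scattering short windowed grains from a source sample, not the Seasons implementation:

```python
import numpy as np

def granulate(sample, sr, out_dur=5.0, grain_dur=0.08, density=40):
    """Scatter short Hann-windowed grains from `sample` across an
    output buffer: the basic move behind granular textures."""
    rng = np.random.default_rng(0)
    glen = int(grain_dur * sr)
    window = np.hanning(glen)
    out = np.zeros(int(out_dur * sr))
    for _ in range(int(density * out_dur)):
        src = rng.integers(0, len(sample) - glen)   # random read position
        dst = rng.integers(0, len(out) - glen)      # random write position
        out[dst:dst + glen] += sample[src:src + glen] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

sr = 44100
source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in for a bowed-guitar sample
cloud = granulate(source, sr)
```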

While the musebots are autonomous, they were designed by a composer, and “inherited” the composer’s musical aesthetic. This is perhaps most clearly reflected not only in the types of musebots designed, but also in which of them were curated into the ensembles for Seasons. Eigenfeldt has created over three dozen musebots that function in a variety of styles; however, most of those used within Seasons were created specifically for the work.

  3. SOUNDSCAPE

Thorogood and Pasquier developed the Audio Metaphor system [4] to generate live performances of soundscape composition. The system is designed to utilize natural language cues combined with audio analysis, segmentation and recombination to create evocative sound art performances. In its initial implementation, the system took its cues from Twitter comments and searched online audio clips for source sound material.

For Seasons, the inputs to the system derive from metatags manually assigned to each video clip, together with an audio clip database curated to support the Ambient Video aesthetic. Initially, the tags used were often the same as those used by the Re:Cycle engine to select and sequence the video, in an attempt to select source material that correlated closely to the content of the videos (lakes, forests, snowy mountains). However, it became clear that some of the audio sources returned sound that did not reflect the calm and pleasant aesthetic sought, due mainly to the unregulated tagging system (text added by those who contributed sound clips). Furthermore, some of the video tags were more visually oriented and would rarely be used to describe sound clips. One solution adopted for the first installation of the work, in August 2015, was to adjust the settings in the system that composes the soundscape clip, trading a representational or natural soundscape for a more abstract but evocative track.

Two additional methods have since been implemented. First, a filter identifies and eliminates problematic sound clips from the database used for Seasons. Second, a separate set of sound-specific text cues was created. These help differentiate between shots that contain similar elements but convey different environments: one shot of a lake or stream might be very calm and quiet, while another displays pounding waves. Finally, settings in the system that composes the soundscape clip were modulated in response to the phrase and chord configurations generated by the musebots, again creating a less representational or natural soundscape.
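The selection logic described above might look like the following sketch: sound-specific cues take precedence over visual tags, and blacklisted clips are skipped. All names and the scoring rule are illustrative assumptions:

```python
def select_source_clips(shot, sound_db, blacklist, max_clips=4):
    """Pick candidate soundscape sources for a shot: prefer the shot's
    dedicated sound cues over its visual tags, skip blacklisted clips."""
    cues = shot.get("sound_cues") or shot["tags"]    # sound-specific cues win
    scored = []
    for clip in sound_db:
        if clip["id"] in blacklist:
            continue                                 # curated out of the database
        overlap = len(set(cues) & set(clip["tags"]))
        if overlap:
            scored.append((overlap, clip["id"]))
    scored.sort(reverse=True)
    return [cid for _, cid in scored[:max_clips]]

shot = {"tags": ["lake", "mountain"], "sound_cues": ["calm water", "lapping"]}
sound_db = [
    {"id": "s1", "tags": ["calm water", "birds"]},
    {"id": "s2", "tags": ["pounding waves"]},        # eliminated by the blacklist
    {"id": "s3", "tags": ["lapping", "calm water"]},
]
print(select_source_clips(shot, sound_db, blacklist={"s2"}))  # ['s3', 's1']
```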

  4. INSTALLATION

The ideal setting for Seasons is in a separate space which allows viewers to take the time to absorb the slow pace of the video and fully enjoy the multi-layered sound track. This is not always possible in a gallery situation, particularly with respect to the sound. In such cases the work may be displayed by video projection or on a large high-definition monitor, with headphones provided for listening.

References
[1] Eno, Brian. (1978) Music for Airports, PVC 7908 (AMB 001) album liner notes.
[2] Bizzocchi, Jim. (2008). “The Aesthetics of the Ambient Video Experience.” The Fibreculture Journal, Issue 11 (DAC Conference issue). http://eleven.fibreculturejournal.org/fcj-068-the-aesthetics-of-the-ambient-video-experience/
[3] Eigenfeldt, A., Bown, O., and Carey, B. (2015). “Collaborative Composition with Creative Systems: Reflections on the First Musebot Ensemble.” Proceedings of the Sixth International Conference on Computational Creativity, Park City, 134–143.
[4] Thorogood, M., and Pasquier, P. (2013). “Computationally Generated Soundscapes with Audio Metaphor.” Proceedings of the Fourth International Conference on Computational Creativity, Sydney, 256–260.