Computing Short Films Using Language-Guided Diffusion and Vocoding Through Virtual Timelines of Summaries

Author(s): Luís Arandas, Miguel Carvalhais, Mick Grierson
Subject(s): Media studies, Film / Cinema / Cinematography
Published by: INSAM Institut za savremenu umjetničku muziku
Keywords: artificial filmmaking; deep generative models; language-guided diffusion; short film computing; audiovisual composition; multimodal sequencing;

Summary/Abstract: Language-guided generative models are increasingly used in audiovisual production. Image diffusion allows video sequences to be developed, and part of their coordination can be established through text prompts. This research automates a video production pipeline that leverages CLIP guidance with long-form text inputs and a separate text-to-speech system. We introduce a method for producing frame-accurate video and audio summaries using a virtual timeline and document a set of video outputs with diverging parameters. Our approach was applied in the production of the film Irreplaceable Biography and contributes to a future where multimodal generative architectures serve as underlying mechanisms for establishing visual sequences in time. We contribute to a practice where language modelling is part of a shared, learned representation that can support professional video production, used as a vehicle throughout the composition process toward potential videography in physical space.
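
The abstract's notion of a virtual timeline that keeps video and audio summaries frame-accurate can be illustrated with a minimal sketch. The code below is not taken from the paper; it assumes a hypothetical Segment record holding one summary sentence and the spoken duration reported by a text-to-speech stage, and it maps each segment onto an inclusive frame range at an assumed frame rate, ranges that a CLIP-guided diffusion stage could then render per prompt.

```python
from dataclasses import dataclass

FPS = 12  # assumed frame rate; the paper does not specify one


@dataclass
class Segment:
    text: str          # summary sentence used as a diffusion prompt
    duration_s: float  # spoken duration reported by the TTS stage


def build_virtual_timeline(segments, fps=FPS):
    """Map each text segment to an inclusive frame range so that the
    diffusion prompts and the synthesized narration stay aligned."""
    timeline = []
    cursor = 0.0
    for seg in segments:
        start_frame = round(cursor * fps)
        end_frame = round((cursor + seg.duration_s) * fps) - 1
        timeline.append((start_frame, end_frame, seg.text))
        cursor += seg.duration_s
    return timeline


if __name__ == "__main__":
    # hypothetical summary segments; durations would come from the TTS output
    summary = [
        Segment("A biography begins in an empty archive.", 3.2),
        Segment("Photographs dissolve into moving landscapes.", 4.1),
    ]
    for start, end, prompt in build_virtual_timeline(summary):
        # each (start, end) range would be rendered by the CLIP-guided
        # diffusion stage conditioned on `prompt`
        print(f"frames {start:4d}-{end:4d}: {prompt}")
```

This is only a sketch of the timeline bookkeeping under the stated assumptions; the paper's actual pipeline, parameters, and interfaces between the diffusion and vocoding stages are described in the full text.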

  • Issue Year: 2023
  • Issue No: 10
  • Page Range: 71-89
  • Page Count: 19
  • Language: English