Name
Video Production with Generative AI
Date & Time
Wednesday, October 23, 2024, 10:00 AM - 10:30 AM
Description

This paper explores the current and potential applications of Generative AI (GenAI) in video workflows. We begin by delving into advancements in video generation models, such as text-to-video, and their use in previsualization, second unit/b-roll, and short-form content. Emphasis is placed on prompt engineering, a critical technique for tailoring model outputs to achieve desired cinematographic effects, including camera placement, movement, shot composition, focus, and lighting. Additionally, GenAI’s role in creating storyboards and enhancing video post-production workflows—including editing, visual effects, color correction, and localization tasks like lip syncing and voice dubbing—is examined. We also discuss how GenAI can enrich existing content, such as adding commentary to sports broadcasts, and improve video content management, including increasing discoverability and monetization of archived content.
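To make the prompt-engineering point concrete, below is a minimal sketch of how the visual language of cinematography described in the paper might be encoded in a text-to-video request. The prompt wording and the parameter names are illustrative assumptions only and are not tied to any particular model or API discussed in the presentation.

    # Illustrative only: the request structure below is a hypothetical stand-in
    # for whatever text-to-video service is used; the point is the
    # cinematographic vocabulary carried by the prompt itself.

    prompt = (
        "Low-angle dolly-in on a lone cyclist crossing a rain-slicked bridge at dusk; "
        "35mm lens, shallow depth of field with focus held on the cyclist, "
        "warm practical lights in the background, slight handheld sway."
    )

    request = {
        "prompt": prompt,
        "duration_seconds": 6,   # short-form / b-roll length
        "aspect_ratio": "16:9",
        "seed": 42,              # fixing the seed keeps prompt changes comparable
    }

    # A previsualization loop would vary one cinematographic attribute at a time
    # (camera move, focal length, lighting) and compare the resulting clips.
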

Technical Depth of Presentation
Intermediate
Take-Aways from this Presentation

Although the technology has only recently matured enough for practical use, video generation models are already useful for previsualization, second unit/b-roll, and short-form content. To make effective use of video generation models, expert prompt engineering is necessary to apply the visual language of cinematography to shape the model output. Besides video generation models, other model types have evolved to assist with other video workflows, including localization and other ways of enriching video content.

Presentation
Manuscript