Exploratory Stage Lighting Design using Visual Objectives

    Abstract

    Lighting is a critical element of theater. A lighting designer is responsible for drawing the audience's attention to a specific part of the stage, setting the time of day, creating a mood, and conveying emotions. Designers often begin the lighting design process by collecting reference visual imagery that captures different aspects of their artistic intent. They then experiment with various lighting options to determine which ideas work best on stage. However, modern stages contain tens to hundreds of lights, and setting each light source's parameters individually to realize an idea is tedious and requires expert skill. In this paper, we describe an exploratory lighting design tool based on feedback from professional designers. The system extracts abstract visual objectives from reference imagery and applies them to target regions of the stage. Our system can rapidly generate plausible design candidates that embody the visual objectives through a Gibbs sampling method, and presents them as a design gallery for rapid exploration and iterative refinement. We demonstrate that the resulting system allows lighting designers of all skill levels to quickly create and communicate complex designs, even for scenes containing many color-changing lights.
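    To give a flavor of the sampling idea, the sketch below is a deliberately simplified, hypothetical illustration of Gibbs-style sampling for lighting design; it is not the authors' system. The paper's method samples full designs (per-light intensity, color, and so on) conditioned on visual objectives extracted from reference imagery, whereas this toy keeps one scalar intensity per light and a single made-up objective (match a target average brightness). All function names and parameters here are illustrative assumptions.

    ```python
    import math
    import random

    def objective(intensities, target):
        # Toy "visual objective": higher when the mean stage brightness
        # is near the target value. The real system scores designs against
        # objectives extracted from reference images.
        mean = sum(intensities) / len(intensities)
        return -(mean - target) ** 2

    def resample_light(intensities, idx, target, rng, temperature=0.001):
        # One Gibbs update: redraw light `idx` from its conditional
        # distribution while holding all other lights fixed. Candidate
        # intensities are scored by the objective and drawn with
        # softmax weights.
        candidates = [i / 10 for i in range(11)]  # intensities in [0, 1]
        scores = []
        for c in candidates:
            trial = list(intensities)
            trial[idx] = c
            scores.append(objective(trial, target))
        m = max(scores)  # subtract the max to stabilize the softmax
        weights = [math.exp((s - m) / temperature) for s in scores]
        out = list(intensities)
        out[idx] = rng.choices(candidates, weights=weights, k=1)[0]
        return out

    def sample_design(n_lights=4, target=0.6, sweeps=20, seed=0):
        # Run several Gibbs sweeps over all lights and return one
        # plausible design candidate; repeated calls with different
        # seeds would populate a design gallery.
        rng = random.Random(seed)
        intensities = [rng.random() for _ in range(n_lights)]
        for _ in range(sweeps):
            for i in range(n_lights):
                intensities = resample_light(intensities, i, target, rng)
        return intensities
    ```

    Because each update is a conditional draw rather than a greedy choice, repeated runs yield diverse candidates that all roughly satisfy the objective, which is what makes the approach suited to populating a gallery for exploration.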

    Authors

    Evan Shimizu, Sylvain Paris, Matthew Fisher, Ersin Yumer, Kayvon Fatahalian

    Conference

    Eurographics 2019

    Full Paper

    Exploratory Stage Lighting Design using Visual Objectives (PDF)


    Ersin Yumer
    Ersin Yumer is a Staff Research Scientist leading the San Francisco research team within Uber ATG R&D. Prior to joining Uber, he led the perception machine learning team at Argo AI, and before that he spent three years at Adobe Research. He completed his PhD at Carnegie Mellon University, during which he spent several summers at Google Research. His current research interests lie at the intersection of machine learning, 3D computer vision, and graphics. He develops end-to-end learning systems and holistic machine learning applications that bring signals of the visual world together: images, point clouds, videos, 3D shapes, and depth scans.