iMapper: Interaction-guided Scene Mapping from Monocular Videos

    Abstract

    A long-standing challenge in scene analysis is the recovery of scene arrangements under moderate to heavy occlusion, directly from monocular video. While the problem remains a subject of active research, concurrent advances have been made in the context of human pose reconstruction from monocular video, including image-space feature point detection and 3D pose recovery. These methods, however, start to fail under moderate to heavy occlusion as the problem becomes severely under-constrained. We approach the problems differently. We observe that people interact similarly in similar scenes. Hence, we exploit the correlation between scene object arrangement and motions performed in that scene in both directions: first, typical motions performed when interacting with objects inform us about possible object arrangements; and second, object arrangements, in turn, constrain the possible motions.
    We present iMapper, a data-driven method that focuses on identifying human-object interactions and jointly reasons about objects and human movement over space-time to recover both a plausible scene arrangement and consistent human interactions. We first introduce the notion of characteristic interactions as regions in space-time where an informative human-object interaction takes place. This is followed by a novel occlusion-aware matching procedure that searches for and aligns such characteristic snapshots from an interaction database to best explain the input monocular video. Through extensive quantitative and qualitative evaluations, we demonstrate that iMapper significantly improves performance over both dedicated state-of-the-art scene analysis and 3D human pose recovery approaches, especially under moderate to heavy occlusion.
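
    As a rough illustration of what occlusion-aware matching can look like in practice, the sketch below scores candidate interaction snapshots (2D projections of database poses) against detected 2D keypoints, down-weighting joints whose detection confidence is low, i.e. likely occluded. The function names, the Gaussian scoring form, and all parameters are illustrative assumptions, not the formulation used in the paper.

```python
# Minimal, hypothetical sketch of occlusion-aware matching of interaction
# "snapshots" against per-frame 2D keypoint detections. Everything here
# (names, score shape, thresholds) is an illustrative assumption.
import numpy as np

def occlusion_aware_score(projected_joints, detected_joints, confidences,
                          sigma=20.0, min_confidence=0.1):
    """Return a matching score in [0, 1] for one frame.

    projected_joints : (J, 2) candidate snapshot joints projected to image space
    detected_joints  : (J, 2) detected 2D keypoints for the same frame
    confidences      : (J,) detector confidence per joint, in [0, 1]
    sigma            : distance scale in pixels controlling match tolerance
    min_confidence   : joints below this confidence are treated as occluded
    """
    visible = confidences >= min_confidence
    if not np.any(visible):
        return 0.0  # nothing reliable to match against in this frame
    dists = np.linalg.norm(projected_joints - detected_joints, axis=1)
    # Gaussian falloff: well-aligned joints score near 1, distant joints near 0.
    per_joint = np.exp(-0.5 * (dists / sigma) ** 2)
    weights = confidences * visible  # occluded joints contribute nothing
    return float(np.sum(weights * per_joint) / np.sum(weights))

def best_snapshot(candidates, detected_joints, confidences):
    """Pick the candidate snapshot whose projection best explains the detections."""
    scores = [occlusion_aware_score(c, detected_joints, confidences)
              for c in candidates]
    return int(np.argmax(scores)), max(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    J = 15  # number of body joints
    truth = rng.uniform(0, 480, size=(J, 2))
    detections = truth + rng.normal(0, 5, size=(J, 2))
    conf = np.ones(J)
    conf[:5] = 0.0  # pretend the lower body is occluded by furniture
    candidates = [rng.uniform(0, 480, size=(J, 2)) for _ in range(9)] + [truth]
    idx, score = best_snapshot(candidates, detections, conf)
    print(f"best candidate: {idx}, score: {score:.3f}")
```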

    Authors

    Aron Monszpart, Paul Guerrero, Duygu Ceylan, Ersin Yumer, Niloy Mitra

    Conference

    SIGGRAPH 2019

    Full Paper

    ‘iMapper: Interaction-guided Scene Mapping from Monocular Videos’ (PDF)

    Ersin Yumer
    Ersin Yumer is a Staff Research Scientist leading the San Francisco research team within Uber ATG R&D. Prior to joining Uber, he led the perception machine learning team at Argo AI, and before that he spent three years at Adobe Research. He completed his PhD at Carnegie Mellon University, during which he also spent several summers at Google Research. His current research interests lie at the intersection of machine learning, 3D computer vision, and graphics. He develops end-to-end learning systems and holistic machine learning applications that bring together signals of the visual world: images, point clouds, videos, 3D shapes, and depth scans.