Learning to Localize through Compressed Binary Maps

    Abstract

    One of the main difficulties of scaling current localization systems to large environments is the on-board storage required for the maps. In this paper we propose to learn to compress the map representation such that it is optimal for the localization task. As a consequence, higher compression rates can be achieved without loss of localization accuracy when compared to standard coding schemes that optimize for reconstruction, thus ignoring the end task. Our experiments show that it is possible to learn a task-specific compression which reduces storage requirements by two orders of magnitude over general-purpose codecs such as WebP without sacrificing performance.
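    To make the abstract's idea a bit more concrete, the sketch below shows one way a task-driven compression module could be trained against a localization objective instead of a reconstruction objective, in a PyTorch-style setup. The module names, layer sizes, the straight-through binarizer, and the grouped cross-correlation matching loss are illustrative assumptions for this sketch, not the architecture described in the paper.

```python
# Minimal sketch, not the authors' implementation: a map compressor whose
# binary bottleneck is trained end-to-end against a matching-based
# localization loss rather than a pixel-reconstruction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Binarize(torch.autograd.Function):
    """Hard 0/1 quantization with a straight-through gradient estimate."""

    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out  # pass the gradient through the quantizer unchanged


class MapCompressor(nn.Module):
    """Encodes a map patch into a low-resolution binary code (the stored map)
    and decodes it into a dense embedding used only for matching."""

    def __init__(self, code_channels=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, code_channels, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(code_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),
        )

    def forward(self, map_patch):
        binary_code = Binarize.apply(self.encoder(map_patch))  # what gets stored on-board
        return self.decoder(binary_code)


def localization_loss(online_embedding, map_embedding, gt_offset_index):
    """Score every candidate (x, y) offset by cross-correlating the online
    sweep embedding against the decoded map embedding, then treat the
    ground-truth offset as the target class of a softmax over offsets."""
    b, c, h, w = map_embedding.shape
    # Grouped convolution correlates each sample's online embedding only
    # with its own decoded map embedding.
    scores = F.conv2d(map_embedding.reshape(1, b * c, h, w),
                      online_embedding, groups=b)
    scores = scores.reshape(b, -1)  # one logit per candidate offset
    return F.cross_entropy(scores, gt_offset_index)
```

    Because the binary bottleneck sits inside the computation graph and the only training signal is the localization loss, the bits that survive compression are those that help disambiguate pose rather than those that reproduce pixels, which is the intuition behind outperforming reconstruction-oriented codecs such as WebP at this task.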

    Authors

    Xinkai Wei, Ioan Andrei Bârsan, Shenlong Wang, Julieta Martinez, Raquel Urtasun

    Conference

    CVPR 2019

    Full Paper

    ‘Learning to Localize through Compressed Binary Maps’ (PDF)

    Uber ATG

    Ioan Andrei Bârsan
    I'm a graduate student at the University of Toronto doing research on combining classic geometric methods with learning for computer vision. My supervisor is Prof. Raquel Urtasun. At the same time, I am working full-time at Uber ATG Toronto (also with Raquel) to apply my research to the real-world challenges of self-driving cars. Before coming to UofT, I did my Master's at ETH Zurich, writing my Master's Thesis on large-scale dense mapping under Prof. Andreas Geiger's supervision. I am originally from Brașov, Romania.
    Shenlong Wang
    Shenlong Wang is a research scientist at Uber ATG Toronto working on the development of self-driving cars. He is also a PhD student at the University of Toronto, advised by Prof. Raquel Urtasun. He has a broad interest in computer vision, machine learning, and robotics, and is particularly interested in 3D vision and deep structured models.
    Julieta Martinez
    Julieta is a researcher at Uber ATG Toronto, focused on computer vision for autonomous driving. She obtained an MSc and a PhD at the University of British Columbia.
    Raquel Urtasun
    Raquel Urtasun is the Chief Scientist for Uber ATG and the Head of Uber ATG Toronto. She is also a Professor at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision, and a co-founder of the Vector Institute for AI. She is a recipient of an NSERC E.W.R. Steacie Award, an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, a Fallona Family Research Award, and two Best Paper Runner-Up Prizes awarded at CVPR in 2013 and 2017. She was also named Chatelaine's 2018 Woman of the Year and one of Toronto's top influencers of 2018 by Adweek magazine.