Deep Multi-Sensor Lane Detection

    Abstract

    Reliable and accurate lane detection has been a long-standing problem in the field of autonomous driving. In recent years, many approaches have been developed that use images (or videos) as input and reason in image space. In this paper we argue that accurate image estimates do not translate to precise 3D lane boundaries, which are the input required by modern motion planning algorithms. To address this issue, we propose a novel deep neural network that takes advantage of both LiDAR and camera sensors and produces very accurate estimates directly in 3D space. We demonstrate the performance of our approach both on highways and in cities, and show very accurate estimates in complex scenarios such as heavy traffic (which produces occlusion), forks, merges, and intersections.
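    The abstract's central claim — that accurate image-space estimates do not translate to precise 3D lane boundaries — can be illustrated with a toy geometric sketch (this is not the paper's network; all intrinsics and numbers below are hypothetical). Back-projecting a lane pixel onto a flat ground plane, as a camera-only pipeline must ultimately do, shows how a one-pixel image error grows into a much larger lateral error at range:

    ```python
    # Illustrative sketch (not the paper's method): back-projecting an
    # image-space lane pixel onto an assumed flat ground plane, to show
    # why small image errors become large 3D errors at distance.

    def backproject_to_ground(u, v, fx, fy, cx, cy, cam_height):
        """Intersect the camera ray through pixel (u, v) with the ground
        plane y = cam_height (camera frame: x right, y down, z forward)."""
        dx = (u - cx) / fx
        dy = (v - cy) / fy
        if dy <= 0:
            raise ValueError("ray does not hit the ground ahead of the camera")
        t = cam_height / dy               # ray parameter at the intersection
        return (t * dx, cam_height, t)    # 3D point (x, y, z) in metres

    # Hypothetical intrinsics; camera mounted 1.5 m above the road.
    fx = fy = 1000.0
    cx, cy = 640.0, 360.0
    h = 1.5

    near = backproject_to_ground(700.0, 500.0, fx, fy, cx, cy, h)   # ~10.7 m ahead
    far  = backproject_to_ground(700.0, 380.0, fx, fy, cx, cy, h)   # 75 m ahead
    far1 = backproject_to_ground(701.0, 380.0, fx, fy, cx, cy, h)   # 1 px off

    # Lateral 3D error caused by a single-pixel error grows with depth.
    err_far  = abs(far1[0] - far[0])
    err_near = abs(backproject_to_ground(701.0, 500.0, fx, fy, cx, cy, h)[0]
                   - near[0])
    ```

    The same one-pixel perturbation costs roughly seven times more lateral error at 75 m than at 11 m, which is why the paper fuses LiDAR (direct 3D measurements) with the camera rather than reasoning purely in image space.
    
    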

    Authors

    Min Bai, Gellert Mattyus, Namdar Homayounfar, Shenlong Wang, Shrinidhi Kowshika Lakshmikanth, Raquel Urtasun

    Conference

    IROS 2018

    Full Paper

    ‘Deep Multi Sensor Lane Detection’ (PDF)

    Uber ATG

    Min Bai
    Min Bai is a research scientist at Uber ATG Toronto. Before that, he was a wireless systems engineer at Apple. He has an undergraduate degree in electrical engineering from the University of Waterloo. His research interests include various perception tasks such as segmentation, point cloud processing, and online mapping.
    Gellert Mattyus
    Gellert Mattyus is a research scientist at Uber ATG Toronto working on computer vision and machine learning problems related to self-driving, with an emphasis on perceiving maps. He earned his PhD at the Remote Sensing Technology Chair of the Technical University of Munich (TUM) while working as a research scientist at the Photogrammetry and Image Analysis Department of the German Aerospace Center (DLR). After earning his PhD, he spent nearly a year as a postdoc at the Machine Learning Group of the University of Toronto under the supervision of Professor Raquel Urtasun.
    Namdar Homayounfar
    I'm a research scientist at Uber ATG Toronto and a PhD student at the University of Toronto under the supervision of Prof. Raquel Urtasun. I have broad research interests in deep learning and computer vision. My current focus is the development of deep structured models for the creation of HD maps required for the safe navigation of autonomous vehicles. Previously, I obtained my MSc degree in Statistics at the University of Toronto, and prior to that my BSc in Probability and Statistics from McGill University.
    Shenlong Wang
    Shenlong Wang is a research scientist at Uber ATG Toronto working on the development of self-driving cars. He is also a PhD student at the University of Toronto, advised by Prof. Raquel Urtasun. He has broad interests in computer vision, machine learning, and robotics, and is particularly interested in 3D vision and deep structured models.
    Raquel Urtasun
    Raquel Urtasun is the Chief Scientist for Uber ATG and the Head of Uber ATG Toronto. She is also a Professor at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision, and a co-founder of the Vector Institute for AI. She is a recipient of an NSERC EWR Steacie Award, an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, a Fallona Family Research Award, and two Best Paper Runner-Up Prizes awarded at CVPR in 2013 and 2017. She was also named Chatelaine's 2018 Woman of the Year and one of Toronto's top influencers of 2018 by Adweek magazine.