Convolutional Recurrent Network for Road Boundary Extraction

    Abstract

Creating high-definition maps that contain precise information about the static elements of the scene is of utmost importance for enabling self-driving cars to drive safely. In this paper, we tackle the problem of drivable road boundary extraction from LiDAR and camera imagery. Towards this goal, we design a structured model in which a fully convolutional network obtains deep features encoding the location and direction of road boundaries, and a convolutional recurrent network then outputs a polyline representation for each of them. Importantly, our method is fully automatic and does not require a user in the loop. We showcase the effectiveness of our method on a large North American city, where we obtain perfect topology of road boundaries 99.3% of the time at high precision and recall.
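
    The abstract describes a two-stage design: a fully convolutional network produces deep feature maps encoding boundary location and direction, and a convolutional recurrent network then traces each boundary as a polyline. The PyTorch sketch below is only a minimal illustration of that kind of pipeline under our own assumptions; the module names (BoundaryFeatureNet, ConvSnakeRNN), channel sizes, crop size, and offset-based vertex prediction are hypothetical and not taken from the paper.

```python
# Illustrative sketch of an FCN feature extractor followed by a convolutional
# recurrent "snake" that emits polyline vertices. All names and sizes are
# assumptions for exposition, not the authors' implementation.
import torch
import torch.nn as nn


class BoundaryFeatureNet(nn.Module):
    """Fully convolutional network: encodes the LiDAR + camera raster into
    per-pixel maps of boundary detection and boundary direction."""

    def __init__(self, in_channels=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.detection_head = nn.Conv2d(64, 1, 1)   # where boundaries are
        self.direction_head = nn.Conv2d(64, 2, 1)   # unit vector along the boundary

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.detection_head(feats), self.direction_head(feats)


class ConvSnakeRNN(nn.Module):
    """Convolutional recurrent network: at each step it looks at a crop of the
    deep features around the current vertex and predicts an offset to the next
    vertex, yielding one polyline per road boundary."""

    def __init__(self, feat_channels=64, hidden_channels=64, crop=32):
        super().__init__()
        self.crop = crop
        self.hidden_channels = hidden_channels
        self.step_conv = nn.Sequential(
            nn.Conv2d(feat_channels + hidden_channels, hidden_channels, 3, padding=1),
            nn.ReLU(),
        )
        self.to_offset = nn.Linear(hidden_channels, 2)  # predicted (dy, dx)

    def forward(self, feats, start_vertex, num_steps=20):
        b = feats.shape[0]
        hidden = feats.new_zeros(b, self.hidden_channels, self.crop, self.crop)
        vertex = start_vertex.float()
        polyline = [vertex]
        for _ in range(num_steps):
            crop = self._crop_around(feats, vertex)            # (b, c, crop, crop)
            hidden = self.step_conv(torch.cat([crop, hidden], dim=1))
            offset = self.to_offset(hidden.mean(dim=(2, 3)))   # global pool -> (dy, dx)
            vertex = vertex + offset
            polyline.append(vertex)
        return torch.stack(polyline, dim=1)  # (b, num_steps + 1, 2)

    def _crop_around(self, feats, vertex):
        # Simplified fixed-size crop centred on the rounded, clamped vertex.
        # Note this cropping is not differentiable w.r.t. the vertex position.
        b, c, h, w = feats.shape
        half = self.crop // 2
        crops = []
        for i in range(b):
            y = int(vertex[i, 0].clamp(half, h - half))
            x = int(vertex[i, 1].clamp(half, w - half))
            crops.append(feats[i:i + 1, :, y - half:y + half, x - half:x + half])
        return torch.cat(crops, dim=0)


# Hypothetical usage on a 4-channel bird's-eye-view raster (e.g. LiDAR + image).
fcn = BoundaryFeatureNet(in_channels=4)
rnn = ConvSnakeRNN()
x = torch.randn(1, 4, 128, 128)
feats, detection, direction = fcn(x)
polyline = rnn(feats, start_vertex=torch.tensor([[64.0, 10.0]]))
print(polyline.shape)  # torch.Size([1, 21, 2])
```

    In the actual system the recurrent head would be seeded with detected boundary endpoints and trained against ground-truth polylines; the fixed step count and non-differentiable crop above are simplifications made purely to keep the sketch short.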

    Authors

    Justin Liang, Namdar Homayounfar, Shenlong Wang, Wei-Chiu Ma, Raquel Urtasun

    Conference

    CVPR 2019

    Full Paper

    ‘Convolutional Recurrent Network for Road Boundary Extraction’ (PDF)

    Uber ATG

    Justin Liang
    Justin Liang is a research scientist at Uber ATG Toronto. His research focuses on computer vision and machine learning for mapping and detection in self-driving vehicles. Before joining ATG, he completed an MSc in Computer Science at the University of Toronto, supervised by Raquel Urtasun. He also holds a BASc in Mechanical Engineering from the University of British Columbia.
    Namdar Homayounfar
    I'm a research scientist at Uber ATG Toronto and a PhD student at the University of Toronto under the supervision of Prof. Raquel Urtasun. I have broad research interests in deep learning and computer vision. My current focus is the development of deep structured models for the creation of HD maps required for the safe navigation of autonomous vehicles. Previously, I obtained my MSc in Statistics at the University of Toronto and, before that, my BSc in Probability and Statistics from McGill University.
    Shenlong Wang
    Shenlong Wang is a research scientist at Uber ATG Toronto working on the development of self-driving cars. He is also a PhD student at the University of Toronto, advised by Prof. Raquel Urtasun. He has broad interests in computer vision, machine learning, and robotics, and is particularly interested in 3D vision and deep structured models.
    Wei-Chiu Ma
    Wei-Chiu Ma is a PhD student at MIT advised by Prof. Antonio Torralba. His research interests lie at the intersection of computer vision and machine learning, in particular low-level vision and 3D vision. He also works part-time at Uber ATG Toronto with Prof. Raquel Urtasun, applying his research to self-driving vehicles.
    Raquel Urtasun
    Raquel Urtasun is the Chief Scientist for Uber ATG and the Head of Uber ATG Toronto. She is also a Professor at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision, and a co-founder of the Vector Institute for AI. She is a recipient of an NSERC EWR Steacie Award, an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, a Fallona Family Research Award, and two Best Paper Runner-Up Prizes awarded at CVPR in 2013 and 2017. She was also named Chatelaine 2018 Woman of the Year and one of Toronto's top influencers of 2018 by Adweek magazine.