LaserNet: An Efficient Probabilistic 3D Object Detector for Autonomous Driving

    Abstract

    In this paper, we present LaserNet, a computationally efficient method for 3D object detection from LiDAR data for autonomous driving. The efficiency results from processing LiDAR data in the native range view of the sensor, where the input data is naturally compact. Operating in the range view involves well-known challenges for learning, including occlusion and scale variation, but it also provides contextual information based on how the sensor data was captured. Our approach uses a fully convolutional network to predict a multimodal distribution over 3D boxes for each point, then efficiently fuses these distributions to generate a prediction for each object. Experiments show that modeling each detection as a distribution rather than a single deterministic box leads to better overall detection performance. Benchmark results show that this approach has significantly lower runtime than other recent detectors and that it achieves state-of-the-art performance when compared on a large dataset that has enough data to overcome the challenges of training on the range view.
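    The fusion step described above, combining many per-point box distributions into one per-object prediction, can be illustrated with a minimal sketch. The snippet below assumes each point predicts a Gaussian over a 2D box center (a simplification of the paper's full box parameterization; the array names and the inverse-variance fusion shown here are illustrative, not LaserNet's actual implementation):

```python
import numpy as np

# Hypothetical per-point predictions for a single object: each LiDAR point i
# predicts a box center mu[i] and a scalar variance var[i] expressing its
# uncertainty. (Illustrative data, not the paper's parameterization.)
mu = np.array([[10.2, 5.1],
               [9.8, 4.9],
               [10.0, 5.0]])        # predicted box centers, shape (N, 2)
var = np.array([0.25, 0.10, 0.05])  # predicted variances, shape (N,)

# Fuse the independent Gaussian estimates by inverse-variance weighting:
# more confident points (smaller variance) contribute more to the final box.
w = 1.0 / var
fused_mu = (w[:, None] * mu).sum(axis=0) / w.sum()
fused_var = 1.0 / w.sum()

print(fused_mu, fused_var)
```

The fused variance is smaller than any individual point's variance, reflecting that agreeing predictions reinforce each other; this is one way a probabilistic detector can report calibrated per-object confidence.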

    Authors

    Gregory P. Meyer, Ankit Laddha, Eric Kee, Carlos Vallespi-Gonzalez, Carl Wellington

    Conference

    CVPR 2019

    Full Paper

    ‘LaserNet: An Efficient Probabilistic 3D Object Detector for Autonomous Driving’ (PDF)

    Eric Kee
    Eric received his Ph.D. with advisor Hany Farid at Dartmouth College, where he studied physics-based vision and its applications to image forensics. As a postdoc, Eric studied computational imaging under advisor Shree Nayar at Columbia University. Following Columbia, Eric joined the Facebook computational imaging group, and then a graphics & simulation startup, Avametric, with Prof. James O’Brien of U.C. Berkeley, where he developed methods for fitting deformable models of human bodies for virtual clothing try-on. Eric joined the Uber ATG perception group in 2016, then led by Prof. Drew Bagnell of CMU, before the first public launch of self-driving vehicles in Pittsburgh. In his work at ATG, Eric (and coauthors) developed and deployed the first deep neural network architecture for object detection to run on Uber’s self-driving fleet. Eric is currently a member of the Uber ATG R&D group, led by Raquel Urtasun. His research interests include self-driving, machine learning, and biological vision.