Results for Self-Driving Vehicles

LaserNet: An Efficient Probabilistic 3D Object Detector for Autonomous Driving

G. P. Meyer, A. Laddha, E. Kee, C. Vallespi-Gonzalez, C. Wellington
In this paper, we present LaserNet, a computationally efficient method for 3D object detection from LiDAR data for autonomous driving. The efficiency results from processing LiDAR data in the native range view of the sensor, where the input data is naturally compact. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
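
The range view the abstract refers to is the sensor's native spherical image. As a rough illustration (not the authors' code), the sketch below projects a LiDAR sweep into such a range image; the image dimensions and vertical field of view are assumed values.

```python
# Minimal sketch of a range-view projection, assuming a 64-beam sensor
# with a (-25, 5) degree vertical field of view; not LaserNet's code.
import numpy as np

def to_range_view(points, n_rows=64, n_cols=2048, v_fov=(-25.0, 5.0)):
    """Map (N, 3) LiDAR points to an (n_rows, n_cols) range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                  # range per point
    azimuth = np.arctan2(y, x)                          # [-pi, pi]
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))

    col = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    row = ((v_fov[1] - elevation) / (v_fov[1] - v_fov[0]) * n_rows).astype(int)
    valid = (row >= 0) & (row < n_rows)

    image = np.zeros((n_rows, n_cols), dtype=np.float32)
    image[row[valid], col[valid]] = r[valid]            # keep last hit per cell
    return image

points = np.random.randn(1000, 3) * 10.0                # fake sweep
print(to_range_view(points).shape)                      # (64, 2048)
```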

End-to-end Interpretable Neural Motion Planner

W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, R. Urtasun
In this paper, we propose a neural motion planner for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road-users. Towards this goal, we design a holistic model that takes as input raw LIDAR data and an HD map and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost volume defining the goodness of each position that the self-driving car can take within the planning horizon. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
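
The abstract describes a cost volume scoring each position the self-driving car could take. A hedged sketch of that planning step: score candidate trajectories against the volume and keep the cheapest. Here the cost volume and candidates are random placeholders; in the paper both come from the learned model.

```python
# Toy trajectory selection over a cost volume; grid resolution is assumed.
import numpy as np

def plan(cost_volume, trajectories, resolution=0.5):
    """cost_volume: (T, H, W) cost per future timestep and BEV cell.
    trajectories: (K, T, 2) candidate (x, y) positions in meters."""
    T, H, W = cost_volume.shape
    ij = (trajectories / resolution).astype(int)        # meters -> cells
    ij[..., 0] = np.clip(ij[..., 0], 0, H - 1)
    ij[..., 1] = np.clip(ij[..., 1], 0, W - 1)
    t = np.arange(T)
    costs = cost_volume[t, ij[:, :, 0], ij[:, :, 1]].sum(axis=1)  # (K,)
    return trajectories[np.argmin(costs)], costs.min()

cost = np.random.rand(10, 200, 200).astype(np.float32)
cands = np.abs(np.random.randn(64, 10, 2)) * 20          # 64 candidates
best, c = plan(cost, cands)
print(best.shape, float(c))                              # (10, 2) ...
```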

Learning to Localize through Compressed Binary Maps

X. Wei, I. A. Bârsan, S. Wang, J. Martinez, R. Urtasun
One of the main difficulties of scaling current localization systems to large environments is the on-board storage required for the maps. In this paper we propose to learn to compress the map representation such that it is optimal for the localization task. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Multi-Task Multi-Sensor Fusion for 3D Object Detection

M. Liang, B. Yang, Y. Chen, R. Hu, R. Urtasun
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Deep Rigid Instance Scene Flow

W.-C. Ma, S. Wang, R. Hu, Y. Xiong, R. Urtasun
In this paper we tackle the problem of scene flow estimation in the context of self-driving. We leverage deep learning techniques as well as strong priors as in our application domain the motion of the scene can be composed by the motion of the robot and the 3D motion of the actors in the scene. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
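
The decomposition the abstract states can be made concrete: each point moves either with the ego-motion (background) or with the rigid motion of the actor it belongs to. The sketch below composes per-point 3D flow this way; all poses are invented for demonstration.

```python
# Illustrative rigid-motion composition of scene flow; not the paper's model.
import numpy as np

def rigid_flow(points, labels, ego_pose, actor_poses):
    """points: (N, 3); labels: (N,) with 0 = background, i > 0 = actor i.
    Poses are 4x4 SE(3) transforms mapping frame t to frame t+1."""
    homo = np.hstack([points, np.ones((len(points), 1))])   # (N, 4)
    moved = (homo @ ego_pose.T)[:, :3]                      # background motion
    for i, pose in actor_poses.items():
        mask = labels == i
        moved[mask] = (homo[mask] @ pose.T)[:, :3]          # actor rigid motion
    return moved - points                                   # per-point 3D flow

pts = np.random.randn(500, 3)
lbl = np.random.randint(0, 3, size=500)                     # bg + two actors
ego = np.eye(4); ego[0, 3] = 1.0                            # 1 m forward
car = np.eye(4); car[1, 3] = -0.5
print(rigid_flow(pts, lbl, ego, {1: car, 2: np.eye(4)}).shape)  # (500, 3)
```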

UPSNet: A Unified Panoptic Segmentation Network

Y. Xiong, R. Liao, H. Zhao, R. Hu, M. Bai, E. Yumer, R. Urtasun
In this paper, we propose a unified panoptic segmentation network (UPSNet) for tackling the newly proposed panoptic segmentation task. On top of a single backbone residual network, we first design a deformable convolution based semantic segmentation head and a Mask R-CNN style instance segmentation head which solve these two subtasks simultaneously. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
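
For intuition only, here is a greedy fusion of semantic and instance outputs into a panoptic result. This is a simplified stand-in, not UPSNet's panoptic head, which resolves the two subtasks' conflicts within the network.

```python
# Toy panoptic fusion: paste instance masks in confidence order so each
# pixel gets one class id and at most one instance id.
import numpy as np

def fuse_panoptic(semantic, masks, scores):
    """semantic: (H, W) class ids; masks: (K, H, W) boolean instance masks."""
    instance_id = np.zeros_like(semantic)
    for new_id, k in enumerate(np.argsort(scores)[::-1], start=1):
        free = masks[k] & (instance_id == 0)   # pixels not yet claimed
        instance_id[free] = new_id             # higher score wins overlaps
    return np.stack([semantic, instance_id], axis=-1)   # (H, W, 2)

sem = np.random.randint(0, 5, size=(64, 64))
inst = np.random.rand(3, 64, 64) > 0.7
print(fuse_panoptic(sem, inst, np.array([0.9, 0.5, 0.7])).shape)  # (64, 64, 2)
```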

DeepSignals: Predicting Intent of Drivers Through Visual Attributes

D. Frossard, E. Kee, R. Urtasun
Detecting the intention of drivers is an essential task in self-driving, necessary to anticipate sudden events like lane changes and stops. Turn signals and emergency flashers communicate such intentions, providing seconds of potentially critical reaction time. In this paper, we propose to detect these signals in video sequences by using a deep neural network that reasons about both spatial and temporal information. […] [PDF]
International Conference on Robotics and Automation (ICRA), 2019
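
A minimal spatio-temporal classifier in the spirit of the abstract (a network reasoning about both spatial and temporal information). The layer sizes and the four signal classes are illustrative assumptions, not the paper's architecture.

```python
# Per-frame CNN features fed to an LSTM, then classified; a hedged sketch.
import torch
import torch.nn as nn

class SignalNet(nn.Module):
    def __init__(self, n_classes=4):              # e.g. left/right/flashers/off
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame spatial features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rnn = nn.LSTM(32, 64, batch_first=True)   # temporal reasoning
        self.head = nn.Linear(64, n_classes)

    def forward(self, clip):                      # (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).flatten(1)   # (B*T, 32)
        out, _ = self.rnn(feats.view(B, T, -1))
        return self.head(out[:, -1])              # classify from last step

logits = SignalNet()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)                               # torch.Size([2, 4])
```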

Predicting Motion of Vulnerable Road Users using High-Definition Maps and Efficient ConvNets

F. Chou, T.-H. Lin, H. Cui, V. Radosavljevic, T. Nguyen, T. Huang, M. Niedoba, J. Schneider, N. Djuric
Following detection and tracking of traffic actors, prediction of their future motion is the next critical component of a self-driving vehicle (SDV), allowing the SDV to move safely and efficiently in its environment. This is particularly important when it comes to vulnerable road users (VRUs), such as pedestrians and bicyclists. We present a deep learning method for predicting VRU movement, where we rasterize high-definition maps and the actor's surroundings into a bird's-eye view image used as input to convolutional networks. […] [PDF]
MLITS workshop @ Neural Information Processing Systems (NeurIPS), 2018
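
The rasterization step the abstract describes can be illustrated as below: draw map polylines and the actor's footprint into a BEV image a CNN can consume. Colors, resolution, and image size are invented choices.

```python
# Hedged BEV rasterization sketch; real rasterizers draw filled polygons
# and many semantic layers, this only marks sampled points.
import numpy as np

def rasterize(polylines, actor_box, size=300, resolution=0.1):
    """polylines: list of (N_i, 2) arrays in meters, actor-centered.
    actor_box: (4, 2) footprint corners. Returns (size, size, 3) image."""
    img = np.zeros((size, size, 3), dtype=np.uint8)

    def to_px(xy):
        px = (xy / resolution + size / 2).astype(int)
        return np.clip(px, 0, size - 1)

    for line in polylines:                        # map geometry in green
        px = to_px(np.asarray(line))
        img[px[:, 1], px[:, 0], 1] = 255
    box_px = to_px(np.asarray(actor_box))         # actor footprint in red
    img[box_px[:, 1], box_px[:, 0], 0] = 255
    return img

lane = np.stack([np.linspace(-10, 10, 200), np.zeros(200)], axis=1)
box = np.array([[-1, -2], [1, -2], [1, 2], [-1, 2]])
print(rasterize([lane], box).shape)               # (300, 300, 3)
```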

Joint Mapping and Calibration via Differentiable Sensor Fusion

J. Chen, F. Obermeyer, V. Lyapunov, L. Gueguen, N. Goodman
We leverage automatic differentiation (AD) and probabilistic programming to develop an end-to-end optimization algorithm for batch triangulation of a large number of unknown objects. Given noisy detections extracted from noisily geo-located street level imagery without depth information, we jointly estimate the number and location of objects of different types, together with parameters for sensor noise characteristics and prior distribution of objects conditioned on side information. […] [PDF]
CoRR, 2018

Deep Multi-Sensor Lane Detection

M. Bai, G. Mattyus, N. Homayounfar, S. Wang, S. K. Lakshmikanth, R. Urtasun
Reliable and accurate lane detection has been a long-standing problem in the field of autonomous driving. In recent years, many approaches have been developed that use images (or videos) as input and reason in image space. In this paper we argue that accurate image estimates do not translate to precise 3D lane boundaries, which are the input required by modern motion planning algorithms. […] [PDF]
International Conference on Intelligent Robots and Systems (IROS), 2018

HDNET: Exploiting HD Maps for 3D Object Detection

B. Yang, M. Liang, R. Urtasun
In this paper we show that High-Definition (HD) maps provide strong priors that can boost the performance and robustness of modern 3D object detectors. Towards this goal, we design a single stage detector that extracts geometric and semantic features from the HD maps. […] [PDF]
Conference on Robot Learning (CoRL), 2018
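
One simple way to feed HD-map priors to a detector, consistent with the abstract's description, is to append map-derived channels (e.g. ground height, a road mask) to the LiDAR BEV input. The channel counts below are assumptions.

```python
# Hedged sketch: HD-map channels concatenated onto a BEV LiDAR tensor
# before a stand-in detector stem; not HDNET's actual architecture.
import torch
import torch.nn as nn

lidar_bev = torch.rand(1, 32, 400, 400)           # 32 height slices (assumed)
ground_height = torch.rand(1, 1, 400, 400)        # geometric map prior
road_mask = (torch.rand(1, 1, 400, 400) > 0.5).float()  # semantic map prior

x = torch.cat([lidar_bev, ground_height, road_mask], dim=1)  # (1, 34, 400, 400)
detector_stem = nn.Conv2d(34, 64, kernel_size=3, padding=1)
print(detector_stem(x).shape)                     # torch.Size([1, 64, 400, 400])
```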

IntentNet: Learning to Predict Intention from Raw Sensor Data

S. Casas, W. Luo, R. Urtasun
In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. […] [PDF]
Conference on Robot Learning (CoRL), 2018
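
The abstract defines intent as discrete high-level behaviors plus continuous future trajectories. A minimal sketch of such a two-headed output follows; the behavior count, horizon, and channel sizes are invented placeholders, not the paper's values.

```python
# Toy output heads over BEV features: discrete intents + (x, y) waypoints.
import torch
import torch.nn as nn

class IntentHeads(nn.Module):
    def __init__(self, c_in=64, n_behaviors=8, horizon=30):
        super().__init__()
        self.behavior = nn.Conv2d(c_in, n_behaviors, 1)    # discrete intent
        self.trajectory = nn.Conv2d(c_in, horizon * 2, 1)  # continuous motion

    def forward(self, bev_feat):                  # (B, C, H, W)
        return self.behavior(bev_feat), self.trajectory(bev_feat)

behav, traj = IntentHeads()(torch.rand(1, 64, 100, 100))
print(behav.shape, traj.shape)   # [1, 8, 100, 100], [1, 60, 100, 100]
```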

Efficient Convolutions for Real-Time Semantic Segmentation of 3D Point Clouds

C. Zhang, W. Luo, R. Urtasun
[…] [PDF]
International Conference on 3D Vision (3DV), 2018

Deep Continuous Fusion for Multi-Sensor 3D Object Detection

M. Liang, B. Yang, S. Wang, R. Urtasun
In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. […] [PDF]
European Conference on Computer Vision (ECCV), 2018
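
A rough sketch of the continuous-fusion idea: for each BEV cell, gather image features of its nearest LiDAR points together with their geometric offsets, and summarize them. The brute-force neighbor search and random-matrix "MLP" below are stand-ins for the paper's learned fusion blocks.

```python
# Hedged continuous-fusion sketch over toy data.
import numpy as np

def continuous_fuse(bev_xy, point_xy, point_feat, k=3):
    """bev_xy: (M, 2) cell centers; point_xy: (N, 2) projected points;
    point_feat: (N, C) image features sampled at those points."""
    d = np.linalg.norm(bev_xy[:, None] - point_xy[None], axis=-1)  # (M, N)
    knn = np.argsort(d, axis=1)[:, :k]                             # (M, k)
    gathered = point_feat[knn]                                     # (M, k, C)
    offsets = point_xy[knn] - bev_xy[:, None]                      # geometry
    stacked = np.concatenate([gathered, offsets], axis=-1)
    W = np.random.randn(stacked.shape[-1], 16) * 0.1               # toy "MLP"
    return np.maximum(stacked @ W, 0).mean(axis=1)                 # (M, 16)

cells = np.random.rand(100, 2) * 50
pts = np.random.rand(500, 2) * 50
feats = np.random.rand(500, 8)
print(continuous_fuse(cells, pts, feats).shape)                    # (100, 16)
```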

Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks

H. Cui, V. Radosavljevic, F. Chou, T.-H. Lin, T. Nguyen, T. Huang, J. Schneider, N. Djuric
Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact. Self-driving vehicles (SDVs) are expected to prevent road accidents and save millions of lives while improving the livelihood and life quality of many more. […] [PDF]
International Conference on Robotics and Automation (ICRA), 2019

End-to-End Deep Structured Models for Drawing Crosswalks

J. Liang, R. Urtasun
In this paper we address the problem of detecting crosswalks from LiDAR and camera imagery. Towards this goal, given multiple LiDAR sweeps and the corresponding imagery, we project both inputs onto the ground surface to produce a top-down view of the scene. […] [PDF]
European Conference on Computer Vision (ECCV), 2018

MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving

M. Teichmann, M. Weber, M. Zöllner, R. Cipolla, R. Urtasun
While most approaches to semantic reasoning have focused on improving performance, in this paper we argue that computational times are very important in order to enable real time applications such as autonomous driving. […] [PDF]
IEEE Intelligent Vehicles Symposium (IV), 2018

Deep Parametric Continuous Convolutional Neural Networks

S. Wang, S. Suo, W. Ma, A. Pokrovsky, R. Urtasun
Standard convolutional neural networks assume a grid structured input is available and exploit discrete convolutions as their fundamental building blocks. This limits their applicability to many real-world applications. In this paper we propose Parametric Continuous Convolution, a new learnable operator that operates over non-grid structured data. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2018
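
Under the usual formulation of a parametric continuous convolution, kernel weights are generated by a function of the relative offset between points and used to weight neighbor features. The sketch below uses a toy two-layer random "MLP" as that function; sizes are assumptions.

```python
# Hedged parametric continuous convolution over a toy point cloud.
import numpy as np

rng = np.random.default_rng(0)

def continuous_conv(xyz, feats, k=8, c_out=16):
    """xyz: (N, 3) support points; feats: (N, C) input features."""
    N, C = feats.shape
    d = np.linalg.norm(xyz[:, None] - xyz[None], axis=-1)   # (N, N)
    knn = np.argsort(d, axis=1)[:, :k]                      # neighbor indices
    offsets = xyz[knn] - xyz[:, None]                       # (N, k, 3)

    W1 = rng.normal(size=(3, 32)) * 0.1                     # toy weight MLP
    W2 = rng.normal(size=(32, C * c_out)) * 0.1
    kernels = np.maximum(offsets @ W1, 0) @ W2              # (N, k, C*c_out)
    kernels = kernels.reshape(N, k, C, c_out)

    # weighted sum over neighbors and input channels
    return np.einsum('nkc,nkco->no', feats[knn], kernels)   # (N, c_out)

pts = rng.normal(size=(200, 3))
f = rng.normal(size=(200, 4))
print(continuous_conv(pts, f).shape)                        # (200, 16)
```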

SurfConv: Bridging 3D and 2D Convolution for RGBD Images

H. Chu, W. Ma, K. Kundu, R. Urtasun, S. Fidler
The last few years have seen approaches trying to combine the increasing popularity of depth sensors and the success of convolutional neural networks. Using depth as an additional channel alongside the RGB input inherits the scale-variance problem present in image-convolution-based approaches. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2018

Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net

W. Luo, B. Yang, R. Urtasun
In this paper we propose a novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor. By jointly reasoning about these tasks, our holistic approach is more robust to occlusion as well as sparse data at range. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2018
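
A hedged sketch of the input representation suggested by the abstract: voxel occupancy from several past LiDAR sweeps, stacked so one convolutional net can reason jointly about detection, tracking, and forecasting. The grid extent and resolution are assumed values.

```python
# Toy multi-sweep BEV occupancy tensor; not the paper's exact encoding.
import numpy as np

def bev_occupancy(points, extent=70.0, resolution=0.5, n_z=10, z_range=(-2, 3)):
    """points: (N, 3) -> (n_z, H, W) binary occupancy grid."""
    H = W = int(2 * extent / resolution)
    xy = ((points[:, :2] + extent) / resolution).astype(int)
    z = ((points[:, 2] - z_range[0]) / (z_range[1] - z_range[0]) * n_z).astype(int)
    ok = (xy >= 0).all(1) & (xy < H).all(1) & (z >= 0) & (z < n_z)
    grid = np.zeros((n_z, H, W), dtype=np.float32)
    grid[z[ok], xy[ok, 1], xy[ok, 0]] = 1.0
    return grid

sweeps = [np.random.randn(1000, 3) * 20 for _ in range(5)]   # 5 past frames
tensor = np.concatenate([bev_occupancy(s) for s in sweeps])  # stack over time
print(tensor.shape)                                          # (50, 280, 280)
```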

End-to-end Learning of Multi-sensor 3D Tracking by Detection

D. Frossard, R. Urtasun
In this paper we propose a novel approach to tracking by detection that can exploit both cameras as well as LIDAR data to produce very accurate 3D trajectories. Towards this goal, we formulate the problem as a linear program that can be solved exactly, and learn convolutional networks for detection as well as matching in an end-to-end manner. […] [PDF]
International Conference on Robotics and Automation (ICRA), 2018
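
The paper formulates association as a linear program solved exactly with learned costs; as a simplified stand-in, the sketch below matches detections across two frames with the Hungarian algorithm on centroid distance. The distance gate is an invented parameter.

```python
# Simplified data association via linear assignment (requires SciPy).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(prev_boxes, curr_boxes, gate=2.0):
    """boxes: (N, 3) centroids. Returns list of (prev_idx, curr_idx)."""
    cost = np.linalg.norm(prev_boxes[:, None] - curr_boxes[None], axis=-1)
    rows, cols = linear_sum_assignment(cost)          # min-cost matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

prev = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 0.0]])
curr = np.array([[0.5, 0.1, 0.0], [10.2, 5.1, 0.0], [30.0, 0.0, 0.0]])
print(match(prev, curr))                              # [(0, 0), (1, 1)]
```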

Hierarchical Recurrent Attention Networks for Structured Online Maps

N. Homayounfar, W. Ma, S. Lakshmikanth, R. Urtasun
In this paper, we tackle the problem of online road network extraction from sparse 3D point clouds. Our method is inspired by how an annotator builds a lane graph, by first identifying how many lanes there are and then drawing each one in turn. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2018

PIXOR: Real-time 3D Object Detection from Point Clouds

B. Yang, W. Luo, R. Urtasun
We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2018

Robust Dense Mapping for Large-Scale Dynamic Environments

I. Bârsan, P. Liu, M. Pollefeys, A. Geiger
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. […] [PDF]
Video: [LINK]
Project Page: [LINK]
International Conference on Robotics and Automation (ICRA), 2018

DeepRoadMapper: Extracting Road Topology From Aerial Images

G. Máttyus, W. Luo, R. Urtasun
Creating road maps is essential for applications such as autonomous driving and city planning. Most approaches in industry focus on leveraging expensive sensors mounted on top of a fleet of cars. This results in very accurate estimates when a user is kept in the loop. […] [PDF]
International Conference on Computer Vision (ICCV), 2017

SGN: Sequential Grouping Networks for Instance Segmentation

S. Liu, J. Jia, S. Fidler, R. Urtasun
In this paper, we propose Sequential Grouping Networks (SGN) to tackle the problem of object instance segmentation. SGNs employ a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. […] [PDF]
International Conference on Computer Vision (ICCV), 2017

End-To-End Instance Segmentation With Recurrent Attention

M. Ren, R. Zemel
While convolutional neural networks have gained impressive success recently in solving structured prediction problems such as semantic segmentation, it remains a challenge to differentiate individual object instances in the scene. Instance segmentation is very important in a variety of applications, such as autonomous driving, image captioning, and visual question answering. […] [PDF]
Supplementary Materials: [LINK]
Code: [LINK]
Conference on Computer Vision and Pattern Recognition (CVPR), 2017

Annotating Object Instances with a Polygon-RNN

L. Castrejón, K. Kundu, R. Urtasun, S. Fidler
We propose an approach for semi-automatic annotation of object instances. While most current methods treat object segmentation as a pixel-labeling problem, we here cast it as a polygon prediction task, mimicking how most current datasets have been annotated. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2017

TorontoCity: Seeing the World With a Million Eyes

S. Wang, M. Bai, G. Mattyus, H. Chu, W. Luo, B. Yang, J. Liang, J. Cheverie, R. Urtasun, D. Lin
In this paper we introduce the TorontoCity benchmark, which covers the full greater Toronto area (GTA) with 712.5 km² of land, 8439 km of road and around 400,000 buildings. Our benchmark provides different perspectives of the world captured from airplanes, drones and cars driving around the city. […] [PDF]
International Conference on Computer Vision (ICCV), 2017

Deep Watershed Transform for Instance Segmentation

M. Bai, R. Urtasun
Most contemporary approaches to instance segmentation use complex pipelines involving conditional random fields, recurrent neural networks, object proposals, or template matching schemes. In our paper, we present a simple yet powerful end-to-end convolutional neural network to tackle this task. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2017
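
As a hedged post-processing illustration: given the predicted watershed energy map, cutting at a single energy level and taking connected components already yields instance candidates. SciPy's component labeling stands in for the full watershed cut.

```python
# Toy instance extraction from a watershed-style energy map (requires SciPy).
import numpy as np
from scipy import ndimage

def instances_from_energy(energy, level=0.5):
    """energy: (H, W) predicted energy, high inside instances."""
    basins = energy > level                 # cut at one energy level
    labels, n = ndimage.label(basins)       # connected components = instances
    return labels, n

energy = np.zeros((64, 64))
energy[10:20, 10:20] = 1.0                  # two fake instance basins
energy[40:50, 40:55] = 1.0
labels, n = instances_from_energy(energy)
print(n)                                    # 2
```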
