Sergio Casas
Research Papers
Discrete Residual Flow for Probabilistic Pedestrian Behavior Prediction
A. Jain, S. Casas, R. Liao, Y. Xiong, S. Feng, S. Segal, R. Urtasun
Our research shows that non-parametric distributions can capture erratic pedestrian behavior extremely well. We propose Discrete Residual Flow, a convolutional neural network for human motion prediction that accurately models temporal dependencies and captures the uncertainty inherent in long-range motion forecasting. In particular, our method faithfully captures multi-modal posteriors over future human motion. [PDF]
Conference on Neural Information Processing Systems (NeurIPS), 2019
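A minimal PyTorch sketch of the idea summarized above: the pedestrian's future position at each horizon step is modeled as a non-parametric (categorical) distribution over a discretized bird's-eye-view grid, and each step's log-probability map is obtained by adding a CNN-predicted residual to the previous step's map before renormalizing. Module names, channel sizes, and the context encoding are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualFlowStep(nn.Module):
    """Predicts a residual update to the log-probability grid for one time step."""
    def __init__(self, context_channels: int, hidden: int = 32):
        super().__init__()
        # Input: scene context features concatenated with the previous log-prob map.
        self.net = nn.Sequential(
            nn.Conv2d(context_channels + 1, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, context: torch.Tensor, prev_logits: torch.Tensor) -> torch.Tensor:
        residual = self.net(torch.cat([context, prev_logits], dim=1))
        return prev_logits + residual  # residual update of the discrete distribution

class DiscreteResidualFlowSketch(nn.Module):
    """Rolls out one categorical distribution over an H x W grid per horizon step."""
    def __init__(self, context_channels: int, horizon: int):
        super().__init__()
        self.steps = nn.ModuleList(
            [ResidualFlowStep(context_channels) for _ in range(horizon)]
        )

    def forward(self, context: torch.Tensor, init_logits: torch.Tensor):
        logits, per_step_logprobs = init_logits, []
        for step in self.steps:
            logits = step(context, logits)
            # Normalize over all grid cells so each step is a valid distribution;
            # multi-modality arises naturally because the grid is non-parametric.
            b, _, h, w = logits.shape
            per_step_logprobs.append(
                F.log_softmax(logits.view(b, -1), dim=-1).view(b, 1, h, w)
            )
        return per_step_logprobs  # list of per-step log-probability maps
```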
End-to-end Interpretable Neural Motion Planner
W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, R. Urtasun
In this paper, we propose a neural motion planner for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road-users. Towards this goal, we design a holistic model that takes as input raw LIDAR data and an HD map and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost volume defining the goodness of each position that the self-driving car can take within the planning horizon. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
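A rough NumPy sketch of how the cost-volume output described above could be used at planning time: each candidate ego trajectory is scored by indexing the per-timestep cost volume at the grid cells it visits, and the lowest-cost candidate is selected. The candidate sampling, grid resolution, and ego-centered coordinate convention are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def score_trajectories(cost_volume: np.ndarray,
                       trajectories: np.ndarray,
                       resolution: float = 0.5) -> np.ndarray:
    """cost_volume: (T, H, W) cost per bird's-eye-view cell and planning timestep.
    trajectories: (N, T, 2) candidate (x, y) waypoints in meters, ego at grid center.
    Returns (N,) total cost per candidate."""
    T, H, W = cost_volume.shape
    # Convert metric waypoints to grid indices (ego vehicle assumed at the grid center).
    cols = np.clip((trajectories[..., 0] / resolution + W / 2).astype(int), 0, W - 1)
    rows = np.clip((trajectories[..., 1] / resolution + H / 2).astype(int), 0, H - 1)
    return cost_volume[np.arange(T), rows, cols].sum(axis=-1)

# Usage: pick the minimum-cost trajectory among sampled candidates.
cost_volume = np.random.rand(10, 200, 200)                    # placeholder network output
candidates = np.cumsum(np.random.randn(64, 10, 2), axis=1)    # placeholder samples
best = candidates[np.argmin(score_trajectories(cost_volume, candidates))]
```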
IntentNet: Learning to Predict Intention from Raw Sensor Data
S. Casas, W. Luo, R. Urtasun
In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high-level behaviors and continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment. [...] [PDF]
Conference on Robot Learning (CoRL), 2018
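A minimal PyTorch sketch of the multi-task output structure described above: a shared convolutional backbone over a bird's-eye-view rasterization of the LiDAR point cloud and the dynamic map, with separate single-stage heads for detection, discrete high-level intention, and continuous future trajectories. Channel counts, the number of intention classes, and the rasterization itself are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class IntentNetSketch(nn.Module):
    def __init__(self, lidar_channels: int, map_channels: int,
                 num_intentions: int = 8, horizon: int = 10):
        super().__init__()
        # Shared backbone over concatenated LiDAR and map rasters.
        self.backbone = nn.Sequential(
            nn.Conv2d(lidar_channels + map_channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # One prediction per BEV cell (anchor-free, single-stage style).
        self.detection = nn.Conv2d(128, 1, 1)                # objectness score
        self.intention = nn.Conv2d(128, num_intentions, 1)   # discrete behavior logits
        self.trajectory = nn.Conv2d(128, 2 * horizon, 1)     # (x, y) waypoints per step

    def forward(self, lidar_bev: torch.Tensor, map_bev: torch.Tensor):
        feats = self.backbone(torch.cat([lidar_bev, map_bev], dim=1))
        return {
            "detection": self.detection(feats),
            "intention": self.intention(feats),
            "trajectory": self.trajectory(feats),
        }
```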