
Shenlong Wang

Shenlong Wang is a research scientist at Uber ATG Toronto, working on the development of self-driving cars. He is also a PhD student at the University of Toronto, advised by Prof. Raquel Urtasun. He has broad interests in computer vision, machine learning, and robotics, and is particularly interested in 3D vision and deep structured models.

Research Papers

Learning to Localize through Compressed Binary Maps

X. Wei, I. A. Bârsan, S. Wang, J. Martinez, R. Urtasun
One of the main difficulties of scaling current localization systems to large environments is the on-board storage required for the maps. In this paper we propose to learn to compress the map representation such that it is optimal for the localization task. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Convolutional Recurrent Network for Road Boundary Extraction

J. Liang, N. Homayounfar, S. Wang, W.-C. Ma, R. Urtasun
Creating high definition maps that contain precise information of static elements of the scene is of utmost importance for enabling self driving cars to drive safely. In this paper, we tackle the problem of drivable road boundary extraction from LiDAR and camera imagery. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Deep Rigid Instance Scene Flow

W.-C. Ma, S. Wang, R. Hu, Y. Xiong, R. Urtasun
In this paper we tackle the problem of scene flow estimation in the context of self-driving. We leverage deep learning techniques as well as strong priors as in our application domain the motion of the scene can be composed by the motion of the robot and the 3D motion of the actors in the scene. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
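The decomposition described above, with the scene's motion composed of the robot's ego-motion plus a rigid transform per actor, can be illustrated with a toy sketch. All names here (`rigid_scene_flow`, the transforms) are hypothetical; the paper estimates these transforms with deep networks, which this numpy snippet does not attempt.

```python
import numpy as np

def rigid_scene_flow(points, R_ego, t_ego, R_act, t_act, actor_mask):
    """3D scene flow when motion decomposes into the robot's ego-motion
    (background points) plus a rigid transform for one actor (points
    selected by actor_mask). A minimal sketch of the decomposition only."""
    flow = points @ R_ego.T + t_ego - points        # background: ego-motion flow
    moved = points[actor_mask] @ R_act.T + t_act    # actor: its own rigid motion
    flow[actor_mask] = moved - points[actor_mask]
    return flow

# Toy example: pure translations for ego and actor.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
mask = np.zeros(50, dtype=bool)
mask[:10] = True                                    # first 10 points belong to the actor
I = np.eye(3)
flow = rigid_scene_flow(pts, I, np.array([1.0, 0.0, 0.0]),
                        I, np.array([0.0, 2.0, 0.0]), mask)
print(flow.shape)  # (50, 3)
```

With identity rotations, every background point inherits the ego translation and every actor point inherits the actor translation, which makes the two-component structure of the flow explicit.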

Learning to Localize Using a LiDAR Intensity Map

I. Bârsan, S. Wang, A. Pokrovsky, R. Urtasun
In this paper we propose a real-time, calibration-agnostic and effective localization system for self-driving cars. Our method learns to embed the online LiDAR sweeps and intensity map into a joint deep embedding space. [...] [PDF]
Conference on Robot Learning (CoRL), 2018
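Matching an online embedding against a map embedding can be sketched as a correlation over candidate offsets. This is a hypothetical 2D toy (function name `localize`, translation-only search) and not the paper's system, which learns the embeddings, also searches over heading, and runs in real time.

```python
import numpy as np

def localize(online_emb, map_emb, search):
    """Score each candidate (dx, dy) offset by correlating the online
    embedding with the matching crop of the map embedding; return the
    highest-scoring offset."""
    H, W = online_emb.shape
    best_score, best_offset = -np.inf, None
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            crop = map_emb[search + dx:search + dx + H,
                           search + dy:search + dy + W]
            score = float((online_emb * crop).sum())
            if score > best_score:
                best_score, best_offset = score, (dx, dy)
    return best_offset

# Toy check: plant the online embedding in the map at a known offset.
rng = np.random.default_rng(0)
s, H, W = 3, 8, 8
online = rng.normal(size=(H, W))
map_emb = np.zeros((H + 2 * s, W + 2 * s))
map_emb[s + 1:s + 1 + H, s - 2:s - 2 + W] = online
print(localize(online, map_emb, s))  # (1, -2)
```

Because the planted crop correlates perfectly with itself, exhaustive search recovers the planted offset; the learned-embedding version replaces raw intensities with deep features so that this correlation is robust to appearance change.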

Deep Multi-Sensor Lane Detection

M. Bai, G. Mattyus, N. Homayounfar, S. Wang, S. K. Lakshmikanth, R. Urtasun
Reliable and accurate lane detection has been a long-standing problem in the field of autonomous driving. In recent years, many approaches have been developed that use images (or videos) as input and reason in image space. In this paper we argue that accurate image estimates do not translate to precise 3D lane boundaries, which are the input required by modern motion planning algorithms. [...] [PDF]
International Conference on Intelligent Robots and Systems (IROS), 2018

Deep Continuous Fusion for Multi-Sensor 3D Object Detection

M. Liang, B. Yang, S. Wang, R. Urtasun
In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. [...] [PDF]
European Conference on Computer Vision (ECCV), 2018
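The continuous-convolution idea used for fusion above can be sketched in a few lines: for each target point, gather nearby source points and weight their features by an MLP of the relative offset. All names and shapes here are illustrative assumptions (a scalar-weight kernel MLP, brute-force nearest neighbors), not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_mlp(offsets, w1, w2):
    # Tiny two-layer MLP: maps each 3D relative offset to a scalar kernel weight.
    return np.maximum(offsets @ w1, 0.0) @ w2

def continuous_conv(src_pts, src_feats, tgt_pts, k, w1, w2):
    """For each target point, take its k nearest source points, weight
    their features by an MLP of the relative offset, and sum."""
    out = np.zeros((len(tgt_pts), src_feats.shape[1]))
    for i, q in enumerate(tgt_pts):
        nn = np.argsort(np.linalg.norm(src_pts - q, axis=1))[:k]
        w = kernel_mlp(src_pts[nn] - q, w1, w2)        # (k, 1) kernel weights
        out[i] = (w * src_feats[nn]).sum(axis=0)       # weighted feature sum
    return out

src_pts = rng.normal(size=(100, 3))     # unordered LiDAR-like points
src_feats = rng.normal(size=(100, 8))   # per-point features
tgt_pts = rng.normal(size=(5, 3))       # target locations to fuse features at
w1 = rng.normal(size=(3, 16)) * 0.1
w2 = rng.normal(size=(16, 1)) * 0.1

fused = continuous_conv(src_pts, src_feats, tgt_pts, k=8, w1=w1, w2=w2)
print(fused.shape)  # (5, 8)
```

Because the kernel is a learned function of continuous offsets rather than a fixed grid, the same operator can pull LiDAR features onto image pixel locations (and vice versa), which is what makes it usable for cross-sensor fusion.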

Deep Parametric Continuous Convolutional Neural Networks

S. Wang, S. Suo, W.-C. Ma, A. Pokrovsky, R. Urtasun
Standard convolutional neural networks assume a grid-structured input and exploit discrete convolutions as their fundamental building blocks, which limits their applicability to many real-world domains. In this paper we propose parametric continuous convolution, a new learnable operator that operates over non-grid-structured data such as point clouds. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2018

Find Your Way by Observing the Sun and Other Semantic Cues

W.-C. Ma, S. Wang, M. Brubaker, S. Fidler, R. Urtasun
In this paper we present a robust, efficient and affordable approach to self-localization which requires neither GPS nor knowledge about the appearance of the world. Towards this goal, we utilize freely available cartographic maps and derive a probabilistic model that exploits semantic cues in the form of sun direction, presence of an intersection, road type, speed limit, and the ego-car trajectory in order to produce very reliable localization results. [...] [PDF]
International Conference on Robotics and Automation (ICRA), 2017

TorontoCity: Seeing the World With a Million Eyes

S. Wang, M. Bai, G. Mattyus, H. Chu, W. Luo, B. Yang, J. Liang, J. Cheverie, R. Urtasun, D. Lin
In this paper we introduce the TorontoCity benchmark, which covers the full greater Toronto area and provides different perspectives of the world captured from airplanes, drones and cars driving around the city, enabling a wide range of mapping and perception tasks. [...] [PDF]
International Conference on Computer Vision (ICCV), 2017
