Ming Liang
Research Papers
Identifying Unknown Instances for Autonomous Driving
K. Wong, S. Wang, M. Ren, M. Liang, R. Urtasun
We propose a novel open-set instance segmentation algorithm for point clouds that identifies instances from both known and unknown classes. In particular, we train a deep convolutional neural network that projects points belonging to the same instance together in a category-agnostic embedding space. [PDF]
Conference on Robot Learning (CoRL), 2019
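A minimal sketch of how a category-agnostic instance embedding could be trained with a pull/push (discriminative) loss, as a rough illustration of the idea above; the function name, margins, and exact loss form are assumptions, not the paper's formulation. At test time, points from unknown classes can still be grouped by clustering in this embedding space.

import torch

def instance_embedding_loss(embeddings, instance_ids, margin_pull=0.5, margin_push=1.5):
    """Pull per-point embeddings toward their instance mean, push instance means apart.

    embeddings:   (N, D) per-point embeddings from a point-cloud network.
    instance_ids: (N,) integer instance labels (category-agnostic).
    This discriminative-loss form is an illustrative assumption, not the paper's exact objective.
    """
    unique_ids = instance_ids.unique()
    means = []
    pull = embeddings.new_tensor(0.0)
    for inst in unique_ids:
        mask = instance_ids == inst
        mu = embeddings[mask].mean(dim=0)
        means.append(mu)
        # Pull term: points should lie within margin_pull of their instance mean.
        dist = (embeddings[mask] - mu).norm(dim=1)
        pull = pull + torch.clamp(dist - margin_pull, min=0).pow(2).mean()
    means = torch.stack(means)                       # (K, D)
    K = means.shape[0]
    if K > 1:
        # Push term: instance means should stay at least margin_push apart.
        pairwise = torch.cdist(means, means)         # (K, K)
        off_diag = pairwise[~torch.eye(K, dtype=torch.bool, device=means.device)]
        push = torch.clamp(margin_push - off_diag, min=0).pow(2).mean()
    else:
        push = embeddings.new_tensor(0.0)
    return pull / len(unique_ids) + push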
Learning Joint 2D-3D Representations for Depth Completion
Y. Chen, B. Yang, M. Liang, R. Urtasun
We design a simple yet effective architecture that fuses information between 2D and 3D representations at multiple levels to learn fully fused joint representations, and show state-of-the-art results on the KITTI depth completion benchmark. [PDF]
International Conference on Computer Vision (ICCV), 2019
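As a rough illustration of multi-level 2D-3D fusion, here is a sketch of a single fusion block that gathers image features at each point's projected pixel and mixes them with the point's 3D features; the module name, shapes, and MLP design are illustrative assumptions rather than the paper's exact architecture.

import torch
import torch.nn as nn

class TwoDThreeDFusionBlock(nn.Module):
    """Illustrative single-level 2D-3D fusion block (assumed design, not the paper's exact module)."""

    def __init__(self, img_channels, point_channels, out_channels):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_channels + point_channels, out_channels),
            nn.ReLU(inplace=True),
            nn.Linear(out_channels, out_channels),
        )

    def forward(self, img_feat, point_feat, pixel_uv):
        # img_feat:   (C_img, H, W) image feature map at this level
        # point_feat: (N, C_pt) per-point 3D features
        # pixel_uv:   (N, 2) integer (u, v) projection of each point into the image
        u, v = pixel_uv[:, 0], pixel_uv[:, 1]
        gathered = img_feat[:, v, u].t()             # (N, C_img) image features at point locations
        fused = torch.cat([gathered, point_feat], dim=1)
        return self.mlp(fused)                       # (N, out_channels) joint 2D-3D features

Stacking such blocks at several resolutions is one way to realize the multi-level fusion the abstract describes.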
Multi-Task Multi-Sensor Fusion for 3D Object Detection
M. Liang, B. Yang, Y. Chen, R. Hu, R. Urtasun
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
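A minimal sketch of how multiple task heads could share fused features, to illustrate the multi-task idea above; the head layout and channel counts are assumptions, not the paper's exact design. The auxiliary tasks (ground estimation, depth completion) mainly act as extra supervision for the detection branches.

import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Illustrative multi-task heads over shared fused feature maps (assumed layout)."""

    def __init__(self, channels):
        super().__init__()
        self.det3d = nn.Conv2d(channels, 7 + 1, kernel_size=1)   # 3D box parameters + score per BEV cell
        self.det2d = nn.Conv2d(channels, 4 + 1, kernel_size=1)   # 2D box parameters + score
        self.ground = nn.Conv2d(channels, 1, kernel_size=1)      # per-cell ground height
        self.depth = nn.Conv2d(channels, 1, kernel_size=1)       # dense depth completion

    def forward(self, fused_bev, fused_img):
        # fused_bev: (B, C, H_bev, W_bev) fused LiDAR/bird's-eye-view features
        # fused_img: (B, C, H_img, W_img) fused image-view features
        return {
            "det3d": self.det3d(fused_bev),
            "ground": self.ground(fused_bev),
            "det2d": self.det2d(fused_img),
            "depth": self.depth(fused_img),
        }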
HDNET: Exploiting HD Maps for 3D Object Detection
B. Yang, M. Liang, R. Urtasun
In this paper we show that High-Definition (HD) maps provide strong priors that can boost the performance and robustness of modern 3D object detectors. Towards this goal, we design a single stage detector that extracts geometric and semantic features from the HD maps. [...] [PDF]
Conference on Robot Learning (CoRL), 2018
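As a rough illustration of using HD-map priors, a sketch that rasterizes map information into extra bird's-eye-view input channels for a single-stage detector; the specific priors and their encoding here are assumptions, not the paper's exact geometric and semantic features.

import torch

def add_map_priors(bev_occupancy, road_mask, ground_height):
    """Concatenate rasterized HD-map priors onto the BEV input tensor (illustrative encoding).

    bev_occupancy: (B, C, H, W) voxelized LiDAR occupancy features.
    road_mask:     (B, 1, H, W) semantic prior rasterized from the HD map.
    ground_height: (B, 1, H, W) geometric prior (ground elevation per cell).
    """
    return torch.cat([bev_occupancy, road_mask, ground_height], dim=1)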
Deep Continuous Fusion for Multi-Sensor 3D Object Detection
M. Liang, B. Yang, S. Wang, R. Urtasun
In this paper, we propose a novel 3D object detector that can exploit both LIDAR and cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. [...] [PDF]
European Conference on Computer Vision (ECCV), 2018
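A minimal sketch of a continuous-fusion step that aggregates image features from a BEV cell's nearest LiDAR points via an MLP over feature-plus-offset inputs; the neighbor search, shapes, and aggregation are illustrative assumptions rather than the paper's exact parameterization of continuous convolutions.

import torch
import torch.nn as nn

class ContinuousFusionLayer(nn.Module):
    """Illustrative continuous-fusion step from image features to a BEV grid (assumed design)."""

    def __init__(self, img_channels, out_channels):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_channels + 3, out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, neighbor_img_feat, neighbor_offsets):
        # neighbor_img_feat: (M, k, C_img) image features of the k nearest LiDAR points per BEV cell,
        #                    gathered by projecting those points into the camera view.
        # neighbor_offsets:  (M, k, 3) 3D offset from each neighbor to its BEV cell center.
        x = torch.cat([neighbor_img_feat, neighbor_offsets], dim=-1)  # (M, k, C_img + 3)
        x = self.mlp(x)                                               # per-neighbor contribution
        return x.sum(dim=1)                                           # (M, out_channels) fused BEV features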