Yun Chen
Research Papers
Learning Joint 2D-3D Representations for Depth Completion
Y. Chen, B. Yang, M. Liang, R. Urtasun
We design a simple yet effective architecture that fuses information between 2D and 3D representations at multiple levels to learn fully fused joint representations, and show state-of-the-art results on the KITTI depth completion benchmark. [PDF]
International Conference on Computer Vision (ICCV), 2019
Multi-Task Multi-Sensor Fusion for 3D Object Detection
M. Liang, B. Yang, Y. Chen, R. Hu, R. Urtasun
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019