Renjie Liao
Renjie Liao is a PhD student in the Machine Learning Group of the Department of Computer Science at the University of Toronto, supervised by Prof. Raquel Urtasun and Prof. Richard Zemel. He is also a Research Scientist at Uber Advanced Technologies Group Toronto and is affiliated with the Vector Institute. He received an M.Phil. degree from the Department of Computer Science and Engineering at the Chinese University of Hong Kong under the supervision of Prof. Jiaya Jia, and a B.Eng. degree from the School of Automation Science and Electrical Engineering at Beihang University (formerly Beijing University of Aeronautics and Astronautics).

Research Papers

DARNet: Deep Active Ray Network for Building Segmentation

D. Cheng, R. Liao, S. Fidler, R. Urtasun
In this paper, we propose a Deep Active Ray Network (DARNet) for automatic building segmentation. Taking an image as input, it first exploits a deep convolutional neural network (CNN) as the backbone to predict energy maps, which are further utilized to construct an energy function. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
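
The active-ray formulation above can be illustrated with a toy example: a contour is parameterized by one radius per fixed-angle ray from a center point, and the radii are evolved by gradient descent on a sampled energy plus a smoothness term. The sketch below uses a hand-made energy map, a finite-difference gradient, and arbitrary step sizes; it is a rough illustration of the active-ray idea, not the paper's architecture or inference scheme.

```python
import numpy as np

# Toy energy map: low energy on a circle of radius 20 around the image center.
H = W = 100
yy, xx = np.mgrid[0:H, 0:W]
dist = np.sqrt((yy - 50.0) ** 2 + (xx - 50.0) ** 2)
energy_map = (dist - 20.0) ** 2 / 100.0              # stand-in for a CNN-predicted energy map

def contour_energy(radii, center, thetas, lam=0.5):
    """Data term sampled from the energy map plus a smoothness term on neighboring rays."""
    ys = np.clip(center[0] + radii * np.sin(thetas), 0, H - 1).astype(int)
    xs = np.clip(center[1] + radii * np.cos(thetas), 0, W - 1).astype(int)
    data = energy_map[ys, xs].sum()
    smooth = lam * np.sum((np.roll(radii, -1) - radii) ** 2)
    return data + smooth

# Active rays: one radius per fixed angle, evolved by finite-difference gradient descent.
n_rays, center = 36, (50.0, 50.0)
thetas = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
radii = np.full(n_rays, 5.0)                         # small initial contour
for _ in range(200):
    grad = np.zeros(n_rays)
    for i in range(n_rays):                          # numerical gradient w.r.t. each radius
        r_plus, r_minus = radii.copy(), radii.copy()
        r_plus[i] += 1.0
        r_minus[i] -= 1.0
        grad[i] = (contour_energy(r_plus, center, thetas)
                   - contour_energy(r_minus, center, thetas)) / 2.0
    radii -= 2.0 * grad

print("mean recovered radius:", radii.mean())        # should end up near 20
```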

UPSNet: A Unified Panoptic Segmentation Network

Y. Xiong, R. Liao, H. Zhao, R. Hu, M. Bai, E. Yumer, R. Urtasun
In this paper we propose a unified panoptic segmentation network (UPSNet) for tackling the panoptic segmentation task. On top of a single backbone network, we design a semantic segmentation head and a Mask R-CNN style instance segmentation head, followed by a panoptic head that fuses their outputs into the final per-pixel panoptic prediction. [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
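
As a loose illustration of fusing semantic (stuff) logits with per-instance masks into a single per-pixel panoptic prediction, the toy function below concatenates stuff logits with one channel per instance and takes a per-pixel arg-max. The tensor shapes and the additive fusion are assumptions for illustration, not UPSNet's exact panoptic head.

```python
import numpy as np

def fuse_panoptic(stuff_logits, inst_masks, inst_sem_logits):
    """Toy panoptic fusion.

    stuff_logits:    (N_stuff, H, W) per-pixel logits for stuff classes
    inst_masks:      (N_inst, H, W) per-instance mask logits
    inst_sem_logits: (N_inst, H, W) semantic logits of each instance's predicted class
    Returns an (H, W) map: values < N_stuff are stuff classes, the rest index instances.
    """
    inst_logits = inst_masks + inst_sem_logits        # boost an instance channel where both agree
    panoptic_logits = np.concatenate([stuff_logits, inst_logits], axis=0)
    return panoptic_logits.argmax(axis=0)

# Tiny example: 2 stuff classes, 1 instance on a 4x4 grid.
H = W = 4
stuff = np.zeros((2, H, W))
stuff[0] += 1.0                                       # background slightly preferred everywhere
inst_mask = np.full((1, H, W), -5.0)
inst_mask[0, 1:3, 1:3] = 5.0                          # instance occupies the central 2x2 block
inst_sem = np.zeros((1, H, W))
print(fuse_panoptic(stuff, inst_mask, inst_sem))
```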

Neural Guided Constraint Logic Programming for Program Synthesis

L. Zhang, G. Rosenblatt, E. Fetaya, R. Liao, W. Byrd, M. Might, R. Urtasun, R. Zemel
Synthesizing programs using example input/outputs is a classic problem in artificial intelligence. We present a method for solving Programming By Example (PBE) problems by using a neural model to guide the search of a constraint logic programming system called miniKanren. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
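
The general pattern of neural-guided search, where a learned model scores which partial candidate to expand next, can be sketched generically as below. The `expand`, `is_solution`, and `score` functions are hypothetical stand-ins (a toy token grammar and a hand-written heuristic in place of miniKanren's constraint system and the trained scorer).

```python
import heapq

def expand(partial):
    """Stand-in for the constraint system: extend a partial program by one token."""
    return [partial + [tok] for tok in ("car", "cdr", "cons")]

def is_solution(partial):
    """Stand-in goal test: pretend the target program is the token sequence ['car', 'cdr']."""
    return partial == ["car", "cdr"]

def score(partial):
    """Stand-in for the neural model: lower is better."""
    prefs = {"car": 0.1, "cdr": 0.5, "cons": 0.9}
    return sum(prefs[tok] for tok in partial)

def guided_search(max_expansions=100):
    frontier = [(0.0, [])]                       # priority queue keyed by the model's score
    for _ in range(max_expansions):
        if not frontier:
            break
        _, partial = heapq.heappop(frontier)
        if is_solution(partial):
            return partial
        for child in expand(partial):
            heapq.heappush(frontier, (score(child), child))
    return None

print(guided_search())                           # -> ['car', 'cdr']
```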

LanczosNet: Multi-Scale Deep Graph Convolutional Networks

R. Liao, Z. Zhao, R. Urtasun, R. Zemel
Relational data can generally be represented as graphs. For processing such graph structured data, we propose LanczosNet, which uses the Lanczos algorithm to construct low rank approximations of the graph Laplacian for graph convolution. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
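
The core numerical idea in the abstract, running the Lanczos algorithm on the graph Laplacian to obtain a low-rank approximation and then filtering node features at several diffusion scales, can be sketched in NumPy as below. The ring graph, random features, and plain power filters are placeholder choices; this is not the LanczosNet architecture or its learned spectral kernels.

```python
import numpy as np

def lanczos(M, k, rng):
    """k-step Lanczos tridiagonalization of a symmetric matrix M: M ~ Q T Q^T."""
    n = M.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q, alphas, betas = [q], [], []
    for j in range(k):
        w = M @ Q[j] - (betas[j - 1] * Q[j - 1] if j > 0 else 0.0)
        a = Q[j] @ w
        w = w - a * Q[j]
        for qi in Q:                       # full re-orthogonalization for numerical stability
            w = w - (qi @ w) * qi
        alphas.append(a)
        b = np.linalg.norm(w)
        if j == k - 1 or b < 1e-10:
            break
        betas.append(b)
        Q.append(w / b)
    Q = np.stack(Q, axis=1)
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return Q, T

# Graph Laplacian of a ring graph with n nodes.
n, k = 30, 10
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(1)) - A

rng = np.random.default_rng(0)
Q, T = lanczos(L, k, rng)
evals, U = np.linalg.eigh(T)               # Ritz values/vectors of the low-rank approximation
V = Q @ U                                  # L ~ V diag(evals) V^T

# Multi-scale filtering of node features X: stack V f_t(evals) V^T X for several scales t.
X = rng.standard_normal((n, 4))
scales = [1, 3, 5]
multi = np.concatenate([V @ np.diag(evals ** t) @ (V.T @ X) for t in scales], axis=1)
print(multi.shape)                         # (30, 12): 4 features per scale, 3 scales
```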

Incremental Few-Shot Learning with Attention Attractor Networks

M. Ren, R. Liao, E. Fetaya, R. Zemel
This paper addresses the problem of incremental few-shot learning, where a regular classification network has already been trained to recognize a set of base classes, and several extra novel classes are being considered, each with only a few labeled examples. [...] [PDF]
Meta-Learning Workshop @ NeurIPS, 2018

Reviving and Improving Recurrent Back Propagation

R. Liao, Y. Xiong, E. Fetaya, L. Zhang, K. Yoon, X. Pitkow, R. Urtasun, R. Zemel
In this paper, we revisit the recurrent back-propagation (RBP) algorithm, discuss the conditions under which it applies as well as how to satisfy them in deep neural networks. We show that RBP can be unstable and propose two variants based on conjugate gradient on the normal equations (CG-RBP) and Neumann series (Neumann-RBP). [...] [PDF]
International Conference on Machine Learning (ICML), 2018
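
The Neumann-series variant mentioned above rests on implicit differentiation at a fixed point: if h* = f(h*, w), then dL/dw = (dL/dh*) (I - df/dh*)^{-1} df/dw, and the matrix inverse can be replaced by a truncated Neumann series. The sketch below applies this to a small tanh fixed-point system and checks one gradient entry against finite differences; the model, loss, and truncation length are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = 0.3 * rng.standard_normal((n, n))       # small spectral radius, so the fixed point is stable
b = rng.standard_normal(n)
target = rng.standard_normal(n)

def f(h):                                    # one step of the recurrent map
    return np.tanh(W @ h + b)

def loss(h):
    return 0.5 * np.sum((h - target) ** 2)

# Find the fixed point h* = f(h*) by simple iteration.
h = np.zeros(n)
for _ in range(200):
    h = f(h)

# Neumann-RBP style adjoint: a = sum_k (J^T)^k g, with J = df/dh at h*, g = dL/dh*.
J = (1.0 - h ** 2)[:, None] * W              # Jacobian of tanh(W h + b) at the fixed point
g = h - target
a, v = g.copy(), g.copy()
for _ in range(50):                          # truncated Neumann series
    v = J.T @ v
    a = a + v

# Chain rule through f's explicit W-dependence: dL/dW[i, j] = a_i * (1 - h_i^2) * h_j
grad_W = ((1.0 - h ** 2) * a)[:, None] * h[None, :]

# Finite-difference check on one entry.
eps = 1e-5
W_pert = W.copy()
W_pert[1, 2] += eps
hp = np.zeros(n)
for _ in range(200):
    hp = np.tanh(W_pert @ hp + b)
print(grad_W[1, 2], (loss(hp) - loss(h)) / eps)   # the two numbers should roughly agree
```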

Learning deep structured active contours end-to-end

D. Marcos, D. Tuia, B. Kellenberger, L. Zhang, M. Bai, R. Liao, R. Urtasun
The world is covered with millions of buildings, and precisely knowing each instance's position and extents is vital to a multitude of applications. Recently, automated building footprint segmentation models have shown superior detection accuracy thanks to the usage of Convolutional Neural Networks (CNN). [...] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2018

Leveraging Constraint Logic Programming for Neural Guided Program Synthesis

L. Zhang, G. Rosenblatt, E. Fetaya, R. Liao, W. Byrd, R. Urtasun, R. Zemel
We present a method for solving Programming by Example (PBE) problems that tightly integrates a neural network with a constraint logic programming system called miniKanren. Internally, miniKanren searches for a program that satisfies the recursive constraints imposed by the provided examples. [...] [PDF]
International Conference on Learning Representations (ICLR), 2018

Understanding Short-Horizon Bias in Stochastic Meta-Optimization

Y. Wu, M. Ren, R. Liao, R. Grosse
Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. [...] [PDF]
International Conference on Learning Representations (ICLR), 2018
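
The setup described above, choosing a learning rate by optimizing the loss reached after a truncated unroll of training, can be sketched on a toy noisy quadratic as below. A finite-difference meta-gradient stands in for backpropagating through the unroll, and all constants are arbitrary; the sketch shows the mechanism, not the paper's analysis of the resulting bias.

```python
import numpy as np

curvature = np.array([1.0, 10.0])             # toy ill-conditioned quadratic: L(w) = 0.5 * sum_i c_i w_i^2

def unrolled_loss(log_lr, horizon, noise=1.0, n_runs=256):
    """Average loss after `horizon` noisy SGD steps, as a function of the (log) learning rate."""
    lr = np.exp(log_lr)
    g_rng = np.random.default_rng(123)        # fixed seed: common random numbers across evaluations
    w = np.full((n_runs, 2), 5.0)
    for _ in range(horizon):
        grad = w * curvature + noise * g_rng.standard_normal((n_runs, 2))
        w = w - lr * grad
    return float(np.mean(0.5 * np.sum(curvature * w ** 2, axis=1)))

def tune_lr(horizon, meta_steps=150, meta_lr=0.02, eps=1e-2):
    """Meta-optimize the learning rate against the loss at the end of a `horizon`-step unroll."""
    log_lr = np.log(0.01)
    for _ in range(meta_steps):
        meta_grad = (unrolled_loss(log_lr + eps, horizon)
                     - unrolled_loss(log_lr - eps, horizon)) / (2 * eps)
        log_lr -= meta_lr * meta_grad
    return float(np.exp(log_lr))

# The learning rate that looks best depends on how far training is unrolled.
print("lr tuned on a  5-step unroll:", tune_lr(horizon=5))
print("lr tuned on a 50-step unroll:", tune_lr(horizon=50))
```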

Graph Partition Neural Networks for Semi-Supervised Classification

R. Liao, M. Brockschmidt, D. Tarlow, A. Gaunt, R. Urtasun, R. Zemel
We present graph partition neural networks (GPNN), an extension of graph neural networks (GNNs) able to handle extremely large graphs. GPNNs alternate between locally propagating information between nodes in small subgraphs and globally propagating information between the subgraphs. [...] [PDF]
Workshop @ International Conference on Learning Representations (ICLR), 2018
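
The alternating schedule described above can be sketched generically: split the nodes into partitions, run a few message-passing steps over intra-partition edges only, then one step over the cut edges, and repeat. The sketch below uses a random graph, an arbitrary equal-size partition, and a plain mean-aggregation update in place of GPNN's learned propagation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 12, 4
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # random undirected graph
part = np.repeat(np.arange(3), n // 3)               # 3 equal partitions (placeholder partitioning)

same = part[:, None] == part[None, :]
A_local = A * same                                   # intra-partition edges
A_cut = A * (~same)                                  # edges crossing partitions

def propagate(H, adj):
    """One mean-aggregation message-passing step with a self loop."""
    deg = adj.sum(1, keepdims=True) + 1.0
    return np.tanh((adj @ H + H) / deg)

H = rng.standard_normal((n, d))
for _ in range(3):                                   # outer rounds
    for _ in range(2):                               # local propagation within subgraphs
        H = propagate(H, A_local)
    H = propagate(H, A_cut)                          # global propagation across subgraphs
print(H.shape)
```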

Inference in Probabilistic Graphical Models by Graph Neural Networks

K. Yoon, R. Liao, Y. Xiong, L. Zhang, E. Fetaya, R. Urtasun, R. Zemel, X. Pitkow
A fundamental computation for statistical inference and accurate decision-making is to compute the marginal probabilities or most probable states of task-relevant variables. Probabilistic graphical models can efficiently represent the structure of such complex data, but performing these inferences is generally difficult. [...] [PDF]
Workshop @ International Conference on Learning Representations (ICLR), 2018
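
For concreteness, the inference target mentioned above, per-variable marginals of a probabilistic graphical model, can be computed exactly by brute-force enumeration on a tiny pairwise binary MRF, as below. This only illustrates the quantity being estimated; it is not the graph-neural-network inference procedure studied in the paper.

```python
import itertools
import numpy as np

# Pairwise binary MRF on a 4-node chain: p(x) is proportional to
# exp(sum_i b_i x_i + sum_(i,j) J_ij x_i x_j) with x_i in {0, 1}.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
rng = np.random.default_rng(0)
b = rng.standard_normal(n)
J = {e: rng.standard_normal() for e in edges}

def unnorm_logp(x):
    return float(np.dot(b, x) + sum(J[(i, j)] * x[i] * x[j] for i, j in edges))

# Exact marginals p(x_i = 1) by enumerating all 2^n states.
Z = 0.0
marg = np.zeros(n)
for x in itertools.product([0, 1], repeat=n):
    x = np.array(x)
    w = np.exp(unnorm_logp(x))
    Z += w
    marg += w * x
marg /= Z
print("exact marginals:", marg)
```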

NerveNet: Learning Structured Policy with Graph Neural Networks

T. Wang, R. Liao, J. Ba, S. Fidler
We address the problem of learning structured policies for continuous control. In traditional reinforcement learning, policies of agents are learned by multi-layer perceptrons (MLPs) which take the concatenation of all observations from the environment as input for predicting actions. [...] [PDF]
International Conference on Learning Representations (ICLR), 2018
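
The contrast drawn above, a monolithic MLP on concatenated observations versus a policy structured by the agent's body graph, can be sketched as a per-joint graph-network forward pass: each joint is a node with its own observation, node states are updated by message passing over the body graph, and an action is read out per node. The weights below are random and the update rule is a simple tanh aggregation, not NerveNet's GRU-based propagation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints, obs_dim, hid, act_dim = 6, 3, 8, 1

# Body graph of a simple chain of joints (a torso with limbs would form a tree).
A = np.zeros((n_joints, n_joints))
for i in range(n_joints - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Shared (weight-tied) parameters used at every node, unlike a monolithic MLP policy.
W_in = rng.standard_normal((obs_dim, hid)) * 0.1
W_msg = rng.standard_normal((hid, hid)) * 0.1
W_upd = rng.standard_normal((2 * hid, hid)) * 0.1
W_out = rng.standard_normal((hid, act_dim)) * 0.1

def gnn_policy(obs, steps=3):
    """obs: (n_joints, obs_dim) per-joint observations -> (n_joints, act_dim) actions."""
    h = np.tanh(obs @ W_in)
    for _ in range(steps):
        msgs = A @ (h @ W_msg)                     # aggregate neighbor messages along the body graph
        h = np.tanh(np.concatenate([h, msgs], axis=1) @ W_upd)
    return np.tanh(h @ W_out)                      # per-joint action, e.g. a torque

obs = rng.standard_normal((n_joints, obs_dim))
print(gnn_policy(obs).shape)                       # (6, 1): one action per joint
```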

3D Graph Neural Networks for RGBD Semantic Segmentation

X. Qi, R. Liao, J. Jia, S. Fidler, R. Urtasun
RGBD semantic segmentation requires joint reasoning about 2D appearance and 3D geometric information. In this paper we propose a 3D graph neural network (3DGNN) that builds a k-nearest neighbor graph on top of a 3D point cloud. [...] [PDF]
International Conference on Computer Vision (ICCV), 2017
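
The first step described above, building a k-nearest-neighbor graph over a 3D point cloud and propagating features along it, can be sketched with plain NumPy as below. The mean-aggregation update is a stand-in for the paper's learned propagation, and the random points and features are placeholders for back-projected depth and CNN features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, k, d = 200, 6, 16
points = rng.standard_normal((n_points, 3))          # 3D coordinates (back-projected from depth in the paper)
feats = rng.standard_normal((n_points, d))           # per-point features (e.g., from a 2D CNN)

# k-nearest-neighbor graph from pairwise Euclidean distances.
d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
np.fill_diagonal(d2, np.inf)
nbrs = np.argsort(d2, axis=1)[:, :k]                  # (n_points, k) neighbor indices

# One round of message passing: aggregate neighbor features and update each node.
W_self = rng.standard_normal((d, d)) * 0.1
W_nbr = rng.standard_normal((d, d)) * 0.1
messages = feats[nbrs].mean(axis=1)                   # (n_points, d) mean over the k neighbors
feats = np.tanh(feats @ W_self + messages @ W_nbr)
print(feats.shape)
```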

Situation Recognition With Graph Neural Networks

R. Li, M. Tapaswi, R. Liao, J. Jia, R. Urtasun, S. Fidler
We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action), and fill its semantic roles such as who is performing the action, what is the source and target of the action, etc. [...] [PDF]
International Conference on Computer Vision (ICCV), 2017

Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes

M. Ren, R. Liao, R. Urtasun, F. H. Sinz, R. Zemel
Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. [...] [PDF]
International Conference on Learning Representations (ICLR), 2017
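
The unified view mentioned in the abstract treats batch normalization and layer normalization as one divisive operation applied over different summation sets. A minimal sketch of that view, omitting the learned scale and shift parameters and the paper's additional smoothing terms:

```python
import numpy as np

def divisive_normalize(a, axes, eps=1e-5):
    """Subtract the mean and divide by the std computed over the given axes."""
    mu = a.mean(axis=axes, keepdims=True)
    var = a.var(axis=axes, keepdims=True)
    return (a - mu) / np.sqrt(var + eps)

x = np.random.default_rng(0).standard_normal((32, 64))   # (batch, features) activations

batch_norm_like = divisive_normalize(x, axes=0)   # statistics over the batch, per feature
layer_norm_like = divisive_normalize(x, axes=1)   # statistics over the features, per example
print(batch_norm_like.std(axis=0)[:3], layer_norm_like.std(axis=1)[:3])
```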

Detail-Revealing Deep Video Super-Resolution

X. Tao, H. Gao, R. Liao, J. Wang, J. Jia
Previous CNN-based video super-resolution approaches need to align multiple frames to the reference. In this paper, we show that proper frame alignment and motion compensation is crucial for achieving high quality results. [...] [PDF]
International Conference on Computer Vision (ICCV), 2017
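
Frame alignment of the kind discussed above amounts to warping a neighboring frame toward the reference according to an estimated motion field. Below is a minimal bilinear backward-warping sketch in NumPy; the constant sub-pixel flow is a synthetic stand-in for the motion the paper estimates and compensates inside the network.

```python
import numpy as np

def warp_bilinear(frame, flow):
    """Backward-warp `frame` (H, W) by `flow` (H, W, 2), where flow[y, x] gives the
    (dy, dx) offset of the source location to sample, with bilinear interpolation."""
    H, W = frame.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    sy = np.clip(yy + flow[..., 0], 0, H - 1)
    sx = np.clip(xx + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, H - 1), np.clip(x0 + 1, 0, W - 1)
    wy, wx = sy - y0, sx - x0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Warp a frame by a constant sub-pixel flow (stand-in for an estimated motion field).
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
flow = np.zeros((32, 32, 2))
flow[..., 0], flow[..., 1] = 1.5, -0.5
aligned = warp_bilinear(frame, flow)
print(aligned.shape)
```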
