
Rosanne Liu

Rosanne is a senior research scientist and a founding member of Uber AI. She obtained her PhD in Computer Science at Northwestern University, where she used neural networks to help discover novel materials. She currently works on the many fronts where machine learning and neural networks remain mysterious. She attempts to write in her spare time.

Engineering Blog Articles

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

Uber AI builds upon the Lottery Ticket Hypothesis by proposing explanations for why lottery ticket networks work and deriving a surprising by-product: the Supermask.

First Uber Science Symposium: Discussing the Next Generation of RL, NLP, ConvAI, and DL

The Uber Science Symposium featured talks from members of the broader scientific community about the latest innovations in RL, NLP, and other fields.

Creating a Zoo of Atari-Playing Agents to Catalyze the Understanding of Deep Reinforcement Learning

Uber AI Labs releases Atari Model Zoo, an open source repository of both trained Atari Learning Environment agents and tools to better understand them.

Faster Neural Networks Straight from JPEG

Uber AI Labs introduces a method for making neural networks that process images faster and more accurately by leveraging JPEG representations.

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

As powerful and widespread as convolutional neural networks are in deep learning, AI Labs’ latest research reveals both an underappreciated failing and a simple fix.

Measuring the Intrinsic Dimension of Objective Landscapes

Curious about what it is like to traverse the high-dimensional loss landscapes of modern neural networks? Check out Uber AI Labs’ latest research on measuring intrinsic dimension to find out.

Research Papers

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

H. Zhou, J. Lan, R. Liu, J. Yosinski
The recent "Lottery Ticket Hypothesis" paper by Frankle & Carbin showed that a simple approach to creating sparse networks (keeping the large weights) results in models that are trainable from scratch, but only when starting from the same initial weights. The performance of these networks often exceeds the performance of the non-sparse base model, but for reasons that were not well understood. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2019
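
To make the Supermask idea concrete, here is a toy PyTorch sketch (illustrative only, not the authors' code): a binary mask is applied to a randomly initialized, untrained layer, using a simple keep-the-largest-magnitude criterion. The paper itself studies several mask criteria and also learns masks directly; the function name and keep fraction below are ours.

```python
import torch
import torch.nn as nn

def supermask(weight: torch.Tensor, keep_frac: float = 0.5) -> torch.Tensor:
    """Binary mask keeping the top `keep_frac` of weights by magnitude."""
    k = int(weight.numel() * keep_frac)
    # threshold = k-th largest magnitude = (numel - k + 1)-th smallest
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

layer = nn.Linear(784, 10)        # randomly initialized, never trained
with torch.no_grad():
    mask = supermask(layer.weight)
    layer.weight.mul_(mask)       # zero out the masked weights in place

x = torch.randn(32, 784)          # a batch of MNIST-sized inputs
logits = layer(x)                 # forward pass through the masked layer
```

The surprising finding is that a well-chosen mask of this kind can push an otherwise untrained network far above chance accuracy.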

An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents

F. Such, V. Madhavan, R. Liu, R. Wang, P. Castro, Y. Li, L. Schubert, M. Bellemare, J. Clune, J. Lehman
Much human and computational effort has aimed to improve how deep reinforcement learning algorithms perform on benchmarks such as the Atari Learning Environment. Comparatively less effort has focused on understanding what has been learned by such methods, and investigating and comparing the representations learned by different families of reinforcement learning (RL) algorithms. [...] [PDF]
2018

Faster Neural Networks Straight from JPEG

L. Gueguen, A. Sergeev, B. Kadlec, R. Liu, J. Yosinski
The simple, elegant approach of training convolutional neural networks (CNNs) directly from RGB pixels has enjoyed overwhelming empirical success. But can more performance be squeezed out of networks by using different input representations? In this paper we propose and explore a simple idea: train CNNs directly on the blockwise discrete cosine transform (DCT) coefficients computed and available in the middle of the JPEG codec. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
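
As a rough sketch of the input representation the paper proposes, the following computes 8x8 blockwise DCT coefficients from a grayscale image with NumPy/SciPy. This is an assumption-laden stand-in: the paper taps the coefficients already produced inside the JPEG codec rather than recomputing them, and the function name and shapes here are ours.

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Return (H/8, W/8, 64) DCT coefficients for an (H, W) image."""
    h, w = img.shape
    coeffs = np.zeros((h // block, w // block, block * block))
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = img[i:i + block, j:j + block].astype(np.float64)
            # 2-D DCT-II, applied separably along rows and columns
            c = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
            coeffs[i // block, j // block] = c.flatten()
    return coeffs

img = np.random.rand(224, 224)    # stand-in for a decoded luma channel
x = blockwise_dct(img)            # (28, 28, 64): 8x smaller spatially,
                                  # 64 frequency channels per block
```

The CNN then consumes this spatially 8x-smaller, 64-channel tensor instead of raw pixels, which is roughly where the speed gains come from.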

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

R. Liu, J. Lehman, P. Molino, F. Such, E. Frank, A. Sergeev, J. Yosinski
Few ideas have enjoyed as large an impact on deep learning as convolution. For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper we show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and one-hot pixel space. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
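
The proposed fix, the CoordConv layer, is simple enough to sketch directly. A minimal PyTorch version (an assumed implementation; the released code differs in details) concatenates normalized x- and y-coordinate channels to the input before a standard convolution:

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d with two extra input channels holding (x, y) coordinates."""
    def __init__(self, in_channels, out_channels, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # coordinate maps normalized to [-1, 1], broadcast over the batch
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))

layer = CoordConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(4, 3, 64, 64))   # -> (4, 16, 64, 64)
```

Giving the convolution access to its own coordinates is what lets it learn the Cartesian-to-pixel mapping that plain convolution fails at.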

Measuring the Intrinsic Dimension of Objective Landscapes

C. Li, H. Farkhoor, R. Liu, J. Yosinski
Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. [...] [PDF]
International Conference on Learning Representations (ICLR), 2018
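
The measurement procedure admits a compact sketch: freeze a random initialization theta0 and a random projection P, and train only a d-dimensional vector theta_d, so the full parameter vector lives on the affine subspace theta0 + P theta_d; the intrinsic dimension is the smallest d at which training still reaches good performance. A hedged PyTorch illustration with a stand-in objective (the names, scaling, and toy loss are ours, not the paper's):

```python
import torch

D, d = 10_000, 100                   # native and subspace dimensions
theta0 = torch.randn(D)              # frozen random initialization
P = torch.randn(D, d) / d ** 0.5     # frozen random projection (the paper
                                     # normalizes its projection columns)
theta_d = torch.zeros(d, requires_grad=True)  # the only trainable tensor

def loss_fn(theta: torch.Tensor) -> torch.Tensor:
    # stand-in objective; in the paper, theta parameterizes a real network
    return ((theta - 1.0) ** 2).mean()

opt = torch.optim.SGD([theta_d], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(theta0 + P @ theta_d)  # map subspace -> native space
    loss.backward()
    opt.step()
```

Sweeping d and recording when the loss first reaches a target level is what turns this construction into a measurement of problem difficulty.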
