
Rosanne Liu

8 BLOG ARTICLES · 8 RESEARCH PAPERS
Rosanne is a senior research scientist and a founding member of Uber AI. She obtained her PhD in Computer Science at Northwestern University, where she used neural networks to help discover novel materials. She currently works on the many fronts where machine learning and neural networks remain mysterious. She attempts to write in her spare time.

Engineering Blog Articles

Controlling Text Generation with Plug and Play Language Models


This article is based on the paper “Plug and Play Language Models: A Simple Approach to Controlled Text Generation” by Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu.

The …

Introducing LCA: Loss Change Allocation for Neural Network Training


Neural networks (NNs) have become prolific over the last decade and now power machine learning across the industry. At Uber, we use NNs for a variety of purposes, including detecting and predicting object motion for self-driving vehicles, responding more …

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask


At Uber, we apply neural networks to fundamentally improve how we understand the movement of people and things in cities. Among other use cases, we employ them to enable faster customer service response with natural language models and lower wait …

First Uber Science Symposium: Discussing the Next Generation of RL, NLP, ConvAI, and DL


At Uber, hundreds of data scientists, economists, AI researchers and engineers, product analysts, behavioral scientists, and other practitioners leverage scientific methods to solve challenges on our platform. From modeling and experimentation to data analysis, algorithm development, and fundamental research, the …

Creating a Zoo of Atari-Playing Agents to Catalyze the Understanding of Deep Reinforcement Learning


This research was conducted with valuable help from collaborators at Google Brain and OpenAI.

A selection of trained agents populating the Atari zoo.

Some of the most exciting advances in AI recently have come from the field of deep reinforcement …

Faster Neural Networks Straight from JPEG


Neural networks, an important tool for processing data in a variety of industries, grew from an academic research area to a cornerstone of industry over the last few years. Convolutional Neural Networks (CNNs) have been particularly useful for extracting information …

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution


Uber uses convolutional neural networks in many domains that could potentially involve coordinate transforms, from designing self-driving vehicles to automating street sign detection to build maps and maximizing the efficiency of spatial movements in the Uber Marketplace.

In deep learning, …

Measuring the Intrinsic Dimension of Objective Landscapes


Neural networks have revolutionized machine learning over the last decade, rising from a relatively obscure academic research area to a mainstay of industry powering a myriad of applications wherever large volumes of data are available. Uber uses neural networks for …

Research Papers

Estimating Q(s,s’) with Deep Deterministic Dynamics Gradients

A. Edwards, H. Sahni, R. Liu, J. Hung, A. Jain, R. Wang, A. Ecoffet, T. Miconi, C. Isbell, J. Yosinski
In this paper, we introduce a novel form of value function, Q(s,s′), that expresses the utility of transitioning from a state s to a neighboring state s′ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. [...] [PDF]
International Conference on Machine Learning (ICML), 2020
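
The sketch below illustrates the core idea in PyTorch: a value function defined over state pairs, plus a forward model nudged by gradient ascent through that value function toward proposing high-value next states. Module names, network sizes, and the single training step are illustrative assumptions, not the authors' implementation; the full method must additionally recover the action that realizes a proposed transition.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the Q(s, s') idea: a value over state transitions,
# plus a forward model trained to propose next states that maximize that value.
# Module names and dimensions are assumptions, not the authors' code.

STATE_DIM = 8

class QSS(nn.Module):
    """Scores the value of moving from state s to neighboring state s_next."""
    def __init__(self, state_dim=STATE_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

class ForwardModel(nn.Module):
    """Proposes a next state intended to maximize Q(s, s')."""
    def __init__(self, state_dim=STATE_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))

    def forward(self, s):
        return self.net(s)

q, model = QSS(), ForwardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One gradient step pushing proposed next states toward higher Q(s, s'),
# holding the value function fixed (it would be trained from transitions separately).
s = torch.randn(32, STATE_DIM)
proposed = model(s)
loss = -q(s, proposed).mean()
opt.zero_grad()
loss.backward()
opt.step()
```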

Plug and Play Language Models: A Simple Approach to Controlled Text Generation

S. Dathathri, A. Madotto, J. Lan, J. Hung, E. Frank, P. Molino, J. Yosinski, R. Liu
Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. [PDF]
International Conference on Learning Representations (ICLR), 2020
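
A toy sketch of the PPLM control loop, assuming PyTorch: the hidden state feeding the language model's output layer is nudged along the gradient of a small attribute classifier before the next token is decoded. The real method perturbs the transformer's key-value history and adds terms to preserve fluency; the modules, shapes, and step sizes here are illustrative only.

```python
import torch
import torch.nn.functional as F

# Toy sketch of attribute-guided decoding: nudge a hidden state along the
# gradient of a small attribute classifier, then decode from the frozen LM head.
# All names and sizes below are stand-ins, not the released PPLM code.

hidden_dim, vocab_size = 64, 100
lm_head = torch.nn.Linear(hidden_dim, vocab_size)   # stand-in for the LM's output layer
attr_clf = torch.nn.Linear(hidden_dim, 2)           # tiny attribute classifier (e.g. on-topic vs. not)

def pplm_step(hidden, target_attr=1, step_size=0.02, n_iters=3):
    """Perturb `hidden` to raise the attribute log-likelihood, then return next-token logits."""
    delta = torch.zeros_like(hidden, requires_grad=True)
    for _ in range(n_iters):
        logp = F.log_softmax(attr_clf(hidden + delta), dim=-1)[..., target_attr].sum()
        grad, = torch.autograd.grad(logp, delta)
        delta = (delta + step_size * grad).detach().requires_grad_(True)
    return lm_head(hidden + delta.detach())

hidden = torch.randn(1, hidden_dim)
logits = pplm_step(hidden)
next_token = torch.argmax(logits, dim=-1)
```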

LCA: Loss Change Allocation for Neural Network Training

J. Lan, R. Liu, H. Zhou, J. Yosinski
Neural networks enjoy widespread use, but many aspects of their training, representation, and operation are poorly understood. In particular, our view into the training process is limited, with a single scalar loss being the most common viewport into this high-dimensional, dynamic process. We propose a new window into training called Loss Change Allocation (LCA), in which credit for changes to the network loss is conservatively partitioned to the parameters. [...] [PDF]
Conference on Neural Information Processing Systems (NeurIPS), 2019
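
A minimal sketch of the LCA bookkeeping, assuming PyTorch and a toy model: each training step's loss change is allocated to individual parameters via the first-order term grad · Δθ and accumulated over training. The paper uses more careful approximations of the per-step loss change; the snippet below only illustrates the accounting.

```python
import torch

# Toy Loss Change Allocation: credit each step's loss change to individual
# parameters via grad * (theta_after - theta_before). Model and optimizer
# are arbitrary stand-ins for illustration.

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 4), torch.randn(64, 1)

lca = {name: torch.zeros_like(p) for name, p in model.named_parameters()}

for step in range(100):
    before = {name: p.detach().clone() for name, p in model.named_parameters()}
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    grads = {name: p.grad.detach().clone() for name, p in model.named_parameters()}
    opt.step()
    # First-order allocation of this step's loss change to each parameter.
    for name, p in model.named_parameters():
        lca[name] += grads[name] * (p.detach() - before[name])

# Parameters with the most negative accumulated LCA contributed most to reducing the loss.
total_allocated = sum(v.sum() for v in lca.values())
print(f"total allocated loss change over training: {total_allocated.item():.4f}")
```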

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

H. Zhou, J. Lan, R. Liu, J. Yosinski
The recent “lottery ticket hypothesis” proposes that dense, randomly initialized neural networks contain sparse subnetworks (“winning tickets”) that, when trained in isolation from their original initialization, can match the accuracy of the full network. In this paper we study why these winning tickets work, examining the roles played by the pruning criterion, the zeroed weights, and the signs of the initial weights, and we identify “supermasks”: masks that yield far better-than-chance accuracy when applied to an untrained, randomly initialized network. [...] [PDF]
Conference on Neural Information Processing Systems (NeurIPS), 2019
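
A toy supermask layer, assuming PyTorch: the weights stay frozen at their random initialization and only per-weight mask scores are trained, with a hard Bernoulli mask in the forward pass and a straight-through gradient in the backward pass. The sampling and scaling details are simplified assumptions rather than the paper's exact training procedure.

```python
import torch
import torch.nn as nn

# Toy "supermask" layer: frozen random weights, trainable per-weight mask logits.

class SupermaskLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1,
                                   requires_grad=False)                      # frozen random weights
        self.scores = nn.Parameter(torch.zeros(out_features, in_features))   # trainable mask logits

    def forward(self, x):
        probs = torch.sigmoid(self.scores)
        mask = torch.bernoulli(probs.detach())
        # Straight-through estimator: forward uses the hard mask,
        # backward flows through the probabilities.
        mask = mask + probs - probs.detach()
        return nn.functional.linear(x, self.weight * mask)

layer = SupermaskLinear(784, 10)
opt = torch.optim.SGD([layer.scores], lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(layer(x), y)
loss.backward()
opt.step()
```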

An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents

F. Such, V. Madhavan, R. Liu, R. Wang, P. Castro, Y. Li, L. Schubert, M. Bellemare, J. Clune, J. Lehman
Much human and computational effort has aimed to improve how deep reinforcement learning algorithms perform on benchmarks such as the Atari Learning Environment. Comparatively less effort has focused on understanding what has been learned by such methods, and investigating and comparing the representations learned by different families of reinforcement learning (RL) algorithms. [...] [PDF]
2018

Faster Neural Networks Straight from JPEG

L. Gueguen, A. Sergeev, B. Kadlec, R. Liu, J. Yosinski
The simple, elegant approach of training convolutional neural networks (CNNs) directly from RGB pixels has enjoyed overwhelming empirical success. But can more performance be squeezed out of networks by using different input representations? In this paper we propose and explore a simple idea: train CNNs directly on the blockwise discrete cosine transform (DCT) coefficients computed and available in the middle of the JPEG codec. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
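
To make the input representation concrete, the sketch below recomputes the blockwise 8×8 DCT coefficients that JPEG uses internally from a grayscale array using SciPy; the paper instead reads the coefficients directly out of the JPEG codec, so this is only an illustration of the data the networks consume.

```python
import numpy as np
from scipy.fft import dctn

# Illustrative blockwise 8x8 DCT, the transform JPEG applies internally.
# The paper's pipeline takes these coefficients straight from the codec.

def blockwise_dct(image, block=8):
    """Return DCT coefficients of shape (H//block, W//block, block, block)."""
    h, w = image.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of the block size
    blocks = (image[:h, :w]
              .reshape(h // block, block, w // block, block)
              .transpose(0, 2, 1, 3))
    return dctn(blocks, type=2, norm='ortho', axes=(-2, -1))

img = np.random.rand(224, 224).astype(np.float32)
coeffs = blockwise_dct(img)
print(coeffs.shape)   # (28, 28, 8, 8): a much smaller spatial grid of 64-channel "pixels"
```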

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

R. Liu, J. Lehman, P. Molino, F. Such, E. Frank, A. Sergeev, J. Yosinski
Few ideas have enjoyed as large an impact on deep learning as convolution. For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper we show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and one-hot pixel space. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
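
A minimal CoordConv-style layer in PyTorch: two normalized coordinate channels are concatenated to the input before an ordinary convolution, letting filters condition on where they are in the image. The class and argument names are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

# CoordConv sketch: append normalized x and y coordinate channels, then convolve.

class CoordConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

layer = CoordConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(2, 3, 64, 64))
print(out.shape)   # torch.Size([2, 16, 64, 64])
```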

Measuring the Intrinsic Dimension of Objective Landscapes

C. Li, H. Farkhoor, R. Liu, J. Yosinski
Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. [...] [PDF]
International Conference on Learning Representations (ICLR), 2018
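
A sketch of the subspace-training idea, assuming PyTorch and a toy regression problem in place of a real network: the full parameter vector is expressed as θ = θ₀ + Pθ_d with a fixed random projection P, and only the d-dimensional θ_d is trained. Sweeping d and finding the smallest value that still reaches good performance gives the intrinsic dimension of the objective landscape.

```python
import torch

# Train only a d-dimensional vector theta_d; map it into the full
# D-dimensional parameter space through a fixed random projection P.
# The regression problem and all sizes below are arbitrary stand-ins.

torch.manual_seed(0)
D, d = 1000, 20                                  # full and subspace dimensions
theta0 = torch.randn(D) * 0.01                   # frozen random initialization
P = torch.randn(D, d) / d ** 0.5                 # fixed random projection
theta_d = torch.zeros(d, requires_grad=True)     # the only trainable parameters
opt = torch.optim.Adam([theta_d], lr=1e-2)

x, y = torch.randn(256, D), torch.randn(256)

for step in range(200):
    theta = theta0 + P @ theta_d                 # map back to the full parameter space
    loss = ((x @ theta - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```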
