
Rui Wang

Rui Wang is a senior research scientist with Uber AI. He is passionate about advancing the state of the art in machine learning and AI, and about connecting cutting-edge advances to Uber's broader business and products. His recent work at Uber has been published at leading international machine learning and AI conferences (ICML, IJCAI, GECCO, etc.), won a Best Paper Award at GECCO 2019, and has been covered by Science, Wired, VentureBeat, and Quanta Magazine, among other outlets.

Engineering Blog Articles

Fiber: Distributed Computing for AI Made Simple

Project Homepage: GitHub. Over the past several years, the increasing processing power of computing machines has driven rapid advances in machine learning. More and...

Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions

Building upon our existing open-ended learning research, Uber AI released Enhanced POET, a project that incorporates an improved algorithm and allows for more diverse training environments.

Creating a Zoo of Atari-Playing Agents to Catalyze the Understanding of Deep Reinforcement Learning

Uber AI Labs releases Atari Model Zoo, an open source repository of both trained Atari Learning Environment agents and tools to better understand them.

POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the...

Uber AI Labs introduces the Paired Open-Ended Trailblazer (POET), an algorithm that leverages open-endedness to push the bounds of machine learning.

VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution

Uber AI Labs introduces Visual Inspector for Neuroevolution (VINE), an open source interactive data visualization tool to help neuroevolution researchers better understand this family of algorithms.

Research Papers

First-Order Preconditioning via Hypergradient Descent

T. Moskovitz, R. Wang, J. Lan, S. Kapoor, T. Miconi, J. Yosinski, A. Rawal
Standard gradient descent methods are susceptible to a range of issues that can impede training, such as high correlations and different scaling in parameter space. These difficulties can be addressed by second-order approaches that apply a preconditioning matrix to the gradient to improve convergence. Unfortunately, such algorithms typically struggle to scale to high-dimensional problems, in part because the calculation of specific preconditioners such as the inverse Hessian or Fisher information matrix is highly expensive. We introduce first-order preconditioning (FOP), a fast, scalable approach that generalizes previous work on hypergradient descent (Almeida et al., 1998; Maclaurin et al., 2015; Baydin et al., 2017) to learn a preconditioning matrix that only makes use of first-order information. [...] [PDF]
Conference on Neural Information Processing Systems (NeurIPS), 2019
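
To give a flavor of the idea, here is a minimal sketch of hypergradient-descent preconditioning on a toy quadratic loss; the loss, learning rates, and exact update rule are illustrative choices for this sketch, not the paper's algorithm.

    import numpy as np

    # Toy quadratic loss L(w) = 0.5 w^T A w - b^T w with badly scaled curvature.
    rng = np.random.default_rng(0)
    d = 10
    A = np.diag(np.linspace(1.0, 100.0, d))
    b = rng.normal(size=d)

    def grad(w):
        return A @ w - b

    w = rng.normal(size=d)
    P = np.eye(d)                  # the preconditioner, learned on the fly
    lr, hyper_lr = 1e-3, 1e-2

    for step in range(2000):
        g = grad(w)
        w = w - lr * (P @ g)       # preconditioned first-order step
        # Hypergradient: for the update w' = w - lr * P @ g, the gradient of
        # L(w') with respect to P is -lr * outer(grad(w'), g), so stepping
        # against it adapts P using only first-order information.
        P = P + hyper_lr * lr * np.outer(grad(w), g)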

Fiber: A Platform for Efficient Development and Distributed Training for Reinforcement Learning and Population-Based...

J. Zhi, R. Wang, J. Clune, K. Stanley
Recent advances in machine learning are consistently enabled by increasing amounts of computation. Reinforcement learning (RL) and population-based methods in particular pose unique challenges for efficiency and flexibility to the underlying distributed computing frameworks. These challenges include frequent interaction with simulations, the need for dynamic scaling, and the need for a user interface with low adoption cost and consistency across different backends. In this paper we address these challenges while still retaining development efficiency and flexibility for both research and practical applications by introducing Fiber, a scalable distributed computing framework for RL and population-based methods. [...] [PDF]
arXiv
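
A minimal usage sketch: Fiber is designed to mirror Python's multiprocessing API, so a Pool-based program like the one below (assuming the fiber package from the project's GitHub; evaluate is a stand-in workload) can move from a laptop to a cluster without code changes.

    from fiber import Pool   # intended as a drop-in analogue of multiprocessing.Pool

    def evaluate(seed):
        # Stand-in for an expensive rollout or simulation.
        return seed * seed

    if __name__ == "__main__":
        pool = Pool(processes=4)                  # workers may be local processes
        results = pool.map(evaluate, range(16))   # or containers on a cluster
        print(results)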

Estimating Q(s,s’) with Deep Deterministic Dynamics Gradients

A. Edwards, H. Sahni, R. Liu, J. Hung, A. Jain, R. Wang, A. Ecoffet, T. Miconi, C. Isbell, J. Yosinski
In this paper, we introduce a novel form of value function, Q(s,s′), that expresses the utility of transitioning from a state s to a neighboring state s′ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. [...] [PDF]
International Conference on Machine Learning (ICML), 2020
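
The recursion behind this value function is Q(s, s′) = r(s, s′) + γ · max over reachable s′′ of Q(s′, s′′). A toy tabular version on a 1-D chain makes the idea concrete (illustrative only; the paper learns deep function approximators plus forward and inverse dynamics models):

    import numpy as np

    n, gamma = 6, 0.9            # chain of states 0..5; entering state 5 pays 1
    neighbors = {s: [max(s - 1, 0), min(s + 1, n - 1)] for s in range(n)}
    Q = np.zeros((n, n))         # Q[s, s2] = value of stepping from s to s2

    for _ in range(200):         # value-iteration-style sweeps
        for s in range(n):
            for s2 in neighbors[s]:
                r = 1.0 if s2 == n - 1 else 0.0
                Q[s, s2] = r + gamma * max(Q[s2, s3] for s3 in neighbors[s2])

    # Acting reduces to picking the neighboring state with the highest Q value
    # (in the paper, an inverse dynamics model recovers the action to get there).
    path, s = [0], 0
    while s != n - 1:
        s = max(neighbors[s], key=lambda s2: Q[s, s2])
        path.append(s)
    print(path)                  # [0, 1, 2, 3, 4, 5]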

Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions

R. Wang, J. Lehman, A. Rawal, J. Zhi, Y. Li, J. Clune, K. Stanley
Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning. A recent step in this direction is the Paired Open-Ended Trailblazer (POET), an algorithm that generates and solves its own challenges, and allows solutions to goal-switch between challenges to avoid local optima. Here we introduce and empirically validate two new innovations to the original algorithm, as well as two external innovations designed to help elucidate its full potential. [...] [PDF]
International Conference on Machine Learning (ICML), 2020
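
At a high level, POET maintains a population of (environment, agent) pairs: it periodically mutates environments, admits children that pass a minimal criterion (neither too easy nor too hard), optimizes each agent in its paired environment, and lets agents goal-switch between environments. A toy, runnable caricature with environments as scalar difficulties and agents as scalar skills (every name and constant here is illustrative, not the paper's setup):

    import random

    random.seed(0)

    def score(agent, env):
        return agent - env                     # skill minus difficulty

    def optimize(agent, env, lr=0.01):
        return agent + lr                      # stand-in for an ES inner loop

    def minimal_criterion(env, pairs):
        scores = [score(a, env) for _, a in pairs]
        return any(-1.0 < x < 0.5 for x in scores)   # not too hard, not too easy

    pairs = [(0.0, 0.0)]                       # active (environment, agent) pairs
    for t in range(100):
        if t % 20 == 19:                       # periodically invent new challenges
            env, agent = random.choice(pairs)
            child = env + random.uniform(0.0, 0.5)   # mutate the environment
            if minimal_criterion(child, pairs):
                pairs.append((child, agent))
        pairs = [(e, optimize(a, e)) for e, a in pairs]
        # Goal-switching transfer: each environment adopts the best current agent.
        pairs = [(e, max(a for _, a in pairs)) for e, _ in pairs]

    print(len(pairs), "active pairs; hardest environment: %.2f"
          % max(e for e, _ in pairs))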

Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their...

R. Wang, J. Lehman, J. Clune, K. Stanley
While the history of machine learning so far encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. [...] [PDF]
2019

An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents

F. Such, V. Madhavan, R. Liu, R. Wang, P. Castro, Y. Li, L. Schubert, M. Bellemare, J. Clune, J. Lehman
Much human and computational effort has aimed to improve how deep reinforcement learning algorithms perform on benchmarks such as the Atari Learning Environment. Comparatively less effort has focused on understanding what has been learned by such methods, and investigating and comparing the representations learned by different families of reinforcement learning (RL) algorithms. [...] [PDF]
2018

VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution

R. Wang, J. Clune, K. Stanley
Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems. However, it remains a challenge to analyze and interpret the underlying process of neuroevolution in such high dimensions. To begin to address this challenge, this paper presents an interactive data visualization tool called VINE (Visual Inspector for NeuroEvolution) aimed at helping neuroevolution researchers and end-users better understand and explore this family of algorithms. [...] [PDF]
Visualization Workshop at The Genetic and Evolutionary Computation Conference (GECCO), 2018
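
The core view VINE offers can be approximated in a few lines: collect each generation's cloud of pseudo-offspring (the perturbed parameter vectors an ES evaluates), project them into 2D, and plot the clouds over time. A sketch with synthetic data standing in for logged checkpoints (assuming NumPy, scikit-learn, and matplotlib):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    dim, pop = 1000, 200                        # parameter dimension, population size
    center, clouds = np.zeros(dim), []
    for gen in range(5):
        offspring = center + 0.1 * rng.normal(size=(pop, dim))  # ES perturbations
        clouds.append(offspring)
        center = offspring.mean(axis=0) + 0.05  # stand-in for a parameter update

    proj = PCA(n_components=2).fit(np.vstack(clouds))  # shared 2-D projection
    for gen, cloud in enumerate(clouds):
        xy = proj.transform(cloud)
        plt.scatter(xy[:, 0], xy[:, 1], s=4, label=f"gen {gen}")
    plt.legend()
    plt.title("Pseudo-offspring clouds across generations")
    plt.show()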
