# Felipe Petroski Such

## Engineering Blog Articles

### Creating a Zoo of Atari-Playing Agents to Catalyze the Understanding of Deep Reinforcement Learning

Uber AI Labs releases Atari Model Zoo, an open source repository of both trained Atari Learning Environment agents and tools to better understand them.

### An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

As powerful and widespread as convolutional neural networks are in deep learning, AI Labs' latest research reveals both an underappreciated failing and a simple fix.

### Accelerating Deep Neuroevolution: Train Atari in Hours on a Single Personal Computer

By applying hardware acceleration to deep neuroevolution, now released as an open source project, Uber AI Labs trained a neural network to play Atari in just a few hours on a single personal computer, making this type of research accessible to far more people.

## Research Papers

### An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents

**F. Such**, **V. Madhavan**, **R. Liu**, **R. Wang**, P. Castro, **Y. Li**, L. Schubert, M. Bellemare, **J. Clune**, **J. Lehman**

Much human and computational effort has aimed to improve how deep reinforcement learning algorithms perform on benchmarks such as the Atari Learning Environment. Comparatively less effort has focused on understanding what has been learned by such methods, and investigating and comparing the representations learned by different families of reinforcement learning (RL) algorithms. [...]

**[PDF]**

*2018*

### An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

**R. Liu**, **J. Lehman**, **P. Molino**, **F. Such**, **E. Frank**, **A. Sergeev**, **J. Yosinski**

Few ideas have enjoyed as large an impact on deep learning as convolution. For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper we show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and one-hot pixel space. [...]
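The fix proposed in the paper is simple: give the convolution access to its own coordinates by appending extra input channels that hold each pixel's (x, y) position, scaled to [-1, 1]. A minimal sketch of that coordinate-channel step in pure Python (list-of-lists images for clarity; function name and shapes are illustrative, not the paper's reference code):

```python
def add_coord_channels(image):
    """Append normalized (y, x) coordinate channels to an H x W x C image.

    `image` is a nested list of shape (H, W, C); returns (H, W, C + 2),
    with coordinates scaled to [-1, 1] as described in the paper.
    """
    h, w = len(image), len(image[0])

    def scale(i, n):  # map index 0..n-1 onto [-1, 1]
        return -1.0 if n == 1 else 2.0 * i / (n - 1) - 1.0

    return [
        [pixel + [scale(r, h), scale(c, w)] for c, pixel in enumerate(row)]
        for r, row in enumerate(image)
    ]

# A 3x3 single-channel image gains y- and x-coordinate channels.
img = [[[0.0] for _ in range(3)] for _ in range(3)]
out = add_coord_channels(img)
print(out[0][0])  # [0.0, -1.0, -1.0]  top-left corner
print(out[2][2])  # [0.0, 1.0, 1.0]   bottom-right corner
```

In a real network this concatenation happens on tensors just before a convolution, so the layer can learn either translation invariance (ignore the new channels) or translation dependence (use them), which is what lets it solve the coordinate transform task.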

**[PDF]**

*Advances in Neural Information Processing Systems*

**(NeurIPS)**, 2018

### Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking...

E. Conti, **V. Madhavan**, **F. Such**, **J. Lehman**, **K. Stanley**, **J. Clune**

Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but are much faster (e.g. hours vs. days) because they parallelize better. [...]
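The parallelism the abstract mentions comes from the shape of the ES update: each population member is evaluated independently, and only scalar fitness values need to be communicated. A toy single-process sketch of one such update (pure Python; hyperparameters and the quadratic test objective are illustrative, not the paper's distributed implementation):

```python
import random

def es_step(theta, fitness, pop_size=50, sigma=0.1, alpha=0.05):
    """One evolution-strategies update: evaluate Gaussian perturbations of
    theta, then move theta along the fitness-weighted average noise.
    Each of the pop_size evaluations is independent, which is why ES
    parallelizes well across workers.
    """
    noise = [[random.gauss(0, 1) for _ in theta] for _ in range(pop_size)]
    scores = [
        fitness([t + sigma * e for t, e in zip(theta, eps)]) for eps in noise
    ]
    # Normalize scores so the update is invariant to the fitness scale.
    mean = sum(scores) / pop_size
    std = (sum((s - mean) ** 2 for s in scores) / pop_size) ** 0.5 or 1.0
    advantages = [(s - mean) / std for s in scores]
    return [
        t + alpha / (pop_size * sigma)
        * sum(a * eps[i] for a, eps in zip(advantages, noise))
        for i, t in enumerate(theta)
    ]

# Maximize -sum(x^2); repeated steps move theta toward the zero vector.
random.seed(0)
theta = [1.0, -1.0, 0.5]
for _ in range(200):
    theta = es_step(theta, lambda x: -sum(v * v for v in x))
print(sum(v * v for v in theta))
```

The novelty-seeking variant in the paper changes what is rewarded (behavioral novelty alongside or instead of fitness), not this basic update structure.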

**[PDF]**

*ViGIL @ NeurIPS 2017*

**(NeurIPS)**, 2017

### Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for...

**F. Such**, **V. Madhavan**, E. Conti, **J. Lehman**, **K. Stanley**, **J. Clune**

Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. [...]
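The gradient-free alternative the paper studies is a deliberately simple genetic algorithm: keep a population of parameter vectors, select the fittest as parents, and produce children by adding Gaussian noise, with the best individual carried over unchanged. A minimal sketch in that spirit (pure Python; function name, hyperparameters, and the quadratic test objective are illustrative, not the paper's large-scale implementation):

```python
import random

def simple_ga(fitness, dim, pop_size=50, parents=10, sigma=0.1, generations=100):
    """Truncation-selection GA with Gaussian mutation and elitism.

    Children are mutated copies of the top `parents` individuals; the single
    best individual (the elite) survives each generation unmodified.
    """
    population = [
        [random.gauss(0, 1) for _ in range(dim)] for _ in range(pop_size)
    ]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        elite, top = ranked[0], ranked[:parents]
        population = [elite] + [
            [g + random.gauss(0, sigma) for g in random.choice(top)]
            for _ in range(pop_size - 1)
        ]
    return max(population, key=fitness)

# Maximize -sum(x^2): the optimum is the zero vector.
random.seed(0)
best = simple_ga(lambda x: -sum(v * v for v in x), dim=5)
print(sum(v * v for v in best))
```

At the scale of the paper the same loop runs over millions of network weights, with mutations applied via stored random seeds so only seeds, not parameter vectors, need to be shipped between workers.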

**[PDF]**

*Deep RL @ NeurIPS 2018*