Kenneth O. Stanley

10 Blog Articles · 17 Research Papers
Before joining Uber AI Labs full time, Ken was an associate professor of computer science at the University of Central Florida (he is currently on leave). He is a leader in neuroevolution (combining neural networks with evolutionary techniques), where he helped invent prominent algorithms such as NEAT, CPPNs, HyperNEAT, and novelty search. His ideas have also reached a broader audience through the recent popular science book, Why Greatness Cannot Be Planned: The Myth of the Objective.

Engineering Blog Articles

Fiber: Distributed Computing for AI Made Simple

Project Homepage: GitHub

Over the past several years, increasing computational power has fueled rapid advances in machine learning. More and more, algorithms exploit parallelism and rely on distributed training to process an enormous amount of …

Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions

Jeff Clune and Kenneth O. Stanley were co-senior authors of this work and its associated research paper.

Machine learning (ML) powers many technologies and services that underpin Uber’s platforms, and we invest in advancing fundamental ML research and engaging with …

Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data

Kenneth O. Stanley and Jeff Clune served as co-senior authors of this article and its corresponding paper.

At Uber, many of the hard problems we work on can benefit from machine learning, such as improving safety, improving ETAs, …

Introducing EvoGrad: A Lightweight Library for Gradient-Based Evolution

Tools that enable fast and flexible experimentation democratize and accelerate machine learning research. Take, for example, the development of libraries for automatic differentiation, such as Theano, Caffe, TensorFlow, and PyTorch: these libraries have been instrumental in …
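
The core idea EvoGrad builds on can be sketched in a few lines of PyTorch: treat the population as samples from a search distribution and estimate the gradient of expected fitness with the score-function trick, so evolution itself becomes differentiable. This is a minimal illustration of the concept, not EvoGrad's actual API; the function and parameter names below are invented for the example.

```python
import torch

def expected_fitness_grad(mu, sigma, fitness_fn, n_samples=64):
    """Estimate d/d(mu) of E_{theta ~ N(mu, sigma^2)}[fitness(theta)]
    with the score-function (REINFORCE) estimator, so any gradient-based
    optimizer can take the update."""
    dist = torch.distributions.Normal(mu, sigma)
    samples = dist.sample((n_samples,))          # sampling itself carries no gradient
    fitness = torch.tensor([fitness_fn(s) for s in samples])
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)  # variance reduction
    # the log-probability term carries the gradient with respect to mu
    surrogate = (dist.log_prob(samples).sum(dim=-1) * fitness).mean()
    surrogate.backward()
    return mu.grad

# illustrative usage on a toy quadratic fitness
mu = torch.zeros(10, requires_grad=True)
g = expected_fitness_grad(mu, sigma=0.1,
                          fitness_fn=lambda th: -(th ** 2).sum().item())
```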

POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer

Jeff Clune and Kenneth O. Stanley were co-senior authors.

We are interested in open-endedness at Uber AI Labs because it offers the potential for generating a diverse and ever-expanding curriculum for machine learning entirely on its own. Having vast amounts …

Montezuma’s Revenge Solved by Go-Explore, a New Algorithm for Hard-Exploration Problems (Sets Records on Pitfall, Too)

Kenneth O. Stanley and Jeff Clune were co-senior authors.

In deep reinforcement learning (RL), solving the Atari games Montezuma’s Revenge and Pitfall has been a grand challenge. These games represent a broad class of challenging, real-world problems called …

Accelerating Deep Neuroevolution: Train Atari in Hours on a Single Personal Computer

We recently published an article announcing five papers on deep neuroevolution, including the discovery that genetic algorithms can solve deep reinforcement learning problems as well as popular alternatives, such as deep Q-learning and policy gradients. That work follows on …

Differentiable Plasticity: A New Method for Learning to Learn

Neural networks, which underlie many of Uber’s machine learning systems, have proven highly successful in solving complex problems, including image recognition, language understanding, and game-playing. However, these networks are usually trained to a stopping point through gradient descent, which …

VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution

At Uber’s scale, advances in machine learning can significantly enhance technologies that power safer and more reliable transportation solutions. One such advance announced recently by Uber AI Labs is deep neuroevolution, where evolutionary algorithms, such as evolution strategies …

Welcoming the Era of Deep Neuroevolution

By Kenneth O. Stanley, Jeff Clune

On behalf of an Uber AI Labs team that also includes Joel Lehman, Jay Chen, Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, & Xingwen Zhang.

In the field of deep learning, deep neural networks …

Research Papers

Fiber: A Platform for Efficient Development and Distributed Training for Reinforcement Learning and Population-Based Methods

J. Zhi, R. Wang, J. Clune, K. Stanley
Recent advances in machine learning are consistently enabled by increasing amounts of computation. Reinforcement learning (RL) and population-based methods in particular pose unique challenges for efficiency and flexibility to the underlying distributed computing frameworks. These challenges include frequent interaction with simulations, the need for dynamic scaling, and the need for a user interface with low adoption cost and consistency across different backends. In this paper we address these challenges while still retaining development efficiency and flexibility for both research and practical applications by introducing Fiber, a scalable distributed computing framework for RL and population-based methods. [...] [PDF]
arXiv
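
Fiber's low adoption cost comes from mirroring Python's standard multiprocessing interface, so the same code can move from a laptop to a cluster. A minimal sketch in that style, assuming the fiber package exposes a multiprocessing-compatible Pool as the project README describes:

```python
import fiber

def f(x):
    return x * x

if __name__ == "__main__":
    # same call shape as multiprocessing.Pool; Fiber dispatches the work
    # to its distributed backend instead of local subprocesses
    pool = fiber.Pool(processes=4)
    print(pool.map(f, range(10)))
```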

Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions

R. Wang, J. Lehman, A. Rawal, J. Zhi, Y. Li, J. Clune, K. Stanley
Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning. A recent step in this direction is the Paired Open-Ended Trailblazer (POET), an algorithm that generates and solves its own challenges, and allows solutions to goal-switch between challenges to avoid local optima. Here we introduce and empirically validate two new innovations to the original algorithm, as well as two external innovations designed to help elucidate its full potential. [...] [PDF]
International Conference on Machine Learning (ICML), 2020

Evolvability ES: Scalable and Direct Optimization of Evolvability

A. Gajewski, J. Clune, K. Stanley, J. Lehman
Designing evolutionary algorithms capable of uncovering highly evolvable representations is an open challenge; such evolvability is important because it accelerates evolution and enables fast adaptation to changing circumstances. This paper introduces evolvability ES, an evolutionary algorithm designed to explicitly and efficiently optimize for evolvability, i.e. the ability to further adapt. [...] [PDF]
The Genetic and Evolutionary Computation Conference (GECCO), 2019

Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions

R. Wang, J. Lehman, J. Clune, K. Stanley
While the history of machine learning so far encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. [...] [PDF]
2019
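
To make the paired, open-ended structure concrete, here is a deliberately tiny, runnable caricature of the main loop. Every function is a toy stand-in (in the paper, environments are 2D walking terrains and agents are neural-network controllers trained by evolution strategies):

```python
import random

# Toy stand-ins: an environment is a target difficulty, an agent a skill level.
def score(agent, env):                # higher is better
    return -abs(agent - env)

def optimize(agent, env, steps=10, lr=0.3):
    for _ in range(steps):            # stand-in for inner-loop ES on the paired env
        agent += lr * (env - agent)
    return agent

pairs = [(0.0, 0.0)]                  # the population of (environment, agent) pairs
for gen in range(30):
    # 1) periodically mutate environments to spawn new, typically harder challenges
    if gen % 5 == 0 and len(pairs) < 6:
        parent_env, _ = random.choice(pairs)
        pairs.append((parent_env + random.uniform(0.1, 1.0), 0.0))
    # 2) optimize each agent against its paired environment
    pairs = [(env, optimize(agent, env)) for env, agent in pairs]
    # 3) goal-switching: adopt another pair's agent if it performs better here
    pairs = [(env, max((a for _, a in pairs), key=lambda a: score(a, env)))
             for env, agent in pairs]
```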

Go-Explore: a New Approach for Hard-Exploration Problems

A. Ecoffet, J. Huizinga, J. Lehman, K. Stanley, J. Clune
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. [...] [PDF]
2019
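
The algorithm's two key principles, remembering promising states and returning to them before exploring further, can be caricatured in a few lines. The toy chain world and cell representation below are invented stand-ins for the paper's Atari setup, where cells are downscaled frames and returning means restoring simulator state:

```python
import random

def step(state, action):                   # toy environment: walk on a half-line
    return max(0, state + action)

archive = {0: {"state": 0, "visits": 0}}   # cell -> saved state and statistics
for _ in range(200):
    # 1) select a promising cell, favoring rarely visited ones
    cell = min(archive, key=lambda c: archive[c]["visits"])
    archive[cell]["visits"] += 1
    # 2) return: restore the saved state exactly, with no re-exploration
    state = archive[cell]["state"]
    # 3) explore from it with random actions, remembering any new cells
    for _ in range(10):
        state = step(state, random.choice([-1, 1]))
        if state not in archive:           # in this toy, the cell is the state itself
            archive[state] = {"state": state, "visits": 0}

print(f"discovered {len(archive)} cells, deepest: {max(archive)}")
```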

Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity

T. Miconi, A. Rawal, J. Clune, K. Stanley
The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity. Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain. The resulting self-modifying abilities of the brain play an important role in learning and adaptation, and are a major basis for biological reinforcement learning. Here we show for the first time that artificial neural networks with such neuromodulated plasticity can be trained with gradient descent. [...] [PDF]
International Conference on Learning Representations (ICLR), 2019

VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution

R. Wang, J. Clune, K. Stanley
Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems. However, it remains a challenge to analyze and interpret the underlying process of neuroevolution in such high dimensions. To begin to address this challenge, this paper presents an interactive data visualization tool called VINE (Visual Inspector for NeuroEvolution) aimed at helping neuroevolution researchers and end-users better understand and explore this family of algorithms. [...] [PDF]
Visualization Workshop at The Genetic and Evolutionary Computation Conference (GECCO), 2018

Differentiable plasticity: training plastic neural networks with backpropagation

T. Miconi, J. Clune, K. Stanley
How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. [...] [PDF]
International Conference on Machine Learning (ICML), 2018
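
The mechanism is compact enough to sketch in PyTorch: each connection carries a baseline weight w, a trained plasticity coefficient alpha, and a Hebbian trace that accumulates within an episode, with w and alpha shaped by backpropagation. This is a simplified illustration of the idea rather than the paper's code:

```python
import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    """Effective weight = w + alpha * Hebb: w and alpha are trained by
    backprop, while Hebb is a within-episode Hebbian trace."""
    def __init__(self, n_in, n_out, eta=0.1):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(n_in, n_out))
        self.alpha = nn.Parameter(0.01 * torch.randn(n_in, n_out))
        self.eta = eta                       # trace learning rate (could be learned)

    def forward(self, x, hebb):
        y = torch.tanh(x @ (self.w + self.alpha * hebb))
        # Hebbian update of the trace; it stays in the graph, so backprop
        # through time can shape how plasticity is used
        hebb = (1 - self.eta) * hebb + \
               self.eta * (x.unsqueeze(2) * y.unsqueeze(1)).mean(0)
        return y, hebb

layer = PlasticLayer(8, 8)
hebb = torch.zeros(8, 8)                     # reset at the start of each episode
y, hebb = layer(torch.randn(4, 8), hebb)     # batch of 4
```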

The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities

J. Lehman, J. Clune, D. Misevic, C. Adami, L. Altenberg, J. Beaulieu, P. Bentley, S. Bernard, G. Beslon, D. Bryson, P. Chrabaszcz, N. Cheney, A. Cully, S. Doncieux, F. Dyer, K. Ellefsen, R. Feldt, S. Fischer, S. Forrest, A. Frénoy, C. Gagné, L. Goff, L. Grabowski, B. Hodjat, F. Hutter, L. Keller, C. Knibbe, P. Krcah, R. Lenski, H. Lipson, R. MacCurdy, C. Maestre, R. Miikkulainen, S. Mitri, D. Moriarty, J. Mouret, A. Nguyen, C. Ofria, M. Parizeau, D. Parsons, R. Pennock, W. Punch, T. Ray, M. Schoenauer, E. Schulte, K. Sims, K. Stanley, F. Taddei, D. Tarapore, S. Thibault, W. Weimer, R. Watson, J. Yosinski
Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. [...] [PDF]
2018

Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents

E. Conti, V. Madhavan, F. Such, J. Lehman, K. Stanley, J. Clune
Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but are much faster (e.g. hours vs. days) because they parallelize better. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
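
The novelty-seeking ingredient can be stated briefly: an agent's novelty is its mean distance to the k nearest behavior characterizations in an archive of past behaviors, and that score can replace or mix with reward in the ES update. A minimal NumPy illustration (the 2D behavior vectors, e.g. an agent's final position, are an invented example):

```python
import numpy as np

def novelty(behavior, archive, k=10):
    """Mean distance from a behavior characterization to its k nearest
    neighbors in the archive; high novelty = far from anything seen."""
    dists = np.linalg.norm(np.asarray(archive) - behavior, axis=1)
    return np.sort(dists)[:k].mean()

archive = [np.random.randn(2) for _ in range(100)]   # past behaviors
print(novelty(np.zeros(2), archive))
```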

On the Relationship Between the OpenAI Evolution Strategy and Stochastic Gradient Descent

X. Zhang, J. Clune, K. Stanley
Because stochastic gradient descent (SGD) has shown promise optimizing neural networks with millions of parameters and few if any alternatives are known to exist, it has moved to the heart of leading approaches to reinforcement learning (RL). [...] [PDF]
2017

Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients

J. Lehman, J. Chen, J. Clune, K. Stanley
While neuroevolution (evolving neural networks) has a successful track record across a variety of domains from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights is likely to break existing functionality, providing no learning signal even if some individual weight changes were beneficial. [...] [PDF]
The Genetic and Evolutionary Computation Conference (GECCO), 2018
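
A rough sketch of the gradient-based flavor of the idea in PyTorch: scale each weight's random perturbation inversely to how strongly the network's outputs respond to that weight on a reference batch. This is a crude single-backward variant for illustration only; the paper's SM-G variants compute per-output sensitivities more carefully:

```python
import torch
import torch.nn as nn

def safe_mutation(net, x, scale=0.1, eps=1e-8):
    """Perturb weights with noise damped by output sensitivity, so the
    mutation is less likely to break existing functionality."""
    net.zero_grad()
    net(x).sum().backward()          # d(sum of outputs)/d(weights) as sensitivity
    with torch.no_grad():
        for p in net.parameters():
            sensitivity = p.grad.abs() + eps
            p += scale * torch.randn_like(p) / sensitivity

net = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 4))
safe_mutation(net, torch.randn(8, 16))   # reference batch of 8 inputs
```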

Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning

F. Such, V. Madhavan, E. Conti, J. Lehman, K. Stanley, J. Clune
Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. [...] [PDF]
Deep RL @ NeurIPS 2018

Open-endedness: The last grand challenge you’ve never heard of

K. Stanley
Artificial intelligence (AI) is a grand challenge for computer science. Lifetimes of effort and billions of dollars have powered its pursuit. Yet, today its most ambitious vision remains unmet: though progress continues, no human-competitive general digital intelligence is within our reach. [...] [HTML]
O’Reilly Online, 2017

ES Is More Than Just a Traditional Finite-Difference Approximator

J. Lehman, J. Chen, J. Clune, K. Stanley
An evolution strategy (ES) variant based on a simplification of a natural evolution strategy recently attracted attention because it performs surprisingly well in challenging deep reinforcement learning domains. It searches for neural network parameters by generating perturbations to the current set of parameters, checking their performance, and moving in the aggregate direction of higher reward. [...] [PDF]
The Genetic and Evolutionary Computation Conference (GECCO), 2018
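
For reference, the ES variant under discussion amounts to a few lines of NumPy: sample perturbations of the parameters, evaluate each, and step along the reward-weighted average of the perturbations, which is what distinguishes it from a single finite-difference estimate. A minimal sketch on a toy objective:

```python
import numpy as np

def es_step(theta, reward_fn, lr=0.02, sigma=0.1, n=50):
    """One simplified-ES update: move along the reward-weighted
    average of n sampled parameter perturbations."""
    eps = np.random.randn(n, theta.size)
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return theta + lr / (n * sigma) * eps.T @ rewards

theta = np.zeros(10)
for _ in range(100):                 # toy objective: maximize -||theta - 3||^2
    theta = es_step(theta, lambda t: -np.sum((t - 3.0) ** 2))
```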

Neuroevolution: A Different Kind of Deep Learning

K. Stanley
Neuroevolution is making a comeback. Prominent artificial intelligence labs and researchers are experimenting with it, a string of new successes have bolstered enthusiasm, and new opportunities for impact in deep learning are emerging. [...] [HTML]
O’Reilly Online, 2017

The emergence of canalization and evolvability in an open-ended, interactive evolutionary system

J. Huizinga, K. Stanley, J. Clune
Natural evolution has produced a tremendous diversity of functional organisms. Many believe an essential component of this process was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspring. [...] [PDF]
Artificial Life (to appear), 2017
