Kenneth O. Stanley
6 blog articles · 14 research papers
Before joining Uber AI Labs full time, Ken was an associate professor of computer science at the University of Central Florida (he is currently on leave). He is a leader in neuroevolution (combining neural networks with evolutionary techniques), where he helped invent prominent algorithms such as NEAT, CPPNs, HyperNEAT, and novelty search. His ideas have also reached a broader audience through the recent popular science book, Why Greatness Cannot Be Planned: The Myth of the Objective.

Engineering Blog Articles

POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer

Uber AI Labs introduces the Paired Open-Ended Trailblazer (POET), an algorithm that leverages open-endedness to push the bounds of machine learning.

Montezuma’s Revenge Solved by Go-Explore, a New Algorithm for Hard-Exploration Problems (Sets Records on Pitfall, Too)

Uber AI Labs introduces Go-Explore, a new reinforcement learning algorithm for solving a variety of challenging problems, especially in robotics.

Accelerating Deep Neuroevolution: Train Atari in Hours on a Single Personal Computer

By applying hardware acceleration to deep neuroevolution, now released as an open source project, Uber AI Labs trained a neural network to play Atari in just a few hours on a single personal computer, making this type of research accessible to far more people.

Differentiable Plasticity: A New Method for Learning to Learn

Differentiable Plasticity is a new machine learning method for training neural networks to change their connection weights adaptively even after training is completed, allowing a form of learning inspired by the lifelong plasticity of biological brains.

VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution

Uber AI Labs introduces Visual Inspector for Neuroevolution (VINE), an open source interactive data visualization tool to help neuroevolution researchers better understand this family of algorithms.

Welcoming the Era of Deep Neuroevolution

By leveraging neuroevolution to train deep neural networks, Uber AI Labs is developing new approaches to reinforcement learning problems.

Research Papers

Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions

R. Wang, J. Lehman, J. Clune, K. Stanley
While the history of machine learning so far encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. [...] [PDF]
2019

Go-Explore: a New Approach for Hard-Exploration Problems

A. Ecoffet, J. Huizinga, J. Lehman, K. Stanley, J. Clune
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. [...] [PDF]
2019
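
To give a sense of the method: Go-Explore's first phase keeps an archive of visited states, repeatedly returns to a promising archived state, and explores from there. Below is a minimal toy sketch of that loop in Python; the cell discretization, the uniform cell selection, and the Gym-style environment with snapshot/restore methods are all illustrative assumptions, not the paper's implementation.

import random

def cell(obs):
    # Hypothetical cell mapping: coarsely discretize the observation so
    # that many similar states share one archive entry.
    return tuple(round(float(x), 1) for x in obs)

def go_explore_phase1(env, n_iters=1000, explore_steps=30):
    # Archive maps cell -> (best score reaching it, simulator snapshot).
    obs = env.reset()
    archive = {cell(obs): (0.0, env.snapshot())}
    for _ in range(n_iters):
        chosen = random.choice(list(archive))  # select a cell (the paper weights this choice)
        score, snap = archive[chosen]
        env.restore(snap)                      # return to the cell without re-exploring
        for _ in range(explore_steps):         # explore from it
            obs, reward, done, _ = env.step(env.action_space.sample())
            score += reward
            key = cell(obs)
            if key not in archive or score > archive[key][0]:
                archive[key] = (score, env.snapshot())  # new or better cell
            if done:
                break
    return archive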

Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity

T. Miconi, A. Rawal, J. Clune, K. Stanley
[PDF]
International Conference on Learning Representations (ICLR), 2019

VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution

R. Wang, J. Clune, K. Stanley
Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) problems. However, it remains a challenge to analyze and interpret the underlying process of neuroevolution in such high dimensions. To begin to address this challenge, this paper presents an interactive data visualization tool called VINE (Visual Inspector for NeuroEvolution) aimed at helping neuroevolution researchers and end-users better understand and explore this family of algorithms. [...] [PDF]
Visualization Workshop at The Genetic and Evolutionary Computation Conference (GECCO), 2018

Differentiable plasticity: training plastic neural networks with backpropagation

T. Miconi, J. Clune, K. Stanley
How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. [...] [PDF]
International Conference on Machine Learning (ICML), 2018
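
As a rough illustration of the idea, here is a minimal plastic layer in PyTorch. Each connection has a fixed weight, a trainable plasticity coefficient, and a Hebbian trace built from pre- and post-synaptic activity; the layer sizes, activation, and batch-averaged trace update are illustrative choices, not the paper's exact formulation.

import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(n_in, n_out))      # fixed component
        self.alpha = nn.Parameter(0.01 * torch.randn(n_in, n_out))  # plasticity coefficients
        self.eta = nn.Parameter(torch.tensor(0.1))                  # trace learning rate

    def forward(self, x, hebb):
        # Effective weight = fixed weight + plasticity-gated Hebbian trace.
        y = torch.tanh(x @ (self.w + self.alpha * hebb))
        # Hebbian update: a decaying trace of pre/post activity products.
        hebb = (1 - self.eta) * hebb + self.eta * (x.unsqueeze(2) @ y.unsqueeze(1)).mean(0)
        return y, hebb

# Because w, alpha, and eta all receive gradients through y, ordinary
# backpropagation shapes not just the weights but how plastic each
# connection is.
layer = PlasticLayer(4, 3)
y, hebb = layer(torch.randn(8, 4), torch.zeros(4, 3))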

The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities

J. Lehman, J. Clune, D. Misevic, C. Adami, L. Altenberg, J. Beaulieu, P. Bentley, S. Bernard, G. Beslon, D. Bryson, P. Chrabaszcz, N. Cheney, A. Cully, S. Doncieux, F. Dyer, K. Ellefsen, R. Feldt, S. Fischer, S. Forrest, A. Frénoy, C. Gagné, L. Goff, L. Grabowski, B. Hodjat, F. Hutter, L. Keller, C. Knibbe, P. Krcah, R. Lenski, H. Lipson, R. MacCurdy, C. Maestre, R. Miikkulainen, S. Mitri, D. Moriarty, J. Mouret, A. Nguyen, C. Ofria, M. Parizeau, D. Parsons, R. Pennock, W. Punch, T. Ray, M. Schoenauer, E. Schulte, K. Sims, K. Stanley, F. Taddei, D. Tarapore, S. Thibault, W. Weimer, R. Watson, J. Yosinski
Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. [...] [PDF]
2018

Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents

E. Conti, V. Madhavan, F. Such, J. Lehman, K. Stanley, J. Clune
Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but are much faster (e.g. hours vs. days) because they parallelize better. [...] [PDF]
ViGIL @ NeurIPS, 2017

On the Relationship Between the OpenAI Evolution Strategy and Stochastic Gradient Descent

X. Zhang, J. Clune, K. Stanley
Because stochastic gradient descent (SGD) has shown promise optimizing neural networks with millions of parameters and few if any alternatives are known to exist, it has moved to the heart of leading approaches to reinforcement learning (RL). [...] [PDF]
2017
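
For context, the relationship in question can be summarized by a standard identity (a textbook fact about Gaussian smoothing, not a quotation from the paper): the ES update estimates the gradient of a smoothed version of the objective F.

\nabla_\theta \, \mathbb{E}_{\epsilon \sim \mathcal{N}(0,I)}\big[F(\theta + \sigma\epsilon)\big]
  \;=\; \frac{1}{\sigma}\,\mathbb{E}_{\epsilon \sim \mathcal{N}(0,I)}\big[F(\theta + \sigma\epsilon)\,\epsilon\big]

Sampling the right-hand side with a finite population of perturbations recovers the perturb-and-average ES update, which can therefore be read as stochastic gradient descent on the smoothed objective rather than on F itself.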

Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients

J. Lehman, J. Chen, J. Clune, K. Stanley
While neuroevolution (evolving neural networks) has a successful track record across a variety of domains from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights is likely to break existing functionality, providing no learning signal even if some individual weight changes were beneficial. [...] [PDF]
The Genetic and Evolutionary Computation Conference (GECCO), 2018
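
A minimal sketch of the gradient-based variant in PyTorch: estimate each weight's influence on the network's outputs over a batch of reference inputs, then divide the random perturbation by that sensitivity so no single mutation can swamp existing behavior. The sensitivity estimate below (summed squared per-output gradients) and the network sizes are illustrative simplifications of the paper's method.

import torch

def safe_mutation(model, inputs, mut_scale=0.1, eps=1e-8):
    params = list(model.parameters())
    out = model(inputs)                    # (batch, n_outputs)
    sens = [torch.zeros_like(p) for p in params]
    # Accumulate squared gradients of each output dimension w.r.t. each weight.
    for k in range(out.shape[1]):
        grads = torch.autograd.grad(out[:, k].sum(), params, retain_graph=True)
        for s, g in zip(sens, grads):
            s += g ** 2
    with torch.no_grad():
        for p, s in zip(params, sens):
            # Perturb less where the outputs are more sensitive to the weight.
            p.add_(mut_scale * torch.randn_like(p) / (s.sqrt() + eps))

# Toy usage with a small feedforward policy (hypothetical sizes).
net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
safe_mutation(net, torch.randn(32, 8))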

Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning

F. Such, V. Madhavan, E. Conti, J. Lehman, K. Stanley, J. Clune
Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. [...] [PDF]
Deep RL @ NeurIPS, 2018

Open-endedness: The last grand challenge you’ve never heard of

K. Stanley
Artificial intelligence (AI) is a grand challenge for computer science. Lifetimes of effort and billions of dollars have powered its pursuit. Yet, today its most ambitious vision remains unmet: though progress continues, no human-competitive general digital intelligence is within our reach. [...] [HTML]
O’Reilly Online, 2017

ES Is More Than Just a Traditional Finite-Difference Approximator

J. Lehman, J. Chen, J. Clune, K. Stanley
An evolution strategy (ES) variant based on a simplification of a natural evolution strategy recently attracted attention because it performs surprisingly well in challenging deep reinforcement learning domains. It searches for neural network parameters by generating perturbations to the current set of parameters, checking their performance, and moving in the aggregate direction of higher reward. [...] [PDF]
The Genetic and Evolutionary Computation Conference (GECCO), 2018
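
The update the abstract describes is compact enough to sketch in a few lines of numpy. Everything here (the population size, noise scale, learning rate, and the toy evaluate function) is an illustrative assumption rather than the paper's configuration.

import numpy as np

def es_step(theta, evaluate, npop=50, sigma=0.1, alpha=0.01, rng=None):
    # Generate perturbations, score each one, and move the parameters in
    # the reward-weighted aggregate direction.
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal((npop, theta.size))
    rewards = np.array([evaluate(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize returns
    return theta + alpha * (eps.T @ rewards) / (npop * sigma)

# Toy usage: maximize -||theta||^2, whose optimum is the origin.
theta = np.ones(10)
for _ in range(200):
    theta = es_step(theta, lambda t: -np.sum(t ** 2))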

Neuroevolution: A Different Kind of Deep Learning

K. Stanley
Neuroevolution is making a comeback. Prominent artificial intelligence labs and researchers are experimenting with it, a string of new successes has bolstered enthusiasm, and new opportunities for impact in deep learning are emerging. [...] [HTML]
O’Reilly Online, 2017

The emergence of canalization and evolvability in an open-ended, interactive evolutionary system

J. Huizinga, K. Stanley, J. Clune
Natural evolution has produced a tremendous diversity of functional organisms. Many believe an essential component of this process was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspring. [...] [PDF]
Artificial Life (to appear), 2017

Popular Articles