Estimating Q(s,s’) with Deep Deterministic Dynamics Gradients

    Abstract

    In this paper, we introduce a novel form of value function, Q(s,s’), that expresses the utility of transitioning from a state s to a neighboring state s’ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at this http URL.
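    The core idea in the abstract — assigning value to state transitions rather than state-action pairs — can be illustrated with a minimal tabular sketch. The chain environment, reward function, and hyperparameters below are illustrative assumptions, not the paper's deep RL implementation (which learns a neural forward model to propose next states); the sketch only shows how acting reduces to picking the highest-valued reachable successor state.

    ```python
    # Tabular sketch of the Q(s, s') transition-value idea (illustrative only).
    # Environment: a deterministic 5-state chain; reaching the last state pays 1.

    N = 5        # states 0..4 (assumed toy environment)
    GAMMA = 0.9  # assumed discount factor

    def neighbors(s):
        """Deterministic transitions: step left or right along the chain."""
        return [max(s - 1, 0), min(s + 1, N - 1)]

    def reward(s, s_next):
        return 1.0 if s_next == N - 1 else 0.0

    # Q[s][s_next]: utility of transitioning s -> s_next, then acting optimally.
    Q = {s: {sn: 0.0 for sn in neighbors(s)} for s in range(N)}

    # Value iteration on transitions: Q(s,s') = r(s,s') + gamma * max_s'' Q(s',s'')
    for _ in range(100):
        for s in range(N):
            for sn in Q[s]:
                Q[s][sn] = reward(s, sn) + GAMMA * max(Q[sn].values())

    def greedy_next_state(s):
        """The policy picks the best successor state directly -- no actions needed."""
        return max(Q[s], key=Q[s].get)
    ```

    Because values attach to transitions, the same Q table transfers across any action space that can realize those transitions, which is the decoupling of actions from values the abstract describes.
    
    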

    Authors

    Ashley D. Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, Jason Yosinski

    Publication

    37th International Conference on Machine Learning (ICML), 2020

    Full Paper

    Estimating Q(s,s’) with Deep Deterministic Dynamics Gradients (PDF)

    Rosanne Liu
    Rosanne is a senior research scientist and a founding member of Uber AI. She obtained her PhD in Computer Science at Northwestern University, where she used neural networks to help discover novel materials. She is currently working on multiple fronts where machine learning and neural networks remain mysterious. She attempts to write in her spare time.
    Jane Hung
    Jane Hung is a research scientist with Uber AI Labs.
    Ankit Jain
    Ankit Jain is a senior research scientist with Uber AI Labs.
    Rui Wang
    Rui Wang is a senior research scientist with Uber AI. He is passionate about advancing the state of the art of machine learning and AI, and about connecting cutting-edge advances to the broader business and products at Uber. His recent work at Uber was published at leading international conferences in machine learning and AI (ICML, IJCAI, GECCO, etc.), won a Best Paper Award at GECCO 2019, and was covered by media outlets such as Science, Wired, VentureBeat, and Quanta Magazine.
    Adrien Ecoffet
    Adrien Ecoffet is a research scientist with Uber AI Labs.
    Thomas Miconi
    Thomas Miconi is a research scientist at Uber AI Labs.
    Jason Yosinski
    Jason Yosinski is a founding member of Uber AI Labs and there leads the Deep Collective research group. He is known for contributions to understanding neural network modeling, representations, and training. Prior to Uber, Jason worked on robotics at Caltech, co-founded two web companies, and started a robotics program in Los Angeles middle schools that now serves over 500 students. He completed his PhD working at the Cornell Creative Machines Lab, University of Montreal, JPL, and Google DeepMind. He is a recipient of the NASA Space Technology Research Fellowship, has co-authored over 50 papers and patents, and was VP of ML at Geometric Intelligence, which Uber acquired. His work has been profiled by NPR, the BBC, Wired, The Economist, Science, and the NY Times. In his free time, Jason enjoys cooking, reading, paragliding, and pretending he's an artist.