Differentiable Compositional Kernel Learning for Gaussian Processes

    Abstract

    The generalization properties of Gaussian processes depend heavily on the choice of kernel, and this choice remains a dark art. We present the Neural Kernel Network (NKN), a flexible family of kernels represented by a neural network. The NKN architecture is based on the composition rules for kernels, so that each unit of the network corresponds to a valid kernel. It can compactly approximate compositional kernel structures such as those used by the Automatic Statistician (Lloyd et al., 2014), but because the architecture is differentiable, it is end-to-end trainable with gradient-based optimization. We show that the NKN is universal for the class of stationary kernels. Empirically, we demonstrate the NKN's pattern discovery and extrapolation abilities on several tasks that depend crucially on identifying the underlying structure, including time series and texture extrapolation, as well as Bayesian optimization.
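    The key idea behind the NKN is that kernels are closed under certain compositions: a sum of kernels with non-negative weights is a kernel, and an elementwise product of kernels is a kernel. A network that alternates these two operations over a set of base kernels therefore always outputs a valid kernel, while remaining differentiable in its weights. The following is a minimal, illustrative sketch of this idea (not the authors' implementation); the base kernels, layer sizes, and weight parameterization are hypothetical choices for illustration.

    ```python
    # Illustrative NKN-style kernel: non-negative sums and elementwise products
    # of base kernels, alternated across layers. Hypothetical sketch only.
    import numpy as np

    def rbf_kernel(x1, x2, lengthscale=1.0):
        """Squared-exponential (RBF) base kernel on 1-D inputs."""
        d2 = (x1[:, None] - x2[None, :]) ** 2
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    def periodic_kernel(x1, x2, period=1.0, lengthscale=1.0):
        """Periodic base kernel on 1-D inputs."""
        d = np.abs(x1[:, None] - x2[None, :])
        return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

    def linear_layer(kernels, raw_weights):
        """Combine kernel matrices with non-negative weights (closure under sums)."""
        weights = np.exp(raw_weights)  # exponentiation keeps the weights positive
        return [sum(w * K for w, K in zip(weights[i], kernels))
                for i in range(weights.shape[0])]

    def product_layer(kernels):
        """Multiply adjacent kernel matrices elementwise (closure under products)."""
        return [kernels[i] * kernels[i + 1] for i in range(0, len(kernels) - 1, 2)]

    def nkn_kernel(x1, x2, raw_weights1, raw_weights2):
        """A two-layer NKN-style kernel built from two base kernels."""
        base = [rbf_kernel(x1, x2), periodic_kernel(x1, x2)]
        hidden = linear_layer(base, raw_weights1)   # 4 weighted sums of base kernels
        hidden = product_layer(hidden)              # 2 kernels after pairwise products
        out = linear_layer(hidden, raw_weights2)    # final single-unit linear layer
        return out[0]

    x = np.linspace(0.0, 5.0, 50)
    K = nkn_kernel(x, x, np.random.randn(4, 2), np.random.randn(1, 2))
    print(K.shape)  # (50, 50)
    ```

    In the paper, the raw weights are treated as differentiable parameters and trained end-to-end by maximizing the GP marginal likelihood with gradient-based optimization; the numpy sketch above only illustrates why every unit's output remains a valid kernel.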

    Authors

    Shengyang Sun, Guodong Zhang, Chaoqi Wang, Wenyuan Zeng, Jiaman Li, Roger Grosse

    Conference

    ICML 2018

    Full Paper

    ‘Differentiable Compositional Kernel Learning for Gaussian Processes’ (PDF)

    Wenyuan Zeng is currently a PhD student at the University of Toronto, supervised by Prof. Raquel Urtasun. His research interests lie mainly in deep learning, computer vision, and decision making. He also works full-time at Uber ATG Toronto, applying his research to the development of self-driving cars with a focus on perception, prediction, and planning. Before coming to the University of Toronto, Wenyuan completed his Bachelor's degree at Tsinghua University, China, majoring in Mathematics and Physics.