Discovering Interpretable Representations for Both Deep Generative and Discriminative Models

    Abstract

    Interpretability of representations in both deep generative and discriminative models is highly desirable. Current methods jointly optimize an objective combining accuracy and interpretability. However, this may reduce accuracy, and is not applicable to already trained models. We propose two interpretability frameworks. First, we provide an interpretable lens for an existing model. We use a generative model which takes as input the representation in an existing (generative or discriminative) model, weakly supervised by limited side information. Applying a flexible and invertible transformation to the input leads to an interpretable representation with no loss in accuracy. We extend the approach using an active learning strategy to choose the most useful side information to obtain, allowing a human to guide what “interpretable” means. Our second framework relies on joint optimization for a representation which is both maximally informative about the side information and maximally compressive about the non-interpretable data factors. This leads to a novel perspective on the relationship between compression and regularization. We also propose a new interpretability evaluation metric based on our framework. Empirically, we achieve state-of-the-art results on three datasets using the two proposed algorithms.
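The second framework's objective can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the paper's implementation: it combines a cross-entropy term (keeping the latent code informative about side-information labels) with a KL-to-prior term (compressing away remaining input factors), with `beta` controlling the trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quantities (illustrative, not from the paper):
# a Gaussian posterior over a latent code z for a batch of inputs,
# a classifier head on z, and side-information labels s.
mu = rng.normal(size=(4, 2))        # posterior means of q(z|x)
log_var = rng.normal(size=(4, 2))   # posterior log-variances of q(z|x)
logits = rng.normal(size=(4, 3))    # head on z predicting side info s
s = np.array([0, 2, 1, 0])          # side-information labels

# Informativeness term: cross-entropy of the side-information prediction,
# encouraging z to carry the interpretable factors.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
informativeness_loss = -log_probs[np.arange(len(s)), s].mean()

# Compression term: KL(q(z|x) || N(0, I)), penalizing information
# retained about the non-interpretable data factors.
kl = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var).sum(axis=1).mean()

beta = 0.1  # trade-off between informativeness and compression
loss = informativeness_loss + beta * kl
```

In this view the KL compression term acts like a regularizer on the representation, which is the compression/regularization connection the abstract alludes to.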

    Authors

    Tameem Adel, Zoubin Ghahramani, Adrian Weller

    Conference

    ICML 2018

    Full Paper

    ‘Discovering Interpretable Representations for Both Deep Generative and Discriminative Models’ (PDF)

    Zoubin Ghahramani
    Zoubin Ghahramani is Chief Scientist of Uber and a world leader in the field of machine learning, significantly advancing the state-of-the-art in algorithms that can learn from data. He is known in particular for fundamental contributions to probabilistic modeling and Bayesian approaches to machine learning systems and AI. Zoubin also maintains his roles as Professor of Information Engineering at the University of Cambridge and Deputy Director of the Leverhulme Centre for the Future of Intelligence. He was one of the founding directors of the Alan Turing Institute (the UK's national institute for Data Science and AI), and is a Fellow of St John's College Cambridge and of the Royal Society.