
Theofanis Karaletsos

Theofanis took his first steps in machine learning at the Max Planck Institute for Intelligent Systems, in collaboration with Microsoft Research Cambridge, with work focused on unsupervised knowledge extraction from unstructured data, such as generative modeling of images and phenotyping for biology. He then moved to Memorial Sloan Kettering Cancer Center in New York, where he worked on machine learning for cancer therapeutics. In 2016 he joined the small AI startup Geometric Intelligence and, with his colleagues, went on to form the new Uber AI Labs. Theofanis' research interests center on rich probabilistic modeling, approximate inference, and probabilistic programming. His main passion is structured models, such as spatio-temporal processes, models of image formation, and deep probabilistic models, along with the tools needed to make them work on real data. His past in the life sciences has also made him keenly interested in making models interpretable, quantifying their uncertainty, non-traditional learning settings such as weakly supervised learning, and model criticism.

Engineering Blog Articles

Announcing the 2019 Uber AI Residency

The Uber AI Residency is a 12-month training program for academics and professionals interested in becoming AI researchers with Uber AI Labs or Uber ATG.

Research Papers

Probabilistic Meta-Representations Of Neural Networks

T. Karaletsos, P. Dayan, Z. Ghahramani
Existing Bayesian treatments of neural networks are typically characterized by weak prior and approximate posterior distributions according to which all the weights are drawn independently. Here, we consider a richer prior distribution in which units in the network are represented by latent variables, and the weights between units are drawn conditionally on the values of the collection of those variables. [...] [PDF at arXiv]
Uncertainty in Deep Learning Workshop (UDL) at UAI, 2018
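To make the idea concrete, below is a minimal sketch in Pyro of a prior in this spirit: rather than giving every weight an independent Gaussian prior, each unit carries a latent code and each weight is drawn conditionally on the codes of the two units it connects. The inner-product link, code dimension, and shapes are illustrative assumptions, not the paper's model.

```python
import torch
import pyro
import pyro.distributions as dist

# Illustrative sketch (not the paper's exact construction): each unit gets a
# latent code, and weights are sampled conditionally on the codes of the
# units they connect, instead of independently.
def layer_prior(n_in, n_out, code_dim=4):
    z_in = pyro.sample("z_in", dist.Normal(0., 1.).expand([n_in, code_dim]).to_event(2))
    z_out = pyro.sample("z_out", dist.Normal(0., 1.).expand([n_out, code_dim]).to_event(2))
    # weight means are a simple (assumed) function of the unit codes
    w_mean = z_in @ z_out.t()                       # shape (n_in, n_out)
    w = pyro.sample("w", dist.Normal(w_mean, 0.1).to_event(2))
    return w

w = layer_prior(5, 3)
print(w.shape)  # torch.Size([5, 3])
```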

Pyro: Deep Universal Probabilistic Programming

E. Bingham, J. Chen, M. Jankowiak, F. Obermeyer, N. Pradhan, T. Karaletsos, R. Singh, P. Szerlip, P. Horsfall, N. Goodman
Pyro is a probabilistic programming language built on Python as a platform for developing advanced probabilistic models in AI research. [...] [PDF at arXiv]
Journal of Machine Learning Research (JMLR), 2018
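For readers unfamiliar with Pyro, here is a minimal sketch of what a Pyro program looks like, using the library's documented primitives (pyro.sample, pyro.plate, SVI with an automatic guide); the toy model and variable names are illustrative, not taken from the paper.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam

# Toy Bayesian model: infer the latent mean of Gaussian observations.
def model(data):
    mu = pyro.sample("mu", dist.Normal(0., 10.))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(mu, 1.), obs=data)

data = torch.randn(100) + 3.0                   # observations centered near 3
guide = AutoDiagonalNormal(model)               # automatic variational posterior
svi = SVI(model, guide, Adam({"lr": 0.05}), loss=Trace_ELBO())

for step in range(1000):
    svi.step(data)

print(guide.median()["mu"])                     # approximate posterior median of mu
```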

Pathwise Derivatives for Multivariate Distributions

M. Jankowiak, T. Karaletsos
We exploit the link between the transport equation and derivatives of expectations to construct efficient pathwise gradient estimators for multivariate distributions. We focus on two main threads. [...] [PDF at arXiv]
As background, the sketch below shows the standard reparameterized ("pathwise") gradient estimator for a multivariate Gaussian in PyTorch; it is a toy illustration of the general technique, not the paper's transport-equation-based estimators.

```python
import torch
from torch.distributions import MultivariateNormal

# Illustrative only: a pathwise (reparameterized) Monte Carlo estimate of
# d/dtheta E_{z ~ N(mu, Sigma)}[f(z)] for a toy objective f.
torch.manual_seed(0)

mu = torch.zeros(3, requires_grad=True)
log_scale = torch.zeros(3, requires_grad=True)   # diagonal covariance for simplicity

def f(z):
    # toy test function whose expectation we differentiate
    return (z ** 2).sum(-1)

cov = torch.diag(log_scale.exp() ** 2)
q = MultivariateNormal(mu, covariance_matrix=cov)

z = q.rsample((1000,))            # samples remain differentiable w.r.t. mu, log_scale
loss = f(z).mean()                # Monte Carlo estimate of E[f(z)]
loss.backward()                   # gradients flow through the sampling path

print(mu.grad, log_scale.grad)
```
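International Conference on Artificial Intelligence and Statistics (AISTATS) (in submission), 2019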

Likelihood-free inference with emulator networks

J.-M. Lueckmann, G. Bassetto, T. Karaletsos, J. H. Macke
Approximate Bayesian Computation (ABC) provides methods for Bayesian inference in simulation-based stochastic models which do not permit tractable likelihoods. We present a new ABC method which uses probabilistic neural emulator networks to learn synthetic likelihoods on simulated data -- both local emulators which approximate the likelihood for specific observed data, as well as global ones which are applicable to a range of data. [...] [PDF at arXiv]
2018
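To illustrate the general idea of a learned synthetic likelihood (a simplified stand-in, not the paper's emulator networks), the toy sketch below trains a small network that maps simulator parameters to a Gaussian likelihood over one-dimensional simulator outputs; the simulator, architecture, and prior range are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Toy stochastic simulator with no tractable likelihood assumed: x ~ N(theta, 0.5).
def simulator(theta):
    return theta + 0.5 * torch.randn_like(theta)

# Tiny "emulator": maps theta to the mean and log-std of a Gaussian over x.
emulator = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(emulator.parameters(), lr=1e-2)

for step in range(2000):
    theta = torch.rand(128, 1) * 4 - 2           # parameters drawn from a uniform prior
    x = simulator(theta)                          # run the simulator
    mean, log_std = emulator(theta).chunk(2, dim=-1)
    # maximize the Gaussian log-likelihood of simulated data under the emulator
    nll = 0.5 * ((x - mean) / log_std.exp()) ** 2 + log_std
    opt.zero_grad()
    nll.mean().backward()
    opt.step()

# The trained emulator now provides a cheap synthetic log-likelihood for inference.
x_obs = torch.tensor([[0.3]])
theta_grid = torch.linspace(-2, 2, 5).unsqueeze(-1)
mean, log_std = emulator(theta_grid).chunk(2, dim=-1)
log_lik = -0.5 * ((x_obs - mean) / log_std.exp()) ** 2 - log_std
print(log_lik.squeeze())
```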

Conditional Similarity Networks

A. Veit, S. Belongie, T. Karaletsos
What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space, in which their distances preserve the relative dissimilarity. However, when learning such similarity embeddings, the simplifying assumption is commonly made that images are only compared to one unique measure of similarity. [...] [PDF at arXiv]
Conference on Computer Vision and Pattern Recognition (CVPR), 2017
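A rough sketch of the masking idea (illustrative assumptions only, not the authors' code): a shared embedding with one learned non-negative mask per similarity condition, trained with a triplet loss computed over the masked dimensions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_conditions, embed_dim = 4, 64
# one learned mask per notion of similarity (e.g., color, category, style)
masks = nn.Parameter(torch.rand(n_conditions, embed_dim))

def masked_triplet_loss(anchor, positive, negative, condition, margin=0.2):
    m = torch.relu(masks[condition])                  # keep masks non-negative
    d_pos = ((anchor - positive) * m).pow(2).sum(-1)  # distance in the masked subspace
    d_neg = ((anchor - negative) * m).pow(2).sum(-1)
    return F.relu(d_pos - d_neg + margin).mean()

# usage with dummy embeddings standing in for any image encoder's output
a, p, n = (torch.randn(8, embed_dim) for _ in range(3))
cond = torch.randint(0, n_conditions, (8,))
print(masked_triplet_loss(a, p, n, cond))
```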
