
Jason Yosinski

Jason Yosinski is a machine learning researcher and founding member of Uber AI Labs, where he uses neural networks to build more capable and more understandable AI.

Engineering Blog Articles

Faster Neural Networks Straight from JPEG

Uber AI Labs introduces a method for making neural networks that process images faster and more accurately by leveraging JPEG representations.

How to Get a Better GAN (Almost) for Free: Introducing the Metropolis-Hastings GAN

Metropolis-Hastings generative adversarial networks (MH-GANs) leverage the discriminator to pick better samples from the generator after model training is done.

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

As powerful and widespread as convolutional neural networks are in deep learning, AI Labs’ latest research reveals both an underappreciated failing and a simple fix.

Measuring the Intrinsic Dimension of Objective Landscapes

Curious about what it is like to traverse the high-dimensional loss landscapes of modern neural networks? Check out Uber AI Labs’ latest research on measuring intrinsic dimension to find out.

Introducing the Uber AI Residency

Interested in accelerating your career by tackling some of Uber’s most challenging AI problems? Apply for the Uber AI Residency, a research fellowship dedicated to fostering the next generation of AI talent.

Research Papers

Metropolis-Hastings Generative Adversarial Networks

R. Turner, J. Hung, Y. Saatci, J. Yosinski
We introduce the Metropolis-Hastings generative adversarial network (MH-GAN), which combines aspects of Markov chain Monte Carlo and GANs. The MH-GAN draws samples from the distribution implicitly defined by a GAN's discriminator-generator pair, as opposed to standard GAN sampling, which draws from the distribution defined by the generator alone. [...] [PDF at arXiv]
arXiv, 2018
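
At its core, the MH-GAN runs an independence Metropolis-Hastings sampler whose proposals come from the generator and whose acceptance ratio comes from the discriminator's density-ratio estimate D(x)/(1-D(x)). Below is a minimal Python sketch of one chain, assuming a hypothetical sample_g() that draws from the generator and a calibrated d(x) that returns the discriminator's probability that x is real:

```python
import random

def mh_gan_sample(sample_g, d, n_steps=640):
    """Independence Metropolis-Hastings over generator samples (sketch).

    sample_g() -> one sample from the generator (hypothetical helper)
    d(x)       -> calibrated discriminator probability that x is real
    """
    x = sample_g()                    # initialize the chain from the generator
    for _ in range(n_steps):
        x_prime = sample_g()          # independent proposal
        # With density-ratio estimate p_data/p_g ~ D/(1-D), the MH acceptance
        # probability reduces to min(1, (1/D(x) - 1) / (1/D(x') - 1)).
        alpha = min(1.0, (1.0 / d(x) - 1.0) / (1.0 / d(x_prime) - 1.0))
        if random.random() < alpha:
            x = x_prime               # accept; otherwise keep the current sample
    return x
```

The chain only ever outputs generator samples; the discriminator simply re-weights which of them survive, which is why the method improves a trained GAN almost for free.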

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

R. Liu, J. Lehman, P. Molino, F. Petroski Such, E. Frank, A. Sergeev, J. Yosinski
Few ideas have enjoyed as large an impact on deep learning as convolution. For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper we show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and one-hot pixel space. [...] [PDF at arXiv]
Advances in Neural Information Processing Systems (NIPS), 2018
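
The fix itself is small: let the convolution see where it is. CoordConv concatenates two extra input channels holding each pixel's normalized row and column coordinates before applying an otherwise ordinary convolution. A minimal PyTorch sketch of such a layer (naming and normalization here are illustrative, not the paper's reference code):

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d that first appends normalized (row, col) coordinate channels."""

    def __init__(self, in_channels, out_channels, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # Coordinate channels in [-1, 1], broadcast over the batch.
        rows = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        cols = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, rows, cols], dim=1))
```

With the coordinate channels present, the coordinate transform problem becomes trivially learnable, while ordinary translation-invariant behavior is preserved wherever the layer learns to ignore the extra channels.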

Measuring the Intrinsic Dimension of Objective Landscapes

C. Li, H. Farkhoor, R. Liu, J. Yosinski
Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. [...] [PDF at arXiv]
International Conference on Learning Representations (ICLR), 2018
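
The subspace training procedure is simple to state: freeze a random initialization theta0, fix a random projection P from a small d-dimensional space into the full D-dimensional parameter space, and optimize only the d subspace coordinates. A minimal PyTorch sketch of the parameterization (the dimensions and the plain Gaussian projection are illustrative):

```python
import torch

D, d = 100_000, 500                     # native and subspace dimensionality (illustrative)
theta0 = torch.randn(D)                 # frozen random initialization in the native space
P = torch.randn(D, d) / d ** 0.5        # fixed random projection, never trained
z = torch.zeros(d, requires_grad=True)  # the only trainable parameters

def full_params():
    # The network is always evaluated at theta0 + P @ z, so training is
    # confined to a randomly oriented d-dimensional affine subspace.
    return theta0 + P @ z

optimizer = torch.optim.SGD([z], lr=0.1)
# In the training loop, rebuild the model's weights from full_params(),
# compute the loss, and step the optimizer on z alone.
```

Sweeping d from small to large and recording where performance first reaches a fixed fraction of the full-parameter baseline yields the intrinsic dimension measurement the paper reports.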

The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities

J. Lehman, J. Clune, D. Misevic, C. Adami, L. Altenberg, J. Beaulieu, P. Bentley, S. Bernard, G. Beslon, D. Bryson, P. Chrabaszcz, N. Cheney, A. Cully, S. Doncieux, F. Dyer, K. Ellefsen, R. Feldt, S. Fischer, S. Forrest, A. Frénoy, C. Gagné, L. Goff, L. Grabowski, B. Hodjat, F. Hutter, L. Keller, C. Knibbe, P. Krcah, R. Lenski, H. Lipson, R. MacCurdy, C. Maestre, R. Miikkulainen, S. Mitri, D. Moriarty, J. Mouret, A. Nguyen, C. Ofria, M. Parizeau, D. Parsons, R. Pennock, W. Punch, T. Ray, M. Schoenauer, E. Schulte, K. Sims, K. Stanley, F. Taddei, D. Tarapore, S. Thibault, W. Weimer, R. Watson, J. Yosinski
Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. [...] [PDF on arXiv]
arXiv, 2018

Faster Neural Networks Straight from JPEG

L. Gueguen, A. Sergeev, B. Kadlec, R. Liu, J. Yosinski
The simple, elegant approach of training convolutional neural networks (CNNs) directly from RGB pixels has enjoyed overwhelming empirical success. But can more performance be squeezed out of networks by using different input representations? In this paper we propose and explore a simple idea: train CNNs directly on the blockwise discrete cosine transform (DCT) coefficients computed and available in the middle of the JPEG codec. [...] [PDF at NIPS Proceedings]
Advances in Neural Information Processing Systems (NIPS), 2018
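
Most of the work in this representation is already done by the codec: a JPEG decoder passes through exactly these blockwise DCT coefficients before inverting them back to pixels. A minimal numpy/scipy sketch of computing the same representation for one image channel (the function name and channel layout are illustrative; the paper reads the coefficients out of the codec rather than recomputing them):

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(channel, block=8):
    """8x8 blockwise type-II DCT of one image channel, as in the JPEG codec."""
    h, w = channel.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    blocks = channel[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3)        # -> (h/8, w/8, 8, 8)
    coeffs = dctn(blocks, axes=(-2, -1), norm='ortho')
    # Flatten the 64 coefficients per block into channels: (h/8, w/8, 64).
    return coeffs.reshape(h // block, w // block, block * block)
```

Because each 8x8 block collapses to a single spatial location with 64 channels, the CNN starts from an input that is 8x smaller in each spatial dimension, which is one source of the reported speedup.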

Automated Identification of Northern Leaf Blight-Infected Maize Plants from Field Imagery Using Deep Learning

C. DeChant, T. Wiesner-Hanks, S. Chen, E. Stewart, J. Yosinski, M. Gore, R. Nelson, H. Lipson
Northern leaf blight (NLB) can cause severe yield loss in maize; however, scouting large areas to accurately diagnose the disease is time consuming and difficult. We demonstrate a system capable of automatically identifying NLB lesions in field-acquired images of maize plants with high reliability. [...] [PDF at Phytopathology]
Phytopathology, 2017

Time-series extreme event forecasting with neural networks at Uber

N. Laptev, J. Yosinski, L. Li, S. Smyl
Accurate time-series forecasting during high-variance segments (e.g., holidays) is critical for anomaly detection, optimal resource allocation, budget planning, and other related tasks. At Uber, accurate prediction of completed trips during special events can lead to more efficient driver allocation, resulting in decreased wait times for riders. [PDF on roseyu.com]
International Conference on Machine Learning (ICML), 2017

SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability

M. Raghu, J. Gilmer, J. Yosinski, J. Sohl-Dickstein
We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods). [...] [PDF at arXiv]
Advances in Neural Information Processing Systems (NIPS), 2017
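
Operationally, SVCCA composes two standard steps: an SVD of each representation (a neurons x datapoints matrix) keeps the directions explaining most of the variance, then CCA finds the best-aligned directions between the two reduced subspaces. A minimal numpy sketch under those definitions (the 0.99 variance threshold follows the paper; the QR-based CCA and the names are illustrative):

```python
import numpy as np

def svcca(A, B, keep=0.99):
    """Mean SVCCA correlation between representations A, B (neurons x datapoints)."""

    def top_subspace(X):
        X = X - X.mean(axis=1, keepdims=True)          # center each neuron
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep)) + 1
        return s[:k, None] * Vt[:k]                    # top-k directions over datapoints

    A1, B1 = top_subspace(A), top_subspace(B)
    # CCA: canonical correlations are the singular values of Qa^T Qb,
    # where Qa, Qb are orthonormal bases of the two (datapoints x k) spaces.
    Qa, _ = np.linalg.qr(A1.T)
    Qb, _ = np.linalg.qr(B1.T)
    rho = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return rho.mean()
```

The affine invariance comes from the CCA step, while the SVD step discards low-variance noise directions that would otherwise distort the comparison.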

Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space

A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, J. Clune
Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. (2016) showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227x227) than previous generative models, and does so for all 1000 ImageNet categories. [...] [PDF at arXiv]
Computer Vision and Pattern Recognition (CVPR), 2017
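
Reduced to its simplest form, the sampling procedure is gradient ascent in the generator's latent space: nudge the latent code to increase the classifier's activation for a chosen class while a prior keeps the code plausible. A much-simplified PyTorch sketch, assuming hypothetical pretrained networks G (latent code to image) and C (image to class logits), and substituting a plain Gaussian prior where the paper uses a learned denoising-autoencoder prior:

```python
import torch

def latent_ascent(G, C, target_class, latent_dim=4096,
                  steps=200, lr=0.1, lam=1e-3):
    """Iteratively refine a latent code toward images of target_class (sketch)."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    for _ in range(steps):
        img = G(z)
        # Maximize the target logit; the L2 term is a Gaussian prior on z,
        # standing in for the paper's learned prior.
        loss = -C(img)[0, target_class] + lam * z.pow(2).sum()
        loss.backward()
        with torch.no_grad():
            z -= lr * z.grad
            z.grad.zero_()
    return G(z).detach()
```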
