
Alex Sergeev

3 BLOG ARTICLES · 2 RESEARCH PAPERS
Alex Sergeev is a deep learning engineer on the Machine Learning Platform team.

Engineering Blog Articles

Faster Neural Networks Straight from JPEG


Neural networks, an important tool for processing data across a variety of industries, have grown from an academic research area into a cornerstone of industry over the last few years. Convolutional Neural Networks (CNNs) have been particularly useful for extracting information [...]

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution


Uber uses convolutional neural networks in many domains that can involve coordinate transforms, from designing self-driving vehicles to automating street sign detection for map building to maximizing the efficiency of spatial movements in the Uber Marketplace.

In deep learning, [...]

Meet Horovod: Uber’s Open Source Distributed Deep Learning Framework for TensorFlow

Over the past few years, advances in deep learning have driven tremendous progress in image processing, speech recognition, and forecasting. At Uber, we apply deep learning across our business: from self-driving research to trip forecasting and fraud prevention, deep learning [...]
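Horovod's central operation is an allreduce that averages gradients across workers, so every replica applies the same update step. As a rough illustration only (a toy single-process sketch in plain Python, not Horovod's actual MPI/NCCL ring-allreduce, and the function name is mine):

```python
def allreduce_average(worker_grads):
    """Average per-worker gradient vectors elementwise, simulating in one
    process what Horovod's allreduce does across GPUs and hosts."""
    n = len(worker_grads)
    return [sum(g[i] for g in worker_grads) / n
            for i in range(len(worker_grads[0]))]

# Two workers computed gradients on different data shards; after the
# allreduce, each worker applies the same averaged gradient.
grads = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]
print(allreduce_average(grads))  # → [2.0, 3.0, 4.0]
```

In real Horovod usage the averaging is wrapped around an existing optimizer, so scaling from one GPU to many is mostly a few lines of setup rather than a rewrite of the training loop.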

Research Papers

Faster Neural Networks Straight from JPEG

L. Gueguen, A. Sergeev, B. Kadlec, R. Liu, J. Yosinski
The simple, elegant approach of training convolutional neural networks (CNNs) directly from RGB pixels has enjoyed overwhelming empirical success. But can more performance be squeezed out of networks by using different input representations? In this paper we propose and explore a simple idea: train CNNs directly on the blockwise discrete cosine transform (DCT) coefficients computed and available in the middle of the JPEG codec. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
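The blockwise coefficients the paper trains on come from the standard DCT-II applied inside the JPEG codec. As a minimal pure-Python sketch of that transform (the orthonormal 1-D case, applied along rows and columns of each 8×8 block in JPEG; not the paper's pipeline):

```python
import math

def dct_ii(block):
    """Orthonormal 1-D DCT-II of a list of samples, as used (along each
    axis of an 8x8 block) in the JPEG codec."""
    n = len(block)
    coeffs = []
    for k in range(n):
        # Scale factor that makes the transform orthonormal.
        s = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        total = sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                    for i, x in enumerate(block))
        coeffs.append(s * total)
    return coeffs

# A flat block concentrates all energy in the DC coefficient; natural
# image blocks likewise concentrate energy in a few low frequencies,
# which is what makes DCT coefficients a compact CNN input.
print(dct_ii([1.0] * 8))
```

Reading these coefficients mid-codec skips the decode back to RGB pixels, which is one source of the speedups the paper reports.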

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

R. Liu, J. Lehman, P. Molino, F. Such, E. Frank, A. Sergeev, J. Yosinski
Few ideas have enjoyed as large an impact on deep learning as convolution. For any problem involving pixels or spatial representations, common intuition holds that convolutional neural networks may be appropriate. In this paper we show a striking counterexample to this intuition via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x,y) Cartesian space and one-hot pixel space. [...] [PDF]
Advances in Neural Information Processing Systems (NeurIPS), 2018
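The CoordConv fix proposed in the paper concatenates coordinate channels onto the convolution's input so the network can condition on position. A minimal sketch of the coordinate-channel construction (pure Python on nested lists, with indices normalized to [-1, 1]; the helper name is mine):

```python
def add_coord_channels(image):
    """Append normalized x- and y-coordinate channels to an image given
    as nested lists: image[row][col] = [c0, c1, ...]."""
    h, w = len(image), len(image[0])
    out = []
    for i, row in enumerate(image):
        # Map the row index to [-1, 1] (0.0 for a degenerate axis).
        y = 2.0 * i / (h - 1) - 1.0 if h > 1 else 0.0
        out_row = []
        for j, pixel in enumerate(row):
            x = 2.0 * j / (w - 1) - 1.0 if w > 1 else 0.0
            out_row.append(list(pixel) + [x, y])
        out.append(out_row)
    return out

img = [[[0.5] for _ in range(3)] for _ in range(3)]  # 3x3, one channel
aug = add_coord_channels(img)
print(aug[0][0])  # → [0.5, -1.0, -1.0]  (top-left corner)
print(aug[2][2])  # → [0.5, 1.0, 1.0]   (bottom-right corner)
```

An ordinary convolution over the augmented input can then read position directly from the extra channels, which is what lets CoordConv solve the coordinate transform tasks that plain convolution fails on.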
