
Optimization of Swift Protocols

R. Barik, M. Sridharan, M. K. Ramanathan, M. Chabbi
Swift, an increasingly popular programming language, advocates the use of protocols, which define a set of required methods and properties for conforming types. Protocols are commonly used in Swift programs for abstracting away implementation details; e.g., in a large industrial app from Uber, they are heavily used to enable mock objects for unit testing. Unfortunately, heavy use of protocols can result in significant performance overhead. […] [PDF]
Object-Oriented Programming, Systems, Languages & Applications (OOPSLA), 2019

Hamiltonian Neural Networks

S. Greydanus, M. Dzamba, J. Yosinski
Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics. How might we endow them with better inductive biases? In this paper, we draw inspiration from Hamiltonian mechanics to train models that learn and respect exact conservation laws in an unsupervised manner. […] [PDF]
Conference on Neural Information Processing Systems (NeurIPS), 2019
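
The core mechanism admits a compact sketch: a network outputs a scalar Hamiltonian H(q, p), and Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q convert its gradients into predicted time derivatives that are matched against observed ones. The PyTorch snippet below is a minimal illustration under those assumptions; the network size, the toy spring data, and all variable names are placeholders, not the authors' released code.

import torch
import torch.nn as nn

# Minimal Hamiltonian Neural Network sketch (illustrative only).
# The network maps a state (q, p) to a scalar H; Hamilton's equations
#   dq/dt = dH/dp,  dp/dt = -dH/dq
# turn gradients of H into predicted time derivatives.
hnet = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(hnet.parameters(), lr=1e-3)

def predicted_derivatives(state):
    state = state.clone().requires_grad_(True)
    H = hnet(state).sum()
    dH = torch.autograd.grad(H, state, create_graph=True)[0]
    return dH[:, 1:2], -dH[:, 0:1]          # dq/dt = dH/dp, dp/dt = -dH/dq

# Toy data from an ideal spring (H = (q^2 + p^2) / 2): dq/dt = p, dp/dt = -q.
state = torch.randn(128, 2)                  # columns: q, p
true_dq, true_dp = state[:, 1:2], -state[:, 0:1]
for _ in range(200):
    pred_dq, pred_dp = predicted_derivatives(state)
    loss = ((pred_dq - true_dq) ** 2 + (pred_dp - true_dp) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Because the model learns H itself rather than the derivatives directly, the learned dynamics conserve the predicted Hamiltonian by construction when integrated.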

LCA: Loss Change Allocation for Neural Network Training

J. Lan, R. Liu, H. Zhou, J. Yosinski
Neural networks enjoy widespread use, but many aspects of their training, representation, and operation are poorly understood. In particular, our view into the training process is limited, with a single scalar loss being the most common viewport into this high-dimensional, dynamic process. We propose a new window into training called Loss Change Allocation (LCA), in which credit for changes to the network loss is conservatively partitioned to the parameters. […] [PDF]
Conference on Neural Information Processing Systems (NeurIPS), 2019
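
The allocation itself can be written in a few lines: over one optimization step, the credit assigned to a parameter is (to first order) the gradient of the loss at the old parameters times that parameter's change, so the per-parameter credits sum to an approximation of the step's total loss change. The sketch below is that first-order illustration in PyTorch; the paper's actual measurement uses more careful numerical integration along the training path, and `loss_fn` is a placeholder for whatever training loss is being tracked.

import torch

def lca_step(loss_fn, params_before, params_after):
    """First-order Loss Change Allocation for one step (illustrative sketch).

    params_before / params_after: lists of tensors of matching shapes taken
    before and after an optimizer step; loss_fn evaluates the training loss
    at a given list of parameters. Negative entries mark parameters whose
    movement helped reduce the loss on this step.
    """
    params = [p.detach().clone().requires_grad_(True) for p in params_before]
    loss = loss_fn(params)
    grads = torch.autograd.grad(loss, params)
    return [g * (after - before)
            for g, before, after in zip(grads, params_before, params_after)]

Summing these per-parameter credits over all steps, and aggregating per layer, gives the kind of fine-grained training viewport the paper advocates.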

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

H. Zhou, J. Lan, R. Liu, J. Yosinski
The recent “lottery ticket hypothesis” work of Frankle & Carbin showed that pruning a trained network and rewinding the surviving weights to their original initialization yields sparse subnetworks that can be trained from scratch to comparable accuracy. In this paper, we study why this procedure works by varying its components (the mask criterion and the treatment of kept and pruned weights), and we show that the masks alone (“Supermasks”) can produce far-better-than-chance accuracy even on untrained networks. […] [PDF]
Conference on Neural Information Processing Systems (NeurIPS), 2019

Maximum Relevance and Minimum Redundancy Feature Selection Methods for a Marketing Machine Learning Platform

Z. Zhao, R. Anand, M. Wang
In machine learning applications for online product offerings and marketing strategies, there are often hundreds or thousands of features available to build such models. Feature selection is one essential method in such applications for multiple objectives: improving the prediction accuracy by eliminating irrelevant features, accelerating the model training and prediction speed, reducing the monitoring and maintenance workload for the feature data pipeline, and providing better model interpretation and diagnosis capability. […] [PDF]
IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2019
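
The mRMR family the paper builds on has a simple greedy core: repeatedly add the candidate feature with the highest relevance to the target, penalized by its average redundancy with the features already selected. The sketch below uses absolute Pearson correlation for both terms purely for illustration; the paper evaluates several relevance and redundancy estimators and both difference- and quotient-style scores, so treat the scoring choices here as assumptions.

import numpy as np

def mrmr_select(X, y, k):
    """Greedy minimum-redundancy-maximum-relevance selection (generic sketch).

    X: (n_samples, n_features) array, y: (n_samples,) target, k: number of
    features to keep. Relevance and redundancy are measured with absolute
    Pearson correlation here; swap in mutual information or F-statistics
    for other variants.
    """
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy  # a quotient variant would divide instead
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected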

Uplift Modeling for Multiple Treatments with Cost Optimization

Z. Zhao, T. Harinen
Uplift modeling is an emerging machine learning approach for estimating the treatment effect at an individual or subgroup level. It can be used for optimizing the performance of interventions such as marketing campaigns and product designs. […] [PDF]
IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2019
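
A minimal way to see the multi-treatment setting is a per-arm outcome model (a "T-learner") whose predictions are differenced against control and then netted against each treatment's cost. The sketch below is that generic baseline, not necessarily the estimators the paper evaluates; the model class, the value and cost inputs, and the "control" label are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_t_learner(X, y, treatment):
    """Fit one outcome model per arm; `treatment` holds arm labels, with
    'control' marking the control group."""
    return {arm: LogisticRegression(max_iter=1000).fit(X[treatment == arm], y[treatment == arm])
            for arm in np.unique(treatment)}

def best_treatment(models, X_new, value_per_conversion, cost_per_arm):
    """Pick, per user, the arm with the highest estimated uplift value net of
    cost, defaulting to control (whose net value is zero)."""
    p_control = models["control"].predict_proba(X_new)[:, 1]
    best_arm = np.full(len(X_new), "control", dtype=object)
    best_net = np.zeros(len(X_new))
    for arm, model in models.items():
        if arm == "control":
            continue
        uplift = model.predict_proba(X_new)[:, 1] - p_control
        net = uplift * value_per_conversion - cost_per_arm[arm]
        better = net > best_net
        best_arm[better], best_net[better] = arm, net[better]
    return best_arm, best_net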

Flexibly-Structured Model for Task-Oriented Dialogues

L. Shu, P. Molino, M. Namazifar, H. Xu, B. Liu, H. Zheng, G. Tur
This paper proposes a novel end-to-end architecture for task-oriented dialogue systems. It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot. On top of these, the policy engine and language generation tasks are modeled jointly. […] [PDF]
2019

Improve User Retention with Causal Learning

S. Du, J. Lee, F. Ghaffarizadeh
User retention is a key focus for consumer-based internet companies, and promotions are an effective lever to improve retention. However, companies rely either on non-causal churn prediction to capture heterogeneity or on regular A/B testing to capture average treatment effect. In this paper, we propose a heterogeneous treatment effect optimization framework to capture both heterogeneity and causal effect. […] [PDF]
SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2019

NullAway: Practical Type-Based Null Safety for Java

S. Banerjee, L. Clapp, M. Sridharan
NullPointerExceptions (NPEs) are a key source of crashes in modern Java programs. Previous work has shown how such errors can be prevented at compile time via code annotations and pluggable type checking. However, such systems have been difficult to deploy on large-scale software projects, due to significant build-time overhead and / or a high annotation burden. This paper presents NullAway, a new type-based null safety checker for Java that overcomes these issues. […] [PDF]
The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (FSE), 2019

Evolvability ES: Scalable and Direct Optimization of Evolvability

A. Gajewski, J. Clune, K. O. Stanley, J. Lehman
Designing evolutionary algorithms capable of uncovering highly evolvable representations is an open challenge; such evolvability is important because it accelerates evolution and enables fast adaptation to changing circumstances. This paper introduces evolvability ES, an evolutionary algorithm designed to explicitly and efficiently optimize for evolvability, i.e. the ability to further adapt. […] [PDF]
The Genetic and Evolutionary Computation Conference (GECCO), 2019

Probabilistic Programming for Birth-Death Models of Evolution Using an Alive Particle Filter with Delayed Sampling

J. Kudlicka, L. M. Murray, F. Ronquist, T. B. Schön
We consider probabilistic programming for birth-death models of evolution and introduce a new widely applicable inference method that combines an extension of the alive particle filter (APF) with automatic Rao-Blackwellization via delayed sampling. […] [PDF]
Conference on Uncertainty in Artificial Intelligence (UAI), 2019

Collaborative Multi-Agent Dialogue Model Training Via Reinforcement Learning

A. Papangelis, Y.-C. Wang, P. Molino, G. Tur
We present the first complete attempt at concurrently training conversational agents that communicate only via self-generated language. Using DSTC2 as seed data, we trained natural language understanding (NLU) and generation (NLG) networks for each agent and let the agents interact online. […] [PDF]
Special Interest Group on Discourse and Dialogue (SIGDIAL), 2019

Stakeholders as Researchers: Empowering non-researchers to interact directly with consumers

M. Ponte Fissgus
An investigation into the trends of user experience research revealed that businesses and stakeholders will increasingly value human insights, and hence, as research becomes more mainstream, “organizations will continue to develop new tools to democratize those practices and adapt to company needs (dscout, 2018).” […] [PDF]
Delft University of Technology (TU Delft), 2019

LaserNet: An Efficient Probabilistic 3D Object Detector for Autonomous Driving

G. P. Meyer, A. Laddha, E. Kee, C. Vallespi-Gonzalez, C. Wellington
In this paper, we present LaserNet, a computationally efficient method for 3D object detection from LiDAR data for autonomous driving. The efficiency results from processing LiDAR data in the native range view of the sensor, where the input data is naturally compact. […] [PDF]
Computer Vision and Pattern Recognition (CVPR), 2019
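
The "native range view" the abstract refers to is the sensor's own spherical image: each LiDAR return is placed in a row given by its elevation angle and a column given by its azimuth, with range (and other channels) as pixel values, so a standard 2D CNN can consume it. The projection below is a generic sketch of that representation; the image size, vertical field of view, and the overwrite handling for collisions are assumptions, not LaserNet's exact configuration.

import numpy as np

def to_range_view(points, n_rows=64, n_cols=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project LiDAR points (N, 3) into a (n_rows, n_cols) range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                               # in [-pi, pi)
    elevation = np.arcsin(z / np.maximum(r, 1e-9))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    col = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    row = np.clip(((fov_up - elevation) / (fov_up - fov_down) * n_rows).astype(int), 0, n_rows - 1)
    image = np.zeros((n_rows, n_cols))
    image[row, col] = r                                      # later points overwrite earlier ones
    return image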

Understanding and Designing for Deaf or Hard of Hearing Drivers on Uber

S. Lee, B. Hubert-Wallander, M. Stevens, J. M. Carroll
We used content analysis of in-app driver survey responses, customer support tickets, and tweets, and face-to-face interviews of DHH Uber drivers to better understand the DHH driver experience. Here we describe challenges DHH drivers experience and how they address those difficulties via Uber’s accessibility features and their own workarounds. […] [PDF]
Conference on Human Factors in Computing Systems (CHI), 2019

End-to-end Interpretable Neural Motion Planner

W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, R. Urtasun
In this paper, we propose a neural motion planner for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road-users. Towards this goal, we design a holistic model that takes as input raw LIDAR data and an HD map and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost volume defining the goodness of each position that the self-driving car can take within the planning horizon. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019
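
The planning step implied by the abstract reduces to a simple operation at inference time: given the learned cost volume over space and time, score a set of candidate trajectories by summing the cost of the cells they occupy and execute the cheapest one. The sketch below shows only that selection step; the shapes, the grid parameterization, and how candidates are sampled are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def pick_trajectory(cost_volume, trajectories):
    """Select the lowest-cost candidate from a learned cost volume.

    cost_volume:  (T, H, W) array of per-timestep occupancy costs.
    trajectories: (N, T, 2) integer array of candidate (row, col) positions.
    """
    t_idx = np.arange(cost_volume.shape[0])
    costs = cost_volume[t_idx, trajectories[:, :, 0], trajectories[:, :, 1]].sum(axis=1)
    best = int(np.argmin(costs))
    return trajectories[best], float(costs[best])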

Learning to Localize through Compressed Binary Maps

X. Wei, I. A. Bârsan, S. Wang, J. Martinez, R. Urtasun
One of the main difficulties of scaling current localization systems to large environments is the on-board storage required for the maps. In this paper we propose to learn to compress the map representation such that it is optimal for the localization task. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Convolutional Recurrent Network for Road Boundary Extraction

J. Liang, N. Homayounfar, S. Wang, W.-C. Ma, R. Urtasun
Creating high definition maps that contain precise information of static elements of the scene is of utmost importance for enabling self-driving cars to drive safely. In this paper, we tackle the problem of drivable road boundary extraction from LiDAR and camera imagery. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Multi-Task Multi-Sensor Fusion for 3D Object Detection

M. Liang, B. Yang, Y. Chen, R. Hu, R. Urtasun
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Deep Rigid Instance Scene Flow

W.-C. Ma, S. Wang, R. Hu, Y. Xiong, R. Urtasun
In this paper we tackle the problem of scene flow estimation in the context of self-driving. We leverage deep learning techniques as well as strong priors, since in our application domain the motion of the scene can be decomposed into the motion of the robot and the 3D motion of the actors in the scene. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Dimensionality Reduction for Representing the Knowledge of Probabilistic Models

M. T. Law, J. Snell, A.-M. Farahmand, R. Urtasun, R. S. Zemel
Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification. However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting. We propose a simple, intuitive and scalable dimension reduction framework that takes into account the soft probabilistic interpretation of standard deep models for classification. […] [PDF]
International Conference on Learning Representations (ICLR), 2019

DARNet: Deep Active Ray Network for Building Segmentation

D. Cheng, R. Liao, S. Fidler, R. Urtasun
In this paper, we propose a Deep Active Ray Network (DARNet) for automatic building segmentation. Taking an image as input, it first exploits a deep convolutional neural network (CNN) as the backbone to predict energy maps, which are further utilized to construct an energy function. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Building Together: When Research Went Viral at Uber

B. Hubert-Wallander, E. G. Ruiz, M. Jain, L. G. Barrio, S. S. Mitra, M. Stevens
In late 2017, Uber was nearly a year into a complete redesign of its driver-facing mobile app. This case study describes the research program we executed to support the app’s global beta launch, which aimed to “Build Together” with drivers across different geographies. […] [PDF] [VIDEO]
Conference on Human Factors in Computing Systems (CHI), 2019

UPSNet: A Unified Panoptic Segmentation Network

Y. Xiong, R. Liao, H. Zhao, R. Hu, M. Bai, E. Yumer, R. Urtasun
In this paper, we propose a unified panoptic segmentation network (UPSNet) for tackling the newly proposed panoptic segmentation task. On top of a single backbone residual network, we first design a deformable convolution based semantic segmentation head and a Mask R-CNN style instance segmentation head which solve these two subtasks simultaneously. […] [PDF]
Conference on Computer Vision and Pattern Recognition (CVPR), 2019

Learning a Generative Model for Multi-Step Human-Object Interactions from Videos

H. Wang, S. Pirk, V. Kim, E. Yumer, L. Guibas
Creating dynamic virtual environments consisting of humans interacting with objects is a fundamental problem in computer graphics. While it is well-accepted that agent interactions play an essential role in synthesizing such scenes, most extant techniques exclusively focus on static scenes, leaving the dynamic component out. In this paper, we present a generative model to synthesize plausible multi-step dynamic human–object interactions. […] [PDF]
European Association for Computer Graphics (Eurographics), 2019

DeepSignals: Predicting Intent of Drivers Through Visual Attributes

D. Frossard, E. Kee, R. Urtasun
Detecting the intention of drivers is an essential task in self-driving, necessary to anticipate sudden events like lane changes and stops. Turn signals and emergency flashers communicate such intentions, providing seconds of potentially critical reaction time. In this paper, we propose to detect these signals in video sequences by using a deep neural network that reasons about both spatial and temporal information. […] [PDF]
International Conference on Robotics and Automation (ICRA), 2019

Exploratory Stage Lighting Design using Visual Objectives

E. Shimizu, S. Paris, M. Fisher, E. Yumer, K. Fatahalian
Lighting is a critical element of theater. A lighting designer is responsible for drawing the audience’s attention to a specific part of the stage, setting time of day, creating a mood, and conveying emotions. Designers often begin the lighting design process by collecting reference visual imagery that captures different aspects of their artistic intent. Then, they experiment with various lighting options to determine which ideas work best on stage. However, modern stages contain tens to hundreds of lights, and setting each light source’s parameters individually to realize an idea is both tedious and requires expert skill. In this paper, we describe an exploratory lighting design tool based on feedback from professional designers. […] [PDF]
European Association for Computer Graphics (Eurographics), 2019

Metropolis-Hastings Generative Adversarial Networks

R. Turner, J. Hung, Y. Saatci, J. Yosinski
We introduce the Metropolis-Hastings generative adversarial network (MH-GAN), which combines aspects of Markov chain Monte Carlo and GANs. The MH-GAN draws samples from the distribution implicitly defined by a GAN’s discriminator-generator pair, as opposed to sampling in a standard GAN which draws samples from the distribution defined by the generator. […] [PDF]
International Conference on Machine Learning (ICML), 2019
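
The sampler is easy to state concretely: the generator serves as the proposal of an independence Metropolis-Hastings chain, and a calibrated discriminator supplies the density-ratio estimate D/(1-D) that enters the acceptance probability min(1, (1/D(x) - 1)/(1/D(x') - 1)). The sketch below assumes `generator()` draws one sample and `discriminator(x)` returns a calibrated probability that x is real; both are placeholders, not a specific library API.

import numpy as np

def mh_gan_sample(generator, discriminator, n_proposals, rng=None):
    """Draw one sample by running an independence MH chain over generator proposals."""
    rng = np.random.default_rng() if rng is None else rng
    x = generator()
    d_x = discriminator(x)                 # calibrated P(real | x)
    for _ in range(n_proposals):
        x_new = generator()
        d_new = discriminator(x_new)
        alpha = min(1.0, (1.0 / d_x - 1.0) / (1.0 / d_new - 1.0))
        if rng.uniform() < alpha:
            x, d_x = x_new, d_new
    return x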

Understanding Neural Networks via Feature Visualization: A survey

A. Nguyen, J. Yosinski, J. Clune
A neuroscience approach to understanding the brain is to find and study the preferred stimuli that highly activate an individual cell or groups of cells. Recent advances in machine learning enable a family of methods to synthesize preferred stimuli that cause a neuron in an artificial or biological brain to fire strongly. […] [PDF]
Interpretable AI: Interpreting, Explaining and Visualizing Deep Learning, 2019
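
The simplest member of this family of methods is activation maximization by gradient ascent: start from a blank or noisy input and follow the gradient of a chosen unit's activation with respect to the pixels. The PyTorch sketch below illustrates that baseline; the choice of VGG-16, the unit index, the step count, and the weak L2 prior are arbitrary, and practical visualizations add stronger regularizers or learned priors to keep images natural, as the survey discusses.

import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
unit = 130                                    # arbitrary output unit to visualize

image = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)
for _ in range(200):
    activation = model(image)[0, unit]
    loss = -activation + 1e-4 * image.norm()  # maximize activation, weak L2 prior
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# `image` now roughly depicts the unit's preferred stimulus.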

Exact Gaussian Processes on a Million Data Points

K. A. Wang, G. Pleiss, J. R. Gardner, S. Tyree, K. Q. Weinberger, A. G. Wilson
Gaussian processes (GPs) are flexible models with state-of-the-art performance on many impactful applications. However, computational constraints with standard inference procedures have limited exact GPs to problems with fewer than about ten thousand training points, necessitating approximations for larger datasets. In this paper, we develop a scalable approach for exact GPs that leverages multi-GPU parallelization and methods like linear conjugate gradients, accessing the kernel matrix only through matrix multiplication. […] [PDF]
arXiv, 2019
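
The computational pattern behind this is worth making concrete: the GP training solve (K + σ²I)α = y is carried out with linear conjugate gradients, which needs only matrix-vector products with the kernel matrix, so those products can be computed in tiles (and, in the paper, partitioned across GPUs via GPyTorch) without ever materializing K. The NumPy sketch below shows that pattern for an RBF kernel; the kernel choice, chunking, and hyperparameters are illustrative assumptions, not the paper's implementation.

import numpy as np

def kernel_matvec(X, v, lengthscale=1.0, noise=1e-2, chunk=1024):
    """Compute (K + noise*I) @ v for an RBF kernel, one row-chunk at a time."""
    out = np.empty_like(v)
    for start in range(0, len(X), chunk):
        sq = ((X[start:start + chunk, None, :] - X[None, :, :]) ** 2).sum(-1)
        out[start:start + chunk] = np.exp(-0.5 * sq / lengthscale ** 2) @ v
    return out + noise * v

def conjugate_gradients(matvec, b, tol=1e-6, max_iters=1000):
    """Solve A x = b for symmetric positive-definite A given only matvec(v) = A @ v."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# alpha = conjugate_gradients(lambda v: kernel_matvec(X_train, v), y_train)
# predictive mean at X_test: K(X_test, X_train) @ alpha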
