With tech offices around the world, Uber engineers are responsible for building new features and systems that improve rideshare, new mobility, food delivery, and other services enabled by our platform. Our Uber Engineering Blog highlights some of these efforts, giving technical explanations of our work that can serve as useful examples to the engineering community at large.

Throughout 2019, we published articles about front-end and back-end development, data science, applied machine learning, and cutting-edge research in artificial intelligence. Some of our most popular articles introduced new open source projects originally developed at Uber, such as Kraken, Base Web, Ludwig, and AresDB. Likewise, we shared articles from Uber AI covering research projects such as POET, EvoGrad, LCA, and Plato, and original research on our new research publications site.

Along with our technical articles, we offer a look at what it’s like to work at Uber through interviews with engineers and profiles of offices and community building programs. Early in the year, we highlighted our Career Prep Program, which gave computer science students from backgrounds underrepresented in the tech industry an opportunity to learn skills around technical interviews and working as an engineer.

Beyond articles, we also produced a video series in 2019, Science at Uber, featuring interviews with Uber technologists working in the fields of data science, artificial intelligence, and machine learning. These videos show how we make practical use of cutting-edge technology to improve Uber’s platform services.

In these final weeks of 2019, we are publishing recaps of our engineering work in a few prominent areas, including data, infrastructure, and artificial intelligence. To cap off the year, we present a selection of our most popular articles covering a range of categories:

Introducing Ludwig, a Code-Free Deep Learning Toolbox

Machine learning models perform a diversity of tasks at Uber, from improving our maps to streamlining chat communications and even preventing fraud. In addition to serving a variety of use cases, it is important that we make machine learning as accessible as possible for experts and non-experts alike so it can improve areas across our business. To highlight our efforts to democratize ML, we wrote an article introducing Ludwig, Uber’s open source deep learning toolbox built on top of TensorFlow that allows users to train and test machine learning models without writing code. Since its release in February 2019, Ludwig has been leveraged by teams at Apple, IBM, and Nvidia, among others. In July 2019, we released Ludwig version 0.2, which added support for Comet.ML, BERT, H3, and audio/speech features, as well as substantial improvements to Ludwig’s visualization API.
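Ludwig models are specified declaratively rather than in code. As a hedged sketch of what that looks like (column names here are invented for illustration), a text classifier reduces to a short YAML model definition listing input and output features:

```yaml
# Hypothetical Ludwig model definition (model.yaml): a sentiment
# classifier specified entirely in configuration, with no model code.
input_features:
  - name: review_text   # text column in the training CSV
    type: text
output_features:
  - name: sentiment     # category predicted from the text
    type: category
```

Training then comes down to a single CLI invocation along the lines of `ludwig train --data_csv reviews.csv --model_definition_file model.yaml` (flag names follow the 0.2-era CLI), which is what "without writing code" means in practice.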

Introducing AresDB: Uber’s GPU-Powered Open Source, Real-time Analytics Engine

At Uber, real-time analytics allow us to attain business insights and operational efficiency, enabling us to make data-driven decisions that improve experiences on the Uber platform. Uber’s unprecedented scale made existing third-party solutions a poor fit, encouraging us to turn to an unconventional power source for our analytics computations: graphics processing units (GPUs). In recent years, GPU technology has advanced significantly, making it a perfect fit for highly parallel, real-time computation and data processing. AresDB, released as open source in November 2018 and discussed in this article, is a real-time analytics engine that leverages GPUs to unify, simplify, and speed up Uber’s real-time analytics database solutions; we hope others have found the tool useful, too!
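To see why GPUs fit this workload, consider the core operation of a real-time analytics engine: columnar aggregation. The sketch below is purely illustrative (it is not AresDB's API); it buckets a column of trip fares by minute and sums each bucket, a computation where every row can in principle be handled by an independent GPU thread with results combined by a parallel reduction.

```python
# Illustrative sketch, not AresDB code: time-bucketed aggregation over
# a fare column. Each (timestamp, fare) pair is independent, which is
# what makes this kind of query embarrassingly parallel on a GPU.
from collections import defaultdict

def aggregate_fares(timestamps, fares, bucket_seconds=60):
    """Group a column of fares into time buckets and sum each bucket."""
    totals = defaultdict(float)
    for ts, fare in zip(timestamps, fares):
        # Round each timestamp down to the start of its bucket.
        totals[ts // bucket_seconds * bucket_seconds] += fare
    return dict(totals)

# Three trips in the first minute, one in the second.
result = aggregate_fares([5, 20, 59, 70], [10.0, 7.5, 2.5, 4.0])
print(result)  # {0: 20.0, 60: 4.0}
```

A GPU engine performs essentially this map-then-reduce shape over compressed columnar storage, which is why it can serve sub-second queries over very large, freshly ingested datasets.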

Employing QUIC Protocol to Optimize Uber’s App Performance

Optimizing Uber’s services requires innovation on all layers of our tech stack, but one area, the actual network protocols that send data packets between cell towers and phones, often goes unsung. Remedying that, Uber engineers working to improve network performance wrote about their project to replace the Transmission Control Protocol (TCP) and HTTP/2 with the newer QUIC protocol and HTTP/3 in our apps. QUIC, a stream-multiplexed modern transport protocol implemented over UDP, lets us better control transport performance, customizing it for the tasks performed by our apps. Implementing QUIC reduced tail-end network latency by up to 30 percent compared to TCP, better supporting real-time tasks in Uber apps, such as showing the current location of a delivery person carrying an Uber Eats order.
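One intuition behind that tail-latency win is QUIC's per-stream loss recovery. The toy simulation below (not Uber's implementation; stream names and numbers are invented) contrasts TCP-style head-of-line blocking, where one lost packet stalls every stream sharing the connection, with QUIC-style independent streams, where only the affected stream waits for the retransmission:

```python
# Toy model of head-of-line blocking. Times are arbitrary milliseconds.

def completion_times(streams, lost_stream, retransmit_delay, independent):
    """Return per-stream finish times given one lost packet.

    streams: dict of stream name -> base transfer time
    independent: True models QUIC-style streams; False models TCP,
    where the in-order byte stream delays everything behind the loss.
    """
    times = {}
    for name, base in streams.items():
        if independent:
            delay = retransmit_delay if name == lost_stream else 0
        else:
            delay = retransmit_delay  # every stream waits behind the loss
        times[name] = base + delay
    return times

streams = {"map_tiles": 40, "eta_update": 10, "courier_location": 15}
tcp = completion_times(streams, "map_tiles", 100, independent=False)
quic = completion_times(streams, "map_tiles", 100, independent=True)
print(tcp["eta_update"], quic["eta_update"])  # 110 10
```

The lossy stream finishes at the same time either way, but under QUIC the small, latency-sensitive updates (like a courier's location) are no longer held hostage by an unrelated loss, which is exactly where tail latency improves.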

Advancing AI: A Conversation with Jeff Clune, Senior Research Manager at Uber

We sat down with Jeff Clune, a senior research manager at Uber, to discuss his academic background, journey to co-founding Uber AI Labs, and his team’s current work improving deep reinforcement learning to train evolutionary algorithms. Jeff, who was recently awarded the Presidential Early Career Award for Scientists and Engineers (PECASE), discovered artificial intelligence as an undergraduate, but didn’t choose to enter the field until he encountered research out of Cornell University that leveraged evolutionary algorithms to automatically create 3D-printed robots that could walk in the real world. “I remember it was like an explosion went off inside my head,” he says. “I thought it was so cool that you could combine the ideas behind evolution and use them to automatically design complex things that can then impact the real world.” Reading about Jeff’s experience can inspire others curious about pursuing industry careers in AI.

Measuring Kotlin Build Performance at Uber

Most Uber users book rides and order food through our apps, and our Mobile Engineering team maintains 20 Android apps and more than 2,000 modules in our Android monorepo. Apps make up a very important part of our business, so when we considered adopting the Kotlin language for Android app development, we performed a careful analysis of its build performance, sharing the results in an article for the benefit of other Android developers. The article outlines the methodology and platform we used to conduct 129 experiments around Kotlin build performance, and rather than reporting a single headline metric, it presents nuanced results across the various permutations of Kotlin and its components compared to Java. Our detailed analysis can serve other mobile developers faced with similar considerations.
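The measurement side of such a study matters as much as the builds themselves: each configuration has to be timed repeatedly and compared as a distribution, not a single run. A minimal sketch of that bookkeeping (configuration names and timings below are invented, not our actual results) might look like:

```python
# Hypothetical sketch: summarize repeated build timings per configuration
# so that comparisons (e.g., Kotlin vs. Java permutations) rest on
# distributions rather than one noisy measurement.
import statistics

def summarize(samples_by_config):
    """Return mean and 95th-percentile build time per configuration."""
    summary = {}
    for config, samples in samples_by_config.items():
        ordered = sorted(samples)
        # Nearest-rank p95: index 0.95 * n, clamped to the last sample.
        p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
        summary[config] = {"mean": statistics.mean(ordered), "p95": p95}
    return summary

timings = {
    "java_only": [102, 98, 105, 99, 101],
    "kotlin_mixed": [121, 118, 125, 119, 122],
}
print(summarize(timings))
```

Reporting a tail percentile alongside the mean is what lets an analysis distinguish "Kotlin builds are consistently slower" from "Kotlin builds are occasionally much slower", two findings with very different remedies.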

Optimizing M3: How Uber Halved Our Metrics Ingestion Latency by (Briefly) Forking the Go Compiler


In Uber’s New York engineering office, our Observability team maintains a robust, scalable metrics and alerting pipeline responsible for detecting, mitigating, and notifying engineers of issues with their services as soon as they occur. In early 2019, a routine deployment in a core service of M3, our open source metrics and monitoring platform, doubled the overall latency for collecting and persisting metrics to storage. Mitigating the issue was simple: we reverted to the last known good build. But we still needed to find the root cause so we could fix it. In this article, we highlight key takeaways from our investigation of this end-to-end metrics ingestion latency regression so that other engineers can learn from our experience and apply it to their own Go-based systems.
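The operational pattern here, detect the regression, roll back first, root-cause second, is worth making concrete. As an illustrative sketch (not M3 code; the threshold and samples are invented), a deploy gate might compare post-deploy latency samples against the last known good build:

```python
# Illustrative sketch, not M3 code: flag a deploy as a latency regression
# when the candidate build's median latency exceeds the baseline's by
# more than a tolerated ratio. This is the signal that justifies an
# immediate rollback, buying time for the root-cause investigation.
import statistics

def is_regression(good_ms, candidate_ms, max_ratio=1.5):
    """True if the candidate's median latency exceeds max_ratio x baseline."""
    return statistics.median(candidate_ms) > max_ratio * statistics.median(good_ms)

baseline = [10, 11, 10, 12, 11]   # last known good build (ms)
doubled = [21, 22, 20, 23, 22]    # post-deploy samples (ms)
print(is_regression(baseline, doubled))   # True
print(is_regression(baseline, baseline))  # False
```

Using the median rather than the mean keeps a single outlier request from triggering a false rollback, while a doubling, like the one we saw, is caught immediately.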

Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask

At Uber, we apply neural networks to fundamentally improve how we understand the movement of people and things in cities. Though neural networks are powerful, widely used tools, many of their subtle properties are still poorly understood. In our most recent paper aimed at demystifying neural networks, Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask, we build upon the fascinating Lottery Ticket Hypothesis developed by Frankle and Carbin. Although these researchers clearly demonstrated lottery tickets to be effective, their work raised as many questions as it answered, and many of the underlying mechanics were not yet well understood. Our paper, a worthwhile read for others who leverage neural networks for forecasting purposes, proposes explanations behind these mechanisms, uncovers curious quirks of these subnetworks, introduces competitive variants of the lottery ticket algorithm, and derives a surprising by-product: the Supermask.
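The Supermask itself is a simple object to state: a binary mask applied to a network's untrained, randomly initialized weights, keeping some and zeroing the rest. The toy sketch below illustrates one masking criterion in the spirit of the paper, keeping the largest-magnitude weights while preserving their signs; it is a one-layer illustration, not the paper's implementation:

```python
# Toy illustration of a Supermask-style binary mask over random
# initial weights: keep the largest-magnitude weights (signs intact),
# zero everything else. The paper studies several such criteria.

def supermask(weights, keep_fraction):
    """Zero all but the top keep_fraction of weights by magnitude."""
    k = max(1, int(len(weights) * keep_fraction))
    # Magnitude of the k-th largest weight becomes the keep threshold.
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

random_init = [0.8, -0.1, 0.05, -0.9, 0.3, -0.4]
print(supermask(random_init, 0.5))  # [0.8, 0.0, 0.0, -0.9, 0.0, -0.4]
```

The surprising result the paper reports is that masks like this, applied to weights that were never trained, can already yield networks performing far better than chance, which is what makes the Supermask a by-product worth naming.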

Food Discovery with Uber Eats: Using Graph Learning to Power Recommendations

The Uber Eats app serves as a portal to more than 320,000 restaurant-partners in over 500 cities globally across 36 countries. In order to make the user experience more seamless and easier to navigate, we show users the dishes, restaurants, and cuisines they might like up front. This third entry in our Food Discovery with Uber Eats series focuses on how we leverage graph learning, a technique by which ML algorithms are trained on data structured as graphs by learning representations of their nodes, to improve restaurant and food recommendations in the Uber Eats search and recommender system. Our work in graph learning provides a compelling option for other similar recommendation systems deployed at scale and points to the broader capabilities of AI to improve customer satisfaction on the Uber platform.
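To make "learning representations of their nodes" concrete, here is a minimal sketch of one neighbor-aggregation step in the spirit of GraphSAGE-style algorithms (node names and features are invented for illustration; this is not Uber's system): a node's representation is formed by averaging its own feature vector with those of its graph neighbors, so an eater node absorbs signal from the dishes and restaurants it has ordered from.

```python
# Minimal neighbor aggregation: one step of computing a node embedding
# as the mean of the node's features and its neighbors' features.

def aggregate(node, neighbors, features):
    """Average a node's feature vector with those of its graph neighbors."""
    vectors = [features[node]] + [features[n] for n in neighbors[node]]
    dim = len(features[node])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

features = {"user_1": [1.0, 0.0], "dish_a": [0.0, 1.0], "dish_b": [0.0, 3.0]}
neighbors = {"user_1": ["dish_a", "dish_b"]}  # past orders as graph edges
user_embedding = aggregate("user_1", neighbors, features)
```

Stacking such aggregation steps lets a node's representation reflect its multi-hop neighborhood, which is what allows a recommender to surface a dish a user has never ordered but whose graph neighborhood resembles what they like.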

Expanding Access: Engineering Uber Lite

In some global regions where Uber operates, people tend to carry phones based on older technology and wireless networks may suffer from slow speeds or spotty coverage. Recognizing that our latest apps may not perform optimally in these environments, engineers in our Bangalore, India office came up with Uber Lite, a streamlined version of our rider app. Presenting Uber Lite on our engineering blog, these engineers describe how they designed it around three principles: Light, Instant, and Simple. They built Uber Lite to take up minimal memory on phones, react quickly to network activity, and make requesting a ride as easy as possible. Learning about Uber Lite’s design can help mobile engineers better understand how to customize apps for global markets.

On Internships, Career Advice, and Reaching 15B Rides: A Conversation with Uber CTO Thuan Pham

Sudhanshu Mishra, an Uber engineering intern who went on to become a full-time employee, interviewed CTO Thuan Pham to find out about his journey into tech. Thuan discusses his early choice to pursue computer science studies over pre-med, his internship at Hewlett Packard, and his philosophy around managing a large engineering organization. He talks about a mentor from his early days who not only helped him advance in his career, but also made him feel welcome after moving across the country. The experiences Thuan relates in the interview can serve as a model for how to mentor fellow employees and interact with people in the workplace.

Check out our other 2019 Uber Engineering highlights articles: 

    • Uber Infrastructure in 2019: Improving Reliability, Driving Customer Satisfaction
    • Uber Open Source in 2019: Community Engagement and Contributions