Engineering Safety with Uber’s Real-Time ID Check
Safety and security are at the core of Uber’s business. An important part of safety is verifying that the driver-partner who picks up riders is the same person who underwent Uber’s screening process. To address this, our engineers took a proactive approach and built a new security solution, Real-Time ID Check, which ensures the right person is behind the wheel.
This feature protects riders from unverified drivers, and also prevents fraud by ensuring drivers’ accounts are not compromised. Here is the story of how the safety team in Uber Engineering recently built this critical tool.
Building a Solution with Selfies
To build an effective and scalable solution, we needed a technology that could be deployed around the globe and still provide results with a high level of confidence. We started by evaluating various cutting-edge technical solutions in face, voice, and gesture recognition. Each technology introduced unique challenges, especially given the high degree of environmental variance where our product is used around the world.
One challenge with voice recognition solutions was that their performance drastically degraded with background interference from music and traffic. This solution was also impractical for speech-impaired driver-partners on our platform. Gesture recognition algorithms delivered promising results in a controlled environment with a limited set of tech-savvy users. However, they did not meet our global scale requirements due to user experience challenges and a steep learning curve.
Face verification stood out among all the solutions we evaluated. A typical face verification algorithm involves three main steps:
- Face Detection: Identify a face in a given image. If a face is not detected, the next steps are skipped as there is no face to match.
- Feature Extraction: Build a feature set by extracting facial features, like eye position and face shape, from the image.
- Face Matching: Compare the feature set with an already validated feature set for a given face.
Once we decided to use face verification, we explored a few leading vendors and ran a comparison analysis. We plotted each vendor’s true positive rate (TPR) against its false positive rate (FPR) over varying match-confidence thresholds. The resulting receiver operating characteristic (ROC) curve allowed us to determine the maximum TPR possible while minimizing FPR. Looking for the solution with the highest TPR, we ultimately chose to use Microsoft’s Face API from its Cognitive Services suite.
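The ROC analysis above can be sketched as follows. This is a minimal illustration of how TPR/FPR points are computed per threshold; the scores, labels, and function name are hypothetical, not actual vendor data.

```python
def roc_points(scores, labels, thresholds):
    """Compute one (FPR, TPR) point per confidence threshold.

    scores: match-confidence scores from a vendor's verification API.
    labels: 1 if the photo pair is truly the same person, else 0.
    """
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for t in thresholds:
        # A pair is predicted "same person" when its score clears the threshold.
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / negatives, tp / positives))
    return points
```

Sweeping the threshold from high to low traces out the curve; the vendor whose curve reaches the highest TPR at an acceptably low FPR wins the comparison.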
Microsoft’s Face API was not only the most accurate in our testing, but it also provided a strong set of critical features for our product. Our Real-Time ID Check utilizes two specific APIs:
- Face-Detect: Detects human faces in an image and provides additional attributes, such as whether or not the face has glasses. This API is even smart enough to differentiate between reading glasses and sunglasses. This helped us filter out images without a face and improved the match score for verification of the selfies.
- Face-Verify: Compares a face detected from Face-Detect API with an already verified face, and provides a confidence score on whether the two faces belong to the same person or not. Based on the confidence score, we can take appropriate verification action, such as asking the user to retake the selfie.
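The two calls above follow the shape of Microsoft’s public Face API REST interface. The sketch below only builds the requests (URL, headers, JSON body); the host region, placeholder key, and helper names are illustrative assumptions, not Uber’s actual integration.

```python
import json

# Illustrative host; the real endpoint depends on the Azure region and key.
FACE_API_HOST = "https://westus.api.cognitive.microsoft.com/face/v1.0"
SUBSCRIPTION_KEY = "<your-key>"  # placeholder

def detect_request(image_url):
    """Build a Face-Detect request; 'glasses' is requested as a face attribute."""
    return (
        FACE_API_HOST + "/detect?returnFaceAttributes=glasses",
        {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
         "Content-Type": "application/json"},
        json.dumps({"url": image_url}),
    )

def verify_request(selfie_face_id, profile_face_id):
    """Build a Face-Verify request comparing two previously detected face IDs."""
    return (
        FACE_API_HOST + "/verify",
        {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
         "Content-Type": "application/json"},
        json.dumps({"faceId1": selfie_face_id, "faceId2": profile_face_id}),
    )
```

Face-Detect returns face IDs plus the requested attributes; those IDs are then fed to Face-Verify, which responds with a match decision and a confidence score.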
The User Experience
One of the key objectives for Real-Time ID Check was to avoid unnecessary friction for driver-partners. Therefore, we focused on making the user experience as seamless as possible throughout every stage of the project. We refined designs and tweaked the engineering based on feedback from users and prototype testing. This enabled us to create a simple, yet effective experience that drivers can complete in only a few seconds.
Here is what that experience looks like:
- Driver-partners are selected at random to submit a selfie for verification.
- Microsoft’s Face API is used to detect if the submitted picture contains a face or not. If no face is detected, the driver-partner is asked to resubmit.
- Once a face is detected, Microsoft’s Face API Verify feature compares the face with the verified picture associated with the driver’s account.
- If the face is a match, the driver-partner is allowed to start picking up riders. If the face does not match, the driver is temporarily blocked while we investigate the situation.
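The decision logic behind those steps can be sketched as a small server-side function. The threshold value and the outcome names are illustrative, not Uber’s actual values.

```python
# Possible outcomes of a Real-Time ID Check round (names are illustrative).
NO_FACE, VERIFIED, BLOCKED = "resubmit", "verified", "blocked"

def check_selfie(face_detected, match_confidence, threshold=0.5):
    """Map a detect/verify result to the action taken on the account."""
    if not face_detected:
        return NO_FACE      # no face found: ask the driver-partner to resubmit
    if match_confidence >= threshold:
        return VERIFIED     # match: the driver may start picking up riders
    return BLOCKED          # mismatch: temporarily block pending investigation
```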
From Prototype to Global Scale
Early prototypes demonstrated that the global expectations of this identity-check tool—making it work for drivers anywhere—would bring its own set of unique challenges.
Our initial UI leveraged the existing photo capture experience of the Driver app for taking profile photos. We soon realized the dark background of the UI prevented usable results in low-light conditions such as nighttime. To address this challenge, we switched to a white UI and increased the brightness of the screen, leading to a drastic improvement in the quality of the photo captured.
Before using Microsoft’s Face API, we originally used client-side face detection to ensure there was a face present in the selfie. However, low-end Android phones, used by many of our driver-partners, do not support face detection. Even some devices that claimed to support face detection did not function well.
By using Microsoft’s Face API, we shifted the detection logic to the backend and removed these hardware constraints. This resulted in consistent results, regardless of the device, and gave us better insights to diagnose detection failures. Additionally, this process improved the quality of driver-partners’ profile photos.
Selfie While Driving
To be an effective security measure, Real-Time ID Check asks randomly-selected driver-partners to verify their identity. Due to the random nature of this selection process, the request could potentially pop up while a driver-partner is driving. To prevent unsafe and distracted driving, we added movement detection using the driver’s phone GPS. If the vehicle is moving, we prompt drivers to pull over before they begin the verification flow.
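One way to implement such a movement check is to estimate speed from consecutive GPS fixes, as sketched below. The haversine distance formula is standard; the speed threshold and function names are illustrative assumptions, not Uber’s production logic.

```python
import math

def is_moving(fix_a, fix_b, speed_threshold_mps=1.0):
    """Return True if the vehicle appears to be moving.

    fix_a, fix_b: GPS fixes as (latitude_deg, longitude_deg, unix_seconds).
    Speed is estimated as haversine distance divided by elapsed time.
    """
    lat1, lon1, t1 = fix_a
    lat2, lon2, t2 = fix_b
    earth_radius_m = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine formula: great-circle distance between the two fixes.
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    distance_m = 2 * earth_radius_m * math.asin(math.sqrt(a))
    elapsed_s = max(t2 - t1, 1e-9)  # guard against division by zero
    return distance_m / elapsed_s > speed_threshold_mps
```

If `is_moving` returns True, the app would prompt the driver to pull over and defer the selfie flow until the vehicle is stationary.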
Selfies with Glasses
Although Microsoft’s Face API is able to detect a face even if the user is wearing sunglasses or if there is a reflection from regular glasses, these accessories can impact whether the new selfie will match the driver’s profile photo. In cases where the photos do not match and glasses are detected, the feature asks driver-partners to remove their glasses and retry.
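This retry rule can be sketched as a small branch on the verify result and the glasses attribute reported by Face-Detect. The attribute values mirror the Face API’s reported glasses types; the outcome names are illustrative assumptions.

```python
def next_step(is_match, glasses_attribute):
    """Decide the follow-up action after a verification attempt.

    glasses_attribute: the Face-Detect 'glasses' value, e.g. 'NoGlasses',
    'ReadingGlasses', 'Sunglasses', or 'SwimmingGoggles'.
    """
    if is_match:
        return "verified"
    if glasses_attribute != "NoGlasses":
        # Glasses may have caused the mismatch: ask for a retry without them.
        return "retry_without_glasses"
    return "flag_for_review"  # mismatch with no obvious cause
```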
Real World Impact
The true success of a product is reflected by the impact it makes in people’s lives. Since our initial launch in September 2016, well over 99% of driver-partners have been verified through Real-Time ID Check in the markets where it is available, and the majority of mismatches were due to unclear profile photos.
Riders can feel safe knowing that their driver has been verified and driver-partners can rest assured that a fraudster is not using their account. If you are interested in making a real world impact by engineering safety and security with us, come be a part of our story.
Nitin Bajaj is a software engineer and Shimul Sachdeva is an engineering manager on Uber Engineering’s Trust & Safety program team. They wrote this article with Dima Kovalev, its product manager.
This article is part of a continuing series on engineering safety: see our earlier article on Telematics for more on this type of work across Uber Engineering.
For more information on Uber’s Security & Privacy Teams in general, see their own Uber Security + Privacy blog.