Globally, there are approximately 1.3 million collision-related fatalities on the road every year. Road crashes remain the leading cause of death for people aged 15 to 29, impacting families, communities, and cities. Governments around the world are working to reduce these risks, committing more resources toward improving road safety.

At Uber, we want to do our part by committing the power of our technology to help make travel safer for everyone. We have multiple teams at Uber working on road safety, reflecting a variety of disciplines, and we’re approaching this challenge from a number of angles. On the platform itself, we have many features to help improve road safety, such as driving hour limits to help curb drowsy driving, a partnership with the GHSA to raise awareness about seatbelts, and a 911 assistance feature to make it easier for riders and drivers to receive emergency assistance.

Uber Beacon, launched in 2016, is one of our more visible safety products. This device mounts on the driver’s windshield and uses color-pairing technology to help drivers and riders connect more quickly during pick-ups. Drivers have the option to add a Beacon to their vehicle and can unpair the device at any time.

Beyond its visual signaling, the Beacon device also contains an accelerometer and gyroscope, similar to the sensors that currently exist in most smartphones. This sensor technology makes Beacon a perfect complement to our recently announced automatic crash detection feature, a solution that harnesses the power of GPS and other sensors in the smartphone to detect possible crashes. After a possible crash is detected, Uber initiates a Ride Check by reaching out to both the rider and the driver to offer quick assistance.

In this post, we’ll discuss how we use the Beacon to improve our vehicle movement data, which in turn improves the accuracy of products such as Ride Check.


Measuring movement

Consider the problem of detecting a sudden deceleration, an important feature in models that power Ride Check. We can solve this problem in two steps. First, we need to infer the vehicle’s movement along three axes—forward (Y), lateral (i.e. side-to-side) (X), and vertical (Z). Second, once we have these axes, we can implement an “event detector” by analyzing movement in the forward (Y) direction.

Figure 1: Vehicle Reference Frame. The goal is to infer the vehicle’s movement along these three axes.

We can infer the vehicle’s movement using sensors in the driver’s phone. Smartphones contain an array of sensors, offering several ways to determine how the vehicle is moving. Each sensor has its own advantages and disadvantages.

Smartphone Global Positioning System (GPS)

Our previous approach to measuring decelerations was to take speed data directly from the smartphone GPS and assume that all motion is exclusively in the forward (Y) direction. This allowed us to use the difference in speed between consecutive GPS samples to derive acceleration.
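Concretely, this finite-difference computation is a one-liner. Here is a minimal sketch; the function name and sample values are illustrative:

```python
import numpy as np

def accel_from_gps_speed(timestamps, speeds):
    """Estimate forward acceleration by differencing consecutive GPS
    speed samples, assuming all motion is in the forward (Y) direction.

    timestamps: sample times in seconds (GPS reports at most 1 Hz)
    speeds: GPS-reported speeds in m/s
    """
    t = np.asarray(timestamps, dtype=float)
    v = np.asarray(speeds, dtype=float)
    # a[i] approximates acceleration over the interval (t[i], t[i+1])
    return np.diff(v) / np.diff(t)

# A hard brake: 15 m/s down to 5 m/s over one second is -10 m/s^2
print(accel_from_gps_speed([0, 1, 2], [15.0, 15.0, 5.0]))
```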

While this approach to determining speed is simple to implement, it is susceptible to GPS positioning noise when it comes to detecting decelerations and is constrained by GPS’s low sampling rate (at most once per second). Because events of interest typically last only about a second, this approach alone isn’t sufficient for detecting decelerations.

Smartphone Inertial Sensors – Inertial Measurement Units (IMU)

A more accurate method for detecting decelerations is to use the inertial, or motion, sensors on the driver’s smartphone. These generally consist of an accelerometer, which measures acceleration, and a gyroscope, which measures turn rate. Collectively, the accelerometer / gyroscope pair is referred to as an Inertial Measurement Unit (IMU).

We can collect these signals at much higher frequencies than GPS. Therefore, IMUs can effectively capture events that happen over small timescales. IMUs are also helpful in areas where GPS is unreliable, such as urban canyons and tunnels.

However, smartphone IMUs present some challenges. First, smartphone sensors can vary across devices: every phone uses a different chipset, so the raw sensor data may look different across device models. Second, some smartphones have only a partial IMU (an accelerometer but no gyroscope). Lastly, we cannot assume the device is rigidly mounted to the vehicle; even when phones are mounted, some phone mounts have “long arms” that introduce vibration.

Figure 2: Different types of phone mounts with different “arm” lengths which result in varying noise profiles.


The phone mounts in Figure 2 each hold the phone in a different orientation. The smartphone inertial sensors report data in the phone’s reference frame, not in that of the car, so we additionally need to run a process known as “reference frame adjustment” to convert the smartphone data to the vehicle’s movement.

Figure 3: Raw inertial data from the phone is relative to the phone’s orientation (blue), not to the car’s (green).
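Once the rotation between the two frames is known, the adjustment itself is just a matrix multiplication. Below is a minimal sketch; the rotation matrix here is illustrative, and estimating it is the hard part covered in the next sections:

```python
import numpy as np

def to_vehicle_frame(accel_phone, rotation):
    """Rotate accelerometer samples from the phone's reference frame
    into the vehicle's, given a 3x3 phone-to-vehicle rotation matrix."""
    samples = np.asarray(accel_phone)   # shape (n, 3)
    R = np.asarray(rotation)
    return samples @ R.T

# Phone rotated 90 degrees about the vertical axis: the phone's X axis
# points along the car's forward (Y) axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
# A phone-frame sample with 2 m/s^2 along the phone's X axis (plus
# gravity on Z) maps to 2 m/s^2 along the car's forward (Y) axis.
print(to_vehicle_frame([[2.0, 0.0, 9.8]], R))
```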


Bringing sensors together: GPS / IMU Fusion

GPS and inertial sensors both present possible limitations for detecting decelerations. However, if we fuse the two sensors together, we can get the best of both worlds. In this section, we walk through how we accomplish this and discuss how Beacon can help.  

First, we identify periods of trips in which the phone is not mounted to the vehicle (using a separate classifier). Then, we estimate the forward and lateral acceleration of the car from the phone’s IMU readings.

Figure 4: This is the architecture for our phone-based vehicle movement pipeline (without the Beacon).

Gravity estimation

At first, the device’s orientation relative to the car is completely unknown. In the following sections, we discuss how to recover the vehicle’s reference frame, and then its movement, from the phone’s sensor data.

The first step is to estimate gravity, which will allow us to align the first axis of the phone to the car.

Figure 5: At the beginning of our pipeline, the X-Y-Z axes of the smartphone have an unknown relationship to the X-Y-Z axes of the vehicle.

The acceleration experienced by the smartphone comes from two sources: the Earth’s gravitational force and the vehicle’s motion. Therefore, the gravitational force must be subtracted from the accelerometer readings to determine the acceleration caused by the vehicle. To estimate gravity, we use a low pass filter and Principal Component Analysis (PCA) to track the direction with the most energy in the accelerometer.

This works because (i) gravity is the strongest acceleration that a phone typically experiences and (ii) cars generally drive on flat roads, so gravity is normally perpendicular to the acceleration of the car. We also use the gyroscope reading to proactively rotate our estimate of gravity.

Figure 6: This is how we estimate gravity from accelerometer and gyroscope readings.
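A minimal sketch of the low-pass step is below. It omits the PCA and gyroscope refinements described above, and the smoothing factor is illustrative:

```python
import numpy as np

def estimate_gravity(accel, alpha=0.02):
    """Track the gravity vector with a first-order low-pass filter over
    raw accelerometer samples (shape (n, 3)). Because gravity dominates
    the signal and varies slowly, a heavily smoothed average of the
    accelerometer readings points along it."""
    accel = np.asarray(accel, dtype=float)
    g = accel[0].copy()
    estimates = []
    for sample in accel:
        g = (1.0 - alpha) * g + alpha * sample   # slow-moving average
        estimates.append(g.copy())
    return np.array(estimates)

# With the phone level and still, the estimate converges to ~(0, 0, 9.81)
samples = np.tile([0.0, 0.0, 9.81], (500, 1))
print(estimate_gravity(samples)[-1])
```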

Resolving forward and radial directions

After we’ve estimated gravity, we can consider the IMU signal to be level with the vehicle, i.e. we’ve aligned the phone’s vertical axis (Z) with the car’s vertical axis (Z). You can picture this by imagining the phone is lying flat on the car, but we still don’t know which direction it’s pointing.

The next step is to resolve the forward direction of the car from the phone’s perspective, which will align the phone’s remaining two axes, X and Y, with that of the car. This will help us determine the direction the phone is “pointing.”

GPS / IMU sensor fusion

In order to estimate the forward direction (equivalently, the heading misalignment between the phone and the vehicle), we use sensor fusion to combine the information obtained from the GPS and the IMU.

While a detailed discussion of sensor fusion is beyond the scope of our article, one common technique to fuse sensors is a Kalman filter. To learn more about Kalman filters, please refer to this tutorial.

Here, we can use a Kalman filter which takes the GPS and IMU sensor streams (the measurements, or observations), and solves for the vehicle position and motion (the state).

To start, we’ll look at a filter which only uses the IMU. In this set-up, we use a non-linear variant of the filter (such as an extended or unscented Kalman filter). We model the leveled IMU readings as explicitly noisy measurements of the derivatives of the vehicle’s state:

a = v̇ + n_a
ω = θ̇ + n_ω

where v is the vehicle’s speed, θ is its heading, a and ω are the leveled forward acceleration and turn rate readings, and n_a and n_ω are sensor noise terms.
We depict the relationship between these quantities in Figure 7, below:

Figure 7: The relationship between state (estimated) and observed (measured) variables in the Kalman filter setting. We have introduced the dot notation for derivatives and θ as the vehicle heading.

An advantage of this approach is that it can also be used to fuse GPS with IMU. The process is similar, except that the GPS provides direct measurements of the quantities of interest (speed and heading). In this case, the measurements are direct but can be susceptible to GPS noise in certain circumstances. The Kalman filter helps us smooth out noise in a principled way.

Another advantage of this approach to sensor fusion is that we can leverage the strengths of each sensor. For example, the IMU is very accurate at estimating acceleration (a in the above equations), so we can weigh observations from it accordingly.

Figure 8: After fusing the GPS, we have aligned the smartphone’s X-Y-Z axes with those of the vehicle. We can now use the outputs of the Kalman filter to analyze vehicle movement.
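To make this weighting concrete, here is a one-dimensional sketch of the fusion idea, not our production filter (which is a non-linear, multi-state variant): IMU acceleration drives the prediction step, and sparse GPS speed fixes correct it. The noise variances are illustrative:

```python
def fuse_speed(dt, imu_accel, gps_speed, q=0.5, r=1.0):
    """One-dimensional Kalman filter over forward speed. Each step
    integrates an IMU acceleration sample (prediction); when a GPS
    speed observation is available (not None), it corrects the
    estimate, weighted by the Kalman gain."""
    v, p = 0.0, 10.0            # initial speed estimate and its variance
    estimates = []
    for a, z in zip(imu_accel, gps_speed):
        # Predict: integrate IMU acceleration; uncertainty grows.
        v += a * dt
        p += q * dt
        # Update: blend in a GPS speed measurement if one arrived.
        if z is not None:
            k = p / (p + r)     # Kalman gain: trust relative to GPS noise r
            v += k * (z - v)
            p *= (1.0 - k)
        estimates.append(v)
    return estimates

# Ten IMU-only steps, then a single GPS fix pulls the estimate toward 5 m/s.
print(fuse_speed(0.1, [0.0] * 10, [None] * 9 + [5.0])[-1])
```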

Detecting decelerations

Now that we have inferred the vehicle’s movement along its three axes, we can implement a deceleration detector, which is used in the models that power Ride Check.

To detect decelerations, we integrate our tangential acceleration estimates over moving one-second windows, which measures the change in vehicle speed. If the reduction in speed within a one-second window is larger than a certain threshold, we consider it a deceleration event.
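A sketch of this windowed-integration detector is below; the sampling rate and threshold are illustrative, not the values used in production:

```python
import numpy as np

def detect_decelerations(accel_y, hz=50, threshold=5.0):
    """Flag moving one-second windows in which the vehicle shed more
    than `threshold` m/s of speed. Integrating forward acceleration
    over a window gives the change in speed across it."""
    a = np.asarray(accel_y, dtype=float)
    dt = 1.0 / hz
    window = hz                      # samples per one-second window
    events = []
    for start in range(len(a) - window + 1):
        dv = np.sum(a[start:start + window]) * dt
        if dv < -threshold:          # speed dropped by more than threshold
            events.append(start)
    return events

# A one-second brake at -8 m/s^2 amid steady cruising gets flagged.
signal = [0.0] * 100 + [-8.0] * 50 + [0.0] * 100
print(len(detect_decelerations(signal)) > 0)
```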


Validation using the Uber Beacon

Now that we have estimated vehicle movement and detected decelerations, how do we measure the accuracy of this algorithm?

Enter the Beacon…

Beacon inertial sensors (IMUs)

The Beacon IMU offers three major advantages over the smartphone IMU:

  • We develop the hardware. Since Uber is producing the Beacon, we were able to select a relatively high-quality IMU with known characteristics and run a calibration procedure on it.
  • We avoid the device fragmentation problem. Since Uber supports thousands of device models, sensor hardware can vary a lot from phone to phone. However, every Beacon has the same sensors, which reduces complexity.
  • The Beacon is mounted in a fixed position. Smartphones can be tapped, picked up, and dropped during a trip. Mounting the Beacon on the windshield helps us avoid misinterpreting a dropped device as a vehicle crash!

As a result, the raw input from the Beacon IMU is more reliable than that from the driver’s phone for detecting crash-related decelerations. A comparison of acceleration observations from the three sources (Driver Phone GPS, Driver Phone IMU, and Beacon IMU) is depicted in Figures 9a and 9b, below:

Figure 9a: For measuring acceleration, the driver device GPS signal (blue) doesn’t have the resolution required. While more accurate, driver device IMU (green) can be quite noisy (and subject to extra vibration). In comparison, the Beacon IMU (red) is quite clean.
Figure 9b: The gyroscope on the Beacon is better at measuring the motion of the vehicle as well. We plot the turn rate derived from driver device GPS (blue), driver device IMU (green), and Beacon (red).

Since the Beacon IMU is more reliable than the driver phone IMU at detecting crash-related decelerations, we can run a simpler event detection algorithm on it, then use that output to benchmark the more heavily processed signal from the GPS / IMU fusion described above.

Thus, to validate our GPS / IMU fusion algorithm, we treat Beacon-based events as ground truth for decelerations and compute metrics like precision and recall for event detection. In this way, having the Beacon on even a subset of vehicles helps improve our movement data for applications like Ride Check across all vehicles.
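A sketch of this scoring step, treating Beacon-derived event timestamps as ground truth; the matching-by-tolerance scheme and all values are illustrative:

```python
def precision_recall(detected, ground_truth, tolerance=1.0):
    """Score detected deceleration events (timestamps in seconds)
    against ground-truth events. A detection within `tolerance`
    seconds of an unmatched ground-truth event is a true positive."""
    matched = set()
    true_positives = 0
    for d in detected:
        hit = next((g for g in ground_truth
                    if g not in matched and abs(d - g) <= tolerance), None)
        if hit is not None:
            matched.add(hit)
            true_positives += 1
    precision = true_positives / len(detected) if detected else 1.0
    recall = true_positives / len(ground_truth) if ground_truth else 1.0
    return precision, recall

# Two of three detections line up with Beacon events: precision 2/3, recall 1.
print(precision_recall([10.2, 55.0, 91.7], [10.0, 92.0]))
```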

Beacon IMU-only estimation

Critically, because the Beacon is always affixed to the windshield, the IMU orientation, and thus the heading misalignment, is fixed. This means we can rely on the IMU alone to perform reference frame adjustment, without needing the GPS. The overall pipeline consists of gravity estimation, as described above, followed by a single PCA step to derive vehicle movement, depicted in Figure 10, below:

Figure 10: This is the architecture for our Beacon-based vehicle movement pipeline (contrast with Figure 4).

We accomplish alignment by projecting out gravity at every time step and performing PCA on the residual acceleration time series. Because most of a car’s acceleration is along its forward axis, the direction returned by PCA will be aligned with the axis of forward motion. However, PCA can return either the forward or the reverse direction, so we need to resolve the sign ambiguity.

We do so using the gyroscope. We exploit the fact that gyroscope readings projected onto the direction of gravity give us turn rates in the vehicle’s plane of motion. This quantity will be maximally correlated or anticorrelated with radial accelerations due to turns, depending on whether PCA gave us the forward or the backward direction. The sign of the correlation between these two time series resolves the ambiguity, providing us with the direction of motion.
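A sketch of the correlation test; the sign convention is illustrative and depends on the chosen axis handedness:

```python
import numpy as np

def forward_sign(lateral_accel, yaw_rate):
    """Return +1 if the PCA-derived forward axis points the right way,
    -1 if it must be flipped. `lateral_accel` is radial acceleration in
    the candidate frame; `yaw_rate` is the gyroscope projected onto
    gravity. Centripetal acceleration is v * omega, so with v > 0 the
    two series are positively correlated in a correctly oriented frame."""
    corr = np.corrcoef(yaw_rate, lateral_accel)[0, 1]
    return 1 if corr > 0 else -1

# Simulated turns at 12 m/s: lateral acceleration tracks the turn rate.
omega = np.array([0.1, 0.3, -0.2, 0.05, 0.25])
lateral = 12.0 * omega
print(forward_sign(lateral, omega))    # correctly oriented frame
print(forward_sign(-lateral, omega))   # flipped frame
```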

For vehicles with a Beacon device, we now have three separate ways to derive vehicle movement: (1) GPS-only from the driver phone; (2) GPS / IMU fusion on the driver phone; (3) the Beacon IMU. This allows us to compare the various methodologies.

Figure 11: After processing the data shown in Figure 2, we plot forward acceleration of the vehicle derived using the Driver Phone GPS (blue), GPS / IMU using the Driver Phone sensors (green), and the Beacon IMU (red). Fusion was performed using an extended Kalman filter. The limitations of GPS-only methodologies are evident in the low rate, lag, and choppiness observable in the blue signal.

Using this methodology, we validated that our GPS / IMU fusion algorithm is a major improvement over the GPS-only algorithm for deceleration detection.

We can visualize these improvements by mapping deceleration events. Figures 12a and 12b, below, show deceleration events in San Francisco, using both GPS-only and GPS/IMU-fused methods. As you can see, our fused solution is more precise than GPS alone.

Figure 12A: This map shows GPS-only deceleration events in San Francisco (from our older, less accurate algorithm).
Figure 12B: This map shows fused GPS/IMU deceleration events in San Francisco (from our newer, more accurate algorithm).


Stand for safety

Uber is deeply committed to the safety of riders, drivers, and delivery partners on the Uber platform, as well as others who share the road. We are invested in developing technology to further improve road safety through applications like automatic crash detection. Our work on safety is never done; however, we are taking a big step forward by using the power of sensors to build best-in-class safety products.

If you’re interested in working to improve the safety of our riders and drivers around the world with sensors, please join us! We’re hiring for roles on both the Safety team and the Sensing & Perception team.


Andrew Beinstein is a product manager on Uber’s Driving Safety team, Dinesh Ramasamy and Andrew Irish are software engineers on Uber’s Location Intelligence team, and Ted Sumers is an engineering manager leading Uber’s Sensor Intelligence team.