
Tricks of the Trade: Tuning JVM Memory for Large-scale Services

February 26, 2020 / Global
Figure 1. Comparing GC pauses at 120 gigabyte and 160 gigabyte heap sizes, we saw that the largest increase in GC time came from ParNew in the Young Generation.
Figure 2. We compared RpcQueueTimeAvgTime, RpcQueueTimeNumOps, GC Time, and GC Count for three sets of JVM parameters, finding that the third parameter set delivered the best performance.
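For context, a CMS-based parameter set for a large NameNode heap typically combines heap sizing, Young Generation sizing, and CMS triggering flags. The values below are illustrative only, not the exact sets we tested:

    -Xms160g -Xmx160g                      # fixed heap size to avoid resizing pauses
    -XX:+UseConcMarkSweepGC                # CMS for the Old Generation
    -XX:+UseParNewGC                       # ParNew for the Young Generation
    -XX:NewSize=8g -XX:MaxNewSize=8g       # Young Generation sizing (illustrative)
    -XX:CMSInitiatingOccupancyFraction=70  # start CMS before the Old Generation fills up
    -XX:+UseCMSInitiatingOccupancyOnly
    -XX:+CMSParallelRemarkEnabled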
Figure 3. Over one hour, our logs show that the Old Generation increases by 4 gigabytes.
Figure 4. GCViewer shows the CMS marking percentage remains small over a period of one hour.
Figure 5. Analyzing TLAB allocation with Java Flight Recorder shows that NameNode does not generate a high volume of large objects.
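As a rough sketch of how such a recording can be captured on a JDK 8 JVM (the PID and file path below are placeholders, not our exact setup):

    # Oracle JDK 8 requires commercial features to be unlocked at startup;
    # recent OpenJDK builds ship JFR without these flags.
    -XX:+UnlockCommercialFeatures -XX:+FlightRecorder

    # Capture a 10-minute profiling recording, then inspect the TLAB
    # allocation events in Java Mission Control.
    jcmd <namenode-pid> JFR.start name=tlab settings=profile duration=10m filename=/tmp/namenode.jfr
    jcmd <namenode-pid> JFR.check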
Figure 6. Comparing C4 and CMS using the Dynamometer tool, we found that C4 delivered 30 percent lower RPC queue times.
Figure 7. A report from GCeasy, based on our Hive Metastore GC logs, shows 2,258 GC pauses of 0 to 1 second in duration.
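Reports like this only require the standard JDK 8 GC logging flags on the service; for example (log path illustrative):

    -Xloggc:/var/log/hive/metastore-gc.log
    -XX:+PrintGCDetails
    -XX:+PrintGCDateStamps
    -XX:+PrintGCApplicationStoppedTime
    -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M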
Figure 8. Excessive object creation resulted in a highly oscillating heap usage pattern in our Hive Metastore.
Figure 9. A single thread caused excessive oscillation in the heap, which settled down after resetting its backoff time.
Figure 10. After resetting the thread's backoff time, GCeasy reported far fewer GCs.
Figure 11. While we troubleshot the issue, Hive Metastore latency increased due to the excessive heap usage, affecting internal data users; it decreased substantially after the fix.
Figure 12. Running tests with String Deduplication enabled and disabled showed a large difference in GC pause time and the error rate.
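String Deduplication is a G1-only feature; a minimal way to enable it and observe its effect on a JDK 8 JVM is sketched below (on JDK 9+ the statistics flag is replaced by -Xlog:stringdedup* unified logging):

    -XX:+UseG1GC
    -XX:+UseStringDeduplication
    -XX:+PrintStringDeduplicationStatistics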
Figure 13. This GCeasy chart showed a number of full GCs occurring at the time the Presto coordinator stalled.
Figure 14. During this incident, full GCs did not reclaim enough memory for the application to continue running, triggering more full GCs and eventually causing an out-of-memory error.
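For incidents like this, capturing a heap dump at the moment of the out-of-memory error makes the retained objects visible after the fact; a minimal sketch (dump path illustrative):

    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/var/log/presto/coordinator-oom.hprof
    -XX:+ExitOnOutOfMemoryError   # optional: fail fast instead of looping through full GCs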
Xinli Shang

Xinli Shang is a Manager on the Uber Big Data Infra team, Apache Parquet PMC Chair, Presto Committer, and Uber Open Source Committee member. He leads the Apache Parquet community and contributes to several other communities. He also leads several initiatives on data formats for storage efficiency, security, and performance, and is passionate about tuning large-scale services for performance, throughput, and reliability.

Yi Zhang

Yi Zhang is a Senior Software Engineer on Uber's Machine Learning Platform team. She thrives on solving big data problems, from the data infrastructure to the data application layers.

Fengnan Li

Fengnan Li is an Engineering Manager with the Data Infrastructure team at Uber. He is an Apache Hadoop Committer.

Amruth Sampath

Amruth Sampath is a Senior Engineering Manager on Uber's Data Platform team. He leads the Data Analytics org, comprising Hive, Spark, Flink, Pinot, and DataCentral.

Girish Baliga

Girish manages the Pinot, Flink, and Presto teams at Uber. He is helping the teams build a comprehensive self-service, real-time analytics platform based on Pinot to power business-critical, external-facing dashboards and metrics. Girish is the Chairman of the Linux Foundation's Presto Foundation Governing Board.

Posted by Xinli Shang, Yi Zhang, Fengnan Li, Amruth Sampath, Girish Baliga

Category: Engineering