perf: investigate slow performance degradation #14108
Description
Running ycsb --concurrency 600 --splits 5000 against denim (a 6-node cluster) shows throughput slowly degrading over the lifetime of the test.
Similarly, latencies slowly climb.
Pre-splitting the ycsb table into 5000 ranges means that the number of ranges is constant over the lifetime of the test. The most interesting metrics that show an increase and could account for this performance decline are the disk metrics:
Each node is configured with the default 1/4 physical memory of cache, which in this case is 7GB. Each ycsb write is ~1KB in size. Writing at 2K/sec should generate ~7GB/hour and the graphs show we generated ~31GB when disk reads started. Are reads starting to miss in the cache? That's somewhat surprising given the skewed distribution for reads. Perhaps the system just reached a point where it is doing a significant number of background compactions continuously and those compactions are impacting foreground work.
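The cache math above can be sanity-checked with a quick back-of-envelope calculation. This is just a sketch of the arithmetic in the paragraph, using the figures quoted there (~1KB per write, ~2K writes/sec, 7GB of cache); the constants are assumptions taken from the issue text, not measured values.

```go
package main

import "fmt"

func main() {
	// Assumed figures from the discussion above.
	const writeSizeKB = 1.0     // ~1KB per ycsb write
	const writesPerSec = 2000.0 // ~2K writes/sec
	const cacheGB = 7.0         // 1/4 of physical memory per node

	// Data volume generated per hour, in GB.
	gbPerHour := writeSizeKB * writesPerSec * 3600 / (1024 * 1024)
	fmt.Printf("write volume: ~%.1f GB/hour\n", gbPerHour)

	// Time until the written data alone exceeds the cache size.
	fmt.Printf("cache size exceeded after: ~%.1f hours\n", cacheGB/gbPerHour)
}
```

This comes out to roughly 6.9 GB/hour, consistent with the ~7GB/hour figure. Note that the written data exceeds the cache size after about an hour, yet disk reads reportedly only began around the ~31GB mark, which fits the observation that the skewed read distribution keeps the hot set cached well past the point where the total data set outgrows the cache.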