Use github.com/klauspost/compress for gzip and zlib #10782
bboreham merged 3 commits into prometheus:main from
Conversation
What are the trade-offs made by the library? Did they submit their changes upstream?

I'm not aware of any trade-offs, but I do recall seeing something saying that the main benefits are on the amd64 architecture.
See klauspost/compress#617 for the answer.
Can you explain (or point out where I missed the explanation) what these pictures show? Assume I know nothing about your environment, your dashboards, what you changed, when you changed it, etc.
I had 2.36.0 running on a biggish Prometheus server scraping ~21M time series. Does that help?
Thanks, that helps a bit. I am puzzled why your charts don't show a repeating pattern every 2 hours, which is the standard head compaction interval. I'm also puzzled: if your entire Prometheus benefitted by 3.3%, why did your benchmark, focused solely on this one improvement, show a smaller benefit?
We run Prometheus with a 30m min/max TSDB block range so we can run GC more often.

It shows 3.23% for a response with 100 metrics. As usual with benchmarking, how you run a test influences the outcome; that's why I deployed it to a test instance to see real-world results.
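For context, a 30m min/max block range like the one mentioned above is set via Prometheus's hidden TSDB tuning flags (the min-block-duration default is 2h; this is a sketch assuming Prometheus 2.x flag names, not necessarily the poster's exact setup):

```shell
# Hidden TSDB tuning flags in Prometheus 2.x; a 30m min/max block range
# shortens the head compaction interval from the default 2h.
prometheus \
  --storage.tsdb.min-block-duration=30m \
  --storage.tsdb.max-block-duration=30m
```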
I would like to propose a change to your benchmark: do the compression just once, outside of the benchmark loop. This way the benchmark is more focused on decompression and gives a clearer signal. (That last +22% in object allocations is from decompression initialization; it is not material overall.)
Good call, updated, thanks.
Given you changed The changes to
That was in a commit where I ran gofumpt.

OK, that's what you did; please add why you did it, making sure you cover why you edited a file marked DO NOT EDIT.
Force-pushed 06afaf6 to e186344
Force-pushed fb6d75f to bf5451e
Rebased on top of 2.39.0.
Note: in prometheus/test-infra#618 I attempted to fix the way Prombench indicates increased CPU usage when your Prometheus runs faster.
/prombench main

Benchmark tests are running for 3 days! If this is intended, ignore this message; otherwise you can cancel it by commenting:

/prombench cancel

Benchmark cancel is in progress.
(During Prometheus bug scrub meeting) Looks like this PR was superseded by #13330.

That PR is for using github.com/klauspost/compress in different places, so I don't think it supersedes this one.

I'll rebase this one, unless someone objects soon.
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
klauspost/compress is a high-quality drop-in replacement for common Go compression libraries. Since Prometheus sends out a lot of HTTP requests that often return compressed output, having improved compression libraries helps to save CPU & memory resources.

On a test Prometheus server I was able to see a CPU reduction from 31 to 30 cores.

Benchmark results:

name                                old time/op    new time/op    delta
TargetScraperGzip/metrics=1-8         69.4µs ± 4%    69.2µs ± 3%     ~     (p=0.122 n=50+50)
TargetScraperGzip/metrics=100-8       84.3µs ± 2%    80.9µs ± 2%   -4.02%  (p=0.000 n=48+46)
TargetScraperGzip/metrics=1000-8       296µs ± 1%     274µs ±14%   -7.35%  (p=0.000 n=47+45)
TargetScraperGzip/metrics=10000-8     2.06ms ± 1%    1.66ms ± 2%  -19.34%  (p=0.000 n=47+45)
TargetScraperGzip/metrics=100000-8    20.9ms ± 2%    17.5ms ± 3%  -16.50%  (p=0.000 n=49+50)

name                                old alloc/op   new alloc/op   delta
TargetScraperGzip/metrics=1-8         6.06kB ± 0%    6.07kB ± 0%   +0.24%  (p=0.000 n=48+48)
TargetScraperGzip/metrics=100-8       7.04kB ± 0%    6.89kB ± 0%   -2.17%  (p=0.000 n=49+50)
TargetScraperGzip/metrics=1000-8      9.02kB ± 0%    8.35kB ± 1%   -7.49%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=10000-8     18.1kB ± 1%    16.1kB ± 2%  -10.87%  (p=0.000 n=47+47)
TargetScraperGzip/metrics=100000-8    1.21MB ± 0%    1.01MB ± 2%  -16.69%  (p=0.000 n=36+50)

name                                old allocs/op  new allocs/op  delta
TargetScraperGzip/metrics=1-8           71.0 ± 0%      72.0 ± 0%   +1.41%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=100-8         81.0 ± 0%      76.0 ± 0%   -6.17%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=1000-8        92.0 ± 0%      83.0 ± 0%   -9.78%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=10000-8       93.0 ± 0%      91.0 ± 0%   -2.15%  (p=0.000 n=50+50)
TargetScraperGzip/metrics=100000-8       111 ± 0%       135 ± 1%  +21.89%  (p=0.000 n=40+50)

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
Since we want to use faster gzip/zlib libraries, we should add a lint check so that they are used everywhere.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
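One way to enforce such a rule — an illustration only, not necessarily the check this commit adds — is a depguard rule in golangci-lint's .golangci.yml that denies the stdlib packages:

```yaml
# Hypothetical .golangci.yml fragment; the PR's actual lint check may differ.
linters-settings:
  depguard:
    rules:
      compress:
        deny:
          - pkg: "compress/gzip"
            desc: "use github.com/klauspost/compress/gzip instead"
          - pkg: "compress/zlib"
            desc: "use github.com/klauspost/compress/zlib instead"
```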
Sorry, I was on leave.
/prombench main

/prombench cancel

Benchmark cancel is in progress.



Metrics from a test instance: (dashboard screenshots not preserved in this extract)