Use more cache friendly implementations #1164
Conversation
force-pushed from 3eb67b0 to 8156f2c
So in master, we iterate, for each point in time, over all series and the output series. It makes sense that the latter could be faster, though it's a surprise to me why a profile would show time spent in math.IsNaN(). Did you mean time is spent on the
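The per-timestamp iteration order being discussed might look like this minimal sketch. The `Point` and `Series` types here are hypothetical stand-ins for metrictank's schema.Point and series types, not the repo's actual code:

```go
package main

import (
	"fmt"
	"math"
)

// Point and Series are stand-ins for metrictank's types (assumptions).
type Point struct {
	Val float64
	Ts  uint32
}

type Series struct {
	Datapoints []Point
}

// maxPerTimestamp mirrors the master-branch shape: the outer loop is
// over timestamps, the inner loop over series. Each inner iteration
// reads index i of a *different* Datapoints slice, so successive loads
// land on distant cache lines, and cheap calls like math.IsNaN end up
// charged with the resulting cache-miss stalls in profiles.
func maxPerTimestamp(in []Series, out *[]Point) {
	for i := range in[0].Datapoints {
		p := Point{Val: math.NaN(), Ts: in[0].Datapoints[i].Ts}
		for j := range in {
			v := in[j].Datapoints[i].Val
			if !math.IsNaN(v) && (math.IsNaN(p.Val) || v > p.Val) {
				p.Val = v
			}
		}
		*out = append(*out, p)
	}
}

func main() {
	in := []Series{
		{Datapoints: []Point{{Val: 1, Ts: 10}, {Val: 5, Ts: 20}}},
		{Datapoints: []Point{{Val: 3, Ts: 10}, {Val: math.NaN(), Ts: 20}}},
	}
	var out []Point
	maxPerTimestamp(in, &out)
	fmt.Println(out)
}
```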
so that we can better see the effect of #1164
I decided to run my own benchmarks using https://godoc.org/golang.org/x/perf/cmd/benchstat to get a more complete picture. This includes Avg, Median, and Stdev, which received no or minor updates. I was also curious whether the number of input series affects the performance delta. The results speak for themselves, I think.
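One way to reproduce this kind of comparison without the repo's actual benchmark harness is to time a simplified cross-series function via testing.Benchmark; in a real workflow you would instead write Benchmark* functions, run `go test -bench . -count 10` on each branch, and feed the two output files to benchstat. Everything below (types, `crossMax`, `makeInput`) is a hypothetical sketch, not metrictank's code:

```go
package main

import (
	"fmt"
	"math"
	"testing"
)

// Stand-ins for metrictank's point and series types (assumptions).
type Point struct {
	Val float64
	Ts  uint32
}

type Series struct {
	Datapoints []Point
}

// crossMax is a simplified per-series maximum in the cache-friendly shape.
func crossMax(in []Series, out []Point) []Point {
	out = append(out[:0], in[0].Datapoints...)
	for j := 1; j < len(in); j++ {
		for i, p := range in[j].Datapoints {
			if !math.IsNaN(p.Val) && (math.IsNaN(out[i].Val) || p.Val > out[i].Val) {
				out[i].Val = p.Val
			}
		}
	}
	return out
}

// makeInput builds synthetic input so the benchmark has data to chew on.
func makeInput(numSeries, numPoints int) []Series {
	in := make([]Series, numSeries)
	for j := range in {
		pts := make([]Point, numPoints)
		for i := range pts {
			pts[i] = Point{Val: float64(i ^ j), Ts: uint32(i * 10)}
		}
		in[j].Datapoints = pts
	}
	return in
}

func main() {
	in := makeInput(100, 1000)
	out := make([]Point, 0, 1000)
	// testing.Benchmark lets us time a closure outside `go test`;
	// repeated runs of the real benchmarks are what benchstat compares.
	res := testing.Benchmark(func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			out = crossMax(in, out)
		}
	})
	fmt.Println(res)
}
```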
mins := make([]schema.Point, 0, len(in[0].Datapoints))

crossSeriesMax(in, &maxes)
crossSeriesMax(in, out)
clever reuse of slices too :)
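The slice-reuse pattern being praised can be sketched like this (a generic illustration, not the PR's code): the caller owns a buffer, and the function truncates it with `out[:0]` so the backing array is reused across calls instead of reallocated.

```go
package main

import "fmt"

// fill reuses the caller's slice: out[:0] drops the length but keeps
// the capacity, so repeated calls allocate only when n outgrows cap.
func fill(out []int, n int) []int {
	out = out[:0]
	for i := 0; i < n; i++ {
		out = append(out, i*i)
	}
	return out
}

func main() {
	buf := make([]int, 0, 8)
	buf = fill(buf, 4)
	fmt.Println(buf, cap(buf)) // capacity survives the call
	buf = fill(buf, 3)
	fmt.Println(buf, cap(buf)) // same backing array, no new allocation
}
```

This is why the reviewed change passes `out` into crossSeriesMax rather than building a fresh `maxes` slice each time.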
I took down my branch as we don't really need it anymore.
Definitely. If the number of series is small enough, then they can all fit in cache and there is no issue with the old code.
Right, to be more clear: I was looking for cases where the new code would be slower, but I couldn't find such a case.
While debugging slow groupByTags performance I noticed that a lot of the time was in math.IsNaN. This function should be fast, so I figured it was due to CPU cache line misses. Some of the functions were easy to make more cache friendly, so I did.
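The cache-friendly shape described here can be sketched as follows: iterate one series at a time so the inner loop streams a single Datapoints slice sequentially. The types and the function name `crossSeriesMaxSketch` are assumptions standing in for the PR's actual crossSeriesMax:

```go
package main

import (
	"fmt"
	"math"
)

// Stand-ins for metrictank's point and series types (assumptions).
type Point struct {
	Val float64
	Ts  uint32
}

type Series struct {
	Datapoints []Point
}

// crossSeriesMaxSketch walks one series at a time. The inner loop reads
// a single Datapoints slice front to back, so the hardware prefetcher
// can stream it through cache instead of hopping between series at each
// timestamp, which is where math.IsNaN looked hot in the profile.
func crossSeriesMaxSketch(in []Series, out *[]Point) {
	*out = append((*out)[:0], in[0].Datapoints...)
	for j := 1; j < len(in); j++ {
		for i, p := range in[j].Datapoints {
			if !math.IsNaN(p.Val) && (math.IsNaN((*out)[i].Val) || p.Val > (*out)[i].Val) {
				(*out)[i].Val = p.Val
			}
		}
	}
}

func main() {
	in := []Series{
		{Datapoints: []Point{{Val: 1, Ts: 10}, {Val: math.NaN(), Ts: 20}}},
		{Datapoints: []Point{{Val: 4, Ts: 10}, {Val: 2, Ts: 20}}},
	}
	var out []Point
	crossSeriesMaxSketch(in, &out)
	for _, p := range out {
		fmt.Println(p.Ts, p.Val)
	}
}
```

Seeding `out` with a copy of the first series also means NaN gaps in it are filled by later series via the IsNaN check, matching the usual cross-series aggregation semantics.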