optimize large slice alloc in Stats #105
Conversation
```go
var (
	gstringsPool = sync.Pool{
		New: func() interface{} {
			// new() allocates and zero-initializes the struct.
			// The large data array within ethtoolGStrings will be zeroed.
			return new(ethtoolGStrings)
		},
	}
	statsPool = sync.Pool{
		New: func() interface{} {
			// new() allocates and zero-initializes the struct.
			// The large data array within ethtoolStats will be zeroed.
			return new(ethtoolStats)
		},
	}
)
```
In addition to this, can we give the caller the option to pass in the buffer? This implementation is really good for thread-safe operation, but it has overhead compared to a one-time static buffer allocation.
This is the change I was planning: #106; maybe we can combine the two of them. This change can be the default, and we would also expose those buffers so that callers can manage them themselves.
That's ok. The original
We can merge this anyway; I can rebase my changes on top of it, and then we can see whether it makes sense or not!
Fixes #104
Introduce a sync.Pool to optimize large object allocation