
Idea to speed up WAL loading #8638

@bboreham

Description


I observe that WAL loading can spend a lot of CPU time rejecting samples that are already covered by the m-mapped chunks.
Would it be reasonable to cache `s.mmappedChunks[len(s.mmappedChunks)-1].maxTime` in `Head.processWALSamples()`, and thereby avoid re-computing it on every `append()` call?

```
  Total:      14.90s     15.31s (flat, cum) 32.93%
...
   2331         50ms       50ms           func (s *memSeries) append(t int64, v float64, appendID uint64, chunkDiskMapper *chunks.ChunkDiskMapper) (sampleInOrder, chunkCreated bool) {
...
   2337            .       70ms           	c := s.head()
   2338            .          .
   2339        6.27s      6.27s           	if c == nil {
   2340        4.22s      4.28s           		if len(s.mmappedChunks) > 0 && s.mmappedChunks[len(s.mmappedChunks)-1].maxTime >= t {
   2341            .          .           			// Out of order sample. Sample timestamp is already in the mmaped chunks, so ignore it.
   2342        4.31s      4.31s           			return false, false
   2343            .          .           		}
   2344            .          .           		// There is no chunk in this series yet, create the first chunk for the sample.
   2345            .      120ms           		c = s.cutNewHeadChunk(t, chunkDiskMapper)
   2346            .       10ms           		chunkCreated = true
   2347            .          .           	}
```

(This profile is from Cortex built with Prometheus code from commit c7e525b)
