DuckDB checkpoint optimization and timing fix #159
Merged
erikdarlingdata merged 1 commit into dev on Feb 19, 2026
Conversation
- Add `checkpoint_threshold=1GB` to the DuckDB connection string to prevent auto-checkpoint stalls
- Add a manual `CHECKPOINT` after each collection cycle during idle time
- Change collector execution from parallel `Task.WhenAll` to sequential per-server
- Fix `using var` timing bug across all 16 collector files: change to explicit `using` blocks so the appender's `Dispose` (flush + connection close) is captured inside the DuckDB stopwatch, giving accurate timing in `collection_log`

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
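The `using var` timing bug can be illustrated with a minimal sketch. This assumes the DuckDB.NET appender API (`CreateAppender`, `CreateRow`, `AppendValue`, `EndRow`); the table name and method names are illustrative, not the PR's actual collector code:

```csharp
using System.Diagnostics;
using DuckDB.NET.Data; // assumed ADO.NET provider

// Before: `using var` keeps the appender alive until the end of the method,
// so Dispose (which flushes buffered rows and closes the appender) runs
// AFTER sw.Stop() — the recorded duration misses the flush cost.
static long TimeAppendBuggy(DuckDBConnection connection)
{
    var sw = Stopwatch.StartNew();
    using var appender = connection.CreateAppender("collection_log"); // illustrative table
    appender.CreateRow().AppendValue(42).EndRow();
    sw.Stop();                     // stops before Dispose
    return sw.ElapsedMilliseconds; // under-reports the real write time
}                                  // appender.Dispose() runs here, untimed

// After: an explicit using block disposes the appender inside the stopwatch
// window, so the flush and close are included in the measurement.
static long TimeAppendFixed(DuckDBConnection connection)
{
    var sw = Stopwatch.StartNew();
    using (var appender = connection.CreateAppender("collection_log"))
    {
        appender.CreateRow().AppendValue(42).EndRow();
    }                              // Dispose (flush + close) happens here, timed
    sw.Stop();
    return sw.ElapsedMilliseconds;
}
```

The flush inside `Dispose` is where most of the write cost lives, which is why the buggy version made collectors look faster in `collection_log` than they actually were.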
erikdarlingdata added a commit that referenced this pull request on Feb 19, 2026
Three fixes for "DuckDBOpen failed: Cannot open file" errors introduced by PR #159 (checkpoint) and PR #162 (compaction):

1. Timer initialization: `DateTime.MinValue` → `DateTime.UtcNow` prevents compaction/archival from firing on the very first collection cycle
2. Inline checkpoint: moved `CHECKPOINT` to the end of `RunDueCollectorsAsync`, using the existing connection pool instead of opening a separate DuckDB instance that conflicts via OS file locks
3. Atomic file swap: replaced the two-step `File.Move` in `CompactAsync` with `File.Replace` (a single OS operation, with no window where the database file is missing), plus retry logic for locked files and WAL cleanup

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
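The atomic-swap-with-retry idea (fix 3) can be sketched as follows. This is a hedged illustration, not the PR's actual `CompactAsync` code; the method name, retry counts, and backoff are illustrative:

```csharp
using System;
using System.IO;
using System.Threading;

// Swap a freshly compacted database file into place atomically.
static void SwapCompactedDatabase(string compactedPath, string databasePath)
{
    const int maxAttempts = 5;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            // File.Replace performs the swap as a single OS operation, so
            // there is no window where databasePath is missing — unlike
            // File.Delete followed by File.Move.
            File.Replace(compactedPath, databasePath, destinationBackupFileName: null);

            // Remove a stale write-ahead log belonging to the old file.
            string wal = databasePath + ".wal";
            if (File.Exists(wal)) File.Delete(wal);
            return;
        }
        catch (IOException) when (attempt < maxAttempts)
        {
            // The database file may still be locked by a closing connection;
            // back off briefly and retry.
            Thread.Sleep(200 * attempt);
        }
    }
}
```

`File.Replace` throws if the destination does not exist, so a first-ever compaction would still need a plain `File.Move` fallback; the retry loop handles the transient OS file locks that caused the "Cannot open file" errors.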
Summary
- Add `checkpoint_threshold=1GB` to the DuckDB connection string to prevent auto-checkpoint stalls during writes
- Add a manual `CHECKPOINT` after each collection cycle during idle time (15-32ms vs 3,920ms mid-write)
- Change collector execution from parallel `Task.WhenAll` to sequential per-server to eliminate DuckDB write contention
- Fix `using var` timing bug across all 16 collector files — the appender's `Dispose` (flush + connection close) is now captured inside the DuckDB stopwatch for accurate `collection_log` timing

Background
DuckDB's default 16MB WAL auto-checkpoint was triggering 2-3 second stop-the-world pauses mid-write. Combined with database bloat (a 3.8GB file for 35MB of real data), collectors were taking 3-7 seconds each and driving CPU to 50-60%. After these changes plus a manual database compaction, collectors run in 100-400ms with normal CPU usage.
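The two checkpoint-related changes can be sketched together. This assumes the DuckDB.NET ADO.NET provider (`DuckDBConnection`) and that it passes `checkpoint_threshold` through to DuckDB via the connection string, as the summary describes; the database file name is illustrative:

```csharp
using DuckDB.NET.Data; // assumed ADO.NET provider

// Raise the WAL checkpoint threshold so DuckDB's default 16MB
// auto-checkpoint never fires in the middle of a collector's write.
using var connection = new DuckDBConnection(
    "DataSource=collections.duckdb;checkpoint_threshold=1GB");
connection.Open();

// ... run the collection cycle ...

// Checkpoint manually while the collector is idle, when it costs tens of
// milliseconds instead of seconds mid-write.
using var cmd = connection.CreateCommand();
cmd.CommandText = "CHECKPOINT;";
cmd.ExecuteNonQuery();
```

The design trade is straightforward: a large threshold defers WAL truncation to a moment the application chooses, at the cost of a bigger WAL file between manual checkpoints.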
Test plan
- `dotnet build -c Debug` — 0 warnings, 0 errors
- `checkpoint_threshold` reads as 953.6 MiB in DuckDB config (1GB parses as 10⁹ bytes ≈ 953.7 MiB)

🤖 Generated with Claude Code