blobs: Reduce file size of benchmark tests #43208

craig[bot] merged 1 commit into cockroachdb:master
Conversation
ajwerner
left a comment
129k is a little 🤔 but sure.
pkg/blobs/bench_test.go, line 71 at r1 (raw file):
```go
remoteExternalDir: remoteExternalDir,
blobClient:        blobClient,
fileSize:          1 << 30, // 1 GB
```
How about putting both of these into a const, with a comment that it was made small for CI and that you change the const to test actually large files? I'm also fine with just skipping the test; the benchmark is less interesting when run on small files. With this change we'll keep running it, and still running it is good too because it prevents rot.
g3orgia
left a comment
Haha, I wanted to have at least 2 chunks, and one chunk is 128K.
pkg/blobs/bench_test.go, line 71 at r1 (raw file):
Previously, ajwerner wrote…
How about putting both of these into a const, with a comment that it was made small for CI and that you change the const to test actually large files? I'm also fine with just skipping the test; the benchmark is less interesting when run on small files. With this change we'll keep running it, and still running it is good too because it prevents rot.
Okiee, will extract it out into a constant and add a comment.
Release note: None
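The suggested refactor could look roughly like this: a minimal, standalone sketch in which the constant names and exact values are hypothetical (the real ones live in pkg/blobs/bench_test.go), illustrating a file size of two 128 KiB chunks kept small for CI.

```go
package main

import "fmt"

// Hypothetical names; the actual constants in pkg/blobs/bench_test.go may differ.
const (
	// chunkSize mirrors the 128 KiB streaming chunk size mentioned above.
	chunkSize = 128 << 10
	// benchFileSize is deliberately small (two chunks) so the benchmark stays
	// cheap in CI; raise it (e.g. to 1 << 30 for 1 GiB) to benchmark truly
	// large files.
	benchFileSize = 2 * chunkSize
)

func main() {
	fmt.Println(benchFileSize) // 262144 bytes, i.e. 256 KiB
}
```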
Force-pushed from a900e20 to 710fa23

bors r+
43023: sql: acceptance test and test cleanup for TimeTZ r=otan a=otan

Resolves #26097. This PR completes the TimeTZ saga!

* Added Java unit tests
* Removed some tests from the test whitelist
* Added the postgres regress suite.

Fixed a parse error to use New instead of Wrap, as the Wrap actually confuses the error message more.

Release note (sql change): This PR (along with a string of past PRs) allows the usage of TimeTZ throughout cockroach.

43208: blobs: Reduce file size of benchmark tests r=g3orgia a=g3orgia

As titled.

Release note: None

43221: backupccl: change the error code for "file already exists" errors r=andreimatei a=andreimatei

The backup code used to use a class 58 error code ("system error") for situations where a backup target already exists - DuplicateFile. Class 58 is the wrong one, particularly since we've started using 58 errors to represent errors about the state of the cluster (range unavailable, dropped connections), so clients should treat 58 errors as retriable (and, for example, the scaledata tests do). This patch switches to a new code in "Class 42 - Syntax or Access Rule Violation". It's hard to imagine that Postgres returns the 58 code for anything related to user input.

Release note (sql change): The error code for backups which would overwrite files changed from class 58 ("system") to class 42 ("Syntax or Access Rule Violation").

43240: storage/engine: small logging fixes r=petermattis a=petermattis

Change `Pebble.GetCompactionStats` to be prefixed with a newline to match the formatting of RocksDB. This ensures that the compaction stats display will not contain the log prefix, which was misaligning the table header.

Add a missing sort in `Pebble.GetSSTables`. This was causing the sstable summary log message to be much busier than for RocksDB, because `SSTableInfos.String` expects the infos to be sorted.

Move the formatting of `estimated_pending_compaction_bytes: x` into `RocksDB.GetCompactionStats`. The Pebble compaction stats already included this, and it is useful to see the estimated pending compaction bytes whenever the compaction stats are output.

Release note: None

Co-authored-by: Oliver Tan <otan@cockroachlabs.com>
Co-authored-by: Georgia Hong <georgiah@cockroachlabs.com>
Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
Co-authored-by: Peter Mattis <petermattis@gmail.com>
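The error-code change in #43221 hinges on the fact that the first two characters of a five-character SQLSTATE code identify its class, and clients often key retry behavior off that class. A minimal sketch (the `classOf` helper is hypothetical, not CockroachDB's API) of how a client might classify codes:

```go
package main

import "fmt"

// classOf returns the two-character SQLSTATE class of a five-character code.
// Class "58" is "System Error" (which some clients, like the scaledata tests
// mentioned above, treat as retriable), while class "42" is "Syntax or Access
// Rule Violation", which indicates a problem with the user's request.
func classOf(sqlstate string) string {
	if len(sqlstate) < 2 {
		return ""
	}
	return sqlstate[:2]
}

func main() {
	fmt.Println(classOf("58000")) // system error class: retrying may help
	fmt.Println(classOf("42601")) // access/syntax class: retrying will not help
}
```

Under this classification, moving the "backup target already exists" error from class 58 to class 42 stops well-behaved clients from uselessly retrying it.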
Build succeeded