
PERF: RemoveAll Tests and ensure we return a deterministic remove all count + batching #99

Merged
niemyjski merged 1 commit into bugfix/cache-client-list-item-sliding-cache-expiration from perf/cache-remove-all on Apr 24, 2025

Conversation

@niemyjski (Member) commented Apr 23, 2025

@niemyjski niemyjski requested a review from Copilot April 23, 2025 20:33
@niemyjski niemyjski self-assigned this Apr 23, 2025
Copilot AI left a comment
Pull Request Overview

This PR introduces performance improvements by ensuring that the RemoveAll functionality returns a deterministic count and uses batching for key deletion. Key changes include:

  • Adding new tests for RemoveAll and RemoveAllKeys in both RedisHybridCacheClientTests and RedisCacheClientTests.
  • Implementing batching logic for key deletion in RedisCacheClient.cs with a configurable batch size.
  • Adding extension methods for batching in IEnumerable and IAsyncEnumerable to support the changes.
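The new extensions in EnumerableExtensions.cs are not shown in this thread; below is a minimal sketch of what a `Batch` extension over `IEnumerable<T>` typically looks like (the names and exact behavior here are assumptions for illustration, not the PR's actual code):

```csharp
using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Hypothetical sketch: lazily group a sequence into fixed-size arrays,
    // emitting a final partial batch if the count doesn't divide evenly.
    public static IEnumerable<T[]> Batch<T>(this IEnumerable<T> source, int batchSize)
    {
        if (batchSize <= 0)
            throw new ArgumentOutOfRangeException(nameof(batchSize));

        var batch = new List<T>(batchSize);
        foreach (var item in source)
        {
            batch.Add(item);
            if (batch.Count == batchSize)
            {
                yield return batch.ToArray();
                batch.Clear();
            }
        }

        if (batch.Count > 0)
            yield return batch.ToArray();
    }
}
```

A shape like this is what lets the delete path hand `KeyDeleteAsync` fixed-size arrays of keys instead of one key at a time.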

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated no comments.

File | Description
- tests/Foundatio.Redis.Tests/Caching/RedisHybridCacheClientTests.cs: Added tests for RemoveAll and RemoveAllKeys functionality
- tests/Foundatio.Redis.Tests/Caching/RedisCacheClientTests.cs: Added tests for RemoveAll and RemoveAllKeys functionality
- src/Foundatio.Redis/Extensions/EnumerableExtensions.cs: New batching extension methods for IEnumerable and IAsyncEnumerable
- src/Foundatio.Redis/Cache/RedisCacheClient.cs: Updated RemoveAllAsync to use batching and return a deterministic delete count
Comments suppressed due to low confidence (1)

src/Foundatio.Redis/Cache/RedisCacheClient.cs:117

  • [nitpick] Consider renaming the iteration variable 'redisKeys' to 'keyBatch' (or a similar name) to clearly indicate that it represents a batch of keys.

```csharp
foreach (var redisKeys in keys.Where(k => !String.IsNullOrEmpty(k)).Select(k => (RedisKey)k).Batch(batchSize))
```

Comment on lines +102 to +107

```diff
 var seen = new HashSet<RedisKey>();
 await foreach (var key in server.KeysAsync().ConfigureAwait(false))
-    await Database.KeyDeleteAsync(key).AnyContext();
+    seen.Add(key);

 foreach (var batch in seen.Batch(batchSize))
     deleted += await Database.KeyDeleteAsync(batch.ToArray()).AnyContext();
```
Member Author


We could do this to get a very low memory footprint, but then the key count isn't deterministic because SCAN can return duplicate keys:

```csharp
await foreach (var redisKeys in server.KeysAsync().BatchAsync(batchSize).ConfigureAwait(false))
    deleted += await Database.KeyDeleteAsync(redisKeys).AnyContext();
```
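To see why the streaming variant overcounts, here is a standalone toy (not the PR's code): a SCAN-style enumeration can return the same key more than once, so summing deletes per streamed key inflates the total, while deduplicating first gives a stable count:

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for a SCAN stream that happens to return "a" and "b" twice.
var scanned = new[] { "a", "b", "a", "c", "b" };

int streamedCount = scanned.Length;                          // 5, counts duplicates
int deterministicCount = new HashSet<string>(scanned).Count; // 3, one per unique key

Console.WriteLine($"streamed={streamedCount} deduplicated={deterministicCount}");
```

This is the trade-off the merged code makes: it holds the seen keys in memory so the returned delete count is deterministic.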


```csharp
int count = 0;
foreach (var redisKeys in keys.Where(k => !String.IsNullOrEmpty(k)).Select(k => (RedisKey)k).Batch(batchSize))
    foreach (var key in redisKeys)
```

Member Author
The batch size matches the size the StackExchange.Redis client uses internally for fetching pages of keys, so I just reused it. A higher count like 1k went faster, but this way we're getting batches and sending batches with better alignment.

@niemyjski niemyjski requested a review from ejsmith April 23, 2025 20:41
@niemyjski niemyjski merged commit 4167562 into bugfix/cache-client-list-item-sliding-cache-expiration Apr 24, 2025
1 of 3 checks passed
@niemyjski niemyjski deleted the perf/cache-remove-all branch April 24, 2025 04:52