@pentp pentp commented Jan 3, 2025

Remove some allocations and interface calls.

@dotnet-policy-service dotnet-policy-service bot added the community-contribution label Jan 3, 2025
@dotnet-policy-service

Tagging subscribers to this area: @dotnet/area-extensions-caching
See info in area-owners.md if you want to be subscribed.

mgravell commented Jan 7, 2025

The main win here is not eagerly allocating the options lists, is that right? Plus using the list-specific iterator instead of IEnumerable&lt;T&gt;/IEnumerator&lt;T&gt;. Definitely worthwhile - however, I'm not a huge fan of exposing the naked field, even as internal. I wonder if we could have a private field and an internal getter that returns the span via CollectionsMarshal, returning a default/empty span if the list is absent. Alternatively, maybe an internal HasFoos => _foos is { Count: > 0 }; and just add an if (options.HasFoos)? (Although then we still need to think about the enumerator - the span approach avoids that entirely, should be more direct than the custom list iterator, and additionally skips bounds checks.)
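A sketch of the span-based accessor being suggested (the `_foos`/`Whatever` names follow the snippet later in the thread; the containing type name is a placeholder, and CollectionsMarshal.AsSpan is only available on .NET 5+):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

public sealed class Whatever { }

public sealed class CacheOptionsSketch
{
    private List<Whatever>? _foos;

    // Public surface keeps the lazy allocation on first use.
    public IList<Whatever> Foos => _foos ??= new List<Whatever>();

    // Internal fast path: a span over the list's backing array, or an
    // empty span when the list was never allocated. Iterating a span
    // needs no enumerator object and elides bounds checks.
    internal ReadOnlySpan<Whatever> FoosSpan =>
        _foos is null ? ReadOnlySpan<Whatever>.Empty : CollectionsMarshal.AsSpan(_foos);

    // Alternative guard: a cheap existence check before any iteration.
    internal bool HasFoos => _foos is { Count: > 0 };
}
```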

pentp commented Jan 8, 2025

Yes, the allocation savings come from allocating the options lists on demand and from avoiding an enumerator allocation each time the options are used. And from skipping the options allocation entirely in MemoryDistributedCache.Set.

I'll try the Span-based variant - it could actually generate less code compared to List&lt;T&gt;.Enumerator.
Nvm, CollectionsMarshal is only available on .NET 5+.

mgravell commented Jan 8, 2025

> Yes, the allocation savings come from on-demand allocation of options lists and avoiding an enumerator allocation each time when using options. And skipping options allocation entirely in MemoryDistributedCache.Set.
>
> I'll try the Span based variant - it could actually generate less code compared to List&lt;T&gt;.Enumerator. Nvm, CollectionsMarshal is only available on .NET 5+.

Since this is an internal API and isn't being used via IVTA hackery except by the tests, we could actually cheat by having the property typed differently on different runtimes, but perhaps we should keep things simple:

private List<Whatever>? _foos;
public IList<Whatever> Foos => _foos ??= []; // lazy alloc
internal List<Whatever>? FoosDirect => _foos; // note never allocs

This adds a line of code, but is IMO significantly preferable to exposing the naked field.
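For the consuming side, a sketch of how a hot path might use the FoosDirect accessor above without ever triggering the lazy allocation (the containing type and the Process/Apply helpers are placeholders):

```csharp
using System.Collections.Generic;

public sealed class Whatever { }

public sealed class OptionsSketch
{
    private List<Whatever>? _foos;
    public IList<Whatever> Foos => _foos ??= new List<Whatever>();
    internal List<Whatever>? FoosDirect => _foos; // never allocates
}

public static class Consumer
{
    public static void Process(OptionsSketch options)
    {
        // Never forces the lazy allocation that reading Foos would.
        List<Whatever>? foos = options.FoosDirect;
        if (foos is not null)
        {
            // foreach over List<T> binds to the struct List<T>.Enumerator,
            // so no IEnumerator<T> box is allocated.
            foreach (Whatever foo in foos)
            {
                Apply(foo);
            }
        }
    }

    private static void Apply(Whatever foo) { /* placeholder */ }
}
```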

pentp commented Jan 17, 2025

This should be ready for merging if there are no other concerns.

@pentp pentp requested a review from mgravell February 4, 2025 01:07
entry.AbsoluteExpirationRelativeToNow = options.AbsoluteExpirationRelativeToNow;
entry.SlidingExpiration = options.SlidingExpiration;
entry.Size = value.Length;
entry.Value = value;
should this use entry.Dispose() for parity with the change applied to MemoryCacheExtensions in this same PR?

@pentp pentp Feb 5, 2025


In practice entry.Dispose() would be slightly more efficient, but we have to use `using` because someone could enable TrackLinkedCacheEntries in the MemoryCache options passed to this class.

The only MemoryCacheExtensions.Set overload I changed was the one that sets only Value (which always succeeds); the other, more complex overloads could throw from validation checks and thus need `using`.


That raises complex questions about what the correct behaviour even is if an exception is thrown and we've only done half the work - could that result in us committing something incorrect?


No, until Value is set, the entry is incomplete and Dispose won't store it in cache. TrackLinkedCacheEntries logic still needs to run however, because it needs to restore the state changed by entry allocation.
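A sketch of the pattern being described, using the real IMemoryCache.CreateEntry/ICacheEntry API (the method shape and the key/value names are placeholders, not the PR's actual code):

```csharp
using Microsoft.Extensions.Caching.Memory;

internal static class CacheSetSketch
{
    internal static void Set(IMemoryCache cache, object key, byte[] value)
    {
        // `using` guarantees Dispose runs even if a property setter throws
        // (e.g. option validation in the more complex overloads).
        using ICacheEntry entry = cache.CreateEntry(key);
        entry.Size = value.Length;

        // Per the discussion: until Value is assigned the entry is
        // incomplete and Dispose will not commit it to the cache, so a
        // half-built entry is never stored. Dispose still restores the
        // ambient tracking scope used by TrackLinkedCacheEntries.
        entry.Value = value;
    }
}
```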

@mgravell mgravell left a comment


Looks good to me; added some commentary and questions, but nothing that looks like a hard blocker

pentp commented Mar 5, 2025

I resolved the merge conflicts again, could this be merged?

{
-    return (TItem?)(cache.Get(key) ?? default(TItem));
+    cache.TryGetValue(key, out object? value);
+    return value is null ? default : (TItem)value;

Maybe clearer?

return cache.TryGetValue(key, out object? value) && value is not null ? (TItem)value : default;

@pentp pentp requested a review from ericstj April 10, 2025 16:33
jozkee commented Apr 14, 2025

/ba-g all unknown failures are DeadLetters.

@jozkee jozkee merged commit 840fc4a into dotnet:main Apr 14, 2025
78 of 85 checks passed
@pentp pentp deleted the memcache-opts branch April 14, 2025 15:24
@github-actions github-actions bot locked and limited conversation to collaborators May 15, 2025