Remove leaky lru_cache #2277
Conversation
sloria
left a comment
good catch! i hadn't considered that side effect of context objects remaining in the global LRU cache. and i generally agree that the CPU cost in typical usage scenarios is small.
i'm good with this, but i'll give a moment to @lafrech @deckar01 to look if they want. otherwise i'll plan to merge and release this over the weekend
in the meantime, do you mind adding yourself to
lafrech
left a comment
I agree with the rationale.
Thanks.
TLDR: 👍 I tested this using our dump benchmark. Indexing
The benchmark shows ~30% of the hot path is tuple hashing, at least in
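For reference, a minimal way to reproduce that kind of profile reading (sketch only; `BenchSchema` and the workload are illustrative stand-ins for the actual benchmark):

```python
import cProfile
import pstats

from marshmallow import Schema, fields

class BenchSchema(Schema):  # hypothetical stand-in for the benchmark schema
    name = fields.Str()

schema = BenchSchema()
data = {"name": "x"}

with cProfile.Profile() as prof:  # context-manager form needs Python 3.8+
    for _ in range(10_000):
        schema.dump(data)

# On versions that still have the lru_cache, tuple hashing from the cache
# key construction shows up near the top of this listing.
pstats.Stats(prof).sort_stats("tottime").print_stats(10)
```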
Thanks for the benchmark! I updated
Problem
After calling the endpoint, the `db_session` instance was not freed, causing a warning to be emitted. The reason is that the `MySchema` instance is kept in a global LRU cache. Calling the endpoint multiple times will eventually release the older `db_session` instances, though.
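A minimal sketch of the reported behavior, assuming a hypothetical request handler and a `MySchema` that holds a per-request `db_session` (names taken from the report; the handler shape is illustrative):

```python
import weakref

from marshmallow import Schema, fields

class DBSession:
    """Stand-in for a real per-request database session."""

class MySchema(Schema):
    name = fields.Str()

    def __init__(self, db_session=None, **kwargs):
        super().__init__(**kwargs)
        self.db_session = db_session

def endpoint():
    session = DBSession()
    schema = MySchema(db_session=session)
    schema.dump({"name": "x"})  # goes through the internally cached helpers
    return weakref.ref(session)

ref = endpoint()
# On affected versions the schema is still referenced by the module-level
# cache, which keeps db_session alive too; with this fix the ref is dead.
print(ref())  # live DBSession before the fix, None after
```

Analysis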
The `@lru_cache(maxsize=8)` decorator was used on a method, causing instances of `self` to be cached and to outlive their intended scope. This led to unexpected behavior where instances persisted beyond a single web request, retaining references and consuming memory unnecessarily. This problem was called out by PyCQA/flake8-bugbear#310 but ignored via `noqa`.
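A self-contained illustration of that failure mode (toy class, not marshmallow's actual code): the cache lives on the function object, which is module-global, and its keys include `self`, so cached instances are never collected:

```python
import weakref
from functools import lru_cache

class Leaky:
    @lru_cache(maxsize=8)  # flake8-bugbear flags this pattern as B019
    def compute(self, x):
        return x * 2

obj = Leaky()
obj.compute(1)
ref = weakref.ref(obj)
del obj
# The cache key includes `self`, and the cache outlives the instance:
print(ref() is None)               # False: still alive via the cache
print(Leaky.compute.cache_info())  # CacheInfo(hits=0, misses=1, maxsize=8, currsize=1)
```

Options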
- `cachetools.cachedmethod` (requires a third-party dependency; see the sketch after this list)
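A sketch of that option; `cachetools.cachedmethod` takes a callable that returns the cache, so the cache can live on the instance and be freed along with it (class and method names here are illustrative):

```python
import operator

from cachetools import LRUCache, cachedmethod

class SchemaLike:
    def __init__(self):
        # Per-instance cache: garbage-collected together with the
        # instance, unlike a module-global lru_cache.
        self._cache = LRUCache(maxsize=8)

    @cachedmethod(operator.attrgetter("_cache"))
    def resolve(self, name):
        return name.upper()  # stand-in for the real cached computation
```

Proposal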
The LRU cache was introduced in #1309 (that PR reported a 3% performance increase on the test suite). I'm not sure the LRU cache actually has that much of an effect in real-world scenarios:
- `maxsize` is exhausted, leading to cache misses (see the sketch after this list).
- `load` causes 3 misses, 0 hits (warm -> 3 hits).
- `dump` causes 2 misses, 0 hits (warm -> 3 hits).
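To illustrate the first point, a standalone toy with the same `maxsize` (the real cache keys were more complex; this only shows the eviction behavior): once there are more distinct keys than cache slots, LRU eviction makes every lookup a miss, even on repeated passes over the same inputs:

```python
from functools import lru_cache

@lru_cache(maxsize=8)
def resolve(schema_name):
    return schema_name.upper()  # stand-in for the cached work

names = [f"schema{i}" for i in range(16)]  # more schemas than cache slots
for _ in range(2):  # two full passes over the same inputs
    for name in names:
        resolve(name)

# Every call evicted an entry that the next pass needed again:
print(resolve.cache_info())  # CacheInfo(hits=0, misses=32, maxsize=8, currsize=8)
```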