Conversation

@JimB123 (Contributor) commented Aug 13, 2020

Overview

When Redis breaches the maxmemory threshold, freeMemoryIfNeededAndSafe is used to perform evictions until the memory situation is corrected. This is a synchronous process and can cause long periods of unresponsiveness as well as wild variation in command latencies.

Extreme use case

Consider a large Redis instance with 268MM keys. This requires a hash table that's 2GB in size (2^28 = 268,435,456 buckets at 8 bytes per pointer). Add another 1MM keys and the hash table will require expansion: Redis will allocate a new 4GB block of memory (2^29 buckets) for the hash table. (Some time later, after rehashing is complete, the old 2GB table is released.)

If the instance was near its maxmemory setting before the rehash, we could suddenly find ourselves 4GB over the limit. At this point, Redis will stop everything while running the eviction algorithm until 4GB of memory has been freed. Unresponsiveness in this case might reasonably last several minutes.

Solution

Understand that when eviction is happening, Redis has already breached maxmemory. Redis does NOT prevent us from breaching maxmemory. At this point, the intent is to recover - however we don't need to recover instantly.

This update limits eviction processing to 500us.

  • This is enough time to evict a number of keys (definitely more than 1).
  • This is small enough to limit fluctuations in latencies due to eviction spikes.

Given that eviction processing is run before every command (read/write/other), Redis will quickly return to its nominal memory usage. Also, this update creates a timer event which will continue to evict in between client requests until normal memory usage is restored.
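For illustration, the bounded loop could look roughly like this (a minimal sketch, not the final code: getMonotonicUs/elapsedUs are the monotonic-clock helpers from #7644, while overMaxmemory and evictOneKey are hypothetical stand-ins for the real memory check and single-key eviction):

```c
/* Minimal sketch of the time-bounded eviction loop. overMaxmemory()
 * and evictOneKey() are hypothetical stand-ins for the real logic. */
#define EVICTION_TIME_LIMIT_US 500
#define KEYS_PER_TIME_CHECK   16

int performEvictions(void) {
    if (!overMaxmemory()) return EVICT_OK;       /* nothing to do */

    monotime start = getMonotonicUs();
    int keys_freed = 0;
    while (overMaxmemory()) {
        if (!evictOneKey()) return EVICT_FAIL;   /* nothing left to evict */
        /* Consult the clock only every 16 evictions to keep overhead low. */
        if (++keys_freed % KEYS_PER_TIME_CHECK == 0 &&
            elapsedUs(start) >= EVICTION_TIME_LIMIT_US)
            return EVICT_RUNNING;                /* budget spent; resume soon */
    }
    return EVICT_OK;                             /* back under maxmemory */
}
```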

Given a large eviction event like mentioned above, the effects will be:

  • Memory usage will take a little longer to return to normal
  • Command processing will be minimally impacted
  • CPU will jump to 100% (like today), as all idle time will be used for evictions until memory usage has returned to normal
  • Redis will remain fully responsive

For more common, smaller, eviction events:

  • CPU will behave much as it does without the update (jumping to 100% for a short time)
  • Latencies will be smoothed out. Rather than having 1 command with an extra 10ms latency, 1-20 commands would have an extra 500us latency (and the dead time between commands would be utilized).
  • Clients will see more predictable performance

Refactoring

Included in this update is a slight refactoring. The original function freeMemoryIfNeeded is wrapped by freeMemoryIfNeededAndSafe (awkward at best). Although these routines are named as if they "free memory", the only technique they employ is eviction - a very general-purpose name applied to a very specific function.

This update replaces the above 2 routines with performEvictions, which is more in accordance with what the function actually does. "IfNeeded" and "AndSafe" are not needed, as it should be obvious that performEvictions won't/shouldn't evict when unnecessary or unsafe.

The new performEvictions function has 3 return codes - EVICT_OK, EVICT_RUNNING, & EVICT_FAIL (nothing left to evict). The processCommand function will only reject (write) commands when EVICT_FAIL is returned.
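For illustration, the command path can consume these return codes along these lines (a simplified sketch of the processCommand check; is_denyoom_command stands for the command's CMD_DENYOOM flag):

```c
/* Sketch: inside processCommand(), before executing the command. */
if (server.maxmemory && !server.lua_timedout) {
    int out_of_memory = (performEvictions() == EVICT_FAIL);
    /* EVICT_OK and EVICT_RUNNING both let the command proceed; only a
     * genuine inability to evict rejects deny-OOM (write) commands. */
    if (out_of_memory && is_denyoom_command) {
        rejectCommand(c, shared.oomerr);
        return C_OK;
    }
}
```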

References updated.

Dependencies

This update is dependent on #7644. The monotonic clock is used for timing the eviction loop. Once that is approved and merged, this update will be rebased.

Caveat

Eviction processing can still take a long time when large values (hash/set/etc.) are present and server.lazyfree_lazy_eviction is false. However, it's generally unlikely that large hashes would be evicted; in the case that one was, performance would be similar to existing performance.

@guybe7 (Collaborator) commented Aug 13, 2020

@JimB123 nice PR, eviction-related latency is a painful issue indeed.
IIUC each processCommand will evict fewer keys than it used to (because of the 500us limit).
Isn't it possible that the time-limited performEvictions won't "keep up" with a memory-consuming stream of commands?

@oranagra (Member) left a comment

@JimB123 Thanks for another important improvement.

This is indeed a painful matter, but i'm a bit worried about two things:

  1. As Guy mentioned, there might be cases where the client traffic manages to overcome the eviction efforts and the memory grows and grows despite it. I think we may need to add some mechanism that gradually increases the timeout and makes it more aggressive when it detects that despite the background eviction, the memory keeps growing.
  2. Till now the eviction mechanics were simple, and i suppose it was argued that the implications are predictable and the limitations are acceptable (a compromise between a simple and limited design, and one that attempts to achieve more by being more complicated). This change makes redis less predictable, and with the addition of a mechanism like i suggested above we add even more complexity.

Still, my feeling is that it would be worth it, it's just that i have a concern.

p.s. i suppose that sooner or later we'll find places where we do want to execute the eviction loop to run to completion, or be more aggressive. maybe it would be a good idea to pass the time limit as an argument, in some contexts we'll want it to be infinite, in others just make it run a bit longer (rather than run a loop that calls performEvictions).

Review comment on src/evict.c, lines 668 to 684 (Outdated), from a Member:

i understand why you removed the loop (latency), but in that case, as long as there's something in the lazy free list, we should return PENDING, not FAIL.
and if we do that, maybe there's no need for the usleep that's still here.

@JimB123 (Contributor, Author) replied:

My logic:

  • If we've exited the main loop above with EVICT_FAIL, the main loop has exhausted all evictable items. Though there are some evictable items still being processed by lazy free, we have no idea how many, or whether there are enough to overcome the current memory deficiency. The safe thing to do in this situation is to return EVICT_FAIL. (Even if we did return EVICT_OK, it's almost certain that one of the next commands would fail anyway, so it becomes a matter of trading one failure for another.)
  • I had considered removing the sleep entirely, but decided that it was better to slightly slow the main thread - giving some time to the background thread to perform evictions. This will perform better than the existing code and is a bit more conservative than removing the sleep altogether. ALSO - if the sleep is removed, there's little point in calling getMemoryState and therefore no reason to check for bioPendingJobs (so the entire if clause can be removed).
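For reference, the branch under discussion looks roughly like this (a sketch of the logic described in the two points above; bioPendingJobsOfType and getMaxmemoryState are existing helpers, the surrounding structure is simplified):

```c
/* Sketch: the main loop has exhausted all evictable keys. If the
 * lazyfree background thread still has pending jobs, briefly yield
 * to it and re-check memory before concluding EVICT_FAIL. */
result = EVICT_FAIL;
if (bioPendingJobsOfType(BIO_LAZY_FREE)) {
    usleep(eviction_time_limit_us);   /* give the background thread time */
    if (getMaxmemoryState(NULL, NULL, NULL, NULL) == C_OK)
        result = EVICT_OK;            /* background frees were enough */
}
```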

@oranagra oranagra added the state:major-decision Requires core team consensus label Aug 13, 2020
@oranagra oranagra requested review from soloestoy and yossigo August 13, 2020 09:36
@JimB123 (Contributor, Author) commented Aug 13, 2020

@oranagra @guybe7 Thanks for reviewing and commenting!

  1. As Guy mentioned, there might be cases where the client traffic manages to overcome the eviction efforts and the memory grows and grows despite it. I think we may need to add some mechanism that gradually increases the timeout and makes it more aggressive when it detects that despite the background eviction, the memory keeps growing.

I don't think the extra complexity is necessary. Regardless of the time check, the loop only checks the time after every 16 evictions. This means that for every command, (at least) 16 keys are evicted.

  • You could construct a scenario where 1 very large key was being created, and only 16 very small keys are deleted. But you can't maintain this scenario over time. Over time, 1 average key is created, and (at least) 16 average keys are deleted.
  • You could construct a scenario where the database was running continuous MSET operations, creating more keys than can be evicted. But this would be difficult: you'd need to ensure that only MSET commands are running, and you'd need to pipeline them (probably across multiple clients) to keep the CPU at 100%.
  • Usually there are more read operations than writes - and eviction processing runs before every read command as well.
  • Usually the CPU is not pegged at 100% - and evictions will run during all of the dead times.

Given this, the risk of client traffic overcoming eviction efforts is extremely small.

  2. Till now the eviction mechanics were simple, and i suppose it was argued that the implications are predictable and the limitations are acceptable (a compromise between a simple and limited design, and one that attempts to achieve more by being more complicated). This change makes redis less predictable, and with the addition of a mechanism like i suggested above we add even more complexity.

It should be considered that most often, this will make Redis more predictable. With today's mechanism, latency can vary wildly, for even the simplest of commands. A simple GET command which usually takes <1ms might suddenly take several seconds to return. Any operation which is performed as a single, large, spike operation will negatively impact predictability. By smoothing evictions over time, predictability is improved.

Still, my feeling is that it would be worth it, it's just that i have a concern.

p.s. i suppose that sooner or later we'll find places where we do want to execute the eviction loop to run to completion, or be more aggressive. maybe it would be a good idea to pass the time limit as an argument, in some contexts we'll want it to be infinite, in others just make it run a bit longer (rather than run a loop that calls performEvictions).

I can't really think of a case for this. Most of Redis should just want to allow the eviction process time to run. What that eviction process actually does should be up to the eviction process itself. I think it's more likely that we'd want to make the eviction process smarter - to potentially run longer under those unlikely scenarios above - but that logic should be within the eviction process, not coming from outside.

Though I don't think it should ever be needed, the existing (run to completion) mechanism can easily be coded as:

while (performEvictions() == EVICT_RUNNING) {}

@madolson (Contributor):

Just to add my thoughts:

  1. As Guy mentioned, there might be cases where the client traffic manages to overcome the eviction efforts and the memory grows and grows despite it. I think we may need to add some mechanism that gradually increases the timeout and makes it more aggressive when it detects that despite the background eviction, the memory keeps growing.

I think these basically have to be constructed situations. Before every command you'd have to be inserting more data than can be evicted in 500us in steady state. I think this is a very unlikely use case since it would likely be overwhelming Redis anyway. You can also construct other scenarios where we blow past maxmemory, since there are read commands that affect memory consumption. We can also guard against it with a configuration if a user really wants that, which is simpler.

@madolson previously approved these changes Aug 13, 2020, leaving a comment:

I already reviewed this so just adding this for accounting.

@soloestoy (Contributor) commented Aug 14, 2020

We have a similar feature, but simpler:

  1. In freeMemoryIfNeededAndSafe we don't evict until used_memory < maxmemory; we allow users to use 10% more than maxmemory, so freeMemoryIfNeededAndSafe can return quickly. That's because normally used_memory won't grow too fast: if Redis contains 268M keys, exceeding maxmemory by 2GB is not a big deal.

  2. Then in serverCron we call freeMemoryIfNeededAndSafe with a timeout limitation, evicting keys until used_memory < maxmemory.

Our users have not seen eviction latency spikes since this feature was implemented.

@soloestoy (Contributor):

BTW, if we just return an error after 500us, users may be confused - for example, if they set maxmemory_policy to allkeys-random but get an OOM error, it's a little bit odd.

@JimB123 (Contributor, Author) commented Aug 14, 2020

@soloestoy Thanks for reviewing and your comments.

If I understand your comment correctly, you're saying that up until 10% over the limit, freeMemoryIfNeededAndSafe can return quickly - meaning that you've implemented a time limit, and after 10% it's unlimited? I think this proposal is very similar, and your implementation proves that a shorter cycle coupled with an eviction cron will resolve issues.

However I suggest that the submitted solution is better for several reasons:

  • The 10% level is arbitrary. You mention that exceeding memory by 2GB is not a problem. In the scenario I presented with 268MM keys, memory will jump by 4GB when the hash table is resized. 4GB is a big chunk of memory, no matter how you look at it.
  • Even after exceeding 10% (as is possible with a 4GB overage), you don't really want to make the server non-responsive. It's quite possible that you've never observed memory jumping over 10% (as this would be very unusual), but if it did, we don't want non-responsive behavior.
  • You suggested serverCron. However, that only runs every 100ms. In periods where the machine is largely idle, this would extend recovery time by a very large amount. The proposed solution uses a very aggressive timer (0ms) which will spin the event loop if nothing else is going on (see the sketch below). If there is other activity, the extra 500us impact on each event loop pass will not create measurable impact - it's the same impact that occurs on each command. Of course, once the memory situation is resolved, this timer is deactivated and causes no processing overhead.
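A sketch of that aggressive timer, using the existing ae event-loop API (a time proc's return value is the delay in milliseconds until it fires again, or AE_NOMORE to remove it):

```c
/* Sketch: a 0 ms timer that keeps evicting between client requests
 * and deactivates itself once memory is back under maxmemory. */
static int evictionTimeProc(struct aeEventLoop *eventLoop, long long id,
                            void *clientData) {
    UNUSED(eventLoop); UNUSED(id); UNUSED(clientData);
    if (performEvictions() == EVICT_RUNNING)
        return 0;          /* still over the limit: run again ASAP */
    return AE_NOMORE;      /* recovered: remove this timer */
}

/* Armed when an eviction cycle ends with EVICT_RUNNING: */
aeCreateTimeEvent(server.el, 0, evictionTimeProc, NULL, NULL);
```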

Re:

BTW, if we just return an error after 500us, users may be confused - for example, if they set maxmemory_policy to allkeys-random but get an OOM error, it's a little bit odd.

I think you misunderstood this. An error is returned ONLY if there are no more evictable keys. After 500us of eviction, client commands will execute as if memory were OK. Meanwhile, Redis will continue to evict for each command processed and in a timerProc.

@oranagra (Member):

I agree with everything Jim wrote above.

if we do add some mode of aggressiveness to protect against abuse, it should not be a boolean thing changing behaviour above a certain threshold, but rather a gradual thing.
And it should not be dependent on the amount of overuse (that can jump at rehashing or other scenarios), but rather a factor of how the eviction is making progress towards the target (i.e. if it doesn't make progress it should gradually become more aggressive)..

Another point I wanna raise before I forget: like the test that this PR changes, there may be others (tests, client apps, or modules) that look at the memory overuse to decide something.
I can't put my finger on them right now, but I'm sure I saw both modules and tests that might get broken.
I'm not saying that we should avoid that change. The advantages are far greater than the disadvantages.
But we'll need to expose new INFO fields, and new module "context flags".

@yossigo (Collaborator) commented Aug 16, 2020

@JimB123 Thank you for this contribution, very interesting indeed.

Your analysis of how this change doesn't increase the risk of eviction not keeping up is very convincing. With this kind of change, I'm often concerned about pathological cases that turn out to be more common than expected in production. But if I understand correctly, this (or a similar) mechanism has some production mileage already - is that correct?

Another concern is the system-level impact of taking longer to align with maxmemory. For example, any environment that hosts multiple redis-server processes (e.g. a Redis Cluster with a process per core).

Intuitively, I feel we may be missing something like having EVICTION_PERIOD_US be configurable, or even dynamically tuned to reflect the memory pressure level. I understand that technically once we're in the EVICT_RUNNING state eviction keeps going, however:

  1. Evicting small chunks while interleaved with everything else on the main loop is definitely less efficient and thus takes longer.
  2. Being impacted by user traffic and specific workload patterns makes it even less predictable.

Re-reading the discussion above, I believe this is what @oranagra refers to in the last comment as well but please correct me if I'm wrong.

@JimB123 (Contributor, Author) commented Aug 16, 2020

@yossigo Thanks for reviewing

Another concern is the system-level impact of taking longer to align with maxmemory. For example, any environment that hosts multiple redis-server processes (e.g. a Redis Cluster with a process per core).

An interesting consideration. However, if we consider a system where memory is split between multiple Redis instances, each (smaller) instance is using a smaller portion of total system memory. Memory overages are expected to be somewhat proportional to memory allocation. (The smaller the total memory, the smaller the possible hash-table resize.) So if all of the instances on a host experienced simultaneous overages, that would likely be similar to one large instance experiencing a worst-case overage. In most situations, multiple Redis instances on a host would tend to limit/buffer the impact of transient overages.

Intuitively, I feel we may be missing something like having EVICTION_PERIOD_US be configurable, or even dynamically tuned to reflect the memory pressure level.

I agree that there's opportunity to add the ability for dynamic adjustment of the eviction period. However, I have no evidence to believe that this extra complexity is necessary. If, at a later date, we decide to add the capability for dynamic tuning, I agree with @oranagra that such a mechanism should not be dependent on the amount of the overage, but rather ensuring that memory is progressing towards a goal.

Alternatively, adding a config parameter for the eviction period is a simple option, but in discussion with @madolson, we felt it unlikely that this would add value. Extra knobs and options make configuration more complex, and adding something of dubious value should be avoided. But this is certainly an option if there is enough concern around a fixed value. Setting the eviction period to something like 15 minutes would make the process similar to what it is today. It's also an option to choose a higher fixed value.

I understand that technically once we're in the EVICT_RUNNING state eviction keeps going, however:

  1. Evicting small chunks while interleaved with everything else on the main loop is definitely less efficient and thus takes longer.

I agree that breaking eviction into smaller chunks will make the process slightly less efficient overall. This is a fair trade-off for the reduction in latency spikes which would occur otherwise.

  2. Being impacted by user traffic and specific workload patterns makes it even less predictable.

The time to recover memory is less predictable, but the impacts on client command latency are more predictable. I would suggest that predictability for the client is more important. The system is over the limit already; there are many ways to drive Redis over the memory limit - possibly into swap. What's important is to get the system back within its configured memory usage as quickly as possible, without impacting Redis availability, and with minimal impact on client latencies.

As an aside... If there is a lot of concern about recovering memory quickly, we'd probably get more mileage out of improving the eviction scheme. The current method samples a number of values to find the best value to evict. Sample/Evict/Repeat. This is slow - especially on non-clustermode hosts having multiple DBs. If the system is significantly over memory, evicting 2 values each loop would practically double the eviction performance.
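Purely to illustrate that aside (hypothetical, not part of this PR): after one sampling pass fills the eviction pool, the loop could take the best two candidates instead of one:

```c
/* Hypothetical: evict the top-2 pool candidates per sampling pass.
 * evictionPoolPopulate() is the existing sampler; takeBestFromPool()
 * and deleteEvictedKey() are made-up helpers for this sketch. */
evictionPoolPopulate(db_id, db->dict, db->dict, pool);
for (int n = 0; n < 2; n++) {
    sds bestkey = takeBestFromPool(pool);
    if (bestkey == NULL) break;       /* pool exhausted */
    deleteEvictedKey(db_id, bestkey); /* frees ~2x memory per sampling pass */
}
```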

@soloestoy (Contributor) commented Aug 17, 2020

@JimB123 thanks for your reply. I know it's the rehash problem - the new hash table needs a lot of memory if Redis contains too many keys. And I like the background eviction; it is very similar to our implementation. 10% is not a fixed number, it's configurable; mostly 10% can balance maxmemory and eviction.

In any case, rehash is a problem, but I think it's not the root cause. The root cause is how we distinguish overhead memory from dataset memory.

We can see the overhead detail via the function getMemoryOverheadData in object.c; it counts all client buffers, the DBs' hash table memory, etc. But currently the only overhead Redis excludes for maxmemory eviction is the replica output buffers and the AOF buffer. It seems that maybe we should not count all overhead memory - but I'm not sure, as that may lead to a new problem if overhead memory grows too large. It's another issue, but I think it deserves our attention; many users are confused about overhead memory in Redis. Ping @oranagra, hope to get his opinion.

Back to this PR: I reread the code, and you are right, it doesn't break maxmemory-policy. But another problem: if I understand right, performEvictions returns EVICT_FAIL only when there is no key to evict. So if the 500us timeout is hit, users can still insert data, right? If the memory growth rate is faster than the eviction rate, will memory keep growing without limit? Since we don't have a hard memory limit.

And I also share the concern @yossigo raised; we need more feedback.

@madolson (Contributor):

The core group didn't reach a consensus here, so I'll post the update on the conversation. There are two major cases:

  1. A degenerate use case (such as massive MSETs) where eviction isn't able to keep up with the incoming writes. There is some concern here, but I'm less worried.
  2. Some other type of regression that changes the expectations of how memory is reclaimed within Redis. This is more about memory constraints and making sure Redis is using the memory it is set with, and that we don't somehow accidentally OOM it.

The mitigation that was outlined was to introduce some type of config like eviction-latency-reduction, an integer from 0-100, with 0 being no attempt to reduce latency - evict keys ASAP (i.e., the current behavior) - and 100 being a very gradual release of memory with minimal impact to latency (i.e., we only evict one key per event loop).

I'm not the biggest fan of this approach, since the memory usage is still capped to roughly the same amount. @JimB123, if you think the config is a good idea, then I would commit to that (or if you have some other idea). Otherwise I guess this is still open.

@oranagra (Member):

FYI, stumbled upon a comment from Salvatore where he mentioned some design to resolve a similar problem (and try to avoid the problem of not evicting enough): #4583 (comment)
IIUC it's basically a static variable that remembers the last time the function exited due to hitting the time limit, and what the "memory overuse" was at that point, so that next time it won't exit due to the time limit again unless the "memory overuse" is smaller than it was the previous time.

@JimB123 (Contributor, Author) commented Aug 18, 2020

@madolson The suggested approach seems non-intuitive/backwards to me. I understand that you want to make the config generic and not tied to any specific timing values. But in this situation, I would expect 0 to be the default/normal case (which would be the minimum time spent evicting - as submitted) and 100 would indicate block-until-done (the current/bad strategy). So, instead of calling this "eviction-latency-reduction" (or similar), I would suggest "eviction-tenacity" where 0 (the default) indicates minimum latency/effort, and 100 indicates block-until-done.

I guess I don't feel terribly strongly about this. If adding a config value will break the log-jam, it's worth it. I don't expect that any users will actually need to configure this, but it will be there if needed.

Unless I get other comments, I'll proceed with "eviction-tenacity".

@oranagra I read over the referenced comment. Clearly others are having this problem as well. The problem with Salvatore's suggestion of keeping a record of the last call is that there is a lot of normal jitter in Redis memory measurements. I wouldn't ever want to base a decision on a single pair of data points.

Unless there are objections, I'll add the config parameter as mentioned above. Though I don't believe it will ever be needed/justified, there is always the option to add a more complex mechanism of auto-tuning later. Heck, we could add a complete machine learning model. :)

@soloestoy

In any case, rehash is a problem, but I think it's not the root cause. The root cause is how we distinguish overhead memory from dataset memory.

I absolutely agree with this. We should have a measurement for dataset memory which is independent of overhead memory. We are mixing all types of transient and non-transient memory usage, lumping it together, and using eviction as the single method to address memory concerns. Maybe that's the right thing to do, maybe not (I think not). Regardless, that is out of scope for this PR.

Back to this PR: I reread the code, and you are right, it doesn't break maxmemory-policy. But another problem: if I understand right, performEvictions returns EVICT_FAIL only when there is no key to evict. So if the 500us timeout is hit, users can still insert data, right? If the memory growth rate is faster than the eviction rate, will memory keep growing without limit? Since we don't have a hard memory limit.

Yes, it is correct that we can construct a scenario which will cause memory to grow. But I don't think that such a scenario is a valid real-world scenario. Today, I can blow through memory with a single SUNIONSTORE command. I can lock up the CPU with a single KEYS command. And yes, with this update, a stream of pipelined MSETs might possibly outstrip eviction. But the important thing is that for 99.99% of systems, this is an important improvement to performance and stability. For that user who is running pipelined MSETs, it really shouldn't be a surprise when that exceeds memory (just like a big SUNIONSTORE would).

As mentioned above, though I don't really think it's necessary, I can/will add a config parameter to allow adjustment of eviction tenacity. With such a parameter, eviction could be tuned to mimic the degenerate case that exists in code today.

@oranagra (Member):

@JimB123 eviction-tenacity seems fine to me (i don't care whether it's 0 or 100 that reintroduces the blocking mechanism).
But i think the default value should not be on the edge. i.e. we set the default to 500us and 16 keys, but maybe someone wants less (like stating that they don't want the eviction to affect latency at all).

I imagine some logarithmic formula that would make a value of 100 block until done, something like 50 give the default of 16 keys and 500us, and 0 evict just a single key and leave (maybe without even bothering to check the time).
i'm not entirely sure we should mix the key count into it.. maybe just use the time as the factor of the tunable.
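One possible shape for such a mapping, just to make the idea concrete (illustrative constants only, assuming <math.h> pow and <limits.h> ULONG_MAX; not necessarily what will be merged):

```c
/* Sketch: map a 0..100 tenacity value onto an eviction time budget.
 * 0..50 ramps linearly up to the 500us default; above that the budget
 * grows geometrically; 100 means block until done. */
static unsigned long evictionTimeLimitUs(int tenacity) {
    if (tenacity >= 100) return ULONG_MAX;        /* block until done */
    if (tenacity <= 50) return 10UL * tenacity;   /* 0..50 -> 0..500us */
    return (unsigned long)(500.0 * pow(1.3, tenacity - 50));
}
```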

regarding the name, i see all eviction related tunables are named maxmemory-... so maybe we should follow that in some way?

@oranagra oranagra added this to the Redis 6.2 milestone Sep 2, 2020
@oranagra (Member) commented Sep 2, 2020

@JimB123 now that the monotonic timer PR is merged, maybe it's time to push this forward.
Reading the last two posts i think we're more or less aligned. Probably just some final details to iron out while implementing.

note that the change you made to the tests would no longer be needed once #7726 is merged (not using DEBUG POPULATE anymore). But now i realize that maybe we wanna add some tests to cover this new mechanism.

@JimB123 (Contributor, Author) commented Sep 2, 2020

@oranagra, I will rebase and address comments next week. Currently addressing conflicting priorities. I haven't abandoned this.

@oranagra (Member) commented Sep 9, 2020

@JimB123 i'm struggling to review what you added.
I see a force push for which i can't review the diff between the versions because it also contains a rebase (so it has tons of unrelated changes).
And i see a 3rd commit but it seems to contain things which i already reviewed.
Any chance you can rearrange it so that i can see what's new?

@JimB123 (Contributor, Author) commented Sep 9, 2020

@JimB123 i'm struggling to review what you added.
I see a force push for which i can't review the diff between the versions because it also contains a rebase (so it has tons of unrelated changes).

Apologies @Oran. I wasn't expecting you to review. I'm still addressing the previous comments; this push was just a rebase, pulling in the monotonic changes (which had been duplicated here in the first posting).

@JimB123 (Contributor, Author) commented Sep 10, 2020

Rebased again.

@oranagra (Member):

i assume the rebase was trivial.. just reapplying some typo fixes on your version?
i noticed a few mentions of freeMemoryIfNeeded slipped in.

@JimB123 (Contributor, Author) commented Sep 10, 2020

i assume the rebase was trivial.. just reapplying some typo fixes on your version?

Yes, fairly trivial. The one change was in a comment that I had removed/replaced.

i noticed a few mentions of freeMemoryIfNeeded slipped in.

Yes, just one. Interestingly, this auto-merged: when I had done a global search for freeMemoryIfNeeded, I didn't find the one that was misspelled, so the spelling correction didn't conflict with my code.

For completeness, I'll fix it. The comment is in the wrong place and doesn't make sense anyway.

Rather than blindly evicting until maxmemory limit is achieved, this
update adds a time limit to eviction.  While over the maxmemory limit,
eviction will process before each command AND as a timeProc when no
commands are running.
@oranagra (Member):

Thanks Jim.
@redis/core-team I think it's ready to be merged. Please approve or raise other concerns.

@itamarhaber (Member) left a comment:

LGTM

@oranagra oranagra merged commit 810e28a into redis:unstable Sep 16, 2020
@oranagra (Member):

merged! thank you @JimB123.

I tried to add some details to the commit comment (810e28a), hope i didn't miss anything important.

@oranagra oranagra removed this from the Next minor backlog milestone Oct 19, 2020
JackieXie168 pushed a commit to JackieXie168/redis that referenced this pull request Nov 4, 2020
Rather than blindly evicting until maxmemory limit is achieved, this
update adds a time limit to eviction.  While over the maxmemory limit,
eviction will process before each command AND as a timeProc when no
commands are running.

This will reduce the latency impact on many cases, especially pathological
cases like massive used memory increase during dict rehashing.

There is a risk that some other edge cases (like massive pipelined use
of MGET) could cause Redis memory usage to keep growing despite the
eviction attempts, so a new maxmemory-eviction-tenacity config is
introduced to let users mitigate that.
@oranagra oranagra mentioned this pull request Jan 13, 2021
oranagra pushed a commit that referenced this pull request Jan 4, 2022
This would mean that the effects of `CONFIG SET maxmemory` may not be visible once the command returns.
That could anyway happen since incremental eviction was added in redis 6.2 (see #7653)

We do this to fix one of the propagation bugs about eviction see #9890 and #10014.
oranagra pushed a commit that referenced this pull request Sep 18, 2022
This bug is introduced in #7653. (Redis 6.2.0)

When `server.maxmemory_eviction_tenacity` is 100, `eviction_time_limit_us` is
`ULONG_MAX`, and if we cannot find the best key to delete (e.g. maxmemory-policy
is `volatile-lru` and all keys with ttl have been evicted), in `cant_free` redis will sleep
forever if some items are being freed in the lazyfree thread.
oranagra pushed a commit that referenced this pull request Sep 21, 2022 (the same fix, cherry picked from commit 464aa04)

oranagra pushed a commit that referenced this pull request Dec 12, 2022 (cherry picked from commit 464aa04)

madolson pushed a commit to madolson/redis that referenced this pull request Apr 19, 2023 (the same fix)

enjoy-binbin pushed a commit to enjoy-binbin/redis that referenced this pull request Jul 31, 2023 (the same fix)

Labels

  • release-notes: indication that this issue needs to be mentioned in the release notes
  • state:major-decision: Requires core team consensus
  • state:to-be-merged: The PR should be merged soon, even if not yet ready, this is used so that it won't be forgotten
