
Conversation

@yuxiaochen16

1. Slave eviction

A slave should not perform eviction, even if it reaches its "maxmemory".

Redis calculates how much memory should be released as the memory used by the entire instance minus the "maxmemory" directive. But master and slave do not always have exactly the same memory usage; there are always slight differences (for example, the slave may have many more connections than the master).

If the slave reaches its "maxmemory" but the master does not, some keys get evicted from the slave while still existing on the master. That might be fine if the key is just a simple string and we never use commands like "INCR" or "DECR", but if the key is a more complex type, a "list" for example, the data ends up inconsistent between master and slave.

The way we fix this is to disable memory eviction when the current instance is a slave. Eviction is performed by the master and replicated to the slave.
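The fix can be sketched as a guard at the top of the eviction path. This is a minimal illustration; the function name and flags below are hypothetical stand-ins, not Redis's actual symbols:

```c
#include <stddef.h>

/* Hedged sketch: a replica never evicts on its own; eviction happens on
 * the master, whose DELs then propagate through replication.
 * Returns 1 if this call would run the eviction loop, 0 otherwise. */
int free_memory_if_needed(int is_slave, size_t used_memory, size_t maxmemory) {
    if (is_slave) return 0;                 /* replicas skip eviction entirely */
    if (used_memory <= maxmemory) return 0; /* under the limit: nothing to do */
    /* ... on a master, the eviction loop would free keys here ... */
    return 1;
}
```

With this guard, a replica whose memory usage drifts above its own "maxmemory" simply stays over the limit instead of silently diverging from its master.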

2. Maximum eviction

The way we evict keys when "maxmemory" is reached is to calculate how much memory should be released into a "memory_to_be_free" variable, and keep deleting objects until exactly "memory_to_be_free" bytes have been freed.

There are special cases where the "memory_to_be_free" variable can be far too large. For example:

  • dict rehashing

    This really happened in our production environment, and was the major motivation for this change. Redis performs a rehash once the size of a hash table reaches a specific threshold, and as the hash table grows, the memory allocated for rehashing grows too. If the size reaches "8013608", 128MB of memory is allocated at once (the next table would have 16,777,216 buckets of 8-byte pointers, exactly 128MB). The "memory_to_be_free" variable would then be about 128MB in the next call of "freeMemoryIfNeeded()", and that is where Redis could block.

  • manual config change

    A user changing "maxmemory" to a smaller value would also block the server, but that is acceptable.

I set a maximum iteration limit for "freeMemoryIfNeeded()". The value is 5, which means at most 5 keys are evicted per call.
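A minimal sketch of that per-call cap (the constant and names are illustrative, not the PR's actual code):

```c
#include <stddef.h>

#define MAX_EVICTIONS_PER_CALL 5  /* the cap proposed above */

/* Evict at most MAX_EVICTIONS_PER_CALL keys per call; each "eviction"
 * here frees a stand-in key_size bytes. Any remaining debt is left for
 * the next call, so one huge memory_to_be_free cannot block the server. */
size_t evict_step(size_t memory_to_be_free, size_t key_size) {
    size_t freed = 0;
    int evicted = 0;
    while (freed < memory_to_be_free && evicted < MAX_EVICTIONS_PER_CALL) {
        freed += key_size; /* stand-in for deleting one key */
        evicted++;
    }
    return freed; /* may be less than memory_to_be_free */
}
```

The trade-off, discussed below, is that a single call may now free less memory than commands allocate in the meantime.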

xiaochen16 added 2 commits January 8, 2018 16:11
    master-slave incomplete if maxmemory reached
@antirez
Contributor

antirez commented Jan 12, 2018

Hello @Rosanta, thanks for your PR. A few notes:

  1. Yes, the fact that slaves should not evict has been considered several times... I always avoided implementing it because there is a legitimate use for this feature, so probably the way to go is to make it a tunable option, with "disabled" as the default. The use case: you have, for instance, a plain cache populated with just SETs, and you want another replica of the cache in order to scale reads; the slave host has less memory, but you are ok with it holding fewer keys, and thus a lower hit rate from the POV of the application using it as a cache. In that case it could make sense... but in the general case I agree this should be switched off. Something like slave-eviction no.
  2. Thanks for identifying this pathological case with memory_to_be_freed and dict rehashing. I think we should indeed fix this; however, the way you fixed it may have a problem: we could in theory do too little work to free memory, and commands could then allocate more memory than gets freed, so the memory limit may no longer be respected.

About problem "2", one way to address it is to have a flag that remembers whether we had to give up last time because we were spending too many iterations releasing memory, and if so, what the size of memory_to_be_freed was at that point. On the next iteration, if we need to exit the loop again because too much time elapsed, we do so only if this flag is not set, or, if it is set, only if memory_to_be_freed is smaller than the value it had when we exited the loop last time. In this way we say: it is ok to terminate the eviction loop before all the memory is reclaimed, but only as long as the memory to be freed is going down; if instead it starts to rise iteration after iteration, we block.
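That flag idea can be sketched as a toy model (not actual Redis code; names are hypothetical): early exit is permitted only while the outstanding memory-to-free shrinks between calls.

```c
#include <stddef.h>

/* State remembered across calls to the eviction function. */
static int gave_up_last_time = 0;
static size_t last_mem_tofree = 0;

/* Returns 1 if the eviction loop may stop early with mem_tofree bytes
 * still unreclaimed, 0 if it must block until everything is freed. */
int may_exit_eviction_early(size_t mem_tofree) {
    if (!gave_up_last_time || mem_tofree < last_mem_tofree) {
        gave_up_last_time = 1;
        last_mem_tofree = mem_tofree;
        return 1; /* the debt is going down: ok to yield */
    }
    return 0;     /* the debt is rising: keep evicting */
}
```

So a one-off 128MB spike from rehashing is paid down incrementally, while a sustained allocation rate that outruns eviction falls back to the old blocking behavior.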

Does this make sense?

@soloestoy
Contributor

BTW, I think we have some bugs around lazy eviction; see more details in PR #4544.

@yuxiaochen16
Author

Hi:

Thank you for your time.

1. Slave eviction

  • Adding a switch to disable slave eviction, with it disabled by default, is a very nice idea.

  • I don't think replicating a cache populated with plain SET commands is the final solution, because the master would never know that a key was evicted on the slave, nor which key it was. Even if it did know, and replicated the entire key back to the slave, the slave would just evict another key; should that key be replicated back too?

  • Replicating an entire key to the slave also puts extra pressure on the slave. That is fine if the key is a simple key-value pair, but if the key is complicated, restoring it would be expensive.

    So I think the behavior when the "slave-eviction" switch is disabled should be considered carefully.

2. Maximum eviction

  • Having a flag is a wonderful choice. Indeed, in my implementation eviction is driven by user requests: the fewer the requests, the slower the eviction. Perhaps adding the evict operation into beforeSleep() or serverCron(), like activeExpireCycle(), would be better.

  • In my implementation it is indeed possible for a command to allocate more memory than freeMemoryIfNeeded() just reclaimed, the MSET command for example.

    Your solution to this problem is very convincing.

    Again, thank you for your time :)

@oranagra
Member

This PR is being closed in favor of #7653 (which implements a similar idea).
The replica-ignore-maxmemory config was already added.
If you have any concerns not covered by the other PR, please raise them there.

@oranagra oranagra closed this Aug 18, 2020
oranagra pushed a commit that referenced this pull request Dec 6, 2020
As we know, Redis may reject users' requests or evict some keys when used memory exceeds maxmemory. Dictionary expansion can make things worse: big dictionaries, such as the main db dict and the expires dict, may eat huge memory at once to allocate a new big hash table, ending up far above maxmemory after expanding.
Related issues: #4213 #4583

In more detail, when a dict expands, Redis allocates a new big ht[1] that is generally double the size of ht[0], so ht[1] will be very big if ht[0] already is. For the db dict, with more than 64 million keys, ht[1] alone costs 1GB when the dict expands.

If the sum of used memory and the memory needed by the dict's new hash table exceeds maxmemory, we shouldn't allow the dict to expand. With keys eviction enabled, we still couldn't add many more keys after eviction and rehashing; worse, Redis would keep fewer keys, since a chunk of the remaining memory would hold the new hash table instead of users' data. And with keys eviction disabled, users can't write data to Redis at all.

What this commit changes:

Add a new member function expandAllowed to the dict type; it gives the caller a way to allow or refuse the expand. We expose two parameters to this function: the additional memory needed for expanding and the dict's current load factor, so users can implement a function that decides based on them.
For the main db dict and expires dict types, which may be very big and cost huge memory to expand, we implement a judgement function: the dict is provisionally stopped from expanding if used memory would exceed maxmemory after the expand, but to guarantee Redis's performance we still allow the expand once the dict's load factor exceeds the safe load factor.
Add test cases to verify that the main db is not allowed to expand when the remaining memory is not enough, so that keys eviction is avoided.
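The hook can be sketched as follows. The callback parameters mirror the description above, while the policy constants and the memory figures are illustrative assumptions, not the commit's actual values:

```c
#include <stddef.h>

#define SAFE_LOAD_FACTOR 4.0   /* illustrative safety threshold */

static size_t used_memory = 900;  /* pretend server state */
static size_t maxmemory   = 1000;

/* expandAllowed-style policy for the main db / expires dicts: refuse
 * the expand if it would push usage over maxmemory, unless the load
 * factor is already past the safety threshold (performance wins then). */
int db_dict_expand_allowed(size_t more_mem, double load_factor) {
    if (load_factor >= SAFE_LOAD_FACTOR) return 1;
    return used_memory + more_mem <= maxmemory;
}
```

The escape hatch on the load factor matters: without it, a dict pinned near maxmemory could accumulate ever-longer bucket chains and degrade lookup performance indefinitely.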

Other changes:

The new hash table size on expand. Before this commit, the size was the dict's used count doubled, then passed through _dictNextPower. Since we actually aim to keep the dict load factor between 0.5 and 1.0, we replace *2 with +1: because the first check is used >= size, the outcome is usually the same as _dictNextPower(used+1). The only case where it differs is when dict_can_resize is false during a fork, where the old _dictNextPower(used*2) would make the dict jump to *4 (i.e. _dictNextPower(1025*2) returns 4096).
Fix the rehash test cases affected by the changed sizing algorithm.
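The sizing change can be illustrated with a simplified next-power helper in the style of _dictNextPower (a re-implementation for illustration only; the real function also handles overflow):

```c
/* Round up to the next power of two, starting from Redis's initial
 * hash table size of 4 (DICT_HT_INITIAL_SIZE). */
static unsigned long next_power(unsigned long size) {
    unsigned long i = 4;
    while (i < size) i *= 2;
    return i;
}
/* With 1025 used entries, the old sizing next_power(1025*2) gives 4096
 * (load factor ~0.25), while the new sizing next_power(1025+1) gives
 * 2048, keeping the load factor in the intended 0.5-1.0 range. */
```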
JackieXie168 pushed a commit to JackieXie168/redis that referenced this pull request Mar 2, 2021
