Evicting strategy update #4583
Conversation
master-slave data incomplete if maxmemory reached
Hello @Rosanta, thanks for your PR. A few notes:
About problem "2", one way this could be addressed is by having a flag that remembers whether we had to give up last time because we were spending too many iterations releasing memory, and if so, what was the size of … Does this make sense?
BTW, I think we have some bugs around lazy eviction; see more details in PR #4544
Hi, thank you for your time.
1. slave evicting
2. maximum evicting
This PR is being closed in favor of #7653 (which implements a similar idea).
As we know, Redis may reject users' requests or evict some keys if used memory exceeds maxmemory. Dictionary expansion can make things worse: some big dictionaries, such as the main db dict and the expires dict, may consume a huge amount of memory at once to allocate a new, larger hash table, and end up far beyond maxmemory after expanding. Related issues: #4213 #4583

In more detail, when a dict expands in Redis, we allocate a new ht[1] that is generally double the size of ht[0], so ht[1] will be very big if ht[0] is already big. For the db dict, if we have more than 64 million keys, ht[1] costs 1GB when the dict expands. If the sum of the used memory and the memory needed for the new hash table exceeds maxmemory, we shouldn't allow the dict to expand: if key eviction is enabled, we still couldn't add many more keys after eviction and rehashing, and worse, Redis would keep fewer keys because it reserves memory for the new hash table instead of users' data. Moreover, if key eviction is disabled, users can't write data to Redis at all.

What this commit changed:
- Add a new member function expandAllowed to the dict type; it provides a way for the caller to allow or disallow expansion. We expose two parameters to this function: the extra memory needed for expanding and the dict's current load factor; users can implement a function that makes a decision from them.
- For the main db dict and expires dict types, which may be very big and cost huge memory to expand, we implement a judgement function: we provisionally stop the dict from expanding if used memory would exceed maxmemory after the expansion, but to guarantee Redis's performance, we still allow the dict to expand if its load factor exceeds the safe load factor.
- Add test cases to verify that we don't allow the main db to expand when the remaining memory is not enough, so that key eviction is avoided.

Other changes:
- For the new hash table size on expansion: before this commit, the size was double the dict's used count, passed to _dictNextPower. Since we actually aim to keep the dict load factor between 0.5 and 1.0, we now replace *2 with +1; given that the first check is used >= size, the outcome is usually the same as _dictNextPower(used+1). The only case where it differs is when dict_can_resize is false during fork, where the old _dictNextPower(used*2) would make the dict jump to *4 (i.e. _dictNextPower(1025*2) returns 4096).
- Fix rehash test cases affected by the changed algorithm for the new hash table size on expansion.
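The expandAllowed idea above can be sketched as a standalone predicate. This is an illustrative approximation, not the exact Redis internals: the function name, the 1.618 "safe load factor" constant, and the explicit used_memory/maxmemory parameters are assumptions made here so the sketch is self-contained.

```c
#include <stddef.h>

/* Hypothetical "safe load factor": past this point we expand anyway,
 * because lookup performance would otherwise degrade too much. */
#define SAFE_LOAD_FACTOR 1.618

/* Decide whether a dict may expand: refuse if allocating the new hash
 * table (more_mem bytes) would push used memory over maxmemory, unless
 * the load factor already exceeds the safe threshold. */
static int dict_expand_allowed(size_t more_mem, double load_factor,
                               size_t used_memory, size_t maxmemory) {
    if (maxmemory == 0) return 1;                 /* no limit configured */
    if (load_factor >= SAFE_LOAD_FACTOR) return 1;
    return used_memory + more_mem <= maxmemory;
}
```

The key design point is the two escape hatches: no maxmemory means no restriction, and a badly overloaded table is still allowed to grow even when memory is tight.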
1. Slave Evicting
A slave should not perform eviction, even if it reaches its "maxmemory".
The way Redis calculates how much memory should be released is the memory used by the entire instance minus the "maxmemory" directive. But master and slave do not always have exactly the same memory usage; there are always slight differences (for example, the slave may have more connections than the master).
If a slave has reached its "maxmemory" but the master has not, some keys would be evicted from the slave while still existing on the master. That might be fine if the key is just a simple string and we never use commands like "INCR" or "DECR", but if the key is a more complex type, a "list" for example, this would leave the data inconsistent between master and slave.
The way we fix this is to disable memory eviction if the current instance is a slave. Eviction shall be performed by the master and replicated to the slaves.
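The rule above reduces to one extra check before the eviction cycle. A minimal sketch, assuming a hypothetical server_state struct (the field names are illustrative, not the real Redis server struct):

```c
#include <stddef.h>

/* Illustrative stand-in for the relevant server state. */
struct server_state {
    int is_replica;      /* 1 if this instance replicates from a master */
    size_t used_memory;  /* bytes currently allocated */
    size_t maxmemory;    /* configured limit, 0 = unlimited */
};

/* Return 1 if this instance should run its own eviction cycle.
 * Replicas never evict on their own: the master evicts and the
 * resulting DELs reach the replica through the replication stream. */
static int should_evict(const struct server_state *s) {
    if (s->maxmemory == 0) return 0;   /* no limit configured */
    if (s->is_replica) return 0;       /* eviction is the master's job */
    return s->used_memory > s->maxmemory;
}
```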
2. Maximum Evicting
The way we evict keys when "maxmemory" is reached is to calculate how much memory should be released into a "memory_to_be_free" variable, and then keep deleting objects until exactly "memory_to_be_free" bytes of memory have been freed.
There are special cases where "memory_to_be_free" can be far too large a value.
For example:
dict rehashing
This really happened in our production environment, and it was the major motivation for this update. Redis performs a rehash whenever the size of a hash table reaches a specific number, and as the hash table grows, the rehashing memory allocation grows too. When the size reaches 8388608, 128MB of memory is allocated at once. The "memory_to_be_free" variable would then be about 128MB in the next call of "freeMemoryIfNeeded()", and that is where Redis could block.
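The 128MB figure follows from simple arithmetic. A hypothetical helper, assuming 8-byte bucket pointers (a 64-bit build) and a new ht[1] that is double the old bucket count:

```c
#include <stddef.h>

/* Bytes allocated for ht[1]'s bucket array when a dict with
 * old_buckets buckets expands to double its size, assuming
 * 8-byte bucket pointers (sizeof(dictEntry *) on 64-bit). */
static size_t rehash_alloc_bytes(size_t old_buckets) {
    const size_t bucket_ptr = 8;
    return old_buckets * 2 * bucket_ptr;
}
```

With old_buckets = 8388608 (2^23), the new table is 16777216 pointers of 8 bytes each, i.e. 128MB allocated in a single call.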
manual config change
A user changing "maxmemory" to a smaller value would also block the server, but that is acceptable.
I set a maximum iteration limit for "freeMemoryIfNeeded()". The value is set to 5, which means at most 5 keys shall be evicted per call.
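The capped loop described above can be sketched as follows. This is a simplified model, not the real freeMemoryIfNeeded(): the callback type and the evict_ten_bytes demo function are assumptions made here so the sketch is testable.

```c
#include <stddef.h>

/* Maximum keys evicted per call, per the limit described above. */
#define MAX_EVICT_TRIES 5

/* Hypothetical callback: evict one key, return the bytes released. */
typedef size_t (*evict_fn)(void);

/* Evict at most MAX_EVICT_TRIES keys, stopping early once enough
 * memory is freed. Returns bytes freed; if that is short of
 * mem_to_free, the caller simply retries on a later cycle instead
 * of blocking the server inside one call. */
static size_t free_memory_if_needed(size_t mem_to_free, evict_fn evict_one_key) {
    size_t freed = 0;
    int tries = 0;
    while (freed < mem_to_free && tries < MAX_EVICT_TRIES) {
        freed += evict_one_key();
        tries++;
    }
    return freed;
}

/* Demo callback standing in for real key eviction: frees 10 bytes. */
static size_t evict_ten_bytes(void) { return 10; }
```

The trade-off is latency versus precision: a single call no longer guarantees the target is met, but it can never spend an unbounded amount of time deleting objects.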