
Conversation

@QuChen88
Contributor

Currently, when a replica disconnects from its master and performs a PSYNC, its sub-replicas remain connected because the master ID stays the same. However, with the meaningful master offset mechanism introduced in Redis 6.0, doing so causes a steady-state replication offset mismatch between the master and chained replicas.

Suppose the master has previously replicated some non-meaningful PING commands down to its replicas. When the replica disconnects, it requests a PSYNC from the master at the last meaningful offset, so the master sends the non-meaningful PINGs down a second time. If the chained replicas remain connected, they receive those extra PINGs in the replication stream, advance their own master_repl_offset beyond the actual master's replication offset, and stay that way forever. This leads to an unnecessary full sync the next time a real failover happens.

…master always to avoid replication offset mismatch between master and chained replicas.
@antirez
Contributor

antirez commented May 22, 2020

Thanks @QuChen88, pinging @oranagra as well so that we review this using more eyes.

@oranagra
Member

Sigh... I was afraid we hadn't seen the last of the side effects of that meaningful offset feature.
Fix looks good to me.

Regarding the test, I would have preferred to write it in a way that doesn't explicitly look at the meaningful offset (none of the other tests do), and instead just makes sure the replication isn't corrupted and the main offsets end up matching.
This way, when we someday change the implementation and drop this meaningful offset in favor of something else, the test will still work and add coverage for this scenario.
@QuChen88 do you mind trying to change this?

@antirez
Contributor

antirez commented May 22, 2020

Yep, the fix looks OK to me as well, and anyway it should not be too bad: chained replicas should be able to partially synchronise without issues; in turn they should trim their offsets and so forth. @QuChen88 did you test this explicitly? I see that the test you provided does not check whether the sub-replicas were able to perform the PSYNC2 correctly instead of going through a full sync, but this makes sense in the test, otherwise we would get other false positives here as well. However, at least as "manual testing" it would be good to see this working well in practice, that is, not having to go via full sync.

I agree with @oranagra about using a more generic test that captures the semantics. However for now I'll merge this and wait for a new PR if you want to submit it. Thanks.

@antirez antirez merged commit ee93a70 into redis:unstable May 22, 2020
@soloestoy
Contributor

Good catch!

@oranagra
Member

Btw, correct me if I'm wrong, but this bug can cause a serious issue: if the PSYNC with the bad offset succeeds, and it happens not to cause a parsing / protocol error, it can cause data corruption.

@antirez
Contributor

antirez commented May 22, 2020

@oranagra yes, I think there is some problem if we are unlucky. Normally the chained replicas, after having the offset in desync (towards greater offsets), will just fail to PSYNC because their master doesn't have enough replication stream. But if their master receives more writes during some network partition, and we PSYNC with the wrong offset, and we are unlucky and the protocol is OK so no disconnection happens, I think it can cause problems.

@antirez
Contributor

antirez commented May 22, 2020

@oranagra @QuChen88 @soloestoy btw I suggest we change this fix to something else:

  1. We revert the fix.
  2. We add a call that disconnects chained replicas in the place where we trim the replica (that is, a master in this case) offset.

This way we can avoid disconnections when there is no trimming of the backlog. What do you think?

@oranagra
Member

@antirez, that indeed sounds like the right thing to do (fix it where it's broken). It would also be clearer as to why we do that disconnect, and not need a huge comment.
Unless I'm missing something (AFK), it would reduce unnecessary disconnections and also make it more future proof (in case we call the backlog trimming function in other scenarios).

Let's just be sure to add tests for the various flows (disconnect in case there was or wasn't something to trim), so that we're extra sure it indeed solves the problem and doesn't add new ones.

@antirez
Contributor

antirez commented May 22, 2020

@oranagra that's the idea (it passes the test):

diff --git a/src/replication.c b/src/replication.c
index 7e981873f..603ef4895 100644
--- a/src/replication.c
+++ b/src/replication.c
@@ -39,7 +39,7 @@
 #include <sys/socket.h>
 #include <sys/stat.h>
 
-long long adjustMeaningfulReplOffset();
+long long adjustMeaningfulReplOffset(int *adjusted);
 void replicationDiscardCachedMaster(void);
 void replicationResurrectCachedMaster(connection *conn);
 void replicationSendAck(void);
@@ -2027,19 +2027,12 @@ int slaveTryPartialResynchronization(connection *conn, int read_reply) {
                  * new one. */
                 memcpy(server.replid,new,sizeof(server.replid));
                 memcpy(server.cached_master->replid,new,sizeof(server.replid));
+
+                /* Disconnect all the sub-slaves: they need to be notified. */
+                disconnectSlaves();
             }
         }
 
-        /* Disconnect all the sub-replicas: they need to be notified always because
-         * in case the master has last replicated some non-meaningful commands
-         * (e.g. PINGs), the primary replica will request the PSYNC offset for the
-         * last known meaningful command. This means the master will again replicate
-         * the non-meaningful commands. If the sub-replicas still remains connected,
-         * they will receive those commands a second time and increment the master
-         * replication offset to be higher than the master's offset forever.
-         */
-        disconnectSlaves();
-
         /* Setup the replication to continue. */
         sdsfree(reply);
         replicationResurrectCachedMaster(conn);
@@ -2708,7 +2701,8 @@ void replicationCacheMaster(client *c) {
 
     /* Adjust reploff and read_reploff to the last meaningful offset we
      * executed. This is the offset the replica will use for future PSYNC. */
-    server.master->reploff = adjustMeaningfulReplOffset();
+    int offset_adjusted;
+    server.master->reploff = adjustMeaningfulReplOffset(&offset_adjusted);
     server.master->read_reploff = server.master->reploff;
     if (c->flags & CLIENT_MULTI) discardTransaction(c);
     listEmpty(c->reply);
@@ -2731,6 +2725,13 @@ void replicationCacheMaster(client *c) {
      * so make sure to adjust the replication state. This function will
      * also set server.master to NULL. */
     replicationHandleMasterDisconnection();
+
+    /* If we trimmed this replica backlog, we need to disconnect our chained
+     * replicas (if any), otherwise they may have the PINGs we removed
+     * from the stream and their offset would no longer match: upon
+     * disconnection they will also trim the final PINGs and will be able
+     * to incrementally sync without issues. */
+    if (offset_adjusted) disconnectSlaves();
 }
 
 /* If the "meaningful" offset, that is the offset without the final PINGs
@@ -2740,8 +2741,13 @@ void replicationCacheMaster(client *c) {
  * offset because of the PINGs and will not be able to incrementally
  * PSYNC with the new master.
  * This function trims the replication backlog when needed, and returns
- * the offset to be used for future partial sync. */
-long long adjustMeaningfulReplOffset() {
+ * the offset to be used for future partial sync.
+ *
+ * If the integer 'adjusted' was passed by reference, it is set to 1
+ * if the function call actually modified the offset and the replication
+ * backlog, otherwise it is set to 0. It can be NULL if the caller is
+ * not interested in getting this info. */
+long long adjustMeaningfulReplOffset(int *adjusted) {
     if (server.master_repl_offset > server.master_repl_meaningful_offset) {
         long long delta = server.master_repl_offset -
                           server.master_repl_meaningful_offset;
@@ -2761,6 +2767,9 @@ long long adjustMeaningfulReplOffset() {
                 (server.repl_backlog_idx + (server.repl_backlog_size - delta)) %
                 server.repl_backlog_size;
         }
+        if (adjusted) *adjusted = 1;
+    } else {
+        if (adjusted) *adjusted = 0;
     }
     return server.master_repl_offset;
 }
@@ -2784,7 +2793,7 @@ void replicationCacheMasterUsingMyself(void) {
      * by replicationCreateMasterClient(). We'll later set the created
      * master as server.cached_master, so the replica will use such
      * offset for PSYNC. */
-    server.master_initial_offset = adjustMeaningfulReplOffset();
+    server.master_initial_offset = adjustMeaningfulReplOffset(NULL);
 
     /* The master client we create can be set to any DBID, because
      * the new master will start its replication stream with SELECT. */

Now I'm going to test it a bit more in practice, but I can't see any issue here.

@antirez
Contributor

antirez commented May 22, 2020

Nevermind, this fix has issues as well, because calling disconnectSlaves() from freeClient() can at times be dangerous. Here the chain is like:

freeClient(<master pointer>) -> cacheMaster...() -> disconnectSlave() -> freeClient()

So, for instance, if we do CLIENT KILL TYPE master, the clients representing replicas will be freed in the context above while the command is still iterating the linked list of clients: crash. In general it would be safer to do it in a safer context.

@antirez
Contributor

antirez commented May 22, 2020

(Solved by freeing replica clients asynchronously)

antirez added a commit that referenced this pull request May 22, 2020
Citing from the issue:

btw I suggest we change this fix to something else:
* We revert the fix.
* We add a call that disconnects chained replicas in the place where we trim the replica (that is, a master in this case) offset.
This way we can avoid disconnections when there is no trimming of the backlog.

Note that we now want to disconnect replicas asynchronously in
disconnectSlaves(), because it's in general safer now that we can call
it from freeClient(). Otherwise for instance the command:

    CLIENT KILL TYPE master

May crash: clientCommand() starts iterating the linked list of clients,
looking for clients to kill. However it finds the master, kills it
calling freeClient(), but this in turn calls replicationCacheMaster()
that may also call disconnectSlaves() now. So the linked list iterator
of the clientCommand() will no longer be valid.
@antirez
Contributor

antirez commented May 22, 2020

Pushed the fix to unstable. Let's see how it behaves. The previous fix made the PSYNC2 test fail a lot more often as a side effect, since the PSYNC2 test uses chained replication a lot (though why it failed counting the number of successful PSYNCs is not clear to me: replicas should be able to reconnect and still partially resynchronise). Let's see if these errors are gone with the new fix. Anyway, we still need to investigate exactly why there are so many errors where the synchronisations are > 4.

@QuChen88
Contributor Author

Thanks for the follow-ups. I also thought about this approach where we only disconnect sub-replicas when the replica's offset got trimmed from the master. However, it does make the logic a bit more complicated, so I went for a simpler solution of always disconnecting the replicas. This is indeed a more future-proof solution.

BTW I am not sure I understand the comment about some PSYNC2 tests failing. I ran "make test" locally and all PSYNC2 tests passed. Am I missing something?

@antirez
Contributor

antirez commented May 22, 2020

@QuChen88 Yep, the problem with the test is that, even after I modified it, this fix still triggers a lot of these errors (which are probably false positives, but we need to investigate better):

*** [err]: PSYNC2: total sum of full synchronizations is exactly 4 in tests/integration/psync2.tcl
Expected 5 == 4 (context: type eval line 6 cmd {assert {$sum == 4}} proc ::test)
*** [err]: PSYNC2: total sum of full synchronizations is exactly 4 in tests/integration/psync2.tcl
Expected 5 == 4 (context: type eval line 6 cmd {assert {$sum == 4}} proc ::test)
*** [err]: PSYNC2: total sum of full synchronizations is exactly 4 in tests/integration/psync2.tcl
Expected 5 == 4 (context: type eval line 6 cmd {assert {$sum == 4}} proc ::test)
Cleanup: may take some time... OK

It happens from time to time, and it always happened even before this fix, but now it happens a lot more often. I would like to understand what's happening. @oranagra was working on it, but now I think this fix changed the dynamics as well.

@QuChen88
Contributor Author

Yes, this fix will definitely change the dynamics of the PSYNC. But I think most of the time the replicas should be able to reconnect and PSYNC with no problems.

I was not aware of those tests failing from time to time... I guess they are flaky. Probably worth stabilizing them.

@antirez
Contributor

antirez commented May 22, 2020

@QuChen88 it's more complex than that. We always thought them to be just false negatives; then @oranagra started investigating the tests and found that the failures were in part due to real issues. So we need to understand the failures very well. I'll try today and Monday; if I fail, I'll release 6.0.4 with such fixes anyway at the start of next week, and continue to investigate. But I want to get out of this PSYNC2 test drama, because it is sucking a lot of development time from Oran and me.

@antirez
Contributor

antirez commented May 22, 2020

@QuChen88 P.S. you can easily reproduce with:

./runtest --single integration/psync2 --stop --loop

Btw, what I want to do is, when this test fails, have the ability to print the relevant replication logs to the error output as well, so that we can understand what happened with the offsets.

@antirez
Contributor

antirez commented May 22, 2020

Epilogue: in order to change the fix I had to use the following two commits:

Because freeing the replicas from within freeClient() targeting the master requires certain code paths to be reviewed. Still, we may want to take such a fix? Not sure.

@oranagra
Member

Seems a bit fragile, but maybe if we trust our tests to have good coverage, we can use it (recent experience shows that they do, if we don't ignore rare issues, right?).
@QuChen88 how did you expose this issue?

The alternative is to disconnect replicas excessively (considering that the ping interval is low, the vast majority of the disconnections are excessive).

Also, I can argue that since the replication backlog is small and the reconnect interval (replicationCron) is long, there may be cases where a disconnection leads to a full sync (one of the things that https://github.com/antirez/redis/pull/6271 tries to solve).

Anyway, I'm leaning towards keeping the change and trying to make sure all flows and cases are covered.

@madolson
Contributor

@oranagra Responding on behalf of Qu and to you from our other thread. We have some internal testing tools and sanity checks that uncovered this, but they're pretty deeply integrated into our code. We're looking into generalizing it so we can upstream the changes, we'll follow up when we figure out what we're going to pull out.

@QuChen88 QuChen88 deleted the chained-replica-offset branch May 22, 2020 19:45
@QuChen88
Contributor Author

QuChen88 commented May 22, 2020

Speaking from my personal experience, DEBUG DIGEST is a very good way to detect data mismatch between master and replica. I observed some data corruption issues caused by the replica having incorrect meaningful and main offsets, so that partial commands got replayed on the replica and caused protocol errors, keys being lost, etc. Having the correct offset values in steady state and ensuring proper disconnections for re-sync is the key here.

When I debug replication-related problems, I oftentimes add a serverLog() call in the processCommand() method to dump all the replication data I process from the master, which helped me spot the root cause of those issues. I did find that debugging issues related to replication data can be pretty time consuming.

@QuChen88
Contributor Author

@antirez +1 on releasing a new Redis 6.0.4 early next week with this fix.

@soloestoy
Contributor

Hi all, I've read this thread. All the fixes make sense and can work well for Redis 6.0, but for old-version Redis 5.0 there is still a problem, about server.second_replid_offset.

Redis 5.0 can sync with Redis 6.0 but doesn't know about the meaningful offset, so an old Redis 5.0 replica may receive wrong data from the replication stream, for example:

  1. We have three instances: R6A and R6B are Redis 6, R6B is a replica of R6A, and R5 is a Redis 5 replica of R6B.

  2. Now the replication offset is 126 in all three instances, and R6A and R6B's meaningful offset is 60, but R5 doesn't know the meaningful offset.

  3. Then we send REPLICAOF NO ONE to R6B to promote it to master; R6B stores replid2 and second_replid_offset with 60, and disconnects R5.

  4. After that, before R5 reconnects with R6B, we append some commands to R6B, increasing the offset to 130. R5 tries "psync replid2 126" against R6B; it works, but R5 will receive wrong data.

To fix it, I think that after adjustMeaningfulReplOffset(), all the other related variables should be updated as well, like server.second_replid_offset; see PR #7320.

Moreover, should we increase RDB_VERSION? Then Redis 5 could not sync with Redis 6 at all, although Redis 6.0 doesn't change any data types.

@QuChen88
Contributor Author

QuChen88 commented May 25, 2020

@soloestoy I'm not sure what you mean specifically by "receiving the wrong data". Can you please be more specific as to what data R5 will receive from R6B? I.e., from which offset will the new master R6B send data to R5? Or will the PSYNC request be rejected, forcing a full sync? It would be helpful to implement a test case that reproduces this hypothetical scenario.

I just walked through the code of the method shiftReplicationId(): it sets server.second_replid_offset to server.master_repl_offset+1. My understanding is that this will not introduce any regression for older Redis versions that don't support the meaningful offset feature when they PSYNC with Redis 6.0. Please correct me if I am wrong.

@oranagra
Member

IIUC this isn't directly related to this PR, which already has a long thread; let's discuss it in the new PR.
@soloestoy it would be nice if you could copy the detailed scenario description you posted here into the top comment of your new PR.

@soloestoy
Contributor

@QuChen88 No full sync; the PSYNC can work in step 4. But offset 126, which R5 expects to resume from on R6B, is not correct, because R6B has rolled its offset back to the meaningful offset 60 while the backlog doesn't know that. So the data from 126 to 130 in R6B's backlog will be sent to R5, but what R5 actually needs is the data from 60 to 130.

@QuChen88
Contributor Author

@soloestoy Let's continue the discussion of this other issue in your new PR.

antirez added a commit that referenced this pull request May 25, 2020
antirez added a commit that referenced this pull request May 27, 2020
JackieXie168 pushed a commit to JackieXie168/redis that referenced this pull request Jun 18, 2020
Disconnect chained replicas when the replica performs PSYNC with the master always to avoid replication offset mismatch between master and chained replicas
JackieXie168 pushed a commit to JackieXie168/redis that referenced this pull request Jun 18, 2020
mayongze added a commit to mayongze/redis that referenced this pull request Jul 8, 2020
* Keep track of meaningful replication offset in replicas too

Now both master and replicas keep track of the last replication offset
that contains meaningful data (ignoring the tailing pings), and both
trim that tail from the replication backlog, and the offset with which
they try to use for psync.

the implication is that if someone missed some pings, or even have
excessive pings that the promoted replica has, it'll still be able to
psync (avoid full sync).

the downside (which was already committed) is that replicas running old
code may fail to psync, since the promoted replica trims pings from its
backlog.

This commit adds a test that reproduces several cases of promotions and
demotions with stale and non-stale pings

Background:
The meaningful offset on the master was added recently to solve a problem where
the master is left all alone, injecting PINGs into its backlog when no one is
listening and then gets demoted and tries to replicate from a replica that didn't
have any of the PINGs (or at least not the last ones).

however, consider this case:
master A has two replicas (B and C) replicating directly from it.
there's no traffic at all, and also no network issues, just many pings in the
tail of the backlog. now B gets promoted, A becomes a replica of B, and C
remains a replica of A. when A gets demoted, it trims the pings from its
backlog, and successfully replicate from B. However, C is still aware of
these PINGs; when it disconnects and reconnects to A, it'll ask for something
that's not in the backlog anymore (since A trimmed the tail of its backlog),
and be forced to do a full sync (something it didn't have to do before the
meaningful offset fix).

Besides that, the psync2 test was always failing randomly here and there, it
turns out the reason were PINGs. Investigating it shows the following scenario:

cycle 1: redis #1 is master, and all the rest are direct replicas of #1
cycle 2: redis #2 is promoted to master, #1 is a replica of #2 and redis#3 is replica of #1
now we see that when #1 is demoted it prints:
17339:S 21 Apr 2020 11:16:38.523 * Using the meaningful offset 3929963 instead of 3929977 to exclude the final PINGs (14 bytes difference)
17339:S 21 Apr 2020 11:16:39.391 * Trying a partial resynchronization (request e2b3f8817735fdfe5fa4626766daa938b61419e5:3929964).
17339:S 21 Apr 2020 11:16:39.392 * Successful partial resynchronization with master.
and when redis#3 connects to the demoted #2, #2 says:
17339:S 21 Apr 2020 11:16:40.084 * Partial resynchronization not accepted: Requested offset for secondary ID was 3929978, but I can reply up to 3929964

so the issue here is that the meaningful offset feature saved the day for the
demoted master (since it needs to sync from a replica that didn't get the last
ping), but it didn't help one of the other replicas which did get the last ping.

* allow dictFind using static robj

since the recent addition of OBJ_STATIC_REFCOUNT and the assertion in
incrRefCount it is now impossible to use dictFind using a static robj,
because dictEncObjKeyCompare will call getDecodedObject which tries to
increment the refcount just in order to decrement it later.

* Rework comment in dictEncObjKeyCompare().

* fix loading race in psync2 tests

* hickup, re-fix dictEncObjKeyCompare

come to think of it, in theory (not in practice), getDecodedObject can
return the same original object with refcount incremented, so the
pointer comparision in the previous commit was invalid.
so now instead of checking the encoding, we explicitly check the
refcount.

* Extend XINFO STREAM output

Introducing XINFO STREAM <key> FULL

* Fix create-cluster BIN_PATH.

* XINFO STREAM FULL should have a default COUNT of 10

* fix pipelined WAIT performance issue.

If client gets blocked again in `processUnblockedClients`, redis will not send
`REPLCONF GETACK *` to slaves untill next eventloop, so the client will be
blocked for 100ms by default(10hz) if no other file event fired.

move server.get_ack_from_slaves sinppet after `processUnblockedClients`, so
that both the first WAIT command that puts client in blocked context and the
following WAIT command processed in processUnblockedClients would trigger
redis-sever to send `REPLCONF GETACK *`, so that the eventloop would get
`REPLCONG ACK <reploffset>` from slaves and unblocked ASAP.

* Comment clearly why we moved some code in redis#6623.

* redis-cli: try to make clusterManagerFixOpenSlot() more readable.

Also improve the message to make clear that there is no *clear* owner,
not that there is no owner at all.

* redis-cli: simplify cluster nodes coverage display.

* redis-cli: safer cluster fix with unreachalbe masters.

* Fix tracking table max keys option in redis.conf.

* lazyfree & eviction: record latency generated by lazyfree eviction

1. add eviction-lazyfree monitor
2. put eviction-del & eviction-lazyfree into eviction-cycle
   that means eviction-cycle contains all the latency in
   the eviction cycle including del and lazyfree
3. use getMaxmemoryState to check if we can break in lazyfree-evict

* CLIENT KILL USER <username>.

* MIGRATE AUTH2 for ACL support.

* redis-cli command help updated.

* redis-cli: fix hints with subcommands.

* Update help.h again before Redis 6 GA.

* Save a call to stopThreadedIOIfNeeded() for the base case.

Probably no performance changes, but the code should be trivial to
read as in "No threading? Use the normal function and return".

* Revert "optimize memory usage of deferred replies"

This reverts commit fb732f7.

* Cast printf() argument to the format specifier.

We could use uint64_t specific macros, but after all it's simpler to
just use an obvious equivalent type plus casting: this will be a no op
and is simpler than fixed size types printf macros.

* Update redis-cli.c

* optimize memory usage of deferred replies - fixed

When deffered reply is added the previous reply node cannot be used so
all the extra space we allocated in it is wasted. in case someone uses
deffered replies in a loop, each time adding a small reply, each of
these reply nodes (the small string reply) would have consumed a 16k
block.
now when we add anther diferred reply node, we trim the unused portion
of the previous reply block.

see redis#7123

cherry picked from commit fb732f7
with fix to handle a crash with LIBC allocator, which apparently can
return the same pointer despite changing it's size.
i.e. shrinking an allocation of 16k into 56 bytes without changing the
pointer.

* Support setcpuaffinity on linux/bsd

Currently, there are several types of threads/child processes of a
redis server. Sometimes we need deeply optimise the performance of
redis, so we would like to isolate threads/processes.

There were some discussion about cpu affinity cases in the issue:
https://github.com/antirez/redis/issues/2863

So implement cpu affinity setting by redis.conf in this patch, then
we can config server_cpulist/bio_cpulist/aof_rewrite_cpulist/
bgsave_cpulist by cpu list.

Examples of cpulist in redis.conf:
server_cpulist 0-7:2      means cpu affinity 0,2,4,6
bio_cpulist 1,3           means cpu affinity 1,3
aof_rewrite_cpulist 8-11  means cpu affinity 8,9,10,11
bgsave_cpulist 1,10-11    means cpu affinity 1,10,11

Test on linux/freebsd, both work fine.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>

* Client Side Caching: Add Tracking Prefix Number Stats in Server Info

* reformat code

* add daily github actions with libc malloc and valgrind

* fix memlry leaks with diskless replica short read.
* fix a few timing issues with valgrind runs
* fix issue with valgrind and watchdog schedule signal

about the valgrind WD issue:
the stack trace test in logging.tcl, has issues with valgrind:
==28808== Can't extend stack to 0x1ffeffdb38 during signal delivery for thread 1:
==28808==   too small or bad protection modes

it seems to be some valgrind bug with SA_ONSTACK.
SA_ONSTACK seems unneeded since WD is not recursive (SA_NODEFER was removed),
also, not sure if it's even valid without a call to sigaltstack()

* XPENDING should not update consumer's seen-time

Same goes for XGROUP DELCONSUMER (But in this case, it doesn't
have any visible effect)

* Rework a bit the documentation for CPU pinning.

* Fix NetBSD build by fixing redis_set_thread_title() support.

See redis#7188.

* Fix compiler warnings on function rev(unsigned long)

* Add --user argument to redis-benchmark.c (ACL)

* Fix CRC64 initialization outside the Redis server itself.

* Move CRC64 initialization in main().

* Drop not needed part from redis#7194.

* make struct user anonymous (only typedefed)

This works because this struct is never referenced by its name,
but always by its type.

This prevents a conflict with struct user from <sys/user.h>
when compiling against uclibc.

Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>

* Test: --dont-clean should do first cleanup.

* Dump recent backlog on master query generating errors.

* Don't propagate spurious MULTI on DEBUG LOADAOF.

* Remove unreachable branch.

* Redis Benchmark: Fix coredump because of double free

* add include guard for lolwut.h

* add jemalloc-bg-thread config in redis conf

* stringmatchlen() should not expect null terminated strings.

* Use dictSize to get the size of dict in dict.c

* Cluster: introduce data_received field.

We want to send pings and pongs at specific intervals, since our packets
also contain information about the configuration of the cluster and are
used for gossip. However since our cluster bus is used in a mixed way
for data (such as Pub/Sub or modules cluster messages) and metadata,
sometimes a very busy channel may delay the reception of pong packets.
So after discussing it in redis#7216, this commit introduces a new field that
is not exposed in the cluster, is only an internal information about
the last time we received any data from a given node: we use this field
in order to avoid detecting failures, claiming data reception of new
data from the node is a proof of liveness.
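
The failure-detection idea can be modeled as follows; this is a hypothetical Python sketch (field and function names are illustrative, not the actual cluster.c code) showing how any received data, not only pongs, counts as proof of liveness:

```python
def node_delay(now, node):
    """Delay used for failure detection: if we received any data from the
    node recently, treat it as proof of liveness even if the pong is late."""
    ping_delay = now - node["ping_sent"] if node["ping_sent"] else 0
    data_delay = now - node["data_received"]
    # Use the smaller delay: fresh data from the node resets suspicion,
    # even when a busy cluster bus delayed the pong packet.
    return min(ping_delay, data_delay)

node = {"ping_sent": 100, "data_received": 140}
assert node_delay(150, node) == 10  # pong is late, but data kept arriving
```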

* Cluster: refactor ping/data delay handling.

* Cluster: clarify we always resolve the sender.

* TLS: Fix test failures on recent Debian/Ubuntu.

Seems like on some systems choosing specific TLS v1/v1.1 versions no
longer works as expected. The test is reduced to v1.2 now, which is still
good enough to test the mechanism, and matters most anyway.

* TLS: Add crypto locks for older OpenSSL support.

This is really required only for older OpenSSL versions.

Also, at the moment Redis does not use OpenSSL from multiple threads so
this will only be useful if modules end up doing that.

* Handle keys with hash tag when computing hash slot using tcl cluster client.

* Add a test to prove current tcl cluster client can not handle keys with hash tag.
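
Per the Redis Cluster spec, only the substring between the first `{` and the next `}` participates in slot hashing, provided it is non-empty; the tcl client needed the same rule. A sketch of that tag extraction (a Python illustration, not the tcl or C implementation):

```python
def hash_tag(key):
    """Return the part of the key that is used for hash slot computation."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        # Only a non-empty tag between the braces is used.
        if end != -1 and end != start + 1:
            return key[start + 1:end]
    return key

assert hash_tag("{user1000}.following") == "user1000"
assert hash_tag("foo{}{bar}") == "foo{}{bar}"  # empty tag: hash the whole key
assert hash_tag("foo{bar}{zap}") == "bar"      # only the first tag counts
assert hash_tag("plainkey") == "plainkey"
```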

* fix redis 6.0 not freeing closed connections during loading.

This bug was introduced by a recent change in which readQueryFromClient
is using freeClientAsync, and despite the fact that now
freeClientsInAsyncFreeQueue is in beforeSleep, that's not enough since
it's not called during loading in processEventsWhileBlocked.
furthermore, afterSleep was called in that case but beforeSleep wasn't.

This bug also caused slowness, since the level-triggered mode of epoll
kept signaling these connections as readable, causing us to keep doing
connRead again and again for all of these, which kept accumulating.

now both before and after sleep are called, but not all of their actions
are performed during loading, some are only reserved for the main loop.

fixes issue redis#7215

* fix unstable replication test

this test, which has coverage for various flows of diskless master, was
failing randomly from time to time.

the failure was:
[err]: diskless all replicas drop during rdb pipe in tests/integration/replication.tcl
log message of '*Diskless rdb transfer, last replica dropped, killing fork child*' not found

what seems to have happened is that the master didn't detect that all
replicas dropped by the time the replication ended; it thought that one
replica was still connected.

now the test takes a few seconds longer but it seems stable.

* Some rework of redis#7234.

* NetBSD build update.

This platform supports CPU affinity (but not OpenBSD).

* Redis-cli 6.0.1 `--cluster-yes` doesn't work (fix redis#7246)

This makes it so that all prompts for all redis-cli --cluster commands are automatically answered with a yes.

* fix typo ...

* Track events processed while blocked globally.

Related to redis#7234.

* Tracking: send eviction messages when evicting entries.

A fix for redis#7249.

* rax.c updated from upstream antirez/rax.

* Regression test for redis#7249.

* Added a refcount on timer events to prevent deletion of recursive timer calls

* Converge hash validation for adding and removing

* do not handle --cluster-yes for cluster fix mode

* Cache master without checking of deferred close flags.

The context is issue redis#7205: since the introduction of threaded I/O we close
clients asynchronously by default from readQueryFromClient(). So we
should no longer prevent the caching of the master client, to later
PSYNC incrementally, if such flags are set. However we also don't want
the master client to be cached with such flags (would be closed
immediately after being restored). And yet we want a way to understand
if a master was closed because of a protocol error, and in that case
prevent the caching.

* Remove the client from CLOSE_ASAP list before caching the master.

This was broken in 1a7cd2c: we identified a crash in the CI. What
was happening before the fix was likely the following:

1. The client gets in the async free list.
2. However freeClient() gets called again against the same client
   which is a master.
3. The client arrived in freeClient() with the CLOSE_ASAP flag set.
4. The master gets cached, but NOT removed from the CLOSE_ASAP linked
   list.
5. The master client that was cached was immediately removed since it
   was still in the list.
6. Redis accessed a freed cached master.

This is how the crash looked like:

=== REDIS BUG REPORT START: Cut & paste starting from here ===
1092:S 16 May 2020 11:44:09.731 # Redis 999.999.999 crashed by signal: 11
1092:S 16 May 2020 11:44:09.731 # Crashed running the instruction at: 0x447e18
1092:S 16 May 2020 11:44:09.731 # Accessing address: 0xffffffffffffffff
1092:S 16 May 2020 11:44:09.731 # Failed assertion:  (:0)

------ STACK TRACE ------
EIP:
src/redis-server 127.0.0.1:21300(readQueryFromClient+0x48)[0x447e18]

And the 0xffff address access likely comes from accessing an SDS that is
set to NULL (we go -1 offset to read the header).

* add regression test for the race in redis#7205

with the original version of 6.0.0, this test detects an excessive full
sync.
with the fix in 1a7cd2c, this test detects memory corruption,
especially when using libc allocator with or without valgrind.

* Improve the PSYNC2 test reliability.

* fix valgrind test failure in replication test

In b441628 I added more keys to that test to make it run longer,
but under valgrind this now means the test times out; give valgrind more
time.

* Redis Benchmark: generate random test data

The random test data generation function was designed by antirez. See redis#7196.

* fix clear USER_FLAG_ALLCOMMANDS flag in acl

In ACLSetUserCommandBit, when the command bit overflows, no operation
is performed, so there is no need to clear the USER_FLAG_ALLCOMMANDS flag.

In ACLSetUser, when adding a subcommand, we don't need to call
ACLGetCommandID ahead of time since the subcommand may be empty.

* Redis-Benchmark: avoid a potential memory leak

* improve DEBUG MALLCTL to be able to write to write-only fields.

also support:
  debug mallctl-str thread.tcache.flush VOID

* TLS: Improve tls-protocols clarity in redis.conf.

* fix a rare active defrag edge case bug leading to stagnation

There's a rare case which leads to stagnation in the defragger, causing
it to keep scanning the keyspace and do nothing (not moving any
allocation). This happens when all the allocator slabs of a certain bin
have the same % utilization, but the slab from which new allocations are
made has a lower utilization.

This commit fixes it by removing the current slab from the overall
average utilization of the bin. It also eliminates any precision loss in
the utilization calculation and moves the decision about the defrag to
reside inside jemalloc.

It also adds a test that consistently reproduces this issue.
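
The fix can be modeled numerically. Below is a hypothetical, heavily simplified Python sketch (not what jemalloc actually tracks) of why excluding the current slab from the bin average avoids the stagnation described above:

```python
def should_defrag(slabs, current):
    """Decide whether allocations in slab `current` are worth moving.
    Each slab is a (used, total) pair. The current slab, where new
    allocations land, is excluded from the bin average; otherwise equal
    utilizations can cause an endless scan that never moves anything."""
    others = [s for i, s in enumerate(slabs) if i != current]
    used = sum(u for u, _ in others)
    total = sum(t for _, t in others)
    cur_used, cur_total = slabs[current]
    # Cross-multiplication keeps this in integer math (no precision loss):
    # defrag only if the current slab is less utilized than the rest.
    return cur_used * total < used * cur_total

# All slabs at 50% utilization, new allocations go to slab 0:
slabs = [(4, 8), (4, 8), (4, 8)]
assert not should_defrag(slabs, 0)  # nothing to gain, no stagnation loop

# Current slab less utilized than the rest of the bin:
slabs = [(2, 8), (6, 8), (6, 8)]
assert should_defrag(slabs, 0)
```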

* Tracking: flag CLIENT_TRACKING_BROKEN_REDIR when redir broken

* Fix reply bytes calculation error

Fix redis#7275.

* Replace addDeferredMultiBulkLength with addReplyDeferredLen in comment

* fix server crash for STRALGO command

* using moreargs variable

* EAGAIN for tls during diskless load

* Always disconnect chained replicas when the replica performs a PSYNC with the master, to avoid replication offset mismatch between master and chained replicas.

* Fix redis#7306 less aggressively.

Citing from the issue:

btw I suggest we change this fix to something else:
* We revert the fix.
* We add a call that disconnects chained replicas in the place where we trim the replica (that is a master in this case) offset.
This way we can avoid disconnections when there is no trimming of the backlog.

Note that we now want to disconnect replicas asynchronously in
disconnectSlaves(), because it's in general safer now that we can call
it from freeClient(). Otherwise for instance the command:

    CLIENT KILL TYPE master

May crash: clientCommand() starts scanning the linked list of clients,
looking for clients to kill. However it finds the master, kills it
calling freeClient(), but this in turn calls replicationCacheMaster()
that may also call disconnectSlaves() now. So the linked list iterator
of the clientCommand() will no longer be valid.

* Make disconnectSlaves() synchronous in the base case.

Otherwise we run into that:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

This happens because we disconnect all the replicas and free the
replication backlog in certain replication paths, and the code that
frees the replication backlog expects that no replica is connected.

However we still need to free the replicas asynchronously in certain
cases, as documented in the top comment of disconnectSlaves().

* add CI for 32bit build

* PSYNC2: second_replid_offset should be real meaningful offset

After adjustMeaningfulReplOffset(), all the other related variables
should be updated, including server.second_replid_offset.

Otherwise an older Redis version like 5.0 may receive wrong data from
the replication stream, because Redis 5.0 can sync with Redis 6.0 but
doesn't know about the meaningful offset.

* Clarify what is happening in PR redis#7320.

* Test: PSYNC2 test can now show server logs.

* tests: each test client works on a distinct port range

apparently when running tests in parallel (the default of --clients 16),
there's a chance for two tests to use the same port.
specifically, one test might shut down a master and still have the
replica up, and then another test will re-use the port number of the
master for another master, and then that replica will connect to the
master of the other test.

this can cause a master to count too many full syncs and fail a test if
we run the tests with --single integration/psync2 --loop --stop

see Problem 2 in redis#7314

* avoid using sendfile if tls-replication is enabled

this obviously broke the tests, but went unnoticed so far since tls
wasn't often tested.

* daily CI test with tls

* Replication: log backlog creation event.

* fix clusters mixing accidentally by gossip

`clusterStartHandshake` will start a handshake
and eventually send a CLUSTER MEET message, which is strictly prohibited
in the Redis Cluster spec.
Only a system administrator can initiate a CLUSTER MEET message.
Further, according to the spec, rather than IP/PORT pairs, only node IDs
can be trusted.

* Set a protocol error if the master uses the inline protocol.

We want to react a bit more aggressively if we sense that the master is
sending us some corrupted stream. By setting the protocol error we both
ensure that the replica will disconnect, and avoid caching the master so
that a full SYNC will be required. This is protective against
replication bugs.

* Remove the meaningful offset feature.

After a closer look, the Redis core developers all believe that this was
too fragile, caused many bugs that we didn't expect and that were very
hard to track. Better to find an alternative solution that is simpler.

* Remove the PSYNC2 meaningful offset test.

* Another meaningful offset test removed.

* Fix TLS certificate loading for chained certificates.

This impacts client verification for chained certificates (such as Let's
Encrypt certificates). Client verification requires the full chain in
order to properly verify the certificate.

* tests: find_available_port start search from next port

i.e. don't start the search from scratch hitting the used ones again.
this will also reduce the likelihood of collisions (if there are any
left) by increasing the time until we re-use a port we did use in the
past.
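
The rotating search can be sketched as follows; a hypothetical Python model (the real helper lives in the tcl test suite, and the names here are illustrative):

```python
last_port = None

def find_available_port(base, count, in_use):
    """Search for a free port in [base, base+count), resuming right after
    the last port we handed out instead of restarting from `base`."""
    global last_port
    start = last_port + 1 if last_port is not None else base
    for i in range(count):
        port = base + (start - base + i) % count  # wrap around the range
        if port not in in_use:
            last_port = port
            return port
    raise RuntimeError("no free port in range")

in_use = {21000, 21001}
assert find_available_port(21000, 10, in_use) == 21002
assert find_available_port(21000, 10, in_use) == 21003  # resumes, no quick re-use
```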

* Drop useless line from replicationCacheMaster().

* 32bit CI needs to build modules correctly

* revive meaningful offset tests

* adjust revived meaningful offset tests

these tests create several edge cases that are otherwise uncovered (at
least not consistently) by the test suite, so although they're no longer
testing what they were meant to test, it's still a good idea to keep
them in the hope that they'll expose some issue in the future.

* Replication: showLatestBacklog() refactored out.

* Test: add the tracking unit as default.

* Test: take PSYNC2 test master timeout high during switch.

This will likely avoid false positives due to trailing pings.

* Fix handling of special chars in ACL LOAD.

Now it is also possible for ACL SETUSER to accept empty strings
as valid operations (doing nothing), so for instance

    ACL SETUSER myuser ""

Will have just the effect of creating a user in the default state.

This should fix redis#7329.

* fix pingoff test race

* do not free a protected client in freeClientsInAsyncFreeQueue

related redis#7234

* AOF: append the original SET if there is no expire option

* Revert "avoid using sendfile if tls-replication is enabled"

This reverts commit b9abecf.

* Revert "Implements sendfile for redis."

This reverts commit 9cf500a.

* return the correct proto version

HELLO should return the current protocol version, while the code
hardcoded 3.

* Avoid rejecting WATCH / UNWATCH, like MULTI/EXEC/DISCARD

Much like MULTI/EXEC/DISCARD, WATCH and UNWATCH do not actually
operate on the database or server state, but instead operate on the
client state. The client may send them all in one long pipeline and
check all the responses only at the end, so failing them may lead to a
mismatch between the client state on the server and the one on the
client end, and to executing the wrong commands (ones that were meant
to be discarded).

The watched keys are not actually stored in the client struct, but they
are in fact part of the client state. For instance, they're not cleared
or moved on SWAPDB or FLUSHDB.

* Don't queue commands in an already aborted MULTI state

* fix disconnectSlaves, to try to free each slave.

the recent change in that loop (iteration rather than waiting for it to
be empty) was intended to avoid an endless loop in case some slave would
refuse to be freed.

but the lookup of the first client remained, which would have caused it
to try the first one again and again instead of moving on.

* fix server crash in STRALGO command

* Temporary fix for redis#7353 issue about EVAL during -BUSY.

* Adapt EVAL+busy script test to new behavior.

* LRANK: Add command (the command will be renamed LPOS).

The `LRANK` command returns the index (position) of a given element
within a list. Using the `direction` argument it is possible to specify
going from head to tail (ascending, 1) or from tail to head (descending,
-1). Only the first found index is returned. The complexity is O(N).

When using lists as a queue it can be of interest at what position a
given element is, for instance to monitor a job processing through a
work queue. This came up within the Python `rq` project which is based
on Redis[0].

[0]: rq/rq#1197

Signed-off-by: Paul Spooren <mail@aparcar.org>
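
The behavior described above (later finalized as LPOS) can be sketched in Python; a minimal model of the head-to-tail / tail-to-head search, not the actual t_list.c implementation:

```python
def lpos(lst, element, direction=1):
    """Return the first index of `element`, scanning head-to-tail when
    direction is 1 and tail-to-head when direction is -1; None if absent."""
    indexes = range(len(lst)) if direction == 1 else range(len(lst) - 1, -1, -1)
    for i in indexes:
        if lst[i] == element:
            return i
    return None

# Monitoring a job's position in a work queue, as in the rq use case:
queue = ["a", "b", "c", "b"]
assert lpos(queue, "b", 1) == 1    # first match from the head
assert lpos(queue, "b", -1) == 3   # first match from the tail
assert lpos(queue, "x") is None
```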

* LPOS: implement the final design.

* LPOS: update to latest proposal.

See https://gist.github.com/antirez/3591c5096bc79cad8b5a992e08304f48

* LPOS: tests + crash fix.

* fix memory leak

* help.h updated.

* Fix LCS object type checking. Related to redis#7379.

* Fix RM_ScanKey module api not to return int encoded strings

The scan key module API provides the scan callback with the current
field name and value (if one exists). Those arguments are
RedisModuleString*, which means they are supposed to point to a robj
encoded as a string. Using the createStringObjectFromLongLong function
might return a robj that points to an integer, and so break a module
that tries, for example, to use RedisModule_StringPtrLen on the given
field/value.

The PR introduces a fix that uses the createObject and sdsfromlonglong
functions. Using those functions guarantees that the field and value
passed to the scan callback will be strings.

The PR also changes the Scan test module to use RedisModule_StringPtrLen
to catch the issue. Without this, the issue is hidden because
RedisModule_ReplyWithString knows how to handle the integer encoding of
the given robj (RedisModuleString).

The PR also introduces a new test to verify the issue is solved.

* cluster.c: remove the if check around clusterSendFail in markNodeAsFailingIfNeeded

* Tracking: fix enableBcastTrackingForPrefix() invalid sdslen() call.

Related to redis#7387.

* Use cluster connections too, to limit maxclients.

See redis#7401.

* fix comments in listpack.c

* ensure SHUTDOWN_NOSAVE in Sentinel mode

- enforce the SHUTDOWN_NOSAVE flag in one place to make it consistent
  when running in Sentinel mode

* Fix comments in function raxLowWalk of rax.c

* fix memory leak in sentinel connection sharing

* Clarify maxclients and cluster in conf. Remove myself too.

* Fix BITFIELD i64 type handling, see redis#7417.

* Include cluster.h for getClusterConnectionsCount().

* EXEC always fails with EXECABORT and multi-state is cleared

In order to support the use of multi-exec in pipeline, it is important that
MULTI and EXEC are never rejected and it is easy for the client to know if the
connection is still in multi state.

It was easy to make sure MULTI and DISCARD never fail (done by previous
commits) since these only change the client state and don't do any actual
change in the server, but EXEC is a different story.

Since in the past it was possible for clients to handle some EXEC errors
and retry the EXEC, we now can't afford to return any error on EXEC
other than EXECABORT, which now carries with it the real reason for the
abort too.

Other fixes in this commit:
- Some checks that were performed at the time of queuing need to be re-
  validated when EXEC runs; for instance, if the transaction contains
  write commands, it needs to be aborted. There was one check that was
  already done in execCommand (-READONLY), but other checks were
  missing: -OOM, -MISCONF, -NOREPLICAS, -MASTERDOWN
- When a command was rejected by processCommand it was rejected with
  addReply, which was not recognized as an error in case the bad command
  came from the master. This will enable counting or MONITORing these
  errors in the future.
- make it easier for tests to create additional (non-deferred) clients.
- add tests for the fixes of this commit.
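
The queuing/abort rules described above can be modeled as a small state machine; a hypothetical Python sketch (class and method names are illustrative, not Redis internals) of how an error at queue time forces EXEC to fail only with EXECABORT while always clearing the multi state:

```python
class MultiState:
    """Minimal model of the MULTI/EXEC client state."""
    def __init__(self):
        self.in_multi = False
        self.dirty = False
        self.queue = []

    def multi(self):
        self.in_multi = True
        return "+OK"

    def queue_command(self, cmd, ok=True):
        if not ok:                 # e.g. unknown command at queue time
            self.dirty = True      # the whole transaction is doomed
            return "-ERR"
        self.queue.append(cmd)
        return "+QUEUED"

    def execute(self):
        self.in_multi = False      # multi state is always cleared
        queued, self.queue = self.queue, []
        if self.dirty:
            self.dirty = False
            return "-EXECABORT Transaction discarded because of previous errors."
        return [f"run {c}" for c in queued]

c = MultiState()
c.multi()
c.queue_command("SET k v")
c.queue_command("BADCMD", ok=False)
reply = c.execute()
assert reply.startswith("-EXECABORT")   # the only error EXEC may return
assert not c.in_multi                   # safe to pipeline the next MULTI
```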

* updated copyright year

Changed "2015" to "2020"

* LPOS: option FIRST renamed RANK.

* Update comment to clarify change in redis#7398.

Co-authored-by: antirez <antirez@gmail.com>
Co-authored-by: Valentino Geron <valentino@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Madelyn Olson <matolson@amazon.com>
Co-authored-by: Guy Benoish <guy.benoish@redislabs.com>
Co-authored-by: srzhao <srzhao@sysnew.com>
Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
Co-authored-by: Benjamin Sergeant <bsergean@gmail.com>
Co-authored-by: zhenwei pi <pizhenwei@bytedance.com>
Co-authored-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: Deliang Yang <yang623601391@gmail.com>
Co-authored-by: Muhammad Zahalqa <m@tryfinally.com>
Co-authored-by: Titouan Christophe <titouan.christophe@railnova.eu>
Co-authored-by: Brad Dunbar <dunbarb2@gmail.com>
Co-authored-by: ShooterIT <wangyuancode@163.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: WuYunlong <xzsyeb@126.com>
Co-authored-by: David Carlier <devnexen@gmail.com>
Co-authored-by: Benjamin Sergeant <bsergeant@mz.com>
Co-authored-by: hujie <hujiecs@qq.com>
Co-authored-by: Qu Chen <quchen@amazon.com>
Co-authored-by: Liu Zhen <zhenliu215252@sohu-inc.com>
Co-authored-by: Kevin Fwu <kfwu@litespeedtech.com>
Co-authored-by: xhe <xw897002528@gmail.com>
Co-authored-by: Paul Spooren <mail@aparcar.org>
Co-authored-by: meir@redislabs.com <meir@redislabs.com>
Co-authored-by: root <root@mongo.config1.com>
Co-authored-by: chenhui0212 <chenhui19900212@gmail.com>
Co-authored-by: Tomasz Poradowski <tomasz.poradowski@generiscorp.com>
Co-authored-by: Dave Nielsen <dnielsen@gmail.com>
whq4123 added a commit to whq4123/redis that referenced this pull request Aug 22, 2020
* fix server crash for STRALGO command

* using moreargs variable

* EAGAIN for tls during diskless load

* Disconnect chained replicas when the replica performs PSYNC with the master always to avoid replication offset mismatch between master and chained replicas.

* Fix #7306 less aggressively.

Citing from the issue:

btw I suggest we change this fix to something else:
* We revert the fix.
* We add a call that disconnects chained replicas in the place where we trim the replica (that is a master i this case) offset.
This way we can avoid disconnections when there is no trimming of the backlog.

Note that we now want to disconnect replicas asynchronously in
disconnectSlaves(), because it's in general safer now that we can call
it from freeClient(). Otherwise for instance the command:

    CLIENT KILL TYPE master

May crash: clientCommand() starts running the linked of of clients,
looking for clients to kill. However it finds the master, kills it
calling freeClient(), but this in turn calls replicationCacheMaster()
that may also call disconnectSlaves() now. So the linked list iterator
of the clientCommand() will no longer be valid.

* Make disconnectSlaves() synchronous in the base case.

Otherwise we run into that:

Backtrace:
src/redis-server 127.0.0.1:21322(logStackTrace+0x45)[0x479035]
src/redis-server 127.0.0.1:21322(sigsegvHandler+0xb9)[0x4797f9]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fd373c5e390]
src/redis-server 127.0.0.1:21322(_serverAssert+0x6a)[0x47660a]
src/redis-server 127.0.0.1:21322(freeReplicationBacklog+0x42)[0x451282]
src/redis-server 127.0.0.1:21322[0x4552d4]
src/redis-server 127.0.0.1:21322[0x4c5593]
src/redis-server 127.0.0.1:21322(aeProcessEvents+0x2e6)[0x42e786]
src/redis-server 127.0.0.1:21322(aeMain+0x1d)[0x42eb0d]
src/redis-server 127.0.0.1:21322(main+0x4c5)[0x42b145]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7fd3738a3830]
src/redis-server 127.0.0.1:21322(_start+0x29)[0x42b409]

Since we disconnect all the replicas and free the replication backlog in
certain replication paths, and the code that will free the replication
backlog expects that no replica is connected.

However we still need to free the replicas asynchronously in certain
cases, as documented in the top comment of disconnectSlaves().

* add CI for 32bit build

* PSYNC2: second_replid_offset should be real meaningful offset

After adjustMeaningfulReplOffset(), all the other related variable
should be updated, including server.second_replid_offset.

Or the old version redis like 5.0 may receive wrong data from
replication stream, cause redis 5.0 can sync with redis 6.0,
but doesn't know meaningful offset.

* Clarify what is happening in PR #7320.

* Test: PSYNC2 test can now show server logs.

* tests: each test client work on a distinct port range

apparently when running tests in parallel (the default of --clients 16),
there's a chance for two tests to use the same port.
specifically, one test might shutdown a master and still have the
replica up, and then another test will re-use the port number of master
for another master, and then that replica will connect to the master of
the other test.

this can cause a master to count too many full syncs and fail a test if
we run the tests with --single integration/psync2 --loop --stop

see Probmem 2 in #7314

* avoid using sendfile if tls-replication is enabled

this obviously broke the tests, but went unnoticed so far since tls
wasn't often tested.

* daily CI test with tls

* Replication: log backlog creation event.

* fix clusters mixing accidentally by gossip

`clusterStartHandshake` will start hand handshake
and eventually send CLUSTER MEET message, which is strictly prohibited
in the REDIS CLUSTER SPEC.
Only system administrator can initiate CLUSTER MEET message.
Futher, according to the SPEC, rather than IP/PORT pairs, only nodeid
can be trusted.

* Set a protocol error if master use the inline protocol.

We want to react a bit more aggressively if we sense that the master is
sending us some corrupted stream. By setting the protocol error we both
ensure that the replica will disconnect, and avoid caching the master so
that a full SYNC will be required. This is protective against
replication bugs.

* Remove the meaningful offset feature.

After a closer look, the Redis core devleopers all believe that this was
too fragile, caused many bugs that we didn't expect and that were very
hard to track. Better to find an alternative solution that is simpler.

* Remove the PSYNC2 meaningful offset test.

* Another meaningful offset test removed.

* Fix TLS certificate loading for chained certificates.

This impacts client verification for chained certificates (such as Lets
Encrypt certificates). Client Verify requires the full chain in order to
properly verify the certificate.

* tests: find_available_port start search from next port

i.e. don't start the search from scratch hitting the used ones again.
this will also reduce the likelihood of collisions (if there are any
left) by increasing the time until we re-use a port we did use in the
past.

* Drop useless line from replicationCacheMaster().

* 32bit CI needs to build modules correctly

* revive meaningful offset tests

* adjust revived meaningful offset tests

these tests create several edge cases that are otherwise uncovered (at
least not consistently) by the test suite, so although they're no longer
testing what they were meant to test, it's still a good idea to keep
them in hope that they'll expose some issue in the future.

* Replication: showLatestBacklog() refactored out.

* Test: add the tracking unit as default.

* Test: take PSYNC2 test master timeout high during switch.

This will likely avoid false positives due to trailing pings.

* Fix handling of special chars in ACL LOAD.

Now it is also possible for ACL SETUSER to accept empty strings
as valid operations (doing nothing), so for instance

    ACL SETUSER myuser ""

Will have just the effect of creating a user in the default state.

This should fix #7329.

* fix pingoff  test race

* donot free protected client in freeClientsInAsyncFreeQueue

related #7234

* AOF: append origin SET if no expire option

* Revert "avoid using sendfile if tls-replication is enabled"

This reverts commit b9abecfc4c97db01fa6f09180c68a92ea5974ac1.

* Revert "Implements sendfile for redis."

This reverts commit 9cf500a3f67e4e2ce51414c354e3472faf095d5b.

* return the correct proto version
HELLO should return the current proto version, while the code hardcoded
3

* Avoid rejecting WATCH / UNWATCH, like MULTI/EXEC/DISCARD

Much like MULTI/EXEC/DISCARD, the WATCH and UNWATCH are not actually
operating on the database or server state, but instead operate on the
client state. the client may send them all in one long pipeline and check
all the responses only at the end, so failing them may lead to a
mismatch between the client state on the server and the one on the
client end, and execute the wrong commands (ones that were meant to be
discarded)

the watched keys are not actually stored in the client struct, but they
are in fact part of the client state. for instance, they're not cleared
or moved in SWAPDB or FLUSHDB.

* Don't queue commands in an already aborted MULTI state

* fix disconnectSlaves, to try to free each slave.

the recent change in that loop (iteration rather than waiting for it to
be empty) was intended to avoid an endless loop in case some slave would
refuse to be freed.

but the lookup of the first client remained, which would have caused it
to try the first one again and again instead of moving on.

* fix server crash in STRALGO command

* Temporary fix for #7353 issue about EVAL during -BUSY.

* Adapt EVAL+busy script test to new behavior.

* LRANK: Add command (the command will be renamed LPOS).

The `LRANK` command returns the index (position) of a given element
within a list. Using the `direction` argument it is possible to specify
going from head to tail (acending, 1) or from tail to head (decending,
-1). Only the first found index is returend. The complexity is O(N).

When using lists as a queue it can be of interest at what position a
given element is, for instance to monitor a job processing through a
work queue. This came up within the Python `rq` project which is based
on Redis[0].

[0]: https://github.com/rq/rq/issues/1197

Signed-off-by: Paul Spooren <mail@aparcar.org>

* LPOS: implement the final design.

* LPOS: update to latest proposal.

See https://gist.github.com/antirez/3591c5096bc79cad8b5a992e08304f48

* LPOS: tests + crash fix.

* fix memory leak

* help.h updated.

* Fix LCS object type checking. Related to #7379.

* Fix RM_ScanKey module api not to return int encoded strings

The scan key module API provides the scan callback with the current
field name and value (if it exists). Those arguments are RedisModuleString*
which means it supposes to point to robj which is encoded as a string.
Using createStringObjectFromLongLong function might return robj that
points to an integer and so break a module that tries for example to
use RedisModule_StringPtrLen on the given field/value.

The PR introduces a fix that uses the createObject function and sdsfromlonglong function.
Using those function promise that the field and value pass to the to the
scan callback will be Strings.

The PR also changes the Scan test module to use RedisModule_StringPtrLen
to catch the issue. without this, the issue is hidden because
RedisModule_ReplyWithString knows to handle integer encoding of the
given robj (RedisModuleString).

The PR also introduces a new test to verify the issue is solved.

* cluster.c remove if of clusterSendFail in markNodeAsFailingIfNeeded

* Tracking: fix enableBcastTrackingForPrefix() invalid sdslen() call.

Related to #7387.

* Use cluster connections too, to limit maxclients.

See #7401.

* fix comments in listpack.c

* ensure SHUTDOWN_NOSAVE in Sentinel mode

- enforcing of SHUTDOWN_NOSAVE flag in one place to make it consitent
  when running in Sentinel mode

* Fix comments in function raxLowWalk of listpack.c

* fix memory leak in sentinel connection sharing

* Clarify maxclients and cluster in conf. Remove myself too.

* Fix BITFIELD i64 type handling, see #7417.

* Include cluster.h for getClusterConnectionsCount().

* EXEC always fails with EXECABORT and multi-state is cleared

In order to support the use of multi-exec in pipeline, it is important that
MULTI and EXEC are never rejected and it is easy for the client to know if the
connection is still in multi state.

It was easy to make sure MULTI and DISCARD never fail (done by previous
commits) since these only change the client state and don't do any actual
change in the server, but EXEC is a different story.

Since in the past it was possible for clients to handle some EXEC errors and
retry the EXEC, we now can't afford to return any error on EXEC other than
EXECABORT, which now carries with it the real reason for the abort too.

Other fixes in this commit:
- Some checks that were performed at the time of queuing need to be re-
  validated when EXEC runs; for instance, if the transaction contains write
  commands, it needs to be aborted. There was one check that was already done
  in execCommand (-READONLY), but other checks were missing: -OOM, -MISCONF,
  -NOREPLICAS, -MASTERDOWN.
- When a command was rejected by processCommand it was rejected with addReply,
  which was not recognized as an error in case the bad command came from the
  master. This will make it possible to count or MONITOR these errors in the future.
- Make it easier for tests to create additional (non-deferred) clients.
- Add tests for the fixes of this commit.

* updated copyright year

Changed "2015" to "2020"

* LPOS: option FIRST renamed RANK.

* Update comment to clarify change in #7398.

* BITOP: propagate only when it really SET or DEL targetkey (#5783)

For example:
    BITOP not targetkey sourcekey

If targetkey and sourcekey don't exist, BITOP has no effect, so
we do not propagate it, saving AOF and replication traffic.
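
The propagation rule can be sketched as follows (an illustrative Python simulation of the described behavior, not the server's C implementation; the function name and the dict-as-keyspace model are made up for the example):

```python
def bitop_not_effect(db, targetkey, sourcekey):
    """Propagate BITOP NOT only when it really SETs or DELetes targetkey."""
    src = db.get(sourcekey)
    if src is None:
        # Empty result: the target key is deleted, but only if it existed.
        if targetkey in db:
            del db[targetkey]
            return "DEL"   # something changed: propagate
        return None        # no effect: nothing to propagate
    # Non-empty result: store the bitwise NOT and propagate the write.
    db[targetkey] = bytes(~b & 0xFF for b in src)
    return "SET"
```

When both keys are absent the function returns None, modeling the case where nothing is written to the AOF or the replication stream.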

* change references to the github repo location (#7479)

* tests/valgrind: don't use debug restart (#7404)

* tests/valgrind: don't use debug restart

DEBUG RESTART causes two issues:
1. it uses execve which replaces the original process and valgrind doesn't
   have a chance to check for errors, so leaks go unreported.
2. valgrind reports invalid calls to close() which we're unable to resolve.

So now the tests use restart_server mechanism in the tests, that terminates
the old server and starts a new one, new PID, but same stdout, stderr.

since the stderr can contain two or more valgrind reports, it is not enough
to just check for the absence of leaks; we also need to check for some known
errors. we do both, and fail if we either find an error, or can't find a
report saying there are no leaks.

other changes:
- when killing a server that was already terminated we check for leaks too.
- adding DEBUG LEAK which was used to test it.
- adding --trace-children to valgrind, although no longer needed.
- since the stdout contains two or more runs, we need slightly different way
  of checking if the new process is up (explicitly looking for the new PID)
- move the code that handles --wait-server to happen earlier (before
  watching the startup message in the log), and serve the restarted server too.

* squashme - CR fixes

* stabilize tests that look for log lines (#7367)

tests were sensitive to additional log lines appearing in the log,
causing the search to come up empty-handed.

instead of just looking for the n last log lines, capture the log lines
before performing the action, and then search from that offset.

* skip a test that uses +inf on valgrind (#7440)

On some platforms strtold("+inf") with valgrind returns a non-inf result

[err]: INCRBYFLOAT does not allow NaN or Infinity in tests/unit/type/incr.tcl
Expected 'ERR*would produce*' to equal or match '1189731495357231765085759.....'

* defrag.c activeDefragSdsListAndDict: when defragging sdsele, we can't use (#7492)

it to calculate the hash; we should use newsds.

* RESTORE ABSTTL won't store expired keys into the db (#7472)

Similarly to EXPIREAT with a TTL in the past, which implicitly deletes the
key and returns success, RESTORE should not store keys that are already
expired into the db.
When used together with REPLACE it should emit a DEL to keyspace
notification and replication stream.
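
The fixed behavior can be sketched like this (an illustrative Python simulation of the semantics described above, not the actual C code; the function name and dict-based keyspace are invented for the example):

```python
import time

def restore_absttl(db, key, value, abs_expire_ms, replace=False):
    """RESTORE with ABSTTL: never store a key whose absolute expiry is
    already in the past; with REPLACE, still delete an existing key
    (which would emit a DEL notification), mirroring EXPIREAT in the past."""
    now_ms = int(time.time() * 1000)
    if abs_expire_ms is not None and abs_expire_ms <= now_ms:
        if replace and key in db:
            del db[key]   # DEL to keyspace notification / replication stream
        return "OK"       # reply OK without storing the expired key
    db[key] = value
    return "OK"
```

Note how the command still replies OK in the already-expired case, just as EXPIREAT with a past TTL reports success.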

* redis-cli --bigkeys fixed to handle non-printable key names

* redis-cli --hotkeys fixed to handle non-printable key names

* TLS: Add missing redis-cli options. (#7456)

* Tests: fix and reintroduce redis-cli tests.

These tests have been broken and disabled for 10 years now!

* TLS: add remaining redis-cli support.

This adds support for the redis-cli --pipe, --rdb and --replica options
previously unsupported in --tls mode.

* Fix writeConn().

* Use pkg-config to properly detect libssl and libcrypto libraries (#7452)

* TLS: Ignore client cert when tls-auth-clients off. (#7457)

* TLS: Session caching configuration support. (#7420)

* TLS: Session caching configuration support.
* TLS: Remove redundant config initialization.

* Add missing latency-monitor tcl test to test_helper.tcl. (#6782)

* Fix typo in deps README (#7500)

* fix: typo in CI job name (#7466)

* fix benchmark in cluster mode fails to authenticate (#7488)

Co-authored-by: Oran Agra <oran@redislabs.com> (styling)

* STORE variants: SINTER,SUNION,SDIFF,ZUNION use setKey instead of dbDelete+dbAdd (#7489)

one of the differences (other than consistent code with SORT, GEORADIUS), is that the LFU of the old key is retained.

* fix description about ziplist, the code is ok (#6318)

* fix description about ZIP_BIG_PREVLEN(the code is ok), it's similar to
antirez#4705

* fix description about ziplist entry encoding field (the code is ok),
the max length should be 2^32 - 1 when encoding is 5 bytes

* update release scripts for new hosts, and CI to run more tests (#7480)

* update daily CI to include cluster and sentinel tests
* update daily CI to run when creating a new release
* update release scripts to work on the new redis.io hosts

* runtest --stop pause stops before terminating the redis server (#7513)

in the majority of cases (of this rarely used feature) we want to
stop and be able to connect to the shard with redis-cli.
since these are two different processes interacting with the tty we
need to stop both, and we'll have to hit enter twice, but it's not that
bad considering it is rarely used.

* fix recently added time sensitive tests failing with valgrind (#7512)

interestingly the latency monitor test fails because valgrind is slow
enough so that the time inside PEXPIREAT command from the moment of
the first mstime() call to get the basetime until checkAlreadyExpired
calls mstime() again is more than 1ms, and that test was too sensitive.

using this opportunity to speed up the test (unrelated to the failure)
the fix is just the longer time passed to PEXPIRE.

* RESTORE ABSTTL skip expired keys - leak (#7511)

* Replica always reports master's config epoch in CLUSTER NODES output. (#7235)

* Fix outdated help info in tcl tests. (#7516)

Before this commit, the output of "./runtest-cluster --help" is incorrect.
After this commit, the format of the following 3 output is consistent:
./runtest --help
./runtest-cluster --help
./runtest-sentinel --help

* redis-cli tests, fix valgrind timing issue (#7519)

this test when run with valgrind on github actions takes 160 seconds

* diskless master disconnect replicas when rdb child failed (#7518)

in case the rdb child failed, crashed or terminated unexpectedly, redis
would have marked the replica clients with repl_put_online_on_ack and
then killed them only after a minute when no ack was received.

it would not stream anything to these connections, so the only effect of
this bug is a delay of 1 minute in the replicas attempt to re-connect.

* Refactor RM_KeyType() by using macro. (#7486)

* Fix command help for unexpected options (#7476)

* correct error msg for num connections reaching maxclients in cluster mode (#7444)

* Add registers dump support for Apple silicon (#7453)

Export following environment variables before building on macOS on Apple silicon

export ARCH_FLAGS="-arch arm64"
export SDK_NAME=macosx
export SDK_PATH=$(xcrun --show-sdk-path --sdk $SDK_NAME)
export CFLAGS="$ARCH_FLAGS -isysroot $SDK_PATH -I$SDK_PATH/usr/include"
export CXXFLAGS=$CFLAGS
export LDFLAGS="$ARCH_FLAGS"
export CC="$(xcrun -sdk $SDK_PATH --find clang) $CFLAGS"
export CXX="$(xcrun -sdk $SDK_PATH --find clang++) $CXXFLAGS"
export LD="$(xcrun -sdk $SDK_PATH --find ld) $LDFLAGS"

make
make test
..
All tests passed without errors!

Backtrack logging assumes x86 and required updating

* Notify systemd on sentinel startup (#7168)

Co-authored-by: Daniel Murnane <dmurnane@eitccorp.com>

* Send null for invalidate on flush (#7469)

* Stream avoid duplicate parse id (#7450)

* Support passing stack allocated module strings to moduleCreateArgvFromUserFormat (#7528)

Specifically, the key passed to the module aof_rewrite callback is a stack allocated robj. When passing it to RedisModule_EmitAOF (with the appropriate "s" fmt string) redis used to panic when trying to inc the ref count of the stack allocated robj. Now such robjs are supported by copying them to a new heap robj. This doesn't affect performance because using the alternative "c" or "b" format strings also copies the input to a new heap robj.

* Adds GitHub issue templates (#7468)

Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: yoav-steinberg <yoav@monfort.co.il>

* Adds SHA256SUM to redis-stable tarball upload

* GitHub Actions workflows - use latest version of actions/checkout (#7534)

* Refactor streamAppendItem() by deleting redundancy condition. (#7487)

It will never happen that "lp != NULL && lp_bytes >= server.stream_node_max_bytes".
Assume that "lp != NULL && lp_bytes >= server.stream_node_max_bytes";
we get the following conditions:
a. lp != NULL
b. lp_bytes >= server.stream_node_max_bytes

If server.stream_node_max_bytes is 0, given condition a, condition b is always satisfied.
If server.stream_node_max_bytes is not 0, given conditions a and b, the code just a
	few lines above set lp to NULL, a contradiction with condition a.

So condition b is redundant. We can delete it safely.

* Run daily CI on PRs to release a branch (#7535)

* Fix cluster redirect for module command with no firstkey. (#7539)

Before this commit, processCommand() did not notice that cmd could be a module command
which declared `getkeys-api`, and for the purpose of cluster redirection handled it
as if it doesn't use any keys.

This commit fixed it by reusing the code in addReplyCommand().

* replication: need handle -NOPERM error after send ping (#7538)

* add missing caching command in client help (#7399)

* Add missing calls to raxStop (#7532)

Since the dynamic allocations in raxIterator are only used for deep walks, memory
leak due to missing call to raxStop can only happen for rax with key names longer
than 32 bytes.

Out of all the missing calls, the only ones that may lead to a leak are the rax
for consumer groups and consumers, and these were only in AOFRW and rdbSave, which
normally only happen in fork or at shutdown.

* Fix deprecated tail syntax in tests (#7543)

* Clarification on the bug that was fixed in PR #7539. (#7541)

Before that PR, processCommand() did not notice that cmd could be a module
command in which case getkeys_proc member has a different meaning.

The outcome was that a module command which doesn't take any key names in its
arguments (similar to SLOWLOG) would be handled as if it might have key name arguments
(similar to MEMORY): it would consider cluster redirection but end up with 0 keys
after an excessive call to getKeysFromCommand, eventually doing the right thing.

* Fixes to release scripts (#7547)

* Tests: drop TCL 8.6 dependency. (#7548)

This re-implements the redis-cli --pipe test so it no longer depends on a close feature available only in TCL 8.6.

Basically what this test does is run redis-cli --pipe: it generates a bunch of commands, pipes them through redis-cli, and inspects the result in both Redis and the redis-cli output.

To do that, we need to close stdin for redis-cli to indicate we're done so it can flush its buffers and exit. TCL channels are bi-directional, and TCL only offers a way to "one-way close" a channel starting with TCL 8.6. To work around that, we now generate the commands into a file and feed that file to redis-cli directly.

As we're writing to an actual file, the number of commands is now reduced.

* testsuite may leave servers alive on error (#7549)

in cases where you have
test name {
  start_server {
    start_server {
      assert
    }
  }
}

the exception will be thrown to the test proc, and the servers are
supposed to be killed on the way out. but it seems there was always a
bug of not cleaning the server stack, and recently (#7404) we started
relying on that stack in order to kill them, so with that bug sometimes
we would have tried to kill the same server twice, and leave one alive.

luckily, in most cases the pattern is:
start_server {
  test name {
  }
}

* Properly reset errno for rdbLoad (#7542)

* Fix typo in deps/README.md (#7553)

* Outdent github issues guidelines

* Add contribution guidelines for vulnerability reports

* Fix harmless bug in rioConnRead (#7557)

this code is in use only if the master is disk-based and the replica is
diskless. In this case we use a buffered reader, but we must avoid reading
past the rdb file, into the command stream, which luckily rdb.c doesn't
really attempt to do (it knows how much it should read).

When rioConnRead detects that the extra buffering attempt reaches beyond
the read limit it should read less, but if the caller actually requested
more, then it should return with an error rather than a short read. the
bug would have resulted in short read.

in order to fix it, the code must consider the real requested size, and
not the extra buffering size.

* This PR introduces a new loaded keyspace event (#7536)

Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>

* Stabilize bgsave test that sometimes fails with valgrind (#7559)

on ci.redis.io the test fails a lot, reporting that bgsave didn't end.
increasing the timeout we wait for that bgsave to get aborted.
in addition to that, i also verify that it indeed got aborted by
checking that the save counter wasn't reset.

add another test to verify that a successful bgsave indeed resets the
change counter.

* more strict check in rioConnRead (#7564)

* Fix prepareForShutdown function declaration (#7566)

* TLS: support cluster/replication without tls-port.

Initialize and configure OpenSSL even when tls-port is not used, because
we may still have tls-cluster or tls-replication.

Also, make sure to reconfigure OpenSSL when these parameters are changed
as TLS could have been enabled for the first time.

* Daily github action: run cluster and sentinel tests with tls (#7575)

* Add optional tls verification (#7502)

Adds an `optional` value to the previously boolean `tls-auth-clients` configuration keyword.

Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>

* Fix failing tests due to issues with wait_for_log_message (#7572)

- the test now waits for specific set of log messages rather than wait for
  timeout looking for just one message.
- we don't wanna sample the current length of the log after an action; due
  to a race, we need to start the search from the line number of the last
  message we were waiting for.
- when attempting to trigger a full sync, use multi-exec to avoid a race
  where the replica manages to re-connect before we completed the set of
  actions that should force a full sync.
- fix verify_log_message which was broken and unused

* TLS: Propagate and handle SSL_new() failures. (#7576)

The connection API may create an accepted connection object in an error
state, and callers are expected to check it before attempting to use it.

Co-authored-by: mrpre <mrpre@163.com>

* Fix TLS cluster tests. (#7578)

Fix consistency test added in af5167b7f without considering TLS
redis-cli configuration.

* fix leak in error handling of debug populate command (#7062)

valsize is not modified inside the for loop, so there is no need to fetch it from c->argv[4] on every iteration. Moreover, putting the check outside the loop also avoids a memory leak: if the check were kept inside the loop, the original code would have had to call decrRefCount(key) before bailing out.

* Add SignalModifiedKey hook in XGROUP CREATE with MKSTREAM option (#7562)

* Avoid an out-of-bounds read in the redis-sentinel (#7443)

The Redis sentinel would crash with a segfault after a few minutes
because it tried to read from a page without read permissions. Check up
front whether the sds is long enough to contain redis:slave or
redis:master before memcmp() as is done everywhere else in
sentinelRefreshInstanceInfo().

Bug report and commit message from Theo Buehler. Fix from Nam Nguyen.

Co-authored-by: Nam Nguyen <namn@berkeley.edu>

* Show threading configuration in INFO output (#7446)

Co-authored-by: Oran Agra <oran@redislabs.com>

* Clarify  RM_BlockClient() error condition. (#6093)

* Call propagate instead of writing directly to AOF/replicas (#6658)

Use higher-level API to funnel all generic propagation through
single function call.

* broadcast a PONG message when slot's migration is over, which may reduce the moved request of clients (#7571)

* Fix running single test 14-consistency-check.tcl (#7587)

* CI: Add daily CentOS 7.x jobs. (#7582)

* Remove dead code from update_zmalloc_stat_alloc (#7589)

this seems like leftover from before 6eb51bf

* module hook for master link up missing on successful psync (#7584)

besides, the hooks test was time sensitive: when the replica managed to
reconnect quickly after the client kill, the test would fail

* Fix test-centos7-tls daily job. (#7598)

* remove duplicate semicolon (#7438)

* fix new rdb test failing on timing issues (#7604)

apparently on github actions sometimes 500ms is not enough

* Add a ZMSCORE command returning an array of scores. (#7593)

Syntax: `ZMSCORE KEY MEMBER [MEMBER ...]`

This is an extension of #2359
amended by Tyson Andre to work with the changed unstable API,
add more tests, and consistently return an array.

- It seemed as if it would be more likely to get reviewed
  after updating the implementation.

Currently, multi commands or lua scripting to call zscore multiple times
would almost definitely be less efficient than a native ZMSCORE
for the following reasons:

- Need to fetch the set from the string every time instead of reusing the C
  pointer.
- Using pipelining or multi-commands would result in more bytes sent by
  the client for the repeated `ZMSCORE KEY` sections.
- Need to specially encode the data and decode it from the client
  for lua-based solutions.
- The fastest solution I've seen for large sets (thousands or millions)
  involves lua and a variadic ZADD, then a ZINTERSECT, then a ZRANGE 0 -1,
  then UNLINK of a temporary set (or lua). This is still inefficient.

Co-authored-by: Tyson Andre <tysonandre775@hotmail.com>
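
The reply semantics can be sketched as follows (an illustrative Python simulation, not the server's C implementation; the dict-of-dicts keyspace model is invented for the example):

```python
def zmscore(zsets, key, *members):
    """ZMSCORE KEY MEMBER [MEMBER ...]: return one reply per requested
    member, with nil (None here) for members missing from the sorted set.
    A missing key behaves like an empty sorted set."""
    zset = zsets.get(key, {})  # member -> score
    return [zset.get(m) for m in members]
```

As with HMGET, the result array always has one entry per requested member, in request order.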

* Fix tests/cluster/cluster.tcl about wrong usage of lrange. (#6702)

* add force option to 'create-cluster create' script call (#7612)

* reintroduce REDISCLI_CLUSTER_YES env variable in redis-cli

the variable was introduced only in the 5.0 branch in #5879 bc6c1c40db

* redis-cli --cluster-yes - negate force flag for clarity

this internal flag is there so that some commands do not comply to `--cluster-yes`

* [Redis-benchmark] Support zset type

* Assertion and panic, print crash log without generating SIGSEGV

This makes it possible to add tests that generate assertions, and run
them with valgrind, making sure that there are no memory violations
prior to the assertion.

New config options:
- crash-log-enabled - can be disabled for cleaner core dumps
- crash-memcheck-enabled - useful for faster termination after a crash
- use-exit-on-panic - to be used by the test suite so that valgrind can
  detect leaks and memory corruptions

Other changes:
- Crash log is printed even on systems that don't HAVE_BACKTRACE, i.e. in
  both SIGSEGV and assert / panic
- Assertion and panic won't print registers and code around EIP (which
  was useless), but will do a fast memory test (which may still indicate
  that the assertion was due to memory corruption)

I had to reshuffle code in order to re-use it, so I extracted some code
into functions without actually doing any changes to the code:
- logServerInfo
- logModulesInfo
- doFastMemoryTest (with the exception of it being conditional)
- dumpCodeAroundEIP

changes to the crash report on segfault:
- logRegisters is called right after the stack trace (before info) done
  just in order to have more re-usable code
- stack trace skips the first two items on the stack (the crash log and
  signal handler functions)

* Fix potential race in bugReportStart

this race would only happen when two threads panicked at the same time,
and even then the only consequence is some extra log lines.

race reported in #7391

* Accelerate diskless master connections, and general re-connections (#6271)

Diskless master has some inherent latencies.
1) fork starts with delay from cron rather than immediately
2) replica is put online only after an ACK. but the ACK
   was sent only once a second.
3) but even if it would arrive immediately, it will not
   register in case cron didn't yet detect that the fork is done.

Besides that, when a replica disconnects, it doesn't immediately
attempt to re-connect, it waits for the replication cron (once per second).
in case it was already online, it may be important to try to re-connect
as soon as possible, so that the backlog at the master doesn't vanish.

In case it disconnected during rdb transfer, one can argue that it's
not very important to re-connect immediately, but this is needed for the
"diskless loading short read" test to be able to run 100 iterations in 5
seconds, rather than 3 (waiting for replication cron re-connection)

changes in this commit:
1) sync command starts a fork immediately if no sync_delay is configured
2) replica sends REPLCONF ACK when done reading the rdb (rather than on 1s cron)
3) when a replica unexpectedly disconnects, it immediately tries to
   re-connect rather than waiting 1s
4) when a child exits, if there is another replica waiting, we spawn a new
   one right away, instead of waiting for the 1s replicationCron.
5) added a call to connectWithMaster from replicationSetMaster. which is called
   from the REPLICAOF command but also in 3 places in cluster.c, in all of
   these the connection attempt will now be immediate instead of delayed by 1
   second.

side note:
we can add a call to rdbPipeReadHandler in replconfCommand when getting
a REPLCONF ACK from the replica to solve a race where the replica got
the entire rdb and EOF marker before we detected that the pipe was
closed.
in the test i did see this race happen in about one of some 300 runs,
but i concluded that this race is unlikely in real life (where the
replica is on another host and we're more likely to first detect that the
pipe was closed).
the test runs 100 iterations in 3 seconds, so in some cases it'll take 4
seconds instead (waiting for another REPLCONF ACK).

Removing unneeded startBgsaveForReplication from updateSlavesWaitingForBgsave
Now that CheckChildrenDone is calling the new replicationStartPendingFork
(extracted from serverCron) there's actually no need to call
startBgsaveForReplication from updateSlavesWaitingForBgsave anymore,
since as soon as updateSlavesWaitingForBgsave returns, CheckChildrenDone is
calling replicationStartPendingFork that handles that anyway.
The code in updateSlavesWaitingForBgsave had a bug in which it ignored
repl-diskless-sync-delay, but removing that code shows that this bug was
hiding another bug, which is that the max_idle should have used >= and
not >; this one-second delay has a big impact on my new test.

* Remove hiredis so we can add it as a subtree

* Squashed 'deps/hiredis/' content from commit 39de5267c

git-subtree-dir: deps/hiredis
git-subtree-split: 39de5267c092859b4cab4bdf79081e9634b70e39

* fix migration's broadcast PONG message, after the slot modification (#7590)

* remove superfluous else block (#7620)

The else block would be executed when newlen == 0, in which case memmove won't be called, so there's no need to set start.

* Fix applying zero offset to null pointer when creating moduleFreeContextReusedClient (#7323)

Before this fix we were attempting to select a db before the DBs were created, see: #7323

This issue doesn't seem to have any implications, since the selected DB index is 0,
the db pointer remains NULL, and will later be correctly set before using this dummy
client for the first time.

As we know, we call 'moduleInitModulesSystem()' before 'initServer()'. We allocate
memory for server.db in 'initServer', but in 'moduleInitModulesSystem()' we call
'createClient()', which calls 'selectDb()', before the databases were created. Instead, we should call
'createClient()' for moduleFreeContextReusedClient after 'initServer()'.

* fix memory leak in ACLLoadFromFile error handling (#7623)

* [Redis-benchmark] Remove zrem test, add zpopmin test

* Create PUSH handlers in redis-cli

Add logic to redis-cli to display RESP3 PUSH messages when we detect
STDOUT is a tty, with an optional command-line argument to override
the default behavior.

The new argument:  --show-pushes <yn>

Examples:

$ redis-cli -3 --show-pushes no
$ echo "client tracking on\nget k1\nset k1 v1"| redis-cli -3 --show-pushes y

* Print error info if failed opening config file (#6943)

* Optimize calls to mstime in trackInstantaneousMetric() (#6472)

* add sentinel command help (#7579)

* Reduce the probability of failure when start redis in runtest-cluster #7554 (#7635)

When running runtest-cluster, at first we need to create a cluster using spawn_instance;
a port which is not used is chosen, however sometimes the server fails to start on
that port, possibly due to a race with another process taking it first
(such as redis/redis/runs/896537490), or due to a machine problem.
In order to reduce the probability of failure when starting redis in
runtest-cluster, we attempt to use another port when we find the server did not start up.

Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: yanhui13 <yanhui13@meituan.com>

* see #7544, added RedisModule_HoldString api. (#7577)

Added RedisModule_HoldString that either returns a
shallow copy of the given String (by increasing
the String ref count) or a new deep copy of the String
in case it's not possible to get a shallow copy.

Co-authored-by: Itamar Haber <itamar@redislabs.com>

* fixed a typo in redis.conf (#7636)

* see #7250, fix signature of RedisModule_DeauthenticateAndCloseClient (#7645)

In redismodule.h, RedisModule_DeauthenticateAndCloseClient returns void
`void REDISMODULE_API_FUNC(RedisModule_DeauthenticateAndCloseClient)(RedisModuleCtx *ctx, uint64_t client_id);`
But in module.c, RM_DeauthenticateAndCloseClient returns int
`int RM_DeauthenticateAndCloseClient(RedisModuleCtx *ctx, uint64_t client_id)`

It is safe to change the return value from `void` to `int` from the user's perspective.

* Removes dead code (#7642)

Appears to be handled by server.stream_node_max_bytes in reality.

* Avoid redundant calls to signalKeyAsReady (#7625)

signalKeyAsReady has some overhead (namely dictFind) so we should
only call it when there are clients blocked on the relevant type (BLOCKED_*)

* Prevent dictRehashMilliseconds from rehashing while a safe iterator is present (#5948)

* Fix comment about ACLGetCommandPerm()

* Run daily workflow on main repo only (no forks). (#7646)

* Implement SMISMEMBER key member [member ...] (#7615)

This is a rebased version of #3078 originally by shaharmor
with the following patches by TysonAndre made after rebasing
to work with the updated C API:

1. Add 2 more unit tests
   (wrong argument count error message, integer over 64 bits)
2. Use addReplyArrayLen instead of addReplyMultiBulkLen.
3. Undo changes to src/help.h - for the ZMSCORE PR,
   I heard those should instead be automatically
   generated from the redis-doc repo if it gets updated

Motivations:

- Example use case: Client code to efficiently check if each element of a set
  of 1000 items is a member of a set of 10 million items.
  (Similar to reasons for working on #7593)
- HMGET and ZMSCORE already exist. This may lead to developers deciding
  to implement functionality that's best suited to a regular set with a
  data type of sorted set or hash map instead, for the multi-get support.

Currently, multi commands or lua scripting to call sismember multiple times
would almost definitely be less efficient than a native smismember
for the following reasons:

- Need to fetch the set from the string every time
  instead of reusing the C pointer.
- Using pipelining or multi-commands would result in more bytes sent
  and received by the client for the repeated SISMEMBER KEY sections.
- Need to specially encode the data and decode it from the client
  for lua-based solutions.
- Proposed solutions using Lua or SADD/SDIFF could trigger writes to
  memory, which is undesirable on a redis replica server
  or when commands get replicated to replicas.

Co-Authored-By: Shahar Mor <shahar@peer5.com>
Co-Authored-By: Tyson Andre <tysonandre775@hotmail.com>
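
The reply semantics can be sketched as follows (an illustrative Python simulation, not the server's C implementation; the dict-of-sets keyspace model is invented for the example):

```python
def smismember(sets, key, *members):
    """SMISMEMBER KEY MEMBER [MEMBER ...]: return an array with 1 for each
    requested member present in the set at `key` and 0 otherwise.
    A missing key behaves like an empty set."""
    s = sets.get(key, set())
    return [1 if m in s else 0 for m in members]
```

This mirrors SISMEMBER's 0/1 reply, one entry per requested member, in request order.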

* Start redis after network is online (#7639)

The two lines allow systemd to start redis.service only after the network is online. Only after the network is online can Redis bind to an IP address other than 127.0.0.1 during the initial boot process.

* Correct error message of runtest-cluster and runtest-moduleapi (#7647)

* using proto-max-bulk-len in checkStringLength for SETRANGE and APPEND

* config: proto-max-bulk-len must be 1mb or greater

* CLIENT_MASTER should ignore server.proto_max_bulk_len

* Fix unidentical function declaration in bio.c.  lazyfree.c: lazyfreeFreeSlotsMapFromBioThread (#7228)

* redis-benchmark: fix wrong random key for hset (#4895)

* allow --pattern to be used along with --bigkeys (#3586)

Adds --pattern option to cli's --bigkeys, --hotkeys & --scan modes

* Adds redis-cli and redis-benchmark dependencies for `make test` target

Obsoletes the need to run `make` before `make test`.

* Test:Fix invalid cases in hash.tcl and dump.tcl (#4611)

* Replace usage of wrongtypeerr with helper (#7633)

* Replace usage of wrongtypeerr with helper

* Fixed timer warning (#5953)

* fix misleading typo hasActiveChildProcess doc comment (#7588)

and a misspell in rax.c

* Add oom-score-adj configuration option to control Linux OOM killer. (#1690)

Add Linux kernel OOM killer control option.

This adds the ability to control the Linux OOM killer oom_score_adj
parameter for all Redis processes, depending on the process role (i.e.
master, replica, background child).

A oom-score-adj global boolean flag control this feature. In addition,
specific values can be configured using oom-score-adj-values if
additional tuning is required.
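
A hypothetical redis.conf snippet based on the option names in this message (the specific values shown are illustrative, not the documented defaults):

```
# Enable Linux OOM killer score management (a boolean flag in this commit).
oom-score-adj yes
# Per-role oom_score_adj values, e.g. master, replica, background child.
oom-score-adj-values 0 200 800
```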

* Set the initial seed for random() (#5679)

* wait command optimization (#7333)

Clients that issued WAIT last will most likely have the highest replication offset, so imagine a probably common case where all clients are waiting for the same number of replicas. We prefer the loop to start from the last client (the one waiting for the highest offset), so that the optimization in the function will call replicationCountAcksByOffset for each client only until it finds a good one, and stop calling it for the rest of the clients.
The way the loop was implemented meant that in such a case it was likely to call replicationCountAcksByOffset for all clients.

Note: the change from > to >= is not directly related to the above.

Co-authored-by: 曹正斌 <caozb@jiedaibao.com>
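
The optimization can be sketched as follows (an illustrative Python simulation of the loop's caching idea, not the actual C function; the data layout and names are invented for the example):

```python
def process_clients_waiting_replicas(clients, ack_offsets):
    """clients: list of (required_numreplicas, waited_offset) in WAIT-issue
    order. Iterate from the last client (likely the highest offset): once a
    client is satisfied, cache its (offset, ack count) so earlier clients
    waiting for a lower-or-equal offset and replica count skip the
    expensive ack count (replicationCountAcksByOffset in the C code)."""
    last_offset, last_numreplicas = None, 0
    unblocked = []
    for i in range(len(clients) - 1, -1, -1):
        numreplicas, offset = clients[i]
        if (last_offset is not None
                and last_offset >= offset
                and last_numreplicas >= numreplicas):
            unblocked.append(i)            # satisfied via the cached result
        else:
            acks = sum(1 for o in ack_offsets if o >= offset)
            if acks >= numreplicas:
                last_offset, last_numreplicas = offset, acks
                unblocked.append(i)
    return unblocked
```

Starting from the last client means the cache is primed with the highest offset first, so in the common all-clients-alike case the expensive count runs only once.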

* Annotate module API functions in redismodule.h for use with -fno-common (#6900)

In order to keep the redismodule.h self-contained but still usable with
gcc v10 and later, annotate each API function tentative definition with
the __common__ attribute.  This avoids the 'multiple definition' errors
modules will otherwise see for all API functions at link time.

Further details at gcc.gnu.org/gcc-10/porting_to.html

Turn the existing __attribute__ ((unused)), ((__common__)) and ((print))
annotations into conditional macros for any compilers not accepting this
syntax.  These macros only expand to API annotations under gcc.

Provide a pre- and post- macro for every API function, so that they can
be defined differently by the file that includes redismodule.h.

Removing REDISMODULE_API_FUNC in the interest of keeping the function
declarations readable.

Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>

* Squashed 'deps/hiredis/' changes from d5b4c69b7..00272d669

00272d669 Rename sds calls so they don't conflict in Redis.

git-subtree-dir: deps/hiredis
git-subtree-split: 00272d669b11e96b8311d9bfe167c117f8887dd6

* Use Hiredis' sdscompat.h to map sds* calls to hi_sds*

* TLS: relax verification on CONFIG SET. (#7665)

Avoid re-configuring (and validating) SSL/TLS configuration on `CONFIG
SET` when TLS is not actively enabled for incoming connections, cluster
bus or replication.

This fixes failures when tests run without `--tls` on binaries that were
built with TLS support.

An additional benefit is that it's now possible to perform a multi-step
configuration process while TLS is disabled. The new configuration will
be verified and applied only when TLS is effectively enabled.

* Module API: fix missing RM_CLIENTINFO_FLAG_SSL. (#7666)

The `REDISMODULE_CLIENTINFO_FLAG_SSL` flag was already a part of the `RedisModuleClientInfo` structure but was not implemented.

* Trim trailing spaces in error replies coming from rejectCommand (#7668)

65a3307bc9 added rejectCommand which takes an robj reply and passes it
through addReplyErrorSafe to addReplyErrorLength.
The robj contains a newline at its end, but addReplyErrorSafe converts it
to spaces and passes it to addReplyErrorLength, which adds the protocol
newlines.

The result was that most error replies (like OOM) had extra two trailing
spaces in them.
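A toy sketch of the before/after behavior (the function name is hypothetical, not the actual Redis API):

```python
def add_reply_error_safe(err_with_newline):
    """Frame an error message as a RESP error reply.

    Old path: newlines inside the message were mapped to spaces and then the
    protocol "\r\n" was appended, so a message already ending in "\r\n" grew
    two trailing spaces. The fix is to strip them before framing."""
    msg = err_with_newline.replace("\r", " ").replace("\n", " ").rstrip()
    return "-%s\r\n" % msg
```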

* [module] using predefined REDISMODULE_NO_EXPIRE in RM_GetExpire (#7669)

It was already defined in the API header and the documentation, but not used by the implementation.

* edit auth failed message (#7648)

Edit the auth failed message to include the user-disabled case in the HELLO command.

* OOM Crash log include size of allocation attempt. (#7670)

Since users often post just the crash log in GitHub issues, the log line
printed above it is missing. There is no reason not to include the size
in the panic message itself.

* PERSIST should signalModifiedKey (Like EXPIRE does) (#7671)

* Add comments on 'slave.repldboff' when use diskless replication (#7679)

* Fixed hset error since it's shared with hmset (#7678)

* Update clusterMsgDataPublish to clusterMsgModule (#7682)

Correcting the variable to clusterMsgModule.

* Fix flock cluster config may cause failure to restart after kill -9 (#7674)

After fork, the child process (redis-aof-rewrite) inherits the fd opened
by the parent process (redis). When redis is killed by kill -9 it does not
exit gracefully (prepareForShutdown() is not called), so the
redis-aof-rewrite process may still be alive and the flock will still be
held by it; a restarted redis then fails to acquire the lock and fails to start.

This issue was causing failures in the cluster tests in github actions.

Co-authored-by: Oran Agra <oran@redislabs.com>
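A standalone demo of the underlying mechanism (not Redis code): an flock is tied to the open file description, so a fork-inherited fd keeps the lock alive even after the parent's copy of the fd is gone.

```python
import fcntl
import os
import tempfile
import time

fd, path = tempfile.mkstemp()
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # "redis" takes the config lock

pid = os.fork()
if pid == 0:
    time.sleep(2)       # child stands in for a lingering redis-aof-rewrite
    os._exit(0)

os.close(fd)            # parent's fd gone (as after kill -9); lock survives
                        # because the child's inherited fd shares the same
                        # open file description

fd2 = os.open(path, os.O_RDWR)   # a "restarted redis" tries to lock again
try:
    fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    result = "acquired"          # only once every inherited fd is closed
except BlockingIOError:
    result = "still held"        # what a restarted redis observed before the fix
print(result)

os.waitpid(pid, 0)
os.close(fd2)
os.unlink(path)
```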

* Modules: Invalidate saved_oparray after use (#7688)

We want to avoid any chance of someone using the pointer in it after it has been freed or reallocated.

* RedisModuleEvent_LoadingProgress always at 100% progress (#7685)

It was also using the wrong struct, but luckily RedisModuleFlushInfo and RedisModuleLoadingProgress
are identical.

* use dictSlots for getting total slots number in dict (#7691)

* fix make warnings (#7692)

Co-authored-by: Salvatore Sanfilippo <antirez@gmail.com>
Co-authored-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: Madelyn Olson <matolson@amazon.com>
Co-authored-by: Qu Chen <quchen@amazon.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: zhaozhao.zz <zhaozhao.zz@alibaba-inc.com>
Co-authored-by: Liu Zhen <zhenliu215252@sohu-inc.com>
Co-authored-by: Kevin Fwu <kfwu@litespeedtech.com>
Co-authored-by: xhe <xw897002528@gmail.com>
Co-authored-by: Paul Spooren <mail@aparcar.org>
Co-authored-by: meir@redislabs.com <meir@redislabs.com>
Co-authored-by: root <root@mongo.config1.com>
Co-authored-by: chenhui0212 <chenhui19900212@gmail.com>
Co-authored-by: Tomasz Poradowski <tomasz.poradowski@generiscorp.com>
Co-authored-by: Dave Nielsen <dnielsen@gmail.com>
Co-authored-by: zhaozhao.zz <276441700@qq.com>
Co-authored-by: huangzhw <hzweveryday@126.com>
Co-authored-by: Yossi Gottlieb <yossigo@users.noreply.github.com>
Co-authored-by: James Hilliard <james.hilliard1@gmail.com>
Co-authored-by: WuYunlong <xzsyeb@126.com>
Co-authored-by: Jiayuan Chen <mrpre@163.com>
Co-authored-by: Abhishek Soni <abhisheksoni2720@gmail.com>
Co-authored-by: 马永泽 <1014057907@qq.com>
Co-authored-by: 杨博东 <bodong.ybd@alibaba-inc.com>
Co-authored-by: jimgreen2013 <jimgreen2013@qq.com>
Co-authored-by: Qu Chen <QuChen88@users.noreply.github.com>
Co-authored-by: Developer-Ecosystem-Engineering <65677710+Developer-Ecosystem-Engineering@users.noreply.github.com>
Co-authored-by: dmurnane <danieltm@gmail.com>
Co-authored-by: Daniel Murnane <dmurnane@eitccorp.com>
Co-authored-by: Luke Palmer <luke@lukepalmer.net>
Co-authored-by: yoav-steinberg <yoav@monfort.co.il>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
Co-authored-by: Scott Brenner <scott@scottbrenner.me>
Co-authored-by: Remi Collet <remi@remirepo.net>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Co-authored-by: Sungho Hwang <sgc109@naver.com>
Co-authored-by: Brian P O'Rourke <brian@orourke.io>
Co-authored-by: grishaf <grisha@drivenets.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: namtsui <384455+namtsui@users.noreply.github.com>
Co-authored-by: Nam Nguyen <namn@berkeley.edu>
Co-authored-by: Arun Ranganathan <arun.ranga@hotmail.ca>
Co-authored-by: Kevin McGehee <mcgehee@amazon.com>
Co-authored-by: fayadexinqing <2418967201@qq.com>
Co-authored-by: hujie <hujiecs@qq.com>
Co-authored-by: Tyson Andre <tyson.andre@uwaterloo.ca>
Co-authored-by: Tyson Andre <tysonandre775@hotmail.com>
Co-authored-by: Frank Meier <40453138+meierfra-ergon@users.noreply.github.com>
Co-authored-by: Frank Meier <frank.meier@ergon.ch>
Co-authored-by: ShooterIT <wangyuancode@163.com>
Co-authored-by: michael-grunder <michael.grunder@gmail.com>
Co-authored-by: xuannianz <47107252+xuannianz@users.noreply.github.com>
Co-authored-by: Wang Yuan <wangyuan21@baidu.com>
Co-authored-by: yanhui13 <yanhui13@meituan.com>
Co-authored-by: Hamed Momeni <hamed.ma7@gmail.com>
Co-authored-by: 杨博东 <yangbodong22011@gmail.com>
Co-authored-by: Jim Brunner <brunnerj@amazon.com>
Co-authored-by: Rajat Pawar <rpawar@g2.com>
Co-authored-by: Shahar Mor <shahar@peer5.com>
Co-authored-by: YoongHM <yoonghm@users.noreply.github.com>
Co-authored-by: pingfan108 <xuanyuan108@126.com>
Co-authored-by: Muhammad Zahalqa <m@tryfinally.com>
Co-authored-by: Wagner Francisco Mezaroba <wagnerfrancisco@gmail.com>
Co-authored-by: Mota <motablow@gmail.com>
Co-authored-by: HarveyLiu <harvey_liuhw@163.com>
Co-authored-by: RemRain <chenyaosf@gmail.com>
Co-authored-by: caozb <1162650653@qq.com>
Co-authored-by: 曹正斌 <caozb@jiedaibao.com>
Co-authored-by: Nathan Scott <natoscott@users.noreply.github.com>
Co-authored-by: guybe7 <guy.benoish@redislabs.com>
Co-authored-by: Raghav Muddur <r@nmvk.com>
Co-authored-by: huangzhw <huang_zhw@126.com>