Conversation

@romange commented May 8, 2020

During heavy Pub/Sub load, the cluster bus becomes overloaded. As a result, PONG packets get delayed, which in turn causes spurious PFAIL states, then FAIL, and finally the CLUSTER-DOWN state.
This PR mitigates the issue by treating received PINGs as proof of liveness in addition to PONG replies, resetting the ping_sent timer.
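
A minimal sketch of the idea as originally proposed (hypothetical placement inside the packet handler, assuming its usual type and sender variables; the merged fix ended up taking a different shape, see below): when a PING arrives from a node we already know, reset that node's ping_sent timer, so the failure detector stops accumulating delay on a link that is demonstrably alive.

    /* Sketch of the proposal: an incoming PING from a known sender is
     * itself proof of liveness, so cancel the pending ping timer that
     * failure detection uses to measure the response delay. */
    if (type == CLUSTERMSG_TYPE_PING && sender && sender->ping_sent != 0)
        sender->ping_sent = 0;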

@antirez (Contributor) commented May 8, 2020

@romange that's a great idea, but what about also considering Pub/Sub packets as proof of liveness? Or, if we go even more extreme, claim that any incoming traffic is good?

@antirez antirez added this to the Redis 6.0.x milestone May 8, 2020
@antirez antirez added the cluster label May 8, 2020
@romange (Author) commented May 8, 2020

@antirez It seems that PONG does not bring any additional data besides the liveness proof, so yes, that could definitely work :) Is there a convenient entry point for all incoming cluster bus traffic that you could point me to? I will make the necessary changes.

@antirez (Contributor) commented May 8, 2020

Yes. Basically the important thing is that Redis ping/pongs are not timestamped: we could receive a very old PONG and claim it's a new one. So any data will do, and we can consider ping/pongs a fix for a silent channel rather than a semantic way to measure the latency of the communication link and interrupt communication once the latency exceeds a given maximum. By the way, why do we do that (why are ping/pongs not timestamped)? Because given the mixed data/metadata nature of the communication channel, timestamping would already be problematic and would cause a lot of failures... so it looks like it makes sense. But let me check the code better and reply.

antirez added a commit that referenced this pull request May 8, 2020
We want to send pings and pongs at specific intervals, since our packets
also contain information about the configuration of the cluster and are
used for gossip. However, since our cluster bus is used in a mixed way
for data (such as Pub/Sub or modules cluster messages) and metadata,
sometimes a very busy channel may delay the reception of pong packets.
So after discussing it in #7216, this commit introduces a new field that
is not exposed in the cluster and is only internal information about the
last time we received any data from a given node: we use this field to
avoid detecting failures, since the reception of new data from the node
is a proof of liveness.
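
In terms of data structures, the change boils down to one extra timestamp per node; a sketch of the relevant clusterNode fields (abbreviated, all other members elided):

    typedef struct clusterNode {
        /* ... */
        mstime_t ping_sent;      /* Unix time we sent the latest ping. */
        mstime_t pong_received;  /* Unix time we received the pong. */
        mstime_t data_received;  /* Unix time we received any data. */
        /* ... */
    } clusterNode;
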
@antirez (Contributor) commented May 8, 2020

Hi again @romange, actually we can't just update our pong time: it is important that we have a mechanism to send timely PINGs to the other instances, as well as to receive PONGs, because both are also used for configuration updates! So we would interfere if we updated the pong_received field when we see data. However, it was possible to find an alternative solution by adding a new explicit field: data_received. We don't use this field to decide if we need to send a PING to the target instance, but we do use it for failure detection. Could you please check the cluster-data-as-pong branch? The relevant commit is f99acba. Are you able to test it in your environment? Thanks! P.S. Cluster tests pass after the change. P.P.S. If you want to review the code, that would be appreciated.

@romange (Author) commented May 8, 2020 via email

@antirez (Contributor) commented May 8, 2020

@romange nope, not just for that: otherwise I would be updating the pong_received field even when I didn't actually receive any PONG, and that would delay the trigger that sends the next ping. Technically, if there is data passing, we could delay sending pings forever. That could be OK if ping/pongs in Redis Cluster were just liveness checks, but they also carry information about the configuration and are useful to refresh the cluster state. So we want to continue sending PINGs even when we have other proofs of liveness.

@romange (Author) commented May 8, 2020

OK, I understand (the code in question is below). I totally did not think about what happens when we have too many gossip messages :) and missed this case. What you describe can definitely happen. I will review and test your fix with Pub/Sub traffic and let you know.

Thanks again!

        /* If we have no pending ping and more than half the node timeout
         * has elapsed since the last pong, send a new ping now, so the
         * gossip/configuration exchange keeps flowing. */
        if (node->link &&
            node->ping_sent == 0 &&
            (now - node->pong_received) > server.cluster_node_timeout/2)
        {
            clusterSendPing(node->link, CLUSTERMSG_TYPE_PING);
            continue;
        }

@romange (Author) commented May 8, 2020

Looks good from a correctness point of view. I would move the block below up, before you test against server.cluster_node_timeout/2. That is, instead of checking for

now - node->ping_sent > server.cluster_node_timeout/2 &&
 /* and in such interval we are not seeing any traffic at all. */
now - node->data_received > server.cluster_node_timeout/2

you could test for delay > server.cluster_node_timeout/2; it would be a bit more consistent.

        mstime_t delay = now - node->ping_sent;

        /* We consider every incoming data as proof of liveness, since
         * our cluster bus link is also used for data: under heavy data
         * load pong delays are possible. */
        mstime_t data_delay = now - node->data_received;
        if (data_delay < delay) delay = data_delay;
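
For context, this delay is what the failure detector compares against the node timeout a few lines further down in clusterCron(); a rough sketch of that check (abbreviated, other bookkeeping elided):

        if (delay > server.cluster_node_timeout) {
            /* Timeout reached: flag the node as possibly failing
             * (PFAIL), unless it is already in PFAIL or FAIL state. */
            if (!(node->flags & (CLUSTER_NODE_PFAIL|CLUSTER_NODE_FAIL))) {
                node->flags |= CLUSTER_NODE_PFAIL;
                update_state = 1;
            }
        }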

@romange (Author) commented May 8, 2020

@antirez Unfortunately, the fix as it is does not help. However, if you set data_received on the sender, i.e. on line 1748, instead of if (link->node) link->node->data_received = now;

do something like this

sender = clusterLookupNode(hdr->sender);
if (sender) {
    sender->data_received = now;
}

then it works. What do you think?

@antirez (Contributor) commented May 8, 2020

@romange I was not sure the link on the receiving side is always populated with the node reference. I'll look into what is happening there, thanks!

@madolson (Contributor) commented May 8, 2020

It doesn't look like link->node is ever set on the receiving side. It's set to NULL when the connection is accepted, then never updated. Looking through some clusters with debug mode, the log line "2455:S 08 May 2020 15:55:42.809 . Ping packet received: (nil)" shows up everywhere.

@antirez (Contributor) commented May 8, 2020

@romange yes, you are perfectly right. Basically outgoing connections have link->node set, but for incoming connections the code is apparently very conservative and doesn't want to trust the network stack at any level (I was probably exaggerating a bit...), so it forces the lookup for every packet we receive. Today it no longer looks as smart as it did when I wrote that code :-D But for now I'll just fix the fix, and later consider bigger changes...
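
For reference, the fix amounts to resolving the sender once per packet and recording data reception on that node; a minimal sketch of the idea inside clusterProcessPacket(), assuming its usual hdr and now variables:

    /* link->node is only set on outgoing links, so resolve the sender
     * from the packet header and record data reception on that node. */
    sender = clusterLookupNode(hdr->sender);
    if (sender) sender->data_received = now;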

antirez added a commit that referenced this pull request May 8, 2020
Cluster: introduce data_received field.
@antirez (Contributor) commented May 8, 2020

@romange do I understand correctly that you were pointing at a refactoring like this? 5c20f24

@antirez (Contributor) commented May 8, 2020

Thank you @madolson, indeed!

@romange (Author) commented May 8, 2020 via email

@antirez (Contributor) commented May 8, 2020

@romange you are right, but we tend to write the most obvious code we can! :-D

@romange (Author) commented May 8, 2020 via email

@romange romange closed this May 8, 2020
@antirez (Contributor) commented May 9, 2020

@romange Merged into unstable :-) Backporting to Redis 5 and 6.

antirez added a commit that referenced this pull request May 9, 2020
Cluster: introduce data_received field.
antirez added a commit that referenced this pull request May 9, 2020
Cluster: introduce data_received field.
@antirez (Contributor) commented May 9, 2020

Reopening as a reminder to backport to 5.0 on Monday, since it does not apply cleanly.

@antirez antirez reopened this May 9, 2020
JackieXie168 pushed a commit to JackieXie168/redis that referenced this pull request May 15, 2020
Cluster: introduce data_received field.
mayongze added a commit to mayongze/redis that referenced this pull request Jul 8, 2020
* Cluster: introduce data_received field.

* Cluster: refactor ping/data delay handling.

* Cluster: clarify we always resolve the sender.
@madolson madolson self-assigned this Jul 27, 2020
@madolson madolson modified the milestones: Redis 6.0.x, 5.0 backport Jul 27, 2020
@madolson (Contributor):

Looks like this got lost, but it was intended to be backported to 5.0, so I added it to the 5.0 sprint.

@oranagra oranagra removed this from the 5.0 backport milestone Oct 20, 2020
oranagra pushed a commit that referenced this pull request Oct 27, 2020
Cluster: introduce data_received field.

(cherry picked from commit 960186a)
@oranagra (Member):

Backported to 5.0.10.

@oranagra oranagra closed this Oct 27, 2020
@uvletter (Contributor):

I'm puzzled: the Pub/Sub load is carried on inbound_link, while PONGs arrive on link (i.e. the outbound link), which means they are separate channels. Why, then, would heavy Pub/Sub load delay the PONG packets? @madolson Can you take a look at this? Thanks!

pulllock pushed a commit to pulllock/redis that referenced this pull request Jun 28, 2023
Cluster: introduce data_received field.

(cherry picked from commit 960186a)