teamcity: failed test: TestMergeQueue #39370

@cockroach-teamcity

Description

The following tests appear to have failed on master (testrace): TestMergeQueue/sticky-bit, TestMergeQueue/non-collocated, TestMergeQueue/lhs-undersize, TestMergeQueue/both-empty, TestMergeQueue/combined-threshold, TestMergeQueue, TestMergeQueue/sanity

You may want to check for open issues.
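For local triage, running just the failing test tree under the race detector is usually the quickest first check. The invocation below is a sketch assuming the repo's standard `make` targets of the time (`stressrace`, `PKG`, `TESTS`); flag names may differ in your checkout, so verify against the Makefile before running.

```shell
# Hypothetical repro sketch: run TestMergeQueue and its subtests under the
# race detector. TESTS is a regexp matched against test names.
repro="make stressrace PKG=./pkg/storage TESTS='TestMergeQueue' TESTTIMEOUT=5m"
# Printed rather than executed here, since it requires a cockroach checkout.
echo "$repro"
```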

#1425638:

TestMergeQueue/lhs-undersize
--- FAIL: testrace/TestMergeQueue/lhs-undersize (0.000s)
Test ended in panic.

------- Stdout: -------
W190806 18:50:28.359360 74165 storage/store.go:3618  [s1,r1/1:{/Min-a}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
I190806 18:50:28.364547 73937 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:50:28.365134 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:50:28.365963 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:50:28.367265 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
W190806 18:50:30.405436 74187 storage/store.go:3618  [s1,r2/1:{a-c}] handle raft ready: 11.2s [applied=1, batches=1, state_assertions=0]
I190806 18:50:31.593318 74203 storage/compactor/compactor.go:325  [s1,compactor] purging suggested compaction for range "a" - "b" that contains live data
I190806 18:50:31.595257 74203 storage/compactor/compactor.go:370  [s1,compactor] processing compaction #1/1 ("b"-"c") for 16 MiB (reasons: size=false used=true avail=false)
I190806 18:50:31.596365 74203 storage/compactor/compactor.go:386  [s1,compactor] processed compaction #1/1 ("b"-"c") for 16 MiB in 0.0s
W190806 18:50:42.775278 74173 storage/store.go:3618  [s1,r2/1:{a-c}] handle raft ready: 9.6s [applied=1, batches=1, state_assertions=0]
I190806 18:50:42.776831 109805 storage/replica_command.go:283  [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r7] (manual)
I190806 18:50:43.459206 109805 storage/replica_command.go:597  [merge,s1,r2/1:{a-b}] initiating a merge of r7:{b-c} [(n1,s1):1, next=2, gen=9] into this range (lhs+rhs has (size=16 MiB+16 MiB qps=0.00+0.00 --> 0.00qps) below threshold (size=32 MiB, qps=0.00))
I190806 18:50:43.536933 74142 storage/store.go:2530  [merge,s1,r2/1:{a-b},txn=36c51167] removing replica r7/1



TestMergeQueue/combined-threshold
--- FAIL: testrace/TestMergeQueue/combined-threshold (0.000s)
Test ended in panic.

------- Stdout: -------
I190806 18:50:55.352809 73937 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:50:55.353401 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
W190806 18:50:55.795208 74165 storage/store.go:3618  [s1,r2/1:{a-c}] handle raft ready: 10.1s [applied=1, batches=1, state_assertions=0]
I190806 18:50:58.537349 74203 storage/compactor/compactor.go:325  [s1,compactor] purging suggested compaction for range "a" - "b" that contains live data
I190806 18:50:58.538327 74203 storage/compactor/compactor.go:370  [s1,compactor] processing compaction #1/1 ("b"-"c") for 32 MiB (reasons: size=false used=true avail=false)
I190806 18:50:58.539037 74203 storage/compactor/compactor.go:386  [s1,compactor] processed compaction #1/1 ("b"-"c") for 32 MiB in 0.0s
I190806 18:51:08.105045 73937 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:51:08.106097 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
W190806 18:51:09.961565 74146 storage/store.go:3618  [s1,r2/1:{a-c}] handle raft ready: 11.7s [applied=1, batches=1, state_assertions=0]
I190806 18:51:09.996107 117855 storage/replica_command.go:283  [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r8] (manual)
I190806 18:51:11.719828 126369 storage/replica_command.go:597  [merge,s1,r2/1:{a-b}] initiating a merge of r8:{b-c} [(n1,s1):1, next=2, gen=11] into this range (lhs+rhs has (size=16 MiB+16 MiB qps=5.57+0.00 --> 5.57qps) below threshold (size=32 MiB, qps=5.57))
I190806 18:51:12.087178 74159 storage/store.go:2530  [merge,s1,r2/1:{a-b},txn=082a986e] removing replica r8/1



TestMergeQueue/sticky-bit
....Wait, 3 minutes]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:307
sync.runtime_notifyListWait(0xc0051eed90, 0x3a)
	/usr/local/go/src/runtime/sema.go:510 +0xf9
sync.(*Cond).Wait(0xc0051eed80)
	/usr/local/go/src/sync/cond.go:56 +0x8e
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc00085d050, 0x56b7ec0, 0xc002c49b60)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:192 +0x9c
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x56b7ec0, 0xc002c49b60)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:161 +0x56
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc001ac56c0, 0xc002d8b540, 0xc001ac56b0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:189 +0xc4

goroutine 73966 [chan receive, 3 minutes]:
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func1(0x56b7ec0, 0xc0025c40f0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:151 +0x64
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc001a439b0, 0xc0010b3e00, 0xc00029d100)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:189 +0xc4

goroutine 74229 [select]:
github.com/cockroachdb/cockroach/pkg/gossip.newInfoStore.func1(0x56b7ec0, 0xc001d9dd70)
	/go/src/github.com/cockroachdb/cockroach/pkg/gossip/infostore.go:190 +0x225
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc001ac4840, 0xc0010b3cc0, 0xc001ac4820)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:189 +0xc4

goroutine 74199 [select, 3 minutes]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).startLeaseRenewer.func1(0x56b7ec0, 0xc0025c4ed0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:1601 +0x3ed
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc0001e2a50, 0xc0010b3e00, 0xc0001e2a30)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:189 +0xc4

goroutine 74193 [select]:
github.com/cockroachdb/cockroach/pkg/storage.(*Store).coalescedHeartbeatsLoop(0xc00086c700, 0x56b7ec0, 0xc001a99170)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:3757 +0x1ad
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc0001e2780, 0xc0010b3e00, 0xc0001e2770)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x160
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:189 +0xc4


****************************************************************************

This node experienced a fatal error (printed above), and as a result the
process is terminating.

Fatal errors can occur due to faulty hardware (disks, memory, clocks) or a
problem in CockroachDB. With your help, the support team at Cockroach Labs
will try to determine the root cause, recommend next steps, and we can
improve CockroachDB based on your report.

Please submit a crash report by following the instructions here:

    https://github.com/cockroachdb/cockroach/issues/new/choose

If you would rather not post publicly, please contact us directly at:

    support@cockroachlabs.com

The Cockroach Labs team appreciates your feedback.



TestMergeQueue
--- FAIL: testrace/TestMergeQueue (0.000s)
Test ended in panic.

------- Stdout: -------
I190806 18:48:12.159610 74116 gossip/gossip.go:394  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:39663" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0 cluster_name:"" 
W190806 18:48:12.270914 74116 gossip/gossip.go:1498  [n2] no incoming or outgoing connections
I190806 18:48:12.275041 74210 gossip/client.go:124  [n2] started gossip client to 127.0.0.1:39663
I190806 18:48:12.275868 74116 gossip/gossip.go:394  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:37045" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0 cluster_name:"" 
I190806 18:48:12.317776 74116 storage/client_test.go:491  gossip network initialized
I190806 18:48:12.323743 74116 storage/replica_command.go:283  [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2] (manual)
I190806 18:48:12.405640 74116 storage/replica_command.go:283  [s1,r2/1:{a-/Max}] initiating a split of this range at key "b" [r3] (manual)
I190806 18:48:12.471268 74116 storage/replica_command.go:283  [s1,r3/1:{b-/Max}] initiating a split of this range at key "c" [r4] (manual)
I190806 18:48:14.366389 73874 gossip/gossip.go:1512  [n2] node has connected to cluster via gossip



TestMergeQueue/non-collocated
...2/1:{a-c}] initiating a split of this range at key "b" [r9] (manual)
I190806 18:51:38.148019 126448 storage/store_snapshot.go:775  [s1,r9/1:{b-c}] sending PREEMPTIVE snapshot c16e1c0f at applied index 10
I190806 18:51:42.138474 126448 storage/store_snapshot.go:818  [s1,r9/1:{b-c}] streamed snapshot to (n2,s2):?: kv pairs: 16, log entries: 0, rate-limit: 8.0 MiB/sec, 4.00s
I190806 18:51:44.589984 134481 storage/replica_raftstorage.go:823  [s2,r9/?:{-}] applying PREEMPTIVE snapshot at index 10 (id=c16e1c0f, encoded size=16777935, 1 rocksdb batches, 0 log entries)
I190806 18:51:45.283083 134481 storage/replica_raftstorage.go:829  [s2,r9/?:{b-c}] applied PREEMPTIVE snapshot in 693ms [clear=0ms batch=491ms entries=0ms commit=200ms]
I190806 18:51:45.297739 126448 storage/replica_command.go:1188  [s1,r9/1:{b-c}] change replicas (ADD_REPLICA (n2,s2):2): existing descriptor r9:{b-c} [(n1,s1):1, next=2, gen=13]
I190806 18:51:45.313934 126448 storage/replica_raft.go:289  [s1,r9/1:{b-c},txn=43eaee32] proposing ADD_REPLICA((n2,s2):2): updated=(n1,s1):1,(n2,s2):2 next=3
I190806 18:51:45.772325 126448 storage/replica_command.go:1188  [s2,r9/2:{b-c}] change replicas (REMOVE_REPLICA (n1,s1):1): existing descriptor r9:{b-c} [(n1,s1):1, (n2,s2):2, next=3, gen=14]
I190806 18:51:45.795861 126448 storage/replica_raft.go:289  [s2,r9/2:{b-c},txn=8d99f24c] proposing REMOVE_REPLICA((n1,s1):1): updated=(n2,s2):2 next=3
I190806 18:51:45.819199 136507 storage/store.go:2530  [replicaGC,s1,r9/1:{b-c}] removing replica r9/1
I190806 18:51:45.821872 136507 storage/replica_destroy.go:146  [replicaGC,s1,r9/1:{b-c}] removed 6 (1+5) keys in 2ms [clear=0ms commit=1ms]
I190806 18:51:45.835497 136416 storage/store_snapshot.go:775  [merge,s2,r9/2:{b-c}] sending PREEMPTIVE snapshot f3bc429e at applied index 22
I190806 18:51:45.840380 136477 storage/replica_raftstorage.go:823  [s1,r9/?:{-}] applying PREEMPTIVE snapshot at index 22 (id=f3bc429e, encoded size=1023, 1 rocksdb batches, 0 log entries)
I190806 18:51:45.840752 136416 storage/store_snapshot.go:818  [merge,s2,r9/2:{b-c}] streamed snapshot to (n1,s1):?: kv pairs: 19, log entries: 0, rate-limit: 8.0 MiB/sec, 0.00s
I190806 18:51:45.842701 136477 storage/replica_raftstorage.go:829  [s1,r9/?:{b-c}] applied PREEMPTIVE snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I190806 18:51:45.848958 136416 storage/replica_command.go:1188  [merge,s2,r9/2:{b-c}] change replicas (ADD_REPLICA (n1,s1):3): existing descriptor r9:{b-c} [(n2,s2):2, next=3, gen=15]
I190806 18:51:45.867396 136416 storage/replica_raft.go:289  [merge,s2,r9/2:{b-c},txn=2af3e620] proposing ADD_REPLICA((n1,s1):3): updated=(n2,s2):2,(n1,s1):3 next=4
I190806 18:51:45.899996 136443 storage/queue.go:1127  [replicate] purgatory is now empty
I190806 18:51:45.909560 136416 storage/replica_command.go:1188  [merge,s1,r9/3:{b-c}] change replicas (REMOVE_REPLICA (n2,s2):2): existing descriptor r9:{b-c} [(n2,s2):2, (n1,s1):3, next=4, gen=16]
I190806 18:51:45.941413 136416 storage/replica_raft.go:289  [merge,s1,r9/3:{b-c},txn=7ab5ee3a] proposing REMOVE_REPLICA((n2,s2):2): updated=(n1,s1):3 next=4
I190806 18:51:45.965656 136544 storage/store.go:2530  [replicaGC,s2,r9/2:{b-c}] removing replica r9/2
I190806 18:51:45.972046 136544 storage/replica_destroy.go:146  [replicaGC,s2,r9/2:{b-c}] removed 7 (0+7) keys in 1ms [clear=1ms commit=1ms]
I190806 18:51:45.982831 136416 storage/replica_command.go:597  [merge,s1,r2/1:{a-b}] initiating a merge of r9:{b-c} [(n1,s1):3, next=4, gen=17] into this range (lhs+rhs has (size=0 B+0 B qps=0.00+0.00 --> 0.00qps) below threshold (size=0 B, qps=0.00))
I190806 18:51:46.138246 136416 storage/replica_command.go:597  [merge,s1,r2/1:{a-b}] initiating a merge of r9:{b-c} [(n1,s1):3, next=4, gen=17] into this range (lhs+rhs has (size=0 B+0 B qps=0.00+0.00 --> 0.00qps) below threshold (size=0 B, qps=0.00))
I190806 18:51:46.260577 74143 storage/store.go:2530  [merge,s1,r2/1:{a-b},txn=fc4d3f2e] removing replica r9/3



TestMergeQueue/sanity
...90806 18:49:25.976409 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms:
  - context deadline exceeded
W190806 18:49:26.358437 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms:
  - context deadline exceeded
W190806 18:49:26.559232 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms:
  - context deadline exceeded
W190806 18:49:27.176975 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms:
  - context deadline exceeded
I190806 18:49:27.357032 73937 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:27.357586 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
W190806 18:49:27.830660 74153 storage/store.go:3618  [s1,r2/1:{a-b}] handle raft ready: 0.6s [applied=1, batches=1, state_assertions=0]
W190806 18:49:27.892317 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms:
  - context deadline exceeded
W190806 18:49:27.953521 74154 storage/store.go:3618  [s1,r1/1:{/Min-a}] handle raft ready: 0.5s [applied=1, batches=1, state_assertions=0]
I190806 18:49:27.954591 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:27.955181 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:28.309812 74203 storage/compactor/compactor.go:386  [s1,compactor] processed compaction #1-2/2 ("a"-"c") for 32 MiB in 3.6s
W190806 18:49:28.347224 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms:
  - context deadline exceeded
W190806 18:49:28.483491 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms:
  - context deadline exceeded
I190806 18:49:29.092577 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:29.093051 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:29.579721 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:29.580190 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:30.077323 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:30.077884 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:34.726753 73937 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:34.727494 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
W190806 18:49:41.835307 74170 storage/store.go:3618  [s1,r5/1:{b-c}] handle raft ready: 12.7s [applied=1, batches=1, state_assertions=0]



TestMergeQueue/both-empty
...veness heartbeat" timed out after 450ms
I190806 18:49:46.761339 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:46.762145 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:57.757224 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:57.757821 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:57.768423 73937 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:57.770015 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:57.848417 74203 storage/compactor/compactor.go:370  [s1,compactor] processing compaction #1-2/2 ("a"-"c") for 32 MiB (reasons: size=false used=true avail=false)
I190806 18:49:58.740532 73937 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:58.741029 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:58.752882 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:58.753439 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
I190806 18:49:59.359570 73937 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:49:59.360224 73937 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
W190806 18:49:59.837820 74180 storage/store.go:3618  [s1,r2/1:{a-b}] handle raft ready: 13.7s [applied=1, batches=1, state_assertions=0]
I190806 18:49:59.929737 74203 storage/compactor/compactor.go:386  [s1,compactor] processed compaction #1-2/2 ("a"-"c") for 32 MiB in 2.1s
I190806 18:50:00.050683 73968 storage/store.go:2530  [merge,s1,r2/1:{a-b},txn=6addd336] removing replica r5/1
W190806 18:50:00.063864 74132 storage/store.go:3618  [s1,r1/1:{/Min-a}] handle raft ready: 0.6s [applied=2, batches=1, state_assertions=0]
I190806 18:50:00.070079 73930 storage/node_liveness.go:836  [liveness-hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
W190806 18:50:00.070687 73930 storage/node_liveness.go:484  [liveness-hb] failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 450ms
W190806 18:50:01.284288 73874 gossip/gossip.go:1501  [n2] first range unavailable; resolvers exhausted
I190806 18:50:03.491288 73874 gossip/gossip.go:1512  [n2] node has connected to cluster via gossip
W190806 18:50:15.039995 74160 storage/store.go:3618  [s1,r2/1:{a-c}] handle raft ready: 12.9s [applied=1, batches=1, state_assertions=0]
I190806 18:50:15.048434 99953 storage/replica_command.go:283  [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r6] (manual)
I190806 18:50:16.615210 99953 storage/replica_command.go:597  [merge,s1,r2/1:{a-b}] initiating a merge of r6:{b-c} [(n1,s1):1, next=2, gen=7] into this range (lhs+rhs has (size=0 B+0 B qps=3.96+0.00 --> 3.96qps) below threshold (size=0 B, qps=3.96))
I190806 18:50:16.785072 74151 storage/store.go:2530  [merge,s1,r2/1:{a-b},txn=590d108d] removing replica r6/1




Please assign, take a look and update the issue accordingly.

Metadata

Assignees: no one assigned

Labels: C-test-failure (broken test, automatically or manually discovered), O-robot (originated from a bot)