storage: make cmdQ and tsCache concurrent #4768
Description
I saw nvanbenschoten make some improvements to the cmdQ and tsCache. That is good progress, but I have some other opinions.

When we use the cmdQ or tsCache, we hold one big lock around the whole operation:
```go
r.mu.Lock()
var wg sync.WaitGroup
var spans []roachpb.Span
readOnly := ba.IsReadOnly()
for _, union := range ba.Requests {
	h := union.GetInner().Header()
	spans = append(spans, roachpb.Span{Key: h.Key, EndKey: h.EndKey})
}
r.mu.cmdQ.GetWait(readOnly, &wg, spans...)
cmdKeys = append(cmdKeys, r.mu.cmdQ.Add(readOnly, spans...)...)
r.mu.Unlock()
```
This blocks the system heavily under concurrent requests, especially when each request (batch) contains many commands: more work is done inside the locked region, so other goroutines wait longer.

In fact, when conflicts are rare, nvanbenschoten's solution enlarges the locked region (because the analysis takes more time) and decreases system TPS.

My opinion is: because each item in the tsCache is removed after 10 seconds, and each item in the cmdQ is removed after a few milliseconds, it is not worth doing much work per item; their lifetimes are too short.
I see that both structures currently use a red-black tree as their base, and red-black trees are hard to make concurrent. I am researching replacing the red-black tree with a skiplist and making both structures concurrent.
Here are preliminary benchmark results. In the test, a client queries the value of a random key, with 32 queries (reads) per batch. The client runs with n threads. As I increase the number of client threads, TPS does not increase with the current solution, but increases clearly with my new solution.
Do you agree with this direction for improvement? I will try to complete it ASAP.