sql: propagate row-level locking modes through execbuilder to row.Fetcher #44429
craig[bot] merged 2 commits into cockroachdb:master
Conversation
I took a quick glance, and I see that we're modifying existing processor protos. I think we might need to bump DistSQLVersion so that there is no problem in a mixed-version cluster scenario.
Force-pushed from 09230ec to fb2a052
rohany
left a comment
Overall PR LGTM besides yahor's comment. I agree, it's unclear how to test this until the lock strength actually does something (it's basically a no-op at the kvfetcher layer).
Reviewed 34 of 34 files at r1.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten)
pkg/sql/sqlbase/locking.go, line 21 at r1 (raw file):
// ToScanLockingStrength converts a tree.LockingStrength to its corresponding
// ScanLockingStrength.
func ToScanLockingStrength(s tree.LockingStrength) ScanLockingStrength {
Why do we need a locking strength defined in sqlbase and tree?
Scans that are performing row-level locking cannot currently be distributed because their locks would not be propagated back to the root transaction coordinator. I hope to lift this restriction in the future by sending back information about locked rows to the Root TxnCoordSender from its Leaves, but for now, we can't allow this.
Force-pushed from fb2a052 to 438b216
nvb
left a comment
TFTRs! I added one more commit to disable distributing TableReaders that intend to perform row-level locking.
I think we might need to bump DistSQLVersion so that there is no problem in a mixed-version cluster scenario.
Done.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @rohany)
pkg/sql/sqlbase/locking.go, line 21 at r1 (raw file):
Previously, rohany (Rohan Yadav) wrote…
Why do we need a locking strength defined in sqlbase and tree?
We need it in tree because the variants correspond to AST nodes. sqlbase has a dependency on tree, so we can't have tree reference this enum without creating a dependency cycle. We need it in sqlbase because we need to send these over the wire in processor specs, so they need to be protobuf enums. I'm open to a better way to do this, but I couldn't find one.
bors r+
44429: sql: propagate row-level locking modes through execbuilder to row.Fetcher r=nvanbenschoten a=nvanbenschoten

Relates to #40205.

This commit is a follow-up to #44015. It propagates the row-level locking modes from the optbuilder all the way down to the row.Fetcher. This requires passing the locking information through the execbuilder and then through DistSQL. The commit leaves off at the point where `roachpb.ScanRequest`s are constructed.

I couldn't find a good way to test this. It's possible I missed something and would love any tips, but I think we may need to wait until the plumbing into the KV API is complete before performing end-to-end tests.

The next step will be to hook the locking modes into the KV API.

Release note: None

44446: roachtest: don't use 20.1 nodelocal on older versions r=dt a=dt

The ability to target the nodelocal directory *of a particular node* was added in 20.1, so when roachtesting earlier versions we still need to use the old google cloud storage bucket.

Release note: none.

44461: kv: small readability improvement to the heartbeat r=andreimatei a=andreimatei

Rearrange protection against a race to make it more obvious. A casual reading suggested to me that this protection was missing.

Release note: None

44463: kv: simplify retriable error handling r=andreimatei a=andreimatei

The contract of handleRetryableErrLocked() was weird for no discernible reason.

Release note: None

Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Co-authored-by: David Taylor <tinystatemachine@gmail.com>
Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
Build succeeded
52522: sql: support FOR {UPDATE,SHARE} NOWAIT r=nvanbenschoten a=nvanbenschoten
Fixes half of #40476.
In #52388, we added support for a new `Error` wait policy to be attached
to BatchRequests. This wait policy dictates that if a request encounters
a conflicting lock held by another active transaction, it should raise
an error instead of blocking.
These semantics closely match those of FOR {UPDATE,SHARE} NOWAIT in
PostgreSQL. With NOWAIT, the statement reports an error, rather than
waiting, if a selected row cannot be locked immediately. This means that
we are able to use the new kv-level wait policy to implement NOWAIT in
SQL. Doing so is fairly mechanical, as a lot of the plumbing necessary
was already put into place in #44429.
The only difference I can see in the semantics is that in PostgreSQL,
rows that are being INSERTed by one transaction are not even considered
for locking by other transactions, so NOWAIT will not throw an error
when encountering these rows. Instead, it will silently ignore them.
This has less to do with NOWAIT itself, and more to do with a difference
in the concurrency control implementation between Cockroach and
PostgreSQL. Since this isn't specific to NOWAIT, I don't think it's a
blocker for this change.
Release note (sql change): SELECT ... FOR {UPDATE,SHARE} NOWAIT is now
supported. The option can be used to throw an error instead of blocking
on contended row-level lock acquisition.
52628: importccl: support IMPORT INTO for DELIMITED and PGCOPY r=pbardea a=Anzoteh96
Previously, IMPORT INTO was supported only for CSV and AVRO. This PR extends IMPORT INTO support to DELIMITED and PGCOPY, in the hope that this will be useful when supporting default columns for imports from these formats.
This involves populating the targeted columns from the import processor when creating the new input reader and row converter for DELIMITED and PGCOPY, whenever target columns are specified in `IMPORT INTO`.
Fixes #52405
Release note (general): IMPORT INTO is now supported for DELIMITED and PGCOPY file formats.
52769: sql: avoid flake in scan_test r=asubiotto a=jordanlewis
The new sqlliveness package doesn't get along well with the kv batch
size "atomic", so turn down its check frequency in the test. Possibly we
should make the "atomic" actually atomic, but I'm not sure about the
runtime performance change.
Fixes #52683
Release note: None
Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Co-authored-by: anzoteh96 <anzot@cockroachlabs.com>
Co-authored-by: Jordan Lewis <jordanthelewis@gmail.com>
Relates to #40205.
This commit is a follow-up to #44015. It propagates the row-level locking modes
from the optbuilder all the way down to the row.Fetcher. This requires passing
the locking information through the execbuilder and then through DistSQL. The
commit leaves off at the point where `roachpb.ScanRequest`s are constructed.
I couldn't find a good way to test this. It's possible I missed something and
would love any tips, but I think we may need to wait until the plumbing into
the KV API is complete before performing end-to-end tests.
The next step will be to hook the locking modes into the KV API.
Release note: None