sql: propagate row-level locking modes through execbuilder to row.Fetcher #44429

Merged
craig[bot] merged 2 commits into cockroachdb:master from nvb:nvanbenschoten/lockScanNode on Jan 28, 2020

Conversation

@nvb
Contributor

@nvb nvb commented Jan 28, 2020

Relates to #40205.

This commit is a follow-up to #44015. It propagates the row-level locking modes
from the optbuilder all the way down to the row.Fetcher. This requires passing
the locking information through the execbuilder and then through DistSQL. The
commit leaves off at the point where roachpb.ScanRequests are constructed.

I couldn't find a good way to test this. It's possible I missed something and
would love any tips, but I think we may need to wait until the plumbing into
the KV API is complete before performing end-to-end tests.

The next step will be to hook the locking modes into the KV API.

Release note: None

@nvb nvb requested a review from rohany January 28, 2020 00:46
@nvb nvb requested a review from a team as a code owner January 28, 2020 00:46
@cockroach-teamcity
Member

This change is Reviewable

@yuzefovich
Member

I took a quick glance, and I see that we're modifying existing processor protos. I think we might need to bump DistSQLVersion so that there is no problem in a mixed-version cluster scenario.

@nvb nvb force-pushed the nvanbenschoten/lockScanNode branch from 09230ec to fb2a052 Compare January 28, 2020 04:23
Contributor

@rohany rohany left a comment


Overall PR LGTM besides Yahor's comment. I agree, it's unclear how to test this until the lock strength actually does something (it's basically a no-op at the kvfetcher layer).

Reviewed 34 of 34 files at r1.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten)


pkg/sql/sqlbase/locking.go, line 21 at r1 (raw file):

// ToScanLockingStrength converts a tree.LockingStrength to its corresponding
// ScanLockingStrength.
func ToScanLockingStrength(s tree.LockingStrength) ScanLockingStrength {

Why do we need a locking strength defined in sqlbase and tree?

nvb added 2 commits January 28, 2020 15:21
…cher

Relates to cockroachdb#40205.

This commit is a follow-up to cockroachdb#44015. It propagates the row-level locking modes
from the optbuilder all the way down to the row.Fetcher. This requires passing
the locking information through the execbuilder and then through DistSQL. The
commit leaves off at the point where `roachpb.ScanRequest`s are constructed.

I couldn't find a good way to test this. It's possible I missed something and
would love any tips, but I think we may need to wait until the plumbing into
the KV API is complete before performing end-to-end tests.

The next step will be to hook the locking modes into the KV API.

Release note: None
Scans that are performing row-level locking cannot currently be
distributed because their locks would not be propagated back to
the root transaction coordinator.

I hope to lift this restriction in the future by sending back
information about locked rows to the Root TxnCoordSender from its
Leaves, but for now, we can't allow this.
@nvb nvb force-pushed the nvanbenschoten/lockScanNode branch from fb2a052 to 438b216 Compare January 28, 2020 20:21
Contributor Author

@nvb nvb left a comment


TFTRs! I added one more commit to disable distributing TableReaders that intend to perform row-level locking.

I think we might need to bump DistSQLVersion so that there was no problem in a mixed-version cluster scenario.

Done.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @rohany)


pkg/sql/sqlbase/locking.go, line 21 at r1 (raw file):

Previously, rohany (Rohan Yadav) wrote…

Why do we need a locking strength defined in sqlbase and tree?

We need it in tree because the variants correspond to AST nodes. sqlbase has a dependency on tree, so we can't have tree reference this enum without creating a dependency cycle. We need it in sqlbase because we need to send these over the wire in processor specs, so they need to be protobuf enums. I'm open to a better way to do this, but I couldn't find one.

@nvb
Contributor Author

nvb commented Jan 28, 2020

bors r+

craig bot pushed a commit that referenced this pull request Jan 28, 2020
44429: sql: propagate row-level locking modes through execbuilder to row.Fetcher r=nvanbenschoten a=nvanbenschoten

Relates to #40205.

This commit is a follow-up to #44015. It propagates the row-level locking modes
from the optbuilder all the way down to the row.Fetcher. This requires passing
the locking information through the execbuilder and then through DistSQL. The
commit leaves off at the point where `roachpb.ScanRequest`s are constructed.

I couldn't find a good way to test this. It's possible I missed something and
would love any tips, but I think we may need to wait until the plumbing into
the KV API is complete before performing end-to-end tests.

The next step will be to hook the locking modes into the KV API.

Release note: None

44446: roachtest: don't use 20.1 nodelocal on older versions r=dt a=dt

The ability to target the nodelocal directory *of a particular node* was added in 20.1,
so when roachtesting earlier versions we still need to use the old google cloud storage bucket.

Release note: none.

44461: kv: small readability improvement to the heartbeat r=andreimatei a=andreimatei

Rearrange protection against a race to make it more obvious. A casual
reading suggested to me that this protection was missing.

Release note: None

44463: kv: simplify retriable error handling r=andreimatei a=andreimatei

The contract of handleRetryableErrLocked() was weird for no discernible
reason.

Release note: None

Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Co-authored-by: David Taylor <tinystatemachine@gmail.com>
Co-authored-by: Andrei Matei <andrei@cockroachlabs.com>
@craig
Contributor

craig bot commented Jan 28, 2020

Build succeeded

@craig craig bot merged commit 438b216 into cockroachdb:master Jan 28, 2020
@nvb nvb deleted the nvanbenschoten/lockScanNode branch January 31, 2020 17:08
nvb added a commit to nvb/cockroach that referenced this pull request Aug 7, 2020
Fixes half of cockroachdb#40476.

In cockroachdb#52388, we added support for a new `Error` wait policy to be attached
to BatchRequests. This wait policy dictates that if a request encounters
a conflicting lock held by another active transaction, it should raise
an error instead of blocking.

These semantics closely match those of FOR {UPDATE,SHARE} NOWAIT in
PostgreSQL. With NOWAIT, the statement reports an error, rather than
waiting, if a selected row cannot be locked immediately. This means that
we are able to use the new kv-level wait policy to implement NOWAIT in
SQL. Doing so is fairly mechanical, as a lot of the plumbing necessary
was already put into place in cockroachdb#44429.

The only difference I can see in the semantics is that in PostgreSQL,
rows that are being INSERTed by one transaction are not even considered
for locking by other transactions, so NOWAIT will not throw an error
when encountering these rows. Instead, it will silently ignore them.
This has less to do with NOWAIT itself, and more to do with a difference
in the concurrency control implementation between Cockroach and
PostgreSQL. Since this isn't specific to NOWAIT, I don't think it's a
blocker for this change.

Release note (sql change): SELECT ... FOR {UPDATE,SHARE} NOWAIT is now
supported. The option can be used to throw an error instead of blocking
on contended row-level lock acquisition.
nvb added a commit to nvb/cockroach that referenced this pull request Aug 8, 2020
nvb added a commit to nvb/cockroach that referenced this pull request Aug 16, 2020
craig bot pushed a commit that referenced this pull request Aug 18, 2020
52522: sql: support FOR {UPDATE,SHARE} NOWAIT r=nvanbenschoten a=nvanbenschoten

Fixes half of #40476.

In #52388, we added support for a new `Error` wait policy to be attached
to BatchRequests. This wait policy dictates that if a request encounters
a conflicting lock held by another active transaction, it should raise
an error instead of blocking.

These semantics closely match those of FOR {UPDATE,SHARE} NOWAIT in
PostgreSQL. With NOWAIT, the statement reports an error, rather than
waiting, if a selected row cannot be locked immediately. This means that
we are able to use the new kv-level wait policy to implement NOWAIT in
SQL. Doing so is fairly mechanical, as a lot of the plumbing necessary
was already put into place in #44429.

The only difference I can see in the semantics is that in PostgreSQL,
rows that are being INSERTed by one transaction are not even considered
for locking by other transactions, so NOWAIT will not throw an error
when encountering these rows. Instead, it will silently ignore them.
This has less to do with NOWAIT itself, and more to do with a difference
in the concurrency control implementation between Cockroach and
PostgreSQL. Since this isn't specific to NOWAIT, I don't think it's a
blocker for this change.

Release note (sql change): SELECT ... FOR {UPDATE,SHARE} NOWAIT is now
supported. The option can be used to throw an error instead of blocking
on contended row-level lock acquisition.

52628: importccl: support IMPORT INTO for DELIMITED and PGCOPY r=pbardea a=Anzoteh96

Previously, IMPORT INTO was supported only for CSV and AVRO. This PR extends support to DELIMITED and PGCOPY, in the hope that this will be useful when supporting default columns for imports from these file formats.

This involves populating the targeted columns from the import processor when creating the new input reader and row converter for DELIMITED and PGCOPY, when the targeted columns are specified in `IMPORT INTO`.

Fixes #52405

Release note (general): IMPORT INTO is now supported for DELIMITED and PGCOPY file formats. 

52769: sql: avoid flake in scan_test r=asubiotto a=jordanlewis

The new sqlliveness package doesn't get along well with the kv batch
size "atomic", so turn down its check frequency in the test. Possibly we
should make the "atomic" actually atomic, but I'm not sure about the
runtime performance change.

Fixes #52683

Release note: None

Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Co-authored-by: anzoteh96 <anzot@cockroachlabs.com>
Co-authored-by: Jordan Lewis <jordanthelewis@gmail.com>