
rangecache: fix a span use-after-Finish#73478

Merged
craig[bot] merged 1 commit into cockroachdb:master from andreimatei:tracing.fix-rangecache-fork
Dec 6, 2021

Conversation

@andreimatei
Contributor

Lookups of range descriptors use a form of pretty unusual unstructured
concurrency: a request spawns a goroutine that might outlive it (in case
the request is canceled), but at the same time the goroutine wants to be
part of the same trace as the request. Before this patch, the goroutine
was responsible for forking the request's span asynchronously. This was
pretty broken because the request's span might be finished by the time
it's forked. This is a use-after-Finish, and I'm trying to stop
tolerating such uses. This patch fixes it by forking the span
synchronously.

Release note: None
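The pattern the commit message describes can be sketched as follows. This is a minimal, self-contained illustration of the fix, not the real `tracing` package API: `Span`, `Fork`, and `lookupRangeDescriptor` here are simplified stand-ins. The key point is that the fork happens synchronously in the caller, before the goroutine that may outlive the request is spawned.

```go
package main

import (
	"fmt"
	"sync"
)

// Span is a toy stand-in for a tracing span; Finish marks it unusable.
type Span struct {
	name     string
	mu       sync.Mutex
	finished bool
}

func (s *Span) Finish() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.finished = true
}

// Fork derives a child span that may outlive the parent. Calling it after
// Finish is exactly the use-after-Finish bug the patch eliminates.
func (s *Span) Fork(name string) *Span {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.finished {
		panic("use-after-Finish: forking a finished span")
	}
	return &Span{name: s.name + "/" + name}
}

// lookupRangeDescriptor sketches the fixed flow: the fork happens
// synchronously, before starting the goroutine that may outlive the request.
func lookupRangeDescriptor(reqSpan *Span, done chan<- *Span) {
	child := reqSpan.Fork("range lookup") // synchronous fork: always safe
	go func() {
		// The goroutine uses only its own span; even if the request is
		// canceled and reqSpan is finished, child remains valid.
		defer child.Finish()
		done <- child
	}()
}

func main() {
	req := &Span{name: "request"}
	done := make(chan *Span)
	lookupRangeDescriptor(req, done)
	req.Finish() // the request ends; the lookup goroutine keeps going
	fmt.Println((<-done).name) // prints "request/range lookup"
}
```

In the pre-patch shape, the `Fork` call lived inside the goroutine, so it raced the caller's `Finish`; moving it before the `go` statement removes the race entirely.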

@andreimatei andreimatei requested a review from a team as a code owner December 5, 2021 01:52
@cockroach-teamcity
Member

This change is Reviewable

Contributor

@aayushshah15 aayushshah15 left a comment


Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @aayushshah15 and @andreimatei)


pkg/kv/kvclient/rangecache/range_cache.go, line 649 at r1 (raw file):

	}
	requestKey := makeLookupRequestKey(key, prevDesc, useReverseScan)
	reqCtx, reqSpan := tracing.EnsureChildSpan(ctx, rc.tracer, "range lookup")

Could you say why we don't just fork synchronously inside the DoChan callback?

@andreimatei andreimatei force-pushed the tracing.fix-rangecache-fork branch from fff5ca0 to b9a4a72 on December 6, 2021 15:13
Contributor Author

@andreimatei andreimatei left a comment


Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @aayushshah15)


pkg/kv/kvclient/rangecache/range_cache.go, line 649 at r1 (raw file):

Previously, aayushshah15 (Aayush Shah) wrote…

Could you say why we don't just fork synchronously inside the DoChan callback?

Like this?

Contributor

@aayushshah15 aayushshah15 left a comment


:lgtm:

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @aayushshah15)

Contributor Author

@andreimatei andreimatei left a comment


TFTR

bors r+

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @aayushshah15)

@craig
Contributor

craig bot commented Dec 6, 2021

Build failed (retrying...):

@craig craig bot merged commit 69b29a5 into cockroachdb:master Dec 6, 2021
@craig
Contributor

craig bot commented Dec 6, 2021

Build succeeded:

@andreimatei andreimatei deleted the tracing.fix-rangecache-fork branch January 21, 2022 17:59