[client channel] combine two subchannel data structures into one #40880

Closed

markdroth wants to merge 2 commits into grpc:master from markdroth:client_channel_combine_subchannel_maps
Conversation

@markdroth
Member

Currently, the client channel maintains two different data structures for tracking subchannels:

  • subchannel_refcount_map_, which tracks the number of subchannel wrappers per subchannel. This is used to determine when to create and remove channelz node linkage.
  • subchannel_wrappers_, which tracks the set of all subchannel wrappers across all subchannels. This was originally introduced back in "Apply health check service name changes to existing subchannels" (#20039) as part of propagating the health check service name to subchannels, but we switched to using a different approach for the health check service name in "pass health check service name through LB policies via a channel arg" (#26441). However, in between those two, we started using this data structure for handling keepalive time in "Keepalive Throttling" (#23313) -- which is actually somewhat inefficient, since we may wind up setting the keepalive time on the underlying subchannel more than once in the case where there is more than one subchannel wrapper per subchannel (which happens frequently during updates).

This PR combines these two data structures into one: a map from subchannel to a set of subchannel wrappers for that subchannel. This is used both for channelz node updates and keepalive propagation -- and it's more efficient for the latter, because we can now update each subchannel exactly once.

This also paves the way for a subsequent change that will be needed as part of the MAX_CONCURRENT_STREAMS design.

@markdroth markdroth added the release notes: no Indicates if PR should not be in release notes label Oct 9, 2025
@markdroth markdroth marked this pull request as ready for review October 10, 2025 00:37
@markdroth markdroth requested a review from ctiller October 10, 2025 00:37
@ctiller
Member

ctiller commented Oct 24, 2025

The changes seem reasonable, but I'd like to explore testing/experiment status: I'm not convinced there's sufficient testing right now to call this safe.

@markdroth
Member Author

Is there a specific testing gap you're worried about on this one?

The only two features affected by these data structures are:

  • channelz subchannel node linkage
  • keepalive time propagation

So it looks to me like we do have coverage here.

@markdroth
Member Author

I chatted with @ctiller, and we agreed that this can go forward, but he didn't have time to review before his vacation.

@murgatroid99, can you please review? Thanks!

@markdroth markdroth requested review from murgatroid99 and removed request for ctiller October 31, 2025 22:49
@copybara-service copybara-service bot closed this in fa58250 Nov 7, 2025
@markdroth markdroth deleted the client_channel_combine_subchannel_maps branch November 8, 2025 00:02
yuanweiz pushed a commit to yuanweiz/grpc that referenced this pull request Nov 12, 2025
Closes grpc#40880

COPYBARA_INTEGRATE_REVIEW=grpc#40880 from markdroth:client_channel_combine_subchannel_maps f289c20
PiperOrigin-RevId: 829602334
sreenithi pushed a commit to sreenithi/grpc that referenced this pull request Nov 26, 2025