upstream: update host's socket factory when metadata is updated. #16708
lizan merged 7 commits into envoyproxy:main from
Conversation
Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
/retest
Retrying Azure Pipelines:
```cpp
void metadata(MetadataConstSharedPtr new_metadata) override {
  absl::WriterMutexLock lock(&metadata_mutex_);
  metadata_ = new_metadata;
  // Update data members dependent on metadata.
```
Could `resolveTransportSocketFactory` be moved out of the lock guard?
The referenced Metadata should be immutable and valid.
@lambdai I can add a scope around the lock to release it before calling `resolveTransportSocketFactory`. Why do you think it is necessary? Deadlock?
Sorry for the late reply.
It's not necessary but nice to have.
The only reason is performance: `resolveTransportSocketFactory` doesn't need the lock's protection.
Take it or leave it. It's probably not a bottleneck.
```cpp
auto factory = resolveTransportSocketFactory(address_, new_metadata.get());
{
  absl::WriterMutexLock lock(&metadata_mutex_);
  metadata_ = new_metadata;
  socket_factory_ = factory;
}
```
Updated as you suggested. There is a small performance improvement from resolving the factory outside of the mutex-protected scope. Thanks!
…tected scope. Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
```diff
  HealthCheckHostMonitorPtr health_checker_;
  std::atomic<uint32_t> priority_;
- Network::TransportSocketFactory& socket_factory_;
+ std::reference_wrapper<Network::TransportSocketFactory> socket_factory_;
```
Does this need an `ABSL_GUARDED_BY` annotation as well? Should it use the same mutex as the metadata?
Annotation added.
Yeah, after checking the code layout I believe it is possible for a metadata update and the creation of a new upstream connection (which needs access to the transport socket factory) to happen at the same time on different threads. To protect against race conditions, a mutex around `socket_factory_` has been added.
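A minimal sketch of the annotation idea discussed here. Envoy uses `absl::Mutex` with `ABSL_GUARDED_BY`; the macro is defined as a no-op below so the sketch compiles without Abseil, but in real code it lets clang's `-Wthread-safety` analysis flag any access to `socket_factory_` made without holding `metadata_mutex_`. The class and member names only mirror the PR; this is not the actual Envoy code.

```cpp
#include <functional>
#include <mutex>

// In Envoy this is ABSL_GUARDED_BY(metadata_mutex_); a no-op here so the
// example builds without Abseil.
#define GUARDED_BY(x)

struct TransportSocketFactory { // stand-in for Network::TransportSocketFactory
  int id = 0;
};

class HostDescription {
public:
  explicit HostDescription(TransportSocketFactory& factory) : socket_factory_(factory) {}

  // Writers and readers take the same mutex; the GUARDED_BY annotation is
  // what lets the compiler verify that discipline statically.
  void setFactory(TransportSocketFactory& factory) {
    std::lock_guard<std::mutex> lock(metadata_mutex_);
    socket_factory_ = factory;
  }

  int factoryId() {
    std::lock_guard<std::mutex> lock(metadata_mutex_);
    return socket_factory_.get().id;
  }

private:
  std::mutex metadata_mutex_;
  // Guarded by the same mutex as the metadata, since both are updated together.
  std::reference_wrapper<TransportSocketFactory> socket_factory_ GUARDED_BY(metadata_mutex_);
};
```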
It is possible that `metadata_` is updated at the same time that `socket_factory_` is used to create a new upstream connection. Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
lizan
left a comment
/assign-from @envoyproxy/maintainers
for a non-Tetrate review
@envoyproxy/maintainers assignee is @lizan
/assign-from @envoyproxy/maintainers
@envoyproxy/maintainers assignee is @lizan
/assign-from @envoyproxy/maintainers
@envoyproxy/maintainers assignee is @jmarantz
jmarantz
left a comment
Some superficial comments here. I think someone who knows the semantics of this corner of the code also needs to review.
```cpp
  metadata_ = new_metadata;
}
{
  absl::WriterMutexLock lock(&socket_factory_mutex_);
```
Is it semantically sound to have a separate lock for the socket factory? If there's a thread wakeup between these two locks, will there be an inconsistent view of the data?
Simplified it; one mutex is now used to protect `metadata_` and the fields dependent on it, in this case `socket_factory_`.
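The final shape of the update can be sketched as follows: resolve the factory outside the critical section (per the earlier suggestion), then swap `metadata_` and `socket_factory_` together under the single mutex so readers never see one without the other. This is a simplified, hypothetical model; the types and the matcher logic are stand-ins, not Envoy's actual implementation.

```cpp
#include <functional>
#include <memory>
#include <mutex>
#include <string>

struct Metadata {
  std::string match_key; // stand-in for the transport socket match criteria
};
using MetadataConstSharedPtr = std::shared_ptr<const Metadata>;

struct TransportSocketFactory {
  std::string name;
};

class Host {
public:
  explicit Host(TransportSocketFactory& initial) : socket_factory_(initial) {}

  void metadata(MetadataConstSharedPtr new_metadata) {
    // Resolution may be comparatively expensive; doing it before taking the
    // lock keeps the critical section down to two cheap assignments.
    TransportSocketFactory& factory = resolveTransportSocketFactory(*new_metadata);
    {
      std::lock_guard<std::mutex> lock(metadata_mutex_);
      metadata_ = std::move(new_metadata);
      socket_factory_ = factory;
    }
  }

  std::string factoryName() {
    std::lock_guard<std::mutex> lock(metadata_mutex_);
    return socket_factory_.get().name;
  }

private:
  // Toy stand-in for the transport socket matcher: pick a factory by key.
  TransportSocketFactory& resolveTransportSocketFactory(const Metadata& md) {
    static TransportSocketFactory plaintext{"plaintext"};
    static TransportSocketFactory tls{"tls"};
    return md.match_key == "tls" ? tls : plaintext;
  }

  std::mutex metadata_mutex_;
  MetadataConstSharedPtr metadata_;                                // guarded by metadata_mutex_
  std::reference_wrapper<TransportSocketFactory> socket_factory_;  // guarded by metadata_mutex_
};
```

Because both fields change under one lock, a concurrent reader observes either the old metadata with the old factory or the new metadata with the new factory, never a mixed state.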
/wait
Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
jmarantz
left a comment
Thanks, looks much better.
This was pre-existing, but most people I know, myself included, have seen worse performance from reader/writer locks than from plain locks.
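For illustration, here is what the reviewer's plain-lock vs reader/writer-lock trade-off looks like with standard primitives. (Envoy uses `absl::Mutex`, where `MutexLock` vs `ReaderMutexLock`/`WriterMutexLock` make the same distinction on a single mutex type; the standard-library types below are just a self-contained sketch, not the code under review.)

```cpp
#include <mutex>
#include <shared_mutex>

struct WithPlainLock {
  std::mutex mu;
  int value = 0;
  int read() {
    std::lock_guard<std::mutex> lock(mu); // exclusive even for readers
    return value;
  }
  void write(int v) {
    std::lock_guard<std::mutex> lock(mu);
    value = v;
  }
};

struct WithRwLock {
  std::shared_mutex mu;
  int value = 0;
  int read() {
    std::shared_lock<std::shared_mutex> lock(mu); // readers may run concurrently
    return value;
  }
  void write(int v) {
    std::unique_lock<std::shared_mutex> lock(mu); // writers are exclusive
    value = v;
  }
};
```

For a critical section as short as the one in this PR (two assignments), the extra bookkeeping a reader/writer lock does on every acquire can cost more than the reader concurrency it buys, which is the reviewer's point.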
@jmarantz Thanks for reviewing and explaining the issue with the locks.
* main:
  - listener: match rebalancer to listener IP family type (envoyproxy#16914)
  - jwt_authn: implementation of www-authenticate header (envoyproxy#16216)
  - listener: reset the file event in framework instead of listener filter doing itself (envoyproxy#17227)
  - Small typo fix (envoyproxy#17247)
  - Doc: Clarify request/response attributes are http-only (envoyproxy#17204)
  - bazel/README.md: add aspell comment (envoyproxy#17072)
  - docs: Fix broken URL links in HTTP upgrades doc (envoyproxy#17225)
  - remove the wrong comment on test (envoyproxy#17233)
  - upstream: allow clusters to skip waiting on warmup for initialization (envoyproxy#17179)
  - JwtAuthn: support completing padding on forward jwt payload header (envoyproxy#16752)
  - remove support for v2 UNSUPPORTED_REST_LEGACY (envoyproxy#16968)
  - metrics service: fix wrong argument arrange on MetricsServiceSink (envoyproxy#17127)
  - deps: update cel-cpp to 0.6.1 (envoyproxy#16293)
  - Add ability to filter ConfigDump. (envoyproxy#16774)
  - examples: fix Wasm example. (envoyproxy#17218)
  - upstream: update host's socket factory when metadata is updated. (envoyproxy#16708)

Signed-off-by: Garrett Bourg <bourg@squareup.com>
…oyproxy#16708)

During dynamic endpoint updates via EDS, the `BaseDynamicClusterImpl::updateDynamicHostList` method tries to minimize deletions/insertions. If only metadata was changed, the endpoint is updated in place by calling `HostDescriptionImpl::metadata`, which updates the metadata but not the fields dependent on it. In particular, if the metadata's transport socket matcher was changed, `socket_factory_` was not updated.

Risk Level: Low
Testing: added unit test
Docs Changes: no
Release Notes: no
Platform Specific Features:
Fixes envoyproxy#16536

Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
Signed-off-by: chris.xin <xinchuantao@qq.com>
Commit Message:
Update host's socket factory when metadata is updated.
Additional Description:
During dynamic endpoint updates via EDS, the `BaseDynamicClusterImpl::updateDynamicHostList` method tries to minimize deletions/insertions. If only the metadata has changed, the endpoint is updated in place by calling `HostDescriptionImpl::metadata`, which updates the metadata but not the fields that depend on it. In particular, if the metadata's transport socket matcher was changed, `socket_factory_` was not updated.
Risk Level: Low
Testing: added unit test
Docs Changes: no
Release Notes: no
Platform Specific Features:
Fixes #16536
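The bug described above can be reduced to a toy model: an in-place metadata update that forgets to re-run the transport socket matcher leaves the host pointing at a stale factory. All names below are hypothetical stand-ins for illustration, not Envoy's actual types.

```cpp
#include <string>
#include <unordered_map>

struct Factory {
  std::string name;
};

// Stand-in for the transport socket matcher: metadata key -> factory.
Factory& resolveFactory(const std::string& metadata_key) {
  static std::unordered_map<std::string, Factory> factories = {
      {"plaintext", {"plaintext"}}, {"tls", {"tls"}}};
  return factories.at(metadata_key);
}

struct Host {
  std::string metadata_key;
  Factory* factory;
};

// Before the fix: the metadata changes, but the dependent factory does not.
void updateMetadataBuggy(Host& host, const std::string& new_key) {
  host.metadata_key = new_key;
}

// After the fix: fields dependent on metadata are refreshed along with it.
void updateMetadataFixed(Host& host, const std::string& new_key) {
  host.metadata_key = new_key;
  host.factory = &resolveFactory(new_key);
}
```

With the buggy update, a host whose metadata switches from `plaintext` to `tls` keeps creating plaintext connections; the fixed update re-resolves the factory whenever the metadata changes.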