cc @tonya11en |
mergeconflict approved these changes on Apr 18, 2019 and commented:
Yeah, makes sense. If I can figure out how to do a per-thread in-memory histogram, that's probably optimal. And either way, a config setting makes sense.
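The "per-thread in-memory histogram" idea might look something like this minimal sketch. Everything here is illustrative, not Envoy's actual stats code: the class name, the linear 100us bucket layout, and the `thread_local` instance are all assumptions about how per-thread recording could avoid per-sample sink I/O.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of a per-thread in-memory histogram: each worker
// thread records samples locally (no locks, no per-sample sink writes),
// and only summary data would be flushed to stats sinks periodically.
// Bucket layout and names are illustrative, not Envoy's stats API.
class InMemoryHistogram {
public:
  // Record one sample; linear 100us buckets, overflow lands in the last one.
  void record(uint64_t value_us) {
    size_t idx = std::min<size_t>(value_us / 100, buckets_.size() - 1);
    ++buckets_[idx];
    ++count_;
  }
  uint64_t count() const { return count_; }
  uint64_t bucket(size_t i) const { return buckets_[i]; }

private:
  std::array<uint64_t, 16> buckets_{};
  uint64_t count_ = 0;
};

// One instance per thread, so recording never contends across workers.
thread_local InMemoryHistogram dispatcher_loop_histogram;
```

Because each thread owns its histogram, recording stays allocation-free and lock-free on the hot path; the cost moves to a periodic flush instead of every sample.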
mpuncel added a commit to mpuncel/envoy that referenced this pull request on Apr 19, 2019:
* master: (26 commits)
  - docs: update docs to recommend /retest repokitteh command (envoyproxy#6655)
  - http timeout integration test: wait for 15s for upstream reset (envoyproxy#6646)
  - access log: add response code details to the access log formatter (envoyproxy#6626)
  - build: add ppc build badge to README (envoyproxy#6629)
  - Revert dispatcher stats (envoyproxy#6649)
  - Batch implementation with timer (envoyproxy#6452)
  - fault filter: reset token bucket on data start (envoyproxy#6627)
  - event: update libevent dependency to fix race condition (envoyproxy#6637)
  - examples: standardize docker-compose version and yaml extension (envoyproxy#6613)
  - quiche: Implement SpdyUnsafeArena using SpdySimpleArena (envoyproxy#6612)
  - router: support customizable retry back-off intervals (envoyproxy#6568)
  - api: create OpenRCA service proto file (envoyproxy#6497)
  - ext_authz: option for clearing route cache of authorized requests (envoyproxy#6503)
  - build: update jinja to 2.10.1. (envoyproxy#6623)
  - tools: check spelling in pre-push hook (envoyproxy#6631)
  - security: blameless postmortem template. (envoyproxy#6553)
  - Implementing Endpoint lease for ClusterLoadAssigment (envoyproxy#6477)
  - add HTTP integration tests exercising timeouts (envoyproxy#6621)
  - event: fix DispatcherImplTest::InitializeStats flake (envoyproxy#6619)
  - Add tag extractor for RDS route config name (envoyproxy#6618)
  - ...

Signed-off-by: Michael Puncel <mpuncel@squareup.com>
htuch pushed a commit that referenced this pull request on Apr 23, 2019:
Reintroduce dispatcher stats previously reverted in #6649. Dispatcher stats are now gated on a new bootstrap config parameter, enable_dispatcher_stats, and disabled by default.

Risk Level: Low
Testing: Manually verified that the config is respected (see #6582; this still requires integration tests).
Docs Changes: Included a note about how statsd doesn't play well with these high-volume histograms.

Signed-off-by: Dan Rosen <mergeconflict@google.com>
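Per the commit message, the reintroduced stats are opt-in via a bootstrap parameter. A sketch of what that bootstrap config might look like; the field name comes from the commit message, but its exact placement in the bootstrap proto is an assumption here:

```yaml
# Bootstrap config sketch: opt in to the high-volume dispatcher histograms.
# Disabled by default; enable only if your stats sink can absorb the volume
# (the docs note that statsd in particular does not handle this well).
enable_dispatcher_stats: true
```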
This reverts:
- 109d23a
- bad70bf
- 77748b2
@mergeconflict as I mentioned in Slack (https://envoyproxy.slack.com/archives/C78HA81DH/p1555624409062200), for statsd backends this writes raw histograms out to statsd at an extremely high rate, which is not optimal. We should probably make this feature opt-in for perf reasons, but we either need a giant warning about statsd, or we need to use an in-memory histogram for this data and perhaps write only summary data out to the stats sinks. WDYT?
Either way, I would prefer to revert for now, if that's OK with you, while we figure out the right solution.