gzip: make use of generalized compression filter #10306
mattklein123 merged 4 commits into envoyproxy:master from
Conversation
Signed-off-by: Dmitry Rozhkov <dmitry.rozhkov@linux.intel.com>
mattklein123 left a comment
LGTM with some small comments. Thank you!
```proto
// .. attention:
//
// **This field is deprecated**. Set the `compressor` field instead.
google.protobuf.UInt32Value content_length = 2 [deprecated = true];
```
Please update https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated. Also, are we going to deprecate the standalone gzip filter entirely? Perhaps that should be done in a different change, since I think doing the warning/stat dance for that deprecation will not be built in. cc @zuercher, who has been working on this recently.
> Also are we going to deprecate the standalone gzip filter entirely?
The filter itself isn't deprecated. Our users shouldn't notice any changes in its functionality except the new deprecation warning when using content_length and the other deprecated config fields.
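For illustration, a migration off the deprecated field might look like the sketch below (values hypothetical; the `content_length` field name inside `compressor` is an assumption based on the generic compressor config discussed in this PR):

```yaml
# Before: deprecated top-level field on the gzip filter config.
http_filters:
- name: envoy.filters.http.gzip
  config:
    content_length: 100   # deprecated field

# After: the equivalent knob on the generalized compressor sub-config
# (field name assumed to mirror the deprecated one).
http_filters:
- name: envoy.filters.http.gzip
  config:
    compressor:
      content_length: 100
```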
But why would we keep the gzip filter when there is the generic compression filter? Should we deprecate the gzip filter and have users migrate to using the generic compression filter?
The generic filter is not exposed to users directly. The idea was to expose it through a new config field of the existing "gzip" filter, and then to introduce a new filter, "brotli", exposing the same field:
```yaml
...
http_filters:
- name: envoy.filters.http.gzip
  config:
    compression_level: speed
    compressor:
      remove_accept_encoding_header: true
- name: envoy.filters.http.brotli
  config:
    quality: 12
    compressor:
      remove_accept_encoding_header: true
```

This way users would do the transition seamlessly.
If I understood correctly, you wanted to keep "gzip" as it was and to expose the generic filter as a normal HTTP filter parameterized with a compression library config:
```yaml
...
http_filters:
- name: envoy.filters.http.compress
  config:
    remove_accept_encoding_header: true
    compressor_library:
    - name: gzip
      config:
        compression_level: speed
- name: envoy.filters.http.compress
  config:
    remove_accept_encoding_header: true
    compressor_library:
    - name: brotli
      config:
        quality: 12
```

This way users may experience unexpected results when both the old gzip filter and the new generic filter end up in the same filter chain for some reason.
But since this PR has already been merged, the gzip filter is now capable of cooperating with other compression filters, so users won't get unexpected results.
Still, I need to know which way is preferable.
Yes I would definitely prefer the 2nd option, as it's more extensible for the future. I think we should add that, allow the generic compression filter to support gzip, and then deprecate the gzip filter?
```yaml
    - text/html
    - application/json
    disable_on_etag_header: true
    compressor:
```
Please add a DEPRECATED_FEATURE_TEST that tests the old path.
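Such a test would feed the filter the old-style config, i.e. the deprecated top-level fields instead of the new `compressor` sub-message — roughly a fragment like this sketch (values illustrative; the field set is taken from the old gzip filter options shown in this thread):

```yaml
http_filters:
- name: envoy.filters.http.gzip
  config:
    # Deprecated path: options set directly on the gzip filter,
    # without the compressor sub-message.
    content_length: 100
    content_type:
    - text/html
    - application/json
    disable_on_etag_header: true
```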
This is awesome!
Signed-off-by: Dmitry Rozhkov <dmitry.rozhkov@linux.intel.com>
Signed-off-by: Dmitry Rozhkov <dmitry.rozhkov@linux.intel.com>
Signed-off-by: Dmitry Rozhkov <dmitry.rozhkov@linux.intel.com>
Thanks! I've updated the doc and the integration test.
mattklein123 left a comment
Thanks! We can ship and iterate on the full gzip filter deprecation.
Alright, I’ll proceed with the second option then.
…On Wednesday, March 11, 2020, Matt Klein wrote:
> Yes I would definitely prefer the 2nd option, as it's more extensible for the future. I think we should add that, allow the generic compression filter to support gzip, and then deprecate the gzip filter?
This broke the
I guess the answer depends on how long it'll take to deprecate the gzip filter...
Nevermind, noticed a
Description:
This is the second part of the refactoring started in #7057. The `GzipFilter` class is removed completely and the generalized `CompressionFilter` is instantiated instead by `GzipFilterFactory`. Instances of `CompressionFilter` are configured to use "gzip" compressors to deflate HTTP streams.
Risk Level: Medium
Testing: unit tests, manual tests
Docs Changes: updated docs/root/configuration/http_filters/gzip_filter.rst
Release Notes: N/A
Deprecated: the statistics counter
`<stat_prefix>.gzip.header_gzip` is deprecated in favor of `<stat_prefix>.gzip.header_compressor_used` provided by the generalized compression filter (the meaning stays the same). Also, a new counter `<stat_prefix>.gzip.header_compressor_overshadowed` is added, which contains the number of requests skipped by this filter instance because they were handled by another compression filter in the same filter chain.
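For quick reference, the counter changes described above can be summarized as follows (names taken from this description; the `<stat_prefix>` placeholder is filled in per listener config):

```text
<stat_prefix>.gzip.header_gzip                     # deprecated
<stat_prefix>.gzip.header_compressor_used          # replacement, same meaning
<stat_prefix>.gzip.header_compressor_overshadowed  # new: requests handled by another
                                                   # compression filter in the chain
```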