feat: Add gzip/deflate Content-Encoding support to OTLP traces endpoint (#19024)
harupy merged 10 commits into mlflow:master
Conversation
Signed-off-by: Miaoxiang <zs1030823169@gmail.com>
- Extract Content-Encoding decompression logic into `decompress_otlp_body` helper in `mlflow/tracing/utils/otlp.py`
- Support identity, gzip, and deflate encodings using pattern match (Python 3.10+)
- Simplify handler by delegating decompression to helper
- Improves maintainability and allows future reuse for other OTLP endpoints

Signed-off-by: Miaoxiang <zs1030823169@gmail.com>
Hi @harupy, thank you for your comments. I've made some adjustments; please review again.
Signed-off-by: Miaoxiang <zs1030823169@gmail.com>
Co-authored-by: Harutaka Kawamura <hkawamura0130@gmail.com> Signed-off-by: Miaoxiang <zs1030823169@gmail.com>
Removed docstring comments from test functions for clarity. Signed-off-by: Miaoxiang <zs1030823169@gmail.com>
Signed-off-by: Miaoxiang <zs1030823169@gmail.com>
- Parameterize compression tests to cover multiple encodings (identity, gzip, deflate-rfc, deflate-raw)
- Parameterize error tests to cover invalid data for each encoding type
- Add type hints to test functions for better code clarity
- Use pytest 8.4+ `check` parameter for cleaner exception validation
- Increase test coverage from 3 to 7 test cases

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
@Miaoxiang-philips I've pushed a commit. Can you take a look and make sure everything looks ok?
I've checked it, and it's fine for me.
@Miaoxiang-philips Thanks for the confirmation. I wonder if we could add a lightweight integration test (one that reproduces the error on master but passes on this PR) using docker, or anything else that could have detected the error.
Sure, but I couldn't find anything about integration testing in CONTRIBUTING.md. Could you point me to some information? If you are only concerned about whether the change is reasonable, I can provide more context: the OTel Collector uses gzip compression by default when the otlphttp exporter is used, unless it is explicitly set to none: otlphttpexporter. Generally, a /v1/traces endpoint needs to handle at least gzip compression, which is quite standard.
@Miaoxiang-philips Thanks for the reply! Could you share the steps to reproduce the error? I'll turn them into a test case.
Sure, I've updated the PR description @harupy
@Miaoxiang-philips Thanks! I think I'm missing something. I ran http://localhost:55679/debug/tracez:
Never mind, I was able to reproduce the error.
Awesome, manually confirmed it works!
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
| """ | ||
| from fastapi import HTTPException, status | ||
|
|
||
| match content_encoding: |
I removed `identity`. According to https://github.com/open-telemetry/opentelemetry-collector/blob/73f090cd0d59a3faa237168750d44a0036d8fd5c/config/configcompression/compressiontype.go#L23, there is no compression type named `identity`.
I tested `''` (empty string), `'none'`, and null. `content_encoding` is `None` for all of them.
receivers:
otlp:
protocols:
http:
endpoint: 0.0.0.0:4318
exporters:
otlphttp:
    endpoint: http://host.docker.internal:5000 # when running the otel collector in docker, this reaches localhost:5000 on the host
compression: ... 👈
headers:
x-mlflow-experiment-id: "0" # default id
service:
pipelines:
traces:
receivers: [otlp]
exporters: [otlphttp]
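One possible way to launch the collector with the config above, if you go the docker route. The image tag, config filename, and mount path are assumptions, not taken from the PR; adjust them for your setup:

```shell
# Run the OpenTelemetry Collector with the config above saved as otel-config.yaml.
# host.docker.internal in the exporter endpoint resolves to the host machine,
# where MLflow is assumed to be listening on port 5000.
docker run --rm \
  -p 4318:4318 \
  -v "$(pwd)/otel-config.yaml:/etc/otelcol/config.yaml" \
  otel/opentelemetry-collector:latest \
  --config /etc/otelcol/config.yaml
```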
…nt (mlflow#19024) Signed-off-by: Miaoxiang <zs1030823169@gmail.com> Signed-off-by: harupy <17039389+harupy@users.noreply.github.com> Co-authored-by: miaoxiang.wang <miaoxiang.wang.extern@porsche.digital> Co-authored-by: Harutaka Kawamura <hkawamura0130@gmail.com> Co-authored-by: harupy <17039389+harupy@users.noreply.github.com> Co-authored-by: Claude <noreply@anthropic.com>

Related Issues/PRs
Related #18613
What changes are proposed in this pull request?
MLflow's OTLP/HTTP traces ingestion currently does not support any compression format.
OpenTelemetry Collector sends OTLP/HTTP requests using gzip compression by default, which causes MLflow to return an error.
This PR adds support for the required OTLP/HTTP compression encodings: gzip and deflate.
The server now automatically decompresses the request body before parsing the protobuf payload. This makes MLflow compatible with all standard OTLP/HTTP exporters without requiring the user to disable compression.
This change does not affect any existing functionality and maintains full backward compatibility.
Reproduce steps:
If you don't have a ready-made method, you can use telemetrygen to simulate data transmission. Download link: telemetrygen package
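As a lighter-weight alternative to running the collector, the failing request can be simulated by POSTing a gzip-compressed body straight to the traces endpoint. The URL, port, and content type below are assumptions based on the PR description, and the one-byte body stands in for a real serialized `ExportTraceServiceRequest` protobuf:

```python
import gzip
import urllib.request


def build_gzip_otlp_request(
    body: bytes,
    url: str = "http://localhost:5000/v1/traces",  # assumed MLflow OTLP endpoint
) -> urllib.request.Request:
    # Compress the payload the way the collector's otlphttp exporter does by
    # default, and mark it with Content-Encoding: gzip so the server must
    # decompress it before parsing the protobuf.
    return urllib.request.Request(
        url,
        data=gzip.compress(body),
        headers={
            "Content-Type": "application/x-protobuf",
            "Content-Encoding": "gzip",
        },
        method="POST",
    )


if __name__ == "__main__":
    # On master this request fails because the body is never decompressed;
    # with this PR the server decompresses it before parsing.
    req = build_gzip_otlp_request(b"\x00")
    with urllib.request.urlopen(req) as resp:  # requires a running MLflow server
        print(resp.status)
```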
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.