Conversation
Documentation preview for f7e2f52 is available at: More info
@joelrobin18 we might want to think about creating a v2 autologging module for the scope of the breaking changes to simplify things here. We could do version validation handling (there are other autologging integrations where this has been done) to avoid complicating maintainability by embedding large amounts of try/catch or conditional logic within a single implementation.
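A version-gated dispatch like the one suggested above might be sketched as follows. The helper name `select_autolog_impl` and the `2.x` boundary are illustrative assumptions, not MLflow's actual implementation (which might use `packaging.version` for more robust comparisons):

```python
# Hypothetical sketch: route to a v1 or v2 autologging module based on the
# installed Agno version, keeping each implementation free of cross-version
# try/catch logic.

def select_autolog_impl(agno_version: str) -> str:
    """Pick the autologging module name for the given Agno version string."""
    # Naive major-version parse; real code might use packaging.version.Version.
    major = int(agno_version.split(".")[0])
    return "v2" if major >= 2 else "v1"
```

The public `autolog()` entry point could then import and delegate to the matching module once, at call time.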
Hi @BenWilson2, thank you for the feedback. I'm trying to refactor the code to use fewer try/catch blocks and to address the above comments as well.

Very much appreciated if we could fully integrate v2.
Hi @ashdam, could you please add the code below at the top of your agent and check?

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from openinference.instrumentation.agno import AgnoInstrumentor

# Configure OTLP to export to MLflow
exporter = OTLPSpanExporter(
    endpoint="http://localhost:5000/v1/traces",
    headers={"x-mlflow-experiment-id": "0"},
)
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)
AgnoInstrumentor().instrument()
```
@joelrobin18 I have tested it. I'm not really an expert on MLflow, but I'm currently using MLflow 3.6.0 OSS with a PostgreSQL backend and Agno v2.2.6. Agno is capturing calls, but it raises the following error in agno-os:
It looks like we are using FileStore as the backend here. Can you share a simple repro code for the same?
```python
_AUTOLOGGING_CLEANUP_CALLBACKS = {}


def register_cleanup_callback(autologging_integration, callback):
```
This is needed to clean up the OTel instrumentation after we disable it using `mlflow.autolog(disable=True)`. Let me know if there is a better way to do this.
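For context, a minimal sketch of how such a cleanup registry could work; the names mirror the diff above, but the bodies are assumptions rather than the PR's exact code:

```python
# Maps integration name -> teardown callable registered at instrumentation time.
_AUTOLOGGING_CLEANUP_CALLBACKS = {}


def register_cleanup_callback(autologging_integration, callback):
    # Remember how to undo the instrumentation
    # (e.g. AgnoInstrumentor().uninstrument).
    _AUTOLOGGING_CLEANUP_CALLBACKS[autologging_integration] = callback


def run_cleanup_callback(autologging_integration):
    # Invoked when mlflow.autolog(disable=True) turns the integration off;
    # popping makes repeated disables a no-op.
    callback = _AUTOLOGGING_CLEANUP_CALLBACKS.pop(autologging_integration, None)
    if callback is not None:
        callback()
```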
Does the approach we use in some flavors like DSPy work? https://github.com/mlflow/mlflow/blob/master/mlflow/dspy/autolog.py#L54-L60

Basically:
- Add an empty `_autolog` function.
- Decorate it with `@autologging_integration`, instead of the main `autolog` function.
- Call that function inside the main `autolog` function.

This is hacky, but this way we can let the `autolog()` function be called when `disable=True` is specified.
Hi @joelrobin18, thanks for looking into this! Yes, I'm definitely using PostgreSQL as the backend store, not FileStore. Here's the configuration:

MLflow Server Setup

Deployment: Azure Container Apps running MLflow v3.6.0

```shell
mlflow server \
  --host 0.0.0.0 \
  --port 5000 \
  --backend-store-uri postgresql://agnoadmin:****@psql-agno-storage.postgres.database.azure.com:5432/mlflow_db?sslmode=require \
  --default-artifact-root wasbs://mlflow-artifacts@saagenticaifinancedemo.blob.core.windows.net/ \
  --serve-artifacts
```

Verified Backend Configuration:

```shell
$ az containerapp show --name mlflow-server --query "properties.template.containers[0].env"
[
  {
    "name": "MLFLOW_BACKEND_STORE_URI",
    "secretRef": "postgres-uri"  # Points to PostgreSQL connection string
  },
  ...
]
```

Agno v2.2.6 Integration Code

Following your recommendation, I'm using the OpenInference AgnoInstrumentor:

```python
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from openinference.instrumentation.agno import AgnoInstrumentor

mlflow_tracking_uri = os.getenv("MLFLOW_TRACKING_URI")
mlflow_experiment_id = os.getenv("MLFLOW_EXPERIMENT_ID", "0")

exporter = OTLPSpanExporter(
    endpoint=f"{mlflow_tracking_uri}/v1/traces",
    headers={"x-mlflow-experiment-id": mlflow_experiment_id},
)
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)
AgnoInstrumentor().instrument()
```

Dependencies:

The Issue

✅ What's Working:

❌ The Error:

From Agent Execution Logs:

```
File "/app/.venv/lib/python3.12/site-packages/openinference/instrumentation/agno/_runs_wrapper.py", line 512, in arun_stream
    async for response in wrapped(*args, **kwargs):
File "/app/.venv/lib/python3.12/site-packages/agno/team/team.py", line 2452, in _arun_stream
    async for event in self._ahandle_model_response_stream(
...
File "/app/.venv/lib/python3.12/site-packages/openinference/instrumentation/agno/_model_wrapper.py", line 493, in arun_stream
    async for chunk in wrapped(*args, **kwargs):
```

The Paradox

MLflow is configured with a PostgreSQL backend, yet the error indicates a FileStore. Is this a known limitation, or is there a missing configuration flag for enabling database trace storage? I'm happy to provide a minimal reproducible repo if that helps debug this further!
Signed-off-by: joelrobin18 <joelrobin1818@gmail.com>
Thank you very much for your work @joelrobin18. This is highly anticipated in my company :)

@BenWilson2 @joelrobin18 any news? :)
mlflow/agno/autolog.py (Outdated)

```python
        _logger.info("OpenTelemetry instrumentation enabled for Agno V2")
    except ImportError as exc:
        _logger.warning(
```
Can we raise this as an exception (with the current message)? Enabling tracing is the single purpose of calling `mlflow.agno.autolog()`, so it does not make much sense to pass through if we fail to do that.
@ashdam I can see Agno traces logged successfully via OTel on my local machine; I could not reproduce the error. Could you double-check whether the tracking URI points to the correct MLflow instance? The error message indicates the backend is actually a file store. You can also test it locally to see whether this is related to your Azure Container Apps settings or not.
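One way to check locally, independent of the Azure setup (the port, the SQLite URI, and `your_agent.py` below are placeholders), is to start MLflow with an explicit SQL backend and point the agent's tracking URI at it:

```shell
# Start a local MLflow server backed by SQL (SqlAlchemyStore) rather than FileStore
mlflow server --backend-store-uri sqlite:///mlflow.db --host 127.0.0.1 --port 5000

# In another terminal, run the agent against the local server
MLFLOW_TRACKING_URI=http://127.0.0.1:5000 python your_agent.py
```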
Yes, we only have one MLflow server and it has PostgreSQL configured. It's really hard (DevOps + security) for me to replicate everything, including Agno, locally, tbh :(
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
@B-Step62 @joelrobin18 Thank you guys for your work :)
@ashdam I still believe what happens inside the app container is that MLflow is started with file store. The error message includes the class name of the store, as does this check:

mlflow/mlflow/server/otel_api.py, line 128 in 82edf51

If the store is properly configured to SQL, you should see server logs like this. One common gotcha is that multi-line commands are not properly formatted in the YAML file, so only the first line runs. For example, this works but this does not work (only the first line is executed).
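The gotcha described above can be illustrated with a generic container-spec-style YAML (the `command` key and the URIs are illustrative, not the actual Azure Container Apps schema). With a literal block scalar and no line continuations, the shell treats each line as a separate command, so `mlflow server` runs alone and falls back to the default FileStore backend:

```yaml
# Does not work: the flags on the following lines never reach "mlflow server",
# so it starts with the default (FileStore) backend
command: |
  mlflow server
  --backend-store-uri postgresql://user:pass@host:5432/mlflow_db
---
# Works: backslash continuations keep it a single command
command: |
  mlflow server \
    --backend-store-uri postgresql://user:pass@host:5432/mlflow_db
```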
Thank you for the help :) I will test it next week :D Thank you!
Signed-off-by: joelrobin18 <joelrobin1818@gmail.com> Signed-off-by: B-Step62 <yuki.watanabe@databricks.com> Co-authored-by: B-Step62 <yuki.watanabe@databricks.com>
Signed-off-by: joelrobin18 <joelrobin1818@gmail.com> Signed-off-by: B-Step62 <yuki.watanabe@databricks.com> Co-authored-by: B-Step62 <yuki.watanabe@databricks.com> Signed-off-by: Tian Lan <sky.blue266000@gmail.com>
🛠 DevTools 🛠
Install mlflow from this PR
For Databricks, use the following command:
Related Issues/PRs
Fix #18335
What changes are proposed in this pull request?
Agno v2 introduces several breaking changes, including full support for OpenTelemetry instrumentation. This PR updates the tracing implementation to be compatible with Agno v2 by using MLflow’s native integration with OTel-based tracing.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- `area/tracking`: Tracking Service, tracking client APIs, autologging
- `area/models`: MLmodel format, model serialization/deserialization, flavors
- `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- `area/evaluation`: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- `area/gateway`: MLflow AI Gateway client APIs, server, and third-party integrations
- `area/prompts`: MLflow prompt engineering features, prompt templates, and prompt management
- `area/tracing`: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- `area/projects`: MLproject format, project running backends
- `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- `area/build`: Build and test infrastructure for MLflow
- `area/docs`: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- `rn/none` - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- `rn/breaking-change` - The PR will be mentioned in the "Breaking Changes" section
- `rn/feature` - A new user-facing feature worth mentioning in the release notes
- `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes
- `rn/documentation` - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- `Yes` should be selected for bug fixes, documentation updates, and other small changes.
- `No` should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.