Conversation
Documentation preview for db24e91 is available.
| "json_dict": None, | ||
| "pydantic": None, | ||
| "raw": _LLM_ANSWER, | ||
| "tasks_output": [ |
crewAIInc/crewAI@6b52587 introduced a new `messages` field 4 days ago. We could wait for 1.4.2 and condition the value here, but I don't think we need an exact match here anyway.
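To make that concrete, here is a minimal sketch of a tolerant assertion (hypothetical: `result` stands for the task-output dict the test builds; this is not the code in the PR):

```python
# Sketch, not the PR's code: assert only the fields we care about, so a
# newly added field such as `messages` does not break an exact dict match.
expected = {"json_dict": None, "pydantic": None, "raw": _LLM_ANSWER}
assert expected.items() <= result.items()  # subset check ignores extra keys
```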
  assert len(traces) == 1
  assert traces[0].info.status == "OK"
- assert len(traces[0].data.spans) == 9
+ assert len(traces[0].data.spans) == (10 if _IS_CREWAI_V1 else 9)
What's the additional span? Shall we add an assertion for the new span?
It's another LLM call. I don't think adding an assertion has value here.
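For the record, if we ever did want to pin it down, a sketch that names the extra span rather than the bare total (assumptions: the extra v1 span is LLM-typed, the counts below are hypothetical, and `SpanType` comes from `mlflow.entities`):

```python
from mlflow.entities import SpanType

# Sketch with hypothetical counts: isolate LLM spans by type so the test
# documents that CrewAI v1 emits one more LLM call than v0.x.
llm_spans = [s for s in traces[0].data.spans if s.span_type == SpanType.LLM]
assert len(llm_spans) == (2 if _IS_CREWAI_V1 else 1)  # counts are assumed
```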
TomeHirata left a comment:
Left a comment, otherwise LGTM
tests/crewai/test_crewai_autolog.py (Outdated)

  _IS_CREWAI_V1 = Version(crewai.__version__) >= Version("1.0.0")

  @pytest.fixture
This fixture needs to be applied before other fixtures (e.g. the agent fixtures). It seems `autouse=True` does not guarantee that.
Interesting. Have you tried changes like this? I have, and all the tests passed:
diff --git a/tests/crewai/test_crewai_autolog.py b/tests/crewai/test_crewai_autolog.py
index 7ca71e7388..09e33f8423 100644
--- a/tests/crewai/test_crewai_autolog.py
+++ b/tests/crewai/test_crewai_autolog.py
@@ -22,7 +22,7 @@ _LLM_ANSWER = "What about Tokyo?"
_IS_CREWAI_V1 = Version(crewai.__version__).major >= 1
-@pytest.fixture
+@pytest.fixture(autouse=True)
def set_api_key(monkeypatch):
monkeypatch.setenv("OPENAI_API_KEY", "000")
@@ -130,7 +130,7 @@ _AGENT_1_BACKSTORY = "An expert in analyzing travel data to pick ideal destinati
@pytest.fixture
-def simple_agent_1(set_api_key):
+def simple_agent_1():
return Agent(
role="City Selection Expert",
goal=_AGENT_1_GOAL,
@@ -144,7 +144,7 @@ _AGENT_2_GOAL = "Provide the BEST insights about the selected city"
@pytest.fixture
-def simple_agent_2(set_api_key):
+def simple_agent_2():
return Agent(
role="Local Expert at this city",
goal=_AGENT_2_GOAL,
@@ -164,7 +164,7 @@ class SampleTool(BaseTool):
@pytest.fixture
-def tool_agent_1(set_api_key):
+def tool_agent_1():
return Agent(
role="City Selection Expert",
goal=_AGENT_1_GOAL,
Fixture execution order:
% uv run --with 'crewai,litellm' pytest --setup-plan tests/crewai/test_crewai_autolog.py::test_kickoff_enable_disable_autolog
...
tests/crewai/test_crewai_autolog.py::test_kickoff_enable_disable_autolog[autolog]
SETUP S event_loop_policy
SETUP S enable_mlflow_testing
SETUP S enable_test_mode_by_default_for_autologging_integrations
SETUP S tmp_path_factory
SETUP S serve_wheel (fixtures used: tmp_path_factory)
SETUP M clean_up_envs
SETUP F clean_up_last_active_run
SETUP F clean_up_last_logged_model_id
SETUP F clean_up_leaked_runs
SETUP F clean_up_mlruns_directory
SETUP F clean_up_telemetry_threads
SETUP F prevent_infer_pip_requirements_fallback
SETUP F reset_active_experiment_id
SETUP F reset_active_model_context
SETUP F reset_mlflow_uri
SETUP F reset_tracing
SETUP F tmp_path (fixtures used: tmp_path_factory)
SETUP F tracking_uri_mock (fixtures used: tmp_path)
SETUP F validate_trace_finish
SETUP F monkeypatch
SETUP F set_api_key (fixtures used: monkeypatch) 👈
SETUP F simple_agent_1 👈
SETUP F task_1 (fixtures used: simple_agent_1) 👈
SETUP F autolog[<function autolog at 0x14daab400>]
tests/crewai/test_crewai_autolog.py::test_kickoff_enable_disable_autolog[autolog]
Actually you're right. The order is not guaranteed among fixtures that are all marked autouse, but if no other fixture is marked autouse, the one that is runs first.
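A minimal illustration of that rule (hypothetical fixture bodies, not the real test file): within the same scope, pytest requests autouse fixtures before the non-autouse fixtures a test asks for.

```python
import pytest

order = []

@pytest.fixture(autouse=True)
def set_api_key(monkeypatch):
    # autouse: pytest requests this first within the function scope
    order.append("set_api_key")
    monkeypatch.setenv("OPENAI_API_KEY", "000")

@pytest.fixture
def simple_agent_1():
    # non-autouse: requested by the test, runs after the autouse fixture
    order.append("simple_agent_1")
    return object()  # stand-in for the real Agent

def test_order(simple_agent_1):
    assert order == ["set_api_key", "simple_agent_1"]
```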
/review ✅ Review completed.

Review Output

Now let me review the changes against the Python style guide and check for issues. Let me analyze the diff. Key changes in this PR:

Let me check the changes for style guide violations and potential issues. Issues found:

However, looking more carefully at the context and the nature of these changes:

Let me check if there are any actual bugs or style violations that need to be addressed. After careful review of the changes against the Python style guide, I found no significant issues. The changes are appropriate for handling CrewAI v1 compatibility.

The code follows the project's Python style guide and implements a pragmatic solution for supporting both CrewAI v0.x and v1.x versions. No issues found.

🤖 Generated with Claude Code
tests/crewai/test_crewai_autolog.py (Outdated)

  @pytest.fixture
  def set_api_key(monkeypatch):
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Signed-off-by: Jackie Himel <jacqueline.himel@vanderbilt.edu>
Signed-off-by: Tian Lan <sky.blue266000@gmail.com>
What changes are proposed in this pull request?
Fix https://github.com/mlflow/dev/actions/runs/19266812224/job/55084667648
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/tracking: Tracking Service, tracking client APIs, autologging
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/evaluation: MLflow model evaluation features, evaluation metrics, and evaluation workflows
- area/gateway: MLflow AI Gateway client APIs, server, and third-party integrations
- area/prompts: MLflow prompt engineering features, prompt templates, and prompt management
- area/tracing: MLflow Tracing features, tracing APIs, and LLM tracing functionality
- area/projects: MLproject format, project running backends
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes.

If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.