Change default for Python version from 3.8 to 3.9 #13896
MichaReiser merged 6 commits into astral-sh:ruff-0.8
Conversation
| code | total | + violation | - violation | + fix | - fix |
|---|---|---|---|---|---|
| UP006 | 145 | 145 | 0 | 0 | 0 |
| UP035 | 71 | 71 | 0 | 0 | 0 |
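The UP006/UP035 counts above are the direct consequence of the new default: with a 3.9 minimum, ruff can rewrite `typing.List`-style imports to the PEP 585 builtin generics, which are valid at runtime from Python 3.9. A minimal sketch of the before/after (hypothetical function names, not from any of the checked projects):

```python
from typing import List  # UP035: `typing.List` is deprecated, use `list` instead

def names_upper(names: List[str]) -> List[str]:  # UP006 flags `List` here
    return [n.upper() for n in names]

# With the 3.9 default, ruff proposes the builtin generic instead,
# which works directly as an annotation on Python 3.9+:
def names_upper_fixed(names: list[str]) -> list[str]:
    return [n.upper() for n in names]

print(names_upper_fixed(["a", "b"]))  # ['A', 'B']
```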
Linter (preview)
ℹ️ ecosystem check detected linter changes. (+8555 -8359 violations, +0 -66 fixes in 10 projects; 1 project error; 43 projects unchanged)
apache/airflow (+6176 -6179 violations, +0 -58 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL
- airflow/api/__init__.py:32:5: DOC201 `return` is not documented in docstring
- airflow/api/__init__.py:32:5: DOC501 Raised exception `AirflowException` missing from docstring
+ airflow/api/__init__.py:43:15: DOC501 Raised exception `AirflowException` missing from docstring
+ airflow/api/__init__.py:44:5: DOC201 `return` is not documented in docstring
- airflow/api/auth/backend/deny_all.py:38:5: DOC201 `return` is not documented in docstring
+ airflow/api/auth/backend/deny_all.py:44:5: DOC201 `return` is not documented in docstring
- airflow/api/client/__init__.py:27:5: DOC201 `return` is not documented in docstring
+ airflow/api/client/__init__.py:38:5: DOC201 `return` is not documented in docstring
... 9019 additional changes omitted for rule DOC201
- airflow/api/common/delete_dag.py:43:5: DOC501 Raised exception `AirflowException` missing from docstring
- airflow/api/common/delete_dag.py:43:5: DOC501 Raised exception `DagNotFound` missing from docstring
+ airflow/api/common/delete_dag.py:61:15: DOC501 Raised exception `AirflowException` missing from docstring
+ airflow/api/common/delete_dag.py:64:15: DOC501 Raised exception `DagNotFound` missing from docstring
... 2949 additional changes omitted for rule DOC501
- airflow/api/common/mark_tasks.py:186:5: DOC402 `yield` is not documented in docstring
+ airflow/api/common/mark_tasks.py:190:13: DOC402 `yield` is not documented in docstring
- airflow/api_fastapi/common/db/common.py:33:5: DOC402 `yield` is not documented in docstring
+ airflow/api_fastapi/common/db/common.py:47:9: DOC402 `yield` is not documented in docstring
+ airflow/assets/__init__.py:128:8: PLR1714 Consider merging multiple comparisons: `value in ("self", "context")`. Use a `set` if the elements are hashable.
- airflow/assets/__init__.py:128:8: PLR1714 Consider merging multiple comparisons: `value in {"self", "context"}`.
- airflow/assets/__init__.py:238:9: DOC402 `yield` is not documented in docstring
+ airflow/assets/__init__.py:243:9: DOC402 `yield` is not documented in docstring
... 329 additional changes omitted for rule DOC402
+ airflow/decorators/__init__.pyi:117:25: PYI041 Use `float` instead of `int | float`
- airflow/decorators/__init__.pyi:117:25: PYI041 [*] Use `float` instead of `int | float`
+ airflow/decorators/__init__.pyi:256:25: PYI041 Use `float` instead of `int | float`
- airflow/decorators/__init__.pyi:256:25: PYI041 [*] Use `float` instead of `int | float`
+ airflow/jobs/job.py:308:39: PYI041 Use `float` instead of `int | float`
- airflow/jobs/job.py:308:39: PYI041 [*] Use `float` instead of `int | float`
... 53 additional changes omitted for rule PYI041
- airflow/models/dag.py:1038:36: PYI061 `Literal[None, ...]` can be replaced with `Literal[...] | None`
- airflow/models/dagrun.py:1317:23: RUF038 `Literal[True, False]` can be replaced with `bool`
- airflow/models/dagrun.py:1439:23: RUF038 `Literal[True, False]` can be replaced with `bool`
+ airflow/www/decorators.py:55:27: PLR1714 Consider merging multiple comparisons: `k in ("val", "value")`. Use a `set` if the elements are hashable.
- airflow/www/decorators.py:55:27: PLR1714 Consider merging multiple comparisons: `k in {"val", "value"}`.
+ airflow/www/views.py:4340:21: PLR1714 Consider merging multiple comparisons: `parsed_url.scheme in ("http", "https")`. Use a `set` if the elements are hashable.
- airflow/www/views.py:4340:21: PLR1714 Consider merging multiple comparisons: `parsed_url.scheme in {"http", "https"}`.
... 35 additional changes omitted for rule PLR1714
... 12380 additional changes omitted for project
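Most of the DOC201/DOC501 churn above is line-number shifts, not new findings; the rules themselves ask that a docstring document its return value and raised exceptions. A sketch of what satisfies both, using a hypothetical function (not Airflow's actual `delete_dag` implementation):

```python
def delete_dag(dag_id: str) -> int:
    """Delete a DAG by id.

    Returns:
        int: The number of records removed (documents the return: DOC201).

    Raises:
        ValueError: If ``dag_id`` is empty (documents the raise: DOC501).
    """
    if not dag_id:
        raise ValueError("dag_id must be non-empty")
    return 1

print(delete_dag("example"))  # 1
```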
apache/superset (+1170 -1172 violations, +0 -8 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL
+ RELEASING/changelog.py:104:9: DOC201 `return` is not documented in docstring
- RELEASING/changelog.py:107:9: DOC201 `return` is not documented in docstring
+ RELEASING/changelog.py:113:13: DOC201 `return` is not documented in docstring
- RELEASING/changelog.py:52:9: DOC201 `return` is not documented in docstring
+ RELEASING/changelog.py:54:13: DOC201 `return` is not documented in docstring
- RELEASING/changelog.py:87:9: DOC201 `return` is not documented in docstring
... 1824 additional changes omitted for rule DOC201
- scripts/benchmark_migration.py:43:5: DOC501 Raised exception `Exception` missing from docstring
+ scripts/benchmark_migration.py:51:11: DOC501 Raised exception `Exception` missing from docstring
- scripts/cancel_github_workflows.py:162:5: DOC501 Raised exception `ClickException` missing from docstring
+ scripts/cancel_github_workflows.py:164:15: DOC501 Raised exception `ClickException` missing from docstring
... 2340 additional changes omitted for project
aws/aws-sam-cli (+1 -1 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ tests/integration/publish/publish_app_integ_base.py:60:16: PLR1714 Consider merging multiple comparisons: `f.suffix in (".yaml", ".json")`. Use a `set` if the elements are hashable.
- tests/integration/publish/publish_app_integ_base.py:60:16: PLR1714 Consider merging multiple comparisons: `f.suffix in {".yaml", ".json"}`.
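The PLR1714 pairs throughout this report are a suggestion-wording change only (tuple suggested, set mentioned as an option), not a new finding. A minimal sketch of what the rule itself flags and fixes:

```python
suffix = ".yaml"

# Flagged by PLR1714: two comparisons against the same value
if suffix == ".yaml" or suffix == ".json":
    kind_before = "config"

# Merged form per the suggestion; a set literal ({".yaml", ".json"})
# also works when the elements are hashable.
if suffix in (".yaml", ".json"):
    kind_after = "config"

print(kind_before == kind_after)  # True
```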
bokeh/bokeh (+407 -406 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL
+ examples/advanced/extensions/parallel_plot/parallel_plot.py:107:5: DOC201 `return` is not documented in docstring
- examples/advanced/extensions/parallel_plot/parallel_plot.py:15:5: DOC201 `return` is not documented in docstring
- examples/basic/data/server_sent_events_source.py:53:9: DOC402 `yield` is not documented in docstring
+ examples/basic/data/server_sent_events_source.py:60:13: DOC402 `yield` is not documented in docstring
- examples/interaction/js_callbacks/js_on_event.py:16:5: DOC201 `return` is not documented in docstring
+ examples/interaction/js_callbacks/js_on_event.py:21:5: DOC201 `return` is not documented in docstring
+ examples/models/daylight.py:83:12: DTZ901 Use of `datetime.datetime.min` without timezone information
- examples/models/gauges.py:33:5: DOC201 `return` is not documented in docstring
+ examples/models/gauges.py:34:5: DOC201 `return` is not documented in docstring
... 395 additional changes omitted for rule DOC201
- src/bokeh/__init__.py:63:5: DOC202 Docstring should not have a returns section because the function doesn't return anything
... 803 additional changes omitted for project
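The one genuinely new finding in this project is the DTZ901 hit: `datetime.min` is a naive sentinel, so comparing it against a timezone-aware datetime raises `TypeError`. A minimal sketch of the problem and the usual fix (attach a timezone explicitly):

```python
from datetime import datetime, timezone

# DTZ901: `datetime.min` carries no tzinfo. Comparing it with an aware
# datetime would raise TypeError, so make the sentinel aware as well.
aware_min = datetime.min.replace(tzinfo=timezone.utc)
now = datetime.now(timezone.utc)
print(now > aware_min)  # True
```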
latchbio/latch (+2 -3 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
- src/latch/types/metadata.py:500:45: PYI061 `Literal[None, ...]` can be replaced with `Literal[...] | None`
+ src/latch_cli/services/stop_pod.py:22:8: PLR1714 Consider merging multiple comparisons: `res.status_code in (403, 404)`. Use a `set` if the elements are hashable.
- src/latch_cli/services/stop_pod.py:22:8: PLR1714 Consider merging multiple comparisons: `res.status_code in {403, 404}`.
+ src/latch_cli/snakemake/single_task_snakemake.py:362:8: PLR1714 Consider merging multiple comparisons: `parsed.scheme not in ("", "docker")`. Use a `set` if the elements are hashable.
- src/latch_cli/snakemake/single_task_snakemake.py:362:8: PLR1714 Consider merging multiple comparisons: `parsed.scheme not in {"", "docker"}`.
lnbits/lnbits (+216 -0 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ lnbits/app.py:257:6: UP006 Use `list` instead of `List` for type annotation
+ lnbits/app.py:9:1: UP035 `typing.List` is deprecated, use `list` instead
+ lnbits/commands.py:235:23: UP006 Use `list` instead of `List` for type annotation
+ lnbits/commands.py:465:6: UP006 Use `tuple` instead of `Tuple` for type annotation
+ lnbits/commands.py:494:6: UP006 Use `tuple` instead of `Tuple` for type annotation
+ lnbits/commands.py:6:1: UP035 `typing.List` is deprecated, use `list` instead
+ lnbits/commands.py:6:1: UP035 `typing.Tuple` is deprecated, use `tuple` instead
+ lnbits/core/crud.py:1018:6: UP006 Use `list` instead of `List` for type annotation
+ lnbits/core/crud.py:1235:43: UP006 Use `list` instead of `List` for type annotation
... 140 additional changes omitted for rule UP006
+ lnbits/core/crud.py:4:1: UP035 `typing.Dict` is deprecated, use `dict` instead
... 206 additional changes omitted for project
pandas-dev/pandas (+0 -4 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
- pandas/core/groupby/groupby.py:4069:39: PYI061 `Literal[None, ...]` can be replaced with `Literal[...] | None`
- pandas/core/groupby/indexing.py:299:39: PYI061 `Literal[None, ...]` can be replaced with `Literal[...] | None`
- pandas/io/html.py:1027:28: PYI061 `Literal[None, ...]` can be replaced with `Literal[...] | None`
- pandas/io/html.py:223:32: PYI061 `Literal[None, ...]` can be replaced with `Literal[...] | None`
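For reference, PYI061 asks that `None` be pulled out of a `Literal` and expressed as an optional instead. A sketch with hypothetical function names (not pandas code):

```python
from __future__ import annotations

from typing import Literal, Optional

# PYI061 flags mixing the None sentinel into the literal value set:
def set_header(mode: Literal["strip", "keep", None]) -> str:
    return mode or "default"

# Preferred spelling: keep the Literal for real values and express
# absence separately, i.e. `Literal["strip", "keep"] | None`.
def set_header_fixed(mode: Optional[Literal["strip", "keep"]]) -> str:
    return mode or "default"

print(set_header_fixed(None))  # default
```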
python/typeshed (+2 -4 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select E,F,FA,I,PYI,RUF,UP,W
- stdlib/ast.pyi:1480:16: RUF038 `Literal[True, False]` can be replaced with `bool`
- stdlib/ast.pyi:1481:35: RUF038 `Literal[True, False]` can be replaced with `bool`
- stdlib/ast.pyi:1484:45: RUF038 `Literal[True, False]` can be replaced with `bool`
+ stdlib/random.pyi:45:31: PYI041 Use `float` instead of `int | float`
+ stdlib/random.pyi:52:27: PYI041 Use `float` instead of `int | float`
- stubs/pyxdg/xdg/Menu.pyi:97:11: RUF038 `Literal[True, False, ...]` can be replaced with `bool | Literal[...]`
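Both rules seen in the typeshed run simplify redundant annotations. PYI041 relies on the PEP 484 numeric tower (`int` is implicitly accepted wherever `float` is expected), and RUF038 notes that `Literal[True, False]` is exactly `bool`. A sketch with hypothetical functions:

```python
from typing import Literal, Union

# PYI041: `int | float` is redundant, since int values are accepted
# wherever float is annotated.
def midpoint_verbose(a: Union[int, float], b: Union[int, float]) -> float:
    return (a + b) / 2

def midpoint(a: float, b: float) -> float:  # preferred spelling
    return (a + b) / 2

# RUF038: `Literal[True, False]` is just `bool`.
def set_verbose(flag: Literal[True, False]) -> bool:
    return flag

def set_verbose_fixed(flag: bool) -> bool:  # preferred spelling
    return flag

print(midpoint(1, 3), set_verbose_fixed(True))  # 2.0 True
```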
zulip/zulip (+581 -589 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL
- analytics/lib/fixtures.py:19:5: DOC201 `return` is not documented in docstring
- analytics/lib/fixtures.py:19:5: DOC501 Raised exception `AssertionError` missing from docstring
+ analytics/lib/fixtures.py:56:15: DOC501 Raised exception `AssertionError` missing from docstring
+ analytics/lib/fixtures.py:77:5: DOC201 `return` is not documented in docstring
+ confirmation/models.py:125:5: DOC201 `return` is not documented in docstring
- confirmation/models.py:279:5: DOC201 `return` is not documented in docstring
+ confirmation/models.py:283:5: DOC201 `return` is not documented in docstring
- confirmation/models.py:298:5: DOC201 `return` is not documented in docstring
... 857 additional changes omitted for rule DOC201
- confirmation/models.py:298:5: DOC501 Raised exception `InvalidError` missing from docstring
+ confirmation/models.py:304:15: DOC501 Raised exception `InvalidError` missing from docstring
... 1160 additional changes omitted for project
... Truncated remaining completed project reports due to GitHub comment length restrictions
pypa/setuptools (error)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
ruff failed
Cause: Failed to parse /home/runner/work/ruff/ruff/checkouts/pypa:setuptools/ruff.toml
Cause: TOML parse error at line 8, column 1
|
8 | [lint]
| ^^^^^^
Unknown rule selector: `UP027`
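The setuptools failure is a configuration error rather than a crash in ruff itself: the project's `ruff.toml` selects `UP027`, which is not a known rule selector on this branch, so the config fails to load. A sketch of the failure mode, assuming a minimal config (the actual setuptools file differs, but its line 8 starts the `[lint]` table):

```toml
# ruff.toml (illustrative sketch, not the actual setuptools config)
[lint]
extend-select = ["UP027"]  # rejected: `UP027` is no longer a valid selector
```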
Changes by rule (14 rules affected)
| code | total | + violation | - violation | + fix | - fix |
|---|---|---|---|---|---|
| DOC201 | 12115 | 6057 | 6058 | 0 | 0 |
| DOC501 | 3924 | 1962 | 1962 | 0 | 0 |
| DOC402 | 408 | 204 | 204 | 0 | 0 |
| DOC202 | 158 | 79 | 79 | 0 | 0 |
| UP006 | 145 | 145 | 0 | 0 | 0 |
| UP035 | 71 | 71 | 0 | 0 | 0 |
| PYI041 | 68 | 2 | 0 | 0 | 66 |
| PLR1714 | 62 | 31 | 31 | 0 | 0 |
| PYI061 | 14 | 0 | 14 | 0 | 0 |
| RUF038 | 7 | 0 | 7 | 0 | 0 |
| DOC502 | 4 | 2 | 2 | 0 | 0 |
| DOC403 | 2 | 1 | 1 | 0 | 0 |
| DTZ901 | 1 | 1 | 0 | 0 | 0 |
| RUF100 | 1 | 0 | 1 | 0 | 0 |
Formatter (stable)
ℹ️ ecosystem check detected format changes. (+111 -85 lines in 16 files in 4 projects; 1 project error; 49 projects unchanged)
aws/aws-sam-cli (+34 -30 lines across 4 files)
tests/integration/pipeline/test_init_command.py~L98
self.assertEqual(init_process_execute.process.returncode, 0)
- with open(EXPECTED_JENKINS_FILE_PATH, "r") as expected, open(
- os.path.join(".aws-sam", "pipeline", "generated-files", "Jenkinsfile"), "r"
- ) as output:
+ with (
+ open(EXPECTED_JENKINS_FILE_PATH, "r") as expected,
+ open(os.path.join(".aws-sam", "pipeline", "generated-files", "Jenkinsfile"), "r") as output,
+ ):
self.assertEqual(expected.read(), output.read())
# also check the Jenkinsfile is not overridden
tests/unit/commands/samconfig/test_samconfig.py~L1066
}
# NOTE: Because we don't load the full Click BaseCommand here, this is mounted as top-level command
- with samconfig_parameters(
- ["start-lambda"], self.scratch_dir, **config_values
- ) as config_path, tempfile.NamedTemporaryFile() as key_file, tempfile.NamedTemporaryFile() as cert_file:
+ with (
+ samconfig_parameters(["start-lambda"], self.scratch_dir, **config_values) as config_path,
+ tempfile.NamedTemporaryFile() as key_file,
+ tempfile.NamedTemporaryFile() as cert_file,
+ ):
from samcli.commands.local.start_lambda.cli import cli
LOG.debug(Path(config_path).read_text())
tests/unit/commands/samconfig/test_samconfig.py~L1171
}
# NOTE: Because we don't load the full Click BaseCommand here, this is mounted as top-level command
- with samconfig_parameters(
- ["start-lambda"], self.scratch_dir, **config_values
- ) as config_path, tempfile.NamedTemporaryFile() as key_file, tempfile.NamedTemporaryFile() as cert_file:
+ with (
+ samconfig_parameters(["start-lambda"], self.scratch_dir, **config_values) as config_path,
+ tempfile.NamedTemporaryFile() as key_file,
+ tempfile.NamedTemporaryFile() as cert_file,
+ ):
from samcli.commands.local.start_lambda.cli import cli
LOG.debug(Path(config_path).read_text())
tests/unit/lib/build_module/test_build_strategy.py~L747
def test_will_call_incremental_build_strategy(self, mocked_read, mocked_write, runtime):
build_definition = FunctionBuildDefinition(runtime, "codeuri", None, "package_type", X86_64, {}, "handler")
self.build_graph.put_function_build_definition(build_definition, Mock(full_path="function_full_path"))
- with patch.object(
- self.build_strategy, "_incremental_build_strategy"
- ) as patched_incremental_build_strategy, patch.object(
- self.build_strategy, "_cached_build_strategy"
- ) as patched_cached_build_strategy:
+ with (
+ patch.object(self.build_strategy, "_incremental_build_strategy") as patched_incremental_build_strategy,
+ patch.object(self.build_strategy, "_cached_build_strategy") as patched_cached_build_strategy,
+ ):
self.build_strategy.build()
patched_incremental_build_strategy.build_single_function_definition.assert_called_with(build_definition)
tests/unit/lib/build_module/test_build_strategy.py~L767
def test_will_call_cached_build_strategy(self, mocked_read, mocked_write, runtime):
build_definition = FunctionBuildDefinition(runtime, "codeuri", None, "package_type", X86_64, {}, "handler")
self.build_graph.put_function_build_definition(build_definition, Mock(full_path="function_full_path"))
- with patch.object(
- self.build_strategy, "_incremental_build_strategy"
- ) as patched_incremental_build_strategy, patch.object(
- self.build_strategy, "_cached_build_strategy"
- ) as patched_cached_build_strategy:
+ with (
+ patch.object(self.build_strategy, "_incremental_build_strategy") as patched_incremental_build_strategy,
+ patch.object(self.build_strategy, "_cached_build_strategy") as patched_cached_build_strategy,
+ ):
self.build_strategy.build()
patched_cached_build_strategy.build_single_function_definition.assert_called_with(build_definition)
tests/unit/lib/build_module/test_build_strategy.py~L841
build_definition = FunctionBuildDefinition(runtime, "codeuri", None, "package_type", X86_64, {}, "handler")
self.build_graph.put_function_build_definition(build_definition, Mock(full_path="function_full_path"))
- with patch.object(
- build_strategy, "_incremental_build_strategy"
- ) as patched_incremental_build_strategy, patch.object(
- build_strategy, "_cached_build_strategy"
- ) as patched_cached_build_strategy:
+ with (
+ patch.object(build_strategy, "_incremental_build_strategy") as patched_incremental_build_strategy,
+ patch.object(build_strategy, "_cached_build_strategy") as patched_cached_build_strategy,
+ ):
build_strategy.build()
if not use_container:
tests/unit/lib/remote_invoke/test_remote_invoke_executors.py~L79
given_output_format = "text"
test_execution_info = RemoteInvokeExecutionInfo(given_payload, None, given_parameters, given_output_format)
- with patch.object(self.boto_action_executor, "_execute_action") as patched_execute_action, patch.object(
- self.boto_action_executor, "_execute_action_file"
- ) as patched_execute_action_file:
+ with (
+ patch.object(self.boto_action_executor, "_execute_action") as patched_execute_action,
+ patch.object(self.boto_action_executor, "_execute_action_file") as patched_execute_action_file,
+ ):
given_result = Mock()
patched_execute_action.return_value = given_result
tests/unit/lib/remote_invoke/test_remote_invoke_executors.py~L96
given_output_format = "json"
test_execution_info = RemoteInvokeExecutionInfo(None, given_payload_file, given_parameters, given_output_format)
- with patch.object(self.boto_action_executor, "_execute_action") as patched_execute_action, patch.object(
- self.boto_action_executor, "_execute_action_file"
- ) as patched_execute_action_file:
+ with (
+ patch.object(self.boto_action_executor, "_execute_action") as patched_execute_action,
+ patch.object(self.boto_action_executor, "_execute_action_file") as patched_execute_action_file,
+ ):
given_result = Mock()
patched_execute_action_file.return_value = given_result
langchain-ai/langchain (+32 -23 lines across 5 files)
libs/community/tests/unit_tests/document_loaders/test_mongodb.py~L50
mock_collection.find = mock_find
mock_collection.count_documents = mock_count_documents
- with patch(
- "motor.motor_asyncio.AsyncIOMotorClient", return_value=MagicMock()
- ), patch(
- "langchain_community.document_loaders.mongodb.MongodbLoader.aload",
- new=mock_async_load,
+ with (
+ patch("motor.motor_asyncio.AsyncIOMotorClient", return_value=MagicMock()),
+ patch(
+ "langchain_community.document_loaders.mongodb.MongodbLoader.aload",
+ new=mock_async_load,
+ ),
):
loader = MongodbLoader(
"mongodb://localhost:27017",
libs/community/tests/unit_tests/tools/audio/test_tools.py~L44
def test_huggingface_tts_run_with_requests_mock() -> None:
os.environ["HUGGINGFACE_API_KEY"] = "foo"
- with tempfile.TemporaryDirectory() as tmp_dir, patch(
- "uuid.uuid4"
- ) as mock_uuid, patch("requests.post") as mock_inference, patch(
- "builtins.open", mock_open()
- ) as mock_file:
+ with (
+ tempfile.TemporaryDirectory() as tmp_dir,
+ patch("uuid.uuid4") as mock_uuid,
+ patch("requests.post") as mock_inference,
+ patch("builtins.open", mock_open()) as mock_file,
+ ):
input_query = "Dummy input"
mock_uuid_value = uuid.UUID("00000000-0000-0000-0000-000000000000")
libs/community/tests/unit_tests/vectorstores/test_azure_search.py~L220
]
ids_provided = [i.metadata.get("id") for i in documents]
- with patch.object(
- SearchClient, "upload_documents", mock_upload_documents
- ), patch.object(SearchIndexClient, "get_index", mock_default_index):
+ with (
+ patch.object(SearchClient, "upload_documents", mock_upload_documents),
+ patch.object(SearchIndexClient, "get_index", mock_default_index),
+ ):
vector_store = create_vector_store()
ids_used_at_upload = vector_store.add_documents(documents, ids=ids_provided)
assert len(ids_provided) == len(ids_used_at_upload)
libs/langchain/tests/unit_tests/smith/evaluation/test_runner_utils.py~L316
proj.id = "123"
return proj
- with mock.patch.object(
- Client, "read_dataset", new=mock_read_dataset
- ), mock.patch.object(Client, "list_examples", new=mock_list_examples), mock.patch(
- "langchain.smith.evaluation.runner_utils._arun_llm_or_chain",
- new=mock_arun_chain,
- ), mock.patch.object(Client, "create_project", new=mock_create_project):
+ with (
+ mock.patch.object(Client, "read_dataset", new=mock_read_dataset),
+ mock.patch.object(Client, "list_examples", new=mock_list_examples),
+ mock.patch(
+ "langchain.smith.evaluation.runner_utils._arun_llm_or_chain",
+ new=mock_arun_chain,
+ ),
+ mock.patch.object(Client, "create_project", new=mock_create_project),
+ ):
client = Client(api_url="http://localhost:1984", api_key="123")
chain = mock.MagicMock()
chain.input_keys = ["foothing"]libs/partners/huggingface/tests/unit_tests/test_chat_models.py~L231
def test_bind_tools(chat_hugging_face: Any) -> None:
tools = [MagicMock(spec=BaseTool)]
- with patch(
- "langchain_huggingface.chat_models.huggingface.convert_to_openai_tool",
- side_effect=lambda x: x,
- ), patch("langchain_core.runnables.base.Runnable.bind") as mock_super_bind:
+ with (
+ patch(
+ "langchain_huggingface.chat_models.huggingface.convert_to_openai_tool",
+ side_effect=lambda x: x,
+ ),
+ patch("langchain_core.runnables.base.Runnable.bind") as mock_super_bind,
+ ):
chat_hugging_face.bind_tools(tools, tool_choice="auto")
mock_super_bind.assert_called_once()
_, kwargs = mock_super_bind.call_args
prefecthq/prefect (+38 -27 lines across 5 files)
src/integrations/prefect-dbt/prefect_dbt/cloud/jobs.py~L752
run_status = DbtCloudJobRunStatus(run_data.get("status"))
if run_status == DbtCloudJobRunStatus.SUCCESS:
try:
- async with self._dbt_cloud_credentials.get_administrative_client() as client: # noqa
+ async with (
+ self._dbt_cloud_credentials.get_administrative_client() as client
+ ): # noqa
response = await client.list_run_artifacts(
run_id=self.run_id, step=step
)
tests/runner/test_webserver.py~L151
webserver = await build_server(runner)
client = TestClient(webserver)
- with mock.patch(
- "prefect.runner.server.get_client", new=mock_get_client
- ), mock.patch.object(runner, "execute_in_background"):
+ with (
+ mock.patch("prefect.runner.server.get_client", new=mock_get_client),
+ mock.patch.object(runner, "execute_in_background"),
+ ):
with client:
response = client.post(f"/deployment/{deployment_id}/run")
assert response.status_code == 201, response.json()
tests/server/orchestration/api/test_task_run_subscriptions.py~L326
)
await queue.put(task_run)
- with patch("asyncio.sleep", return_value=None), pytest.raises(
- asyncio.TimeoutError
+ with (
+ patch("asyncio.sleep", return_value=None),
+ pytest.raises(asyncio.TimeoutError),
):
extra_task_run = ServerTaskRun(
id=uuid4(),tests/server/orchestration/api/test_task_run_subscriptions.py~L356
)
await queue.retry(task_run)
- with patch("asyncio.sleep", return_value=None), pytest.raises(
- asyncio.TimeoutError
+ with (
+ patch("asyncio.sleep", return_value=None),
+ pytest.raises(asyncio.TimeoutError),
):
extra_task_run = ServerTaskRun(
id=uuid4(),
tests/test_task_worker.py~L106
async def test_handle_sigterm(mock_create_subscription):
task_worker = TaskWorker(...)
- with patch("sys.exit") as mock_exit, patch.object(
- task_worker, "stop", new_callable=AsyncMock
- ) as mock_stop:
+ with (
+ patch("sys.exit") as mock_exit,
+ patch.object(task_worker, "stop", new_callable=AsyncMock) as mock_stop,
+ ):
await task_worker.start()
mock_create_subscription.assert_called_once()
tests/test_task_worker.py~L120
async def test_task_worker_client_id_is_set():
- with patch("socket.gethostname", return_value="foo"), patch(
- "os.getpid", return_value=42
+ with (
+ patch("socket.gethostname", return_value="foo"),
+ patch("os.getpid", return_value=42),
):
task_worker = TaskWorker(...)
task_worker._client = MagicMock(api_url="http://localhost:4200")
tests/workers/test_base_worker.py~L1905
):
async with WorkerTestImpl(work_pool_name=work_pool.name) as worker:
await worker.start(run_once=True)
- with mock.patch(
- "prefect.workers.base.load_prefect_collections"
- ) as mock_load_prefect_collections, mock.patch(
- "prefect.client.orchestration.PrefectHttpxAsyncClient.post"
- ) as mock_send_worker_heartbeat_post, mock.patch(
- "prefect.workers.base.distributions"
- ) as mock_distributions:
+ with (
+ mock.patch(
+ "prefect.workers.base.load_prefect_collections"
+ ) as mock_load_prefect_collections,
+ mock.patch(
+ "prefect.client.orchestration.PrefectHttpxAsyncClient.post"
+ ) as mock_send_worker_heartbeat_post,
+ mock.patch("prefect.workers.base.distributions") as mock_distributions,
+ ):
mock_load_prefect_collections.return_value = {
"prefect_aws": "1.0.0",
}tests/workers/test_base_worker.py~L1963
async with CustomWorker(work_pool_name=work_pool.name) as worker:
await worker.start(run_once=True)
- with mock.patch(
- "prefect.workers.base.load_prefect_collections"
- ) as mock_load_prefect_collections, mock.patch(
- "prefect.client.orchestration.PrefectHttpxAsyncClient.post"
- ) as mock_send_worker_heartbeat_post, mock.patch(
- "prefect.workers.base.distributions"
- ) as mock_distributions:
+ with (
+ mock.patch(
+ "prefect.workers.base.load_prefect_collections"
+ ) as mock_load_prefect_collections,
+ mock.patch(
+ "prefect.client.orchestration.PrefectHttpxAsyncClient.post"
+ ) as mock_send_worker_heartbeat_post,
+ mock.patch("prefect.workers.base.distributions") as mock_distributions,
+ ):
mock_load_prefect_collections.return_value = {
"prefect_aws": "1.0.0",
}yandex/ch-backup (+7 -5 lines across 2 files)
tests/unit/test_backup_tables.py~L65
read_bytes_mock = Mock(return_value=creation_statement.encode())
# Backup table
- with patch("os.path.getmtime", side_effect=mtime), patch(
- "ch_backup.logic.table.Path", read_bytes=read_bytes_mock
+ with (
+ patch("os.path.getmtime", side_effect=mtime),
+ patch("ch_backup.logic.table.Path", read_bytes=read_bytes_mock),
):
table_backup.backup(
context,
tests/unit/test_pipeline.py~L164
forward_file_path, backward_file_name, read_conf, encrypt_conf, write_conf
)
- with open(original_file_path, "rb") as orig_fobj, open(
- backward_file_name, "rb"
- ) as res_fobj:
+ with (
+ open(original_file_path, "rb") as orig_fobj,
+ open(backward_file_name, "rb") as res_fobj,
+ ):
orig_contents = orig_fobj.read()
res_contents = res_fobj.read()
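Every formatter diff above follows the same shape: with 3.9 as the default minimum, ruff's formatter groups multiple context managers under a single parenthesized `with` (accepted by CPython's parser from 3.9 onward) instead of breaking inside call parentheses. A minimal runnable sketch of the new style:

```python
import tempfile

# Old 3.8 target: parentheses directly after `with` were unavailable, so
# the formatter broke lines inside each call's parentheses instead.
# New 3.9+ target: one context manager per line, with a trailing comma.
with (
    tempfile.TemporaryFile() as expected,
    tempfile.TemporaryFile() as output,
):
    expected.write(b"data")
    output.write(b"data")
    sizes = (expected.tell(), output.tell())

print(sizes)  # (4, 4)
```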
pypa/setuptools (error)
ruff failed
Cause: Failed to parse /home/runner/work/ruff/ruff/checkouts/pypa:setuptools/ruff.toml
Cause: TOML parse error at line 8, column 1
|
8 | [lint]
| ^^^^^^
Unknown rule selector: `UP027`
Formatter (preview)
ℹ️ ecosystem check detected format changes. (+110 -86 lines in 16 files in 4 projects; 1 project error; 49 projects unchanged)
aws/aws-sam-cli (+34 -30 lines across 4 files)
ruff format --preview
tests/integration/pipeline/test_init_command.py~L98
self.assertEqual(init_process_execute.process.returncode, 0)
- with open(EXPECTED_JENKINS_FILE_PATH, "r") as expected, open(
- os.path.join(".aws-sam", "pipeline", "generated-files", "Jenkinsfile"), "r"
- ) as output:
+ with (
+ open(EXPECTED_JENKINS_FILE_PATH, "r") as expected,
+ open(os.path.join(".aws-sam", "pipeline", "generated-files", "Jenkinsfile"), "r") as output,
+ ):
self.assertEqual(expected.read(), output.read())
# also check the Jenkinsfile is not overridden
tests/unit/commands/samconfig/test_samconfig.py~L1066
}
# NOTE: Because we don't load the full Click BaseCommand here, this is mounted as top-level command
- with samconfig_parameters(
- ["start-lambda"], self.scratch_dir, **config_values
- ) as config_path, tempfile.NamedTemporaryFile() as key_file, tempfile.NamedTemporaryFile() as cert_file:
+ with (
+ samconfig_parameters(["start-lambda"], self.scratch_dir, **config_values) as config_path,
+ tempfile.NamedTemporaryFile() as key_file,
+ tempfile.NamedTemporaryFile() as cert_file,
+ ):
from samcli.commands.local.start_lambda.cli import cli
LOG.debug(Path(config_path).read_text())
tests/unit/commands/samconfig/test_samconfig.py~L1171
}
# NOTE: Because we don't load the full Click BaseCommand here, this is mounted as top-level command
- with samconfig_parameters(
- ["start-lambda"], self.scratch_dir, **config_values
- ) as config_path, tempfile.NamedTemporaryFile() as key_file, tempfile.NamedTemporaryFile() as cert_file:
+ with (
+ samconfig_parameters(["start-lambda"], self.scratch_dir, **config_values) as config_path,
+ tempfile.NamedTemporaryFile() as key_file,
+ tempfile.NamedTemporaryFile() as cert_file,
+ ):
from samcli.commands.local.start_lambda.cli import cli
LOG.debug(Path(config_path).read_text())
tests/unit/lib/build_module/test_build_strategy.py~L723
def test_will_call_incremental_build_strategy(self, mocked_read, mocked_write, runtime):
build_definition = FunctionBuildDefinition(runtime, "codeuri", None, "package_type", X86_64, {}, "handler")
self.build_graph.put_function_build_definition(build_definition, Mock(full_path="function_full_path"))
- with patch.object(
- self.build_strategy, "_incremental_build_strategy"
- ) as patched_incremental_build_strategy, patch.object(
- self.build_strategy, "_cached_build_strategy"
- ) as patched_cached_build_strategy:
+ with (
+ patch.object(self.build_strategy, "_incremental_build_strategy") as patched_incremental_build_strategy,
+ patch.object(self.build_strategy, "_cached_build_strategy") as patched_cached_build_strategy,
+ ):
self.build_strategy.build()
patched_incremental_build_strategy.build_single_function_definition.assert_called_with(build_definition)
tests/unit/lib/build_module/test_build_strategy.py~L741
def test_will_call_cached_build_strategy(self, mocked_read, mocked_write, runtime):
build_definition = FunctionBuildDefinition(runtime, "codeuri", None, "package_type", X86_64, {}, "handler")
self.build_graph.put_function_build_definition(build_definition, Mock(full_path="function_full_path"))
- with patch.object(
- self.build_strategy, "_incremental_build_strategy"
- ) as patched_incremental_build_strategy, patch.object(
- self.build_strategy, "_cached_build_strategy"
- ) as patched_cached_build_strategy:
+ with (
+ patch.object(self.build_strategy, "_incremental_build_strategy") as patched_incremental_build_strategy,
+ patch.object(self.build_strategy, "_cached_build_strategy") as patched_cached_build_strategy,
+ ):
self.build_strategy.build()
patched_cached_build_strategy.build_single_function_definition.assert_called_with(build_definition)
tests/unit/lib/build_module/test_build_strategy.py~L813
build_definition = FunctionBuildDefinition(runtime, "codeuri", None, "package_type", X86_64, {}, "handler")
self.build_graph.put_function_build_definition(build_definition, Mock(full_path="function_full_path"))
- with patch.object(
- build_strategy, "_incremental_build_strategy"
- ) as patched_incremental_build_strategy, patch.object(
- build_strategy, "_cached_build_strategy"
- ) as patched_cached_build_strategy:
+ with (
+ patch.object(build_strategy, "_incremental_build_strategy") as patched_incremental_build_strategy,
+ patch.object(build_strategy, "_cached_build_strategy") as patched_cached_build_strategy,
+ ):
build_strategy.build()
if not use_container:
tests/unit/lib/remote_invoke/test_remote_invoke_executors.py~L79
given_output_format = "text"
test_execution_info = RemoteInvokeExecutionInfo(given_payload, None, given_parameters, given_output_format)
- with patch.object(self.boto_action_executor, "_execute_action") as patched_execute_action, patch.object(
- self.boto_action_executor, "_execute_action_file"
- ) as patched_execute_action_file:
+ with (
+ patch.object(self.boto_action_executor, "_execute_action") as patched_execute_action,
+ patch.object(self.boto_action_executor, "_execute_action_file") as patched_execute_action_file,
+ ):
given_result = Mock()
patched_execute_action.return_value = given_result
tests/unit/lib/remote_invoke/test_remote_invoke_executors.py~L96
given_output_format = "json"
...*[Comment body truncated]*
MichaReiser left a comment:
Thanks. This looks good to me.
I think we want to preserve two formatter tests for now and there's one change that I think we can revert.
We have to wait to merge this until our next minor release.
crates/ruff_linter/src/rules/pyupgrade/rules/outdated_version_block.rs
...ruff_python_formatter/tests/snapshots/black_compatibility@cases__context_managers_38.py.snap
Force-pushed 41b4426 to 57cd678
Force-pushed 57cd678 to d89c5ff
Force-pushed e59cad6 to 75e561d
I updated the black test import script to allow overriding the test options. I also removed black tests that no longer exist upstream (because they were moved/renamed). I removed them because I tried to override the settings for them and that didn't work... because they no longer exist.
What's important for the changelog is to point out that this can result in new violations and formatter changes.
Co-authored-by: Micha Reiser <micha@reiser.io>
`ruff` 0.8.0 (released 2024-11-22) no longer defaults to supporting Python 3.8:

> Ruff now defaults to Python 3.9 instead of 3.8 if no explicit Python version
> is configured using [`ruff.target-version`](https://docs.astral.sh/ruff/settings/#target-version)
> or [`project.requires-python`](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#python-requires)
> (astral-sh/ruff#13896)
>
> — https://github.com/astral-sh/ruff/blob/f3dac27e9aa6ac6a20fc2fb27ff2e4f5d369b076/CHANGELOG.md#080

We want to support Python 3.8 until February 2025, so we need to set `target-version`:

> The minimum Python version to target, e.g., when considering automatic code
> upgrades, like rewriting type annotations. Ruff will not propose changes
> using features that are not available in the given version.
>
> — https://docs.astral.sh/ruff/settings/#target-version
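For projects that must keep 3.8 behavior on Ruff 0.8+, pinning the version explicitly is enough; a minimal sketch (the table name depends on whether the setting lives in `pyproject.toml`, as here, or in a standalone `ruff.toml`, where it is top-level):

```toml
# pyproject.toml
[tool.ruff]
target-version = "py38"
```

Alternatively, a `requires-python = ">=3.8"` entry under `[project]` is respected when no explicit `target-version` is set.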
Summary
Update default Python version from 3.8 to 3.9.
(Helps resolve #13786.)
Test Plan