Conversation
Nice, thanks @kenodegard!

CodSpeed Performance Report: Merging #25 will not alter performance.
art049 left a comment:
Thank you, agree entirely, we need this :)
No problem with the ruff bump and config improvement; love it!
LMK what you think about the details
src/pytest_codspeed/plugin.py (Outdated)

```python
plugin.lib,
item.nodeid,
item.config,
lambda: ihook.pytest_runtest_call(item=item),
```
In the current state this might introduce variance in the code we measure since we'll also measure the pytest hook.
I think it would be better to do it the other way around: have pytest_runtest_call call _run_with_instrumentation
One way to do it might be to override `runtest`, but don't hesitate if you have another idea!
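A rough sketch of that idea (the item class and `_run_with_instrumentation` here are simplified stand-ins, not the plugin's actual API): the item's own `runtest` is replaced so that only the test body, not any surrounding pytest hook machinery, sits inside the instrumented region.

```python
# Illustrative only: FakeFunctionItem stands in for a pytest Function item,
# and _run_with_instrumentation for the plugin's measurement wrapper.
calls = []

def _run_with_instrumentation(fn):
    calls.append("start_instrumentation")
    try:
        return fn()
    finally:
        calls.append("stop_instrumentation")

class FakeFunctionItem:
    def runtest(self):
        calls.append("test body")

item = FakeFunctionItem()
original_runtest = item.runtest
# Override runtest so instrumentation brackets exactly the test body
item.runtest = lambda: _run_with_instrumentation(original_runtest)

item.runtest()
```

This way the measured region contains only the test body, so hook overhead can't add variance.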
After digging some more, I found we can use `pytest_pyfunc_call` instead of `pytest_runtest_protocol` (`runtest` invokes this hook, see https://github.com/pytest-dev/pytest/blob/5bb1363435a8cb3e2010505dbeb1e015c36beed6/src/_pytest/python.py#L1762-L1764).
I got the tests to pass locally, but I may be missing something, so let me know if you don't think this is the right thing to do.
@art049 anything more to do here?
Co-authored-by: Edgar Ramírez Mondragón <16805946+edgarrmondragon@users.noreply.github.com>
It's a bit annoying that the formatter changes and the logic changes ended up in the same PR.
Since you're touching the core of the plugin, I have some additional feedback just to make sure the behavior stays the same.
Really appreciate you refactoring the test protocol thing! 🔥
Edit: thanks for splitting the formatting changes into #29!
```python
if (
    plugin.is_codspeed_enabled
    and plugin.lib is not None
    and plugin.should_measure
):
    return wrap_pyfunc_with_instrumentation(
        plugin.lib,
        self._request.node.nodeid,
        config,
        func,
    )(*args, **kwargs)
```
When used with the fixture, the fixture payload shouldn't be wrapped again within `pyfunc_call`, since it's already wrapped by the test itself.
For example, this would probably fail because of the warmup:
```python
# This doesn't have any pytest marker but defines a bench since it's using the fixture
def test_bench(benchmark):
    # ... some preprocessing
    called_once = False

    @benchmark
    def _():
        nonlocal called_once
        if not called_once:
            called_once = True
        else:
            raise Exception("bench code was called twice but actual bench context only once")
```
Don't hesitate to add that as an additional test
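To make the failure mode concrete, here is a self-contained sketch (plain Python, no pytest; `wrap_with_warmup` is a hypothetical stand-in for the instrumentation wrapper, not the plugin's real code) of why a warmup call inside the wrapper trips the guard above:

```python
def wrap_with_warmup(fn):
    # Hypothetical stand-in: the wrapper runs the benched function once to
    # warm the CPython performance map cache, then once for the measurement.
    def wrapped():
        fn()          # warmup run
        return fn()   # instrumented run
    return wrapped

called_once = False

def bench_body():
    global called_once
    if not called_once:
        called_once = True
    else:
        raise Exception("bench code was called twice but actual bench context only once")

error = None
try:
    wrap_with_warmup(bench_body)()
except Exception as exc:
    error = exc  # the second (instrumented) call trips the guard
```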
I started to get really confused about the intention here, since the above example also fails on master; see #30.
I suspect we need to do something similar to what pytest-rerunfailures does to clear the cached results between the cache priming run and the instrumented run: https://github.com/pytest-dev/pytest-rerunfailures/blob/a53b9344c0d7a491a3cc53d91c7319696651d21b/src/pytest_rerunfailures.py#L565-L567
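A hedged sketch of that pytest-rerunfailures approach (`FakeFixtureDef` is illustrative only; the real objects are pytest's internal `FixtureDef`s reachable via the item's fixture info): between the cache-priming run and the instrumented run, each fixture def's `cached_result` is dropped so fixtures are set up fresh.

```python
# Stand-in for pytest's internal FixtureDef, which caches its result
# in a `cached_result` attribute after the first setup.
class FakeFixtureDef:
    def __init__(self):
        self.cached_result = ("some value", 0, None)

def clear_cached_results(name2fixturedefs):
    # Mirrors the rerunfailures idea: wipe every cached fixture result
    # so the next run re-executes fixture setup from scratch.
    for fixture_defs in name2fixturedefs.values():
        for fixture_def in fixture_defs:
            if getattr(fixture_def, "cached_result", None) is not None:
                fixture_def.cached_result = None

name2fixturedefs = {"benchmark": [FakeFixtureDef()], "tmp_path": [FakeFixtureDef()]}
clear_cached_results(name2fixturedefs)
```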
```python
@pytest.hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem: pytest.Function) -> Iterator[None]:
    plugin = get_plugin(pyfuncitem.config)
    if (
        plugin.is_codspeed_enabled
        and should_benchmark_item(pyfuncitem)
        and not has_benchmark_fixture(pyfuncitem)
    ):
        plugin.benchmark_count += 1
        if plugin.lib is not None and plugin.should_measure:
            pyfuncitem.obj = wrap_pyfunc_with_instrumentation(
                plugin.lib,
                pyfuncitem.nodeid,
                pyfuncitem.config,
                pyfuncitem.obj,
            )
    yield
```
Love this refactor!
However, when a benchmark is defined from a fixture, we should still perform the warmup at this level and not in the benchmark fixture (see the other comment I left on it).
```python
if SUPPORTS_PERF_TRAMPOLINE:
    # Warmup CPython performance map cache
    __codspeed_root_frame__()
```
I think the warmup shouldn't be included in the wrapper but handled at the protocol level
Ah, this also explains the issue we ran into on our own trying to use tmp_path in benchmark tests, e.g.:
```python
@pytest.mark.benchmark
def test_tmp_path_benchmark(tmp_path: Path):
    tmp_path.mkdir()
```

```python
lib.zero_stats()
lib.start_instrumentation()
try:
    return __codspeed_root_frame__()
finally:
    lib.stop_instrumentation()
    uri = get_git_relative_uri(nodeid, config.rootpath)
    lib.dump_stats_at(uri.encode("ascii"))
```
Since you introduced this try block, can you add a test bench raising an exception?
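Such a test could look roughly like this (with `FakeLib` and `run_instrumented` as stand-ins for the real instrumentation bindings and wrapper, just to show the property being tested): even when the benched frame raises, the `finally` block still stops instrumentation and dumps stats.

```python
# FakeLib records calls instead of driving real instrumentation.
class FakeLib:
    def __init__(self):
        self.events = []
    def zero_stats(self):
        self.events.append("zero")
    def start_instrumentation(self):
        self.events.append("start")
    def stop_instrumentation(self):
        self.events.append("stop")
    def dump_stats_at(self, uri):
        self.events.append(("dump", uri))

def run_instrumented(lib, root_frame, uri):
    # Mirrors the try/finally shape of the snippet above.
    lib.zero_stats()
    lib.start_instrumentation()
    try:
        return root_frame()
    finally:
        lib.stop_instrumentation()
        lib.dump_stats_at(uri.encode("ascii"))

lib = FakeLib()

def failing_bench():
    raise ValueError("boom")

raised = False
try:
    run_instrumented(lib, failing_bench, "test.py::test_failing")
except ValueError:
    raised = True  # the exception still propagates to pytest
```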
Force-pushed from f2595e8 to 26b5d09.
Resolves #24
Changes include:
- Call `item.ihook.pytest_runtest_call(item=item)` instead of directly calling `item.runtest()`, to benefit from pytest's automatic handling of `sys.last_type`, `sys.last_value`, and `sys.last_traceback` (apparently not handling these causes stdout/stderr to be redirected and inaccessible within tests; see https://github.com/pytest-dev/pytest/blob/3b798e54221f1895a983000c7e5bc8afdacd5011/src/_pytest/runner.py#L165-L182)
- Moved from the `pytest_runtest_protocol` hook to the `pytest_pyfunc_call` hook (deeper in the call stack; `pytest_pyfunc_call` is invoked from `runtest`, which in turn is called from `pytest_runtest_call`)