
bpo-46841: Inline cache for BINARY_SUBSCR. #31618

Merged
merged 11 commits on Mar 1, 2022

Conversation

markshannon
Contributor

@markshannon markshannon commented Feb 28, 2022

Member

@brandtbucher brandtbucher left a comment

Thanks, looks good overall.

I still think it makes more sense (and is a bit simpler) to just store pointers across four cache entries for now, and to explore something like this separately as a possible improvement later. The way you've done it here seems quite a bit more complicated for negligible gain: a 6-byte saving per unquickened site, offset by the cost of an extra 2 bytes and a pointer indirection per quickened site, plus 10 more wasted bytes for the vast majority of quickened calls.

But I'll defer to your judgement here.

Review comments (all resolved):
- Lib/test/test_capi.py
- Python/specialize.c (outdated, two threads)
- Include/cpython/code.h (outdated)
@markshannon
Contributor Author

@markshannon markshannon commented Mar 1, 2022

I've added some comments about how to handle pointers to faster-cpython/ideas#263,
removed the need for a per-code-object cache from this PR, and shrunk the inline cache a bit.

@markshannon markshannon merged commit 3b0f1c5 into python:main Mar 1, 2022
12 checks passed
4 participants