
bpo-46841: Use inline caching for calls #31709

Merged (9 commits) on Mar 7, 2022

Conversation

@brandtbucher (Member) commented Mar 6, 2022

Also:

  • remove the "old" non-inline caching machinery
  • fix some bugs in the collection of PRECALL/CALL specialization stats
  • shrink the cache size requirements for PRECALL/CALL instructions

Next steps tracked at faster-cpython/ideas#310.

https://bugs.python.org/issue46841
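For readers unfamiliar with the technique: with inline caching, the specialization data for an instruction lives in unused code units that directly follow it in the bytecode stream, rather than in the separate per-code-object cache array used by the old machinery. A minimal C sketch of the idea, using illustrative field names rather than the PR's exact cache layout:

#include <stdint.h>

typedef uint16_t code_unit;            /* one bytecode word: opcode + oparg */

/* Illustrative cache entry for a CALL site (field names are assumptions,
 * not the PR's exact layout). */
typedef struct {
    code_unit counter;                 /* adaptive/backoff counter */
    code_unit func_version[2];         /* 32-bit version tag of the cached callee */
    code_unit min_args;                /* extra data used by some specializations */
} call_cache;

/* The cache occupies the code units immediately after the CALL
 * instruction, so the interpreter reaches it with pointer arithmetic
 * instead of an indirection through a side table. */
static inline call_cache *
cache_for_call(code_unit *next_instr)  /* next_instr points just past CALL */
{
    return (call_cache *)next_instr;
}

Because each call site carries its own entries, there is no longer a fixed-size per-code-object cache array to manage, which is what allows the old non-inline caching machinery to be removed.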

@bedevere-bot

🤖 New build scheduled with the buildbot fleet by @markshannon for commit 2021895 🤖

If you want to schedule another build, you need to add the ":hammer: test-with-buildbots" label again.

@bedevere-bot bedevere-bot removed the 🔨 test-with-buildbots Test PR w/ buildbots; report in status section label Mar 7, 2022
@markshannon markshannon added the 🔨 test-with-buildbots Test PR w/ buildbots; report in status section label Mar 7, 2022
@markshannon (Member) left a comment

Excellent.
A couple comments, but nothing to block merging.

/* Maximum size of code to quicken, in code units. */
#define MAX_SIZE_TO_QUICKEN 10000
Member:

Is this just to get the unpack sequence benchmark to work again, or something else?

Member Author (@brandtbucher):

Nope, just for unpack_sequence.
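For context, a hedged sketch of the kind of guard such a limit enables (illustrative only, not the code from the PR): code objects above the threshold are simply left unquickened, which keeps pathological cases like the very large function generated by the unpack_sequence benchmark from paying the cost of specialization.

#include <Python.h>

/* Hypothetical helper, not from the PR: decide whether a code object is
 * small enough to quicken, using the limit defined above. */
static int
should_quicken(Py_ssize_t code_units)  /* length of the bytecode, in code units */
{
    return code_units <= MAX_SIZE_TO_QUICKEN;
}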

PyObject *isinstance;
PyObject *len;
PyObject *list_append;
};
Member:

I think the existence of PyList_Type as part of the API means that list.append must be per-process unique.
In other words, list_append could be static.

I'm happy to leave it as is for now, though. We should look to make the whole struct static, although the mutability of builtin functions makes that tricky for isinstance and len.

Member Author (@brandtbucher):

I believe each interpreter has its own builtins module (check out _PyBuiltin_Init), so making this static could be tricky. As you said, though: probably worth looking into in the future.
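For illustration, a minimal sketch of the direction discussed above, assuming a lazily initialized, process-wide cache of the list.append method descriptor (the names here are hypothetical, not from the PR). Because PyList_Type is a single per-process object, one cached reference would suffice; isinstance and len would still need per-interpreter handling, since they come from each interpreter's own builtins module.

#include <Python.h>

static PyObject *cached_list_append = NULL;    /* hypothetical name */

static PyObject *
get_list_append(void)
{
    if (cached_list_append == NULL) {
        /* list.append is looked up on the type itself, which is one
         * per-process object, so a single cached reference suffices. */
        cached_list_append = PyObject_GetAttrString(
            (PyObject *)&PyList_Type, "append");   /* new reference, kept alive */
    }
    return cached_list_append;                     /* borrowed by callers */
}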
