
Large Performance Overhead during Initialization #86

Open
SimonKlx opened this issue Jan 30, 2024 · 3 comments

Comments

@SimonKlx

Hello,

I am using CaDiCaL as a backend for the Bitwuzla SMT solver.
I am currently benchmarking my program and trying to optimize its runtime.
When generating a flame graph, I realized that the majority of the time Bitwuzla is running is spent in the initialization step of CaDiCaL.
I am by no means an expert in memory management or very low-level C++, but judging from the functions the time is spent in, and from looking at the code, it seems that a lot of resizing happens whenever a new variable is initialized, leading to many costly memory operations.

This might be a somewhat niche problem, since we are checking very many, but relatively easy-to-solve queries.
However, it feels wrong that initialization takes more time than the actual solving.
My intuition would be to iterate over the whole term first and then allocate the needed vector size only once.
Am I missing something?

Just by briefly looking at the code I was not able to understand everything, but maybe you can tell whether this is a hard problem to solve, or whether you have thought about it before.

I attached two screenshots from the flame graph showing the time spent in each function.

Thanks,

Simon

[Attached flame graph screenshots: 2024-01-30 11-58-59 and 2024-01-30 11-58-36]

@m-fleury
Collaborator

If you already know how many variables you need, or have an estimate, you can use the reserve function (https://github.com/arminbiere/cadical/blob/master/src/cadical.hpp#L522) to bypass the resizing that currently happens (via std::vector::resize whenever new variables are added).

@arminbiere
Owner

arminbiere commented Feb 2, 2024

Avoiding this memory thrashing is one of the reasons I started Kissat with a completely new memory layout. Porting this improved memory handling back to CaDiCaL is impossible, so I fear you will have to go with reserve.

@mpreiner

mpreiner commented Feb 8, 2024

@SimonKlx Can you share the benchmark you used to produce this flame graph? I want to look into whether we can improve this on the Bitwuzla side.
