
Does Z3 use bounds on Bitvectors to reduce them to lower size/width? #6137

Closed
ThomasHaas opened this issue Jul 5, 2022 · 3 comments


@ThomasHaas
Contributor

I have a bounded model checking application which encodes the data-flow of a program with 64-bit bitvectors.
Using static analysis, it may be possible to derive a priori bounds on bv-variables that would allow them to be encoded with fewer bits.
Say I know that some variable is bounded by 2^16: is there a difference between encoding the variable directly as 16-bit and encoding it as 64-bit while asserting 0 <= V <= 2^16 as an additional constraint?
Will this constraint be used to effectively reduce the variable down to 16 bits during bit-blasting and then be discarded entirely?
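
As a minimal sketch of the two encodings (variable names are illustrative):

; direct encoding with the smaller width
(declare-const v16 (_ BitVec 16))

; full-width encoding with the bound as an extra constraint
; (bvule is unsigned, so 0 <= v64 is implicit)
(declare-const v64 (_ BitVec 64))
(assert (bvule v64 (_ bv65536 64)))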

I'm asking this for two reasons:

  • First, it would be a lot easier to keep the variables as 64-bit in the encoding and just let Z3 decide how many bits it actually needs.
  • Second, I noticed performance degradation when adding such interval constraints to an integer-based encoding, i.e., adding variable bounds in LIA can make matters worse (by multiple factors in my use case). However, I suspect that in the case of bitvectors the constraints are used only for bit-blasting and are then effectively thrown away, so there should be no way for me to get performance degradation by adding interval information.

As a side question: does Z3 try to infer bounds automatically to reduce the bit-blasting? If so, is it possible to check how many Boolean variables it actually ended up creating, to get a sense of how well it derived bounds? In particular, it would be interesting to see whether adding explicit bounds to some variables actually reduces the bit-blasted formula (if not, then Z3 was able to derive the bounds by itself).
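
For reference, one way to inspect the bit-blasted goal is Z3's apply command; the following is only a sketch, and the constraint is illustrative:

(declare-const x (_ BitVec 64))
(assert (bvule x (_ bv65536 64)))
; print the goal after bit-blasting; the size of the printed
; goal gives a gist of how many Boolean variables were created
(apply (then simplify bit-blast))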

@NikolajBjorner
Contributor

It tries to, but the current tactic, called reduce-bv-size, doesn't detect this pattern.
Since we are close to a release, I am adding a pass, currently disabled, that detects unsigned upper bounds.
It is also very restrictive, but it covers a basic use case such as:

(declare-const x (_ BitVec 64))
; unsigned upper bound on x
(assert (bvule x (_ bv256 64)))
(assert (= (bvmul x x) x))
(check-sat)

Bit-size reduction would only be available as pre-processing, so your BMC queries cannot use incremental mode.
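
In a one-shot setting, the reduction could be requested explicitly; a sketch using check-sat-using (the exact tactic composition is an assumption):

(declare-const x (_ BitVec 64))
(assert (bvule x (_ bv256 64)))
(assert (= (bvmul x x) x))
; apply the size-reduction pass before the default SMT core
(check-sat-using (then reduce-bv-size smt))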

@ThomasHaas
Contributor Author

I understand the problem with incrementality if I add such bounds later on.
But can't Z3 apply full optimizations to all the formulas asserted before the first push?
In my case, I would add the bounds initially and only add other types of constraints incrementally.
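
A sketch of that workflow (the constraints are placeholders):

; bounds asserted up front, before the first push
(declare-const v (_ BitVec 64))
(assert (bvule v (_ bv65536 64)))
(push)
; other constraints added incrementally per query
(assert (= (bvmul v v) v))
(check-sat)
(pop)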

NikolajBjorner added a commit that referenced this issue Aug 16, 2022
- add option smt.bv.reduce_size.
  - it allows applying incremental pre-processing of bit-vectors by identifying ranges that are known to be constant.
    This rewrite is beneficial, for instance, when bit-vectors are constrained to have many high-order bits set to 0.
@NikolajBjorner
Contributor

You can try it out with smt.bv.size_reduce=true
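
For example, with the option name as given above (a sketch; the option can also be passed on the command line as smt.bv.size_reduce=true):

(set-option :smt.bv.size_reduce true)
(declare-const x (_ BitVec 64))
(assert (bvule x (_ bv256 64)))
(assert (= (bvmul x x) x))
(check-sat)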
