Add combine_keys function to PublicKey #291

Merged
1 commit merged into rust-bitcoin:master on Jun 8, 2021

Conversation

Tibo-lg (Contributor) commented Mar 31, 2021

Added a static method to PublicKey that computes the sum of a slice of public keys; at the moment it looks like it is only possible to compute the sum of two public keys at a time.
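A minimal usage sketch of the new method (the dummy secret keys and the chained combine comparison are illustrative, not part of the PR):

use secp256k1::{PublicKey, Secp256k1, SecretKey};

fn main() {
    let secp = Secp256k1::new();
    let pk1 = PublicKey::from_secret_key(&secp, &SecretKey::from_slice(&[1u8; 32]).unwrap());
    let pk2 = PublicKey::from_secret_key(&secp, &SecretKey::from_slice(&[2u8; 32]).unwrap());
    let pk3 = PublicKey::from_secret_key(&secp, &SecretKey::from_slice(&[3u8; 32]).unwrap());

    // Sum an arbitrary number of keys in a single call.
    let sum = PublicKey::combine_keys(&[&pk1, &pk2, &pk3]).expect("sum is not the point at infinity");

    // Equivalent to chaining the existing pairwise combine.
    assert_eq!(sum, pk1.combine(&pk2).unwrap().combine(&pk3).unwrap());
}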

Tibo-lg (Contributor, Author) commented Mar 31, 2021

Looking at the CI failure, and given that there is no use of Vec anywhere in the code base, I suspect there may have been a reason for not having this type of function in the first place. I'd appreciate it if someone could tell me whether it's worth trying to find an alternative or whether I should just give up.

elichai (Member) commented Mar 31, 2021

Hmmm, if we make PublicKey #[repr(transparent)], we should be able to transmute a &[&PublicKey] into a &[*const ffi::PublicKey] and skip the heap allocation
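A self-contained sketch of the suggestion, using hypothetical stand-in types (FfiPublicKey stands in for ffi::PublicKey; this is not the library's actual code):

use core::mem::transmute;

#[repr(C)]
struct FfiPublicKey([u8; 64]); // stand-in for ffi::PublicKey

#[repr(transparent)]
struct PublicKey(FfiPublicKey); // &PublicKey now has the same layout as *const FfiPublicKey

fn as_ptr_slice<'a>(keys: &'a [&'a PublicKey]) -> &'a [*const FfiPublicKey] {
    // Sound only because of #[repr(transparent)]: the slice of references
    // is reinterpreted as a slice of raw pointers, with no heap allocation.
    unsafe { transmute(keys) }
}

fn main() {
    let a = PublicKey(FfiPublicKey([1; 64]));
    let b = PublicKey(FfiPublicKey([2; 64]));
    let keys = [&a, &b];
    let ptrs = as_ptr_slice(&keys);
    assert_eq!(ptrs.len(), 2);
    assert_eq!(ptrs[0], &a.0 as *const FfiPublicKey);
}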

Tibo-lg force-pushed the add-combine-keys branch 3 times, most recently from 21a0482 to 3db7e6b on March 31, 2021 08:57
Tibo-lg (Contributor, Author) commented Mar 31, 2021

@elichai Thanks for the suggestion, the transmutation trick does indeed seem to work! Or at least it works for me locally. Is it safe to assume that the SIGILL on the CI is not caused by my change? (I can see it also happened on master.)

elichai (Member) commented Mar 31, 2021

We can see that miri doesn't complain here: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=bb329246b3e8ad06a104d06d2aba429e

elichai (Member) commented Mar 31, 2021

maybe also assert that keys.len() < i32::MAX

Tibo-lg (Contributor, Author) commented Mar 31, 2021

> maybe also assert that keys.len() < i32::MAX

Added a debug_assert, I hope that's what you had in mind.
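For reference, a sketch of what such a guard looks like (assumed shape, not the exact diff):

// Reject slice lengths that would overflow the `int` count expected by the
// C API; with debug_assert! this check only runs in debug builds.
fn check_key_count(keys_len: usize) {
    debug_assert!(keys_len < i32::MAX as usize);
}

fn main() {
    check_key_count(3); // fine; a debug build would panic near i32::MAX keys
}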

Review comment on src/key.rs (outdated), lines 374 to 377:
#[cfg(all(feature = "std"))]
pub fn combine_keys(keys: &[&PublicKey]) -> Result<PublicKey, Error> {
use std::mem::transmute;
use std::i32::MAX;
Member:

Can you use core and drop the feature gate?

Contributor (Author):

Done, thanks, that's much better indeed.

real-or-random (Collaborator):

> Is it safe to assume that the SIGILL on the CI is not caused by my change? (I can see it also happened on master.)

Yes, this will be fixed by #290

elichai previously approved these changes Mar 31, 2021
elichai (Member) left a comment:

LGTM (@apoelstra what's our policy about assert vs debug_assert?)

elichai (Member) commented Apr 7, 2021

@Tibo-lg Can you rebase, please? That will solve the CI failures.

Tibo-lg (Contributor, Author) commented Apr 7, 2021

> @Tibo-lg Can you rebase, please? That will solve the CI failures.

Done!

elichai (Member) left a comment:

Re-ACK, with the same question about debug_assert vs assert.

apoelstra (Member) left a comment:

ack 7d32182

Funny, I have no recollection of the C function taking an arbitrary number of pubkeys.

Re debug_assert vs assert, we don't really have a policy... but I think here, where it's (a) an overflow condition and (b) a programmer error, it's okay to use debug_assert. I'm not thrilled with it, but this is what libstd does all over the place, so it's at least consistent with the Rust ecosystem.

apoelstra merged commit e89fd5d into rust-bitcoin:master on Jun 8, 2021
/// Adds the keys in the provided slice together, returning the sum. Returns
/// an error if the result would be the point at infinity, i.e. we are adding
/// a point to its own negation
pub fn combine_keys(keys: &[&PublicKey]) -> Result<PublicKey, Error> {
Contributor:

Post-merge note: PublicKey is a Copy type, and I am not sure why we need to take it by reference here.

Contributor (Author):

IIRC the reason is that we need a &[&PublicKey] to transmute. It would also be possible to take a &[PublicKey] and do the conversion internally, but I preferred to keep the library code minimal. I don't have a strong opinion if people want to change it, though.

Member:

Yeah, the C code takes a pointer to a list of pointers, so we'd have to allocate a vector of pointers, and we try to avoid allocations where possible.
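A sketch of the allocation that the by-value signature would force (hypothetical stand-in types again, not library code): the C API wants an array of pointers, which a &[&PublicKey] already is, while a &[PublicKey] is an array of keys.

#[repr(C)]
struct FfiPublicKey([u8; 64]); // stand-in for ffi::PublicKey

#[repr(transparent)]
struct PublicKey(FfiPublicKey);

// Building the pointer table the C API expects from a by-value slice
// requires a heap-allocated Vec; the &[&PublicKey] signature avoids this.
fn ptr_table(keys: &[PublicKey]) -> Vec<*const FfiPublicKey> {
    keys.iter().map(|k| &k.0 as *const FfiPublicKey).collect()
}

fn main() {
    let keys = [PublicKey(FfiPublicKey([0; 64])), PublicKey(FfiPublicKey([1; 64]))];
    assert_eq!(ptr_table(&keys).len(), 2);
}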

real-or-random (Collaborator):

> Funny, I have no recollection of the C function taking an arbitrary number of pubkeys.

To be honest, maybe upstream should reconsider this. The function is mostly useless to end users.

It was added to the normal secp256k1.h (outside the module) by the PR for the old Schnorr module: https://github.com/bitcoin-core/secp256k1/pull/212/files#diff-4c52bd5cae65c9519af920228863ca34102abef6414e790498caa9fb05f03bbdR431

@Tibo-lg Do you need this function, or did you simply create this PR to make the bindings more complete?

apoelstra (Member):

@real-or-random it is used somewhere by the LN folks (which is why it still exists upstream, BTW), so it would not surprise me if it has some use in rust-lightning.

Tibo-lg (Contributor, Author) commented Jun 10, 2021

@real-or-random

> @Tibo-lg Do you need this function, or did you simply create this PR to make the bindings more complete?

I need it to efficiently compute adaptor points (for adaptor signatures) based on multiple signature points, and using this version is really much faster than calling combine multiple times. I actually had a clever algorithmic optimization for computing adaptor points, but it ended up being slower than just using combine_keys without the optimization, because it required adding points two by two. If someone knows a way to speed things up when calling just combine, it'd be really useful to me and I'd be happy to drop combine_keys; but my guess (without knowing much about the internals of secp256k1) is that converting from the external to the internal representation many times is what causes the slowdown, so I don't think it can be improved.

If someone is curious, I made some quick and dirty benchmarks to compare both versions when adding 1000 pubkeys here.
The results show that pairwise addition is roughly 10 times slower:

test benches::add_batch    ... bench:     273,601 ns/iter (+/- 57,851)
test benches::add_no_batch ... bench:   2,760,124 ns/iter (+/- 146,883)

For just 10 keys it's still about 5 times slower:

test benches::add_batch    ... bench:       4,987 ns/iter (+/- 409)
test benches::add_no_batch ... bench:      24,859 ns/iter (+/- 2,331)
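For context, an assumed shape of the two strategies being benchmarked (not the author's exact benchmark code):

use secp256k1::PublicKey;

// One FFI call for the whole slice.
fn add_batch(keys: &[&PublicKey]) -> PublicKey {
    PublicKey::combine_keys(keys).expect("sum is not the point at infinity")
}

// n - 1 FFI calls; each intermediate sum is converted back and forth across
// the FFI boundary, the likely source of the slowdown. Assumes a non-empty slice.
fn add_no_batch(keys: &[&PublicKey]) -> PublicKey {
    keys[1..]
        .iter()
        .fold(*keys[0], |acc, key| acc.combine(key).expect("sum is not the point at infinity"))
}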

real-or-random (Collaborator) commented Jun 10, 2021

Thanks. combine_keys is entirely fine if we have combine. Speaking as an upstream maintainer, I was rather curious why people need to add pubkeys in the first place. So this is a variant of adaptor signatures with more than two parties involved? Can you tell me more or is there some writeup? (Again, just for my curiosity.)

Post-merge review:
Here we assert that there is at least one key (because upstream does the same). This should be documented, or we should wrap it into a nicer error in combine_keys. And there should be a test for the empty slice.

assert!(n >= 1);
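A sketch of the missing test (assuming that, once wrapped in a nicer error, an empty slice returns Err instead of tripping the upstream assert):

use secp256k1::PublicKey;

#[test]
fn combine_keys_empty_slice() {
    // Should surface as a Rust-level error, not an abort inside the C library.
    assert!(PublicKey::combine_keys(&[]).is_err());
}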

Tibo-lg (Contributor, Author) commented Jun 10, 2021

> Speaking as an upstream maintainer, I was rather curious why people need to add pubkeys in the first place. So this is a variant of adaptor signatures with more than two parties involved? Can you tell me more or is there some writeup? (Again, just for my curiosity.)

It's for Discrete Log Contracts (DLC) with numerical decompositions (or multiple oracles). You can check this post, for example, for a small write-up. Note that, as I mentioned above, the memoization optimization I describe there is actually slower than just using combine_keys with the "simple pre-computation" version.

> Post-merge review:

Tried to address your comments here: #304
