
Evaluate whether we can use Secret Store's DKG, and secp256k1 for threshold schemes. #54

Open

afck opened this issue Jun 6, 2018 · 12 comments
Labels: crypto-question (Contains a question about cryptography.)

afck (Collaborator) commented Jun 6, 2018

Parity's Secret Store implements ECDKG to generate secp256k1 key shares: it generates a random polynomial f of degree t over Zn (the integers modulo n), where n is the order of the secp256k1 group G, and distributes the value f(i) to each node i > 0, without any node learning the polynomial. The values f(i) * g, with g the base point of G, are public.

This is used for threshold Schnorr signatures, where ECDKG generates both the nodes' secret keys (once) and the shares for the random nonce r (once per signing protocol execution).

G doesn't seem to admit an efficient pairing, and solving DDH in it is probably infeasible due to its large embedding degree. So our threshold schemes (#38, #40) don't work in it.

Questions:

  • Is the above correct at all?
  • Does threshold encryption also work with G?
  • Do Schnorr threshold signatures have the uniqueness property required by the common coin?
  • Is it still possible to have secure coin flips with only one message round?
  • The Secret Store implementation seems to be incomplete. Can we verify and complete it?
  • Can we use Parity's DKG instead of our own (Distributed Key Generation #47)?
afck (Collaborator, Author) commented Jun 6, 2018

Schnorr threshold signatures

Let fn hash<T>(t: T) -> Zn be a hash function.

Simple signatures

A secret key is sk: Zn, and the public key is pk = sk * g, in G. A signature of a message msg is a pair (s, e): (Zn, Zn):

let k = random();        // the nonce; must stay secret and be unique per signature
let r = k * g;           // commitment to k
let e = hash((r, msg));  // challenge
let s = k - sk * e;      // response
let sig = (s, e);

Note that r can be reconstructed from the signature, since k * g == s * g + e * pk. To verify the message, just check that e is indeed hash((r, msg)), i.e. the signature (s, e) is valid if:

e == hash((s * g + e * pk, msg))

The value k must stay secret, since otherwise anyone can compute the secret key sk = (s - k) / e, and it must be unique for each signature: if the same k is reused for two messages, sk = (s1 - s0) / (e0 - e1).
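To make this concrete, here's a minimal runnable sketch in plain Rust. It uses a toy group (the order-11 subgroup of Z_23^*, written multiplicatively, so k * g becomes g^k) and a made-up hash; all constants and helper names are illustrative, not Secret Store's API:

const P: u64 = 23; // modulus of the toy group (stand-in for secp256k1)
const Q: u64 = 11; // group order n; scalars live in Zn
const G: u64 = 2;  // generator g: 2^11 == 1 (mod 23)

// Modular exponentiation b^e mod m, i.e. "e * b" in additive notation.
fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

// Toy stand-in for `fn hash<T>(t: T) -> Zn`; NOT a real hash function.
fn hash(r: u64, msg: &[u8]) -> u64 {
    msg.iter().fold(r % Q, |h, &b| (h * 31 + b as u64) % Q)
}

// s = k - sk * e (mod n); the signature is the pair (s, e).
fn sign(sk: u64, k: u64, msg: &[u8]) -> (u64, u64) {
    let r = pow_mod(G, k, P); // r = k * g
    let e = hash(r, msg);
    ((k + Q - sk * e % Q) % Q, e)
}

// Valid iff e == hash((s * g + e * pk, msg)): r is reconstructed
// from the signature as g^s * pk^e.
fn verify(pk: u64, (s, e): (u64, u64), msg: &[u8]) -> bool {
    e == hash(pow_mod(G, s, P) * pow_mod(pk, e, P) % P, msg)
}

fn main() {
    let sk = 7;                      // secret key in Zn
    let pk = pow_mod(G, sk, P);      // public key pk = sk * g
    let sig = sign(sk, 5, b"hello"); // k = 5: must be secret and unique!
    assert!(verify(pk, sig, b"hello"));
    assert!(!verify(pk, sig, b"world"));
}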

Threshold signatures

DKG creates secret key shares sk[1], …, sk[n], and a master key sk[0], which all lie on a polynomial of degree t. So t + 1 shares suffice to recreate the master key by Lagrange interpolation, e.g.:

sk[0] == interpolate(0, [(1, sk[1]), …, (t + 1, sk[t + 1])]);
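For concreteness, a toy interpolate(0, _) over the same scalar field Z_11 as in the sketch above (the helper names are mine):

fn inv(a: u64) -> u64 {
    // Brute-force modular inverse; fine for the toy modulus Q = 11.
    (1..Q).find(|&x| a * x % Q == 1).unwrap()
}

// Lagrange interpolation at x = 0: recovers f(0) from t + 1 points (i, f(i)).
fn interpolate_at_zero(shares: &[(u64, u64)]) -> u64 {
    let mut acc = 0;
    for &(i, y) in shares {
        // Lagrange coefficient at x = 0: lc = prod over j != i of j / (j - i).
        let mut lc = 1;
        for &(j, _) in shares {
            if j != i {
                lc = lc * j % Q * inv((j + Q - i) % Q) % Q;
            }
        }
        acc = (acc + y * lc) % Q;
    }
    acc
}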

Similarly, the pk[i] = sk[i] * g are on a polynomial; they are all known publicly.

Using the same DKG, we can create a random k[0], …, k[n], and publicly known r[i] = k[i] * g.

Node i can produce a signature share for message msg as follows:

let e = hash((r[0], msg));
let s[i] = k[i] - sk[i] * e;
let sig[i] = (s[i], e);

(The value e is also publicly known, so it can actually be omitted.) Since r[i] is public, we can verify that r[i] == s[i] * g + e * pk[i].

Interpolation is linear, so given a list s_vec of shares (i, s[i]), we can compute the full signature as if it had been signed by sk[0]:

s[0] == interpolate(0, s_vec)

Again, disclosing any k[i] or reusing the polynomial k would reveal the secret key shares.
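Putting the pieces together, a toy end-to-end run with t = 1 (so any t + 1 = 2 shares suffice), reusing pow_mod, hash, verify, and interpolate_at_zero from the sketches above; all concrete values are illustrative:

// Evaluate a polynomial (coefficients in low-to-high order) at x, over Z_11.
fn eval(poly: &[u64], x: u64) -> u64 {
    poly.iter().rev().fold(0, |acc, &c| (acc * x + c) % Q)
}

fn main() {
    let msg = b"hello";
    let sk_poly = [7, 3]; // f(x) = 7 + 3x: master key sk[0] = f(0) = 7
    let k_poly = [5, 9];  // nonce polynomial; must be fresh per signing run!
    let pk0 = pow_mod(G, eval(&sk_poly, 0), P); // master public key

    // r[0] = k[0] * g and the challenge e are publicly known.
    let r0 = pow_mod(G, eval(&k_poly, 0), P);
    let e = hash(r0, msg);

    // Nodes 1 and 2 each produce a share s[i] = k[i] - sk[i] * e (mod n).
    let shares: Vec<(u64, u64)> = [1, 2]
        .iter()
        .map(|&i| (i, (eval(&k_poly, i) + Q - eval(&sk_poly, i) * e % Q) % Q))
        .collect();

    // Interpolation is linear, so this is exactly the signature under sk[0].
    let s0 = interpolate_at_zero(&shares);
    assert!(verify(pk0, (s0, e), msg));
}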

Threshold signatures in a single message round?

For the common coin, running a full DKG in each coin flip is prohibitively slow. We still want a single-message-round coin!

I was hoping we could, e.g. in the setup phase, generate some number m of random polynomials k[0], …, k[m - 1], and then use interpolation to extrapolate further values, i.e. for each i, the sequence k[_][i] would lie on a polynomial of degree s. Can that approach be secure?

It certainly isn't for s == 1: If I know that your k[2][i] == 2 * k[1][i] - k[0][i] (linear extrapolation), then in the third round I can compute your private key share:

s[j][i] == k[j][i] - sk[i] * e[j]; // for each `j`, so:
2 * s[1][i] - s[0][i] - s[2][i] == sk[i] * (e[0] + e[2] - 2 * e[1]);
sk[i] == (2 * s[1][i] - s[0][i] - s[2][i]) / (e[0] + e[2] - 2 * e[1]);

I suspect something like that also works for s > 1? (Edit: Yes it does; k[s] always depends linearly on the k[j] with j < s…) Is there any other way to prepare an infinite list of polynomials k[m]?
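For illustration, a quick numeric check of the recovery formula over the toy field Z_11 from the earlier sketches (all concrete values made up):

fn main() {
    let sk = 7;            // node i's key share, to be recovered
    let (k0, k1) = (3, 8); // two honestly random nonce shares
    let k2 = (2 * k1 + Q - k0) % Q; // linear extrapolation: k[2] = 2*k[1] - k[0]
    let e = [2, 5, 9];     // three public challenges

    // s[j] = k[j] - sk * e[j] (mod n)
    let s: Vec<u64> = [k0, k1, k2]
        .iter()
        .zip(&e)
        .map(|(&k, &ej)| (k + Q - sk * ej % Q) % Q)
        .collect();

    // sk == (2*s[1] - s[0] - s[2]) / (e[0] + e[2] - 2*e[1])
    let num = (2 * s[1] + 2 * Q - s[0] - s[2]) % Q;
    let den = (e[0] + e[2] + 2 * Q - 2 * e[1]) % Q;
    let inv = (1..Q).find(|&x| den * x % Q == 1).unwrap();
    assert_eq!(num * inv % Q, sk); // the secret key share leaks
}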

ChronusZ commented Jun 8, 2018

It doesn't really make sense to use threshold Schnorr signatures for a common coin. In order to make a threshold Schnorr signature, you already need to somehow generate shares of a unique random value. But then you can just reconstruct that random value directly instead of using it in a Schnorr scheme.

As for generating a random value more easily: it's not one round, but if you already have k shared, then you can generate a sequence of random values much more efficiently than with a full DKG by relying on the "exponent q-strong DH assumption": for a randomly chosen k, given g^k, g^(k^2), ..., g^(k^q), it's assumed hard to calculate g^(k^(q+1)). Using general SMPC techniques, shares of k^(q+1) can be constructed securely from shares of k^q and k by just having everyone multiply their shares, then use AVSS to share their share, and use Lagrange interpolation on those shares of shares to get the result. After generating shares of k^(q+1), everyone reveals the exponential of their share, and you compute the coin as H(g^(k^(q+1))).

Note though, afaik this can only be done assuming 2f+1 honest nodes are actively participating; thus it doesn't really mesh with the early-termination message idea, which requires the coin to be constructible with only f+1 active honest nodes.
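The share-multiplication step can be seen in miniature (just the algebra, with no AVSS or resharing): pointwise products of two degree-t sharings lie on a degree-2t polynomial whose value at 0 is the product of the two secrets, so 2t + 1 shares-of-products determine it. A sketch reusing eval and interpolate_at_zero from the sketches above:

fn main() {
    let f = [4, 2]; // degree-1 (t = 1) sharing of a secret k = 4
    let h = [9, 6]; // degree-1 sharing of, say, k^q = 9
    // Each node i multiplies its two shares locally. The products form a
    // degree-2 sharing, so 2t + 1 = 3 of them determine the product at 0.
    let prod: Vec<(u64, u64)> = (1..=3)
        .map(|i| (i, eval(&f, i) * eval(&h, i) % Q))
        .collect();
    assert_eq!(interpolate_at_zero(&prod), 4 * 9 % Q);
}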

afck (Collaborator, Author) commented Jun 9, 2018

Thank you! You're right, those g^(k^i)s would suffice, of course!

But in the end I think "not one round" is too high a price to pay: our pairing-based single-round common coin won't actually be that much code, and every added message round means more latency in the consensus algorithm. So I'm in favor of taking the approach in #47 instead.

ChronusZ commented Jun 13, 2018

Oh, here's a nice idea: use the same BLS-style threshold signature scheme, but instead of testing a share sig by computing the pairing equation e(sig, G) == e(H(m), P), have each node send, along with its share, a non-interactive (using the Fiat-Shamir transform) Chaum-Pedersen proof that there exists s such that sig = s*H(m) and P = s*G. This increases the communication complexity to 3x that of the pairing-based protocol, but that's still not very significant, and it's still one round with the same message complexity. And you get the benefits that:

1. verifying the Chaum-Pedersen proof is (relatively) very cheap: just some hashing and multiplications, along with two parallelizable exponentiations, compared to two parallelizable pairings;
2. it works in any curve where DDH holds, so there's no need to open oneself up to possible improvements in attacking curves with low embedding degree; and
3. there's no need to rely on shady implementations of pairing crypto.

I haven't fully thought it through (will do so tomorrow), but I believe you can also do a similar system for threshold decryption of ElGamal ciphertexts without using pairings. This would allow fully removing the pairing-based crypto from the protocol.

@amiller you might also be interested in this.

amiller commented Jun 13, 2018

This kind of approach is very interesting. It would definitely be great to remove the need for pairing crypto.
The tradeoff is roughly the following:
Right now, it's possible to use pairings to check the individual shares, and to use pairings to check the final aggregate signature. What you are suggesting is to use an ordinary NIZK to check the individual shares instead of a pairing. However, if we do this in a group without pairings, then there is no efficient way to check just the final signature at the end. With the current scheme, we can use optimism, where we combine the first t+1 shares and then check the combination with a single pairing, and only if that fails do we check the individual shares to identify the faulty party.
(Some of the above is discussed in amiller/HoneyBadgerBFT#11.)

But I'm not sure whether, on balance, this tradeoff is worth it anyway. Also, do you think there's a way to get the same optimism option even in a plain DDH group?

ChronusZ commented Jun 14, 2018

Good point. The best idea I can come up with for optimistic verification would be to modify the NIZK so that, rather than the prover generating the sigma-protocol challenge in the clear as the hash of its commitments, the verifier precomputes a random challenge c and sends the prover an encryption of c under an additively homomorphic encryption scheme; the prover then evaluates the sigma-proof response z = r + s*c linearly on the encryption of c. This is secure even if c is constant across rounds, since anything the prover could ever learn about c could just as easily be learned in the first round, and semantic security of the encryption scheme guarantees (by rewinding) that a valid prover could produce both r + s*c and r + s*c' for c != c', hence allowing the knowledge extractor to recover s.

Using this modification, rather than getting r_1+s_1*c_1,...,r_n+s_n*c_n, we can get r_1+s_1*c,...,r_n+s_n*c. Thus summing over these proofs gives a proof that the sum of the signatures is correct.

I think that an RSA library could be used to write a simple implementation of Benaloh's additively homomorphic scheme, so this isn't an awful solution. Still better than using pairings I think, but still maybe a bit more of a pain than one would like.

(Note as well, though, that if you do the idea mentioned in that linked issue, of verifying normal signatures over the shares, then, since verifying each NIZK is computationally not much different from verifying a signature, it would be reasonable to avoid the optimism entirely.)

afck (Collaborator, Author) commented Jun 14, 2018

If I understand correctly (using additive pseudo-Rust), the signature share i itself would be sig[i] = sk[i] * hash(msg), together with a proof that log(hash(msg), sig[i]) == log(g, pk[i]). That proof consists of:

a1[i] = r[i] * g;
a2[i] = r[i] * hash(msg);
f[i] = r[i] + c * sk[i];

for some random r[i], and given a random challenge c.

The verifier checks that:

f[i] * g == a1[i] + c * pk[i] &&
    f[i] * hash(msg) == a2[i] + c * sig[i]
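To make this concrete, here's the share-plus-proof in runnable form, in the same toy group as the earlier sketches (order-11 subgroup of Z_23^*, multiplicative notation, so x * g becomes g^x). hash(msg) is replaced by a fixed group element h, and the challenge c is simply passed in; the struct and function names are mine:

// A share is sig = sk * hash(msg); the proof shows that
// log(hash(msg), sig) == log(g, pk) without revealing sk.
struct Proof {
    a1: u64, // r * g
    a2: u64, // r * hash(msg)
    f: u64,  // r + c * sk
}

fn prove(sk: u64, r: u64, h: u64, c: u64) -> Proof {
    Proof {
        a1: pow_mod(G, r, P),
        a2: pow_mod(h, r, P),
        f: (r + c * sk) % Q,
    }
}

// Checks f * g == a1 + c * pk  &&  f * hash(msg) == a2 + c * sig.
fn verify_share(pk: u64, sig: u64, h: u64, c: u64, pr: &Proof) -> bool {
    pow_mod(G, pr.f, P) == pr.a1 * pow_mod(pk, c, P) % P
        && pow_mod(h, pr.f, P) == pr.a2 * pow_mod(sig, c, P) % P
}

fn main() {
    let (sk, r, c) = (7, 4, 2);  // key share, proof nonce, challenge
    let h = 3;                   // stand-in for hash(msg) in the group
    let pk = pow_mod(G, sk, P);  // pk = sk * g
    let sig = pow_mod(h, sk, P); // sig = sk * hash(msg)
    let pr = prove(sk, r, h, c);
    assert!(verify_share(pk, sig, h, c, &pr));
    assert!(!verify_share(pk, pow_mod(h, 6, P), h, c, &pr)); // bad share fails
}

Since a1, a2, and f are all linear in (r[i], sk[i]) for a fixed common c, proof shares with the same challenge can be combined term by term, which is what makes the following aggregation work.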

With the corresponding Lagrange coefficients lc[i] and, for example, the first t + 1 signatures, we'd produce:

main_sig = lc[0] * sig[0] + ... + lc[t] * sig[t];
main_a1 = lc[0] * a1[0] + ... + lc[t] * a1[t];
main_a2 = lc[0] * a2[0] + ... + lc[t] * a2[t];
main_f = lc[0] * f[0] + ... + lc[t] * f[t];

Then that would be a valid group signature (including a valid proof), and main_sig would be independent of the particular (t + 1)-sized subset. (The other combined values, main_a1, main_a2, and main_f, wouldn't be.)

For c, could we just use another hash of msg?

ChronusZ commented
c needs to either be generated at random after a1 and a2 are generated, or kept secret from the signer. If c were just a hash of msg, it would be neither of these, and hence the signer could pick an arbitrary sig[i] = r*hash(msg) and set a1[i] = (r*c - c - sk[i]*c)*g, a2[i] = -c*hash(msg), and f[i] = r*c - c.
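Concretely, against the toy verify_share sketch above, a signer who knows c in advance can forge; this is the generic version of the attack (arbitrary bogus values, group inverses via Fermat since P is prime):

fn main() {
    let (sk, c) = (7, 4); // the challenge c is known ahead of time
    let h = 3;
    let pk = pow_mod(G, sk, P);
    // Pick an arbitrary bogus share and response, then solve for a1 and a2:
    // a1 = f * g - c * pk, a2 = f * hash(msg) - c * sig.
    let (bogus_sig, f) = (6, 9);
    let pr = Proof {
        a1: pow_mod(G, f, P) * pow_mod(pow_mod(pk, c, P), P - 2, P) % P,
        a2: pow_mod(h, f, P) * pow_mod(pow_mod(bogus_sig, c, P), P - 2, P) % P,
        f,
    };
    // Verifies, even though bogus_sig != sk * hash(msg):
    assert!(verify_share(pk, bogus_sig, h, c, &pr));
    assert_ne!(bogus_sig, pow_mod(h, sk, P));
}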

The default way of doing things would be to have c=hash(a1[i]||a2[i]) (Fiat-Shamir transform). In the random oracle model this enforces the causal order of c and a1[i], a2[i]. The issue with this is that now c is different for each proof share, so we can't reconstruct the aggregate proof.

For aggregatable proofs, my idea was to go for the second option: keep c secret from the signer. At HoneyBadger initialization, each node j generates c[j] uniformly at random, computes an additively homomorphic encryption e[j] = E(pk[j], c[j]) (this key can be different from the threshold key share), and gives e[j] to everyone else. Then node i can compute its proof for node j by computing f[i] = E(pk[j], r[i]) + sk[i]*e[j], which is an encryption of r[i] + sk[i]*c[j].

ChronusZ commented
Actually, after more thought, the additive homomorphism idea was dumb. Encrypting/decrypting the f[i] is probably significantly more expensive than just doing the elliptic curve exponentiations...

ChronusZ commented
Here's a more reasonable way of doing optimism. I'll describe it as a three-round protocol, but the first two steps can be done in advance by pipelining with the previous coin flip messages. It's maybe a bit annoying to implement since it requires pipelining, but it's efficient.

To start, everyone sends a1 and a2 to node j. Then node j waits for 2t+1 such messages, picks the set S as the set of nodes it received these messages from, and then picks a uniformly random value c[j] and sends c[j] to all the nodes in S. Upon receiving c[j], these nodes can then construct their proofs using c as the challenge, which can then be combined into a single aggregate proof since they all share the same challenge. Since |S|=2t+1, there are at least t+1 honest nodes in S, so we can wait for t+1 proofs, then aggregate them and check the combined proof; if this fails, we can check the individual proofs.

Using the pipelining, node i in round k generates a random value r[i][k+2] and (upon receiving j's (k-1)-th message) sends to node j a message of the form (r[i][k+2]*g, r[i][k+2]*hash(m[k+2]), c[i][k+1], r[i][k]+sk[i]*c[j][k], sk[i]*hash(m[k])). Note that this works because we know in advance which messages will be signed in future rounds.

afck (Collaborator, Author) commented Jun 18, 2018

Thanks, you're right, I think I get the additive homomorphism idea now! 🎉

Actually, after more thought, the additive homomorphism idea was dumb.

😩

By pipelining you mean that the message contains the third step of the current coin flip, the second step of the next one, and the first one of the one after that?

I think in Honey Badger the flips aren't really totally ordered: we may need only one or two flips for each instance of ABA (often none at all), but there are several ABA instances in parallel.

So if we used this method, we'd have a separate coin for each proposer (i.e. for each of the parallel ABA instances), but we'd need to reuse the coin from epoch 41's proposer-5 ABA for epoch 42's proposer-5 ABA to avoid having to restart the two-step initialization. The latter could only start once the former has terminated for everyone. We'd also need to keep track of how often the coin was flipped (so the simple Term message we introduced wouldn't suffice) and carry that information over to the latter, so that we know where to continue with the coin. That would complicate the code quite a bit and introduce a lot more coupling between otherwise independent modules.
But I guess that's what you mean by "a bit annoying to implement"…

ChronusZ commented
Good point about needing to keep track of how many coin rounds happened in an epoch. Alternatively, you can avoid pipelining on the coins themselves and instead pipeline on the BVAL and AUX messages. That still adds some extra coupling within the ABBA, but removes all the coupling outside of ABBA.

A further alternative is to avoid pipelining entirely and split the coin into two calls, setup and flip, where setup does the first two rounds and flip does the last round. This has the advantage of simplifying the implementation, and if setup is called more than two "message rounds" in advance, it usually shouldn't increase latency. But it is possible for the setup messages to be delayed longer than the other messages, which would then require blocking the flip until the setup finishes.

afck added the crypto-question label Aug 8, 2018