
How to blame the disruptive signing participant? #5

siv2r opened this issue Jun 17, 2024 · 6 comments

@siv2r (Owner) commented Jun 17, 2024

In NonceAgg or PartialSigAgg, we can assign blame to a signer in two ways:

Method 1: Index of Invalid Value

  • This approach is used by BIP327.
  • We identify the index in the list where the invalid nonce or partial signature is found and assign blame to that index.

Method 2: Participant Identifier

  • This BIP currently uses this method.
  • In addition to the list of nonces/partial signatures, we receive a list of participant identifiers associated with each value.
  • When an invalid nonce or partial signature is detected, we assign blame to the corresponding participant identifier instead of the index.

I went with Method 2 because FROST includes the participant identifier parameter, which is absent in MuSig2. However, this approach has the following issue: If the participant identifier list contains invalid ids, we can’t accurately assign blame.
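
For illustration, here is a minimal Python sketch contrasting the two blame styles. The names (`InvalidContributionError`, `verify_contrib`) are placeholders rather than this BIP's actual API, and the aggregation itself is elided:

```python
class InvalidContributionError(Exception):
    """Signals a bad contribution and carries the blame target."""
    def __init__(self, blame):
        super().__init__(f"invalid contribution from {blame!r}")
        self.blame = blame

def agg_method1(contribs, verify_contrib):
    # Method 1 (BIP327 style): blame the list index of the invalid value.
    for i, contrib in enumerate(contribs):
        if not verify_contrib(contrib):
            raise InvalidContributionError(i)
    # ... aggregation elided ...

def agg_method2(contribs, ids, verify_contrib):
    # Method 2 (this BIP): blame the participant identifier paired with the value.
    # If `ids` itself contains wrong identifiers, the blame is misdirected.
    assert len(contribs) == len(ids)
    for contrib, pid in zip(contribs, ids):
        if not verify_contrib(contrib):
            raise InvalidContributionError(pid)
    # ... aggregation elided ...
```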


How to fix this?

Option 1: we just assume all the values in the identifier list are valid.

  • We leave the identification mechanism to the coordinator or the participant group.
  • The participants/coordinator should be able to accurately identify the other participants from whom they receive the nonces/partial sigs, so they can construct a valid list of identifiers.

Option 2: we can check for invalid id values inside NonceAgg or PartialSigAgg

cc @jonasnick @real-or-random @jesseposner

@siv2r (Owner) commented Jun 17, 2024

Option 2: we can check for invalid id values inside NonceAgg or PartialSigAgg

If we plan on doing these checks, we should also consider additional checks like:

$$\text{MIN\_PARTICIPANTS} \le \text{len}(id_{1..u}) \le \text{MAX\_PARTICIPANTS}$$

$$\text{MIN\_PARTICIPANTS} \le \text{len}(\text{pubnonce}_{1..u}) \le \text{MAX\_PARTICIPANTS}$$

which would require the NonceAgg and PartialSigAgg functions to also take MIN_PARTICIPANTS and MAX_PARTICIPANTS as input parameters.
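
A rough sketch of what such input validation inside NonceAgg/PartialSigAgg could look like, assuming ids are integers in 1..MAX_PARTICIPANTS (the function name and error messages are illustrative, not part of the BIP):

```python
def validate_agg_inputs(ids, pubnonces, min_participants, max_participants):
    if not (min_participants <= len(ids) <= max_participants):
        raise ValueError("number of ids out of range")
    if not (min_participants <= len(pubnonces) <= max_participants):
        raise ValueError("number of pubnonces out of range")
    if len(ids) != len(pubnonces):
        raise ValueError("ids and pubnonces must have equal length")
    if len(set(ids)) != len(ids):
        raise ValueError("duplicate participant id")
    for pid in ids:
        # Only meaningful if ids are indices in 1..MAX_PARTICIPANTS; see the
        # discussion of random / pubkey-derived identifiers below.
        if not (1 <= pid <= max_participants):
            raise ValueError(f"participant id {pid} out of range")
```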

@jesseposner (Collaborator) commented

Participant IDs can be random, in which case $1 \le id_{i} \le \text{MAX\_PARTICIPANTS}$ will not be true.

If I'm understanding this correctly, with either option, it's the responsibility of the caller of the API to assemble the correct participant identifier list. We should certainly validate the data to the extent possible, but I think the only things we can check are for invalid secp256k1 scalars and duplicates.
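
As a sketch, assuming ids are given as integers, those two checks (valid nonzero secp256k1 scalar, no duplicates) could look like this; the function name is illustrative:

```python
SECP256K1_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def check_ids(ids):
    seen = set()
    for pid in ids:
        if not (1 <= pid < SECP256K1_ORDER):
            raise ValueError(f"id {pid} is not a valid nonzero scalar")
        if pid in seen:
            raise ValueError(f"duplicate id {pid}")
        seen.add(pid)
```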

@real-or-random commented

Participant IDs can be random, in which case $1 \le id_{i} \le \text{MAX\_PARTICIPANTS}$ will not be true.

In BIP DKG, participant ids are long-term pubkeys of the participants, but internally (when it comes to Lagrange coefficients), we just use indices 1...n.
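
For context, the reason indices 1...n are all that's needed internally: the Lagrange coefficient for interpolating at x = 0 depends only on the signers' indices, not on their pubkeys. A minimal sketch (names are mine, not BIP DKG's):

```python
SECP256K1_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def lagrange_coeff_at_zero(signer_indices, my_index):
    # lambda_i = prod_{j != i} x_j / (x_j - x_i)  mod the group order
    num, den = 1, 1
    for j in signer_indices:
        if j == my_index:
            continue
        num = (num * j) % SECP256K1_ORDER
        den = (den * (j - my_index)) % SECP256K1_ORDER
    return (num * pow(den, -1, SECP256K1_ORDER)) % SECP256K1_ORDER
```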

I think there's some meta issue. With Jesse working on the implementation, Jonas and me working on BIP DKG, and Sivaram working on the signing BIP, it seems we have diverged on some design decisions, and also some of the terminology. We should probably synchronize, but I'm not entirely sure what's the best process. It may be a good idea to wait for Jonas, who is currently out of office.

@jesseposner (Collaborator) commented

In BIP DKG, participant ids are long-term pubkeys of the participants, but internally (when it comes to Lagrange coefficients), we just use indices 1...n.

Interesting, I'm curious to learn more about how we map from pubkeys to indices. Currently, in the implementation, we pass the pubkey to a hashing function to generate an index hash, and we don't get monotonically increasing integers, but rather randomized hash integers.
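
Roughly, it is something like this sketch (the tag string and the exact reduction are illustrative, not the implementation's actual choices):

```python
import hashlib

SECP256K1_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def id_from_pubkey(pubkey: bytes) -> int:
    # Hash the participant's pubkey and reduce it to a scalar; the result is a
    # "random-looking" identifier rather than a small consecutive index.
    digest = hashlib.sha256(b"FROST/id" + pubkey).digest()
    return int.from_bytes(digest, "big") % SECP256K1_ORDER
```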

We should probably synchronize, but I'm not entirely sure what's the best process. It may be a good idea to wait for Jonas, who is currently out of office.

Some synchronous time when Jonas is available sounds great.

@real-or-random commented

Interesting, I'm curious to learn more about how we map from pubkeys to indices.

What we currently do in BIP DKG is simply to expect the caller to provide an (ordered) list of pubkeys, and the position in the list is the index (where the first index is 1 instead of 0). The caller is free to pre-sort the list if they explicitly don't care about ordering. This is similar to key aggregation in the MuSig2 BIP.
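
In code, the mapping is roughly the following sketch (function name is illustrative):

```python
def indices_from_pubkeys(pubkeys):
    # The caller supplies an ordered list of pubkeys; a participant's index is
    # its 1-based position in that list. Index 0 is never assigned.
    if len(set(pubkeys)) != len(pubkeys):
        raise ValueError("duplicate pubkey")
    return {pk: i + 1 for i, pk in enumerate(pubkeys)}

# Callers that explicitly don't care about ordering can pre-sort, similar to
# optional key sorting in the MuSig2 BIP:
# indices = indices_from_pubkeys(sorted(pubkeys))
```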

Currently, in the implementation, we pass the pubkey to a hashing function to generate an index hash, and we don't get monotonically increasing integers, but rather randomized hash integers.

Yeah, this sounds like a great topic for bike shedding. :) IIRC we considered these (tiny) advantages of indices:

  • Integers are just a bit simpler because you don't need to hash, and iterating over a range of integers is simpler than iterating over a list of hashes.
  • You can precompute Lagrange coefficients independently of the pubkeys.

The disadvantage compared to hashing is that the implementer needs to be careful not to use index 0 (the share at index 0 would be the secret itself). But we found this risk to be acceptable because it's on the side of the implementation and not pushed to the user.

@jesseposner (Collaborator) commented

Ah, that makes sense.
