
transports/webrtc: Implement deterministic certificates generation #12

Open · wants to merge 18 commits into base: anton/webrtc-transport
Conversation

@thomaseizinger commented Oct 25, 2022

Description

This PR implements deterministic certificate generation by means of ring and the simple_x509 library.

Closes libp2p#3049.

Links to any relevant issues

Open tasks

Change checklist

  • I have performed a self-review of my own code
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • A changelog entry has been made in the appropriate crates

@thomaseizinger changed the title from "WIP: Deterministic certificates" to "Implement deterministic certificates generation" on Oct 25, 2022
///
/// let transport = Transport::new(id_keys, certificate);
/// ```
- pub fn new(id_keys: identity::Keypair, certificate: RTCCertificate) -> Self {
+ pub fn new(id_keys: identity::Keypair, certificate: Certificate) -> Self {
@melekes (Owner):

Don't think we need to expose Certificate at all if we're able to derive it from id_keys

@thomaseizinger (Author):

We could do that but then it would always be deterministic and would not allow users to put a random one in if they want.

@thomaseizinger (Author):

Actually, I think I am coming around to this view. I don't think it is necessary for users to put a random certificate in. The more I think about it, the more I'd consider that a footgun. Making this an implementation detail gives us more control over the certificate generation and also makes sure we use a proper HKDF to seed the RNG.
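A minimal sketch of what such an HKDF-seeded derivation could look like, assuming the hkdf, sha2 and rand_chacha crates; the key accessor, salt and info string are illustrative assumptions, not this PR's actual code:

use hkdf::Hkdf;
use rand::SeedableRng;
use rand_chacha::ChaCha20Rng;
use sha2::Sha256;

/// Derive a deterministic RNG from the libp2p identity key material.
/// `identity_secret` stands in for however the private key bytes are obtained
/// from `identity::Keypair`; that accessor is assumed here.
fn rng_from_identity(identity_secret: &[u8]) -> ChaCha20Rng {
    // Domain-separate this use of the key with a fixed, versioned info string.
    let hkdf = Hkdf::<Sha256>::new(None, identity_secret);
    let mut seed = [0u8; 32];
    hkdf.expand(b"libp2p-webrtc-certificate", &mut seed)
        .expect("32 bytes is a valid HKDF-SHA256 output length");
    ChaCha20Rng::from_seed(seed)
}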

@thomaseizinger (Author):

I can't find any mention of EdDSA on https://w3c.github.io/webcrypto/#algorithm-overview, so I don't think browsers will support it.

@thomaseizinger (Author):

I've found https://docs.rs/x509-certificate/0.15.0/x509_certificate/index.html but it doesn't allow configuring the not_before parameter, which means the certificate will not be deterministic.

@thomaseizinger (Author):

My investigation so far:

  • Browsers don't seem to support EdDSA.
  • EdDSA makes deterministic certificates very easy because it doesn't require any additional randomness for the signature.
  • rcgen could be used if we were able to use EdDSA.
  • rcgen can't be used for ECDSA because it does not allow you to configure the randomness required for signing; it just instantiates SystemRandom.

@thomaseizinger (Author):

Just found https://github.com/zartarn15/simple_x509 which looks promising from an API PoV.

@thomaseizinger (Author):

I did it!

We have deterministic ECDSA certificates 🥳

@thomaseizinger marked this pull request as ready for review on October 26, 2022 06:24
@thomaseizinger changed the title from "Implement deterministic certificates generation" to "transports/webrtc: Implement deterministic certificates generation" on Oct 26, 2022
@melekes (Owner) left a comment

  • transports/webrtc/src/tokio/certificate.rs (outdated)
  • transports/webrtc/src/tokio/certificate.rs (outdated)
  • transports/webrtc/src/tokio/certificate.rs
@thomaseizinger (Author):

Damn, false alarm! For some reason there is still some hidden randomness somewhere, I just messed up the test.

@melekes (Owner) commented Oct 26, 2022

can someone with more crypto knowledge also review this? cc @tomaka @mxinden

@thomaseizinger (Author):

This won't work until briansmith/ring@fe9e4d0 is released.

@thomaseizinger (Author):

I started by sending a PR to rcgen: rustls/rcgen#98

@tomaka commented Oct 27, 2022

I haven't really reviewed, but we should be extremely careful that the deterministic way in which the certificate is built doesn't change between two versions of libp2p or one of its direct or indirect dependencies.

Since the certificate hash is in the multiaddr, and you're hardcoding multiaddrs as bootnodes when releasing a version of a peer-to-peer software, the effect of this hash changing would be that someone downloading the software for the first time can't connect anymore.

IMO this needs at least a unit test that compares a generated certificate with an expected one, with a huge capital case comment saying "this test shouldn't be removed and do not modify this expected certificate, otherwise code sitting on top of us will break".
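A sketch of the kind of regression test being asked for here; the constructor, accessor and expected bytes are placeholders, not this PR's actual API:

#[test]
fn deterministic_certificate_is_stable_across_releases() {
    // DO NOT DELETE THIS TEST OR CHANGE THE EXPECTED BYTES.
    // The certificate hash is embedded in multiaddrs that downstream software
    // hardcodes (e.g. bootnode addresses), so changing the generated bytes
    // silently breaks code sitting on top of us.
    const EXPECTED_DER: &[u8] = &[/* DER bytes of a known-good certificate */];

    let certificate = Certificate::from_seed([0u8; 32]) // hypothetical constructor
        .expect("generation from a fixed seed must not fail");

    assert_eq!(certificate.to_der(), EXPECTED_DER); // hypothetical accessor
}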

@tomaka commented Oct 27, 2022

For what it's worth, what I initially had in mind when I proposed deterministically-generated certificates for the spec was to do something like:

let mut certificate = HARDCODED_CERT_PREFIX.to_vec();
certificate.extend_from_slice(private_key);  // (derived from the libp2p public key in the original idea, but here would be derived from an actual private key)
certificate.extend_from_slice(HARDCODED_CERT_SUFFIX);

Where the prefix and suffix are in the spec.

Of course it doesn't actually need to be so strict here, because there's no need for all libp2p implementations to agree on how the certificate generation is done, but it would make sure that it's deterministic.

@thomaseizinger (Author):

IMO this needs at least a unit test that compares a generated certificate with an expected one, with a huge capital case comment saying "this test shouldn't be removed and do not modify this expected certificate, otherwise code sitting on top of us will break".

I did have that in a previous iteration but it wasn't useful in figuring out which part of my generation code is not deterministic. I fully agree with you that we need to assert against a hardcoded certificate, and I am planning on adding such an assertion as soon as we actually have the deterministic generation working!

For what it's worth, what I initially had in mind when I proposed deterministically-generated certificates for the spec was to do something like:

let mut certificate = HARDCODED_CERT_PREFIX.to_vec();
certificate.extend_from_slice(private_key);  // (derived from the libp2p public key in the original idea, but here would be derived from an actual private key)
certificate.extend_from_slice(HARDCODED_CERT_SUFFIX);

Where the prefix and suffix are in the spec.

Of course it doesn't actually need to be so strict here, because there's no need for all libp2p implementations to agree on how the certificate generation is done, but it would make sure that it's deterministic.

I am not sure I follow how this is meant to work. Can you explain? Would this certificate be entirely static and the private key known to everyone?

The struggle is that ECDSA signatures require a random nonce for each signature. If you sign multiple messages with the same private key and don't change the nonce, you will eventually leak your private key. If Schnorr hadn't patented his signature algorithm, we wouldn't be in this mess, but so is life :D

EdDSA also doesn't have this problem, but it operates on an entirely different curve and browsers are not required to support it.

Long story short, to deterministically produce an ECDSA signature, you need full control over the randomness source that is used to generate the nonce and that is only possible as of ring 0.17, which is not released yet.

@tomaka commented Oct 28, 2022

I am not sure I follow how this is meant to work. Can you explain? Would this certificate be entirely static and the private key known to everyone?

Sorry, I got confused. I didn't mean to include the private key in the certificate, but the public key and signature.
It's been ages since I last looked at what is inside a certificate, but the basic idea is to hardcode in the source code everything apart from the public key and signature.

Would this certificate be entirely static and the private key known to everyone?

In what I had originally suggested for the WebRTC libp2p spec, yes, the private key would be known to everyone. But that's not what I meant here.

@thomaseizinger (Author):

I am not sure I follow how this is meant to work. Can you explain? Would this certificate be entirely static and the private key known to everyone?

Sorry, I got confused. I didn't mean to include the private key in the certificate, but the public key and signature.
It's been ages since I last looked at what is inside a certificate, but the basic idea is to hardcode in the source code everything apart from the public key and signature.

There is not much that is mandatory for it to be valid. I think it is only issuer and validity dates.

Is your idea to use hardcoded bytes as a form of template so we don't need to deal with certificate generation libraries? That could work actually!

Unfortunately, if we want to use ring for signing, we still have to wait for the next release but we might get away with not depending on rcgen then!

@tomaka commented Oct 28, 2022

That could work actually!

Most importantly, it would guarantee determinism without relying on the implementation details of rcgen.
I fear that otherwise a naive rcgen contributor could for example do a small change that changes the order of the fields of the certificate, not knowing that it has consequences for us. And the consequences for us would be a silent breakage, which is quite bad.

@melekes (Owner) commented Oct 28, 2022

And the consequences for us would be a silent breakage, which is quite bad.

How is it silent if we have a test that asserts the bytes?

@tomaka commented Oct 28, 2022

The test will fail only if you update rcgen in the rust-libp2p repo.
If rcgen publishes a patch version, for example, a user can do cargo update and get a new version of rcgen with a different field order.

@melekes (Owner) commented Oct 30, 2022

The test will fail only if you update rcgen in the rust-libp2p repo. If rcgen publishes a patch version, for example, a user can do cargo update and get a new version of rcgen with a different field order.

Right, thanks.

@thomaseizinger (Author):

Stupid me, I already replaced rcgen in my latest attempt with simple_x509 🤦‍♂️

@thomaseizinger (Author):

Stupid me, I already replaced rcgen in my latest attempt with simple_x509 🤦‍♂️

Oh I remember why. It is because webrtc also depends on rcgen and we need all dependencies to update, even if we are not using the codepath that triggers rcgen in webrtc.

@thomaseizinger (Author):

The latest patch proves that the deterministic certificate generation works, and it turns out we don't need to patch rcgen. It is enough to patch webrtc with the latest ring version.

@thomaseizinger (Author):

The test will fail only if you update rcgen in the rust-libp2p repo. If rcgen publishes a patch version, for example, a user can do cargo update and get a new version of rcgen with a different field order.

I've pinned the simple_x509 version in the Cargo.toml. To my knowledge, that should prevent this issue.

I toyed around with actually hardcoding the bytes, but ASN.1 isn't exactly friendly for that due to its TLV structure. ECDSA public keys and signatures can vary in length, so I would have to compute these lengths and insert them in the correct ASN.1 encoding, then DER-encode the entire thing, run the signature algorithm, build the outer x509 ASN.1 structure and serialize all of that to DER.

We could use an ASN.1 library like yasna for this, but that exposes us to a dependency again. Overall, I think simple_x509 should do the trick for now. It is an implementation detail, so we can always change it to something else.
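To illustrate the TLV bookkeeping mentioned above (not code from this PR): every DER element is a tag byte, a length field and the value, so a variable-length key or signature forces every enclosing length to be recomputed, as in this toy encoder:

/// Wrap `value` in a DER OCTET STRING (tag 0x04). Short-form lengths only,
/// i.e. values shorter than 128 bytes; real certificates also need the long form.
fn der_octet_string(value: &[u8]) -> Vec<u8> {
    assert!(value.len() < 128, "long-form DER lengths are out of scope for this sketch");
    let mut out = Vec::with_capacity(2 + value.len());
    out.push(0x04); // tag: OCTET STRING
    out.push(value.len() as u8); // length of the value
    out.extend_from_slice(value);
    out
}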

@thomaseizinger (Author):

That could work actually!

Most importantly, it would guarantee determinism without relying on the implementation details of rcgen. I fear that otherwise a naive rcgen contributor could for example do a small change that changes the order of the fields of the certificate, not knowing that it has consequences for us. And the consequences for us would be a silent breakage, which is quite bad.

I thought more about this.

Should we introduce some kind of identifier / version that users can pass in for how they want their certificate to be generated? In case ring ever ships RFC6979, it would be nice to "upgrade" the certificate generation to use that instead of seeding an RNG. With a bit more effort, we could probably get rid of the RNG altogether and directly use the output of the HKDF as the ECDSA private key.

That would be a much cleaner implementation but it will yield a different certificate. Without some kind of version / identifier, we won't be able to provide an upgrade path to users and thus are forever locked into the way we generate certificates.

Thoughts?
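One possible shape for such a version identifier, purely as a sketch (the type name and variants are assumptions, not part of this PR):

/// Identifies how a certificate is derived from the identity key, so that a
/// cleaner derivation (e.g. RFC6979 nonces) can be added later without
/// breaking peers that still rely on the old certhash.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum CertificateVersion {
    /// An HKDF over the identity key seeds a ChaCha20 RNG used for ECDSA signing.
    V1SeededRng,
    /// Deterministic ECDSA nonces per RFC6979, once ring supports them.
    V2Rfc6979,
}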

@thomaseizinger (Author):

That could work actually!

Most importantly, it would guarantee determinism without relying on the implementation details of rcgen. I fear that otherwise a naive rcgen contributor could for example do a small change that changes the order of the fields of the certificate, not knowing that it has consequences for us. And the consequences for us would be a silent breakage, which is quite bad.

I thought more about this.

Should we introduce some kind of identifier / version that users can pass in for how they want their certificate to be generated?

[snip]
Without some kind of version / identifier, we won't be able to provide an upgrade path to users and thus are forever locked into the way we generate certificates.

Thoughts?

I think the upgrade path here is to operate old bootnodes for a while until new versions are shipped that point to new bootnodes that use a different certhash in their multiaddr.

As a result, I don't think we need to ship an upgrade mechanism in code here but leave it up to the application.

@melekes (Owner) commented Nov 9, 2022

I think the upgrade path here is to operate old bootnodes for a while until new versions are shipped that point to new bootnodes that use a different certhash in their multiaddr.
As a result, I don't think we need to ship an upgrade mechanism in code here but leave it up to the application.

I agree. We can just add another method like generate_from_seed and have new nodes use this method to derive a certificate.
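A sketch of what that API surface could look like; the constructor names, signatures and error type are assumptions rather than anything settled in this PR:

impl Certificate {
    /// Generate a certificate from a caller-provided source of randomness.
    pub fn generate<R: rand::Rng + rand::CryptoRng>(rng: &mut R) -> Result<Self, Error> {
        todo!()
    }

    /// Deterministically derive a certificate from a 32-byte seed,
    /// e.g. one derived from the libp2p identity key via an HKDF.
    pub fn generate_from_seed(seed: [u8; 32]) -> Result<Self, Error> {
        todo!()
    }
}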

@mxinden left a comment:

Thanks for the feature @thomaseizinger.

I still have to dive deeper into the cryptographic primitives used here.

In the meantime, a couple of comments.

@MarcoPolo given your experience on libp2p/go-libp2p#1833, would you mind taking a look as well?


let certificate = simple_x509::X509::builder()
    .issuer_utf8(Vec::from(ORGANISATION_NAME_OID), "rust-libp2p")
    .subject_utf8(Vec::from(ORGANISATION_NAME_OID), "rust-libp2p")
@mxinden:

Do we need to set these? If I am not mistaken, Go doesn't do it for WebTransport.

https://github.com/libp2p/go-libp2p/blob/3387eb7069a522a3a0081404f8b7d82c5c4626f8/p2p/transport/webtransport/crypto.go#L62

Comment on lines +71 to +72
pub fn new(id_keys: identity::Keypair) -> Result<Self, CertificateError> {
let certificate = Certificate::new(&id_keys)?;
@mxinden:

Related past discussion #12 (comment)

I am not yet decided on whether we should only support deterministic certificate generation. Rationale:

  • The certificate is valid till the year 3000.
  • In case the TLS private key is compromised (leaked, insufficiently secure, ...) an attacker can impersonate the local peer after the Noise handshake with a remote.
  • Thus the impersonation attack surface is increased from 1 (libp2p private key) to 2 (libp2p private key + TLS private key).

Related section in the specification:

  • Why exchange fingerprints in an additional authentication handshake on top of
    an established WebRTC connection? Why not only exchange signatures of ones TLS
    fingerprints signed with ones libp2p private key on the plain WebRTC
    connection?

    Once A and B established a WebRTC connection, A sends
    signature_libp2p_a(fingerprint_a) to B and vice versa. While this has the
    benefit of only requiring two messages, thus one round trip, it is prone to a
    key compromise and replay attack. Say that E is able to attain
    signature_libp2p_a(fingerprint_a) and somehow compromise A's TLS private
    key, E can now impersonate A without knowing A's libp2p private key.

    If one requires the signatures to contain both fingerprints, e.g.
    signature_libp2p_a(fingerprint_a, fingerprint_b), the above attack still
    works, just that E can only impersonate A when talking to B.

    Adding a cryptographic identifier of the unique connection (i.e. session) to
    the signature (signature_libp2p_a(fingerprint_a, fingerprint_b, connection_identifier)) would protect against this attack. To the best of our
    knowledge the browser does not give us access to such identifier.

https://github.com/libp2p/specs/tree/master/webrtc#faq

This might be overly cautious. One could argue that people should rotate their libp2p private keys regularly and thus also automatically rotate the TLS private keys.

Also related: people interested in running WebRTC on their bootstrap nodes could also use the dnsaddr feature. One would only hardcode the bootstrap node's libp2p public key in the client. The client would learn the current TLS certificate fingerprint via the dnsaddr TXT record.

What do folks think? In addition to the deterministic certificate generation, should we also offer a random certificate code path? Which one should be the default?

@melekes (Owner):

I think providing an option of having a deterministic certificate is good. In general there would be two options: 1) generate a new certificate and (optionally) save/load it from disk; 2) derive a certificate from the libp2p private key. Users will be free to choose whichever option they see fit. We could add more docs explaining the downsides of each approach.

What do folks think? In addition to the deterministic certificate generation, should we also offer a random certificate code path? Which one should be the default?

If that would simplify UX, sure 👍

serde = { version = "1.0", features = ["derive"] }
sha2 = "0.10.6"
simple_x509 = "=0.2.2" # Version MUST be pinned to ensure a patch-update doesn't accidentally break deterministic certs.
@mxinden commented Nov 14, 2022:

This crate has 0 stars on GitHub, 983 total downloads (none this month), and I don't see any popular repository in the author's GitHub.

I am not saying that this makes a crate untrustworthy by itself. I do believe we should have some indicator of trust for all dependencies that we use. Do we have any trust anchor for this crate?

@melekes (Owner) commented Nov 15, 2022:

Do we have any trust anchor for this crate?

The fact that we're pinning the concrete version, and the library is small enough for us to review it? Alternatively, we can fork it to rust-libp2p if that would increase trust.

use rand::{CryptoRng, Rng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use ring::signature::{EcdsaKeyPair, KeyPair, ECDSA_P256_SHA256_FIXED_SIGNING};
use ring::test::rand::FixedSliceSequenceRandom;
@mxinden:

Is this meant for production code?


@mxinden commented Jan 4, 2023

Friendly ping @thomaseizinger. What are the next steps here? Would be unfortunate for the work here to never make it anywhere.

@melekes (Owner) commented Jan 4, 2023

Friendly ping @thomaseizinger. What are the next steps here? Would be unfortunate for the work here to never make it anywhere.

I think we're still waiting for a new briansmith/ring#1551 ring version, which takes ages 😞 That said, maybe we can merge this code somehow with a compilation flag or something like that. WDYT?

@thomaseizinger (Author):

I think the proper way of resolving this issue is to use RFC6979, likely by implementing it for ring. That way, we can actually get rid of the randomness altogether and make the signatures properly deterministic.

Anything else I'd consider a hack that is prone to break as soon as implementation details change.

Once ring supports RFC6979 and all dependencies are updated to that ring version, the API we can expose is simply one that takes a byte slice as a seed (and not an RNG). It is then up to the user to fill that byte slice with random data, use an HKDF, provide a fixed byte array, or do whatever else may come to mind.

@dariusc93 commented May 2, 2024

Any updates or news on this or would it be best to wait on RFC6979?

/CC @thomaseizinger

EDIT: We could also open a new issue over at rust-libp2p about this (if one doesn't exist already) so there is a consistent point of reference.

@DougAnderson444:
I think we're still waiting for a new briansmith/ring#1551 ring version

0.17.0 has been released
