
WIP: Add fuzzer tests #143

Open
wants to merge 3 commits into main from fuzz
Conversation

@Kobzol commented Aug 28, 2019

Hi, I wanted to experiment with fuzzing in Rust, so I thought that I would start with this crate, since you have suggested that some fuzz tests would be nice (#63).

I simply added cargo fuzz and created a single fuzz target for each Collapse implementation. Since the trait is so simple, this should be enough. Is that ok?
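For reference, a cargo-fuzz target is just a small harness that feeds arbitrary bytes into the code under test. A minimal runnable sketch of that shape (using a hypothetical stand-in parser, not the real Collapse implementations) might look like:

```rust
// Sketch of the shape of a fuzz harness. In a real cargo-fuzz target the
// body of `fuzz_one` would live inside libfuzzer-sys's `fuzz_target!` macro;
// the "parser" below is a hypothetical stand-in for a Collapse implementation.
fn fuzz_one(data: &[u8]) {
    // The only property checked is "never panics on arbitrary input";
    // rejecting bad input with an error is perfectly fine.
    if let Ok(text) = std::str::from_utf8(data) {
        let _ = text.lines().filter(|l| !l.is_empty()).count();
    }
}

fn main() {
    fuzz_one(b"");                 // empty input must not panic
    fuzz_one(&[0xff, 0xfe]);       // invalid UTF-8 must not panic
    fuzz_one(b"main;helper 1\n");  // plausible collapsed-stack line
}
```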

We should also define what you actually consider a bug in the library versus a "bug" in the input. For example, immediately after I started fuzzing perf, it crashed, because it fails on an assertion (https://github.com/jonhoo/inferno/blob/master/src/collapse/perf.rs#L173) when the input file is empty. For command-line usage this is probably not much different from a controlled notification about bad input via a Result, but for library usage a crash like that on invalid input is probably not something you want. If I comment out the assert, the fuzzer doesn't seem to find any other crash in perf, so maybe the assert is superfluous?

I didn't run the fuzzers for long, but they also found one other crash, in dtrace (related to an overflow in a subtraction).
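An unsigned subtraction that overflows panics in debug builds (and wraps in release), which is exactly the kind of edge case a fuzzer surfaces. A small generic sketch of the usual fix, using checked arithmetic instead of bare `-` (this is not the actual dtrace.rs code):

```rust
fn main() {
    let depth: usize = 0;

    // `depth - 1` would panic with "attempt to subtract with overflow" in
    // debug builds; the checked/saturating forms make the edge case explicit.
    assert_eq!(depth.checked_sub(1), None);
    assert_eq!(depth.saturating_sub(1), 0);
    assert_eq!(5usize.checked_sub(2), Some(3));
}
```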

Cargo fuzz requires nightly though, so the fuzz tests would have to be executed in CI using nightly.

@Kobzol Kobzol force-pushed the fuzz branch 2 times, most recently from 2f9b316 to 4602c2d Compare August 28, 2019 11:14
@Kobzol Kobzol changed the title [WIP] Add fuzzer tests WIP: Add fuzzer tests Aug 28, 2019
@jonhoo (Owner) commented Aug 28, 2019

That's awesome, thank you!

I think it would probably be better to return errors about malformed input in those cases rather than rely on assertions (cc @jasonrhansen), which would also let you fuzz better.

Running fuzzing only on nightly seems fine to me!

@jasonrhansen (Collaborator)

As for the assertion in perf.rs, we could replace it with something like this:

        if self.event_filter.is_none() {
            return Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "no event filter found while processing input",
            ));
        }
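With a change like that, a library consumer sees an ordinary error value instead of a process abort. A runnable sketch of the caller-side difference, using a hypothetical `collapse` stand-in rather than the real perf.rs code:

```rust
use std::io;

// Hypothetical stand-in for the perf collapser: empty input now yields an
// InvalidData error instead of tripping an assertion.
fn collapse(input: &str) -> io::Result<String> {
    if input.is_empty() {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "no event filter found while processing input",
        ));
    }
    Ok(input.to_uppercase())
}

fn main() {
    // Bad input is now a recoverable error, not a panic.
    let err = collapse("").unwrap_err();
    assert_eq!(err.kind(), io::ErrorKind::InvalidData);
    assert_eq!(collapse("stack").unwrap(), "STACK");
}
```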

@jasonrhansen (Collaborator)

In dtrace.rs, maybe we should check whether the stack is empty at the beginning of on_stack_end, log a warning, and return early? Something like this:

        if self.stack.is_empty() {
            warn!("on_stack_end called with an empty stack");
            return;
        }

@Kobzol (Author) commented Aug 29, 2019

@jonhoo Do you want to run fuzzing on CI? Cirrus CI is only used for FreeBSD, and the rest is tested on Azure, which uses some predefined templates?

I could add cargo +nightly install cargo-fuzz && cargo +nightly fuzz run <targets> to Cirrus, but I'm not sure how to modify the Azure templates :)

@jonhoo (Owner) commented Aug 29, 2019

I think we probably only need to run fuzzing on one of the CIs. Probably Azure. What you'll want to do is add another "stage" just before the empty line that precedes the line reading resources:

 - stage: fuzz
   displayName: fuzzing tests
   jobs:
     - job: fuzz
       displayName: cargo +nightly fuzz
       pool:
         vmImage: ubuntu-16.04
       steps:
        - template: azure/install-rust.yml@templates
          parameters:
            rust: nightly
        - checkout: self
          submodules: recursive
        - script: cargo install cargo-fuzz
          displayName: Install cargo fuzz
        - script: cargo fuzz run <targets>
          displayName: Run fuzzer

[Review comment on azure-pipelines.yml: outdated, resolved]
@Kobzol (Author) commented Aug 30, 2019

Is Azure currently integrated into this GitHub repository (as it is, for example, in tokio), or do you only see the CI results privately? We also have to choose some timeout for the fuzzing, and maybe some conditions for the fuzzing stage (for example, running it only for merge commits).

@jonhoo (Owner) commented Aug 30, 2019

Looks like it was misconfigured in some way — should be fixed now! You may want to merge with master.

@Kobzol (Author) commented Aug 30, 2019

Ok, I rebased onto master. However, there is a problem with clippy that will possibly require an update of crossbeam-channel.

@jonhoo (Owner) commented Aug 30, 2019

Ah, yes, that seems to be crossbeam-rs/crossbeam#404.
I've pushed changes to master that should make things work again. Try merging again?

@jonhoo (Owner) commented Aug 30, 2019

Looks like coverage sometimes crashes due to xd009642/tarpaulin#190. Since your fuzzing stage now comes after coverage testing, it might not even run if that triggers. Try adding the following to the new stage you added:

   dependsOn: test

@jonhoo (Owner) commented Aug 30, 2019

@Kobzol (Author) commented Aug 30, 2019

Cool :) Do you have access to the artifact directory? The test input was written to /home/vsts/work/1/s/fuzz/artifacts/perf/crash-da39a3ee5e6b4b0d3255bfef95601890afd80709, but it probably has to be picked up manually. Maybe it would be possible to recreate the same input locally by running the fuzzer with the same random seed, but I didn't find such an option in the documentation of cargo fuzz or libFuzzer (it expects the generated corpus, but that is also on the Azure filesystem).
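If the crash file itself can be downloaded, replaying it locally doesn't need the corpus or a seed: cargo fuzz can re-run a target on a single input file (standard cargo-fuzz behavior; the artifact path below is the one from the CI log):

```shell
# Replay a single crashing input against the perf fuzz target.
cargo +nightly fuzz run perf fuzz/artifacts/perf/crash-da39a3ee5e6b4b0d3255bfef95601890afd80709
```

Incidentally, libFuzzer names artifacts after the SHA-1 of the input, and da39a3ee5e6b4b0d3255bfef95601890afd80709 is the SHA-1 of the empty input, so this is most likely the empty-file assertion crash again.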

@jonhoo (Owner) commented Aug 30, 2019

Hmm, I think for that you need to add another step at the end that publishes a Pipeline Artifact:

- publish: $(System.DefaultWorkingDirectory)/fuzz/artifacts/
  condition: failed()
  artifact: fuzz

I think that step should still execute even if the previous one failed (due to the condition), but I'm not entirely sure. I also don't know where to download the artifact from once it's been published, though the docs seem to hint that it's possible.

@Kobzol (Author) commented Aug 30, 2019

Uhh, the terminology here is atrocious; there are multiple types of artifacts. In GitLab it's pretty easy, but in Azure I'm a bit lost :-) I think you might be able to click on the pipeline run; it currently shows "Artifacts: -", which will maybe become clickable once some build artifacts are exported.

I half expect that the failed() condition will only fire when the immediate predecessor step fails (let's hope not :-) ). Let's see.

@jonhoo (Owner) commented Aug 30, 2019

Huh, that seems to not even have triggered a build...? :/

3 participants