Improve echidna heuristics documentation #980
Hi @aviggiano, thanks a lot for creating a new issue. Sorry for the delay in getting you some answers; let's take a look at your list of questions:
@ggrieco-tob Thanks for the thorough response. I can help out with number 3, if you think it's a good idea. Since I have already developed Terraform templates that create instances of different sizes and measure CPU, memory, and elapsed time, it would be fairly easy to spawn a set of machines and record the results. The only problem is that I don't know which projects/configs to use. I could take some well-known DeFi projects that already use echidna, such as Compound and Uniswap, but other recommendations would be welcome.
Hi. Regarding the performance benchmark, I did some tests with Uniswap's V3 core contracts. From what I was able to find out, until #963 gets merged, choosing a bigger instance with more cores does not yield significant results: it doubles the cost but improves test speed only slightly. In fact, for this test, it seems the cheaper the instance, the better. I will re-run these tests once multicore is available, and expand the benchmark to other projects in order to get a more comprehensive dataset than the current one.
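One way to make the "cheaper is better" observation concrete is to compare instances by throughput per dollar rather than raw speed. The numbers below are hypothetical placeholders, not the benchmark's actual results; a sketch:

```python
def calls_per_dollar(calls_per_second: float, price_per_hour: float) -> float:
    """Throughput normalized by cost: fuzzing calls executed per dollar spent."""
    return calls_per_second * 3600 / price_per_hour

# Hypothetical numbers: pre-#963, a 2xlarge costs twice as much as an
# xlarge but runs the campaign only slightly faster.
xlarge = calls_per_dollar(calls_per_second=1000, price_per_hour=0.17)
xxlarge = calls_per_dollar(calls_per_second=1100, price_per_hour=0.34)
print(xlarge > xxlarge)  # prints True: the cheaper instance wins on cost-effectiveness
```

With single-core fuzzing, doubling the instance size doubles the denominator without moving the numerator much, so the smaller instance dominates on this metric.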
@aviggiano can you re-run these experiments using #963? We want to merge it soon, and we want to make sure it is solid.
@ggrieco-tob great news! It seems like #963 really provides a significant boost in performance and cost-benefit. For the sake of simplicity, I ran a single test (the longest one); I will try to repeat the experiment in the future with other codebases and other configurations.
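Assuming the multicore support in #963 is exposed as a worker-count setting (as in later echidna releases, where it is the `workers` config key / `--workers` flag), enabling it would be a one-line config change; a sketch, with illustrative values:

```yaml
workers: 4           # parallel fuzzing workers (assumes the #963 multicore option)
testLimit: 1000000   # total transactions across all workers
```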
I am developing fuzzy.fyi, a project that helps execute long echidna runs on AWS; I am testing it on an ERC4626 vault from Pods. The idea is to run smart contract fuzzing campaigns in the cloud without resource-intensive runs disrupting the developer's workflow.
While working on this tool, we stumbled upon some parameter choices that influence fuzzer performance but do not seem to be well documented. Some of these parameters appear to be set by rules of thumb or heuristics, so I would like to ask about them here:
1. How should we choose `testLimit` and `seqLen` values? What values does Trail of Bits usually use during its audits? How long should a "good" run last (hours, days, weeks)?
2. Should we use a compute-optimized instance (such as `c5`) or a memory-optimized instance (such as `r5`)?
3. Does fuzzer performance scale linearly with the number of cores? If we choose a `2xlarge` instance, should we expect half of the time to run a campaign from an `xlarge` instance?
4. What is the relationship between `testLimit` and `seqLen`? Meaning: when should you increase one or the other? How can we calculate the fuzzer "performance" (meaning, probability to find bugs), assuming the choice of these variables has an impact on it?
5. Is it better to run N tests with `testLimit` 100k and the corpus enabled, or 1 run with `testLimit` 1M?
6. Sometimes long-lasting runs (`testLimit` 1M) are terminated by the OOM killer after many hours. What is the recommendation when that happens? I think getting a bigger instance would just hide the problem.
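For context, a minimal echidna config touching the parameters the questions above ask about might look like the following; the values are illustrative, not recommendations:

```yaml
testLimit: 1000000    # total number of transactions to generate
seqLen: 100           # transactions per sequence before resetting state
coverage: true        # enable coverage-guided fuzzing
corpusDir: "corpus"   # persist the corpus so later runs can resume from it
```

Persisting the corpus via `corpusDir` is what makes the "N shorter runs vs. one long run" question in point 5 meaningful, since shorter runs can then build on each other.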