benchmarking.md: Explain how to compare benchmarks
glebm committed Aug 19, 2024
1 parent e99165b commit aabc7a6
Showing 2 changed files with 35 additions and 3 deletions.
30 changes: 30 additions & 0 deletions docs/benchmarking.md
@@ -32,3 +32,33 @@ tools/linux_reduced_cpu_variance_run.sh build-reld/clx_render_benchmark
See `tools/build_and_run_benchmark.py --help` for more information.

You can also [profile](profiling-linux.md) your benchmarks.


## Comparing benchmark runs

You can use [compare.py from Google Benchmark](https://github.com/google/benchmark/blob/main/docs/tools.md) to compare two benchmark runs.

First, install the tool:

```bash
git clone git@github.com:google/benchmark.git ~/google-benchmark
cd ~/google-benchmark/tools
pip3 install -r requirements.txt
cd -
```
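
Optionally, sanity-check the setup before using it (a quick check suggested here, not part of the committed docs):

```bash
~/google-benchmark/tools/compare.py --help
```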

Then, build the two binaries that you'd like to compare. For example:

```bash
BASELINE=master
BENCHMARK=dun_render_benchmark
git checkout "$BASELINE"
tools/build_and_run_benchmark.py -B "build-reld-${BASELINE}" --no-run "$BENCHMARK"

git checkout -
tools/build_and_run_benchmark.py --no-run "$BENCHMARK"

tools/linux_reduced_cpu_variance_run.sh ~/google-benchmark/tools/compare.py -a benchmarks \
"build-reld-${BASELINE}/${BENCHMARK}" "build-reld/${BENCHMARK}" \
--benchmark_repetitions=10
```
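
Not part of this commit, but a related sketch: compare.py can also take previously saved Google Benchmark JSON output files in place of executables, so you can record the baseline's results once and compare later contenders against them without re-running the baseline binary. The `baseline.json` and `contender.json` file names below are just illustrative:

```bash
# Run each binary once, saving its results to JSON (file names are illustrative).
tools/linux_reduced_cpu_variance_run.sh "build-reld-${BASELINE}/${BENCHMARK}" \
  --benchmark_repetitions=10 --benchmark_out=baseline.json --benchmark_out_format=json
tools/linux_reduced_cpu_variance_run.sh "build-reld/${BENCHMARK}" \
  --benchmark_repetitions=10 --benchmark_out=contender.json --benchmark_out_format=json

# Compare the two saved runs; nothing is re-executed here.
~/google-benchmark/tools/compare.py benchmarks baseline.json contender.json
```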
8 changes: 5 additions & 3 deletions tools/build_and_run_benchmark.py
@@ -71,6 +71,7 @@ def main():
         nargs="*",
         help="arguments passed to the benchmark binary",
     )
+    parser.add_argument("--run", action=argparse.BooleanOptionalAction, default=True, help="If false, only builds the target")
     args = parser.parse_args()
     build = args.build
     if not build:
@@ -81,9 +82,10 @@
     try:
         maybe_create_build_dir(build, configure_args)
         build_target(build, args.target)
-        run_benchmark(build, args.target, args.benchmark_args, args.gperf)
-        if args.gperf:
-            run_pprof(build, args.target, args.port)
+        if args.run:
+            run_benchmark(build, args.target, args.benchmark_args, args.gperf)
+            if args.gperf:
+                run_pprof(build, args.target, args.port)
     except subprocess.CalledProcessError as e:
         print("Error:", e.cmd[0], "failed", file=sys.stderr)
         return e.returncode
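
For context (an illustration, not part of the diff): `argparse.BooleanOptionalAction` registers both a `--run` and a `--no-run` form of the flag, and the default stays `True`, so existing invocations keep building and running the benchmark:

```bash
tools/build_and_run_benchmark.py dun_render_benchmark           # builds and runs (default)
tools/build_and_run_benchmark.py --no-run dun_render_benchmark  # builds only, as used in the docs above
```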
