Cleanup readme
jlb6740 committed Sep 26, 2023
1 parent a4bf43a commit 0a857bc
Showing 1 changed file with 10 additions and 10 deletions.
20 changes: 10 additions & 10 deletions README.md
@@ -1,25 +1,25 @@
# WasmScore

## Intro
WasmScore aims to provide a view of WebAssembly performance when executed outside the browser. It uses a containerized suite of codes and leverages [Sightglass](https://github.com/bytecodealliance/sightglass) to benchmark the underlying platform. After running, an execution score and an efficiency score are provided for scoring the performance of Wasm on the underlying platform. In addition to scoring Wasm performance, the benchmark is also a tool capable of executing any assortment of individual tests, suites, or benchmarks supported by the driver. WasmScore is a work in progress.
WasmScore aims to benchmark platform performance when executing WebAssembly outside the browser. It leverages [Sightglass](https://github.com/bytecodealliance/sightglass) to run benchmarks and measure performance, and then summarizes the results as both an execution score and an efficiency score. In addition to providing scores for the platform, the benchmark is also a tool capable of executing other tests, suites, or individual benchmarks supported by the driver. WasmScore is a work in progress.

## Description
One of the most important and challenging aspects of benchmarking is deciding how to interpret the results; should you consider the results to be good or bad? To decide, you really need a baseline to serve as a point of comparison, and that baseline depends on what you are trying to achieve. For example, the baseline could be that same original source before some code transformation was applied, or it could be a modified configuration of the runtime that executes the WebAssembly. In the case of WasmScore (and specifically the wasmscore test), for every real code and micro that is run as Wasm, WasmScore also executes the native code compiled from the same high-level source used to generate the Wasm, to serve as a baseline. In this way WasmScore provides native execution of codes to serve as a comparison point for the Wasm performance, where this baseline can be seen as the theoretical upper bound for the performance of WebAssembly. This allows a user to quickly gauge the performance impact (hit) when using Wasm instead of a native compile of the same code. It also allows developers to find opportunities to improve compilers, Wasm runtimes, or the Wasm spec, or to suggest other solutions (such as Wasi) to address gaps.
A basic part of benchmarking is interpreting the results; should you consider the results to be good or bad? To decide, you need a baseline to serve as a point of comparison. For example, that baseline could be a measure of the performance before some code optimization was applied or before some configuration change was made to the runtime. In the case of WasmScore (specifically the wasmscore test), that baseline is the execution of the native code compiled from the same high-level source used to generate the Wasm. In this way, the native execution that serves as a comparison point for the Wasm performance also serves as an upper bound for the performance of WebAssembly. This allows gauging the performance impact of using Wasm instead of a native compile of the same code. It also allows developers to find opportunities to improve compilers, Wasm runtimes, or the Wasm spec, or to suggest other solutions (such as Wasi) to address gaps.
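
As a rough illustration of this baseline comparison, the sketch below computes per-benchmark Wasm-to-native time ratios and summarizes them with a geometric mean. The benchmark names, timings, and the use of a geometric mean here are illustrative assumptions only, not WasmScore's actual scoring code.

```python
# Illustrative sketch (not WasmScore's scoring implementation): given measured
# execution times for each benchmark compiled to Wasm and compiled natively,
# summarize the Wasm-vs-native gap as a geometric mean of per-benchmark ratios.
from math import prod

# Hypothetical measurements in seconds: {benchmark: (wasm_time, native_time)}
results = {
    "meshoptimizer": (2.41, 1.87),
    "libsodium": (0.95, 0.71),
}

# Per-benchmark slowdown of Wasm relative to the native baseline.
ratios = [wasm / native for wasm, native in results.values()]
geomean_slowdown = prod(ratios) ** (1.0 / len(ratios))

for name, (wasm, native) in results.items():
    print(f"{name}: Wasm takes {wasm / native:.2f}x the native time")
print(f"overall slowdown (geometric mean): {geomean_slowdown:.2f}x")
```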

## Benchmarks
Typically a benchmark reports either the amount of work done over a constant amount of time or the time taken to do a constant amount of work. The benchmarks here all do the latter. The initial commit of the benchmarks was pulled from Sightglass; however, the benchmarks used with WasmScore come from the local directory here and have no dependency on the benchmarks stored in the Sightglass repo. That said, how the benchmarks here are built and run does depend directly on changes to the external Sightglass repo.
Typically a benchmark reports either the amount of work done over a constant amount of time or the time taken to do a constant amount of work. The benchmarks here all do the latter. The initial commit of the benchmarks is pulled directly from Sightglass. How the benchmarks stored here are built and run will depend on the external Sightglass revision being used.
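
To make the fixed-work measurement model concrete, here is a minimal sketch assuming a stand-in workload and simple repeated timing; it illustrates the idea only and is not how Sightglass drives or instruments its benchmarks.

```python
# Minimal sketch of the fixed-work model described above (not Sightglass's
# implementation): the workload size stays constant and we record how long
# each of several repeated runs takes, keeping the full set of samples.
import statistics
import time

def fixed_work(n: int = 200_000) -> int:
    # Stand-in benchmark body: a constant amount of work per run.
    return sum(i * i for i in range(n))

def measure(runs: int = 10) -> list[float]:
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fixed_work()
        samples.append(time.perf_counter() - start)
    return samples

samples = measure()
print(f"median: {statistics.median(samples) * 1000:.2f} ms, "
      f"min: {min(samples) * 1000:.2f} ms over {len(samples)} runs")
```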

Benchmarks are often categorized based on their purpose and origin. Two such buckets are (1) codes written with the original intent of being user facing (hot paths in library codes, typical application usage, etc.) and (2) codes written specifically to target benchmarking some important or commonly used code construct or platform component. WasmScore does not aim to favor either of these benchmarking buckets, as both are valuable in the evaluation of standalone Wasm performance depending on what you want to test and what you are trying to achieve.
Benchmarks are often categorized based on their purpose and origin. Two example buckets include (1) codes written with the original intent of being user facing and (2) codes written specifically to target benchmarking some important or commonly used code construct or platform component. WasmScore does not aim to favor one of these over the other, as both are valuable and relevant in the evaluation of standalone Wasm, depending on what you are trying to learn.

## WasmScore Principles
WasmScore aims to serve as a standalone Wasm benchmark and benchmarking framework that:
- Is convenient to build and run, with useful and easy-to-interpret results.
- Is portable, enabling cross-platform comparisons.
- Provides a breadth of coverage for typical current standalone use cases and expected future use cases.
- Can be executed in a way that is convenient to analyze.
WasmScore aims to:
- Be convenient to build and run, with useful and easy-to-interpret results.
- Be portable, enabling cross-platform comparisons.
- Provide wide coverage of typical and interesting standalone use cases.
- Be convenient to analyze with common perf tools.

## WasmScore Tests
Any number of tests can be created, but "wasmscore" is the initial and default test. It includes a mix of relevant in-use codes and platform-targeted benchmarks for testing Wasm performance outside the browser. The test is a collection of several subtests (also referred to as suites):
"wasmscore" is the initial and default test. It includes a mix of benchmarks for testing Wasm performance outside the browser. The test is a collection of several subtests:

### wasmscore (default):
- App: [‘Meshoptimizer’]