metatests


metatests is an extremely simple-to-use test framework and runner for the Metarhia technology stack, built on the following principles:

  • Test cases are files; tests are either imperative (functions) or declarative (arrays and structures).

  • Assertions are done using the built-in Node.js assert module. The framework also provides additional testing facilities (like spies).

  • Tests can be run in parallel.

  • All tests are executed in isolated sandboxes. The framework makes it easy to mock modules required by tests and provides ready-to-use mocks for timers and other core functionality.

  • Testing asynchronous operations must be supported.

  • Testing pure functions without asynchronous operations and state can be done without extra boilerplate code, using a DSL based on arrays:

    // assumes: const mt = require('metatests');
    // and a namespace under test, e.g. const common = require('@metarhia/common');
    mt.case(
      'Test common.duration',
      { common },
      {
        // ...
        'common.duration': [
          ['1d', 86400000],
          ['10h', 36000000],
          ['7m', 420000],
          ['13s', 13000],
          ['2d 43s', 172843000],
          // ...
        ],
        // ...
      },
    );

    (Prior art)

  • The framework must work in Node.js and browsers (using Webpack or any other module bundler that supports CommonJS modules and emulates Node.js globals).


API

Interface: metatests

case(caption, namespace, list, runner)

  • caption: <string> case caption
  • namespace: <Object> namespace to use in this case test
  • list: <Object> hash of <Array>; hash keys are function and method names. Each <Array> contains call parameters; the last item is either the expected result (to compare) or a <Function> (the result is passed to it for checking)
  • runner: <Runner> runner for this case test, optional, default: metatests.runner.instance

Create a declarative test.
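
For example, a minimal declarative case based on the signature above (the `math` namespace and its `square` method are illustrative):

    const metatests = require('metatests');

    // hypothetical namespace under test
    const math = { square: (x) => x * x };

    metatests.case(
      'Check math.square',
      { math },
      {
        'math.square': [
          [2, 4],
          [3, 9],
          // the last item may be a function receiving the result
          [5, (result) => result === 25],
        ],
      },
    );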

class DeclarativeTest extends Test

DeclarativeTest.prototype.constructor(caption, namespace, list, options)
DeclarativeTest.prototype.run()
DeclarativeTest.prototype.runNow()

equal(val1, val2)

strictEqual(val1, val2)

class reporters.Reporter

reporters.Reporter.prototype.constructor(options)
  • options: <Object>
    • stream: <stream.Writable> optional
reporters.Reporter.prototype.error(test, error)

Fail test with error

reporters.Reporter.prototype.finish()
reporters.Reporter.prototype.log(...args)
reporters.Reporter.prototype.logComment(...args)
reporters.Reporter.prototype.record(test)
  • test: <Test>

Record test

class reporters.ConciseReporter extends Reporter

reporters.ConciseReporter.prototype.constructor(options)
reporters.ConciseReporter.prototype.error(test, error)
reporters.ConciseReporter.prototype.finish()
reporters.ConciseReporter.prototype.listFailure(test, res, message)
reporters.ConciseReporter.prototype.parseTestResults(test, subtest)
reporters.ConciseReporter.prototype.printAssertErrorSeparator()
reporters.ConciseReporter.prototype.printSubtestSeparator()
reporters.ConciseReporter.prototype.printTestSeparator()
reporters.ConciseReporter.prototype.record(test)

class reporters.TapReporter extends Reporter

reporters.TapReporter.prototype.constructor(options)
reporters.TapReporter.prototype.error(test, error)
reporters.TapReporter.prototype.finish()
reporters.TapReporter.prototype.listFailure(test, res, offset)
reporters.TapReporter.prototype.logComment(...args)
reporters.TapReporter.prototype.parseTestResults(test, offset = 0)
reporters.TapReporter.prototype.record(test)

class runner.Runner extends EventEmitter

runner.Runner.prototype.constructor(options)
runner.Runner.prototype.addTest(test)
runner.Runner.prototype.finish()
runner.Runner.prototype.removeReporter()
runner.Runner.prototype.resume()
runner.Runner.prototype.runTodo(active = true)
runner.Runner.prototype.setReporter(reporter)
runner.Runner.prototype.wait()

runner.instance
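
A minimal sketch of swapping the reporter on the shared runner instance, assuming the constructors accept the options listed above:

    const { runner, reporters } = require('metatests');

    // replace the default reporter with a TAP reporter writing to stdout
    runner.instance.setReporter(
      new reporters.TapReporter({ stream: process.stdout }),
    );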

speed(caption, count, cases)

  • caption: <string> name of the benchmark
  • count: <number> number of times to run each function
  • cases: <Array> functions to check

Microbenchmark each passed function and compare results.
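
For example (a sketch; both case functions are illustrative):

    const metatests = require('metatests');

    metatests.speed('String concatenation', 1e6, [
      function concat() { return 'a' + 'b'; },
      function template() { return `a${'b'}`; },
    ]);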

measure(cases[, options])

  • cases: <Array> cases to test, each case contains
    • fn: <Function> function to check, will be called with each set of args provided
    • name: <string> case name, function.name by default
    • argCases: <Array> array of argument sets to create runs with. When omitted fn will be run once without arguments. Total number of runs will be runs * argCases.length.
    • n: <number> number of times to run the test, defaultCount from options by default
  • options: <Object>
    • defaultCount: <number> number of times to run the function by default, default: 1e6
    • runs: <number> number of times to run the case, default: 20
    • preflight: <number> number of times to pre-run the case for each set of arguments, default: 10
    • preflightCount: <number> number of times to run the function in the preflight stage, default: 1e4
    • listener: <Object> appropriate function will be called to report events, optional
      • preflight: <Function> called when preflight is starting, optional
      • run: <Function> called when run is starting, optional
      • cycle: <Function> called when run is done, optional
      • done: <Function> called when all runs for given configurations are done, optional
        • name: <string> case name
        • args: <Array> current configuration
        • results: <Array> results of all runs with this configuration
      • finish: <Function> called when measuring is finished, optional
        • results: <Array> all case results

Returns: <Array> results of all cases as objects with the following structure:

  • name: <string> case name
  • args: <Array> arguments for this run
  • count: <number> number of times case was run
  • time: <number> time in nanoseconds it took to make count runs
  • result: <any> result of one of the runs

Microbenchmark each passed configuration multiple times.
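
A sketch of a measure() run using the options above (the `double` case is illustrative):

    const { measure } = require('metatests');

    const results = measure(
      [
        {
          fn: (x) => x * 2,
          name: 'double',
          // two configurations: double(1) and double(1000)
          argCases: [[1], [1000]],
          n: 1e5,
        },
      ],
      {
        runs: 5,
        preflight: 2,
        listener: {
          done: (name, args, runResults) =>
            console.log(`${name}(${args}) done, ${runResults.length} runs`),
        },
      },
    );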

convertToCsv(results)

  • results: <Array> all results from measure run

Returns: <string> valid CSV representation of the results

Convert metatests.measure results to CSV.
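
Continuing the measure() sketch above, the results can be serialized (the file name is illustrative):

    const fs = require('fs');
    const { convertToCsv } = require('metatests');

    const csv = convertToCsv(results);
    fs.writeFileSync('benchmark-results.csv', csv);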

class ImperativeTest extends Test

ImperativeTest.prototype.constructor(caption, func, options)
ImperativeTest.prototype.afterEach(func)

Set a function to run after each subtest.

The function must either return a promise or call a callback.

ImperativeTest.prototype.assert(value[, message])
  • value: <any> value to check
  • message: <string> description of the check, optional

Check if value is truthy.

ImperativeTest.prototype.assertNot(value[, message])
  • value: <any> value to check
  • message: <string> description of the check, optional

Check if value is falsy.

ImperativeTest.prototype.bailout([err][, message])

Fail this test and throw an error.

If both err and message are provided, err.toString() will be appended to message.

ImperativeTest.prototype.beforeEach(func)
  • func: <Function>
    • subtest: <ImperativeTest> test instance
    • callback: <Function>
      • context: <any> context of the test. It will be passed as a second argument to the test function and is available at test.context
    • Returns: <Promise>|<void> nothing or Promise resolved with context

Set a function to run before each subtest.

The function must either return a promise or call a callback.
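
A sketch combining beforeEach()/afterEach() with a subtest, following the callback convention above (the context shape is illustrative):

    const metatests = require('metatests');

    metatests.test('parent', (test) => {
      test.beforeEach((subtest, callback) => {
        // the object passed to callback becomes subtest.context
        callback({ user: 'admin' });
      });
      test.afterEach(() => Promise.resolve());
      test.testSync('child', (t, context) => {
        t.strictSame(context.user, 'admin');
      });
      test.endAfterSubtests();
    });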

ImperativeTest.prototype.case(message, namespace, list, options = {})

Create a declarative case() subtest of this test.

ImperativeTest.prototype.cb([msg][, cb])

Returns: <Function> function to pass as a callback

Create error-first callback wrapper to perform automatic checks.

This wraps the callback in test.mustCall() and checks the first callback argument with test.error().

ImperativeTest.prototype.cbFail([fail][, cb[, afterAllCb]])
  • fail: <string> test.fail message
  • cb: <Function> callback function to call if there was no error
  • afterAllCb: <Function> function called after callback handling

Returns: <Function> function to pass as a callback

Create error-first callback wrapper to fail test if call fails.

This wraps the callback in test.mustCall(); if the call errors, it will use test.fail() and test.end().
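
A sketch of cbFail() with a Node.js-style API, assuming the wrapped callback receives the remaining (non-error) arguments:

    const fs = require('fs');
    const metatests = require('metatests');

    metatests.test('read own source', (test) => {
      fs.readFile(
        __filename,
        test.cbFail('readFile failed', (data) => {
          test.assert(data.length > 0);
          test.end();
        }),
      );
    });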

ImperativeTest.prototype.contains(actual, subObj[, message[, sort[, test]]])
  • actual: <any> actual data
  • subObj: <any> expected properties
  • message: <string> description of the check, optional
  • sort: <boolean | Function> if true or a sort function sort data properties, default: false
  • test: <Function> comparison function, default: compare.strictEqual
    • actual: <any>
    • expected: <any>
    • Returns: <boolean> true if actual is equal to expected, false otherwise

Check that actual contains all properties of subObj.

Properties will be compared with test function.
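
For example (a sketch; the `response` object is illustrative):

    const metatests = require('metatests');

    metatests.test('contains example', (test) => {
      const response = { status: 200, body: 'ok', headers: {} };
      // passes: every property of the second argument is present in the first
      test.contains(response, { status: 200, body: 'ok' });
      test.end();
    });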

ImperativeTest.prototype.containsGreedy(actual, subObj[, message[, sort[, test]]])
  • actual: <any> actual data
  • subObj: <any> expected properties
  • message: <string> description of the check, optional
  • test: <Function> comparison function, default: compare.strictEqual
    • actual: <any>
    • expected: <any>
    • Returns: <boolean> true if actual is equal to expected, false otherwise

Check greedily that actual contains all properties of subObj.

Similar to test.contains() but will succeed if at least one of the properties in actual matches the one in subObj.

ImperativeTest.prototype.defer(fn, options)
  • fn: <Function> function to call before the end of the test. Can return a promise that will defer the end of the test.
  • options: <Object>
    • ignoreErrors: <boolean> ignore errors from fn function, default: false

Defer a function call until just before the end of the test.
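
A sketch; `releaseResource` is a hypothetical cleanup function:

    const metatests = require('metatests');

    metatests.test('defer example', (test) => {
      // returning a promise defers the end of the test
      const releaseResource = () => Promise.resolve();
      test.defer(releaseResource, { ignoreErrors: true });
      test.pass('work done');
      test.end();
    });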

ImperativeTest.prototype.doesNotThrow(fn[, message])

Check that fn doesn't throw.

ImperativeTest.prototype.end()

Finish the test.

This will fail if the test has unfinished subtests or plan is not complete.

ImperativeTest.prototype.endAfterSubtests()

Mark this test to call end after its subtests are done.

ImperativeTest.prototype.equal(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for non-strict equality.

ImperativeTest.prototype.error(err[, message])
  • err: <any> error to check
  • message: <string> description of the check, optional

Fail if err is an instance of Error.

ImperativeTest.prototype.fail([message][, err])
  • message: <string | Error> failure message or error, optional
  • err: <Error> error, optional

Fail this test recording failure message.

This doesn't call test.end().

ImperativeTest.prototype.is(checkFn, val[, message])
  • checkFn: <Function> condition function
    • val: <any> provided value
  • Returns: <boolean> true if condition is satisfied and false otherwise
  • val: <any> value to check the condition against
  • message: <string> check message, optional

Check whether val satisfies custom checkFn condition.

ImperativeTest.prototype.isArray(val[, message])
  • val: <any> value to check
  • message: <string> check message, optional

Check if val satisfies Array.isArray.

ImperativeTest.prototype.isBuffer(val[, message])
  • val: <any> value to check
  • message: <string> check message, optional

Check if val satisfies Buffer.isBuffer.
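
A short sketch of the custom and built-in value checks above (all values are illustrative):

    const metatests = require('metatests');

    metatests.test('value checks', (test) => {
      test.is((v) => v > 0, 42, 'must be positive');
      test.isArray([1, 2, 3]);
      test.isBuffer(Buffer.from('data'));
      test.end();
    });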

ImperativeTest.prototype.isError(actual[, expected[, message]])
  • actual: <any> actual error to compare
  • expected: <any> expected error, default: new Error()
  • message: <string> description of the check, optional

Check if actual is equal to expected error.

ImperativeTest.prototype.isRejected(input, err)
  • input: <Promise | Function> promise or function returning thenable
  • err: <any> value to be checked with test.isError() against rejected value

Check that input rejects.

ImperativeTest.prototype.isResolved(input[, expected])
  • input: <Promise | Function> promise or function returning thenable
  • expected: <any> if passed it will be checked with test.strictSame() against resolved value

Verify that input resolves.

ImperativeTest.prototype.mustCall([fn[, count[, name]]])
  • fn: <Function> function to be checked, default: () => {}
  • count: <number> amount of times fn must be called, default: 1
  • name: <string> name of the function, default: 'anonymous'

Returns: <Function> wrapper function; it forwards all arguments to fn and returns fn's result

Check that fn is called the specified number of times.
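
For example (a sketch; `onDone` is an illustrative name):

    const metatests = require('metatests');

    metatests.test('mustCall example', (test) => {
      // fails unless onDone is called exactly once
      const onDone = test.mustCall(() => test.end(), 1, 'onDone');
      setTimeout(onDone, 10);
    });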

ImperativeTest.prototype.mustNotCall([fn[, name]])
  • fn: <Function> function that must not be called, default: () => {}
  • name: <string> name of the function, default: 'anonymous'

Returns: <Function> wrapper function; it forwards all arguments to fn and returns fn's result

Check that fn is not called.

ImperativeTest.prototype.notEqual(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for non-strict inequality.

ImperativeTest.prototype.notOk(value[, message])
  • value: <any> value to check
  • message: <string> description of the check, optional

Check if value is falsy.

ImperativeTest.prototype.notSameTopology(obj1, obj2[, message])
  • obj1: <any> actual data
  • obj2: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected to not have the same topology.

ImperativeTest.prototype.ok(value[, message])
  • value: <any> value to check
  • message: <string> description of the check, optional

Check if value is truthy.

ImperativeTest.prototype.on(name, listener)
ImperativeTest.prototype.pass([message])

Record a passing assertion.

ImperativeTest.prototype.plan(n)

Plan this test to have exactly n assertions and end the test after this number of assertions is reached.
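
For example:

    const metatests = require('metatests');

    metatests.test('planned test', (test) => {
      test.plan(2);
      test.equal(1 + 1, 2);
      test.ok(true); // the test ends automatically after the second assertion
    });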

ImperativeTest.prototype.regex(regex, input[, message])

Test whether input matches the provided RegExp.

ImperativeTest.prototype.rejects(input, err)
  • input: <Promise | Function> promise or function returning thenable
  • err: <any> value to be checked with test.isError() against rejected value

Check that input rejects.

ImperativeTest.prototype.resolves(input[, expected])
  • input: <Promise | Function> promise or function returning thenable
  • expected: <any> if passed it will be checked with test.strictSame() against resolved value

Verify that input resolves.
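
A heavily hedged sketch; whether resolves()/rejects() return promises that should be awaited is an assumption here, not confirmed by the docs above:

    const metatests = require('metatests');

    metatests.test('promise checks', async (test) => {
      // assumption: resolves()/rejects() return promises that settle
      // when the underlying check completes
      await test.resolves(Promise.resolve(42), 42);
      await test.rejects(() => Promise.reject(new Error('boom')), new Error('boom'));
      test.end();
    });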

ImperativeTest.prototype.run()

Start running the test.

ImperativeTest.prototype.same(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for non-strict equality.

ImperativeTest.prototype.sameTopology(obj1, obj2[, message])
  • obj1: <any> actual data
  • obj2: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected to have same topology.

Useful for comparing objects with circular references for equality.

ImperativeTest.prototype.strictEqual(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for strict equality.

ImperativeTest.prototype.strictNotSame(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for strict inequality.

ImperativeTest.prototype.strictSame(actual, expected[, message])
  • actual: <any> actual data
  • expected: <any> expected data
  • message: <string> description of the check, optional

Compare actual and expected for strict equality.

ImperativeTest.prototype.test(caption, func, options)
  • caption: <string> name of the test
  • func: <Function> test function
    • test: <ImperativeTest> test instance
  • options: <TestOptions>
    • run: <boolean> auto start test, default: true
    • async: <boolean> if true do nothing, if false auto-end test on nextTick after func runs, default: true
    • timeout: <number> time in milliseconds after which the test is considered timed out
    • parallelSubtests: <boolean> if true subtests will be run in parallel, otherwise subtests are run sequentially, default: false
    • dependentSubtests: <boolean> if true each subtest will be executed sequentially in order of addition to the parent test short-circuiting if any subtest fails, default: false

Returns: <ImperativeTest> subtest instance

Create a subtest of this test.

If the subtest fails this test will fail as well.
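
A sketch of synchronous and asynchronous subtests (all names are illustrative):

    const metatests = require('metatests');

    metatests.test('parent test', (test) => {
      test.testSync('sync child', (t) => {
        t.strictSame(2 * 2, 4);
      });
      test.test('async child', (t) => {
        setTimeout(() => {
          t.pass('finished');
          t.end();
        }, 10);
      });
      test.endAfterSubtests();
    });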

ImperativeTest.prototype.testAsync(message, func, options = {})

Create an asynchronous subtest of this test.

Simple wrapper for test.test() setting async option to true.

ImperativeTest.prototype.testSync(message, func, options = {})

Create a synchronous subtest of this test.

Simple wrapper for test.test() setting async option to false.

ImperativeTest.prototype.throws(fn[, expected[, message]])
  • fn: <Function> function to run
  • expected: <any> expected error, default: new Error()
  • message: <string> description of the check, optional

Check that fn throws expected error.
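
For example (a sketch pairing throws() with doesNotThrow()):

    const metatests = require('metatests');

    metatests.test('throws example', (test) => {
      test.throws(() => {
        throw new Error('boom');
      }, new Error('boom'));
      test.doesNotThrow(() => 42);
      test.end();
    });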

ImperativeTest.prototype.type(obj, type[, message])
  • obj: <any> value to check
  • type: <string | Function> class or class name to check
  • message: <string> description of the check, optional

Check if obj is of specified type.

test(caption, func[, options[, runner]])

  • caption: <string> name of the test
  • func: <Function> test function
    • test: <ImperativeTest> test instance
  • options: <TestOptions>
    • run: <boolean> auto start test, default: true
    • async: <boolean> if true do nothing, if false auto-end test on nextTick after func runs, default: true
    • timeout: <number> time in milliseconds after which the test is considered timed out
    • parallelSubtests: <boolean> if true subtests will be run in parallel, otherwise subtests are run sequentially, default: false
    • dependentSubtests: <boolean> if true each subtest will be executed sequentially in order of addition to the parent test short-circuiting if any subtest fails, default: false
  • runner: <Runner> runner instance to use to run this test

Returns: <ImperativeTest> test instance

Create a test case.
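
A minimal imperative test using the signature above:

    const metatests = require('metatests');

    metatests.test('basic imperative test', (test) => {
      test.strictSame(2 + 2, 4);
      test.end();
    });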

testSync(caption, func, options = {}, runner = runnerInstance)

Create a synchronous test.

Simple wrapper for test() setting async option to false.

testAsync(caption, func, options = {}, runner = runnerInstance)

Create an asynchronous test.

Simple wrapper for test() setting async option to true.
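
For example, a synchronous test needs no explicit end (a sketch based on the async option described above):

    const metatests = require('metatests');

    metatests.testSync('sync test', (test) => {
      test.strictSame(1 + 1, 2);
      // no explicit test.end(): the test auto-ends after the function runs
    });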