First we should ask why there should be a built-in test runner. The intent behind the Node.js test runner is to provide a limited set of testing functionality that can be used to test projects without requiring a third-party dependency. It will also provide a base set of primitives that testing frameworks can use to standardise upon.
Until now, all test runners in Node.js have been built as third-party packages, like Mocha, Jasmine, or Jest. This means that to write and run tests in your project you must start by choosing and adding a dependency. Dependencies take maintenance and can add complexity to your configuration, both locally and in your CI/CD pipelines. Other languages, like Ruby, Go, and Python, have built-in test runners, and both Deno and Bun ship test runners too. So it seems natural for Node.js to provide a dependency-free, built-in runner as well.
Let's have a look at how it works by test driving a piece of code. We won't write anything complicated; the aim is just to illustrate how the test runner works. I recommend using the latest version of Node.js, which is 20.2.0 as I write this.
To see this in action we'll write unit tests for, and then implement, a straightforward data structure: a stack. Start by creating a directory for the project and two files in it, `stack.mjs` for the implementation and `stack.test.mjs` for the tests.
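From a terminal, that setup is just (the directory name is arbitrary; I'm assuming `stack.mjs` for the implementation alongside the `stack.test.mjs` test file):

```shell
mkdir stack-example && cd stack-example
touch stack.mjs stack.test.mjs
```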
Immediately you can run the test command: `node --test`.
And you will see a result. Since there is no code, the test file runs successfully and is counted as a pass. This is really powerful already. All we've done is create two files, and the test runner has detected that one is a test file and run it. We haven't had to install any dependencies; there isn't even a `package.json` in the directory.
This works because of the test runner's execution model. When you run `node --test`, the runner searches the current directory recursively for JavaScript files, ending in `.js`, `.cjs`, or `.mjs`, that match any of the following patterns:
- files inside a directory called `test`
- files called `test.js`, `test.cjs`, or `test.mjs`
- files that start with `test-`
- files that end with `.test.js`, `-test.js`, or `_test.js` (and their `.cjs` and `.mjs` equivalents)
You can also explicitly pass a list of files and directories to the `node --test` command. So, we could have called `stack.test.mjs` a variety of things, like `test-stack.mjs` or `stack_test.js`. It all depends on your preference.
Each file that the test runner discovers is then executed in a separate child process. If the process exits with a code of 0 then the test is considered to pass. That's why our empty file shows as a passing test already.
Open the two files you created in your editor. In `stack.test.mjs`, import the `test` function from `node:test`. `node:test` is the standard library module that you use to create tests within your test file. Note that you must write `node:test` and not just `test` here, unlike other standard library modules where the `node:` prefix is optional.
A bare `test` refers to the npm package `test`, which is a userland port of `node:test` that works all the way back to Node version 14.
The `test` function allows us to name specific tests, as well as create groups of subtests. Pass a name and a function to `test`; if the function completes without throwing an error, the test is deemed a pass.
Run the tests with `node --test` and you will see one pass and one fail.
Manually throwing errors is not the most expressive or efficient way to write tests. Thankfully, Node has an assertion module we can use. When an assertion from `node:assert` fails, it throws an `AssertionError`, which works well with the test runner.
The assertion module comes with two modes, strict and legacy. The legacy mode uses the `==` operator in equality assertions, but `==` is not recommended, so I would encourage using strict mode.
We can rewrite the above tests with `node:assert`.
Run `node --test` now and you will see one failure, reported with more information than the plain `Error` we threw before.
Assert has a bunch of useful assertions, including asserting that:
- values are equal with `assert.strictEqual`
- a value is truthy with `assert.ok`
- a function throws with `assert.throws`, or a promise rejects with `assert.rejects`
- and my favourite, that non-primitive objects are deeply equal with `assert.deepStrictEqual`
The `test` function takes an options object as an optional parameter. You can use this to skip tests or only run certain tests.
You can always skip a test with the `skip` option. The `only` option, however, only takes effect when you run the test suite with the `--test-only` flag. There is also a `todo` option, which still runs the test but flags it as a "todo" to reporters. There are shortcuts for these options too: you can call `test.skip`, `test.only`, or `test.todo` for the same result.
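For example (the test names are my own; run with plain `node --test`, the `only` option here has no effect until you add `--test-only`):

```javascript
import { test } from "node:test";

test("runs normally", () => {});

// skipped via the options object
test("skipped for now", { skip: true }, () => {});

// the shortcut version of the todo option
test.todo("cover the edge cases");

// honoured only when the suite runs with --test-only
test("focus on this", { only: true }, () => {});
```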
On the command line, the `--test-name-pattern` flag lets you pass a pattern to match test names; only tests whose names match will be run. Passing the pattern "will pass", for example, will run only the test called "will pass".
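Here's that in action, with a throwaway test file so the command is self-contained (the file name and its contents are my own):

```shell
cat > pattern.test.mjs <<'EOF'
import { test } from "node:test";
test("will pass", () => {});
test("will fail", () => { throw new Error("should not run"); });
EOF

# only the test whose name matches the pattern runs,
# so this exits successfully
node --test --test-name-pattern="will pass" pattern.test.mjs
```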
Other options include:
- `timeout`, which fails the test if it doesn't complete within the time set
- `concurrency`, which allows multiple tests to run at once; by default tests are run one at a time
- `signal`, an `AbortSignal` that you can pass to tests to cancel them mid-run
The last two seem more useful for building a test framework on top of the test runner than for users.
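Here's how those options look in use (the timeout value and names are my own; the signal is never aborted here, so both tests pass):

```javascript
import { test } from "node:test";

// fail the test if it takes longer than 500ms
test("completes in time", { timeout: 500 }, () => {});

// an AbortSignal lets something outside the test cancel it
const controller = new AbortController();
test("can be cancelled externally", { signal: controller.signal }, () => {});
```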
With just the `test` function you can also group tests into subtests. Let's explore this as we start to build up tests for our stack implementation. When making subtests, your root test function should receive a test context parameter. You call `test` on that context object to add subtests, and as the `test` function returns a promise, you will need to `await` each of them. If the root test completes before its subtests, it will mark any unfinished tests as failures.
Tests written this way for our stack will fail at first, because we haven't yet defined a `Stack`. Let's add the minimum required to make them pass in `stack.mjs`.
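A minimal sketch of `stack.mjs` (storing items in a private array is my choice; any backing store would do):

```javascript
// stack.mjs
export class Stack {
  #items = [];

  push(item) {
    this.#items.push(item);
  }
}
```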
Then import the class at the top of `stack.test.mjs`.
Run the tests and they now pass.
Let's add another test to see what happens when we pop an item off an empty stack.
Running the tests will fail because we haven't defined a `pop` method on the stack yet. Add that to the `Stack` class.
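Mirroring `Array.prototype.pop`, a `pop` that returns `undefined` for an empty stack (that return behaviour is my assumption):

```javascript
// stack.mjs
export class Stack {
  #items = [];

  push(item) {
    this.#items.push(item);
  }

  pop() {
    // Array.prototype.pop returns undefined when the array is empty
    return this.#items.pop();
  }
}
```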
Running the tests now will still fail. The issue is that we are leaking state between tests: the `stack` object inside the root test had an item added in the `push` test, and that item is returned when we call `pop` in the test we just wrote. Rather than defining the `stack` object once, we should recreate it before every test to make sure it is in the state we expect. The test runner provides hooks for running behaviour like this before and after tests; in this case, we can use the `beforeEach` hook to create a fresh stack for each of our tests.
I'm personally not a fan of `test` as the function name; I like my syntax to be a bit more expressive. The test runner also makes `describe` and `it` available. `describe` sets up a suite of tests and `it` is an alias for `test`. When using `describe`, you don't need to `await` tests and there's no need for a suite context; you can import hooks like `beforeEach` and use them directly within the suite.
We can rewrite our `Stack` tests with `describe` and `it` syntax.
You can have multiple suites per test file and nest suites within each other to group related tests.
When you run these tests, the default reporter indents each `describe` suite's subtests and the output is very readable. This default reporter is called spec. There are two other built-in reporters, tap and dot. Tap reports using the Test Anything Protocol, which I find a bit wordier than spec. The dot reporter is very simple, producing a `.` for each passing test and an `X` for each failing test.
You can choose your reporter by passing the `--test-reporter` flag. You can pass multiple reporters, as well as file destinations for them, and you can write your own test reporters too. Rômulo Vitoi at Nearform wrote a great post on writing custom test reporters, including examples like a GitHub reporter that annotates test failures directly in a GitHub pull request.
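For example, printing tap to stdout while also writing dot output to a file (the file names are my own; reporters and destinations are paired up in the order they are passed):

```shell
cat > demo.test.mjs <<'EOF'
import { test } from "node:test";
test("passes", () => {});
EOF

node --test \
  --test-reporter=tap --test-reporter-destination=stdout \
  --test-reporter=dot --test-reporter-destination=reporter.txt \
  demo.test.mjs
```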
This has been an overview of the basics of working with Node's test runner. Everything we wrote above is dependency-free testing that you can use in your Node.js applications today, as long as you can depend on Node 20.
For small projects, I've found that the test runner and assert modules provide everything I need to write test suites. Ensuring that your code is well tested is an important part of writing clean code, and having the tools built into the platform makes it easier to get set up and writing tests from the very start.
I'm excited to see how this develops further. If you're starting a new project soon, I'd suggest giving the test runner a try to see how it works for you. Let me know what you think about it on Twitter.