Writing tests for the Web Audio API
In this post I want to give a brief overview of how you can help the adoption of the Web Audio API by writing tests for the W3C’s official test suite. Writing tests helps the adoption of the API in three ways:
- It makes it easier for a wider range of browser vendors to support the API
- It makes it easier for an individual vendor to implement the standard correctly
- It makes it easier for new additions to the specification to be included and approved
The W3C’s test suite for the “web platform” (the suite of technologies, Web Audio included, that make up the modern web) is on GitHub, so go ahead and clone it:

```shell
git clone https://github.com/w3c/web-platform-tests
```
Or, if you think you might contribute, you may find it easier to fork the repository into your own GitHub account, and clone your fork.
The repository requires git submodules, which you can update within your checkout:

```shell
git submodule update --init --recursive
```
The repo has the latest instructions for getting started.
Running a Web Audio test locally
You need to arrange for the contents of the repository to be served up by a local webserver. There are instructions in the repository for doing that using the included serve.py Python script and some simple edits to your /etc/hosts file - or you may have your own preferred way. Once you have the server up and running, try running the test for the Web Audio API GainNode; the path under your local server mirrors the repository layout. The test suite is also mirrored to w3c-test.org, including the GainNode test and the rest of the Web Audio tests.
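As a rough sketch, the setup looks something like the following. The web-platform.test host name is the default domain the harness expects, but check the repository’s README for the authoritative, current instructions:

```shell
# Add the test domain to /etc/hosts (an illustrative entry; the README
# lists the full set of domains the harness uses).
sudo sh -c 'echo "127.0.0.1 web-platform.test" >> /etc/hosts'

# Serve the repository from its root.
python serve.py
```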
Understanding the Web Audio API tests
The Web Audio API test suite is very minimal at the moment, but that’s where you can help.
The Web Audio API is under development, so the latest version of the editor’s draft of the specification is what we should be writing our tests against. Go and take a look at the specification if you’re not familiar with it.
Notice that the specification is grouped into sections (for example §4.7 The GainNode interface). The directory structure of the tests repo reflects this structure, so we have /webaudio/the-audio-api/the-gainnode-interface/ for the GainNode tests.
Tests come in two different flavours:
- Functional tests. These tests assert that the audio processing performed by a certain node does the right thing. For example, we might assert that the output of an OscillatorNode with its type set to 'sine' actually produces a sine wave at the correct frequency.
- IDL tests. These tests assert that the interface presented to the programmer conforms to that written in the specification: for example, that an AudioContext has the methods and attributes its IDL declares.
We’ll look at both of these types of tests in turn.
Writing functional tests
Functional tests assert that an audio processing node performs its processing correctly. The process for writing a test is as follows:
- Find an area of the specification that doesn’t have tests.
- Read the specification and see if it could be tested as written. If you feel a test cannot be written against the current version of the spec, for example if there’s not enough information in the spec to determine precisely what the output should be, that’s great! You can help to improve the spec.
- Write the test.
Let’s look at step 3 in more detail. Tests are written using the W3C’s testharness.js framework. Take a look at that documentation to familiarise yourself with it.
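By way of illustration, a minimal synchronous testharness.js test might look something like this. The file layout and the default-value assertion are my own sketch, not a test from the official suite:

```html
<!DOCTYPE html>
<title>GainNode: default attribute values</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
  // test() runs the function synchronously and reports a pass if no
  // assertion throws.
  test(function() {
    var context = new AudioContext();
    var gainNode = context.createGain();
    // The specification gives gain a default value of 1.0.
    assert_equals(gainNode.gain.value, 1.0,
                  "gain should default to 1.0");
  }, "GainNode default gain value");
</script>
```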
Don’t reinvent the wheel. If you’re considering writing a functional test for a node, both the Mozilla and WebKit source trees already contain a number of tests that you can port over, or use for inspiration.
As an example, consider the GainNode test in the W3C test suite. You’ll find it at /webaudio/the-audio-api/the-gainnode-interface/test.html, or on w3c-test.org.
This test works as follows:
1. Create an AudioBuffer containing a series of sine wave ‘notes’ of gradually decreasing amplitude. This is the expected output.
2. Recreate the same signal using a GainNode with a gradually decreasing gain.
3. Record the output of the audio graph created in step 2 using an OfflineAudioContext.
4. Assert that the recorded output matches the expected output.
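The expected-buffer step can be sketched in plain JavaScript. The sample rate, note length, and frequency below are illustrative values, not those used by the real test:

```javascript
// Build the expected output by hand: a sequence of sine wave 'notes'
// whose amplitude decreases from one note to the next.
function createExpectedBuffer(sampleRate, noteSeconds, numNotes, frequency) {
  var samplesPerNote = Math.floor(sampleRate * noteSeconds);
  var buffer = new Float32Array(samplesPerNote * numNotes);
  for (var note = 0; note < numNotes; note++) {
    // Each successive note is quieter, mirroring the gain curve the
    // GainNode test applies.
    var amplitude = 1.0 - note / numNotes;
    for (var i = 0; i < samplesPerNote; i++) {
      var t = i / sampleRate;
      buffer[note * samplesPerNote + i] =
          amplitude * Math.sin(2 * Math.PI * frequency * t);
    }
  }
  return buffer;
}

var expected = createExpectedBuffer(44100, 0.1, 4, 440);
```

The same function can then serve as the reference when comparing against the recorded output, sample by sample, within a small tolerance.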
This test was based on the corresponding test of the GainNode in the WebKit test suite, but uses a generated buffer rather than a WAVE file as the expected output. The reason for this is to allow the tests to run faster than real time. If we were to create a node graph in a regular AudioContext, and then capture the output in a buffer using a ScriptProcessorNode, for example, the test would take at least as long to run as the audio it generates. Using an OfflineAudioContext allows the implementation to generate the output as fast as it can. In some cases it will be impossible to use an OfflineAudioContext, such as when writing tests for the various streaming sources.
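A hedged sketch of the recording step, using an OfflineAudioContext inside testharness.js; the buffer dimensions are placeholders and the comparison is left as a comment:

```html
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script>
  var t = async_test("GainNode output matches the expected buffer");
  // 1 channel, 0.5 seconds of audio at 44.1 kHz, rendered as fast as
  // the implementation can manage rather than in real time.
  var context = new OfflineAudioContext(1, 22050, 44100);
  var source = context.createBufferSource();
  // source.buffer would be filled with the test signal here.
  var gainNode = context.createGain();
  source.connect(gainNode);
  gainNode.connect(context.destination);
  source.start(0);
  context.oncomplete = t.step_func_done(function(e) {
    var rendered = e.renderedBuffer.getChannelData(0);
    // Compare each sample of 'rendered' against the expected buffer,
    // e.g. with assert_approx_equals.
  });
  context.startRendering();
</script>
```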
Writing IDL tests
In the W3C test suite we have a Ruby script (at
/webaudio/refresh_idl.rb) which extracts the IDL descriptions from
the specification, and updates the corresponding tests. It’s still
quite a manual process at the moment, and I would appreciate any
improvements you can suggest.
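For comparison, IDL tests in the wider test suite are typically built on the idlharness.js library, which takes an IDL fragment and generates the interface checks automatically. A rough sketch, with a heavily abridged IDL fragment of my own rather than the script’s real output:

```html
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script src="/resources/idlharness.js"></script>
<script>
  var idl_array = new IdlArray();
  // Interfaces we depend on but are not testing here.
  idl_array.add_untested_idls("interface AudioNode {};");
  idl_array.add_untested_idls("interface AudioParam {};");
  // An abridged fragment of the GainNode IDL from the specification.
  idl_array.add_idls(
      "interface GainNode : AudioNode { readonly attribute AudioParam gain; };");
  // An object on which to run the generated per-instance checks.
  var context = new AudioContext();
  idl_array.add_objects({ GainNode: ["context.createGain()"] });
  idl_array.test();
</script>
```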
Contributing your test
The W3C test suite accepts contributions in the form of GitHub pull requests. Each pull request has to be reviewed by a peer. At the moment, I am the test coordinator for the Web Audio tests, so it is likely to be me that does the review and merge, but anyone who would like to help will be very welcome.
If you need any help, please get in touch with me in the comments below, on the public audio mailing list, or by raising an issue with the webaudio label on GitHub.
Improve the specification
When starting to write a test for a part of the specification you may encounter a situation where there’s not enough information in the spec to determine precisely what the output should be. In these cases you can help to improve the specification:
- Open an issue against the spec at the specification’s GitHub repo. Include the test case you have written so far, if you can - it’s easier to discuss concrete problems that are illustrated by code.
- You may need to ask the editors and other members on the W3C’s public audio mailing list for help. They really appreciate these questions as they do help to clarify complicated areas of the specification.