Interview with Adrian Holovaty of Soundslice
I recently spoke to Adrian Holovaty, the founder of Soundslice, an interactive sheet music application, about how it was built and his experiences of using the Web Audio API for commercial development.
Could you tell me a bit about Soundslice, what the company does and what applications you’ve built?
Soundslice is a web site that helps musicians learn songs. Our ultimate goal is to be the easiest and highest-quality way to learn whatever song you’d like, on whatever instrument.
As part of this, we’ve built a web-based sheet-music and tablature player that makes it easy to explore a piece of music. It syncs the music notation with real recordings, so that you can hear how the music is supposed to sound as notes are highlighted, and it lets you loop, slow down and navigate the audio via the notation.
On the business side, we sell high-quality music transcriptions in this format, and we license our technology to other companies. I believe we’re one of the first companies to build production-quality sites - stuff that people actually pay money for! - using the Web Audio API.
Aside from the Web Audio API, we’re doing some quite advanced stuff with HTML5’s <canvas>, rendering music notation and tablature in an efficient, cross-platform way.
How have you used Web Audio in the application?
We use the API in two ways: “real” audio playback and “synthetic” playback. In the Soundslice interface, you can toggle the audio source to be either a real recording (an MP3) or a synthetic version (generated from the notation, MIDI-style).
For the real recordings, we use the Web Audio API to play and loop them with precise timing. We also use it to slow down the audio without changing pitch, using ScriptProcessorNodes.
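As background for readers new to the API, the looping part of this can be sketched roughly as follows. This is an illustrative sketch rather than Soundslice's actual code: the loadBuffer helper and the loop boundaries are hypothetical, and the time-stretching via ScriptProcessorNodes that Adrian mentions is a much bigger piece of work that is not shown here.

```typescript
// Illustrative sketch (not Soundslice's code): decode an MP3 and loop a
// region of it with precise timing via the Web Audio API.
const ctx = new AudioContext();

// Hypothetical helper: fetch a file and decode it into an AudioBuffer.
async function loadBuffer(url: string): Promise<AudioBuffer> {
  const response = await fetch(url);
  const data = await response.arrayBuffer();
  return ctx.decodeAudioData(data);
}

// Play a decoded buffer and loop the region between loopStart and loopEnd
// (both in seconds), scheduled on the AudioContext's sample-accurate clock.
function playLoop(
  buffer: AudioBuffer,
  loopStart: number,
  loopEnd: number
): AudioBufferSourceNode {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;
  source.loopStart = loopStart;
  source.loopEnd = loopEnd;
  source.connect(ctx.destination);
  source.start(0, loopStart); // begin playback inside the loop region
  return source;
}
```

Calling playLoop(buffer, 4.0, 8.0), for example, would repeat the section between four and eight seconds until the source is stopped.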
For the synthetic audio, we use the Web Audio API to splice together instrument samples in real time, based on whatever notes are in the sheet music. For this, I use a single audio file that has every possible note in it – the same concept as CSS image sprites.
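The sprite approach can be sketched like this. It is only an illustration of the idea: the note names, offsets and durations are invented, and Soundslice's real mapping and mixing will be more involved.

```typescript
// Illustrative "audio sprite" sketch: one decoded file containing every note,
// with each note played by offsetting into the shared buffer. The note names,
// offsets and durations below are invented for the example.
interface NoteSprite {
  offset: number;   // seconds into the sprite file
  duration: number; // seconds
}

const sprites: Record<string, NoteSprite> = {
  C4: { offset: 0.0, duration: 1.0 },
  D4: { offset: 1.0, duration: 1.0 },
  E4: { offset: 2.0, duration: 1.0 },
};

// Schedule a single note at an exact time on the AudioContext clock.
function playNote(
  ctx: AudioContext,
  spriteBuffer: AudioBuffer,
  noteName: string,
  when: number
): void {
  const { offset, duration } = sprites[noteName];
  const source = ctx.createBufferSource();
  source.buffer = spriteBuffer;
  source.connect(ctx.destination);
  // start(when, offset, duration) plays just this note's slice of the sprite.
  source.start(when, offset, duration);
}
```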
Fun fact: our “real” audio player actually has three audio backends, which are chosen based on the browser’s capabilities – Web Audio API, HTML5 <audio> and a Flash fallback. Obviously we prefer Web Audio API, but as anybody in this space knows, consistent audio performance across browsers still requires a lot of hoop-jumping.
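That kind of fallback chain is typically driven by simple feature detection. The sketch below shows the general shape only; Soundslice's actual detection logic isn't public, so treat this as an assumption.

```typescript
// Rough sketch of capability-based backend selection, not Soundslice's code.
type Backend = 'webaudio' | 'html5audio' | 'flash';

function chooseBackend(): Backend {
  const w = window as any;
  if (w.AudioContext || w.webkitAudioContext) {
    return 'webaudio'; // preferred: full Web Audio API support
  }
  if (typeof Audio !== 'undefined' && new Audio().canPlayType('audio/mpeg')) {
    return 'html5audio'; // fall back to the <audio> element
  }
  return 'flash'; // last resort: plugin-based playback
}
```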
What has been good about using the API, what’s been bad?
The best part about it is the precision and control. Looping can have perfect or near-perfect timing, you can easily play multiple audio sources at once, and you can use ScriptProcessorNodes to futz with the content of the audio.
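For readers who haven't used one, a ScriptProcessorNode hands your JavaScript raw sample buffers to modify in flight. The sketch below shows only the wiring, not Soundslice's processing; note that the node has since been deprecated in favour of AudioWorklet, but it was the tool available at the time.

```typescript
// Minimal ScriptProcessorNode wiring: a pass-through that copies input to
// output. Real processing (e.g. time-stretching) would replace the copy.
function makeProcessor(ctx: AudioContext): ScriptProcessorNode {
  const node = ctx.createScriptProcessor(4096, 2, 2); // bufferSize, inputs, outputs
  node.onaudioprocess = (event: AudioProcessingEvent) => {
    for (let ch = 0; ch < event.outputBuffer.numberOfChannels; ch++) {
      const input = event.inputBuffer.getChannelData(ch);
      const output = event.outputBuffer.getChannelData(ch);
      output.set(input); // pass-through; per-sample processing goes here
    }
  };
  return node;
}
```

A source node would be connected to this processor, and the processor to ctx.destination, so every block of audio passes through the callback.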
The bad things: lack of support in all browsers (not really the API’s fault), and, to some extent, performance when doing crazy things in ScriptProcessorNodes (again, not really the API’s fault). Plus there are a few small bugs I’ve needed to hack around.
Oh, and I’d love for there to be a built-in way to adjust playback speed without changing pitch – the earlier <audio> tag API has that, so I don’t see why the Web Audio API shouldn’t. It would make my job a lot easier, and Soundslice would perform better on slower machines if the slowdown algorithms were implemented directly in the browser.
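For comparison, the <audio> element behaviour he refers to is essentially a one-liner, sketched below with a hypothetical file name. Pitch preservation during rate changes has long been the default, though for years it was exposed only behind moz/webkit-prefixed flags.

```typescript
// Media-element speed change: playbackRate adjusts speed and the browser
// pitch-corrects the result. 'song.mp3' is a placeholder file name.
const audio = new Audio('song.mp3');
audio.playbackRate = 0.75; // play at 75% speed, pitch preserved
audio.play();
```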
How does developing with Web Audio compare to developing for other platforms or technologies?
I can only compare it to the HTML5 <audio> tag API and Flash/ActionScript. Overall, the Web Audio API is the nicest of the three. The HTML5 API starts to feel very hacky when you have multiple sounds playing at the same time, though its “playbackRate” API is convenient. I don’t remember liking the ActionScript APIs very much when I was using them a few years ago for an earlier version of Soundslice.
If you could make changes to the API or to audio in the browser in general, what would you do?
Number one priority would be to add a built-in way of changing speed without altering pitch, just as the HTML5 <audio> API already has.
Number two priority would be to add a callback/event when a piece of audio has stopped playing. Currently you have to use setTimeout!
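That workaround amounts to something like the sketch below. Later revisions of the API did add an ended event on AudioBufferSourceNode, but this illustrates the timer-based approach he describes; it is not Soundslice's code.

```typescript
// Sketch of the workaround described above: start the source, then
// approximate the moment it finishes with a timer based on the buffer length.
function playWithEndCallback(
  ctx: AudioContext,
  buffer: AudioBuffer,
  onDone: () => void
): void {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(0);
  setTimeout(onDone, buffer.duration * 1000); // no completion event available
}
```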
Number three priority would be to fix playback bugs like the one I mentioned above.
And of course, magically upgrading every browser in the world to one that has a stable implementation of the API would be lovely.