#include <emscripten/webaudio.h>
#include <emscripten/em_math.h>
// This program tests that sharing the WebAssembly Memory works between the
// audio generator thread and the main browser UI thread. Two sliders,
// frequency and volume, can be adjusted on the HTML page, and the audio thread
// generates a sine wave tone based on these parameters.
// The code implements a smooth transition between the UI values and the
// values that the audio callback actually processes, to avoid crackling when
// the user adjusts the sliders.
float targetToneFrequency = 440.0f; // [shared variable between main thread and audio thread]
float targetVolume = 0.3f; // [shared variable between main thread and audio thread]
#define SAMPLE_RATE 48000
#define PI 3.14159265359
float phase = 0.f; // [local variable to the audio thread]
float phaseIncrement = 440 * 2.f * PI / SAMPLE_RATE; // [local variable to the audio thread]
float currentVolume = 0.3f; // [local variable to the audio thread]
// REPORT_RESULT is defined when running in Emscripten test harness. You can
// strip these out in your own project.
#ifdef REPORT_RESULT
volatile int audioProcessedCount = 0;
#endif
// This function will be called once for every fixed-size block of 128 samples
// of audio to be processed.
bool ProcessAudio(int numInputs, const AudioSampleFrame *inputs, int numOutputs, AudioSampleFrame *outputs, int numParams, const AudioParamFrame *params, void *userData) {
#ifdef REPORT_RESULT
  ++audioProcessedCount;
#endif

  // Interpolate towards the target frequency and volume values (a simple
  // one-pole smoothing filter that converges over a few render quanta).
  float targetPhaseIncrement = targetToneFrequency * 2.f * PI / SAMPLE_RATE;
  phaseIncrement = phaseIncrement * 0.95f + 0.05f * targetPhaseIncrement;
  currentVolume = currentVolume * 0.95f + 0.05f * targetVolume;

  // Produce a sine wave tone of the desired frequency to all output channels.
  for(int o = 0; o < numOutputs; ++o)
    for(int i = 0; i < 128; ++i)
    {
      float s = emscripten_math_sin(phase);
      phase += phaseIncrement;
      for(int ch = 0; ch < outputs[o].numberOfChannels; ++ch)
        outputs[o].data[ch*128 + i] = s * currentVolume;
    }

  // Range reduce to keep precision around zero.
  phase = emscripten_math_fmod(phase, 2.f * PI);

  // We generated audio and want to keep this processor going. Return false
  // here to shut down.
  return true;
}
#ifdef REPORT_RESULT
bool observe_test_end(double time, void *userData) {
  if (audioProcessedCount >= 100) {
    REPORT_RESULT(0);
    return false;
  }
  return true;
}
#endif
// This callback will fire after the Audio Worklet Processor has finished being
// added to the Worklet global scope.
void AudioWorkletProcessorCreated(EMSCRIPTEN_WEBAUDIO_T audioContext, bool success, void *userData) {
  if (!success) return;

  // Specify the input and output node configurations for the Wasm Audio
  // Worklet. A simple setup with a single mono output channel here, and no
  // inputs.
  int outputChannelCounts[1] = { 1 };
  EmscriptenAudioWorkletNodeCreateOptions options = {
    .numberOfInputs = 0,
    .numberOfOutputs = 1,
    .outputChannelCounts = outputChannelCounts
  };

  // Instantiate the tone-generator Audio Worklet Processor.
  EMSCRIPTEN_AUDIO_WORKLET_NODE_T wasmAudioWorklet = emscripten_create_wasm_audio_worklet_node(audioContext, "tone-generator", &options, &ProcessAudio, 0);

  // Connect the audio worklet node to the graph.
  emscripten_audio_node_connect(wasmAudioWorklet, audioContext, 0, 0);

  EM_ASM({
    // Add a button on the page to toggle playback as a response to user click.
    let startButton = document.createElement('button');
    startButton.innerHTML = 'Toggle playback';
    document.body.appendChild(startButton);

    let audioContext = emscriptenGetAudioObject($0);
    startButton.onclick = () => {
      if (audioContext.state != 'running') {
        audioContext.resume();
      } else {
        audioContext.suspend();
      }
    };
  }, audioContext);

#ifdef REPORT_RESULT
  emscripten_set_timeout_loop(observe_test_end, 10, 0);
#endif
}
// This callback will fire when the Wasm Module has been shared to the
// AudioWorklet global scope, and is now ready to begin adding Audio Worklet
// Processors.
void WebAudioWorkletThreadInitialized(EMSCRIPTEN_WEBAUDIO_T audioContext, bool success, void *userData) {
  if (!success) return;

  WebAudioWorkletProcessorCreateOptions opts = {
    .name = "tone-generator",
  };
  emscripten_create_wasm_audio_worklet_processor_async(audioContext, &opts, AudioWorkletProcessorCreated, 0);
}
// Define a global stack space for the AudioWorkletGlobalScope. Note that all
// AudioWorkletProcessors and/or AudioWorkletNodes on the given Audio Context
// all share the same AudioWorkletGlobalScope, i.e. they all run on the same
// one audio thread (multiple nodes/processors do not each get their own
// thread). Hence one stack is enough.
uint8_t wasmAudioWorkletStack[4096];
int main() {
  // Add UI sliders to the page to adjust the pitch and volume of the tone.
  EM_ASM({
    let div = document.createElement('div');
    div.innerHTML = 'Choose frequency: <input style="width: 800px;" type="range" min="20" max="10000" value="440" class="slider" id="pitch"> <span id="pitchValue">440</span><br>' +
                    'Choose volume: <input style="width: 300px;" type="range" min="0" max="100" value="30" class="slider" id="volume"> <span id="volumeValue">30%</span><br>';
    document.body.appendChild(div);

    document.querySelector('#pitch').oninput = (e) => {
      document.querySelector('#pitchValue').innerHTML = HEAPF32[$0>>2] = parseInt(e.target.value);
    };
    document.querySelector('#volume').oninput = (e) => {
      HEAPF32[$1>>2] = parseInt(e.target.value) / 100;
      document.querySelector('#volumeValue').innerHTML = parseInt(e.target.value) + '%';
    };
  }, &targetToneFrequency, &targetVolume);

  // Create an audio context,
  EmscriptenWebAudioCreateAttributes attrs = {
    .latencyHint = "interactive",
    .sampleRate = SAMPLE_RATE
  };
  EMSCRIPTEN_WEBAUDIO_T context = emscripten_create_audio_context(&attrs);

  // and kick off Audio Worklet scope initialization, which shares the Wasm
  // Module and Memory to the AudioWorklet scope and initializes its stack.
  emscripten_start_wasm_audio_worklet_thread_async(context, wasmAudioWorkletStack, sizeof(wasmAudioWorkletStack), WebAudioWorkletThreadInitialized, 0);
  return 0;
}
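
// A sketch of how this program could be built, assuming Emscripten's Wasm
// Audio Worklets support (the source filename here is an assumption; the
// -sAUDIO_WORKLET and -sWASM_WORKERS link flags come from the Emscripten
// documentation):
//
//   emcc tone_generator.c -o tone_generator.html -sAUDIO_WORKLET -sWASM_WORKERS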