This is the 27th day of my participation in the August Genwen Challenge.
The effect
Technology
- Node.js + Express + http + fs to build the server, read the songs, and generate a list
- Ajax to fetch the audio
- The AudioContext API to parse the audio
- Canvas to draw the visualization
- requestAnimationFrame to drive the animation
Focus
The canvas drawing and AudioContext API code lives in public/javascripts/index.js. This post focuses on the AudioContext-related APIs used in the implementation; if the Web Audio API interests you, see the Web Audio API documentation.
AudioContext
The environment and context for audio processing. It controls the creation of the nodes it contains, as well as audio processing, audio decoding, reading playback state, and so on. You have to create an AudioContext before doing anything audio-related, because everything happens inside it.
So how do you create an AudioContext?
const AC = new (window.AudioContext || window.webkitAudioContext)();
This environment contains a number of very important methods and properties, to name a few:
AudioBufferSourceNode: represents an audio source that can mount a piece of decoded audio data held in memory. It has no inputs and a single output; playback is started with the start method and stopped with the stop method.
const AudioBufferSourceNode = AC.createBufferSource(); // Create an audio source
AudioBufferSourceNode.buffer = buffer; // Mount a decoded audio clip held in memory
AudioBufferSourceNode.start([when][, offset][, duration]); // Schedule playback at the given time; with no arguments the sound plays immediately
AudioBufferSourceNode.stop([when]); // Stop playing
AudioBufferSourceNode.onended = () => {}; // Callback fired when the current audio finishes playing
GainNode: represents a volume controller through which you can control the volume of the output audio. The value usually ranges from 0 to 1.
const GainNode = AC.createGain();
GainNode.gain.value = 0.6;
AnalyserNode: represents an audio analyser, a node that provides real-time frequency-domain and time-domain analysis information.
const AnalyserNode = AC.createAnalyser();
AnalyserNode.fftSize = number * 2; // FFT is a fast algorithm for the discrete Fourier transform, which moves a signal into the frequency domain.
// The value must be a power of 2; the default is 2048.
console.log(AnalyserNode.frequencyBinCount); // number — the count of data values available for visualization, equal to AnalyserNode.fftSize / 2
AudioDestinationNode: represents the final output destination of the audio, usually the speakers.
const AudioDestinationNode = AC.destination
currentTime: the elapsed playback time in the current audio context.
AC.currentTime
(My understanding of currentTime differs a little from the API documentation.) The documentation says it is used for scheduling audio playback, visualizing timelines, and so on, so I read this value as the running time of the current context, and that is how I use it in this demo. Note that AC.currentTime starts from 0 and stops advancing while the context is suspended via suspend().
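Since AC.currentTime is a raw value in seconds, a player UI usually formats it for display. A minimal sketch of such a helper (my own, not from the original code):

```javascript
// Hypothetical helper: format a seconds value (e.g. AC.currentTime)
// as an m:ss string for display in a player UI.
function formatTime(seconds) {
  const m = Math.floor(seconds / 60);          // whole minutes
  const s = Math.floor(seconds % 60);          // remaining whole seconds
  return `${m}:${String(s).padStart(2, "0")}`; // zero-pad the seconds
}

formatTime(65);    // "1:05"
formatTime(125.7); // "2:05"
```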
decodeAudioData: a method used to asynchronously decode audio data, typically an ArrayBuffer fetched via Ajax with responseType set to "arraybuffer". decodeAudioData only works on complete audio files.
xhr.responseType = "arraybuffer"; // Have the request return raw binary data
xhr.onload = function () { // Similar to onreadystatechange
  AC.decodeAudioData(this.response, (buffer) => {
    console.log(buffer); // The decoded file; it can be mounted on AudioBufferSourceNode.buffer
    console.log(buffer.duration); // Audio length in seconds
  }, (err) => {
    console.log(`errMessage:${err}`);
  });
};
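Since decoding is asynchronous, a tiny wrapper (a hypothetical helper of my own, not from the original code) makes the callback form shown above awaitable:

```javascript
// Hypothetical wrapper: promisify the callback form of decodeAudioData
// so decoded buffers can be awaited and, e.g., stored in a cache.
function decodeAsync(ac, arrayBuffer) {
  return new Promise((resolve, reject) => {
    ac.decodeAudioData(arrayBuffer, resolve, reject);
  });
}
```

Usage would then look like `const buffer = await decodeAsync(AC, xhr.response);`.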
resume: resumes an audio context that has previously been suspended.
AC.resume()
suspend: suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
AC.suspend()
One important step remains: connecting the audio source to the volume controller, the analyser, and the output destination so the whole chain works properly.
AudioBufferSourceNode.connect(AnalyserNode);
AnalyserNode.connect(GainNode);
GainNode.connect(AudioDestinationNode);
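Putting the pieces together, here is a minimal sketch of a helper that builds the source → analyser → gain → destination graph and starts playback. The function name and option defaults are my own, not from the original code; it takes the context as a parameter so the wiring logic reads clearly:

```javascript
// Hypothetical helper (not from the original repo): wire up the node graph
// source -> analyser -> gain -> destination and start playback.
function playBuffer(ac, buffer, { volume = 0.6, fftSize = 512 } = {}) {
  const source = ac.createBufferSource();
  source.buffer = buffer;        // mount the decoded audio clip

  const analyser = ac.createAnalyser();
  analyser.fftSize = fftSize;    // must be a power of 2

  const gain = ac.createGain();
  gain.gain.value = volume;      // volume in the 0..1 range

  // Chain the nodes exactly as in the snippet above.
  source.connect(analyser);
  analyser.connect(gain);
  gain.connect(ac.destination);

  source.start(0);               // play immediately
  return { source, analyser, gain };
}
```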
Sorting out the music playback flow
- Node uses the fs module to read all the songs and generates a list that is rendered on the front page
- When play is clicked, we first check whether the current song is in the local cache. If it is, we create an audio environment and play it; if not, we fetch the audio data via Ajax, decode it into a buffer with AC.decodeAudioData, store it in the cache, and then create an audio environment for playback
- Before a different song plays, AC.suspend() is called to stop the currently playing music, so that only one song plays at a time
- The pause/play button controls playback and pausing simply via resume()/suspend()
- The elapsed time is AC.currentTime, the total duration is buffer.duration, and the progress bar is the ratio of the two
- The volume is controlled via GainNode.gain.value
- Local files are read with FileReader.readAsArrayBuffer and then still decoded with AC.decodeAudioData and played
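The progress-bar ratio from the list above can be written as a small pure helper (my own sketch, not from the original code):

```javascript
// Hypothetical helper: playback progress as a fraction in [0, 1].
// elapsed comes from AC.currentTime, total from buffer.duration.
function progress(elapsed, total) {
  if (!total || total <= 0) return 0;  // guard against missing/empty buffers
  return Math.min(elapsed / total, 1); // clamp values past the end
}

progress(30, 120); // 0.25
```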
Canvas drawing
First we take AnalyserNode.frequencyBinCount, i.e. the number of visualization data values, and allocate a Uint8Array of that length:
let arr = new Uint8Array(AnalyserNode.frequencyBinCount);
The getByteFrequencyData method is then used to copy the current frequency data into the Uint8Array passed to it:
AnalyserNode.getByteFrequencyData(arr);
This yields an array of values, and that is the data we need to visualize the audio: the values keep changing as the audio plays. Iterating over the array, each item's value is the height the corresponding bar should be drawn at, i.e. the height of bar i is arr[i]. Each bar is evenly divided into n small squares, each 4px tall with a 0.5px gap between them. The positions of the small squares and the red squares are then continuously recalculated and redrawn, which is driven by requestAnimationFrame.
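To make the bar layout concrete: each square plus its gap occupies 4 + 0.5 = 4.5px of height, so the number of full squares for a bar of height arr[i] can be computed as follows (a sketch with a helper name of my own):

```javascript
const SQUARE_H = 4; // height of each small square, per the text
const GAP = 0.5;    // vertical gap between squares, per the text

// Hypothetical helper: how many full squares fit in a bar of the given height.
function squareCount(barHeight) {
  return Math.floor(barHeight / (SQUARE_H + GAP));
}

squareCount(45); // 10 squares for a 45px-tall bar
```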
The end
Finally, the code address is attached; stars and issues are welcome.