Preface
Using an AudioContext has several benefits:

- There is no need to insert an audio tag.
- It follows the state of the system; for example, if the phone is set to vibrate/silent mode, no sound is emitted.
- You can analyse the audio data and build visual effects like the ones below: one hand-written with canvas, one done with ECharts.
Media encapsulates the AudioContext as follows
Usage

1. Include the script
```html
<script src="media.js"></script>
```
2. Parameters
- source: can be a URL or an ArrayBuffer
- options:
  - loop: boolean, whether to play in a loop
  - volume: 0 to 1, controls the volume
  - analyser: whether to enable audio analysis; can simply be set to true, or be an object whose size field configures fftSize (default 1024)
```js
let media = new Media(source, {
  loop: true,
  volume: 0.6,
  analyser: { size: 512 }
})
```
3. Events

- onload: fired when the audio has finished loading; write your playback logic in this event (see the sketch after this list)
- onended: triggered after audio playback ends
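For illustration, a minimal sketch of wiring up both events; the file name is a placeholder, not part of the library:

```js
// Minimal sketch: 'demo.mp3' is a placeholder URL.
let media = new Media('demo.mp3', { loop: false, volume: 0.8 })

// The audio is only usable after loading and decoding finish,
// so playback logic goes in onload.
media.onload = function () {
  media.play()
}

// Fired when playback reaches the end.
media.onended = function () {
  console.log('playback finished')
}
```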
4. Methods
- getData: retrieves the audio data for analysis as a Uint8Array; requires the analyser option to be enabled (see the visualizer sketch after this list)
- play: plays the audio
- suspend: pauses playback
- start(offset): sets the audio start time; offset ranges from 0 to duration
- setLoop(bool): sets whether the audio plays in a loop
- setVolume(val): sets the audio volume; the value ranges from 0 to 1.0
- getCurrentTime: gets the current playback time
- setOptions(options): sets options in one call, for example: {loop: true, volume: 0.5}
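As an example, a rough sketch of driving a canvas visualizer with getData; the canvas id, bar drawing, and timing are illustrative choices of mine, not part of the library:

```js
// Assumes <canvas id="visualizer"></canvas> exists on the page and that the
// Media instance is created with the analyser option enabled.
let media = new Media('demo.mp3', { analyser: { size: 512 } })
const canvas = document.getElementById('visualizer')
const ctx2d = canvas.getContext('2d')

media.onload = function () {
  media.setVolume(0.6)
  media.play()
  draw()
}

function draw() {
  const data = media.getData() // Uint8Array of frequency values (0-255)
  ctx2d.clearRect(0, 0, canvas.width, canvas.height)
  const barWidth = canvas.width / data.length
  for (let i = 0; i < data.length; i++) {
    const barHeight = (data[i] / 255) * canvas.height
    ctx2d.fillRect(i * barWidth, canvas.height - barHeight, barWidth, barHeight)
  }
  requestAnimationFrame(draw)
}
```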
5. Properties

- duration: the total audio duration (used in the progress sketch after this list)
- state: the current audio state, running | suspended
- volume: the current volume
- loop: whether the audio is looping
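A small sketch of a progress readout built from these properties plus getCurrentTime; the 500 ms interval is arbitrary:

```js
// Assumes `media` is a loaded Media instance that has started playing.
setInterval(() => {
  if (media.state === 'running') {
    const percent = (media.getCurrentTime() / media.duration) * 100
    console.log(`progress ${percent.toFixed(1)}%, volume ${media.volume}, loop ${media.loop}`)
  }
}, 500)
```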
Pitfalls

Here are some of the problems encountered during encapsulation, along with their solutions.

1. Getting the current time

AudioContext.currentTime counts up from the moment the context is created, not from when playback starts, so it cannot be used directly as the playback position. The solution is to declare a delta that holds the difference; it is updated every time start is called, and subtracting it from currentTime gives the accurate playback time, capped so it never exceeds duration.
```js
start(offset) {
  this.delta = this.ctx.currentTime - offset
  // ...
}

getCurrentTime() {
  return Math.min(this.ctx.currentTime - this.delta, this.duration)
}
```
2. Controlling audio playback with a slider

The buffer source node has a start method that lets the audio begin from a given offset, but it can only be called once and throws an error if called again. So if you want a slider to control the playback position, you have to work around this limitation. Solution: regenerate the source node each time, and call stop() on the old one first, otherwise multiple audio tracks will play at the same time (see the sketch after the snippet below).
```js
this.source.stop()

initBufferSource(decodedData) {
  this.source = this.ctx.createBufferSource()
  this.decodedData = decodedData
  this.source.buffer = this.decodedData
}
```
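Putting the pieces together, a hypothetical seek(offset) could look like the sketch below. The name seek and the connect call are assumptions about how the wrapper stitches things up; only stop, createBufferSource, and the source node's start(when, offset) are standard Web Audio API calls.

```js
// Hypothetical seek: stop the old source, rebuild it from the decoded data,
// then start the new one at the requested offset.
seek(offset) {
  if (this.source) {
    this.source.stop() // otherwise two tracks would play at once
  }
  this.initBufferSource(this.decodedData)
  this.source.connect(this.ctx.destination)
  this.source.start(0, offset) // start(when, offset): begin playing from `offset` seconds
  this.delta = this.ctx.currentTime - offset // keep getCurrentTime consistent (pitfall 1)
}
```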
3. The onended event

I wanted to register an onended event so I could do something afterwards, but every time I dragged the slider the event fired even though the audio hadn't finished playing. After some digging, it turns out onended is also called every time stop() is executed, so unbind the event before regenerating the source and rebind it afterwards:
```js
this.source.onended = null
this.onended && (this.source.onended = this.onended)
```
4. Dealing with asynchrony

There are two asynchronous steps involved. If the source is a URL, a request (Ajax) is made to fetch the audio resource. The other is decodeAudioData: the AudioContext decodes the ArrayBuffer asynchronously, and the newer version of this API returns a Promise. The problem is that the audio can only be operated on after decoding completes. I haven't found a cleaner way around this for now, so the wrapper registers an onload event; please handle all audio operations in that handler.
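For reference, a rough sketch of how the load flow might chain the two asynchronous steps and then fire onload. XMLHttpRequest with responseType 'arraybuffer' and the promise form of decodeAudioData are standard browser APIs; the surrounding method and field names are illustrative assumptions:

```js
// Illustrative load flow: fetch the URL as an ArrayBuffer, decode it,
// then hand control back to user code via onload.
load(url) {
  const xhr = new XMLHttpRequest()
  xhr.open('GET', url)
  xhr.responseType = 'arraybuffer'
  xhr.onload = () => {
    // decodeAudioData returns a Promise in the newer API.
    this.ctx.decodeAudioData(xhr.response).then((decodedData) => {
      this.initBufferSource(decodedData)
      this.duration = decodedData.duration
      this.onload && this.onload() // the audio is only usable from this point
    })
  }
  xhr.send()
}
```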
There is a Vue demo on GitHub if you want to take a look.

Code

GitHub address

If you have any suggestions or comments, feel free to leave a message.