One, Foreword
- Development background: a first attempt at implementing recording and playback in a WeChat mini program.
- Development framework:
  - Taro 2.2.6
  - Taro UI 2.3.4
- Main difficulties:
  - Recording in the mini program and uploading the audio to the backend
  - Playing audio on PC, iOS, and Android from a resource address that supports an mp3 download link

Tips: This article focuses on mini program audio recording and playback in a variety of environments. It suits readers who cannot get audio to play on iOS, and anyone interested in mini program compatibility.
Two, mini program recording and upload
2.1 Registering event listeners
First, the recording part. Here we mainly use the mini program's wx.getRecorderManager() module.
The code goes straight below; see the WeChat developer documentation if you want to understand each configuration option.
```js
import Taro, { Component } from '@tarojs/taro'

export default class Index extends Component {
  // ...
  // Declare the recorder manager module
  recorderManager = wx.getRecorderManager()

  componentDidMount() {
    // Handle recording errors
    this.recorderManager.onError(() => {
      Taro.showToast({
        title: 'Recording failed!',
        duration: 1000,
        icon: 'none'
      })
    })
    // Handle the end of recording
    this.recorderManager.onStop(res => {
      if (res.duration < 1000) {
        Taro.showToast({
          title: 'Recording time is too short',
          duration: 1000,
          icon: 'none'
        })
      } else {
        // content stores the recording result, useful for debugging
        this.setState({ content: res })
        wx.saveFile({
          tempFilePath: res.tempFilePath,
          success: result => {
            // Call the file upload interface
            this.fileUpload(result.savedFilePath)
          }
        })
      }
    })
  }

  fileUpload(tempFilePath) {
    Taro.uploadFile({
      url: XXXApi,
      filePath: tempFilePath,
      name: 'file',
      header: {
        'content-type': 'multipart/form-data',
        cookie: Taro.getStorageSync('cookie') // the upload needs the cookie attached manually
      },
      formData: {
        method: 'POST' // request method
      },
      success: res => {
        // Process the recording after a successful upload
      }
    })
  }
}
```
To sort it out:

- In `componentDidMount`, register the key recorder events during the lifecycle: the recording-error listener and the recording-stop listener.
- When recording ends, save the file locally with `wx.saveFile`.
- In the `wx.saveFile` success callback, call the file upload interface to send the file to the server.
2.2 Implementing the recording event handlers
Take a look at the DOM node section:

```jsx
<Text>Upload the voice</Text>
<Text
  onLongPress={this.handleRecordStart}
  onTouchEnd={this.handleRecordStop}
>Long press to speak</Text>
```

There are just two events: handleRecordStart and handleRecordStop, triggered on long press and when the finger is released, respectively.
A simple implementation:
```js
// Triggered on long press
handleRecordStart(e) {
  this.setState({
    record: {
      // Update the recording state; the record button style changes accordingly
      text: 'Release to save',
      type: 'recording'
    }
  })
  // Start recording
  this.recorderManager.start({
    duration: 60000,
    sampleRate: 44100,
    numberOfChannels: 1,
    encodeBitRate: 192000,
    format: 'mp3',
    frameSize: 50
  })
  Taro.showToast({
    title: 'Recording',
    duration: 60000,
    icon: 'none'
  })
}

// Triggered when the finger is released (touchend)
handleRecordStop() {
  // Restore the recording state modified in handleRecordStart
  this.setState({
    record: {
      text: 'Long press to record',
      type: 'record'
    }
  })
  // Hide the "Recording" toast
  wx.hideToast()
  // Stop recording
  this.recorderManager.stop()
}
```
A record object is used to track the recording state.
In the start method, duration is the maximum recording length, set here to 60000 ms. The format value mp3 means the recorded audio file is saved in mp3 format.
Warm tip: early in development we did not set format to mp3, which added work for our backend colleagues (converting m4a to mp3). It is much more convenient to handle this directly on the front end.
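For reference, the start() options above can be gathered into one named constant, so the format choice stays visible in one place. This is a small refactor sketch for illustration, not code from the original project:

```javascript
// The recorder options used in this article, collected as one constant
// (illustrative refactor, not from the original code).
// format: 'mp3' is the key line: it saves the backend an m4a-to-mp3 conversion.
const RECORD_OPTIONS = {
  duration: 60000,       // maximum recording length, in ms (60 s)
  sampleRate: 44100,     // sample rate
  numberOfChannels: 1,   // mono is enough for voice
  encodeBitRate: 192000, // encoding bit rate, must match the sample rate range
  format: 'mp3',         // record directly as mp3
  frameSize: 50
}
```

It can then be passed straight through as this.recorderManager.start(RECORD_OPTIONS).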
Three, mini program recording playback
3.1 Recording playback
When it comes to audio playback, the first thing you might think of is the audio tag, dynamically assigning its src attribute. That is true on the PC side. The mini program, however, is less capable here: its audio component is no longer maintained.
For audio playback, we therefore choose the wx.createInnerAudioContext() interface.
Tips: if the backend can return a URL ending in .mp3 after the audio is uploaded (for example: http://47.104.167.164/faceVideo/result_2020_07_21_12_33_43.mp3), you can consider playing it directly with wx.createInnerAudioContext()'s play() method.
In our case, for various reasons, the link returned after uploading the audio file was a cloud file ID (an address that triggers an mp3 download when opened in a browser). Testing showed that Android could play it directly, while iOS played it with no sound.
After consulting our group's architect, we decided to download the file first, save it locally on the phone, and then play it (testing confirmed this scheme works).
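The decision above can be sketched as a small guard: play plain .mp3 download links directly, and fall back to download-then-play for everything else. The helper name and the cloud-ID pattern are illustrative assumptions, not part of the original code:

```javascript
// Hypothetical helper: decide whether a backend link can be fed straight
// to wx.createInnerAudioContext(), or needs the iOS-safe fallback of
// downloading to the phone first.
function canPlayDirectly(url) {
  // A plain .mp3 download link (with or without a query string) is safe
  return /\.mp3(\?.*)?$/i.test(url)
}

console.log(canPlayDirectly('http://47.104.167.164/faceVideo/result_2020_07_21_12_33_43.mp3')) // true
console.log(canPlayDirectly('cloud://env-id.file-id')) // false
```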
Let’s go straight to the code:
```js
// Mini program audio playback API
innerAudioContext = wx.createInnerAudioContext()

// Download the audio file
downloadFile() {
  const fileSystemManager = wx.getFileSystemManager()
  const { voiceUrl } = this.state
  wx.downloadFile({
    url: voiceUrl,
    header: { 'Content-type': 'audio/mp3' },
    success: res => {
      // The success callback fires as soon as the server responds;
      // the business code must check whether it got the expected content
      if (res.statusCode === 200) {
        fileSystemManager.saveFile({
          tempFilePath: res.tempFilePath,
          // Local file path on the phone
          filePath: `${wx.env.USER_DATA_PATH}/${new Date().getTime()}.mp3`,
          success: result => {
            if (result.errMsg === 'saveFile:ok') {
              this.registerAudioContext(result.savedFilePath)
            }
          }
        })
      }
    }
  })
}

// Configure the audio context and play
registerAudioContext(path) {
  this.innerAudioContext.src = path
  // Avoid silent playback when iOS is in mute mode
  this.innerAudioContext.obeyMuteSwitch = false
  this.innerAudioContext.play()
  this.innerAudioContext.onEnded(res => {
    // isPlaying marks whether the recording is playing
    this.setState({ isPlaying: false })
    this.innerAudioContext.stop()
  })
  this.innerAudioContext.onError(res => {
    // Playback failed
  })
  this.innerAudioContext.onPlay(res => {
    // Playback started
  })
  this.innerAudioContext.onStop(res => {
    // Playback stopped
  })
}
```
Two things are done here:

- Download the file with the `wx.downloadFile()` interface. Note the `header` property: the `Content-type` value `audio/mp3` identifies this as an audio file. Then use WeChat's file manager, `wx.getFileSystemManager()`, and its `saveFile()` method to save the file locally.
- Play the locally saved `mp3` file with `wx.createInnerAudioContext()`'s `play()` method.
3.2 Performance optimization
After playback, more and more recording files pile up on the phone. Let's do some housekeeping as a performance optimization: delete the redundant files at an appropriate time.
The code is as follows:
```js
componentWillUnmount() {
  this.clearDir()
}

// Delete the downloaded audio files
clearDir() {
  const fileSystemManager = wx.getFileSystemManager()
  const dirPath = wx.env.USER_DATA_PATH
  fileSystemManager.readdir({
    dirPath,
    success: res => {
      const { errMsg, files } = res
      if (errMsg === 'readdir:ok') {
        files.forEach(item => {
          fileSystemManager.unlink({
            filePath: `${dirPath}/${item}`
          })
        })
      }
    }
  })
}
```
To sort it out:
Use the readdir() method of the wx.getFileSystemManager() interface to read all files in the specified directory (wx.env.USER_DATA_PATH). In its success callback, loop over the files with forEach and delete each one with unlink(). Finally, call this method in the componentWillUnmount lifecycle hook.
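One hedge worth adding: clearDir() unlinks every file under USER_DATA_PATH, not just the audio this page wrote. Assuming this page only ever creates .mp3 files (an assumption, not stated in the original), a filter keeps unrelated files safe:

```javascript
// Illustrative helper: pick only the downloaded recordings out of a
// directory listing, so the unlink loop cannot touch other files.
function selectAudioFiles(files) {
  return files.filter(name => name.endsWith('.mp3'))
}

console.log(selectAudioFiles(['1595300000000.mp3', 'draft.json'])) // [ '1595300000000.mp3' ]
```

In clearDir(), the forEach would then run over selectAudioFiles(files) instead of files.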
Four, PC audio playback
Mini program recording and playback got a brief introduction above; let's expand here and talk about native audio playback on the PC side.
The project does not pull in a player plugin; it uses the audio tag instead. The HTML section reads as follows:
```jsx
const { voice_url, isPlaying } = this.state;
return (
  <>
    <p>
      <span>Audio:</span>
      <Button onClick={this.onBtnClick}>{isPlaying ? 'Stop' : 'Play'}</Button>
    </p>
    <audio
      id={`audio`}
      src={voice_url}
      autoPlay={true}
      ref={this.audioRef}
      preload={'auto'}
      onCanPlay={() => {}}
      onTimeUpdate={() => {}}>
      <track src={voice_url} kind='captions' />
    </audio>
  </>
)
```
Now look at the PC-side download-and-play part. The principle is similar to the mini program: download first, then play. The code is as follows:
```ts
// Play or pause
onBtnClick = () => {
  const { isPlaying } = this.state;
  // Distinguish between play and pause
  if (isPlaying) {
    this.audioRef.current.pause();
  } else {
    this.downloadFile();
  }
  this.setState({ isPlaying: !isPlaying });
};

// Download the file
downloadFile = () => {
  const { download_url } = this.state;
  axios.get(download_url as string, { responseType: 'blob' }).then((res: any) => {
    const reader = new FileReader();
    const data = res.data;
    // Hand the blob off for playback once it has been fully read
    reader.onload = e => {
      this.executeDownload(data);
    };
    reader.readAsText(data);
  });
};

// Preview the audio file in the browser
executeDownload = (data: any) => {
  if (!data) {
    return;
  }
  // Convert the file into a playable audio stream link
  const url = window.URL.createObjectURL(new Blob([data], { type: 'audio/mp3' }));
  // Store the link on the front end; the audio tag picks it up via src
  this.setState({ voice_url: url });
};
```
To sort it out:

- Create the `audio` tag as the container for audio playback.
- Clicking the play button on the page triggers the file download method.
- Download the resource file through `axios`, read it with `new FileReader()`, and once the file is fully loaded, use `window.URL.createObjectURL()` to generate a link the browser can preview as audio. The `audio` element plays the sound automatically when its `src` property changes.
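One caveat beyond the list above: each window.URL.createObjectURL() call pins its Blob in memory until the URL is revoked, so replaying many recordings leaks one blob per play. A minimal sketch (not from the original article) that keeps only the current link alive:

```javascript
// Revoke the previous blob: link before creating the next one, so only
// one downloaded audio stream stays in memory at a time.
function swapObjectUrl(previousUrl, blob) {
  if (previousUrl) {
    URL.revokeObjectURL(previousUrl)
  }
  return URL.createObjectURL(blob)
}
```

In executeDownload, the setState call could then become this.setState({ voice_url: swapObjectUrl(this.state.voice_url, new Blob([data], { type: 'audio/mp3' })) }).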
Five, Thanks
- If this article helped you, give it a thumbs up! Thanks for reading.
- There are still many imperfections in this article; please feel free to raise questions in the comments section.