Preface
Hello everyone, I'm Sea Monster.
Recently, a project of mine needed in-page recording. After a round of searching, I found the react-media-recorder library. In this article, I'll walk through how its audio-recording, video-recording, and screen-recording features can be implemented from 0 to 1.
The complete project code is on GitHub.
Requirements and Ideas
First of all, we should be clear about what we want to accomplish: sound recording, video recording, screen recording.
The principle behind recording a media stream is simple. Just remember: the input stream is stored in a blobList, and a blobUrl is generated from it for preview.
Basic functionality
With the simple idea above, we can first build a basic audio and video recorder.
The basic HTML structure is implemented here:
const App = () => {
  const [audioUrl, setAudioUrl] = useState<string>('');

  const startRecord = async () => {};
  const stopRecord = async () => {};

  return (
    <div>
      <h1>React recorder</h1>
      <audio src={audioUrl} controls />
      <button onClick={startRecord}>start</button>
      <button>pause</button>
      <button>resume</button>
      <button onClick={stopRecord}>stop</button>
    </div>
  );
}
There are four actions: start, pause, resume, and stop, plus an audio element to play back the result.
First, implement start and stop:
const mediaStream = useRef<MediaStream>();
const recorder = useRef<MediaRecorder>();
const mediaBlobs = useRef<Blob[]>([]);

// Start recording
const startRecord = async () => {
  // Read the input stream
  mediaStream.current = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
  // Create the MediaRecorder object from the stream
  recorder.current = new MediaRecorder(mediaStream.current);
  // Convert the stream to blobs and store them
  recorder.current.ondataavailable = (blobEvent) => {
    mediaBlobs.current.push(blobEvent.data);
  };
  // Generate the preview blob URL when stopped
  recorder.current.onstop = () => {
    const blob = new Blob(mediaBlobs.current, { type: 'audio/wav' });
    const mediaUrl = URL.createObjectURL(blob);
    setAudioUrl(mediaUrl);
  };
  recorder.current?.start();
};

// To end, we must not only stop the MediaRecorder but also stop all the tracks
const stopRecord = async () => {
  recorder.current?.stop();
  mediaStream.current?.getTracks().forEach((track) => track.stop());
};
As you can see above, we first get the input stream mediaStream from getUserMedia; later we can also pass video: true to capture a video stream at the same time.
The mediaStream is then passed to a MediaRecorder, which collects the stream's blob data via the ondataavailable callback.
As a final step, we call URL.createObjectURL to generate a preview link. This API is extremely useful on the front end: for example, it lets you preview a selected image without actually uploading it to the backend first.
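As an aside, here is a minimal sketch of that image-preview trick (the component and handler names are just for illustration, not part of this project):

// Hypothetical example: preview a locally selected image via a blob URL
const ImagePreview = () => {
  const [previewUrl, setPreviewUrl] = useState<string>('');

  const onFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    const file = e.target.files?.[0];
    if (file) {
      // Nothing is uploaded: the URL points at the in-memory file
      setPreviewUrl(URL.createObjectURL(file));
    }
  };

  return (
    <div>
      <input type="file" accept="image/*" onChange={onFileChange} />
      {previewUrl && <img src={previewUrl} alt="preview" />}
    </div>
  );
};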
After clicking start, you can see that the page is now recording.
Now let's implement the remaining pause and resume:
const pauseRecord = async () => {
  recorder.current?.pause();
};

const resumeRecord = async () => {
  recorder.current?.resume();
};
Hooks
Now that the simple version works, let's encapsulate the functions above as a React Hook. First, move the logic into a function and return its API:
const useMediaRecorder = () => {
  const [mediaUrl, setMediaUrl] = useState<string>('');

  const mediaStream = useRef<MediaStream>();
  const mediaRecorder = useRef<MediaRecorder>();
  const mediaBlobs = useRef<Blob[]>([]);

  const startRecord = async () => {
    mediaStream.current = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
    mediaRecorder.current = new MediaRecorder(mediaStream.current);
    mediaRecorder.current.ondataavailable = (blobEvent) => {
      mediaBlobs.current.push(blobEvent.data);
    };
    mediaRecorder.current.onstop = () => {
      const blob = new Blob(mediaBlobs.current, { type: 'audio/wav' });
      const url = URL.createObjectURL(blob);
      setMediaUrl(url);
    };
    mediaRecorder.current?.start();
  };

  const pauseRecord = async () => {
    mediaRecorder.current?.pause();
  };

  const resumeRecord = async () => {
    mediaRecorder.current?.resume();
  };

  const stopRecord = async () => {
    mediaRecorder.current?.stop();
    mediaStream.current?.getTracks().forEach((track) => track.stop());
    mediaBlobs.current = [];
  };

  return {
    mediaUrl,
    startRecord,
    pauseRecord,
    resumeRecord,
    stopRecord,
  };
}
Then consume the return values in App.tsx:
const App = () => {
  const { mediaUrl, startRecord, resumeRecord, pauseRecord, stopRecord } = useMediaRecorder();

  return (
    <div>
      <h1>React recorder</h1>
      <audio src={mediaUrl} controls />
      <button onClick={startRecord}>start</button>
      <button onClick={pauseRecord}>pause</button>
      <button onClick={resumeRecord}>resume</button>
      <button onClick={stopRecord}>stop</button>
    </div>
  );
}
With that wrapped up, we can now add more functionality to the Hook.
Clear data
Each time URL.createObjectURL is called, it generates a blob URL that looks like this:
blob:http://localhost:3000/e571f5b7-13bd-4c93-bc53-0c84049deb0a
Every call to URL.createObjectURL creates a url -> blob reference, and each reference consumes memory until it is released, so we should provide a method to destroy the reference:
const useMediaRecorder = () => {
  const [mediaUrl, setMediaUrl] = useState<string>('');

  ...

  return {
    ...
    clearBlobUrl: () => {
      if (mediaUrl) {
        URL.revokeObjectURL(mediaUrl);
      }
      setMediaUrl('');
    },
  };
}
Record the screen
Audio and video recording use getUserMedia, while screen recording calls getDisplayMedia.
To better distinguish between these cases, we can expose audio, video, and screen parameters, so developers can tell us which interface to call for the corresponding input stream:
type Params = {
  audio?: boolean;
  video?: boolean;
  screen?: boolean;
  askPermissionOnMount?: boolean;
  // Callback invoked with the result once recording stops (used below)
  onStop?: (url: string, blobs: Blob[]) => void;
};

const useMediaRecorder = (params: Params) => {
  const {
    audio = true,
    video = false,
    screen = false,
    askPermissionOnMount = false,
    onStop = () => {},
  } = params;

  const [mediaUrl, setMediaUrl] = useState<string>('');
  const mediaStream = useRef<MediaStream>();
  const audioStream = useRef<MediaStream>();
  const mediaRecorder = useRef<MediaRecorder>();
  const mediaBlobs = useRef<Blob[]>([]);

  const getMediaStream = useCallback(async () => {
    if (screen) {
      // Screen recording interface
      mediaStream.current = await navigator.mediaDevices.getDisplayMedia({ video: true });
      // Stop recording when the user ends screen sharing from the browser UI
      mediaStream.current?.getTracks()[0].addEventListener('ended', () => {
        stopRecord();
      });
      if (audio) {
        // Add an audio input stream
        audioStream.current = await navigator.mediaDevices.getUserMedia({ audio: true });
        audioStream.current?.getAudioTracks()
          .forEach((audioTrack) => mediaStream.current?.addTrack(audioTrack));
      }
    } else {
      // Plain video and audio streams
      mediaStream.current = await navigator.mediaDevices.getUserMedia({ video, audio });
    }
  }, [screen, video, audio]);

  // Record
  const startRecord = async () => {
    // Get the stream
    await getMediaStream();
    mediaRecorder.current = new MediaRecorder(mediaStream.current!);
    mediaRecorder.current.ondataavailable = (blobEvent) => {
      mediaBlobs.current.push(blobEvent.data);
    };
    mediaRecorder.current.onstop = () => {
      const [chunk] = mediaBlobs.current;
      const blobProperty: BlobPropertyBag = Object.assign(
        { type: chunk.type },
        video ? { type: 'video/mp4' } : { type: 'audio/wav' }
      );
      const blob = new Blob(mediaBlobs.current, blobProperty);
      const url = URL.createObjectURL(blob);
      setMediaUrl(url);
      onStop(url, mediaBlobs.current);
    };
    mediaRecorder.current?.start();
  };

  ...
}
Since users are now allowed to record both video and audio, when generating the URL we also set the corresponding blobProperty, so the blob url matches the media type.
Finally, pass screen: true when calling the hook to enable screen recording.
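A minimal sketch (the parameter names match the hook above):

// Record the screen, mixing in microphone audio
const { mediaUrl, startRecord, stopRecord } = useMediaRecorder({
  screen: true,
  audio: true,
  video: false,
});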
Note: audio, video, and screen recording all invoke system capabilities; the web page merely asks the browser for access. The browser itself must already hold the corresponding system permissions, so make sure your OS settings allow the browser to use the microphone, camera, and screen recording.
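If access is denied at either level, getUserMedia and getDisplayMedia reject with a DOMException (typically NotAllowedError). A minimal sketch of guarding against that, assuming we just want to log a friendly message:

// Sketch: surface a readable error when permission is denied
const safeGetStream = async () => {
  try {
    return await navigator.mediaDevices.getUserMedia({ audio: true });
  } catch (err) {
    if ((err as DOMException).name === 'NotAllowedError') {
      console.warn('Permission denied: check browser and system settings');
    }
    return undefined;
  }
};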
If we want to request the user's camera, microphone, and screen recording permissions as soon as the component mounts, we can call getMediaStream in useEffect:
useEffect(() => {
  if (askPermissionOnMount) {
    getMediaStream();
  }
}, [audio, screen, video, getMediaStream, askPermissionOnMount])
Preview
To record video, just pass { video: true } to getUserMedia. And to make it more convenient for users to watch what is being captured while recording, we can also return the streams to them:
return {
  ...
  getMediaStream: () => mediaStream.current,
  getAudioStream: () => audioStream.current,
}
Users can then preview the mediaStream by assigning it directly to a video element's srcObject:
<button onClick={() => (previewVideo.current!.srcObject = getMediaStream() || null)}>
  preview
</button>
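The previewVideo above is assumed to be a ref to a video element; a minimal sketch of the full wiring under that assumption:

const Preview = () => {
  const previewVideo = useRef<HTMLVideoElement>(null);
  // getMediaStream returns the current stream (available once recording
  // has started, or after askPermissionOnMount has acquired it)
  const { getMediaStream } = useMediaRecorder({ video: true });

  return (
    <>
      {/* autoPlay + muted lets the live stream render without a user gesture */}
      <video ref={previewVideo} autoPlay muted />
      <button onClick={() => (previewVideo.current!.srcObject = getMediaStream() || null)}>
        preview
      </button>
    </>
  );
};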
Mute
Finally, let's implement muting. The principle is also simple: take the audio tracks from mediaStream and audioStream and set enabled = false on them.
const toggleMute = (isMute: boolean) => {
  mediaStream.current?.getAudioTracks().forEach((track) => track.enabled = !isMute);
  audioStream.current?.getAudioTracks().forEach((track) => track.enabled = !isMute);
  setIsMuted(isMute);
}
We can then use it to toggle the audio track off and on:
<button onClick={() => toggleMute(!isMuted)}>
  {isMuted ? 'Unmute' : 'Mute'}
</button>
Conclusion
The above uses the WebRTC APIs to implement a simple audio, video, and screen recording Hook. A quick summary:
- getUserMedia can be used to get microphone and camera streams, while getDisplayMedia gets the screen's video and audio streams
- The essence of recording is stream -> blobList -> blob url, where MediaRecorder listens to the stream to collect the blob data
- MediaRecorder also provides start, stop, pause, resume, and other recording-related interfaces
- createObjectURL and revokeObjectURL are opposites: one creates a reference, the other destroys it
- Muting can be done with track.enabled = false, which turns the track off
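Putting it all together, a sketch of how the finished Hook from this article might be consumed (the names follow the snippets above):

const App = () => {
  const {
    mediaUrl,
    startRecord,
    pauseRecord,
    resumeRecord,
    stopRecord,
    clearBlobUrl,
  } = useMediaRecorder({ audio: true, screen: true });

  return (
    <div>
      <video src={mediaUrl} controls />
      <button onClick={startRecord}>start</button>
      <button onClick={pauseRecord}>pause</button>
      <button onClick={resumeRecord}>resume</button>
      <button onClick={stopRecord}>stop</button>
      <button onClick={clearBlobUrl}>clear</button>
    </div>
  );
};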
That wraps up the implementation of this little utility library. For more detail, check out the react-media-recorder source code; it is short and easy to follow, and a great starting point for anyone learning to read library source!
If you like my articles, follow me, or give this one a like, bookmark, and share. ❤️