Related Technology Stack
On mobile, most video chat features are implemented in native code, which offers good performance and strong compatibility but comes with a high development cost. This article therefore introduces how to implement video chat with pure H5.
- WebRTC
- Socket.io
- Node.js
Project screenshots:
Demo address: app.nihaoshijie.com.cn/webrtc/inde…
Safari is recommended.
Introduction to WebRTC
WebRTC provides the following core APIs:
- getUserMedia: gets a local media stream that contains one or more tracks, such as video and audio tracks.
- getDisplayMedia: gets a video stream of the computer screen. It cannot capture an audio stream for now; if you need audio, capture an audio track separately and add it to the stream manually so the two play together (see the sketch after this list).
- RTCPeerConnection: Used to establish P2P connections and transmit multimedia data.
- RTCDataChannel: Establishes a two-way communication data channel that can pass multiple data types.
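For example, the manual audio merging mentioned for getDisplayMedia could look roughly like this (a minimal sketch, not used in this project):

// Screen capture has no audio here, so a microphone track is added to the stream by hand
const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
screenStream.addTrack(micStream.getAudioTracks()[0]);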
Through these APIs, we can capture local audio and video streams, establish peer-to-peer connections with other browsers, and send the streams to each other. We can also open a two-way data channel to exchange real-time data such as text and files.
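As a rough illustration of the data channel (not used in this project), assuming an RTCPeerConnection named pc on each side:

// Caller side: create the channel before the offer/answer exchange
const channel = pc.createDataChannel('chat');
channel.onopen = () => channel.send('hello');

// Receiver side: the channel arrives through the ondatachannel event
pc.ondatachannel = (evt) => {
  evt.channel.onmessage = (msg) => console.log(msg.data);
};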
In this project, we mainly use the getUserMedia and RTCPeerConnection APIs.
getUserMedia: the audio and video capture API
You can use the getUserMedia interface provided by the browser to capture local audio and video streams.
const localVideo = document.getElementById('local-video');
// In a non-secure context (not HTTPS and not localhost), navigator.mediaDevices is undefined
const mediaStream = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true
});
localVideo.srcObject = mediaStream;
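In practice you may also want to handle the case where the user denies the permission prompt or has no camera; a minimal sketch:

let mediaStream;
try {
  mediaStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
} catch (err) {
  // NotAllowedError: the user refused permission; NotFoundError: no camera or microphone found
  console.error('getUserMedia failed:', err.name);
}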
RTCPeerConnection: the API for creating peer-to-peer connections
RTCPeerConnection is the API used to create peer-to-peer connections and is the key to real-time audio and video communication. The following methods and events are mainly used:
Media negotiation methods
- createOffer
- createAnswer
- setLocalDescription
- setRemoteDescription
Important events
- onicecandidate
- ontrack
A peer-to-peer connection needs a sender and a receiver; the corresponding code for each is shown below.
The sender:
const pc = new RTCPeerConnection(iceConfig);
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
sendToPeerViaSignalingServer(SIGNALING_OFFER, offer); // The sender sends a signaling message
Receiver:
const pc = new RTCPeerConnection(iceConfig);
await pc.setRemoteDescription(offer);
const answer = await pc.createAnswer();
await pc.setLocalDescription(answer);
sendToPeerViaSignalingServer(SIGNALING_ANSWER, answer); // The receiver sends a signaling message
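The sendToPeerViaSignalingServer helper used above is not part of WebRTC itself; it is just a thin wrapper over the signaling transport. With Socket.IO it might look roughly like this (a sketch with hypothetical event and variable names):

const socket = io(); // Socket.IO client connected to the signaling server

function sendToPeerViaSignalingServer(type, payload) {
  // roomId (hypothetical) identifies the chat room both peers have joined
  socket.emit('signal', { roomId, type, payload });
}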
Add audio and video streams:
// Add audio and video streams
mediaStream.getTracks().forEach(track => {
  pc.addTrack(track, mediaStream);
});
Receive audio and video streams:
remotePeer.ontrack = function(evt) {
  const remoteVideo = document.getElementById('remote-video');
  remoteVideo.srcObject = evt.streams[0];
};
You may also see addStream and onaddstream used elsewhere; these belong to an older version of the API and are no longer recommended.
Communication negotiation process
In WebRTC, a dedicated protocol called the Session Description Protocol (SDP) is used to describe each peer's media capabilities. The two parties in an audio and video call must exchange SDP information to learn which media formats the other side supports. This exchange of SDP is usually called media negotiation, and the whole process is as follows:
- The caller creates an offer (createOffer), calls setLocalDescription to save the offer containing its local SDP, and sends the offer message (containing the SDP) to the receiver through the signaling server.
- After receiving the offer from the peer, the receiver calls setRemoteDescription to save the offer containing the peer's SDP, then creates an answer (createAnswer), calls setLocalDescription to save it, and sends the answer message (containing the receiver's SDP) to the caller through the signaling server.
- After receiving the answer from the peer, the caller calls setRemoteDescription to save the answer containing the peer's SDP.
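Besides SDP, the two peers also exchange ICE candidates through the signaling server, using the onicecandidate event listed above. A minimal sketch (SIGNALING_CANDIDATE is a hypothetical message type, analogous to SIGNALING_OFFER above):

// Both sides: forward locally gathered candidates to the other peer
pc.onicecandidate = (evt) => {
  if (evt.candidate) {
    sendToPeerViaSignalingServer(SIGNALING_CANDIDATE, evt.candidate);
  }
};

// Both sides: add each candidate received from the other peer
async function onRemoteCandidate(candidate) {
  await pc.addIceCandidate(candidate);
}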
More documentation about WebRTC can be found on the official website.
Project code
Rather than pasting large chunks of code here, interested readers can refer to the source code on GitHub; below is a brief explanation of the overall code logic.
- The Vue side mainly implements the video preview feature and some CSS3 animations related to calling and answering.
- The Node.js side mainly handles real-time messaging (using the room-related API of Socket.IO), including the signaling that lets peers identify each other and the exchange of basic user information such as avatars and nicknames (see the sketch after this list).
- The overall WebRTC logic follows the code described above.
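For reference, a minimal room-based signaling server with Socket.IO could look roughly like this (an assumed sketch, not the project's actual code):

const http = require('http').createServer();
const io = require('socket.io')(http);

io.on('connection', (socket) => {
  // A client joins a room identified by roomId
  socket.on('join', (roomId) => {
    socket.join(roomId);
    socket.to(roomId).emit('peer-joined', socket.id);
  });

  // Relay signaling messages (offer / answer / ICE candidates) to the other peer in the room
  socket.on('signal', ({ roomId, type, payload }) => {
    socket.to(roomId).emit('signal', { from: socket.id, type, payload });
  });
});

http.listen(3000);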