WebRTC

WebRTC architecture

GitHub: Fugang1996/videochat — WebRTC for video chat and simple SDK integration (github.com)

    graph TD
      WebRTC --> WebAPI[Web API]
      WebAPI --> CppAPI[C++ API]
      CppAPI --> Signaling[Session management / signaling]
      Signaling --> Audio[Audio engine]
      Signaling --> Video[Video engine]
      Signaling --> Transport[Media transport]

    graph TD
      Audio[Audio engine] --> ACodec[OPUS / G.711 PCM encoding]
      Audio --> AEC[Echo cancellation]
      Audio --> NS[Noise reduction]
      Audio --> ACapture[Audio capture]

    graph TD
      Video[Video engine] --> VCodec[VP8 / H.264 encoding]
      Video --> Jitter[Jitter buffer]
      Video --> Enhance[Image enhancement]
      Video --> VCapture[Video capture]

    graph TD
      Transport[Media transport] --> SRTP[RTP / SRTP]
      Transport --> NAT[TURN / STUN]
      Transport --> ICE[ICE / SDP]
      Transport --> DTLS[DTLS / UDP]
      Transport --> IO[Network I/O]
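The session-management/signaling and ICE/SDP layers in the diagrams above come together in the browser's RTCPeerConnection API. A minimal sketch of the offer side, assuming a hypothetical `sendToPeer()` signaling transport (WebRTC deliberately does not define signaling, so the transport is up to the application; the STUN server URL is just a common public example):

```javascript
// Classify a signaling message by its payload (pure helper).
function signalKind(msg) {
  if (msg.sdp) return "sdp";
  if (msg.candidate) return "candidate";
  return "unknown";
}

// Offer side of a call. `sendToPeer` is a hypothetical signaling transport
// (WebSocket, HTTP polling, etc.); `localStream` comes from getUserMedia.
async function startCall(sendToPeer, localStream) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // STUN for NAT traversal
  });
  // Send the local media over the connection.
  localStream.getTracks().forEach(track => pc.addTrack(track, localStream));
  // Trickle ICE: forward each candidate to the remote peer as it is found.
  pc.onicecandidate = e => { if (e.candidate) sendToPeer({ candidate: e.candidate }); };
  // Create and send the SDP offer; the remote answer is later applied
  // with pc.setRemoteDescription(answer).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ sdp: pc.localDescription });
  return pc;
}
```

The remote side mirrors this with `setRemoteDescription`, `createAnswer`, and `addIceCandidate`.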

WebRTC network structure

  • Mesh network architecture: each participant sends its media stream directly to, and receives a stream from, every other participant
  • MCU network architecture: all participants connect only to a central MCU media server, which mixes all participants’ video streams into one composite (the familiar video-chat grid)
  • SFU network architecture: the central media server only forwards streams (no mixing or transcoding); each client receives the other participants’ streams individually
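The three topologies differ mainly in how many streams each client must send and receive. A small sketch of the per-client stream counts for a call with n participants (my own summary of the bullets above, not a WebRTC API):

```javascript
// Per-client media stream counts for a call with n participants.
function streamLoad(topology, n) {
  switch (topology) {
    case "mesh": // every client sends to and receives from every other client
      return { up: n - 1, down: n - 1 };
    case "mcu":  // server mixes everything into one composite stream
      return { up: 1, down: 1 };
    case "sfu":  // client uploads once; server forwards every other stream
      return { up: 1, down: n - 1 };
    default:
      throw new Error(`unknown topology: ${topology}`);
  }
}
```

With 10 participants, mesh costs each client 9 uplinks while SFU needs only 1, which is why SFU dominates in larger rooms.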

Typical application scenarios:

  • Live video
  • Screen sharing
  • Electronic whiteboard
  • Device switching
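Device switching (the last scenario above) boils down to enumerating the available inputs and re-calling getUserMedia with a `deviceId` constraint. A sketch, assuming a `<video id="preview">` element in the page (hypothetical markup of my own):

```javascript
// Pick the next video input after the current one, wrapping around (pure helper).
function nextVideoInput(devices, currentId) {
  const cams = devices.filter(d => d.kind === "videoinput");
  if (cams.length === 0) return null;
  const i = cams.findIndex(d => d.deviceId === currentId);
  return cams[(i + 1) % cams.length]; // findIndex returns -1 → falls back to cams[0]
}

// Browser glue: switch the camera feeding the <video id="preview"> element.
async function switchCamera(currentId) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const next = nextVideoInput(devices, currentId);
  if (!next) return null;
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: next.deviceId } }, // exact: fail rather than substitute
  });
  document.getElementById("preview").srcObject = stream;
  return next.deviceId;
}
```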

Demo

  • Get the media stream for the camera and microphone

    const promise = navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  • Add a new media track

    MediaStream.addTrack()
  • Remove a media track

    MediaStream.removeTrack()
  • Specify frame rate, resolution, echo cancellation

    const constraints = {
      width: { min: 640, ideal: 1280 },
      height: { min: 640, ideal: 720 },
      aspectRatio: { ideal: 1.77777777778 }
    }
    navigator.mediaDevices.getUserMedia({ video: true }).then(mediaStream => {
      const track = mediaStream.getVideoTracks()[0] // the first video track
      track.applyConstraints(constraints)           // apply the constraints
        .then(() => {})
    })
  • Mute the audio and blank the video

    let audioTrack = mediaStream.getAudioTracks()[0];
    audioTrack.enabled = false; // mute

    let videoTrack = mediaStream.getVideoTracks()[0];
    videoTrack.enabled = false; // blank the video
  • Stop the video

    const tracks = stream.getTracks()
    tracks.forEach(track => track.stop()) // stop each track
  • Screen sharing

    let constraints // constraint options, for example:
    // cursor: whether to show the mouse cursor
    // displaySurface: which screen contents to capture
    // logicalSurface: logical display surface
    const promise = navigator.mediaDevices.getDisplayMedia(constraints)
  • Query all media devices

    const enumeratorPromise = navigator.mediaDevices.enumerateDevices() // returns all media devices
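The snippets above can be combined into a single capture flow: build constraints, acquire the stream, preview it, and later tear it down. A sketch, assuming a `<video id="localVideo">` element in the page (hypothetical markup of my own):

```javascript
// Build getUserMedia constraints from a desired resolution (pure helper).
function buildConstraints(width, height, echoCancellation = true) {
  return {
    audio: { echoCancellation },
    video: { width: { ideal: width }, height: { ideal: height } },
  };
}

// Browser glue: capture and preview a local stream.
async function startPreview() {
  const stream = await navigator.mediaDevices.getUserMedia(buildConstraints(1280, 720));
  document.getElementById("localVideo").srcObject = stream;
  return stream;
}

// Tear down: stopping every track releases the camera and microphone.
function stopPreview(stream) {
  stream.getTracks().forEach(track => track.stop());
}
```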

There are too many APIs to cover one by one, so I will package them into an SDK and write separate SDK documentation, which will be published on its own. This post will then be continuously updated toward building a virtual conference room.