4. Can the collected audio and video data be recorded?

Recording on the server: the advantage is that you don't have to worry about recording failures caused by problems on the client's own machine (such as insufficient disk space), and recording does not compete with other applications for local resources (high CPU usage). The disadvantage is the complexity of the implementation.

Client recording: easy to control, and since the client camera's resolution can be very high (such as 1280x720), the video recorded on the client is of high quality. By contrast, when a local HD video is uploaded to the server, insufficient network bandwidth may force the resolution down to 640x360, so the video looks noticeably blurry when users play it back, giving a poor user experience. The disadvantage of client recording is a high failure rate: it places especially heavy demands on CPU, memory, and disk.

WebRTC provides a library for recording audio and video streams on the client.

01. What is the storage format of the recorded audio and video stream?

Option one: record the raw data directly. This is the most efficient for the system. Each record could look like:

```c
#include <stdint.h>

typedef struct {
    int     media_type; // data type, 0: audio, 1: video
    int64_t ts;         // timestamp: the time the data was received
    int     data_size;  // data size
    char*   data;       // the data itself
} media_data;
```

The drawback is that after recording we still have to mux the audio and video streams and output a multimedia file, so users must wait some time before the recording becomes a playable media format (such as MP4).

Option two: record directly into a media container such as FLV. The live stream is recorded into FLV segments of N minutes each; as soon as one segment is finished it can be played back, which achieves the "watch while recording" effect mentioned above. Disadvantages: it cannot satisfy multi-party interactive scenarios, because an FLV file can only hold one audio stream and one video stream alongside each other. Moreover, muxing audio and video into an FLV file presupposes that the audio and video stream arrives in order, while in reality the data travels over UDP, an unreliable transport that guarantees no ordering. So when implementing recording you must not only record, but also sort the audio and video data yourself.

02. How do you play back the recorded audio and video stream?

If you want playback to look exactly like the live session did, you have to use a private player, because regular players do not support multi-channel video playback.

03. How long after recording starts can playback begin?

1. Watch while recording: users can start watching a few minutes after recording starts.
2. Watch immediately after recording ends.
3. Watch after a period of time.

After WebRTC records the audio and video stream, what we end up with is an ArrayBuffer, which is stored as a multimedia file via a Blob object.

ArrayBuffer: an abstract, fixed-length binary data buffer used to store images, video, and so on. It describes the space used to store binary data, but does not actually allocate that space in memory:

```javascript
let buffer = new ArrayBuffer(16);     // create a buffer of length 16
let view   = new Uint32Array(buffer); // a view onto that buffer
```

ArrayBufferView: the collective description for the different typed-array views, including Int8Array, Uint8Array, DataView, and so on. That is, Int8Array, Uint8Array, etc. are the objects that JavaScript actually allocates in memory.

Int8Array: each element occupies 8 bits of memory as a signed integer.
Uint8Array: each element occupies 8 bits of memory as an unsigned integer.
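A minimal sketch of the relationship between a buffer and its views (standard JavaScript typed-array behavior, not code from the original text):

```javascript
// One ArrayBuffer, two views over the same 16 bytes of memory.
const buf   = new ArrayBuffer(16);
const bytes = new Uint8Array(buf);   // 16 one-byte elements
const words = new Uint32Array(buf);  // 4 four-byte elements

bytes[0] = 0xff;        // write through the byte view...
console.log(words[0]);  // ...and read it through the word view:
                        // prints 255 on a little-endian machine
```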
Blob: a binary object.

```javascript
var blob = new Blob(array, options);
```

Parameter description: array: an array of ArrayBuffer, ArrayBufferView, Blob, or DOMString objects; options: specifies the media type to be stored, among other things.

```javascript
var mediaRecorder = new MediaRecorder(stream[, options]);
```

stream: the local stream obtained through getUserMedia, or the remote stream obtained through RTCPeerConnection. options: specifies the container format, codec, bit rate, and other related information, such as mimeType: 'video/webm;codecs=vp8'.

ondataavailable event: fired every time the MediaRecorder has captured data, which is where we collect the recorded audio and video data.

```javascript
var buffer = [];

// collect each chunk of recorded data as it becomes available
mediaRecorder.ondataavailable = (e) => {
    if (e.data && e.data.size > 0) {
        buffer.push(e.data);
    }
};

// start recording, emitting a data chunk every 10 ms
mediaRecorder.start(10);

btnDownload.onclick = () => {
    var a = document.createElement('a');
    var blob = new Blob(buffer, { type: 'video/webm' });
    // create a URL for the recorded data
    var url = window.URL.createObjectURL(blob);
    // set href
    a.href = url;
    // hide the link
    a.style.display = 'none';
    // the downloaded file name
    a.download = 'aaa.webm';
    // simulate a click to trigger the download
    a.click();
};
```
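For the simple single-stream case, the same Blob can also be played back locally instead of downloaded. A small sketch (assuming a hypothetical `<video id="recPlayer">` element exists on the page):

```javascript
// Play the recorded chunks back in a <video> element.
// Assumes a <video id="recPlayer"> element exists on the page.
function playRecording(buffer) {
    var recPlayer = document.querySelector('#recPlayer');
    var blob = new Blob(buffer, { type: 'video/webm' });
    recPlayer.src = window.URL.createObjectURL(blob);
    recPlayer.controls = true;
    recPlayer.play();
}
```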

5. Explain RTP and RTCP in WebRTC.

UDP: for a real-time interactive live system, UDP is fast but unreliable.
TCP: under extreme network conditions, TCP guarantees transmission reliability through a send -> acknowledge cycle plus a timeout -> retransmit mechanism.
TCP retransmission: the retransmission timer's timeout grows exponentially, doubling each time; if the seventh retransmission still times out, the TCP connection is torn down. Delays of that magnitude are unacceptable for real-time interaction, which is why WebRTC transmits media over UDP.
RTP/RTCP: an RTP header is added to the audio and video data before it is handed to UDP for transmission; RTCP runs alongside RTP, periodically exchanging sender and receiver reports so that each side can assess and adjust transmission quality.
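To make the RTP header concrete, here is an illustrative sketch (not from the original text) that parses the fixed 12-byte RTP header defined by RFC 3550 out of a received packet:

```javascript
// Parse the fixed 12-byte RTP header (RFC 3550) from a packet.
// `packet` is an ArrayBuffer holding one received UDP payload.
// DataView reads big-endian by default, matching network byte order.
function parseRtpHeader(packet) {
    const view = new DataView(packet);
    const b0 = view.getUint8(0);
    const b1 = view.getUint8(1);
    return {
        version:     (b0 >> 6) & 0x03,   // always 2
        padding:     (b0 >> 5) & 0x01,
        extension:   (b0 >> 4) & 0x01,
        csrcCount:    b0 & 0x0f,
        marker:      (b1 >> 7) & 0x01,
        payloadType:  b1 & 0x7f,         // e.g. 111 = opus in the SDP below
        sequence:     view.getUint16(2), // used to restore packet order
        timestamp:    view.getUint32(4), // media sampling timestamp
        ssrc:         view.getUint32(8), // identifies the stream source
    };
}
```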

7. SDP is the driving core of WebRTC

SDP: a text-based description of audio and video codec capabilities, transport protocols, and so on, consisting of one session description and multiple media descriptions. When two clients make a 1-to-1 call, they first exchange signaling, and an important part of that interaction is the SDP exchange. After receiving the peer's SDP, each side takes the intersection of capabilities to negotiate the audio and video parameters and the transport protocol.

Standard SDP specification, example:

```
v=0
o=- 4007659306182774937 2 IN IP4 127.0.0.1
s=-
t=0 0
m=audio 9 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126
...
a=rtpmap:111 opus/48000/2                 // description of the RTP data
a=fmtp:111 minptime=10;useinbandfec=1     // format parameter description
...
a=rtpmap:103 ISAC/16000
a=rtpmap:104 ISAC/32000
...
// the above is the audio media description; the video media description follows
m=video 9 UDP/TLS/RTP/SAVPF 96 97 98 99 100 101 102 122 127 121 125 107 108 109 124 120 123 119 114 115 116
...
a=rtpmap:96 VP8/90000
...
```

SDP structure:

Session description. Location: session level; its scope is the entire session, starting at the v= line and ending at the first media description.
Media description. Location: media level; it describes a single media stream, from an m= line to the next media description (the next m= line). Content: media type, media format, transport protocol, and the IP address and port to transmit to.

1) Session description:
v= (version), mandatory. Example: v=0, the SDP version number.
o= (origin), mandatory. Example: o=<username> <session id> <version> <network type> <address type> <address>, describing the initiator of the session.
  <username>: the user name; if you do not care about it, it can be replaced by a hyphen (-).
  <session id>: a number that must be unique within the entire session; using an NTP timestamp is recommended.
  <version>: the version number; its value increases each time the session data is modified.
  <network type>: the network type; generally IN, meaning Internet.
  <address type>: the address type, usually IP4.
  <address>: the IP address.
s= (session name), mandatory. Example: s=<session name>. There is only one s= line in an SDP, and it represents one session.
t= (timing). Example: t=<start time> <stop time>, the start and stop time of the session, in NTP time, in seconds. If both values are zero, the session is persistent.

2) Media description:
m=<media> <port> <transport> <fmt list>
  <media>: the media type, such as audio or video.
  <port>: the port.
  <transport>: the transport protocol; there are two kinds, RTP/AVP and UDP.
  <fmt list>: the media formats, i.e. the list of RTP payload types.
a=* (attributes, optional):
a=rtpmap:<payload type> <encoding name>/<sample rate>[/<encoding parameters>]. rtpmap is the combination of RTP and map, i.e. an RTP parameter mapping table.
  <payload type>: the payload type, corresponding to the payload type of the audio/video data in the RTP packet.
  <encoding name>: the codec name, such as VP8, VP9, or OPUS.
  <sample rate>: the sampling rate, e.g. audio sampling rates of 32000, 48000, and so on.
  <encoding parameters>: encoding parameters, e.g. whether the audio is stereo; mono by default.
a=fmtp:<payload type> <format specific parameters>. fmtp means format parameters.
  <payload type>: the payload type, which again corresponds to the payload type of the audio/video data in the RTP packet.
  <format specific parameters>: parameters specific to that format.

SDP in WebRTC:
Session metadata: the session-level description of the standard SDP specification.
Media description and network description: the media-level description of the standard SDP specification.
Security description: authenticates user identity during network connectivity checks.
Quality-of-service description: enables the features that safeguard audio and video quality, for example bandwidth estimation (if a user sends more data than the estimated bandwidth allows, reduce the rate of packet sending) and congestion prevention (when congestion is predicted, avoid it by reducing traffic), etc.

```
//======== security description =========
a=ice-ufrag:1uee                        // username for connectivity checks
a=ice-pwd:RQe+y7SOLQJET+duNJ+Qbk7z      // password; together these are the credentials used for connectivity checks
a=fingerprint:sha-256 ...               // DTLS fingerprint authentication, to identify legitimate users (hash elided)
...
//======== qos description =========
a=rtcp-mux
a=rtcp-rsize
a=rtpmap:96 VP8/90000
a=rtcp-fb:96 goog-remb                  // use Google's bandwidth-estimation algorithm
a=rtcp-fb:96 transport-cc
a=rtcp-fb:96 ccm fir                    // request a keyframe
a=rtcp-fb:96 nack
a=rtcp-fb:96 nack pli                   // similar to FIR
...
```
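A small sketch (illustrative, not from the original text) showing how the rtpmap lines above can be pulled out of an SDP string in JavaScript:

```javascript
// Extract the payload-type -> codec mapping from an SDP string.
// `sdp` is the text of an offer/answer, e.g. pc.localDescription.sdp.
function parseRtpMaps(sdp) {
    const maps = {};
    const re = /^a=rtpmap:(\d+) ([^\/]+)\/(\d+)(?:\/(\d+))?/;
    for (const line of sdp.split(/\r?\n/)) {
        const m = line.match(re);
        if (m) {
            maps[m[1]] = {
                codec: m[2],                       // e.g. "opus" or "VP8"
                clockRate: Number(m[3]),           // e.g. 48000 or 90000
                channels: m[4] ? Number(m[4]) : 1, // audio channels, mono by default
            };
        }
    }
    return maps;
}

// e.g. parseRtpMaps(sdp)["111"] -> { codec: "opus", clockRate: 48000, channels: 2 }
```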

8. On media negotiation

The function of media negotiation: in WebRTC, end-to-end media negotiation is implemented on top of the RTCPeerConnection object. Before communicating, each party must know which codecs the other side uses and the SSRC of the audio and video streams being transmitted. RTCPeerConnection represents the connection established between the two ends. Offer: the SDP message sent by the caller. Answer: the SDP message sent by the callee.

Negotiation process:

1. The caller creates the Offer, calls setLocalDescription to save the Offer into its local domain, and then sends the Offer to the callee via signaling.
2. After receiving the Offer, the callee calls setRemoteDescription to save the Offer into its remote domain.
3. To answer, the callee creates an SDP message of type Answer, calls setLocalDescription to save the Answer into its local domain, and then sends the Answer to the caller through signaling. At this point the callee's work is complete.
4. After receiving the Answer, the caller calls the RTCPeerConnection object's setRemoteDescription method to save the Answer into its remote domain.

The browser provides several handy APIs:

createOffer: creates an Offer. Arguments: two callbacks, one for success and one for failure. Example: pc.createOffer(setLocalAndSendMessage, handleCreateOfferError);
createAnswer: creates an Answer.
setLocalDescription: sets the local SDP information.
setRemoteDescription: sets the remote SDP information.
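A minimal sketch of the four steps above, using the promise-based form of the same APIs (the signaling channel, here a hypothetical `signaling` object, is assumed; WebRTC does not define one):

```javascript
const pc = new RTCPeerConnection();

// Step 1, caller: create the Offer, save it locally, send it via signaling.
async function call() {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);               // save to local domain
    signaling.send({ type: 'offer', sdp: offer.sdp }); // hypothetical channel
}

// Steps 2-3, callee: save the Offer remotely, answer, save locally, send back.
async function onOffer(offer) {
    await pc.setRemoteDescription(offer);              // save to remote domain
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    signaling.send({ type: 'answer', sdp: answer.sdp });
}

// Step 4, caller: save the Answer to the remote domain.
async function onAnswer(answer) {
    await pc.setRemoteDescription(answer);
}
```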

9. The mystery of how WebRTC establishes connections

Maximizing transmission efficiency and connection success rate are the basic policies WebRTC follows when establishing a connection.

1. Scenario 1: the two parties are on the same network segment. Option one: communicate directly over the internal network. Option two: go out through the company's gateway to the public network and loop back into the company from outside; compared with a direct internal connection, this detour wastes time and bandwidth.
2. Scenario 2: the two parties are in different locations. Option one: connect directly (P2P) across the public network. Option two: relay the traffic through a relay server (which increases the transmission time between A and B).

WebRTC's choice: P2P is preferred; if P2P is unavailable, it falls back to the relay mode, as the sketch below shows.
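A hedged sketch of how this policy surfaces in the RTCPeerConnection configuration (the STUN/TURN URLs and credentials are placeholders): ICE gathers host, server-reflexive (STUN), and relay (TURN) candidates, and the relay is used only when no direct path succeeds.

```javascript
// ICE server configuration: STUN enables the preferred P2P path,
// TURN provides the relay fallback. URLs and credentials are placeholders.
const pc = new RTCPeerConnection({
    iceServers: [
        { urls: 'stun:stun.example.com:3478' }, // NAT traversal for P2P
        {
            urls: 'turn:turn.example.com:3478', // relay fallback
            username: 'user',
            credential: 'pass',
        },
    ],
    // iceTransportPolicy: 'relay' would force relay-only mode;
    // the default 'all' lets WebRTC prefer a direct P2P candidate pair.
});
```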