This is the sixth day of my participation in the August More Text Challenge. For details, see: August More Text Challenge.
Preface
About the SSD series: short, interesting front-end pieces, aimed at 3-10 minutes and 500-1000 words, so you gain something without getting tired. A shared-desktop application sounds impressive, right? Actually it isn't: give me 3 minutes and I'll walk you through building a desktop-sharing program, and you'll even be able to hear the person on the other side talk.
Demo and source code
Two tabs, one for the sharer and one for the viewer. (By the way, does anyone know how to keep the sound when saving this as a GIF?)
The idea
User 1 ==> Screen Capture API ==> WebRTC ==> User 2 (video tag playback)
The core is the Screen Capture API plus the WebRTC API. Let's take a look at each.
Screen Capture API
MDN explanation:
The Screen Capture API, which complements the existing media capture and streaming APIs, lets users select a screen or part of a screen (such as a window) to capture as a media stream. That stream can then be recorded or shared with others over the network.
Look at the GIF first, then the code:
Isn’t it cool to be able to display your desktop on a web page with just over 10 lines of code?
<video id="deskVideo" autoplay controls></video>
<script>
(async function captureDesk() {
deskVideo.srcObject = await navigator.mediaDevices.getDisplayMedia({
video: {
cursor: "always"
},
audio: false}); }) ();</script>
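To stop sharing, you simply stop every track of the captured stream. A minimal sketch, assuming the same deskVideo element as above:

function stopCaptureDesk() {
  const stream = deskVideo.srcObject;
  if (stream) {
    // Stop every track of the captured stream and detach it from the video element.
    stream.getTracks().forEach((track) => track.stop());
    deskVideo.srcObject = null;
  }
}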
WebRTC
MDN explanation:
WebRTC (Web Real-Time Communications) is a real-time communication technology that allows web applications and sites to establish peer-to-peer connections between browsers without an intermediary, enabling the transmission of video and/or audio streams or any other data.
In short, it is peer-to-peer transmission; it solves the transport problem for us.
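If you wired up the transmission yourself, the captured stream would be attached to an RTCPeerConnection roughly like this. This is only a sketch: the signaling step (exchanging the offer/answer and ICE candidates) is omitted, and that is exactly the part we will hand over to Agora below.

// Sketch only: attach a captured desktop stream to a peer connection.
async function shareOverPeerConnection() {
  const pc = new RTCPeerConnection();

  // Capture the desktop and add its tracks to the connection.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create and set the local offer; it still has to be delivered to the
  // other peer through a signaling channel of your own (omitted here).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  return { pc, offer };
}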
Implementation
Following the rules of the SSD series (3-10 minutes, 500-1000 words, gain without getting tired): if I implemented everything myself, the word count would blow past the limit and we would also need a relay server. So let's lean on Agora instead.
Why Agora? Because it offers 10,000 free minutes a month, which is plenty for testing and personal use. The underlying idea is still what was explained above: the core remains the Screen Capture API + WebRTC.
Of course, Agora also supports publishing camera, microphone, and other streams. The camera and microphone are MediaDevices-related content and won't be covered in much detail here.
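For reference, capturing the camera and microphone directly through MediaDevices is a single call; a minimal sketch, separate from the Agora flow below:

// Sketch only: grab the camera and microphone with MediaDevices.
async function captureCameraAndMic() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true, // camera
    audio: true  // microphone
  });
  return stream; // can be attached to a <video> element, just like the desktop stream
}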
Register an account
Log in to or register on the Agora Console; the link has been given, so you can handle this step yourself.
Create an application
See the Run the sample project guide for detailed steps on creating an application and obtaining an App ID and a Token.
Download the SDK
Go here to download the Agora SDK
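If you would rather pull the SDK from npm than download it, the 4.x Web SDK used below is published as agora-rtc-sdk-ng; the downloaded script file works just as well and exposes a global AgoraRTC object.

// npm install agora-rtc-sdk-ng
import AgoraRTC from "agora-rtc-sdk-ng"; // the same AgoraRTC object used in the snippets below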
Write the sharer code
Here are the parameters you need to understand (a sketch of the matching options object follows the list):
- AppId: the ID of your application
- Channel: the channel name; you can think of it as a room
- Token: an access ticket for joining the channel
- Uid: the user ID
- Role: the user role, either "host" (the anchor/sharer) or "audience" (the viewer)
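Here is a minimal sketch of the options and rtc objects that the snippets below assume; every value is a placeholder, so use the App ID and temporary Token from your own console.

// Placeholder configuration; fill in the values from your Agora console.
const options = {
  appId: "<YOUR_APP_ID>",
  channel: "demo_channel",   // the "room" both sides join
  token: "<YOUR_TEMP_TOKEN>",
  uid: 123456,               // any number; the viewer below passes null and lets Agora assign one
  role: "host"               // "host" for the sharer, "audience" for the viewer
};

// Holds the client and the local tracks created later.
const rtc = {
  client: null,
  localAudioTrack: null,
  localVideoTrack: null
};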
Core code:
async function startBasicLive() {
  rtc.client = AgoraRTC.createClient({ mode: "live", codec: "vp8" }); // Initialize the client
  rtc.client.setClientRole(options.role); // Set the role
  const uid = await rtc.client.join(options.appId, options.channel, options.token, options.uid);
  rtc.localAudioTrack = await AgoraRTC.createMicrophoneAudioTrack(); // Microphone track
  rtc.localVideoTrack = await AgoraRTC.createScreenVideoTrack();     // Screen (desktop) track
  // Publish these audio and video track objects to the channel.
  await rtc.client.publish([rtc.localAudioTrack, rtc.localVideoTrack]);
  console.log("publish success!");
}
btnShareDesk.onclick = startBasicLive; // Register the click event
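When the sharer is done, the local tracks should be closed and the channel left. A minimal sketch of the cleanup, assuming the rtc object above and a hypothetical btnStopDesk button:

async function stopBasicLive() {
  rtc.localAudioTrack && rtc.localAudioTrack.close(); // release the microphone
  rtc.localVideoTrack && rtc.localVideoTrack.close(); // stop the screen capture
  await rtc.client.leave();                           // leave the channel
  console.log("leave success!");
}
btnStopDesk.onclick = stopBasicLive; // hypothetical "stop sharing" button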
Write the viewer code
Core code:
async function startBasicLive() {
  rtc.client = AgoraRTC.createClient({ mode: "live", codec: "vp8" });
  rtc.client.setClientRole(options.role);
  rtc.client.on("user-published", async (user, mediaType) => {
    // Start subscribing to the remote user.
    await rtc.client.subscribe(user, mediaType);
    console.log("subscribe success", mediaType);
    // This subscription is a video track.
    if (mediaType === "video") {
      // After the subscription completes, get the remote video track object from `user`.
      const remoteVideoTrack = user.videoTrack;
      // Dynamically insert a DIV node as a container to play the remote video track.
      const playerContainer = document.createElement("div");
      // Give the DIV node an id; here we use the UID of the remote user.
      playerContainer.id = user.uid.toString();
      playerContainer.style.width = "640px";
      playerContainer.style.height = "480px";
      document.body.append(playerContainer);
      // Pass in the DIV node; the SDK creates the appropriate player under it
      // to play the remote video.
      remoteVideoTrack.play(playerContainer);
    }
    // This subscription is an audio track.
    if (mediaType === "audio") {
      // After the subscription completes, get the remote audio track object from `user`.
      const remoteAudioTrack = user.audioTrack;
      // Play the audio; there is nothing to display, so no DOM element is needed.
      remoteAudioTrack.play();
    }
  });
  const uid = await rtc.client.join(options.appId, options.channel, options.token, null);
  console.log("uid", uid);
}
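One more detail the viewer side can handle: when the sharer stops publishing, the dynamically created container should be removed again. A minimal sketch, registered alongside the user-published handler above:

rtc.client.on("user-unpublished", (user, mediaType) => {
  if (mediaType === "video") {
    // Remove the container that was created for this user's video.
    const playerContainer = document.getElementById(user.uid.toString());
    playerContainer && playerContainer.remove();
  }
});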
Complete demo code
Desktop-sharing source code
Summary
Isn't it simple? Nothing here is that difficult, so you can easily get into it yourself.
A final word
Writing is not easy; your likes and comments are the biggest motivation for me to keep going.
References
- WebRTC
- Screen Capture API
- MediaDevices
- Implementing live video streaming
- Run the sample project