About the Agora

Agora provides a simple and powerful video and voice calling SDK that lets developers add HD voice and video to almost any phone or computer application. The Agora Web SDK gives web developers a toolkit for rapidly building video call applications, and with its help you can build a stable video call app with little effort. The latest release, the Agora Web SDK NG, adds a number of new features: Promises for all asynchronous methods, full TypeScript support, more flexible audio and video controls, smoother integration than the older version, and a greatly improved development experience. In this article I'll share how to quickly integrate the SDK and implement a simple live video call in a React app.

The preparatory work

  1. Register an Agora account on the Agora website.
  2. Go to the console, complete identity verification, create a project in the project list (select secure mode), and get the App ID.
  3. Edit the project, click Generate Temporary Token, fill in a channel name, and get the temporary token. (For a production release, refer to the documentation and the official repository to write the token-generation code, deploy it to your own server, and fetch tokens through an API.)

Create the project and integrate the SDK

  1. Create a React app:
npx create-react-app agora-app
  2. Install the SDK:
npm install agora-rtc-sdk-ng --save
# or
yarn add agora-rtc-sdk-ng
  3. Import the module in app.js:
import AgoraRTC from 'agora-rtc-sdk-ng'
const client = AgoraRTC.createClient({ mode: 'rtc', codec: 'vp8' })

If you’re using TypeScript, you can also import the SDK’s type declarations:

import AgoraRTC, { IAgoraRTCClient } from 'agora-rtc-sdk-ng'
const client: IAgoraRTCClient = AgoraRTC.createClient({ mode: 'rtc', codec: 'vp8' })

Realize basic video call

Now let’s implement the video call. Create an async function startBasicCall() in app.js; the rest of our code will be written inside this function. (In the NG version we can use Promises with async/await to make the whole code flow clearer.)

async function startBasicCall() {
  // The code will be written here...
}

1. Create a local client

Create a local client object by calling the createClient() method. The mode parameter sets the channel scene; for real-time audio and video we choose 'rtc'. The codec parameter sets the codec format; we select 'vp8'.

const rtcClient = AgoraRTC.createClient({ mode: 'rtc', codec: 'vp8' })

2. Join the target channel

To join a channel, you need to pass four parameters:

  • The project’s App ID
  • The channel name
  • The temporary token generated in secure mode
  • A user ID (UID); if you pass null, the SDK generates one for you
const uid = await rtcClient.join(<AppId>, <Channel Name>, <Token>, null);

3. Create local audio and video tracks

After joining the channel, we create the local audio and video tracks:

  • Call createMicrophoneAudioTrack() to create a local audio track object, localAudioTrack.
  • Call createCameraVideoTrack() to create a local video track object, localVideoTrack.
// Create a local audio track from the audio captured by the microphone.
const localAudioTrack = await AgoraRTC.createMicrophoneAudioTrack()
// Create a local video track from the video captured by the camera.
const localVideoTrack = await AgoraRTC.createCameraVideoTrack()

4. Play local audio and video tracks

After creating the local audio and video tracks, we can call the play() method to play them. To play a video track, you need to specify a DOM element: you can pass either a DOM object (a native DOM object, not a React-wrapped one) or the element’s ID. The SDK then automatically creates a player inside that element and renders the video there.

// Play the video track
localVideoTrack.play(document.getElementById('playerContainer'))
// or
localVideoTrack.play('playerContainer')
// Play the audio track
localAudioTrack.play()

Inside the App() component:

// Call startBasicCall() when the component is mounted
useEffect(() => {
  startBasicCall()
}, [])
// React returns the DOM element
return (
  <div className="App">
    <div id="playerContainer" className="player-container"></div>
  </div>
)

Also note: the DOM element used to play the video track must have an explicit width and height:

.player-container {
  height: 480px;
  width: 720px;
}

5. Publish local audio and video tracks

Once the local tracks have been created and you have successfully joined the channel, you can call the publish() method to publish the local audio and video data to the current channel for other users in the channel to subscribe to.

await rtcClient.publish([localAudioTrack, localVideoTrack])

Note the following about publishing:

  • An RTCClient object can publish multiple audio tracks simultaneously; the SDK automatically mixes them into a single audio track.
  • An RTCClient object can publish only one video track at a time. To change the video track (for example, you have already published a camera video track and want to switch to a screen-sharing video track), you must unpublish the camera track first.
  • You can call this method multiple times to publish different tracks, but you cannot publish the same track object repeatedly.
  • This method is asynchronous and needs to be used with promises or async/await.
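The camera-to-screen-share switch described in the notes above can be sketched as follows. This is a minimal sketch, not code from the article: switchToScreenShare is a hypothetical helper name, and the client and SDK module are passed in as parameters only to keep the sketch self-contained (in the real app you would use the rtcClient and the imported AgoraRTC from the steps above).

```javascript
// Hypothetical helper: switch the published video from camera to screen share.
// `client` is the RTC client and `sdk` is the imported AgoraRTC module, both
// passed in here only to keep the sketch self-contained.
async function switchToScreenShare(client, sdk, cameraTrack) {
  // A client may publish only one video track, so unpublish the camera first.
  await client.unpublish([cameraTrack])
  // createScreenVideoTrack() prompts the user to pick a window or screen.
  const screenTrack = await sdk.createScreenVideoTrack()
  // Now the new video track can be published.
  await client.publish([screenTrack])
  return screenTrack
}
```

Because publish() and unpublish() both return Promises, the ordering constraint ("unpublish first") is expressed naturally with await.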

6. Subscribe to a remote user and play audio and video

When a remote user in the same channel successfully publishes audio and video tracks (that is, the remote user calls publish()), the SDK triggers the user-published event. The event carries two parameters: the remote user object (user) and the type of media published (mediaType). We need to listen for this event and call subscribe() to subscribe.

rtcClient.on('user-published', async (user, mediaType) => {
  // Start subscribing to remote users.
  await rtcClient.subscribe(user, mediaType)
  console.log('subscribe success')

  // Indicates that the subscription is a video.
  if (mediaType === 'video') {
    // After the subscription is complete, get the remote video track object from 'user'.
    const remoteVideoTrack = user.videoTrack
    // Create a DIV element
    const playerContainer = document.createElement('div')
    // Assign an ID to the DIV node. Here we use the remote user's UID.
    playerContainer.id = user.uid.toString()
    // Add the class and set the height and width
    playerContainer.classList.add('player-container')
    // Dynamically insert a DIV node into the App component as a container for playing the remote video track.
    const App = document.querySelector('.App')
    App.append(playerContainer)
    // Pass in the DIV node and have the SDK create the appropriate player under this node to play the remote video.
    remoteVideoTrack.play(playerContainer)
  }

  // Indicates that this subscription is audio.
  if (mediaType === 'audio') {
    // After the subscription is complete, get the remote audio track object from 'user'.
    const remoteAudioTrack = user.audioTrack
    // Play the audio because there is no screen, there is no need to provide information about the DOM element.
    remoteAudioTrack.play()
  }
})

When a remote user unpublishes or leaves a channel, the SDK triggers a user-unpublished event. At this point we need to destroy the DIV node we just created dynamically.

rtcClient.on('user-unpublished', (user, mediaType) => {
  if (mediaType === 'video') {
    // Get the DIV node you just created dynamically.
    const playerContainer = document.getElementById(user.uid.toString())
    // Destroy this node.
    playerContainer.remove()
  }
})
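The walkthrough above never ends the call, so here is a minimal sketch of that last step. leaveCall is a hypothetical helper name, not from the article; it assumes the rtcClient and local tracks from the steps above, passed in as parameters to keep the sketch self-contained.

```javascript
// Hypothetical cleanup helper: stop local media and leave the channel.
// `client` is the RTC client; `localTracks` are the local audio/video tracks.
async function leaveCall(client, localTracks) {
  for (const track of localTracks) {
    // close() stops capture and releases the microphone or camera.
    track.close()
  }
  // leave() disconnects from the channel; remote subscriptions end with it.
  await client.leave()
}
```

In the React app you would typically call this from the useEffect cleanup function so the devices are released when the component unmounts.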

During testing, open the page on a PC and on a mobile device: the local camera view appears on top and the remote camera view below. With that, we have implemented a basic video call.

Conclusion

The Agora Web SDK NG (Next Generation) is Agora's latest web audio and video solution, and the overall experience is very good. First, integration is simple and the SDK can be embedded in various web frameworks. Second, the documentation is clear, so you can become productive quickly. In addition, the features are powerful, the API surface is complete, and video call quality is reliably good. Compared with the previous version (3.x), the NG version embraces modern web programming:

  • All asynchronous methods return Promises, escaping callback hell; combined with async/await, the whole code flow becomes much clearer.
  • The NG version supports TypeScript perfectly and helps us write high-quality, maintainable code, which is important for large front-end application development.
  • The NG version also removes the Stream object and uses Track objects to control audio and video. The minimum unit of control changes from one audio/video stream to one audio or video track, and each track object has its own methods, so control is finer-grained and the code you write is more flexible.
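The per-track granularity in the last point can be illustrated with a tiny sketch. toggleMic is a hypothetical helper, not part of the SDK; it assumes the localAudioTrack from the tutorial, passed in as a parameter, and the setEnabled() method that NG local tracks use to turn capture on and off.

```javascript
// Hypothetical helper: mute or unmute only the microphone track.
// The video track is a separate object and is left untouched.
async function toggleMic(audioTrack, enabled) {
  // NG local tracks expose an async setEnabled() per track.
  await audioTrack.setEnabled(enabled)
  return enabled
}
```

With the old Stream object this kind of single-medium control had to go through the combined stream; with tracks it is one call on one object.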

Looking ahead

Back to the project: although the Agora Web SDK NG embeds smoothly into a React project, it uses the native DOM API and does not have first-class support for React features (such as binding to React's virtual DOM). We hope the Agora team will provide a React-specific SDK to help React developers integrate quickly and build interactive audio and video scenarios. This article was originally published on the RTC Developer community.