1. Foreword

Today you can easily add audio and video calling to a Flutter app with the plugin provided by Agora (声网). First, visit the Agora official website for a general introduction:

On the Agora website you can register an account, create a project, and obtain an App ID, which is needed when initializing the SDK.

2. The plugin dependency

Since this is a Flutter project, search for "agora" on pub.dev/packages/ and you will find the agora_rtc_engine plugin, published by Agora.io. Add it to your pubspec.yaml.
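Assuming the plugin is installed in the usual way, the pubspec.yaml entry looks roughly like this (the version number below is illustrative; check pub.dev for the latest release):

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Version is illustrative; use the latest from pub.dev
  agora_rtc_engine: ^1.0.0
```

Then run `flutter pub get` to fetch the package.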

3. Project structure

1. The home page

The home page layout is very simple: just two buttons, one for a voice call and one for a video call, built from Center, Row, and RaisedButton:

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: Row(
          crossAxisAlignment: CrossAxisAlignment.start,
          mainAxisAlignment: MainAxisAlignment.spaceEvenly, // Free space on the main axis is divided evenly
          children: <Widget>[
            // The button on the left
            RaisedButton(
              padding: EdgeInsets.all(0),
              // Click event
              onPressed: () {
                // Go to the voice page
                onAudio();
              },
              child: Container(
                height: 120,
                width: 120,
                // Decoration --> gradient
                decoration: BoxDecoration(
                  gradient: const LinearGradient(
                    colors: [Colors.blueAccent, Colors.lightBlueAccent],
                  ),
                  // 12-pixel rounded corners
                  borderRadius: BorderRadius.circular(12.0),
                ),
                child: Text(
                  "Voice call",
                  style: TextStyle(color: Colors.white, fontSize: 18.0),
                ),
                // Center the text
                alignment: Alignment.center,
              ),
              shape: new RoundedRectangleBorder(
                borderRadius: BorderRadius.circular(12.0),
              ),
            ),
            // The button on the right
            RaisedButton(
              padding: EdgeInsets.all(0),
              onPressed: () {
                // Go to the video page
                onVideo();
              },
              child: Container(
                height: 120,
                width: 120,
                // Decoration --> gradient
                decoration: BoxDecoration(
                  gradient: const LinearGradient(
                    colors: [Colors.blueAccent, Colors.lightBlueAccent],
                  ),
                  // 12-pixel rounded corners
                  borderRadius: BorderRadius.circular(12.0),
                ),
                child: Text(
                  "Video call",
                  style: TextStyle(color: Colors.white, fontSize: 18.0),
                ),
                // Center the text
                alignment: Alignment.center,
              ),
              shape: new RoundedRectangleBorder(
                borderRadius: BorderRadius.circular(12.0),
              ),
            ),
          ],
        ),
      ),
    );
  }

The effect is as follows:

  • Voice call: the click event onAudio():

  onAudio() async {
    SimplePermissions.requestPermission(Permission.RecordAudio)
        .then((status_first) {
      if (status_first == PermissionStatus.denied) {
        // If rejected
        Toast.show("This feature requires recording permission", context,
            duration: Toast.LENGTH_SHORT, gravity: Toast.CENTER);
      } else if (status_first == PermissionStatus.authorized) {
        // If permission is granted, go to the voice page
        Navigator.push(
          context,
          MaterialPageRoute(
            builder: (context) => new AudioCallPage(
              // The channel name is hard-coded for easy testing
              channelName: "122343",
            ),
          ),
        );
      }
    });
  }

The voice page is entered only once the recording permission has been granted.

  • Video call: the click event onVideo(). Video needs the camera permission in addition to the recording permission:

  onVideo() async {
    SimplePermissions.requestPermission(Permission.Camera).then((status_first) {
      if (status_first == PermissionStatus.denied) {
        // If rejected
        Toast.show("This feature requires granting camera permissions", context,
            duration: Toast.LENGTH_SHORT, gravity: Toast.CENTER);
      } else if (status_first == PermissionStatus.authorized) {
        // If agreed, request the recording permission next
        SimplePermissions.requestPermission(Permission.RecordAudio)
            .then((status_second) {
          if (status_second == PermissionStatus.denied) {
            // If rejected
            Toast.show("This feature requires recording permission", context,
                duration: Toast.LENGTH_SHORT, gravity: Toast.CENTER);
          } else if (status_second == PermissionStatus.authorized) {
            // If authorized
            Navigator.push(
              context,
              MaterialPageRoute(
                builder: (context) => new VideoCallPage(
                  // The video room channel name is hard-coded for easy testing
                  channelName: "122343",
                ),
              ),
            );
          }
        });
      }
    });
  }

So the home page is finished.

2. AudioCallPage

Here I only built a one-to-one voice call interface. A multi-party call can also be implemented; just adapt the interface to the style you prefer.

2.1. Style

The interface of a one-to-one call is similar to a WeChat voice call. In the middle of the screen is the other party's avatar (here I only display the other party's user ID), and at the bottom is a menu bar: mute, hang up, and speakerphone. The layout uses Stack and Positioned:

  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      appBar: new AppBar(
        title: Text(widget.channelName),
      ),
      // Background black
      backgroundColor: Colors.black,
      body: new Center(
        child: Stack(
          children: <Widget>[_viewAudio(), _bottomToolBar()],
        ),
      ),
    );
  }

2.2. Logic

Implementing voice calling involves the following main steps:

  • Initialize the engine
  • Enable the audio module
  • Join a room
  • Set event listeners (joined the room successfully, a user joined, a user left, a user went offline)
  • Implement the layout
  • Exit the call (destroying the engine and freeing resources as needed)
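As a rough sketch (my own summary, not the article's exact code), the steps above could be wired together in the voice page's initState; agore_appId and setAgoreEventListener are the names used later in this article:

```dart
@override
void initState() {
  super.initState();
  // 1. Initialize the engine
  AgoraRtcEngine.create(agore_appId);
  // 2. Enable the audio module
  AgoraRtcEngine.enableAudio();
  // 3. Register event listeners before joining so no callback is missed
  setAgoreEventListener();
  // 4. Join the room; uid 0 lets the SDK assign a user ID
  AgoraRtcEngine.joinChannel(null, widget.channelName, null, 0);
}
```

Each of these steps is detailed below.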
2.2.1. Initialize the engine

There is only one line of code to initialize the engine:

    // Initialize the engine
    AgoraRtcEngine.create(agore_appId);

Stepping into the source, we find:

  /// Creates an RtcEngine instance.
  ///
  /// The Agora SDK only supports one RtcEngine instance at a time, therefore the app should create one RtcEngine object only.
  /// Only users with the same App ID can join the same channel and call each other.
  // Applications in the RtcEngine SDK should create only one instance of RtcEngine
  static Future<void> create(String appid) async {
    _addMethodCallHandler();
    return await _channel.invokeMethod('create', {'appId': appid});
  }


And into _addMethodCallHandler():

  // CallHandler
  static void _addMethodCallHandler() {
    _channel.setMethodCallHandler((MethodCall call) {
      Map values = call.arguments;

      switch (call.method) {
        // Core events
        case 'onWarning':
          if (onWarning != null) {
            onWarning(values['warn']);
          }
          break;
        case 'onError':
          if (onError != null) {
            onError(values['err']);
          }
          break;
        case 'onJoinChannelSuccess':
          if (onJoinChannelSuccess != null) {
            onJoinChannelSuccess(
                values['channel'], values['uid'], values['elapsed']);
          }
          break;
        case 'onRejoinChannelSuccess':
          if (onRejoinChannelSuccess != null) {
            onRejoinChannelSuccess(
                values['channel'], values['uid'], values['elapsed']);
          }
          break;
        // ... more cases omitted
      }
    });
  }

You can see that the callbacks are triggered by specific conditions, such as SDK warnings and errors, successfully joining a channel, rejoining, and so on. So the single line AgoraRtcEngine.create(agore_appId) both initializes the engine and registers the listener callbacks for these states.

2.2.2. Enable audio module

Enable audio module:

    // Enable the audio module
    AgoraRtcEngine.enableAudio();

Read the official documentation:

2.2.3. Join the room

After initializing the engine and enabling the audio module, join the room:

  // Create a render view
  void _createRendererView(int uid) {
    // joinChannel: the first parameter is the token, the second is the channel name,
    // the third is optional channel info (usually null), and the fourth is the user ID
    setState(() {
      AgoraRtcEngine.joinChannel(null, widget.channelName, null, uid);
    });

    // Add an audio session object for the layout (keyed by uid)
    VideoUserSession videoUserSession = VideoUserSession(uid);
    _userSessions.add(videoUserSession);
    print("Session count: " + _userSessions.length.toString());
  }

The key call is AgoraRtcEngine.joinChannel(null, widget.channelName, null, uid). Each user is wrapped in a VideoUserSession object and stored in a List<VideoUserSession>, which later drives the layout.

2.2.4. Set event listening

How do we know when a user joins, leaves, or goes offline? Through event listeners:

  // Set the event listeners
  void setAgoreEventListener() {
    // Succeeded in joining the room
    AgoraRtcEngine.onJoinChannelSuccess =
        (String channel, int uid, int elapsed) {
      print("Joined room successfully, channel: $channel, uid: $uid");
    };

    // Listen for new users joining
    AgoraRtcEngine.onUserJoined = (int uid, int elapsed) {
      print("New user joined with id: $uid");

      setState(() {
        // Update the UI layout
        _createRendererView(uid);
        self_uid = uid;
      });
    };

    // Listen for a user leaving the room
    AgoraRtcEngine.onUserOffline = (int uid, int reason) {
      print("User left with id: $uid");
      setState(() {
        // Remove the user and update the UI layout
        _removeRenderView(uid);
      });
    };

    // Listen for the local user leaving the channel
    AgoraRtcEngine.onLeaveChannel = () {
      print("User left the channel");
    };
  }
2.2.5. Layout implementation

The following is a simple UI for the middle of the screen. Since this is only a one-to-one call, just the other party's user ID is shown in the middle; for a multi-party call, the user IDs can be laid out according to the size of the List collection.

  // Audio view layout
  Widget _viewAudio() {
    // Get the number of audio sessions first
    List<int> views = _getRenderViews();
    switch (views.length) {
      // Only one user (yourself)
      case 1:
        return Center(
          child: Container(
            child: Text("User 1"),
          ),
        );
      // Two users
      case 2:
        return Positioned(
          // Display the peer's id in the middle
          top: 180,
          left: 30,
          right: 30,
          child: Container(
            height: 260,
            child: Column(
              mainAxisAlignment: MainAxisAlignment.spaceBetween,
              crossAxisAlignment: CrossAxisAlignment.center,
              children: <Widget>[
                ClipRRect(
                  borderRadius: BorderRadius.circular(10),
                  child: Container(
                    alignment: Alignment.center,
                    width: 140,
                    height: 140,
                    color: Colors.red,
                    child: Text(
                      "${self_uid}",
                      textAlign: TextAlign.center,
                      style: TextStyle(color: Colors.white),
                    ),
                  ),
                ),
              ],
            ),
          ),
        );
      default:
        break;
    }
    return new Container();
  }

The layout above is driven mainly by the List collection, which controls the voice call page.

2.2.6. Exit the voice

If the user exits the page or hangs up, you must call AgoraRtcEngine.leaveChannel():

  // This page is about to be destroyed
  @override
  void dispose() {
    // Clear the session set
    _userSessions.clear();
    AgoraRtcEngine.leaveChannel();
    // Release SDK resources
    AgoraRtcEngine.destroy();
    super.dispose();
  }

When a user leaves the room, the AgoraRtcEngine.onUserOffline callback fires; the documentation describes this as well:

  // Remove the corresponding user's view and remove the user session object
  void _removeRenderView(int uid) {
    // Look up the session object by uid first
    VideoUserSession videoUserSession = _getVideoUidSession(uid);

    if (videoUserSession != null) {
      _userSessions.remove(videoUserSession);
    }
  }
2.2.7. Muting

Muting is implemented via the AgoraRtcEngine.muteLocalAudioStream(muted) method:

  // Toggle sending of the local audio stream
  void _isMute() {
    setState(() {
      muted = !muted;
    });
    // true: mute the microphone; false: unmute the microphone (default)
    AgoraRtcEngine.muteLocalAudioStream(muted);
  }
2.2.8. Speakerphone

Toggling the speakerphone works the same way:

  // Whether to enable the speakerphone
  void _isSpeakPhone() {
    setState(() {
      speakPhone = !speakPhone;
    });
    AgoraRtcEngine.setEnableSpeakerphone(speakPhone);
  }

2.3. Final effect

  • In a one-to-one call, you can enter the call screen only when both parties are connected
  • When one side quits, the other should quit too

3. Video Page

The toolbar is again at the bottom. In a one-to-one video call the screen is split in two: the top half shows your own video, and the bottom half shows the other party's video. The rest of the logic is basically the same as for voice.

  • Initialize engine
  • Enabling the Video Module
  • Create a video render view
  • Setting the local View
  • Enabling Video Preview
  • Join channel
  • Setting up Event Listeners
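A rough sketch of that order for the video page (my own summary, not the article's exact code; startPreview is assumed to be available in this SDK version):

```dart
@override
void initState() {
  super.initState();
  AgoraRtcEngine.create(agore_appId); // initialize the engine
  AgoraRtcEngine.enableVideo();       // enable the video module
  // Create the local render view; uid 0 stands for the local user here
  _createDrawView(0, (viewId) {
    AgoraRtcEngine.setupLocalVideo(viewId, VideoRenderMode.Hidden); // set the local view
    AgoraRtcEngine.startPreview();    // enable the video preview
    // Join the channel once the local view is bound
    AgoraRtcEngine.joinChannel(null, widget.channelName, null, 0);
  });
  setAgoreEventListener();            // set up event listeners
}
```

Each step is covered below.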

3.1. Enable video

Enabling the video module is primarily one line of code, AgoraRtcEngine.enableVideo(); refer to the documentation:

3.2. Create a video rendering view

Create the video render view:


  // Create a render view
  void _createDrawView(int uid, Function(int viewId) successCreate) {
    // This method creates a video render view and adds a new video session object.
    // The render view can display a local or remote stream; the UI is updated here.
    // The Agora SDK renders onto the View provided by the app.
    Widget view = AgoraRtcEngine.createNativeView(uid, (viewId) {
      setState(() {
        _getVideoUidSession(uid).viewId = viewId;
        if (successCreate != null) {
          successCreate(viewId);
        }
      });
    });

    // Add a video session object for this video (keyed by uid, holding the view)
    VideoUserSession videoUserSession = VideoUserSession(uid, view: view);
    _userSessions.add(videoUserSession);
  }

Session object information is again stored in a collection, simply to make the video layout easier.

3.3. Set the local view

    // Set the local view. The app binds the local video stream to a View by
    // calling this interface, and sets the video display mode.
    // It is usually called after initialization, before joining a channel. The
    // binding remains in effect after leaving the channel; pass a null View to unbind.
    // The app can call this method multiple times to change the display mode.
    // VideoRenderMode.Hidden (1): fill the window first; the video is scaled
    // uniformly until it fills the window, and anything that overflows is cropped.
    AgoraRtcEngine.setupLocalVideo(viewId, VideoRenderMode.Hidden);

This also sets the video render mode.

3.4. Enable video preview

The preview requires that the local view has been set with setupLocalVideo and that enableVideo has been called; the local camera feed can then be previewed before joining a channel.

3.5. Join the channel

Once everything is ready, join the video room. This works exactly like joining the voice room:

    // joinChannel: the first parameter is the token, the second is the channel name,
    // the third is optional channel info (usually null), and the fourth is the user ID
    AgoraRtcEngine.joinChannel(null, widget.channelName, null, 0);

3.6. Set event listening

The biggest difference from the voice setup is the extra step of setting the remote user's video view, which binds the remote user's stream to a display window (specifying the view for the given remote uid).

// Set the event listeners
  void setAgoreEventListener() {
    // Succeeded in joining the room
    AgoraRtcEngine.onJoinChannelSuccess = (String channel,int uid,int elapsed){
      print("Joined room successfully, channel number :$channel");
    };

    // Listen for new users to join
    AgoraRtcEngine.onUserJoined = (int uid,int elapsed){
      print("New user added with id :$uid");
      setState(() {
        _createDrawView(uid, (viewId){
          // Set the remote user's video view

          AgoraRtcEngine.setupRemoteVideo(viewId, VideoRenderMode.Hidden, uid);
        });
      });

    };

    // Listen for the user to leave the room
    AgoraRtcEngine.onUserOffline = (int uid,int reason){
      print("User left with id :$uid");
      setState(() {
        _removeRenderView(uid);
      });

    };

    // Listen for the user to leave the channel
    AgoraRtcEngine.onLeaveChannel  =  (){
      print("User leave");
    };

  }

3.7. Layout implementation

Here we need to handle each user-count case separately:

// Video view layout
  Widget _videoLayout() {
    // Get the video render views
    List<Widget> views = _getRenderViews();

    switch (views.length) {
      // One user: full screen
      case 1:
        return new Container(
          child: new Column(
            children: <Widget>[_videoView(views[0])],
          ),
        );
      // Two users: stacked top and bottom
      case 2:
        return new Container(
          child: new Column(
            children: <Widget>[
              _createVideoRow([views[0]]),
              _createVideoRow([views[1]]),
            ],
          ),
        );
      // Three users
      case 3:
        return new Container(
          child: new Column(
            children: <Widget>[
              // sublist(0, 2): indexes 0 and 1 in the top row
              _createVideoRow(views.sublist(0, 2)),
              // sublist(2, 3): index 2 alone in the bottom row
              _createVideoRow(views.sublist(2, 3)),
            ],
          ),
        );
      // Four users
      case 4:
        return new Container(
          child: new Column(
            children: <Widget>[
              // sublist(0, 2): indexes 0 and 1 in the top row
              _createVideoRow(views.sublist(0, 2)),
              // sublist(2, 4): indexes 2 and 3 in the bottom row
              _createVideoRow(views.sublist(2, 4)),
            ],
          ),
        );
      default:
        break;
    }
    return new Container();
  }

At its core, the UI view is updated as users join and leave.

3.8. Final effect

4. Summary

  • Overall, development is not very difficult: following the documentation gets the common features working. More advanced features will of course take more time to research.
  • The voice and video quality is good.
  • The detailed documentation and the developer community, where developers can communicate and get feedback on any problems, are a nice plus.
  • One pitfall I hit: a Pod install error, and the app is not available in the iOS simulator.

5. References

  • Agora developer documentation
  • Build your first Flutter video calling app
  • Project Demo: github.com/KnightAndro…