To give you an interactive live-streaming experience built on the Agora SDK, we developed the Agora Live app three years ago. We recently launched an "epic" update to Agora Live: besides a redesigned UI, it adds multi-anchor live streaming, PK live streaming, and virtual anchors. More importantly, we decided to open-source the code to all developers. We hope it helps you implement the most popular real-time interactions in social entertainment faster and create more unique experiences!

GitHub URL: github.com/AgoraIO-use…

The new version of Agora Live is available for both Android and iOS. iOS users can download or update the app by searching for "Agora Live" in the App Store. Android users can get it from the download center on our website.

One App, multiple popular scenarios

The new version of Agora Live supports four of the most popular real-time interactive scenarios:

  • Single-anchor live streaming: the scenario Agora Live originally supported, with beauty filters, text messaging, background music, and more.

  • Multi-anchor live streaming: building on a single-anchor stream, the anchor can invite up to six audience members onto the stream to broadcast together.

  • PK live streaming: just like the PK battles you see in Momo, Douyin, and other apps, an anchor can send a PK invitation to another anchor; viewers in both rooms then see the two anchors interact online at the same time.

  • Virtual anchor: similar to single-anchor live streaming, except that the app generates a real-time virtual avatar for the anchor, with the avatar's facial expressions synchronized to the anchor. The audience can also be invited onto the mic during the stream.

All real-time audio and video interaction, text messages, and control commands in the app (such as an invitation onto the mic) are implemented with the Agora Native SDK and the Agora Real-time Messaging (RTM) SDK. Beauty filters and virtual avatars are implemented with the SDK of FaceUnity, an Agora ecosystem partner.

Implementation of core functions

To help you get familiar with the source code quickly, we briefly walk through its core functions below, using the Swift code as an example.

In this example, the live room, the room owner, the audience, and going on mic are all built on the Agora RTC SDK. The following code lets a user join an RTC channel and start audio and video communication.

    func join(channel: String, token: String? = nil, streamId: Int, success: Completion = nil) {
        agoraKit.join(channel: channel, token: token, streamId: streamId) { [unowned self] in
            self.channelStatus = .ing
            if let success = success {
                success()
            }
        }
    }
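One detail worth noting in the wrapper above: `Completion` in this codebase is an optional closure, which is why the default argument can be `nil` and the callback is unwrapped with `if let`. A self-contained sketch of the pattern (the stand-in `join` below is ours for illustration, not the SDK's):

```swift
import Foundation

// `Completion` is assumed to be an optional closure, matching how the
// wrapper above takes `success: Completion = nil`.
typealias Completion = (() -> Void)?

// Simplified stand-in for the join wrapper: run the success handler
// only if the caller supplied one.
func join(channel: String, success: Completion = nil) {
    // ... real code would join the RTC channel here ...
    if let success = success {
        success()
    }
}

var joined = false
join(channel: "demo") { joined = true }
join(channel: "demo")            // omitting the handler is also valid
print(joined) // prints "true"
```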

Text messages and control commands in the live room (such as inviting an audience member onto the mic) are all implemented with the Agora Real-time Messaging (RTM) SDK. Here we integrate the RTM SDK and let users join an RTM channel with the following code.

    func joinChannel(_ id: String, delegate: AgoraRtmChannelDelegate, success: Completion, fail: ErrorCompletion) {
        do {
            let channel = try createChannel(id: id, delegate: delegate)
            channel.join { (errorCode) in
                switch errorCode {
                case .channelErrorOk:
                    self.log(info: "rtm join channel success", extra: "channel id: \(id)")
                    if let success = success {
                        success()
                    }
                default:
                    let error = AGEError.rtm("join channel fail",
                                             code: errorCode.rawValue,
                                             extra: "channel: \(id)")
                    
                    self.log(error: error)
                    if let fail = fail {
                        fail(error)
                    }
                }
            }
        } catch {
            log(error: error, extra: "create channel fail")
            if let fail = fail {
                fail(error)
            }
        }
    }

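Control commands like invite-to-mic travel over RTM as text messages, so the app serializes them into a structured payload before sending. A minimal, self-contained sketch of the idea; the `ControlCommand` schema below is an assumption for illustration, not the repository's actual format:

```swift
import Foundation

// Hypothetical command payload; the real schema in the repository
// may differ.
struct ControlCommand: Codable, Equatable {
    let type: String          // e.g. "inviteToMic"
    let targetUserId: String
}

let invite = ControlCommand(type: "inviteToMic", targetUserId: "12345")

// Encode to a JSON string suitable for an RTM text message.
let payload = try! JSONEncoder().encode(invite)
let text = String(data: payload, encoding: .utf8)!

// The receiver decodes the text back into a command and dispatches it.
let received = try! JSONDecoder().decode(ControlCommand.self,
                                         from: Data(text.utf8))
print(received.type) // prints "inviteToMic"
```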

Beauty filters and avatars are implemented through the FaceUnity service. You can read the FUClient implementation alongside the FaceUnity documentation to integrate the beauty module.

typedef void (^FUCompletion)(void);
typedef void (^FUErrorCompletion)(NSError *error);

typedef NS_ENUM(NSUInteger, FUFilterItemType) {
    FUFilterItemTypeSmooth      = 1,
    FUFilterItemTypeBrighten    = 2,
    FUFilterItemTypeThinning    = 3,
    FUFilterItemTypeEye         = 4
};

@interface FUFilterItem : NSObject
@property (nonatomic, assign) FUFilterItemType type;
@property (nonatomic, assign) float defaultValue;
@property (nonatomic, assign) float minValue;
@property (nonatomic, assign) float maxValue;
@property (nonatomic, assign) float value;
@property (nonatomic, copy) NSString *funcName;
@end

@interface FUClient : NSObject
- (void)loadFilterWithSuccess:(FUCompletion)success fail:(FUErrorCompletion)fail;
- (void)setFilterValue:(float)value withType:(FUFilterItemType)type;
- (FUFilterItem *)getFilterItemWithType:(FUFilterItemType)type;

- (void)loadBackgroudWithSuccess:(FUCompletion)success fail:(FUErrorCompletion)fail;
- (void)loadAnimoji:(NSString *)name success:(FUCompletion)success fail:(FUErrorCompletion)fail;
- (void)renderItemsToPixelBuffer:(CVPixelBufferRef)pixelBuffer;
- (void)destoryAllItems;
@end
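Each `FUFilterItem` carries a `minValue`/`maxValue`/`defaultValue` range, so it makes sense to clamp user input into that range before calling `setFilterValue:withType:`. A minimal Swift sketch of that step (the struct and helper names below are ours, not FaceUnity's):

```swift
import Foundation

// Mirrors the value range exposed by FUFilterItem above.
struct FilterRange {
    let minValue: Float
    let maxValue: Float
    let defaultValue: Float
}

// Clamp a slider value into the item's legal range before handing it
// to the FaceUnity client. Helper name is illustrative.
func clampedFilterValue(_ value: Float, in range: FilterRange) -> Float {
    return min(max(value, range.minValue), range.maxValue)
}

let smooth = FilterRange(minValue: 0.0, maxValue: 1.0, defaultValue: 0.5)
print(clampedFilterValue(1.4, in: smooth))  // prints "1.0"
print(clampedFilterValue(0.3, in: smooth))  // prints "0.3"
```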

Video frames flow out of AVCaptureSession, into FaceUnity for pre-processing, and then into the Agora RTC SDK to be sent to the remote end.

    func camera(_ camera: AGESingleCamera, position: AGECamera.Position, didOutput sampleBuffer: CMSampleBuffer) {
        cameraStreamQueue.async { [unowned self] in
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                return
            }
            
            CVPixelBufferLockBaseAddress(pixelBuffer, .init(rawValue: 0))
            
            let timeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            
            if self.enhancement.beauty == .on || self.enhancement.appearance != .none {
                self.enhancement.renderItems(to: pixelBuffer)
            }
            
            self.consumer?.consumePixelBuffer(pixelBuffer,
                                              withTimestamp: timeStamp,
                                              rotation: .rotationNone)
            
            CVPixelBufferUnlockBaseAddress(pixelBuffer, .init(rawValue: 0))
        }
    }

Open source project and future plans

We have open-sourced Agora Live under the official GitHub organization "AgoraIO Usecase". You can fork it directly, register on the Agora website, get an App ID from the console, and replace the App ID in the source code to run it. This makes it easy to quickly build all four real-time interactive scenarios: multi-anchor live streaming, single-anchor live streaming, PK live streaming, and virtual anchor.
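For reference, the App ID swap typically amounts to editing a single constant; the struct and constant names below are illustrative, not necessarily what the repository uses:

```swift
// Illustrative KeyCenter-style constant; the repository's actual file
// and name may differ. Paste the App ID from the Agora console here.
struct KeyCenter {
    static let appId: String = "YOUR_AGORA_APP_ID"
}
```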

GitHub URL: github.com/AgoraIO-use…

If you build more features on top of this source code, you are welcome to submit a PR. We will also recommend your project to more developers in the community.

Notes

🌟 This open-source release is based on the existing app and includes only the Java and Swift source code. If you need an Objective-C or Kotlin version, you will need to port it yourself.

😉 Of course, if you have built a version in another language based on this source code and would like to share it with more people, you can submit it to us; we will be happy to help promote your repo.