Technical solutions for smart classrooms and conference rooms generally center on screen capture and push, and RTMP is the generally recommended overall approach. That said, some developers have mentioned RTSP-based solutions on the market, even RTSP multicast; the Daniu live SDK Github has published a comparison of these options. In general, RTMP remains the most reliable solution for a 60-person smart classroom or similar screen-sharing scenario.

Some say RTMP latency is high. That claim is one-sided: much of the latency comes from the push and pull modules themselves, not the protocol (with a server such as NGINX or SRS, the forwarding delay introduced by the server is essentially negligible and not the bottleneck). Based on our official tests and real-world deployments, an overall RTMP solution can keep end-to-end latency within one second, often at the millisecond level.

The overall design scheme is as follows:

Matters needing attention

1. Networking: for wireless networking, a capable AP module is required to support high concurrent traffic; the push side, rather than going through the AP, should preferably be on a wired network;

2. Server deployment: on Windows, consider NGINX; on Linux, consider SRS or NGINX. The server can be deployed on the same Windows machine as the teacher's machine;

3. Teacher side: if the teacher uses a mobile pad, its screen can be pushed directly to the RTMP server and shared from there;

4. Student side: pull the RTMP stream directly and play it;

5. Teacher-student interaction: if a student needs to share his or her screen with the class as a demonstration, the student simply pushes the same screen stream back to the RTMP server for the other students to view (see the URL sketch after this list);

6. Extended monitoring: if a further capability is needed, for example the teacher side monitoring student screens, there are two options: the student side pushes RTMP directly, or the student side starts a built-in RTSP service that the teacher side can watch whenever desired (or poll and play).
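
As a minimal sketch of how the publish and play URLs above might be laid out (the server address, application name, and stream names here are assumptions for illustration, not fixed by the SDK or the servers):

    // Hypothetical RTMP URL layout for one classroom; all names are illustrative.
    public final class ClassroomStreams {
        // Assumed server address and NGINX/SRS RTMP application name.
        private static final String BASE = "rtmp://192.168.0.10:1935/live";

        // The teacher publishes to this URL; every student player pulls the same URL.
        public static String teacherScreen() {
            return BASE + "/teacher_screen";
        }

        // A demonstrating student publishes here; classmates pull this URL.
        public static String studentScreen(String studentId) {
            return BASE + "/student_" + studentId;
        }
    }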

The following sections describe the related configuration options by platform.

Windows RTMP push terminal

Corresponding DEMO: SmartPublisherDemo.exe

1. To capture only part of the screen, click the “Select screen area” button and choose the region to capture; the region can be moved while capture and push are in progress;

2. For a high-resolution screen (for example, a 4K capture device whose native resolution is higher than you want to push), select “Zoom screen size” and specify the scale ratio; the image is scaled first, then encoded and pushed;

3. Set the capture frame rate: for PPT/Word documents, 8-12 fps is generally enough; for movie playback, 20-30 fps. The keyframe interval is generally set to 2-4 times the frame rate, and average bitrate mode is recommended for screen pushing (a worked example follows this list);

4. To capture the sound output by the computer, select Speaker Collection; to capture audio from an external microphone, select Microphone Collection and choose the corresponding capture device;

5. Set the RTMP URL to push to and click “Push”;

6. To preview the pushed data, click “Preview”; to stop previewing, click “Stop preview”.
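
As a worked example of the frame-rate guideline in step 3 (this helper is illustrative only, not part of the SDK):

    // Rule of thumb from step 3: keyframe (GOP) interval = 2-4 times the frame rate.
    public final class EncoderSettings {
        public static int keyFrameInterval(int fps, int multiplier) {
            if (multiplier < 2 || multiplier > 4) {
                throw new IllegalArgumentException("multiplier should be 2..4 per the guideline");
            }
            return fps * multiplier; // GOP length, in frames
        }

        public static void main(String[] args) {
            // Document sharing at 10 fps: a keyframe every 20-40 frames, i.e. every 2-4 seconds.
            System.out.println(keyFrameInterval(10, 2)); // 20
            // Movie playback at 25 fps: a keyframe every 50-100 frames.
            System.out.println(keyFrameInterval(25, 4)); // 100
        }
    }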

Android platform RTMP screen push terminal

Project: SmartServicePublisherV2

Matters needing attention:

1. Devices running Android 8.0 or later need the app added to the battery optimization whitelist, and devices running 6.0 or later need the audio permission requested at runtime; the specific code is as follows:

        // Add the app to the battery optimization whitelist, so that pushing in the
        // background on Android 8.0+ is not stopped automatically after about a minute
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O)
        {
            if (!isIgnoringBatteryOptimizations())
            {
                gotoSettingIgnoringBatteryOptimizations();
            }
        }

        // Android 6.0 or later: request the audio permission at runtime
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M)
        {
            RequestAudioPermission();
        }


    // Bring up the system dialog that asks the user to add the app to the
    // battery optimization whitelist
    private void gotoSettingIgnoringBatteryOptimizations() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
            try {
                Intent intent = new Intent();
                String packageName = getPackageName();
                intent.setAction(Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS);
                intent.setData(Uri.parse("package:" + packageName));
                startActivityForResult(intent, REQUEST_IGNORE_BATTERY_CODE);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Request the RECORD_AUDIO permission at runtime
    private void RequestAudioPermission()
    {
        if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(
                this.getApplicationContext(), android.Manifest.permission.RECORD_AUDIO))
        {
            // Prompt the user to grant the audio permission
            String[] perms = {android.Manifest.permission.RECORD_AUDIO};
            ActivityCompat.requestPermissions(this, perms, RESULT_CODE_STARTAUDIO);
        }
    }
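
For completeness, a minimal sketch of handling the permission result (onRequestPermissionsResult() is the standard Activity callback; how denial is handled here is an assumption):

    // Standard Activity callback invoked after requestPermissions();
    // the denial handling below is illustrative.
    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == RESULT_CODE_STARTAUDIO) {
            if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                // Audio permission granted; audio can be captured when pushing
            } else {
                // Permission denied; push video only, or prompt the user again
            }
        }
    }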

2. A continuous frame-filling strategy prevents the stream from carrying no data when the screen content stops changing (a sketch of the idea follows this list);

3. If only a cropped region of the screen needs to be passed down to the native layer, the SmartPublisherOnCaptureVideoClipedRGBAData() interface can be used;

4. For landscape/portrait switching, the upper layer does not need to intervene; the native layer adapts automatically.
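
A minimal sketch of the frame-filling strategy from note 2 (the postFrame() hook and the interval are assumptions, not the SDK's actual interface): whenever no fresh frame has arrived for a while, the last captured frame is re-delivered so the encoder and the downstream players keep receiving data.

    // Illustrative frame-filling loop: if the screen is static and no new frame
    // arrives within FILL_INTERVAL_MS, repost the last frame.
    private static final long FILL_INTERVAL_MS = 300;
    private byte[] lastFrame;
    private long lastFrameTimeMs;
    private final android.os.Handler handler =
            new android.os.Handler(android.os.Looper.getMainLooper());

    private final Runnable fillRunnable = new Runnable() {
        @Override
        public void run() {
            long now = android.os.SystemClock.elapsedRealtime();
            if (lastFrame != null && now - lastFrameTimeMs >= FILL_INTERVAL_MS) {
                postFrame(lastFrame); // re-deliver the previous frame
            }
            handler.postDelayed(this, FILL_INTERVAL_MS);
        }
    };

    // Called whenever screen capture produces a fresh frame
    private void onNewFrame(byte[] frame) {
        lastFrame = frame;
        lastFrameTimeMs = android.os.SystemClock.elapsedRealtime();
        postFrame(frame);
    }

    // Hypothetical hook standing in for whatever hands frames to the encoder
    private void postFrame(byte[] frame) {
        // encode and push
    }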

iOS RTMP screen push terminal

Corresponding project: SmartServiceCameraPublisherV2

Note: ReplayKit2 broadcast extensions currently have a memory limit of about 50 MB for live streams; if the limit is exceeded, the system kills the extension process directly. It is therefore recommended that the resolution, frame rate, and bitrate of ReplayKit2 streams not be set too high.

Here is the core processSampleBuffer() handling, available on iOS 11.0 and above:

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer
                   withType:(RPSampleBufferType)sampleBufferType {
    
    // ReplayKit extensions are killed past the memory cap, so drop frames once
    // usage exceeds a conservative threshold well below the 50 MB limit
    CGFloat cur_memory = [self GetCurUsedMemoryInMB];
    
    if (cur_memory > 20.0f)
    {
        //NSLog(@"processSampleBuffer cur: %.2fM", cur_memory);
        return;
    }
        
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo:
            {
                if (!CMSampleBufferIsValid(sampleBuffer))
                    return;
                
                NSInteger rotation_degress = 0;
                // Automatic rotation is supported above 11.1
    #ifdef __IPHONE_11_1
                if (UIDevice.currentDevice.systemVersion.floatValue > 11.1) {
                    CGImagePropertyOrientation orientation = ((__bridge NSNumber*)CMGetAttachment(sampleBuffer, (__bridge CFStringRef)RPVideoSampleOrientationKey , NULL)).unsignedIntValue;
                    
                    //NSLog(@"cur org: %d", orientation);
                    
                    switch (orientation)
                    {
                        // Portrait
                        case kCGImagePropertyOrientationUp: {
                            rotation_degress = 0;
                            break;
                        }
                        // Upside down, rotate 180 degrees
                        case kCGImagePropertyOrientationDown: {
                            rotation_degress = 180;
                            break;
                        }
                        // Landscape, rotate 90 degrees
                        case kCGImagePropertyOrientationLeft: {
                            rotation_degress = 90;
                            break;
                        }
                        // Landscape, rotate 270 degrees
                        case kCGImagePropertyOrientationRight: {
                            rotation_degress = 270;
                            break;
                        }
                        default:
                            break;
                    }
                }
    #endif
                
                //NSLog(@"RPSampleBufferTypeVideo");
                if(_smart_publisher_sdk)
                {
                    //[_smart_publisher_sdk SmartPublisherPostVideoSampleBuffer:sampleBuffer];
                    [_smart_publisher_sdk SmartPublisherPostVideoSampleBufferV2:sampleBuffer rotateDegress:rotation_degress];
                }
                
                //NSLog(@"video ts:%.2f", CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)));
            }
            break;
        case RPSampleBufferTypeAudioApp:
            //NSLog(@"RPSampleBufferTypeAudioApp");
            if (CMSampleBufferDataIsReady(sampleBuffer) != NO)
            {
                if (_smart_publisher_sdk)
                {
                    NSInteger type = 2;
                    [_smart_publisher_sdk SmartPublisherPostAudioSampleBuffer:sampleBuffer inputType:type];
                }
            }
            //NSLog(@"App ts:%.2f", CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)));
            
            break;
        case RPSampleBufferTypeAudioMic:
            //NSLog(@"RPSampleBufferTypeAudioMic");
            if(_smart_publisher_sdk)
            {
                NSInteger type = 1;
                [_smart_publisher_sdk SmartPublisherPostAudioSampleBuffer:sampleBuffer inputType:type];
            }
            //NSLog(@"Mic ts:%.2f", CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)));
            
            break;
        default:
            break;
    }
}