Screen recording on iOS used to be a challenge, but with the official release of ReplayKit it has become much easier.

At the business level, it enables scenarios such as game live streaming, screen sharing, and remote assistance.

Currently, there are a large number of relevant apps in the App Store, mainly divided into the following two categories:

  1. Remote screen broadcasting apps
  2. Local screen recording apps

In our implementation, ReplayKit 2 runs the recording in an Extension child process, but the system imposes a 50 MB memory limit on it; once the extension exceeds 50 MB, the screen-recording process crashes.

Because of this limitation, comparable solutions in the industry cap their video quality at 720p or 30 fps. Tencent's live-streaming SDK is one example.


Solving this problem means dancing in chains: doing everything within the 50 MB limit.

Let’s start by looking at what the child process does:

@implementation SampleHandler

- (void)broadcastStartedWithSetupInfo:(NSDictionary<NSString *, NSObject *> *)setupInfo {
    // User has requested to start the broadcast. Setup info from the UI extension can be supplied but optional.
}

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo:
            // Handle video sample buffer
            break;
        case RPSampleBufferTypeAudioApp:
            // Handle audio sample buffer for app audio
            break;
        case RPSampleBufferTypeAudioMic:
            // Handle audio sample buffer for mic audio
            break;
        default:
            break;
    }
}

@end

There are only two important functions:

  1. broadcastStartedWithSetupInfo:(NSDictionary<NSString *, NSObject *> *)setupInfo

    Called when the child process (the broadcast extension) starts the broadcast

  2. processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType

    Video/audio data callback

As you can see, the callback delivers data as CMSampleBufferRef, which by itself takes almost no memory.

But once we convert it to bitmap data, and especially directly to a binary stream, memory use rises sharply.
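To get a sense of the scale (the 1242 × 2688 full-screen resolution below is only an illustrative assumption, not a figure measured in this project): a single BGRA bitmap at that size takes 1242 × 2688 × 4 bytes ≈ 12.7 MB, while the same frame in I420 (YUV 4:2:0) takes 1242 × 2688 × 1.5 bytes ≈ 4.8 MB. Just a few bitmap copies in flight are enough to approach the extension's 50 MB ceiling.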


So, to keep memory consumption low, the idea is to send the data from the child process to the main process and let the main process handle the image processing and other operations.

This brings us to **inter-process communication**.

The child process and the main process can communicate in the following ways:

  1. CFMachPort

    It is no longer available after iOS 7.

  2. CFNotificationCenterRef

    It can only send simple string data.

    Sending complex data requires elaborate data assembly on top of it. The third-party wrapper library MMWormhole takes this approach: it archives the data to a file, sends a file identifier between processes, and reads the file back at the receiving end. Efficiency is low.

  3. Local Socket

    Establish a local socket between the processes and communicate over TCP.

    Flexible to use and efficient.

    We use GCDAsyncSocket, which can transfer NSData streams directly.


For the inter-process transport, we ultimately chose the local socket approach.
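A minimal sketch of this setup follows. It is not the project's actual code: the class name HostSocketServer, the use of the loopback interface, and port 8999 are assumptions, and the CocoaAsyncSocket library must be linked into both the app and the extension.

#import <CocoaAsyncSocket/GCDAsyncSocket.h>

// Main-app side: listen for the extension's connection.
@interface HostSocketServer : NSObject <GCDAsyncSocketDelegate>
@property (nonatomic, strong) GCDAsyncSocket *listenSocket;
@property (nonatomic, strong) GCDAsyncSocket *clientSocket; // the extension's connection
@end

@implementation HostSocketServer

- (BOOL)start {
    self.listenSocket = [[GCDAsyncSocket alloc] initWithDelegate:self
                                                   delegateQueue:dispatch_get_main_queue()];
    NSError *error = nil;
    // Both processes run on the same device, so listen on the loopback interface only.
    return [self.listenSocket acceptOnInterface:@"127.0.0.1" port:8999 error:&error];
}

- (void)socket:(GCDAsyncSocket *)sock didAcceptNewSocket:(GCDAsyncSocket *)newSocket {
    self.clientSocket = newSocket;              // keep a strong reference
    [newSocket readDataWithTimeout:-1 tag:0];   // start reading frame data
}

- (void)socket:(GCDAsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag {
    // Reassemble and decode the received frame data here, then keep reading.
    [sock readDataWithTimeout:-1 tag:0];
}

@end

// Extension side: connect to the host app and write NSData directly.
GCDAsyncSocket *socket = [[GCDAsyncSocket alloc] initWithDelegate:nil
                                                    delegateQueue:dispatch_get_main_queue()];
NSError *connectError = nil;
[socket connectToHost:@"127.0.0.1" onPort:8999 error:&connectError];
[socket writeData:frameData withTimeout:5 tag:0]; // frameData: the bytes to send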

Next we need to think about how to assemble the data.

As you can see from the system API, the data type provided by the system in the callback function is CMSampleBufferRef.

Each video frame arrives as a CMSampleBuffer, a data structure that stores media samples and their attributes. It is made up of the following parts:

CMTime: a 64-bit value with a 32-bit timescale, representing media time

CMVideoFormatDesc: the video format, including width and height, color space, encoding format, SPS and PPS

CVPixelBuffer: uncompressed pixel data, with pixel format, width and height

CMBlockBuffer: compressed image data

CMSampleBuffer: a container for one or more compressed or uncompressed media samples
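For reference, these components can be pulled out of a sample buffer with standard Core Media calls (a generic snippet using the sampleBuffer from the callback above, not code from this project):

#import <CoreMedia/CoreMedia.h>

CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);       // uncompressed pixels (CVPixelBuffer)
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);               // frame timestamp (CMTime)
CMFormatDescriptionRef desc = CMSampleBufferGetFormatDescription(sampleBuffer);  // format info: dimensions, codec
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);        // compressed data, NULL for uncompressed frames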

It would be ideal to send the CMSampleBufferRef to the main process directly, but there is currently no way to serialize it for inter-process transfer without decoding it first.

Therefore, the next problem to solve is how to decode efficiently and with a light memory footprint.

First, converting directly to bitmaps is not feasible, because at large screen resolutions each frame consumes a lot of memory.

Therefore, we need an intermediate data structure for transmission, which needs to meet the following conditions:

  1. It can carry the image information from CMSampleBufferRef while being lighter than the bitmap data itself
  2. After crossing from the child process to the main process, it can be restored to image information and supports rotation, cropping, compression, and other operations

Of course, we have many decoding options: hardware or software decoding, YUV or RGB.

But either way, we need to decode it first.

I wondered whether I could compress the image directly on the CMSampleBufferRef itself, but I gave up on that idea.

Based on the above, we ultimately followed the approach NetEase Yunxin uses for screen sharing: YUV decoding, with its NTESI420Frame intermediate data structure carrying the contents of the CMSampleBufferRef, much like a carrier signal.
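For readers unfamiliar with NTESI420Frame, the interface below is reconstructed from how the class is used in the code that follows; the exact property types are guesses, not NetEase's actual header.

typedef NS_ENUM(NSInteger, NTESI420FramePlane) {
    NTESI420FramePlaneY = 0,
    NTESI420FramePlaneU = 1,
    NTESI420FramePlaneV = 2,
};

@interface NTESI420Frame : NSObject

@property (nonatomic, readonly) int width;
@property (nonatomic, readonly) int height;
@property (nonatomic, readonly) UInt64 timetag;
@property (nonatomic, readonly) NSUInteger i420DataLength; // total Y + U + V byte count
@property (nonatomic, readonly) UInt8 *data;               // contiguous I420 buffer (Y, then U, then V)

- (instancetype)initWithWidth:(int)width height:(int)height;
- (UInt8 *)dataOfPlane:(NTESI420FramePlane)plane;          // pointer to that plane inside the buffer
- (NSUInteger)strideOfPlane:(NTESI420FramePlane)plane;     // bytes per row of that plane

@end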

Its conversion source code is as follows:

+ (NTESI420Frame *)pixelBufferToI420:(CVImageBufferRef)pixelBuffer
                            withCrop:(float)cropRatio
                          targetSize:(CGSize)size
                      andOrientation:(NTESVideoPackOrientation)orientation
{
    if (pixelBuffer == NULL) {
        return nil;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);

    size_t bufferWidth = 0;
    size_t bufferHeight = 0;
    size_t rowSize = 0;
    uint8_t *pixel = NULL;

    if (CVPixelBufferIsPlanar(pixelBuffer)) {
        int basePlane = 0;
        pixel = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, basePlane);
        bufferHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, basePlane);
        bufferWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, basePlane);
        rowSize = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, basePlane);
    } else {
        pixel = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
        bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
        bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
        rowSize = CVPixelBufferGetBytesPerRow(pixelBuffer);
    }

    NTESI420Frame *convertedI420Frame = [[NTESI420Frame alloc] initWithWidth:(int)bufferWidth height:(int)bufferHeight];

    int error = -1;

    if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
        error = libyuv::ARGBToI420(
            pixel, (int)rowSize,
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneY],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneU],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneV],
            (int)bufferWidth, (int)bufferHeight);
    } else if (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange == sourcePixelFormat || kCVPixelFormatType_420YpCbCr8BiPlanarFullRange == sourcePixelFormat) {
        error = libyuv::NV12ToI420(
            pixel,
            (int)rowSize,
            (const uint8 *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
            (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1),
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneY],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneY],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneU],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneU],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneV],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneV],
            (int)bufferWidth,
            (int)bufferHeight);
    }

    if (error) {
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        NSLog(@"error convert pixel buffer to i420 with error %d", error);
        return nil;
    } else {
        rowSize = [convertedI420Frame strideOfPlane:NTESI420FramePlaneY];
        pixel = convertedI420Frame.data;
    }

    CMVideoDimensions inputDimens = { (int32_t)bufferWidth, (int32_t)bufferHeight };
    CMVideoDimensions outputDimens = [NTESVideoUtil outputVideoDimensEnhanced:inputDimens crop:cropRatio];
//        CMVideoDimensions outputDimens = {(int32_t)738,(int32_t)1312};
    CMVideoDimensions sizeDimens = { (int32_t)size.width, (int32_t)size.height };
    CMVideoDimensions targetDimens = [NTESVideoUtil outputVideoDimensEnhanced:sizeDimens crop:cropRatio];
    int cropX = (inputDimens.width - outputDimens.width) / 2;
    int cropY = (inputDimens.height - outputDimens.height) / 2;

    if (cropX % 2) {
        cropX += 1;
    }

    if (cropY % 2) {
        cropY += 1;
    }
    float scale = targetDimens.width * 1.0 / outputDimens.width;

    NTESI420Frame *croppedI420Frame = [[NTESI420Frame alloc] initWithWidth:outputDimens.width height:outputDimens.height];

    error = libyuv::ConvertToI420(pixel, bufferHeight * rowSize * 1.5,
                                  [croppedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneY],
                                  [croppedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneU],
                                  [croppedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneV],
                                  cropX, cropY,
                                  (int)bufferWidth, (int)bufferHeight,
                                  croppedI420Frame.width, croppedI420Frame.height,
                                  libyuv::kRotate0, libyuv::FOURCC_I420);

    if (error) {
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        NSLog(@"error convert pixel buffer to i420 with error %d", error);
        return nil;
    }

    NTESI420Frame *i420Frame;

    if (scale == 1.0) {
        i420Frame = croppedI420Frame;
    } else {
        int width = outputDimens.width * scale;
        width &= 0xFFFFFFFE;
        int height = outputDimens.height * scale;
        height &= 0xFFFFFFFE;

        i420Frame = [[NTESI420Frame alloc] initWithWidth:width height:height];

        libyuv::I420Scale([croppedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneY],
                          [croppedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneU],
                          [croppedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[croppedI420Frame strideOfPlane:NTESI420FramePlaneV],
                          croppedI420Frame.width, croppedI420Frame.height,
                          [i420Frame dataOfPlane:NTESI420FramePlaneY], (int)[i420Frame strideOfPlane:NTESI420FramePlaneY],
                          [i420Frame dataOfPlane:NTESI420FramePlaneU], (int)[i420Frame strideOfPlane:NTESI420FramePlaneU],
                          [i420Frame dataOfPlane:NTESI420FramePlaneV], (int)[i420Frame strideOfPlane:NTESI420FramePlaneV],
                          i420Frame.width, i420Frame.height,
                          libyuv::kFilterBilinear);
    }

    int dstWidth, dstHeight;
    libyuv::RotationModeEnum rotateMode = [NTESYUVConverter rotateMode:orientation];

    if (rotateMode != libyuv::kRotateNone) {
        if (rotateMode == libyuv::kRotate270 || rotateMode == libyuv::kRotate90) {
            dstWidth = i420Frame.height;
            dstHeight = i420Frame.width;
        } else {
            dstWidth = i420Frame.width;
            dstHeight = i420Frame.height;
        }
        NTESI420Frame *rotatedI420Frame = [[NTESI420Frame alloc]initWithWidth:dstWidth height:dstHeight];

        libyuv::I420Rotate([i420Frame dataOfPlane:NTESI420FramePlaneY], (int)[i420Frame strideOfPlane:NTESI420FramePlaneY],
                           [i420Frame dataOfPlane:NTESI420FramePlaneU], (int)[i420Frame strideOfPlane:NTESI420FramePlaneU],
                           [i420Frame dataOfPlane:NTESI420FramePlaneV], (int)[i420Frame strideOfPlane:NTESI420FramePlaneV],
                           [rotatedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[rotatedI420Frame strideOfPlane:NTESI420FramePlaneY],
                           [rotatedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[rotatedI420Frame strideOfPlane:NTESI420FramePlaneU],
                           [rotatedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[rotatedI420Frame strideOfPlane:NTESI420FramePlaneV],
                           i420Frame.width, i420Frame.height,
                           rotateMode);
        i420Frame = rotatedI420Frame;
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return i420Frame;
}


In this function, the original image data is first decoded to YUV; after decoding, it is cropped, scaled, and rotated.

As the amount of code suggests, much of this function is redundant for us. Our goal is to do as little processing as possible in the child process and minimize its memory usage, so we keep only the decoding part and strip out the rest:

+ (NTESI420Frame *)pixelBufferToI420:(CVImageBufferRef)pixelBuffer {
    if (pixelBuffer == NULL) {
        return nil;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    OSType sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);

    size_t bufferWidth = 0;
    size_t bufferHeight = 0;
    size_t rowSize = 0;
    uint8_t *pixel = NULL;

    if (CVPixelBufferIsPlanar(pixelBuffer)) {
        int basePlane = 0;
        pixel = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, basePlane);
        bufferHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, basePlane);
        bufferWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, basePlane);
        rowSize = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, basePlane);
    } else {
        pixel = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
        bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
        bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
        rowSize = CVPixelBufferGetBytesPerRow(pixelBuffer);
    }
    NTESI420Frame *convertedI420Frame = [[NTESI420Frame alloc] initWithWidth:(int)bufferWidth height:(int)bufferHeight];

    int error = -1;
    if (kCVPixelFormatType_32BGRA == sourcePixelFormat) {
        error = libyuv::ARGBToI420(
            pixel, (int)rowSize,
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneY], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneY],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneU], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneU],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneV], (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneV],
            (int)bufferWidth, (int)bufferHeight);
    } else if (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange == sourcePixelFormat || kCVPixelFormatType_420YpCbCr8BiPlanarFullRange == sourcePixelFormat) {
        error = libyuv::NV12ToI420(
            pixel,
            (int)rowSize,
            (const uint8 *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
            (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1),
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneY],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneY],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneU],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneU],
            [convertedI420Frame dataOfPlane:NTESI420FramePlaneV],
            (int)[convertedI420Frame strideOfPlane:NTESI420FramePlaneV],
            (int)bufferWidth,
            (int)bufferHeight);
    }

    if (error) {
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        NSLog(@"error convert pixel buffer to i420 with error %d", error);
        return nil;
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return convertedI420Frame;
}


Now that we have the intermediate data carrier, we need to think about how to transfer it.

Before Socket communication, we need to convert the data structure above into binary. NetEase's source code is as follows:

//NTESI420Frame.m

- (NSData *)bytes {
    int structSize = sizeof(self.width) + sizeof(self.height) + sizeof(self.i420DataLength) + sizeof(self.timetag);

    void *buffer = malloc(structSize + self.i420DataLength);

    memset(buffer, 0, structSize + self.i420DataLength);
    int offset = 0;

    memcpy(buffer + offset, &_width, sizeof(_width));
    offset += sizeof(_width);

    memcpy(buffer + offset, &_height, sizeof(_height));
    offset += sizeof(_height);

    memcpy(buffer + offset, &_i420DataLength, sizeof(_i420DataLength));
    offset += sizeof(_i420DataLength);

    memcpy(buffer + offset, &_timetag, sizeof(_timetag));
    offset += sizeof(_timetag);

    memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneY], [self strideOfPlane:NTESI420FramePlaneY] * self.height);
    offset += [self strideOfPlane:NTESI420FramePlaneY] * self.height;

    memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneU], [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2);
    offset += [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2;

    memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneV], [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2);
    offset += [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2;
    NSData *data = [NSData dataWithBytes:buffer length:offset];
    free(buffer);
    return data;
}


The function itself is fine: it packs all the information in the data structure into a single NSData binary stream, which is then sent as one Socket frame.
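In other words, the packed frame has the following layout (field widths depend on the property types and are shown only schematically):

// | width | height | i420DataLength | timetag |   Y plane    |    U plane     |    V plane     |
// |<---------- fixed-size header ------------>|<- stride*h ->|<- stride*h/2 ->|<- stride*h/2 ->|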

But don't forget that all of these operations happen in the child process. On devices with large, high-resolution screens, once the image content becomes rich, the temporary buffers the CPU allocates and releases can easily blow up memory, exceed 50 MB, and crash the recording process.

Just like a river: if there is too much water, the flow is too slow, or the channel is too narrow, the levee will break.

Therefore, we can work on the following three aspects:

  1. Reduce the water flow

    A. Use NTESI420Frame to carry the image information instead of a bitmap binary stream

    B. Reduce the use of temporary variables

    C. Split the data: break large chunks into smaller pieces for processing

  2. Make the flow faster

    A. Speed up processing in the child process. This builds on "reducing the flow": the smaller the data, the faster it is processed

    B. Speed up transmission between processes: use local sockets instead of CFNotificationCenterRef

    C. Process data across multiple tasks

    D. Transmit data over multiple channels

  3. Widen the channel

    Since the system limit is fixed at 50 MB, there is nothing we can do about this one.

Based on the above, we optimized the bytes method of NTESI420Frame as follows:

- (void)getBytesQueue:(void (^)(NSData *data, NSInteger index))complete {
    int offset = 0;
    {
        // 1. Header: width, height, data length, timestamp.
        int structSize = sizeof(self.width) + sizeof(self.height) + sizeof(self.i420DataLength) + sizeof(self.timetag);
        void *buffer = malloc(structSize + self.i420DataLength);
        memset(buffer, 0, structSize + self.i420DataLength);
        memcpy(buffer + offset, &_width, sizeof(_width));
        offset += sizeof(_width);
        memcpy(buffer + offset, &_height, sizeof(_height));
        offset += sizeof(_height);
        memcpy(buffer + offset, &_i420DataLength, sizeof(_i420DataLength));
        offset += sizeof(_i420DataLength);
        memcpy(buffer + offset, &_timetag, sizeof(_timetag));
        offset += sizeof(_timetag);
        NSData *data = [NSData dataWithBytes:buffer length:offset];
        if (complete) {
            complete(data, 0);
        }
        free(buffer);
        data = NULL;
    }
    {
        // 2. Y plane.
        void *buffer = malloc([self strideOfPlane:NTESI420FramePlaneY] * self.height);
        offset = 0;
        memset(buffer, 0, [self strideOfPlane:NTESI420FramePlaneY] * self.height);
        memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneY], [self strideOfPlane:NTESI420FramePlaneY] * self.height);
        offset += [self strideOfPlane:NTESI420FramePlaneY] * self.height;
        NSData *data = [NSData dataWithBytes:buffer length:offset];
        if (complete) {
            complete(data, 0);
        }
        free(buffer);
        data = NULL;
    }
    {
        // 3. U plane.
        void *buffer = malloc([self strideOfPlane:NTESI420FramePlaneU] * self.height / 2);
        offset = 0;
        memset(buffer, 0, [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2);
        memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneU], [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2);
        offset += [self strideOfPlane:NTESI420FramePlaneU] * self.height / 2;
        NSData *data = [NSData dataWithBytes:buffer length:offset];
        if (complete) {
            complete(data, 1);
        }
        free(buffer);
        data = NULL;
    }
    {
        // 4. V plane.
        void *buffer = malloc([self strideOfPlane:NTESI420FramePlaneV] * self.height / 2);
        offset = 0;
        memset(buffer, 0, [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2);
        memcpy(buffer + offset, [self dataOfPlane:NTESI420FramePlaneV], [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2);
        offset += [self strideOfPlane:NTESI420FramePlaneV] * self.height / 2;
        NSData *data = [NSData dataWithBytes:buffer length:offset];
        if (complete) {
            complete(data, 2);
        }
        free(buffer);
        data = NULL;
    }
}

This function splits a chunk of data into four pieces:

  1. Image header information
  2. Y-channel information
  3. U-channel information
  4. V-channel information

Each piece is converted and then sent immediately. This reduces the amount of data in flight, speeds up processing, releases temporary variables as quickly as possible, and keeps memory usage at a steady average level.
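From the sending side, consuming the chunked output looks roughly like this (the complete send routine appears further below):

__block NSUInteger totalLength = 0;
[videoFrame getBytesQueue:^(NSData *data, NSInteger index) {
    totalLength += data.length;                        // accumulate the full payload size for the trailing header
    [self.socket writeData:data withTimeout:5 tag:0];  // forward each chunk as soon as it is ready
}];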


Now that the data is ready, how do we organize it in Socket transfers?

As a result of the above operations, we have divided an image into four parts:

  1. Image header information
  2. Y-channel information
  3. U-channel information
  4. V-channel information

We send each piece of data from the child process to the main process separately, and the main process waits until it has received a complete picture before doing further processing.

Although the data is sent in pieces, the Socket transmission still needs to mark the four parts as one complete frame, so that the main process knows it has received a complete picture.

Therefore, after sending the four pieces, the child process finally sends a small data frame, similar to an HTTP header, to tell the main process that one picture has been fully transmitted.

- (void)sendVideoBufferToHostApp:(CMSampleBufferRef)sampleBuffer {
    if (!self.socket) {
        return;
    }
    if (self.frameCount > 0) {
        // Only handle one frame at a time.
        return;
    }
    long curMem = [self getCurUsedMemory];
    NSLog(@"curMem: %@", @(curMem / 1024.0 / 1024.0));
    if (evenlyMem > 0 && ((curMem - evenlyMem) > (5 * 1024 * 1024) || curMem > 45 * 1024 * 1024)) {
        // Memory has jumped by more than 5 MB, or total usage exceeds 45 MB: skip this frame.
        return;
    }
    self.frameCount++;
    CFRetain(sampleBuffer);
    dispatch_async(self.videoQueue, ^{ // move the work off the callback queue
        @autoreleasepool {
            // Convert to the intermediate I420 frame.
            NTESI420Frame *videoFrame = [NTESYUVConverter pixelBufferToI420:CMSampleBufferGetImageBuffer(sampleBuffer)];
            CFRelease(sampleBuffer);

            // Send to the host app.
            if (videoFrame) {
                __block NSUInteger length = 0;
                [videoFrame getBytesQueue:^(NSData *data, NSInteger index) {
                    length += data.length;
                    [self.socket writeData:data withTimeout:5 tag:0];
                }];
                @autoreleasepool {
                    NSData *headerData = [NTESSocketPacket packetWithBufferLength:length];
                    [self.socket writeData:headerData withTimeout:5 tag:0];
                }
            }
        }
        if (self->evenlyMem <= 0) {
            self->evenlyMem = [self getCurUsedMemory];
            NSLog(@"average memory: %@", @(self->evenlyMem));
        }
        self.frameCount--;
    });
}
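The memory guard above relies on a getCurUsedMemory helper that the article does not show. A common way to implement it (an assumption on my part, not necessarily the author's code) is to read the process footprint via Mach's task_info, since phys_footprint is the figure the extension's memory limit is enforced against:

#import <mach/mach.h>

- (long)getCurUsedMemory {
    task_vm_info_data_t vmInfo;
    mach_msg_type_number_t count = TASK_VM_INFO_COUNT;
    kern_return_t result = task_info(mach_task_self(), TASK_VM_INFO,
                                     (task_info_t)&vmInfo, &count);
    if (result != KERN_SUCCESS) {
        return -1;
    }
    // phys_footprint is the value the system compares against the memory limit.
    return (long)vmInfo.phys_footprint;
}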

With the above, the 50 MB system limit problem is essentially solved.

Test devices:

  • iPhone 5s
  • iPhone 6s Plus
  • iPhone 7
  • iPad mini4
  • iPad Air2

For stress testing, use complex, fast-changing images.

In summary, it is still the river problem: mapped onto the computing world, it is too much data to process, a CPU that cannot keep up, and memory that is not released in time.

Welcome to follow my WeChat official account, Programming Daxin, so we can learn and improve together!