Can you answer the following questions? 1. What happens between a tap on the screen and the app finishing rendering? 2. When a UIImageView is added to a view, how does its content get rendered onto the phone's screen? 3. A tableView contains many cells; how do you avoid lag?

Today, we’ll look at the rendering process in iOS.

Image rendering pipeline

The image rendering process is roughly divided into the steps shown above. In this rendering pipeline, apart from the first Application stage, the GPU is responsible for all subsequent stages. For convenience of explanation, the GPU's own rendering flow chart is shown next: the figure above is the pipeline the GPU runs when rendering a triangle. As you can see, even drawing a simple triangle requires a lot of computation; with more and more complex vertex, color, and texture information (including 3D textures), the computation becomes unimaginable. This is why GPUs are better suited to the rendering process. Next, the specific tasks of each part of the rendering pipeline are explained in detail:

Application processing stage: getting primitives

This phase refers to the processing of the image inside the application, which is still handled by the CPU. In this phase, the application may perform a series of operations on or changes to the image, and finally passes the new image information to the next stage. This information is called primitives, and usually consists of triangles, lines, and vertices.

Geometry processing stage: handling primitives

From this stage onward, everything runs on the GPU. The GPU receives the primitive information transmitted from the previous stage, processes those primitives, and outputs new primitives. This series of stages includes:

  • Vertex Shader: transforms the vertex information in the primitives, adds lighting information, and adds textures.
  • Shape Assembly: triangles, lines, and points correspond to three, two, and one vertex respectively; this stage connects the vertices into the corresponding shapes.
  • Geometry Shader: adds additional vertices to transform the original primitives into new primitives, building different models. In short, it builds more complex geometries out of triangles, line segments, and points.

Rasterization phase: pixel conversion

The main purpose of rasterization is to convert the geometry-processed primitive information into a series of pixels for subsequent display on the screen. In this stage, the pixels covered by each primitive are calculated from the primitive information, dividing the pixels among the primitives.

A simple division is based on the center point: if the center point of a pixel lies inside the primitive, the pixel belongs to that primitive. As shown in the figure above, the dark blue lines form the triangle primitive; by testing whether each pixel's center point is covered, all pixels belonging to the primitive can be found, namely the light blue part.
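To make the center-point rule concrete, here is a minimal sketch in C of how coverage could be tested for one triangle. This is purely illustrative (the function names are our own): a real GPU performs this test massively in parallel in fixed-function hardware.

#include <stdbool.h>

// Signed area of the parallelogram (a, b, p); its sign tells which side
// of the edge a->b the point p lies on.
static float edgeFunction(float ax, float ay, float bx, float by, float px, float py) {
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

// Returns true if the pixel whose center is (px + 0.5, py + 0.5) is covered
// by the triangle (x0,y0)-(x1,y1)-(x2,y2), assuming counter-clockwise winding.
static bool pixelCenterCovered(int px, int py,
                               float x0, float y0, float x1, float y1,
                               float x2, float y2) {
    float cx = px + 0.5f, cy = py + 0.5f; // sample at the pixel center
    return edgeFunction(x0, y0, x1, y1, cx, cy) >= 0 &&
           edgeFunction(x1, y1, x2, y2, cx, cy) >= 0 &&
           edgeFunction(x2, y2, x0, y0, cx, cy) >= 0;
}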

Displaying the image on the screen

After the image rendering process is complete, the next step is to display the resulting pixel information on the physical screen. After the GPU's final rendering step, the pixel information is stored in the Framebuffer; the Video Controller then reads the information from the Framebuffer and, after digital-to-analog conversion, transfers it to the Monitor for display. The complete process is shown below.

The GPU-processed collection of pixels, known as a bitmap, is cached by the frame buffer for later display. The display's electron beam scans the screen line by line, starting at the top left corner, and the image information for each point on the screen is read from the bitmap in the frame buffer and displayed. The scanning process is shown above.

During the beam scan, the screen displays the corresponding results; each full scan of the screen presents one complete frame. By constantly refreshing, the screen presents new frames as a continuous image. The rate at which the screen refreshes is measured in Frames per Second (FPS). Due to the persistence of vision of the human eye, when the screen refreshes at a high enough rate (usually around 50 to 60 FPS), the picture looks continuous and fluid. For iOS, 60 FPS is the best experience you can get.

Problems and solutions brought by the rendering pipeline

(1) Screen tearing

In this single-buffer mode, the optimal situation is a smooth pipeline: every time the electron beam starts scanning a new frame from the top, the CPU+GPU rendering for that frame has already finished and the rendered bitmap has been placed in the frame buffer. But this perfect situation is fragile: if the frame buffer is updated mid-scan, the screen shows parts of two different frames at once, which is screen tearing.

Vsync + Double Buffering

One strategy to solve screen tearing and improve display efficiency is to combine the Vsync signal with double buffering. According to Apple's documentation, iOS devices always use Vsync + Double Buffering. Vertical synchronisation (Vsync) is the equivalent of locking the frame buffer: a Vsync signal is emitted when the electron beam has finished scanning a frame and is about to start again. Only when the video controller receives Vsync does it update the bitmap in the frame buffer to the next frame, ensuring that a single complete frame is displayed each time and thus avoiding screen tearing. However, in this scheme, after the video controller receives Vsync it has to load the bitmap of the next frame, which would mean the entire CPU+GPU rendering process completing in an instant — obviously unrealistic. So double buffering adds a back buffer: rendering results are pre-stored in the back buffer, and when the Vsync signal arrives, the video controller swaps the contents of the back buffer into the frame buffer. The swap is effectively a swap of memory addresses, so it completes almost instantaneously.

(2) Frame dropping

Enabling Vsync and double buffering solves screen tearing, but introduces a new problem: frame dropping. If the CPU and GPU have not finished rendering the new bitmap by the time Vsync arrives, the video controller will not replace the bitmap in the frame buffer; the screen rescans and shows exactly the same image as the previous frame. Two refresh cycles showing the same picture is called a dropped frame. As shown in the figure, A and B represent two frame buffers; when Vsync arrives before B has finished rendering, the screen can only display frame A again, which is the first dropped frame.

Solution: Triple Buffering

In fact, there is room for improvement. Notice that the CPU and GPU are idle for a period of time when a frame drop occurs: while the content of A is being scanned and displayed and the content of B has already finished rendering, the CPU and GPU have nothing to do. So if we add another frame buffer, we can use this time to render the next frame and temporarily store the result in the new buffer. As shown in the figure, with the additional frame buffer, the idle gap during frame drops can be utilized to some extent, CPU and GPU capacity is used more rationally, and the number of dropped frames is reduced.

(3) The nature of screen lag

The direct cause of lag when using a phone is frame dropping. As mentioned earlier, the refresh rate must be high enough for motion to feel smooth. For an iPhone, the maximum screen refresh rate is 60 FPS, and 50 FPS is generally still a good experience. However, if too many frames are dropped and the effective refresh rate falls too low, the experience stops being smooth. In that sense, we can summarize:

  • The root cause of screen lag: the CPU and GPU rendering pipeline takes too long, resulting in dropped frames.
  • What Vsync and double buffering achieve: forcing refreshes to synchronize with the screen solves the tearing problem, at the cost of introducing frame drops.
  • What triple buffering achieves: making reasonable use of CPU and GPU rendering capacity to reduce the number of dropped frames.
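Since dropped frames are what users perceive as lag, a common practical way to observe them is to drive a counter with CADisplayLink on the main RunLoop. A minimal sketch (the 55 FPS reporting threshold is an arbitrary choice of ours):

#import <UIKit/UIKit.h>

@interface FPSMonitor : NSObject
- (void)start;
@end

@implementation FPSMonitor {
    CADisplayLink *_link;
    CFTimeInterval _lastTime;
    NSUInteger _frameCount;
}

- (void)start {
    _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
    // Add to common modes so counting continues while scroll views are tracking.
    [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    if (_lastTime == 0) { _lastTime = link.timestamp; return; }
    _frameCount++;
    CFTimeInterval delta = link.timestamp - _lastTime;
    if (delta >= 1.0) { // report once per second
        double fps = _frameCount / delta;
        if (fps < 55.0) NSLog(@"possible frame drops: %.1f FPS", fps);
        _lastTime = link.timestamp;
        _frameCount = 0;
    }
}
@end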

Rendering frameworks in iOS

CALayer is the basis of the display: it stores bitmaps

Simply put, CALayer is the foundation of on-screen display. So how does CALayer pull this off? In CALayer.h, CALayer has the property contents:

/** Layer content properties and methods. **/

/* An object providing the contents of the layer, typically a CGImageRef,
 * but may be something else. (For example, NSImage objects are
 * supported on Mac OS X 10.6 and later.) Default value is nil.
 * Animatable. */

@property(nullable, strong) id contents;

contents provides the content of the layer; it is declared as a pointer type (id), and in iOS it is typically a CGImageRef (or an NSImage on OS X). Checking further, Apple defines CGImageRef as:

A bitmap image or image mask.

Seeing the word bitmap, we can now relate it back to the rendering pipeline discussed earlier: the contents property of CALayer holds the bitmap produced by the device's rendering pipeline (also known as the backing store), and when the device screen is refreshed, that bitmap is read from the CALayer and rendered onto the screen. So, if we set the contents property of CALayer in our code like this:

// typedef struct CGImage *CGImageRef;
layer.contents = (__bridge id)image.CGImage;

Then, when this code executes, the operating system calls the underlying interfaces and renders the image through the CPU+GPU rendering pipeline to obtain the corresponding bitmap, which is stored in CALayer.contents. When the device screen refreshes, the bitmap is read and presented on the screen. Because the stored content is static, whenever the content needs to change, Core Animation must trigger the drawRect: method to draw a new bitmap for display.

What is Core Animation

It is essentially a compositing engine that renders, builds, and implements animations. Core Animation is usually used to implement animations efficiently and easily, but its predecessor was actually called Layer Kit, and animation is only part of its functionality. For iOS apps, whether or not Core Animation is used directly, it is deeply involved in app construction at the lowest level; for OS X apps, it also makes many features easy to implement. Core Animation is the low-level support for AppKit and UIKit, integrated into the workflows of Cocoa and Cocoa Touch. It is the most fundamental architecture for rendering and building an app's interface.

Core Animation's job is to composite the different visual content on the screen as quickly as possible. This content is broken into separate layers (specifically, CALayer in iOS) and stored in a tree hierarchy. This tree forms the basis of UIKit and of everything you see on screen in an iOS app.

Core Animation's full rendering process

The whole pipeline has the following steps:

(1) Handle Events:

This process handles touch events that may change the layout and interface hierarchy of the page, for example:

  1. Creating and adjusting the view hierarchy, e.g. addSubview:, removeFromSuperview, and so on
  2. Setting a UIView's frame, adjusting Auto Layout constraints, etc.
  3. Changing a CALayer's opacity
  4. Adding an animation to a view
  5. Other operations that may cause changes to the CALayer tree

(2) Commit Transaction:

When these operations cause the layer tree to change, a transaction is implicitly created. The entire transaction includes the Layout, Display, Prepare, and Commit steps.
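Besides the implicit transaction created here, transactions can also be opened explicitly with the CATransaction API. A small sketch (the layer variable and values are illustrative) showing how several layer changes are batched and committed together:

#import <QuartzCore/QuartzCore.h>

static void updateLayer(CALayer *layer) {
    [CATransaction begin];
    [CATransaction setAnimationDuration:0.25];
    // Everything between begin/commit is batched into one transaction
    // and sent to the Render Server together on commit.
    layer.opacity = 0.5;
    layer.position = CGPointMake(100, 100);
    [CATransaction commit];
}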

Layout: Builds the view

This stage mainly deals with the construction and layout of the view. The specific steps include:

  1. Calling the overridden layoutSubviews methods
  2. Creating views and adding subviews via the addSubview: method
  3. Evaluating the view layout, i.e. all the layout constraints

Since this phase runs on the CPU and is typically CPU-bound or IO-bound, we should keep it as efficient and lightweight as possible to reduce its time cost: for example, by avoiding unnecessary view creation, simplifying layout calculations, reducing the view hierarchy, and so on (see the sketch below).
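As one illustration of keeping this phase light, simple manual frame layout in an overridden layoutSubviews is often cheaper than a deep Auto Layout constraint hierarchy. A sketch, assuming a cell with two hypothetical subviews (iconView, titleLabel):

// A sketch: cheap manual layout instead of many Auto Layout constraints.
- (void)layoutSubviews {
    [super layoutSubviews];
    CGFloat w = self.bounds.size.width;
    self.iconView.frame   = CGRectMake(12, 12, 44, 44);     // hypothetical subview
    self.titleLabel.frame = CGRectMake(68, 12, w - 80, 44); // hypothetical subview
}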

Display: Draws a view

This stage is mainly handed over to Core Graphics for view drawing. Note that it does not really display the view; rather, it produces the primitives data mentioned earlier.

The primitive information is created from the Layout results of the previous stage. If the drawRect: method has been overridden, the overridden drawRect: is called and bitmap data is drawn manually inside it, customizing the view's drawing.

Note that in the normal case, only primitive information is produced during the Display stage, and the bitmap is drawn on the GPU from that information. However, if you override the drawRect: method, the system calls Core Graphics drawing methods directly to obtain bitmap data, and requests an extra block of memory to hold the drawn bitmap temporarily. Overriding drawRect: thus moves the drawing process from the GPU to the CPU, at a cost in efficiency. Because the process also consumes extra CPU time and memory, the drawing must be efficient, or it can cause CPU spikes or memory blow-ups.
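For reference, a minimal drawRect: override; the mere presence of this method means the view's content is rasterized by Core Graphics on the CPU into an extra backing store:

// Minimal custom drawing; overriding drawRect: moves rasterization of this
// view's content to the CPU and allocates an extra backing store.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext(); // context created by CALayer
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(ctx, CGRectInset(self.bounds, 10, 10));
}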

Prepare: Additional work by Core Animation

This step mainly performs image decoding and image format conversion.
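Image decoding normally happens lazily at this point, on the main thread. A common hand-rolled workaround (a sketch of a widespread technique, not Apple's API for this step) is to force decompression yourself on a background queue by drawing the image into a bitmap context:

// A sketch: forcing UIImage decompression up front so the Prepare step
// does not have to decode on the main thread.
static UIImage *decodedImage(UIImage *image) {
    CGImageRef ref = image.CGImage;
    size_t width = CGImageGetWidth(ref), height = CGImageGetHeight(ref);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    CGColorSpaceRelease(space);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), ref); // forces decode
    CGImageRef decoded = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *result = [UIImage imageWithCGImage:decoded];
    CGImageRelease(decoded);
    return result;
}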

Commit: package and send

This step mainly packages the layers and sends them to the Render Server. Note that the commit operation recurses over the layer tree, so if the layer tree is too complex, the commit becomes expensive. This is another reason to reduce the view hierarchy and thus the complexity of the layer tree.

(3) Decode, Draw Calls, Render, Display

Once the packaged layers are transferred to the Render Server, they are first decoded. After decoding, Core Animation calls the methods of the lower-level rendering framework (such as OpenGL or Metal) to issue draw calls, which in turn drive the GPU. After GPU rendering completes, the result waits for the next runloop/Vsync to be displayed. The Render Server typically renders via OpenGL or Metal. Taking OpenGL as an example, the figure above mainly shows the operations performed on the GPU, including:

  • Tiler: uses the Vertex Shader to process the vertex information in the primitives, then performs tiling, splitting the geometry into tile buckets; once the Tiler has processed all the vertex information, or the Parameter Buffer is full, the results are written to the Parameter Buffer.
  • Renderer: uses the Pixel Shader to convert the primitive information into pixels.
  • Render Buffer: stores the rendered bitmap for the subsequent Display operation.

The rendering process is triggered in Runloop

What is the runloop?

A runloop is a large event-driven loop: it adds user interaction events, internal system events, and timer events to an event queue, and processes events from the queue in a cycle. When all events have been processed, the runloop goes to sleep until it is woken by new events.

The RunLoop commonly referred to is NSRunLoop or CFRunLoopRef. CFRunLoopRef is a pure C API, while NSRunLoop is merely an Objective-C wrapper around CFRunLoopRef that provides no additional functionality. The following analysis therefore focuses on CFRunLoopRef. Apple has open-sourced Core Foundation, so the CFRunLoop source is easy to find 👇 CFRunLoop source code

int32_t __CFRunLoopRun( /** 5 parameters */ )
{
    // Notify Observers: about to enter the runloop
    __CFRunLoopDoObservers(kCFRunLoopEntry);
    
    do
    {
        // Notify Observers: timers and sources are about to be processed
        __CFRunLoopDoObservers(kCFRunLoopBeforeTimers);
        __CFRunLoopDoObservers(kCFRunLoopBeforeSources);
        
        // Handle non-delayed main-thread calls
        __CFRunLoopDoBlocks();
        // Handle Source0 events
        __CFRunLoopDoSource0();
        
        if (sourceHandledThisLoop) {
            __CFRunLoopDoBlocks();
        }
        // If a port-based Source1 is in the ready state, process it directly and jump to message handling.
        if (__Source0DidDispatchPortLastTime) {
            Boolean hasMsg = __CFRunLoopServiceMachPort();
            if (hasMsg) goto handle_msg;
        }
        
        // Notify Observers: no events to handle, the RunLoop's thread is about to sleep
        if (!sourceHandledThisLoop) {
            __CFRunLoopDoObservers(runloop, currentMode, kCFRunLoopBeforeWaiting);
        }
        
        // GCD: check for messages in the main dispatch queue
        CheckIfExistMessagesInMainDispatchQueue();
        
        // Wait for a kernel mach_msg event
        mach_port_t wakeUpPort = SleepAndWaitForWakingUpPorts(); // sleeping...
        
        // Woken up from waiting
        __CFRunLoopDoObservers(kCFRunLoopAfterWaiting);
        
        // Handle timer wake-ups
        if (wakeUpPort == timerPort)
            __CFRunLoopDoTimers();
        
        // Handle asynchronous wake-ups such as dispatch_async
        else if (wakeUpPort == mainDispatchQueuePort)
            __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__();
        
        // Handle Source1
        else
            __CFRunLoopDoSource1();
        
        // Once more, make sure there are no pending blocks to run
        __CFRunLoopDoBlocks();
        
    } while (!stop && !timeout);
    
    // Notify Observers: the runloop is about to exit
    __CFRunLoopDoObservers(kCFRunLoopExit);
}

The following diagram depicts the Runloop running process; the core flow of Runloop is described in Apple's "The Run Loop Sequence of Events".

Input sources

An input source is a source of events; input sources deliver events asynchronously to your thread. The source of an event depends on the type of input source, which usually falls into one of two categories: port-based input sources monitor your application's Mach ports, while custom input sources monitor custom event sources. Port-based sources are signaled automatically by the kernel, while custom sources must be signaled manually from another thread. Take a look at the official Runloop structure diagram (note the Input Source Port below, corresponding to Source1 in the previous flowchart). Source1 and Timer are both port event sources; the difference is that all Timers share a single port ("Mode Timer Port"), while each Source1 has its own port:

The difference between Source1 and Source0 is as follows. Source1: mach_port-based events from the kernel or from other processes or threads, which can actively wake a dormant RunLoop (interprocess communication is rarely used directly in iOS development). mach_port is a mechanism for processes to send messages to each other; for example, screen taps and incoming network data trigger Source1.

When the phone is touched, shaken, or locked, the system sends an event to the app's process (interprocess communication); this is why it is called a port-based Source1. Source0: non-port-based event handling. What does non-port-based mean? It means the message was not received directly from another process or the kernel; these are typically in-app events, such as hitTest:withEvent: handling or performSelector events. A simple example: an app sits in the foreground and the user taps its interface with a finger. The flow is as follows: the touch hits the hardware first; the event on the screen surface is wrapped by IOKit into an event and passed to the active app via mach_port. The event reaches Source1 (mach_port), which wakes the RunLoop; the event is then distributed to Source0, which processes it.

Common sources are port-based sources, custom sources, performSelector sources, and timer sources; a sketch of a custom source follows.
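To see the "signaled manually" part concretely, here is a sketch of a hand-rolled Source0 using the Core Foundation API. Unlike a Source1, nothing wakes the runloop for us; we must signal the source and wake the runloop ourselves:

#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

static void MySourcePerform(void *info) {
    // Runs on the runloop's thread once the source has been signaled.
    printf("source0 fired\n");
}

static void addAndFireSource0(void) {
    CFRunLoopSourceContext ctx = {0};
    ctx.perform = MySourcePerform;
    CFRunLoopSourceRef source0 = CFRunLoopSourceCreate(kCFAllocatorDefault, 0, &ctx);
    CFRunLoopAddSource(CFRunLoopGetMain(), source0, kCFRunLoopDefaultMode);
    
    CFRunLoopSourceSignal(source0);      // mark the source as ready...
    CFRunLoopWakeUp(CFRunLoopGetMain()); // ...and wake the runloop so it runs
    CFRelease(source0);
}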

RunLoop modes

Each time you run a run loop, you specify (explicitly or implicitly) a particular "mode" in which to run. During that pass of the loop, only the sources associated with that mode are monitored and allowed to deliver their events. (Similarly, only the observers associated with that mode are notified of the loop's progress.) Sources associated with other modes hold on to any new events until a subsequent pass through the loop in the appropriate mode.

It is easy to see from the source that a Runloop always runs in one particular CFRunLoopModeRef (a Mode must be specified every time __CFRunLoopRun() is run). From the definition of the CFRunLoopRef structure, it is easy to see that each Runloop can contain several Modes, and each Mode contains its own Source/Timer/Observer sets. Each time __CFRunLoopRun(), the main function of the Runloop, is called, a Mode must be specified; this Mode is called _currentMode. To switch Mode, you must exit and re-enter the Runloop, which ensures that the Source/Timer/Observer sets of different Modes do not interfere with each other.

struct __CFRunLoop {  // partial
    CFRuntimeBase _base;
    pthread_mutex_t _lock; /* locked for accessing mode list */
    __CFPort _wakeUpPort; // used for CFRunLoopWakeUp 
    Boolean _unused;
    pthread_t _pthread;
    CFMutableSetRef _commonModes;
    CFMutableSetRef _commonModeItems;
    CFRunLoopModeRef _currentMode;
    CFMutableSetRef _modes;
    CFAbsoluteTime _runTime;
    CFAbsoluteTime _sleepTime;
    CFTypeRef _counterpart;
};

// ----------------------------------------

struct __CFRunLoopMode {  // partial
    CFRuntimeBase _base;
    /* must have the run loop locked before locking this */
    pthread_mutex_t _lock;
    CFStringRef _name;
    Boolean _stopped;
    CFMutableSetRef _sources0;
    CFMutableSetRef _sources1;
    CFMutableArrayRef _observers;
    CFMutableArrayRef _timers;
    CFMutableDictionaryRef _portToV1SourceMap;
    __CFPortSet _portSet;
    CFIndex _observerMask;
};

The default RunLoop Modes provided by the system include kCFRunLoopDefaultMode (NSDefaultRunLoopMode) and UITrackingRunLoopMode; to switch to a Mode you simply pass in the corresponding name. The former is the system's default Runloop Mode: for example, when an iOS program starts and nothing else is done, it sits in this Mode by default. If you swipe a UIScrollView, the main thread switches the Runloop to UITrackingRunLoopMode, and Sources/Timers registered in other Modes no longer receive events (unless you also register them for UITrackingRunLoopMode).

However, there is another Mode commonly used by developers: kCFRunLoopCommonModes (NSRunLoopCommonModes). This is not a specific Mode but a group of Modes, which on iOS includes NSDefaultRunLoopMode and UITrackingRunLoopMode by default. Note: this does not mean the Runloop runs "in" kCFRunLoopCommonModes; rather, the item is registered into NSDefaultRunLoopMode and UITrackingRunLoopMode separately. You can also add a custom Mode to the kCFRunLoopCommonModes group by calling CFRunLoopAddCommonMode().
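The classic example: an NSTimer added only to the default mode stops firing while a scroll view is tracking, because the main RunLoop has switched to UITrackingRunLoopMode. Adding it to NSRunLoopCommonModes registers it in both modes (a sketch using the block-based timer API available since iOS 10):

NSTimer *timer = [NSTimer timerWithTimeInterval:1.0
                                        repeats:YES
                                          block:^(NSTimer *t) {
    NSLog(@"tick");
}];
// Added only to NSDefaultRunLoopMode, this timer would pause during scrolling.
// NSRunLoopCommonModes registers it in both the default and tracking modes.
[[NSRunLoop mainRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];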

The relationship between CFRunLoopRef, CFRunLoopModeRef, and CFRunLoopSourceRef/CFRunLoopTimerRef/CFRunLoopObserverRef is as follows:

A RunLoop object (CFRunLoop) contains several operating modes (CFRunLoopMode). Each operating mode contains several input sources (CFRunLoopSource), timing sources (CFRunLoopTimer) and observers.

The Observer

struct __CFRunLoopObserver {
    CFRuntimeBase _base;
    pthread_mutex_t _lock;
    CFRunLoopRef _runLoop;
    CFIndex _rlCount;
    CFOptionFlags _activities; /* immutable */
    CFIndex _order; /* immutable */
    CFRunLoopObserverCallBack _callout; /* immutable */
};

CFRunLoopObserverRef is relatively straightforward to understand: it acts as a listener in the message loop, notifying the outside world of the RunLoop's current running state (it contains a function pointer _callout to notify the observer of the current state). The possible states are as follows:

/* Run Loop Observer Activities */
typedef CF_OPTIONS(CFOptionFlags, CFRunLoopActivity) {
    kCFRunLoopEntry = (1UL << 0),         // Entering the RunLoop
    kCFRunLoopBeforeTimers = (1UL << 1),  // Timer processing is about to start
    kCFRunLoopBeforeSources = (1UL << 2), // Source processing is about to start
    kCFRunLoopBeforeWaiting = (1UL << 5), // About to go to sleep
    kCFRunLoopAfterWaiting = (1UL << 6),  // Woken up from sleep
    kCFRunLoopExit = (1UL << 7),          // Exiting the RunLoop
    kCFRunLoopAllActivities = 0x0FFFFFFFU
};
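These activities can be observed from your own code by attaching an observer to the main RunLoop; a minimal sketch using the block-based CFRunLoopObserver API:

#import <Foundation/Foundation.h>

static void installObserver(void) {
    CFRunLoopObserverRef observer = CFRunLoopObserverCreateWithHandler(
        kCFAllocatorDefault,
        kCFRunLoopAllActivities, // observe every state change
        true,                    // repeats
        0,                       // order
        ^(CFRunLoopObserverRef obs, CFRunLoopActivity activity) {
            if (activity == kCFRunLoopBeforeWaiting) NSLog(@"about to sleep");
            else if (activity == kCFRunLoopAfterWaiting) NSLog(@"woke up");
        });
    CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);
    CFRelease(observer);
}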

The call out callback functions

RunLoop delivers almost all callbacks (whether Observer state notifications or Timer/Source processing) via call out functions. In other words, your code is ultimately invoked by the functions below; even an Observer you registered is notified indirectly through them, which is why you will often see these functions in call stacks:

static void __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__();
static void __CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__();
static void __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__();
static void __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__();
static void __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__();
static void __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__();

For example, set a breakpoint in a controller's touchesBegan: and look at the stack (since UIEvent handling is Source0, you can see the Source0 callout function __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__):

How the runloop sleeps

In fact, as an event loop, the core of RunLoop is ensuring that the thread sleeps when there are no messages, avoiding wasted system resources, and wakes as soon as a message arrives. This mechanism relies entirely on the kernel — specifically on Mach, the core component of Darwin, the open-source core of Apple's operating system; Mach sits in the lowest Kernel layer. The essence of mach_msg() is a call to mach_msg_trap(), which is equivalent to a system call and triggers a switch into kernel mode. When the program is idle, the RunLoop is parked at:

__CFRunLoopServiceMachPort(waitSet, &msg, sizeof(msg_buffer), &livePort, poll ? 0 : TIMEOUT_INFINITY, &voucherState, &voucherCopy)

Inside this function, a call to mach_msg puts the program to sleep.

The RunLoop mechanism thus transitions between user mode and kernel mode. User mode and kernel mode are basic concepts in Linux and Unix systems: two privilege levels at which the operating system runs. Because system resources such as network and memory are limited, the kernel layer is designed this way to optimize performance, lower power consumption, and improve resource utilization.

The relationship between Runloop and threads

Threads on iOS are managed on top of pthreads, the C-based low-level API for cross-platform multithreading, which is itself a thin wrapper over Mach threads (see the Kernel Programming Guide); NSThread is the corresponding object-oriented API, so we rarely use pthreads directly in iOS development. In iOS development, developers more commonly use NSRunLoop, which provides three run methods by default:

- (void)run; 
- (BOOL)runMode:(NSRunLoopMode)mode beforeDate:(NSDate *)limitDate;
- (void)runUntilDate:(NSDate *)limitDate;
  • run: corresponds to CFRunLoopRun in the CFRunLoopRef above and does not exit unless CFRunLoopStop() is called; this method is usually used when you want the RunLoop to never exit, otherwise use runUntilDate:.
  • runMode:beforeDate: corresponds to CFRunLoopRunInMode(mode, limitDate, true); it executes one pass and then returns. It is usually used to control the runloop manually (for example, inside a while loop, as sketched below).
  • runUntilDate: is effectively CFRunLoopRunInMode(kCFRunLoopDefaultMode, limitDate, false) in a loop; it does not exit after handling events but continues into the next RunLoop pass until the timeout.
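A common application of runMode:beforeDate: is the "resident thread" pattern: a background thread stays alive because its RunLoop always has at least one source. A sketch (without the dummy port, the RunLoop would find no Source/Timer and exit immediately):

@interface Worker : NSObject
@property (nonatomic, strong) NSThread *thread;
@end

@implementation Worker
- (void)start {
    self.thread = [[NSThread alloc] initWithTarget:self
                                          selector:@selector(threadMain)
                                            object:nil];
    [self.thread start];
}

- (void)threadMain {
    @autoreleasepool {
        // A RunLoop with no Source/Timer exits immediately, so attach a dummy port.
        [[NSRunLoop currentRunLoop] addPort:[NSMachPort port]
                                    forMode:NSDefaultRunLoopMode];
        while (!self.thread.isCancelled) {
            // Handle one batch of events per pass; the thread sleeps in between.
            [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                     beforeDate:[NSDate distantFuture]];
        }
    }
}
@end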

Applications of the runloop

This article mainly deals with interface rendering, so it focuses on the application of Runloop in the UI update process:

Update the UI

If you print the main thread's RunLoop after the app starts, you can find another Observer whose callout is _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv. This Observer monitors UI changes: after updates such as changing a frame, adjusting the UI hierarchy (UIView/CALayer), or manually calling setNeedsDisplay:/setNeedsLayout, these adjustments are registered into a transaction. When the Observer hears that the main RunLoop is about to sleep or exit, it iterates over all pending UI updates and commits the transaction, triggering the actual drawing update.

UIView display-related method calls and procedures

Notice that this entire process is the Display step in the CA Transaction!! What follows is the process before CALayer actually renders!!

From the drawing process diagram, we can conclude:

  1. When [UIView setNeedsDisplay] is called, the layer's setNeedsDisplay method of the same name is called directly.
  2. Core Animation then catches the layer-tree change, submits a CATransaction, and fires the Runloop Observer callback, in which [CALayer display] is called to do the actual drawing of the current view. This corresponds to the rendering trigger described in the Runloop section above.
  3. [CALayer display] checks whether the layer's delegate responds to the displayLayer: method. If not, it enters the system drawing process; if it does, this provides an entry point for asynchronous drawing — that is, it gives us some leeway to draw asynchronously.

The Core Graphics API is thread-safe, as long as the drawing between CGBitmapContextCreate and the end of that context stays on the same thread.
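That thread-safety is exactly what the displayLayer: entry point can exploit: draw into your own bitmap context on a background queue, then hand the finished image to layer.contents on the main thread. A simplified sketch (error handling and cancellation omitted):

#import <UIKit/UIKit.h>

- (void)displayLayer:(CALayer *)layer {
    CGSize size = layer.bounds.size;
    CGFloat scale = [UIScreen mainScreen].scale;
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        // Create an offscreen bitmap context on the background thread.
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL,
                                                 size.width * scale, size.height * scale,
                                                 8, 0, space,
                                                 kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
        CGColorSpaceRelease(space);
        CGContextScaleCTM(ctx, scale, scale);
        
        // ...expensive CG* drawing goes here...
        CGContextSetFillColorWithColor(ctx, [UIColor whiteColor].CGColor);
        CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));
        
        CGImageRef image = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        dispatch_async(dispatch_get_main_queue(), ^{
            // Hand the finished bitmap to the layer on the main thread.
            layer.contents = (__bridge id)image;
            CGImageRelease(image);
        });
    });
}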

WWDC 2012 Session 211, Building Concurrent User Interfaces on iOS, demonstrates using asynchronous rendering together with UIImageView for views with complex drawing logic.

For layout updates and layoutSubviews: when we call [UIView setNeedsLayout], it triggers [CALayer setNeedsLayout], which puts a dirty mark on the layer. In the next Runloop pass, [UIView layoutSubviews]/[CALayer layoutSublayers] are called, and the Layout step of the CA Commit is triggered.

Regarding the timing of when a render is triggered in CALayer (whether system rendering or drawRect rendering), see the logic worked through by the blogger in "UIView/CALayer trigger timing" (juejin.cn).

System drawing process

The essence is the process of creating a backing store:

1. When the [CALayer display] method is called, it determines whether a delegate implements the drawing method; if not, system drawing is triggered;

2. The backing store (a CGContextRef) is created first. Note that every layer has a context, and that context points to a cache called the backing store;

3. If the layer's delegate implements - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx, it is called (the CGContextRef is passed in by default); otherwise -[CALayer drawInContext:] is called, which then calls [UIView drawRect:]. Drawing now happens in that CGContextRef: if you call UIGraphicsGetCurrentContext() inside drawRect:, what you get is the CGContextRef created by the CALayer;

4. The backing store is then filled with the data drawn in the drawRect: method; finally, the backing store's bitmap data is handed via the Render Server to the GPU for rendering and displayed on the screen.

Every UIView's layer has a backing store for its backing content, which is essentially CGImage (bitmap) data for the GPU to read and display.

References

  1. Full analysis of iOS rendering
  2. iOS interface rendering and optimization (1): what the CPU and GPU are doing
  3. iOS interface rendering and optimization (2): UIView and rendering
  4. Understanding Runloop in depth
  5. iOS Runloop resident threads / performance optimization
  6. Analysis of the iOS interface rendering process
  7. iOS UIView drawing (3): from Layout to Display