Introduction to LNAsyncKit

  • LNAsyncKit is an asynchronous rendering tool that provides a convenient way to render multiple elements into a single image asynchronously. This process replaces UIKit's view-building process, which improves App performance. The Prender component provides preloading strategies to help make up for the delay that asynchronous rendering introduces into your Feed stream. Beyond view building, the Transaction component provides a more elegant way for the main thread to interact with child threads, controlling the number of concurrent threads and the timing of main-thread callbacks based on machine state.

  • LNAsyncKit draws on YYKit and Texture, and on the well-known article about keeping iOS pages smooth (there are some problems with the external link, and there may be sensitive information in the blog's comment section, so search for the title yourself if you are interested; it is basically the first result). The fluency-optimization ideas there are essentially the same as those described in this article.

How it can help

  • If you have not yet found a way to optimize rounded corners, borders, and gradients, LNAsyncKit can handle them asynchronously.
  • Feed streams need a preload strategy, and LNAsyncKit provides a preload-area calculation scheme (which is also used for pre-composition).
  • It provides a layer-building approach very close to UIKit's for layers that need pre-composition, letting you do all your complex layer building on child threads without creating as many UIViews.
  • The Demo shows how to build a Feed stream with AFNetworking / SDWebImage / IGListKit / YYModel / MJRefresh + LNAsyncKit. Except for LNAsyncKit, these five components form a fairly complete system, and the Demo also provides a Feed that is not built with LNAsyncKit, so beginners who need to quickly learn how to build a Feed stream can consult this third-party stack.

Integration

You can download the repository directly and run the Demo, referring to the code above to implement your own asynchronous list, or simply use CocoaPods 👇

pod 'LNAsyncKit'

Fluency optimization

There are plenty of articles about fluency optimization on the web, and repeating them one by one makes little sense. The purpose of this article is to show how to implement those optimization ideas in a Feed stream and simplify the process. So rather than discussing why these optimization points help, or by how much, we focus only on how to achieve them; if you have questions about the points themselves, refer to the article linked above. The following points hold true:

  • A list with fewer layers is better than one with more layers.
  • No rounded corners, borders, or gradients is better than having them.
  • An image sized to match its control is better than one that must be scaled.
  • Model parsing is better done on child threads than on the main thread.
  • Layout calculation is better done on child threads than on the main thread.
  • Some preloading is better than none (though some lists deliberately skip preloading).
  • A Layer is better than a View (when no gestures are needed).
  • Opaque layers are better than transparent ones.

The goal of LNAsyncKit is to make these optimizations simple and free without changing the business complexity.

Optimizing a Cell

Let’s treat a Cell as the minimum optimization unit of a Feed stream, taking a regular Cell from Bilibili’s recommendation Feed as an example:

Such a small Cell contains eight elements: cover image, count icon, count Label, anchor nickname, live-room name, the “Live” tag, live-content category, and the negative-feedback button. Beyond these, there is a black gradient layer at the bottom of the cover image, rounded corners and a border on the “Live” tag, and rounded corners on the entire Cell (plus some shadow). This small Cell already packs in many small elements, so we reproduce it in the Demo; its view hierarchy looks like this:

LNAsyncKit simplifies the Cell to something like this:

(The negative-feedback button in the lower right corner needs to respond to events, so this control usually remains an independent view.)

Take the “Live” tag as an example. The difference in view construction is as follows:

UIKit:

    self.liveTagLabel.layer.cornerRadius = 3.f;
    self.liveTagLabel.layer.borderColor = [UIColor colorWithRed:239.f/255.f green:91.f/255.f blue:156.f/255.f alpha:1.f].CGColor;
    self.liveTagLabel.layer.borderWidth = 1.f;
    self.liveTagLabel.text = @"Live";
    self.liveTagLabel.font = [UIFont systemFontOfSize:12.f];
    self.liveTagLabel.textColor = [UIColor colorWithRed:239.f/255.f green:91.f/255.f blue:156.f/255.f alpha:1.f];
    self.liveTagLabel.textAlignment = NSTextAlignmentCenter;
    [self.cellContentView addSubview:self.liveTagLabel];

LNAsyncKit:

    LNAsyncTextElement *liveTagElement = [[LNAsyncTextElement alloc] init];
    liveTagElement.cornerRadius = 3.f;
    liveTagElement.borderColor = [UIColor colorWithRed:239.f/255.f green:91.f/255.f blue:156.f/255.f alpha:1.f];
    liveTagElement.borderWidth = 1.f;
    liveTagElement.text = @"Live";
    liveTagElement.font = [UIFont systemFontOfSize:12.f];
    liveTagElement.textColor = [UIColor colorWithRed:239.f/255.f green:91.f/255.f blue:156.f/255.f alpha:1.f];
    liveTagElement.textAligment = NSTextAlignmentCenter;
    [cellContentElement addSubElement:liveTagElement];

LNAsyncKit renders a complete image as large as the required display area; the complex rendering logic is entirely digested by child threads, and all the main thread receives back is an image the same size as the target control.

How it works

Like UIKit, LNAsyncKit uses a view tree to build the final view. The differences are:

A. Elements inherit from NSObject, so they can be created, rendered, and destroyed on child threads. An Element can be thought of as a descriptor for a “layer to draw”. It is not the entity itself; it differs from UIView/CALayer the way an order differs from the item you buy: UIView is the item itself, while Element is the order information, carrying the item’s various descriptions, such as its size and color.

B. All Elements are temporary; their information is destroyed once the result is built. You can create the Elements after entering a child thread, destroy them once the real image has been rendered, and return only the desired image to the main thread, like this:

    dispatch_queue_t queue = dispatch_queue_create(0, 0);
    dispatch_async(queue, ^{
        LNAsyncElement *contentElement = [weakSelf rebuildElements];
        [LNAsyncRenderer traversalElement:contentElement];
        UIImage *image = contentElement.renderResult;
        contentElement.renderResult = nil;
        dispatch_async(dispatch_get_main_queue(), ^{
            weakSelf.imageView.image = image;
        });
    });

The rebuildElements process can build a very complex tree, but for the main thread this is not a problem! Keeping Elements off the main thread is also the recommended way to use LNAsyncKit (after you get your result image, set the element’s renderResult to nil). Of course, it usually doesn’t matter if some do appear there, because the cost of an NSObject is very small compared to a UIView.

C. Elements are rendered layer by layer: this is actually a post-order traversal. The child Elements of a node A are rendered first, then A itself, and then A’s parent. LNAsyncRendererTraversalStack is the stack used for the traversal, and LNAsyncRenderer’s traversal function is the traversal method. Loop detection is built into the traversal, so duplicate elements are not rendered, like this:

    LNAsyncRendererTraversalStack *stack = [[LNAsyncRendererTraversalStack alloc] init];
    [stack pushElements:@[element]];
    
    NSMutableSet <LNAsyncElement *> *repeatDetectMSet = [[NSMutableSet alloc] init];
    while (!stack.isEmpty) {
        LNAsyncElement *topElement = [stack top];
        if (topElement.getSubElements.count > 0 && (![repeatDetectMSet containsObject:topElement])) {
            [repeatDetectMSet addObject:topElement];
            [stack pushElements:topElement.getSubElements.reverseObjectEnumerator.allObjects];
        } else {
            [stack pop];
            [self renderElement:topElement];
            for (LNAsyncElement *subElement in topElement.getSubElements) {
                subElement.renderResult = nil;
            }
        }
    }

LNAsyncKit comes with some built-in Elements:

  • LNAsyncElement: corresponds to UIKit’s UIView. It is the base class of the other Elements and carries the background color, frame, and the common border and rounded-corner properties.
  • LNAsyncImageElement: corresponds to UIImageView; renders an image and provides three content-fill modes.
  • LNAsyncTextElement: corresponds to UILabel; renders text, provides the usual text attributes, and supports line wrapping.
  • LNAsyncLinerGradientElement: corresponds to CAGradientLayer; renders a linear gradient.
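For instance, the cover layer of the Cell above (image, bottom gradient, rounded corners) could be described by composing these built-in Elements. This is a minimal sketch: frame, cornerRadius, and addSubElement: appear in this article’s examples, while the image and colors properties are assumed here for illustration:

    LNAsyncElement *coverElement = [[LNAsyncElement alloc] init];
    coverElement.frame = CGRectMake(0.f, 0.f, 180.f, 120.f);
    coverElement.cornerRadius = 4.f;

    // Cover picture (assumed `image` property on LNAsyncImageElement)
    LNAsyncImageElement *imageElement = [[LNAsyncImageElement alloc] init];
    imageElement.frame = coverElement.frame;
    imageElement.image = coverImage; // a UIImage downloaded beforehand
    [coverElement addSubElement:imageElement];

    // Black gradient at the bottom of the cover (assumed `colors` property)
    LNAsyncLinerGradientElement *gradientElement = [[LNAsyncLinerGradientElement alloc] init];
    gradientElement.frame = CGRectMake(0.f, 90.f, 180.f, 30.f);
    gradientElement.colors = @[[UIColor clearColor], [UIColor blackColor]];
    [coverElement addSubElement:gradientElement];

Rendering then proceeds as in the traversal snippet earlier: traverse the tree with LNAsyncRenderer on a child thread and read coverElement.renderResult.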

Custom Element:

In addition to the native Elements, we also recommend wrapping your own. For example, an AvatarElement that renders the user’s avatar, VIP logo, avatar border, and other decorations together can override - (void)renderSelfWithContext:(CGContextRef)context and draw the three elements separately.

The value of custom Elements is that they are reusable and composable, which makes it easier to maintain a uniform style across the App and appropriately reduces development cost.
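Such a custom Element might look like the following sketch. It assumes, as described above, that subclasses override - (void)renderSelfWithContext:(CGContextRef)context; the class name and the avatar/VIP properties are hypothetical, and Core Graphics coordinate flipping is ignored for brevity:

    // Hypothetical custom Element combining avatar, border, and VIP badge.
    @interface DemoAvatarElement : LNAsyncElement
    @property (nonatomic, strong) UIImage *avatarImage;   // hypothetical input
    @property (nonatomic, strong) UIImage *vipBadgeImage; // hypothetical input
    @end

    @implementation DemoAvatarElement

    - (void)renderSelfWithContext:(CGContextRef)context {
        CGRect bounds = CGRectMake(0.f, 0.f, self.frame.size.width, self.frame.size.height);
        // 1. Avatar clipped to a circle
        CGContextSaveGState(context);
        CGContextAddEllipseInRect(context, bounds);
        CGContextClip(context);
        CGContextDrawImage(context, bounds, self.avatarImage.CGImage);
        CGContextRestoreGState(context);
        // 2. Ring-shaped avatar border
        CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(context, 2.f);
        CGContextStrokeEllipseInRect(context, CGRectInset(bounds, 1.f, 1.f));
        // 3. VIP badge in the lower-right corner
        CGFloat side = bounds.size.width * 0.3f;
        CGContextDrawImage(context,
                           CGRectMake(bounds.size.width - side, bounds.size.height - side, side, side),
                           self.vipBadgeImage.CGImage);
    }

    @end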

The Feed stream

We’ve covered how a single Cell or single image can render asynchronously to improve performance, but performance problems are rarely caused by a single image. LNAsyncKit targets the performance-sensitive scenario of Feed streams. Rendering a Feed stream requires more consideration than rendering a single view: Cell reuse, caching of rendered images, multiple image downloads, and result merging. On top of that, preloading and pre-rendering can further optimize the user experience.

Third-party libraries used:

  • AFNetworking: networking
  • IGListKit: Feed stream framework that lets you split business by module
  • SDWebImage: image downloading
  • YYModel: dictionary-to-model conversion
  • MJRefresh: pull-up/pull-down refresh components
  • A free API written by a generous developer; I don’t know the author, but the interfaces really are very convenient, so thanks~

These are very mature third-party frameworks that can be used directly to reduce development time. This article mainly introduces how to integrate LNAsyncKit into this system. The Demo already provides both the default Feed stream and the asynchronous Feed stream code; if you encounter some strange bug, you can refer to the implementation in the Demo. Both demos work fine.

  • Default Demo: We use this Demo to show a general Feed stream implementation, without any embellishment or design thought, which can be interpreted as the minimum amount of work required to implement a Feed stream.
  • Asynchronous Demo: In this Demo, we will use LNAsyncKit to implement the Feed flow compared to the usual implementation, and see the changes and differences in moving from a regular Feed to an asynchronous Feed.

Default Feed stream implementation:

  1. ViewDidLoad uses AFNetworking to request a page of data, uses YYModel to parse into Model type data, and assigns values to VC.
  2. VC calls CollectionView/IGList to refresh the list and assign Model values inside the Cell.
  3. Inside the Cell, the assignment lazily loads the Labels, and the ImageView calls sd_setImage to download and display the image.

Optimization of asynchronous Feed streams:

1. Image downloading is done in the Model

A. Asynchronous Feeds not only need to download images but also need to pre-composite multiple original images, so doing this in the Model guarantees that Cell reuse will not trigger multiple simultaneous compositions. If you composite layers asynchronously in the Cell, every Model assignment may trigger another composition; once composited in the Model, the result can always be kept with the Model (the Model holds only a weak reference; the image itself is stored in a global NSCache).

B. Consider preloading: we treat image downloading and layer composition as two priority levels. Areas far from the current screen focus only need their images downloaded, while nearby areas also need composition. Either way, a Cell usually only gets its data at (or near) the moment it appears on screen, so it is difficult to “pre-load” images if the loading logic lives in the Cell.

One of the Model’s responsibilities in MVC is to provide the data the View needs to display, so downloading images in the Model is neither wrong nor inappropriate.
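Under this design, a Model’s preload method might be sketched as follows. The property names (imageURL, rawImage, isLoading) are hypothetical; the SDWebImage call is the same one used later in this article:

    // In DemoFeedItemModel (hypothetical property names).
    - (void)preloadImage {
        if (self.rawImage || self.isLoading) {
            return; // already downloaded, or a download is in flight
        }
        self.isLoading = YES;
        __weak typeof(self) weakSelf = self;
        [[SDWebImageManager sharedManager] loadImageWithURL:[NSURL URLWithString:self.imageURL]
                                                    options:SDWebImageAvoidDecodeImage
                                                   progress:nil
                                                  completed:^(UIImage *image, NSData *data, NSError *error,
                                                              SDImageCacheType cacheType, BOOL finished, NSURL *imageURL) {
            weakSelf.rawImage = image; // composition into the final layer happens later, once
            weakSelf.isLoading = NO;
        }];
    }

Because the Model owns the download, a reused Cell simply reads whatever state its Model has reached; no composition is triggered per assignment.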

2. Model parsing and layout calculation as part of the network request

In general, when AFNetworking is used for network requests, the model is parsed and the list refreshed in the success callback, and the layout is calculated through the CollectionView dataSource/delegate protocol during the refresh.

This is not recommended for asynchronous lists. Model parsing is not as simple as it might seem: it typically iterates through the Dictionary layer by layer and creates a large number of models and submodels. Although a single NSObject is cheap, list models keep piling up, and creating that many objects is not trivial.

The cost of layout calculation is well recognized, which is why table-view optimizations generally recommend caching row heights; but even with cached heights, the first calculation still takes time on the main thread.

We recommend doing model parsing and layout calculation asynchronously in the AFNetworking callback, as part of the network request. This has no significant impact on the overall response time of the request, because network latency is usually an order of magnitude larger than the screen refresh interval; moreover, preloading can make up for this small delay.

In the request callback, each Model is assigned a LayoutObj that encapsulates this process, like this:

    - (void)transferFeedData:(NSDictionary *)dic comletion:(DemoFeedNetworkCompletionBlock)completion {
        LNAsyncTransaction *transaction = [[LNAsyncTransaction alloc] init];
        [transaction addOperationWithBlock:^id _Nullable{
            DemoFeedModel *feedModel = [DemoFeedModel yy_modelWithDictionary:dic];
            for (DemoFeedItemModel *item in feedModel.result) {
                DemoAsyncFeedDisplayLayoutObjInput *layoutInput = [[DemoAsyncFeedDisplayLayoutObjInput alloc] init];
                layoutInput.contextString = item.title;
                layoutInput.hwScale = 0.3f + ((random() % 100) / 100.f) * 0.5f;
                DemoAsyncFeedDisplayLayoutObj *layoutObj = [[DemoAsyncFeedDisplayLayoutObj alloc] initWithInput:layoutInput];
                item.layoutObj = layoutObj;
            }
            return feedModel;
        } priority:1 queue:_transferQueue completion:^(id _Nullable value, BOOL canceled) {
            if (completion) {
                completion(YES, value, nil);
            }
        }];
        [transaction commit];
    }
3. Layout in Model

This may sound a little weird, but why do view operations happen in the Model?

We have already explained Element’s responsibility: it is simply a descriptive class. The process of building a view with Elements is: the Model decides how to build (the Element tree), hands the description to LNAsyncRenderer, the renderer delivers the image, and the Model sends the image back to the View, as we discussed at the beginning.

4. Preloading

Preloading covers two things: preloading the next page of data and preloading images. The preloading discussed here mainly refers to images:

As mentioned above, image loading is done in the Model, so each Model needs a parameter marking how urgently its resources are needed. If it is far from the current user focus, its resources are not urgent yet; it can wait and see. If it is getting close to the user’s focus, it should start downloading its images. If it is close enough, it needs to start pre-compositing the downloaded resources immediately. Something like this:

    - (void)setStatus:(DemoFeedItemModelStatus)status {
        if (status > _status) {
            _status = status;
        }
        [self checkCurrentStatus];
    }

    - (void)checkCurrentStatus {
        if (self.status >= DemoFeedItemModelStatusPreload) {
            // Need to preload images
            [self preloadImage];
        }
        if (self.status >= DemoFeedItemModelStatusDisplay) {
            // Need to render the view
            [self renderView];
        }
    }

LNAsyncCollectionViewPrender provides a strategy for marking resource urgency levels: resources far from the current screen center are marked not urgent, and nearby ones are marked urgent; each Model then independently preloads or pre-renders according to its urgency marker.

This smart preloading mechanism comes from Texture; it is so useful that I reimplemented and simplified it. You can even build scrolling-list exposure tracking on the same interval-calculation idea, computing exposure intervals that better match the user’s actual line of sight rather than simply depending on the Cell/View lifecycle.
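The interval idea can be approximated like this. This is a sketch of the calculation only, not LNAsyncCollectionViewPrender’s actual API, and the leading-screen multipliers are assumed tuning values:

    // Preload/pre-render regions for a vertically scrolling list.
    // Models intersecting preloadRect should download images;
    // Models intersecting displayRect should also pre-composite.
    static void DemoPrenderRects(UIScrollView *scrollView,
                                 CGRect *preloadRect,
                                 CGRect *displayRect) {
        CGRect visible = (CGRect){scrollView.contentOffset, scrollView.bounds.size};
        CGFloat screenH = visible.size.height;
        *preloadRect = CGRectInset(visible, 0.f, -1.5f * screenH); // 1.5 screens ahead/behind
        *displayRect = CGRectInset(visible, 0.f, -0.5f * screenH); // 0.5 screens ahead/behind
    }

Mapping these rects to index paths (for example, via the collection view’s layout attributes) yields the Models whose status should be raised, as in the setStatus: example above.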

5. Verify image consistency

In the callback where the asynchronously rendered image comes back to the Cell, check that the Model that requested the render is still the Cell’s current Model. Because of reuse, two Models may be set on one Cell in succession, and both may notify the Cell to refresh after their asynchronous renders finish, so a consistency check is required. Synchronous rendering doesn’t have this problem, since later content always overwrites the earlier image. Like this:

    NSObject *model = self.model;
    __weak DemoAsyncFeedCell *weakSelf = self;
    [self.model demoAsyncFeedItemLoadRenderImage:^(BOOL isCanceled, UIImage * _Nullable resultImage) {
        if (!isCanceled && resultImage && model == weakSelf.model) {
            weakSelf.contentView.layer.contents = (__bridge id)resultImage.CGImage;
        }
    }];
6. Render cache

Unlike the native images downloaded by SDWebImage, rendered images are stored in a separate render cache that the Model references only weakly; internally the cache is managed with LRU. Models cannot hold strong references because some Feed streams are resident, and we do not want memory wasted on resident pages that are not the main consumption scenario. LNAsyncCache is the unified store: put the rendered image there, point to it with a weak pointer, and if it gets evicted, re-render and store it again.
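This behavior can be approximated with NSCache, which already evicts under memory pressure. A sketch of the idea, not LNAsyncCache’s actual implementation; cacheKey and the weak renderedImage property are hypothetical:

    // Global store for rendered images; Models keep only weak references.
    static NSCache<NSString *, UIImage *> *DemoRenderCache(void) {
        static NSCache *cache;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            cache = [[NSCache alloc] init];
            cache.countLimit = 100; // assumed capacity
        });
        return cache;
    }

    // In the Model: @property (nonatomic, weak) UIImage *renderedImage;
    - (UIImage *)displayImage {
        UIImage *image = self.renderedImage ?: [DemoRenderCache() objectForKey:self.cacheKey];
        if (!image) {
            image = [self renderImage]; // re-render if the cache evicted it
            [DemoRenderCache() setObject:image forKey:self.cacheKey];
        }
        self.renderedImage = image; // weak pointer into the cache
        return image;
    }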

7. Reduce rendering times

SD downloads images with the AvoidDecode option, because the compositing process already renders the image into a block of memory, which itself includes decoding and also happens on child threads; using this option avoids an extra decode of the image right after download. Like this:

[[SDWebImageManager sharedManager] loadImageWithURL:[NSURL URLWithString:weakSelf.image]
                                            options:SDWebImageAvoidDecodeImage 
                                           progress:nil 
                                          completed:nil];

Conclusion

The LNAsyncKit optimization looks like this:

  • From the main thread’s perspective: there is basically no time-consuming work beyond refreshing the CollectionView and calculating the preload area. Layout calculation and model parsing move to dedicated child threads, and the main thread is largely unaware of Element creation and destruction.
  • From the CPU’s perspective: rounded corners, borders, gradients, and so on are digested asynchronously during layer composition; the returned image is exactly the size of the target control, and layer complexity is digested asynchronously by child threads.
  • From the child threads’ perspective: there are a lot more child threads.

Writing an asynchronous Feed stream is slightly harder than writing a regular one, and the average development time per requirement increases. Per-requirement development efficiency does drop, but compared with setting up a separate performance-optimization team later, this saves far more effort overall. Platform developers are often less familiar with the business than business developers are, and therefore need frequent communication to identify optimization points, scope of change, impact, and so on. Moreover, by the time an optimization point is found, the business may already be constrained and drastic changes may not be dared, so the before/after comparison ends up unimpressive. LNAsyncKit lets the business line consider optimization from the start of each requirement, saving time on dedicated optimization later. Of course, if overall App performance is not a concern, just choose the normal development mode.

Aside

iPhone hardware keeps getting stronger, and product managers can usually accept the smoothness of routine business without optimization. I used a similar approach for performance optimization inside regular feature work, and the development time was genuinely tight. Of course, if your company only measures feature output, it usually won’t give you that time; weigh your coding aspirations against your actual situation and decide whether to do the extra work.

LNAsyncKit can be used directly or as a stepping stone to a deeper understanding of performance optimization and Texture. All in all, I’d be honored if it could help in any way.