Original address: mp.weixin.qq.com/s?__biz=Mzg…

Total launch time = time1 (before main()) + time2 (after main()). time1 is the time the system spends loading the app: the system dylibs (dynamic link libraries) and the app's own executable. time2 is the time the app spends loading and rendering its first interface: mainly building the first screen and completing the rendering.

Today let's look at how much time is spent in time2. The post-main() launch process can be divided into four stages:

1. AppDelegate
2. LaunchScreenViewController
3. Home-page data loading
4. Home-page view construction and rendering
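As a rough way to measure time2 (a common technique, not from the original article; the global variable and the home controller are illustrative):

```objc
// main.m — record the moment main() is entered.
#import <UIKit/UIKit.h>
#import "AppDelegate.h"

CFAbsoluteTime gMainStartTime; // illustrative global

int main(int argc, char *argv[]) {
    gMainStartTime = CFAbsoluteTimeGetCurrent();
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil,
                                 NSStringFromClass([AppDelegate class]));
    }
}

// HomeViewController.m — log time2 once the first screen is visible.
extern CFAbsoluteTime gMainStartTime;

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    NSLog(@"time2 ≈ %.0f ms",
          (CFAbsoluteTimeGetCurrent() - gMainStartTime) * 1000.0);
}
```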

Accelerating page loading and display

Home-page rendering starts with building and drawing the home page and ends when the last image on the first screen has loaded. Image loading accounts for more than 50% of that time, so today we discuss image optimization: using WebP-format images accelerated page display by more than 200 ms.

What is WebP?

WebP (pronounced "weppy") is an image compression format developed by Google to reduce the size of image files. Google says images and photos account for almost 65 percent of all data transmitted over the Internet, a significant share, so reducing the size of each image shrinks the average page size and thus speeds up page loading.

WebP lossy compression process

Macroblocking

In its first stage, the encoder divides the image into macroblocks, each typically consisting of one 16×16 luma pixel block and two 8×8 chroma pixel blocks. The smaller the block, the more accurate the prediction, but the more information must be recorded. In general, detailed regions are divided more finely and predicted with 4×4 blocks, while regions with little detail use 16×16 blocks. (This stage plays the role that color-space conversion plays in JPEG encoding.)
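A macroblock can be pictured as a simple struct (an illustrative sketch, not libwebp's actual data layout):

```objc
#include <stdint.h>

// One macroblock of a YUV image: a 16x16 luma (Y) block plus two 8x8
// chroma blocks (U and V), matching the layout described above.
typedef struct {
    uint8_t luma[16][16];  // Y: brightness, full resolution
    uint8_t chromaU[8][8]; // U/Cb: color difference, subsampled
    uint8_t chromaV[8][8]; // V/Cr: color difference, subsampled
} Macroblock;
```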

Intra-frame prediction

Each 4×4 sub-block in a macroblock has a prediction mode (also known as filtering). PNG filters too, but it applies the same filter to every row, whereas WebP filters every block. A block's prediction is defined by two sets of surrounding pixels: the row above it, called A, and the column to its left, called L:

Using A and L, the encoder fills a 4×4 test block with them in various ways and determines which filling produces values closest to the original block. These differently filled blocks are called "prediction blocks".

- Horizontal prediction (H_PRED): each column of the block is filled with a copy of the left column, L.
- Vertical prediction (V_PRED): each row of the block is filled with a copy of the row above, A.
- DC prediction (DC_PRED): the block is filled with a single value, the average of the pixels in the row above (A) and the column to the left (L).
- TrueMotion prediction (TM_PRED): a more sophisticated mode that combines A, L, and the pixel above-left to extend gradients across the block.

The basic flow is to find each block's best prediction, output the filtering result (the residual), and move on to the next stage.

Which prediction mode to use for a block is determined dynamically: the encoder computes every possible prediction and selects the one with the smallest error, as in the sketch below.
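Here is a minimal sketch of the four 4×4 prediction modes and the error-based mode selection, written in plain C (the helper names are illustrative; this is not libwebp's implementation):

```objc
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// A: the 4 pixels in the row above the block; L: the 4 pixels in the
// column to its left; P: the pixel above-left. src is the original block.

static void predictH(uint8_t out[4][4], const uint8_t L[4]) {
    for (int r = 0; r < 4; r++)            // H_PRED: every column is a
        for (int c = 0; c < 4; c++)        // copy of the left column L
            out[r][c] = L[r];
}

static void predictV(uint8_t out[4][4], const uint8_t A[4]) {
    for (int r = 0; r < 4; r++)            // V_PRED: every row is a
        for (int c = 0; c < 4; c++)        // copy of the row above, A
            out[r][c] = A[c];
}

static void predictDC(uint8_t out[4][4],
                      const uint8_t A[4], const uint8_t L[4]) {
    int sum = 0;                           // DC_PRED: fill the block with
    for (int i = 0; i < 4; i++)            // the average of A and L
        sum += A[i] + L[i];
    memset(out, (sum + 4) / 8, 16);
}

static uint8_t clip255(int v) {
    return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
}

static void predictTM(uint8_t out[4][4], const uint8_t A[4],
                      const uint8_t L[4], uint8_t P) {
    for (int r = 0; r < 4; r++)            // TM_PRED: extend the gradients
        for (int c = 0; c < 4; c++)        // implied by A, L, and P
            out[r][c] = clip255(L[r] + A[c] - P);
}

// "Degree of error": sum of absolute differences against the original.
// The encoder evaluates every mode and keeps the one with the lowest SAD.
static int sad(const uint8_t src[4][4], const uint8_t pred[4][4]) {
    int total = 0;
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            total += abs(src[r][c] - pred[r][c]);
    return total;
}
```

The residual handed to the next stage is simply the original block minus the winning prediction block.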

DCT (Discrete Cosine Transform)

The predicted data is subtracted from the original pixels of the predicted region to obtain a difference (residual) matrix, and a DCT is applied to the residual. This produces a matrix of frequency coefficients whose amplitudes are highest in the upper left and lowest in the lower right; positions toward the lower right correspond to higher frequencies. Most of the image's information ends up in the upper-left, low-frequency region. The purpose of this step is to separate the image's low- and high-frequency components.
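As an illustration, here is a textbook 2D DCT-II applied to a 4×4 residual block (VP8 actually uses an integer approximation of the DCT, so this floating-point version is only a sketch):

```objc
#include <math.h>

// 2D DCT-II on a 4x4 residual. Energy concentrates in coeff[0][0]
// (the DC term, upper left); entries toward the lower right hold the
// higher-frequency detail and tend to have small amplitudes.
static void dct4x4(const double residual[4][4], double coeff[4][4]) {
    for (int u = 0; u < 4; u++) {
        for (int v = 0; v < 4; v++) {
            double sum = 0.0;
            for (int x = 0; x < 4; x++)
                for (int y = 0; y < 4; y++)
                    sum += residual[x][y]
                         * cos((2 * x + 1) * u * M_PI / 8.0)
                         * cos((2 * y + 1) * v * M_PI / 8.0);
            double cu = (u == 0) ? 0.5 : sqrt(0.5); // normalization, N = 4
            double cv = (v == 0) ? 0.5 : sqrt(0.5);
            coeff[u][v] = cu * cv * sum;
        }
    }
}
```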

Quantization

The human eye is insensitive to high frequencies, so this step discards them. The frequency-coefficient matrix from the previous step is divided, element by element, by a quantization table, and each result is rounded to an integer, producing the quantization matrix. Because the quantization steps for high-frequency positions are large, most of those coefficients round to zero.
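A sketch of that element-wise division and rounding (the function name is illustrative, and any concrete quantization table values would be encoder-specific):

```objc
#include <math.h>

// Divide each frequency coefficient by its quantization step and round.
// Steps grow toward the lower right, so the high-frequency coefficients
// the eye barely notices mostly round to zero.
static void quantize4x4(const double coeff[4][4], const int qtable[4][4],
                        int quantized[4][4]) {
    for (int u = 0; u < 4; u++)
        for (int v = 0; v < 4; v++)
            quantized[u][v] = (int)lround(coeff[u][v] / qtable[u][v]);
}
```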

Coding algorithms

WebP uses arithmetic entropy coding, which compresses better than the Huffman coding used in JPEG.
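For intuition (a standard information-theory observation, not from the original article): a symbol with probability 0.9 ideally costs −log₂(0.9) ≈ 0.15 bits, but a Huffman code must spend at least one whole bit per symbol; an arithmetic coder can approach the fractional ideal, which is where its compression advantage comes from.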

To sum up, WebP divides the image into blocks and then uses intra-frame prediction (a technique borrowed from video coding) to predict each macroblock from nearby, already-encoded blocks. Whereas JPEG encodes the image's original values, WebP encodes only the difference between the predicted and original values, which is the main reason WebP files are smaller. Finally, WebP uses the better-performing arithmetic coding.

iOS support for WebP

SDWebImage supports the WebP format and can convert UIImage -> WebP and WebP -> UIImage. With CocoaPods, just add pod 'SDWebImage/WebP' to your Podfile.

SDWebImage/WebP provides a UIImage+WebP category with a method for converting WebP NSData into a UIImage:

```objc
+ (UIImage *)sd_imageWithWebPData:(NSData *)data;
```

Use WebP format images in Native
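The original section is brief here; as a minimal sketch (the asset name, the imageView variable, and the URL are placeholders), using WebP images natively might look like this:

```objc
#import <SDWebImage/UIImage+WebP.h>
#import <SDWebImage/UIImageView+WebCache.h>

// Decode a WebP file bundled with the app (the file name is hypothetical).
NSString *path = [[NSBundle mainBundle] pathForResource:@"banner"
                                                 ofType:@"webp"];
NSData *webpData = [NSData dataWithContentsOfFile:path];
imageView.image = [UIImage sd_imageWithWebPData:webpData];

// Or let SDWebImage download, decode, and cache a remote WebP
// transparently (the URL is a placeholder).
[imageView sd_setImageWithURL:
    [NSURL URLWithString:@"https://example.com/photo.webp"]];
```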

Conclusion

In testing, converting JPG and PNG to WebP reduced file sizes roughly as follows:

According to the test results, WebP lossless compression reduced PNG size by 31%, roughly consistent with Google's claimed 26%. WebP's lossy compression is even more dramatic: with image quality reduced by 75%, file size drops by about 90%.

After the final results were checked and accepted by the PM and UE teams, a unified standard was worked out, and all network images are now served according to it. This reduces the overall volume of images transferred over the network and speeds up the loading of home-page images; in acceptance testing it saved roughly 200 ms or more.

Reference: developers.google.com/speed/webp

