Background
H5 instant-open optimization is a perennial topic, so the client and H5 teams joined forces. This article walks through how we raised the instant-open rate from 30% to 75% by optimizing the client and H5 together (1 + 1 > 2). Follow-up work on interface pre-request, client pre-rendering, and preloading 2.0 will improve the metrics further.
Why optimize?
Global web performance statistics for e-commerce:
- 47% of users expect a page to load within 2 seconds.
- 52% of online users say that page load speed affects their loyalty to a website.
- Every 1-second delay causes an 11% drop in page views and a 16% drop in user satisfaction.
- Nearly half of mobile users abandon a page that has not opened within 10 seconds.
Overall System Architecture Diagram:
Metric selection
First, a word about FMP, the metric we use to measure instant opens. Why not FCP or LCP? FCP fires as soon as anything at all renders (even a loading spinner), and LCP has compatibility problems. We want to measure speed from the user's perspective, so we define FMP as the time from the user tapping to open a WebView until the first-screen content is fully displayed.
With the metric settled, let's look at where the time goes in a complete FMP.
The rest of this article is split into two parts: client optimization and H5 optimization.
Client optimization
The first-screen opening speed of a page can be improved through HTML preloading, HTML pre-request, offline packages, interface pre-request, connection keep-alive, and pre-rendering. Of these, preloading, pre-request, and offline packages each raise the instant-open rate by roughly 10%.
HTML preload
The client is configured to download the main HTML document in advance, so that when the user opens the page the downloaded document can be used directly, eliminating the HTML network request and speeding up page opening.
How do we decide which pages to download?
As the saying goes, one generation plants the trees and the next enjoys the shade. The App has many resource slots, such as the banner, the "King Kong" icon grid, and the middle banner. What appears in these slots is already decided by the recommendation algorithm, so we can simply mark these resource slots for preloading.
Page cache management
Once a page is preloaded, can it be used forever? When should the page cache be updated?
- Preloaded pages are stored in memory and cleared when the App is closed
- A maximum cache lifetime can be set manually via an expiration time
- After a page is entered, an asynchronous thread is started to update the HTML document.
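The cache rules above can be sketched as follows. This is an illustrative sketch, not the actual client code: the class and field names are invented, and only the expiry bookkeeping is shown.

```javascript
// Sketch of the preload cache bookkeeping: in-memory entries with a
// configurable maximum lifetime. Expired entries force a re-download.
class PreloadCache {
  constructor(maxAgeMs) {
    this.maxAgeMs = maxAgeMs;
    this.entries = new Map(); // url -> { html, fetchedAt } (memory only)
  }
  put(url, html, now) {
    this.entries.set(url, { html, fetchedAt: now });
  }
  // Returns the cached HTML only while it is still fresh.
  get(url, now) {
    const e = this.entries.get(url);
    if (!e) return null;
    if (now - e.fetchedAt > this.maxAgeMs) {
      this.entries.delete(url); // expired: caller must download again
      return null;
    }
    return e.html;
  }
}
```

The asynchronous refresh described in the last bullet would simply call `put` again with a freshly downloaded document.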
Slapped in the face by reality:
During the grayscale rollout, reality taught me a hard lesson. Some SSR pages involve state changes, for example the coupon scenario, and that state is rendered by the SSR service. If the user enters the page before claiming a coupon, the cached HTML is refreshed at that moment, capturing the unclaimed state. After the user claims the coupon, closes the page, and opens it again, the page still shows the coupon as waiting to be claimed.
Improvement measures
- When the H5 page opens, components whose state may change request the interface for the latest state data.
- The client updates the HTML document when the WebView closes, instead of when the page is entered.
Online results
The problem is solved, but is the engineer's job over? If you think the feature is finished, pause and ask: what is our goal? The goal is to improve the instant-open rate; preloading is only a means to that end. Once the feature is live, we must track how it actually performs, otherwise we never know what it earned us. As the figure below shows, enabling preloading raises the instant-open rate by more than 10%.
Challenges encountered
- Preloaded pages are mostly served by the SSR service, and preloading silently generated a large volume of requests that the SSR service could not handle
- Even if the SSR service could cope, the extra load would propagate across the entire back-end service chain
Scaling out the SSR service
The natural reaction to server pressure is to add machines, so we doubled the number of SSR machines. When we then tried to ramp up the preloading user base, the service still could not withstand the QPS, and a second problem surfaced: the algorithm department's servers started firing alerts. The ramp-up plan hit a snag again.
Bringing in the CDN
Using the CDN's caching not only relieves the SSR servers but also reduces pressure on the back-end service chain, so why not use it? I'll hold the details for now and cover them in the H5 optimization section.
Client-side changes
- Preloading is fully enabled for CDN domain names, while non-CDN domain names keep the original rollout ratio
Preloading splash-ad pages
While analyzing traffic sources for the page, we found that splash-screen ads also account for a large share of traffic. Could we preload the HTML documents of splash-ad landing pages too?
A preloading policy for splash-ad pages
- Deduplicate the preload list: the splash-ad list may contain the same page several times with different background images and effective times
- Add effective-time configuration for pages scheduled to appear in the splash-ad list at some future time
- Add black/white-list control: the splash-ad list may contain third-party partner pages whose owners do not want preloading to distort their PV statistics
Preloading: looking ahead
Since HTML can be downloaded in advance, can we go a step further and preload the resources inside the page, so that most network requests are eliminated at open time and content reaches the user even faster? We also need to consider how this cooperates with the offline packages described below.
HTML pre-request
While the WebView is initializing, the client requests the main HTML document in parallel, so the WebView does not have to wait for the download before rendering once initialization completes; this cuts the user's waiting time. After a successful request, the WebView loads the local HTML and saves it for next use. Enabling HTML pre-request raises the instant-open rate by about 8%.
Pre-request vs. preload
In essence, HTML preloading and HTML pre-request differ only in when the HTML document is downloaded. With preloading, the document is downloaded after App startup without any user action; with pre-request, it is downloaded only when the user taps to open the H5 page. If the user opens the same H5 page a second time and the locally downloaded HTML has not expired, it is used directly, at which point the behavior matches preloading.
Challenges
After launch, pre-request improved the instant-open rate by only about 2%. Analysis revealed two problems:
- The cache lifetime was too short: pages were configured to expire after only 10 minutes, so users had to re-download after that. Could the cache time be extended?
- H5 pages could not update themselves, so a longer cache time could not be supported, the same problem as with preloaded HTML.
Digging deeper
We profiled the entire instant-open chain locally on a low-end device. Why a low-end device? It has one advantage: a built-in "slow motion" effect that magnifies problems so they are easier to find.
Android: loading the H5 page in parallel with native layout inflation
Profiling shows that the time before the H5 page starts loading is spent in activityStart(), which includes onCreateView; the longest part is layout inflation. The WebView object itself is created ahead of time and taken from an object pool, so its cost lies mainly in initialization: the WebView's own WebViewChromiumFactoryProvider.startYourEngines (87 µs, under 1 ms) plus other WebView initialization, Jockey initialization, and so on.
The instant-open measurement includes the time from View initialization to the WebView loading the URL, which points to an optimization: move WebView.loadUrl earlier so the H5 page load and native layout inflation run in parallel. In onCreateView, create a FrameLayout and return it immediately; after WebView.loadUrl is called, the main thread inflates the layout and, once inflated, addViews it into the FrameLayout. This cuts the time loadUrl is blocked: about 15 ms on mid- and high-end devices and 30–50 ms on low-end devices.
Moving the HTML download forward to the routing stage (both platforms)
The pre-request used to fire when the native page was entered, about 100 ms after the user's tap. Could the download start earlier? After some exploration we settled on intercepting at the routing stage, which gives a single unified hook and makes the gap after the user's tap negligible. This moved the HTML download forward by 80 ms+ on average.
The flow now looks like this.
Some readers may ask: why not download the moment the user taps? Surely some time passes between the tap and the route.
- At the code level it is hard to maintain: intercepting at tap time means invading business code at thousands of entry points
- Offline performance testing showed the tap-to-route time is negligible
Final online results
With the problems above solved and the cache time raised to 1 day, HTML pre-request improves the instant-open rate by about 8%, close to the effect of preloading.
Offline packages
The CSS, JS, and other resources an H5 page needs are bundled into a compressed package in advance. After the App starts, the client downloads and unpacks the CSS and JS resources; when an H5 page is accessed, the client checks for matching local offline resources to speed up the page.
Android implementation
WebView provides shouldInterceptRequest. In this method the client checks whether a request should be intercepted; if so, it returns a WebResourceResponse built from the offline resource instead of null.
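The core of that interception is a lookup from request URL to a file in the unpacked offline package. Here is a minimal sketch of the decision logic, written in JavaScript for illustration; the real code lives inside the Android client's shouldInterceptRequest, and the manifest shape is an assumption.

```javascript
// Illustrative sketch of the interception decision: map a request URL
// to a file in the unpacked offline package, or fall through (null) so
// the WebView performs a normal network load.
function resolveOfflineResource(url, manifest) {
  // manifest (assumed shape): { "https://cdn.example.com/app.js": "pkg/app.js", ... }
  const path = url.split(/[?#]/)[0]; // ignore query string and fragment
  return manifest[path] || null;     // null => let the WebView fetch it
}
```

In shouldInterceptRequest, a non-null result would be wrapped in a WebResourceResponse with the matching MIME type.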
iOS implementation
On iOS, however, we ran into difficulties and investigated the following schemes:
Scheme 1: NSURLProtocol interception
This approach uses WKBrowsingContextController and registerSchemeForCustomProtocol, obtaining the private class selector via reflection and registering HTTP and HTTPS so those requests are handed to NSURLProtocol. Interception works this way, but the body of POST requests goes missing. Worse, NSURLProtocol is global once registered: we wanted to intercept only pages with offline packages, but there is no way to scope it, so it intercepts every page's requests, including third-party partner pages, which was clearly unacceptable.
Scheme 2: intercept requests with a custom protocol
iOS 11 and above provide an API for loading custom resources: WKURLSchemeHandler.
The page URL is changed to a custom scheme, e.g. fast.dewu.com becomes duapp://fast.dewu.com, and the client registers that scheme. The front end rewrites all protocol-relative resource URLs in the page (e.g. src="//fast.dewu.com/xxx") so they are intercepted too. During testing, however, the API gateway only allows whitelisted domains to make cross-domain requests for security reasons, and multiple domains cannot be configured, so this scheme could not proceed.
Scheme 3: hook handlesURLScheme
This still uses a WKURLSchemeHandler proxy, but hooks WKWebView's handlesURLScheme method so that HTTP and HTTPS requests are supported. Requests can be intercepted this way, but two problems surfaced:
Missing request body
This has been fixed since iOS 11.3, and only Blob data is still lost. JS must proxy the behavior of fetch and XMLHttpRequest: when a request is initiated, the body is passed to native via JSBridge and the client performs the request, calling back into JS when it completes.
Cookies lost or unavailable
document.cookie reads and writes are proxied and managed by the client. Extra care is needed here: cross-domain validation must be done properly to prevent malicious pages from modifying cookies.
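The front-end half of the fetch proxying described above can be sketched as follows. This is illustrative only: `bridge` and its `request` method are hypothetical names standing in for the JSBridge channel, not the production API.

```javascript
// Sketch of the JS side of scheme 3: wrap fetch() so the request body
// reaches native via a JSBridge instead of being dropped by WKWebView.
// `bridge` is a hypothetical interface: its request() method hands the
// call to the client, which performs the HTTP request and calls back.
function makeProxiedFetch(bridge) {
  return function proxiedFetch(url, options = {}) {
    return bridge.request({
      url,
      method: options.method || 'GET',
      body: options.body || null, // preserved instead of lost
    });
  };
}
```

The real implementation would also proxy XMLHttpRequest and marshal headers and Blob data, as the text notes.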
Challenges
With the feature developed and launched, here are the online numbers first. Enabling offline packages on Android yields about a 10% gain, but on iOS the instant-open rate initially went down. After the fix below, iOS also improves by more than 10%.
Differences between the Android and iOS implementations
Analysis showed that Android's interception is lightweight: it can decide per request whether to intercept, and anything that does not need interception is handed back to the WebView to request by itself.
On iOS, however, once interception is enabled for a page, every HTTP and HTTPS request on that page is intercepted and both issued and answered by the client; requests cannot be handed back to the WebView.
Fixing the iOS cache problem
Resources that the client now requests on the page's behalf no longer benefit from the WebView's own cache on a second open, so the client had to implement its own caching mechanism:
- Decide which resources are cacheable, and for how long, according to the HTTP caching headers
- Add a custom control policy so that only certain resource types are cached
Offline package download error rate governance
As the figure below shows, the offline package download error rate hovered around 6%, which is clearly unacceptable. A series of optimizations brought it down from 6% to 0.3%.
Let’s take a look at the flow chart and problem points before optimization
Instrumentation revealed large numbers of unknown-host errors, failed requests, and network disconnections. Code analysis showed there was no queue control: multiple offline packages downloaded simultaneously and competed for resources. The following optimizations address these findings:
- Add a retry mechanism for failed downloads, with a dynamically configurable retry count, to mitigate request failures and disconnections.
- Add download task queue management with a dynamically configurable concurrency limit, easing resource contention between download tasks.
- Under weak or no network, defer downloads until the network recovers.
- Support HTTPDNS for offline package downloads, to handle domain names that fail to resolve.
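The retry measure in the first bullet can be sketched as a small wrapper; the names are illustrative, and in production the retry count comes from dynamic configuration while a concurrency-limited queue (second bullet) schedules the tasks.

```javascript
// Sketch of retry-on-failure: run the attempt function up to
// 1 + maxRetries times, rethrowing the last error if all fail.
// In production, maxRetries is delivered by dynamic configuration.
function downloadWithRetry(attempt, maxRetries) {
  let lastError;
  for (let i = 0; i <= maxRetries; i++) {
    try {
      return attempt(i); // i = retry index, useful for backoff
    } catch (e) {
      lastError = e;     // remember and try again
    }
  }
  throw lastError;
}
```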
Here is the optimized flow chart:
Outlook:
Offline resources live directly on disk, so every access pays disk I/O time; testing on low-end devices shows this fluctuating between 0 and 10 ms. The next step is to make sensible use of memory: cap memory usage, file count, and even file types, and evict and refresh cached files with an LRU policy.
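That planned in-memory layer is essentially an LRU cache. A minimal sketch (JavaScript's Map preserves insertion order, so the first key is always the least recently used):

```javascript
// Minimal LRU cache: get() re-inserts the key to mark it most recently
// used; set() evicts the first (least recently used) key when over capacity.
class LruCache {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // evict the least recently used entry (first inserted key)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

The real cache would additionally enforce the byte and file-type limits mentioned above.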
Interface prerequest
Having the client fire the H5 first-screen API request is far earlier than waiting for the page to initialize and for the HTML and JS to download, saving first-screen wait time. Local testing showed interface pre-requests landing 100+ ms earlier, so users see content sooner.
Feature overview
After the App starts, the client fetches and saves the configuration listing pages that support pre-request and their corresponding API information. When the user opens the WebView, the client fires the configured pre-request in parallel and saves the result. When JS starts fetching first-screen data, it first asks the client whether the response has arrived; if so, no request is needed, otherwise JS issues its own request and the two race. Here is the overall flow chart:
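The racing behaviour can be sketched as below. `bridge.getPrefetchedResponse` and `bridge.waitForPrefetch` are hypothetical names for the client bridge; the logic is exactly what the text describes: use the client's pre-requested data if it has already arrived, otherwise race the in-flight pre-request against a fresh JS request.

```javascript
// Sketch of "racing mode" for first-screen data.
// bridge.getPrefetchedResponse(): saved response, or null if not arrived.
// bridge.waitForPrefetch(): promise for the client's in-flight pre-request.
async function getFirstScreenData(bridge, fetchFromNetwork) {
  const cached = bridge.getPrefetchedResponse();
  if (cached !== null) return cached; // already here: no request needed
  // Not arrived yet: whichever source responds first wins.
  return Promise.race([bridge.waitForPrefetch(), fetchFromNetwork()]);
}
```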
The configuration platform
So how does the client know which API a page needs to request, and with what parameters? The configuration platform supports the following:
- Configuring, for each page URL to preload, the corresponding API URL and parameters to request
- An audit step, to prevent incorrect configurations from being published
QA
Why do we still need interface pre-request when pages are server-side rendered?
Even with SSR, some first-screen components may still ship as skeletons and only request data once page rendering starts. In addition, some pages are SPAs. Interface pre-request is a good complement to both cases.
Pre-built connections & connection keep-alive
After this function is enabled, the 90th-percentile DNS time drops from 80 ms to 0, the 90th-percentile TCP connect time from 65 ms to 0, the average DNS time from 55 ms to 4.3 ms, and the average TCP connect time from 30 ms to 2.5 ms.
Network request time analysis
As the figure above shows, a network request can only be sent after DNS resolution, the TCP handshake, and the SSL handshake. Can this time be saved?
Common client network frameworks such as OkHttp fully support HTTP/1.1 and HTTP/2, both of which support connection reuse. Exploiting this mechanism, we can establish connections to key domains ahead of time, for example while the splash screen is showing, so that network requests complete faster once the user reaches the corresponding page, giving a better experience. Under poor network conditions, this pre-connection should in theory help even more.
Implementation scheme
Connections are established by issuing a HEAD request to each domain ahead of time; the network framework pools the connections automatically. By default an idle connection is released after 5 minutes, so repeating the HEAD request within 5 minutes keeps the connection alive.
Connection pool size also needs attention: if the pool is small but the number of domains is large, pre-built connections will be evicted quickly. This calls for domain convergence or a larger connection pool.
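The keep-alive rule above (re-issue a HEAD request before the roughly 5-minute idle release) reduces to a small scheduling predicate, sketched here with an illustrative safety margin; the 5-minute TTL follows the text, the margin value is an assumption.

```javascript
// The pool releases an idle connection after ~5 minutes, so a HEAD
// request must be re-issued inside that window to keep it alive.
const IDLE_RELEASE_MS = 5 * 60 * 1000;

// Returns true when it is time to send another keep-alive HEAD request,
// firing slightly (marginMs) before the idle timeout would expire.
function needsKeepAlive(lastRequestAt, now, marginMs = 30 * 1000) {
  return now - lastRequestAt >= IDLE_RELEASE_MS - marginMs;
}
```

A timer in the client would check this per domain and issue `HEAD` requests for the domains that need refreshing.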
Online results
Does pre-connection add load to the server? Certainly. So we first applied a grayscale strategy to the pre-connection feature itself, and since the HTML pages are hosted on a CDN, we enabled it fully for CDN domains without worrying about the CDN bearing the load.
The following figure shows the online effect. After the function is enabled, the 90th-percentile DNS time drops from 80 ms to 0, the 90th-percentile TCP connect time from 65 ms to 0, the average DNS time from 55 ms to 4.3 ms, and the average TCP connect time from 30 ms to 2.5 ms.
Pre-rendering
The client renders the page in a WebView ahead of time; when the user visits, it can be displayed directly, achieving an instant-open effect. However, this cannot be enabled for every page, and it has certain drawbacks:
- It consumes extra client resources, must run when the main thread is idle, and the number of pre-rendered pages must be capped.
- Pages with entry effects such as red-packet rain are unsuitable for pre-rendering and must be excluded.
The [Back-to-School Season] page in the figure below is an H5 page that is pre-rendered in production. When it is opened, the page is already fully rendered, with no waiting at all.
Later we plan to extend this capability to the general WebView and enable it for high-traffic, high-value pages.
H5 optimization
SSR server rendering
Normally an H5 page is rendered entirely on the client: it requests the page data via AJAX and fills it into templates to form the complete page presented to the user. Server-side rendering moves the data request and template filling to the server and returns the fully rendered page to the client.
SSR raises the instant-open rate by about 15% on average. But being server rendering, it puts pressure on the server, especially once the HTML preloading feature is enabled. How do we handle that?
Preliminary optimization content:
- Interface caching: the Node service injects a redis instance into ctx, and the business side caches interface responses inside the server-rendering logic; this covers config delivery and A/B interfaces.
- Static page caching: for pages that call no interfaces and show identical content to every user, renderToHtml generates static HTML resources that are written to the cache.
- Logged-out page caching: in most cases such pages show the same content and the server requests the same data, so the server decides whether to cache the page based on the user's login state.
- Personalized interface content switched from SSR to CSR, with a skeleton shown: the personalized content comes from the algorithm interface, which itself responds slowly, so this change cuts the server's response time and shows content to users sooner.
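The interface-caching idea in the first bullet can be sketched as a TTL wrapper around a slow fetch. The injectable `store` stands in for the redis instance on ctx so the sketch stays self-contained; all names here are illustrative, not the production code.

```javascript
// Sketch of interface caching: serve from the store while the entry is
// fresh, otherwise fetch and refill. `store` mimics a redis-like
// key/value store; `clock` is injectable for testing.
function makeCachedFetch(store, ttlMs, fetchFn, clock = Date.now) {
  return function cachedFetch(key) {
    const hit = store.get(key);
    if (hit && clock() - hit.at < ttlMs) return hit.value; // fresh hit
    const value = fetchFn(key);        // miss or stale: fetch upstream
    store.set(key, { value, at: clock() });
    return value;
  };
}
```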
Bringing in the CDN
Even with all these optimizations the demands of preloading could not be met; analysis showed the network stage was the bottleneck, so we finally brought in the CDN. There were several reasons the pages were not on a CDN to begin with:
- The shopping page is personalized: every user sees different content
This is addressed by switching the originally SSR-rendered personalized content to CSR, as in optimization 4 above. Now that the page is on the CDN, the follow-up plan is to move that content back to SSR, so users see products rather than skeletons sooner, and then refresh it via CSR.
- The page status changes and the cache cannot be updated in time
This was solved in the client preloading section above: after the page opens, components that need refreshing re-request the interface to guarantee data accuracy. The workload is substantial, though: 30+ components were identified as needing state refresh, and every component developed afterwards must consider state updates.
- HTML template content changes cannot be updated in a timely manner
Template content changes for two reasons. First, in the page-builder scenario, operations staff can modify template content dynamically, changing the page structure (low frequency); second, template content needs updating after each project release (high frequency).
This is solved by automatically calling the CDN provider's cache-refresh API whenever a content change is detected.
Pre-rendered HTML
SPA pages are rendered with Puppeteer and the resulting HTML documents saved; combined with the page-refresh strategy above and hosted on the CDN, this makes an SPA page open as smoothly as an SSR one.
Concretely, we use the Webpack-based plugin prerender-spa-plugin and configure the routes that need pre-rendering, so the build emits a page per route. The scheme itself is generic, but every on-boarded page needs a manual check.
An unglamorous CSS bundle-size optimization
As is well known, CSS loading blocks HTML rendering. This optimization eventually cut the shared first-screen CSS from 118 KB to 38 KB. The figure below shows the load waterfall of an SSR page under a Chrome-simulated weak network: styles.fb201fce.chunk.css takes 18 s to download and blocks rendering, while the main HTML document finishes downloading at 2.38 s, yet actual rendering happens after the 20 s mark.
The idea is simple: inline the CSS required by the first screen into the HTML returned by the SSR service, and split the remaining CSS files to load on demand.
Initially I tried MiniCssExtractPlugin, which splits CSS into separate files, generating one CSS file per JS chunk, but it requires Webpack 5, while the project's Next.js version was 9.5. I considered upgrading to the latest Next 12, but after the upgrade other packages threw various build errors, some not yet supporting Next 12. After a day of attempted fixes it remained unsolved, and it was unclear whether the upgrade would introduce other stability problems.
Reading the Next.js source showed that all shared CSS is grouped by splitChunks during packaging. Since the project's components are imported dynamically, I modified the webpack parameters directly in next.config.js: deleting the splitChunks.cacheGroups.styles section and falling back to the default chunks: 'async' configuration achieves on-demand loading.
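Under those assumptions, the change amounts to something like the following next.config.js fragment. This is a sketch of the idea described above, not a drop-in config: the exact shape of `config.optimization.splitChunks` depends on the Next.js version.

```javascript
// next.config.js — drop the forced "styles" cache group so shared CSS is
// no longer bundled into one render-blocking file; dynamically imported
// components then fall back to webpack's default on-demand (chunks:
// 'async') behaviour.
module.exports = {
  webpack(config, { isServer }) {
    const splitChunks = config.optimization && config.optimization.splitChunks;
    if (!isServer && splitChunks && splitChunks.cacheGroups) {
      delete splitChunks.cacheGroups.styles; // was: all CSS in one chunk
    }
    return config;
  },
};
```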
Image optimization
Avoid empty image src
Even if the src attribute is an empty string, the browser may still issue an HTTP request to the server, which matters all the more when the SSR server is already under pressure, so take care here.
Image compression and format selection
WebP's advantage lies in its better image compression algorithm, giving smaller files with quality indistinguishable to the naked eye. It offers lossless and lossy modes, alpha transparency, and animation, and converts from JPEG and PNG with excellent, stable, uniform results.
Select the appropriate resolution by passing parameters to the image server
Other optimization details
Packaging optimization
- The page component is split, and the resources required by the first screen content are loaded first
- Webpack SplitChunks effectively split common dependencies to improve cache utilization
- Components are loaded on demand
- Tree Shaking reduces code size
Non-critical JS and CSS lazy loading
- defer, async, and dynamic JS loading
- Lazily loading JS files on iOS devices
Optimized media resource loading
- Lazy loading of pictures and videos
- Resource compression, by passing parameters to the image server to select the appropriate resolution
Other resource optimization
- Analytics events are reported with a delay and do not block the onLoad event
- Custom font optimization, using Fontmin to generate compact font packages
Page rendering optimization
- Page rendering time optimization
- SSR page first-screen CSS inlined (Critical CSS)
- Use Layers properly
- Layout jitter optimization: set the width and height in advance
- Reduce reflow and repaint operations
Code level optimization
- Time-consuming task segmentation
- Reduce main thread time using Web workers
- Run code logic in rAF callbacks when the main thread is idle
- Avoid deep nesting of CSS
Monitoring
To help developers measure and improve front-end page performance, the W3C performance working group introduced the Performance API, including the Navigation Timing API, which enables automatic, precise performance tracking. Our front-end performance monitoring metrics are likewise derived from Performance API data and reported for statistical analysis.
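As a sketch of what the SDK can derive from the Navigation Timing API: given a PerformanceNavigationTiming-style entry (in the browser, `performance.getEntriesByType('navigation')[0]`), the phase durations fall out by subtraction. The field names follow the W3C specification; the particular selection of phases here is illustrative.

```javascript
// Derive report fields from a Navigation Timing entry. `nav` is a
// PerformanceNavigationTiming-like object (all values in milliseconds,
// relative to the same time origin).
function extractTimings(nav) {
  return {
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    tcp: nav.connectEnd - nav.connectStart,
    ttfb: nav.responseStart - nav.requestStart,
    htmlDownload: nav.responseEnd - nav.responseStart,
    domContentLoaded: nav.domContentLoadedEventEnd - nav.startTime,
    load: nav.loadEventEnd - nav.startTime,
  };
}
```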
System architecture
After the SDK collects data, it is reported to the Aliyun SLS log platform, then consumed and cleaned in real time by Flink and stored in ClickHouse. The platform back end reads ClickHouse and performs various aggregations before the data is used.
Metrics dashboard
Before optimizing, establish monitoring metrics first, what the industry calls a "handle". Without metrics, however much you optimize, you cannot know the effect, let alone what to do next or what remains unresolved. So metrics come first, and of course they must be accurate.
The metrics dashboard mainly provides the following:
- Quickly view the share of each version, device manufacturer, device name, OS version, and network type over a chosen period, and filter by these fields.
- The middle area shows overall and per-active-page client time and H5 instant-open time.
- The bottom area shows the instant-open time for each business domain.
- Both the average time and the 90th-percentile time are shown. The weakness of the average is that it hides the tail, and I suspect you have all been "averaged" at some point. The 90th percentile means 90% of accesses are faster than that value; likewise, 50% of accesses are faster than the 50th percentile. Percentiles are obtained by sorting all access-time samples in ascending order.
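For reference, a percentile over access-time samples can be computed like this (a simple nearest-rank sketch):

```javascript
// Nearest-rank percentile: sort samples ascending and take the value at
// rank ceil(p/100 * n). percentile(samples, 90) is the 90th percentile.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(sorted.length - 1, Math.max(0, rank - 1))];
}
```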
White-screen monitoring
Normally, after the optimizations above, users can open H5 pages almost instantly. But there are always exceptions: users' network and system environments vary enormously, and even the WebView itself can crash internally. When something goes wrong the user may see a blank screen, so the next step is detecting white screens and responding to them.
The most intuitive way to detect a white screen is to take a screenshot of the WebView and traverse the pixel colours: if the share of non-solid-colour pixels exceeds a threshold, the screen is not blank. First obtain a Bitmap containing the WebView, scale the screenshot down to a fixed size such as 100 x auto, then traverse the pixels of the reduced image; when more than 5% of pixels are non-solid-colour, the screen is considered non-blank. We also analyze the screenshots with image recognition, which distinguishes blank, loading, and special pages well.
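The pixel-ratio rule can be sketched as a pure function over the downscaled screenshot's pixels. This is illustrative: the production code walks an Android Bitmap and layers image recognition on top, and the representation of `pixels` here is an assumption.

```javascript
// Count pixels that differ from the dominant background colour; if at
// most `threshold` (default 5%) differ, treat the screen as blank.
// `pixels` is a flat array of colour values from the downscaled screenshot.
function isWhiteScreen(pixels, backgroundColor, threshold = 0.05) {
  if (pixels.length === 0) return true;
  let nonBackground = 0;
  for (const p of pixels) {
    if (p !== backgroundColor) nonBackground++;
  }
  return nonBackground / pixels.length <= threshold;
}
```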
The white-screen rate is an important indicator. We send alerts for sharp increases in the overall white-screen rate and for newly white-screening pages, so that developers can intervene and start troubleshooting promptly.
Performance problem discovery
Potential page problems are found mainly through CDN coverage monitoring, HTTP request monitoring, network monitoring (load failures, abnormal timings, abnormal transfer sizes), image monitoring (uncompressed images, abnormal resolutions), and other checks. A problem-analysis page is also provided: enter a URL and it aggregates the issues found for that page and offers suggestions.
CDN coverage monitoring
The importance of a CDN is self-evident: it accelerates resource access and thus improves the user experience. By analyzing online tracking data, we produce a list of resources not served from the CDN, and push each business team to fix them.
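Finding uncovered resources can be as simple as comparing each resource's host against the known CDN domains; a minimal sketch (the host list and function name are hypothetical, real lists come from your CDN configuration):

```python
from urllib.parse import urlparse

# Hypothetical CDN domains; in practice this comes from your CDN provider.
CDN_HOSTS = {"cdn.example.com", "static.example.com"}

def uncovered_resources(urls):
    """Return resource URLs whose host is not a known CDN domain."""
    return [u for u in urls if urlparse(u).hostname not in CDN_HOSTS]
```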
HTTP request monitoring
Why monitor HTTP requests? Let's look at what HTTPS adds over HTTP:
- Content encryption: hybrid encryption is used, so intermediaries cannot read the plaintext content
- Identity verification: certificates let the client confirm it is talking to the genuine server
- Data integrity: prevents transmitted content from being forged or tampered with by a middleman
For the security of our services, the remaining plain-HTTP traffic needs to be uniformly upgraded to HTTPS, and monitoring is what finds it.
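One simple way to surface the remaining plain-HTTP traffic is to count insecure requests per host from the reported request logs, so upgrades can be prioritized; a sketch (function name and log format are illustrative):

```python
from collections import Counter
from urllib.parse import urlparse

def insecure_hosts(request_urls):
    """Count plain-HTTP requests per host, most frequent first,
    so HTTPS upgrades can be prioritized by impact."""
    counts = Counter(
        urlparse(u).hostname
        for u in request_urls
        if urlparse(u).scheme == "http"
    )
    return counts.most_common()
```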
Network monitoring
When a page's second-open rate is low, the reason needs analysis: is the page's interface responding slowly, or does the page itself request unusually large resources? And if a network request fails, we should be the first to know, rather than passively waiting for user feedback.
Image monitoring
The following checks are available: uncompressed images, abnormal image resolutions, image transfer sizes larger than 300 KB, and GIF transfer sizes larger than 1 MB.
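The size checks reduce to comparing each image's transfer size against a per-type threshold; a minimal sketch using the 300 KB / 1 MB limits from the text (the record format and function name are illustrative):

```python
# Hypothetical record format: (url, content_type, transfer_bytes).
IMAGE_LIMIT = 300 * 1024       # 300 KB for ordinary images
GIF_LIMIT = 1024 * 1024        # 1 MB for GIFs

def oversized_images(records):
    """Flag images whose transfer size exceeds the monitoring thresholds."""
    flagged = []
    for url, content_type, size in records:
        limit = GIF_LIMIT if content_type == "image/gif" else IMAGE_LIMIT
        if size > limit:
            flagged.append(url)
    return flagged
```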
Page problem analysis
The functions listed above may still leave business developers puzzled: what problems does my specific page have? They cannot be expected to check every function one by one to find which anomalies belong to their page. This feature therefore leverages the existing monitoring functions and aggregates their results by a single page path.
Exception monitoring
H5 exceptions have always been monitored with Sentry, but Sentry lacks correlation with PV and DAU data, so the severity of an exception cannot be measured once it occurs. Without business-domain association, exceptions cannot be classified by business domain. User behavior logs are not connected with the Native side, so problem analysis often hits the bottleneck of incomplete context. Another problem is that Sentry rate-limits ingestion and discards some exception data when QPS is high.
Since Sentry already gives us certain troubleshooting and analysis capabilities, we are not going to duplicate them; instead we built the parts Sentry does not support. We designed the following functions for the problems above:
- Exception metrics
  - Added trends for exception rate, page exception rate, and affected-user rate
  - Added distribution ratios and business-domain breakdowns across problem dimensions (system version, app version, H5 release version, and network)
- Improved exception aggregation
  - The exception list supports ranking by newly added, top PV, exception count, and affected-user count
  - Third-party SDK exceptions are distinguished from interface exceptions
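The headline indicators above can be computed from aggregated event data; here is a minimal sketch (the record format and function name are illustrative, not the production schema):

```python
def exception_metrics(total_pv, total_users, exception_events):
    """Compute headline exception indicators from aggregated counts.

    `exception_events` is a list of (page_path, user_id) tuples,
    one per reported exception.
    """
    pages_with_errors = {page for page, _ in exception_events}
    affected_users = {user for _, user in exception_events}
    return {
        "exception_rate": len(exception_events) / total_pv,
        "affected_user_rate": len(affected_users) / total_users,
        "pages_with_errors": len(pages_with_errors),
    }
```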
Future
Although the second-open rate has now reached more than 75%, we also track another important indicator, the 90th-percentile time, and are committed to improving the H5 experience for tail users. After the 90th-percentile optimization is complete, we may consider further optimizing the 95th-percentile time.
Conclusion
Finally, I would like to thank everyone who contributed to making H5 pages open in a second. Thanks to the H5 team; the optimization methods and ideas just keep coming.
So far, we have systematically covered the background and the whole process from establishing the metric to shipping second-open optimization. The article is divided into three parts: client, H5, and monitoring. If you gained something from reading it, please give it a thumbs-up! If you have questions or thoughts, feel free to leave a comment.
Finally, here is the mind map of the overall optimization:
Article/XU MING
Focus on technology, and build the trendiest technology!