GrapeCity provides professional development tools, solutions, and services, empowering developers.

Background

Anyone who has done front-end development knows that the work covers a lot of ground: HTML, CSS, JavaScript, images, Flash, and more. To improve application performance, we need to optimize resource content in all of these areas.

For users, optimization makes the application respond and load faster, bringing a better user experience. For service providers, front-end optimization reduces the number of page requests and the bandwidth consumed, effectively saving resources.

Front-end optimization covers a lot of ground and can be roughly divided by granularity into two categories: page-level optimization and code-level optimization. Page-level optimization mainly focuses on page loading, including the number of HTTP requests, non-blocking loading of scripts, and placement of inline scripts. Code-level optimization includes JavaScript DOM-operation optimization, CSS selector optimization, image optimization, HTML structure optimization, and more.

Code-level optimization focuses more on data requests, notably on reducing the number of HTTP requests. A complete HTTP request involves DNS resolution, a TCP handshake, sending the request, the server's response, and the browser's reception of that response. For small files, the actual download time is a small fraction of the total request time, so small files should be merged into larger files for transfer.
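As a back-of-the-envelope illustration of why merging helps (the overhead and bandwidth numbers below are made-up assumptions, not measurements):

```javascript
// Assumed, illustrative numbers: ~100 ms fixed overhead per request
// (DNS + TCP handshake + server turnaround) and a 1 KB/ms download rate.
function totalTimeMs(fileSizesKb, overheadMsPerRequest = 100, kbPerMs = 1) {
  return fileSizesKb.reduce(
    (sum, sizeKb) => sum + overheadMsPerRequest + sizeKb / kbPerMs,
    0
  );
}

const tenSmallFiles = Array(10).fill(20);                  // ten 20 KB files
const merged = [tenSmallFiles.reduce((a, b) => a + b, 0)]; // one 200 KB file

// Ten requests pay the fixed overhead ten times; the merged file pays it once.
const separate = totalTimeMs(tenSmallFiles); // 10 * (100 + 20) = 1200 ms
const single = totalTimeMs(merged);          // 100 + 200 = 300 ms
```

The same number of bytes travels either way; only the per-request overhead changes, which is exactly why small files benefit most from merging.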

(Photo from the Internet)

Page level: Improve page loading speed

Load optimization tackles pages whose content loads too slowly because of limited network bandwidth. The main techniques are project packaging optimization (bundling with Webpack), CSS sprites, CDN acceleration, and Gzip compression; each is covered in its own section below.

Project packaging optimization

(Photo from the Internet)

Webpack is a front-end resource loading/packaging tool. It performs static analysis based on module dependencies, then generates the corresponding static resources for those modules according to the specified rules. We usually use Webpack to bundle multiple static resources (JS, CSS, Less) into a single static file, reducing page requests. Its core concepts are:

  • Entry: the module from which Webpack starts building its dependency graph.
  • Output: tells Webpack where to output the bundles it creates and how to name those files; the default is ./dist.
  • Module: Webpack recurses from the configured Entry to find all dependent modules.
  • Chunk: a chunk consists of multiple modules and is used for code merging and splitting.
  • Loader: loaders convert all types of files into valid modules that Webpack can handle, so that Webpack's packaging capabilities can process them.
  • Plugins: whereas loaders transform certain types of modules, plugins can perform a broader range of tasks.

CSS Sprite

(Photo from the Internet)

CSS Sprite, also known as a sprite sheet, is a CSS image-merging technique: small icons and background images are combined into one picture, and CSS background positioning is then used to show only the part of the picture that needs to be displayed.

The basic idea behind the sprite-sheet implementation is to integrate the images the page uses into one image, thus reducing the number of HTTP requests to the site. This image is rendered using the CSS background and background-position properties, which means our markup becomes slightly more complex: the image is defined in CSS rather than in an `<img>` tag.

There are two obvious advantages to using a Sprite chart:

  1. Reduce the number of image requests to the server

The sprite sheet combines most background images and small icons, making them easy to use anywhere on the page. Requests from different places all reference the same image, greatly reducing the number of requests to the server, easing server load, improving page loading speed, and saving traffic.

  2. Improve page loading speed

The merged sprite image is noticeably smaller than the total size of the individual images before merging. From these two aspects, front-end request speed is clearly improved. Since HTTP/2, however, reducing the number of requests matters much less, so sprite sheets are now less useful for front-end performance; font icons, with their small file size and vector scalability, are generally recommended instead.
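The background-positioning arithmetic can be sketched in a few lines; the icon size and single-row layout here are assumptions for illustration:

```javascript
// Hypothetical sprite sheet: icons laid out in a single row, each 32px square.
// Returns the CSS values needed to show the nth icon through a fixed-size box.
function spritePosition(index, iconWidth = 32, iconHeight = 32) {
  return {
    width: iconWidth,
    height: iconHeight,
    // Shift the sheet left so the nth icon lines up with the visible box.
    backgroundPosition: `${-index * iconWidth}px 0`,
  };
}
```

For example, the third icon (index 2) needs `background-position: -64px 0`, so only that 32px slice of the shared image shows.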

CDN acceleration

(Photo from the Internet)

CDN stands for Content Delivery Network. By adding a new cache layer to the existing Internet, a CDN publishes the site's content to the "edge" nodes closest to users, so that users can fetch the content they need nearby, improving the response speed of visits to the site. It is a comprehensive technical solution to slow site access caused by limited network bandwidth, heavy user traffic, and unevenly distributed points of presence.

Cache-layer technology can eliminate the data-access peaks caused by congestion at node devices. A cache server caches content: for most repeatedly accessed web objects, files do not need to be retransmitted from the origin site; a simple validation is enough before serving the cached copies. Cache servers are usually located near clients, so clients get LAN-like response speed while wide-area bandwidth consumption drops sharply. The result is faster responses, lower bandwidth use, and a lighter load on the origin server.
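That "simple validation" step can be sketched as conditional revalidation: the cache asks the origin whether its copy is still current, and a matching version tag earns a tiny "not modified" reply instead of a full retransmission. The following is a toy in-memory model, not a real CDN or HTTP API:

```javascript
// Toy origin: serves a body plus an ETag-like version tag for the content.
const origin = {
  etag: "v1",
  body: "<html>hello</html>",
  // If the client's tag still matches, answer "not modified" without the body.
  get(ifNoneMatch) {
    if (ifNoneMatch === this.etag) return { status: 304 };
    return { status: 200, etag: this.etag, body: this.body };
  },
};

// Toy edge cache: keeps one copy and revalidates instead of re-downloading.
const cache = {
  stored: null,
  fetch() {
    const res = origin.get(this.stored && this.stored.etag);
    if (res.status === 304) return this.stored.body;  // fresh copy: no retransfer
    this.stored = { etag: res.etag, body: res.body }; // (re)fill the cache
    return res.body;
  },
};
```

The first fetch pulls the full body from the origin; every repeat fetch costs only the tiny validation exchange, which is the saving the paragraph above describes.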

In summary, the optimization effect of CDN on the network is mainly reflected in the following aspects:

  • Solves the "first kilometer" problem on the server side
  • Alleviates or even eliminates the bottleneck at interconnection points between different operators
  • Reduces egress bandwidth pressure at the provincial level
  • Takes pressure off the backbone network
  • Optimizes the online distribution of hot content

Gzip compression

(Photo from the Internet)

Gzip, short for GNU zip, is a GNU free-software file-compression program; in practice it can typically shrink text files by about 50%. Before discussing Gzip, let's introduce HTTP compression: a capability built into web servers and web clients to improve transfer speed and bandwidth utilization. With HTTP compression, data is compressed on the server before it is sent: browsers that support compression declare which methods they support to the server before downloading in the desired format, while browsers that do not support compression download uncompressed data. HTTP compression is the process of re-encoding HTTP content to reduce its size, and Gzip is a classic example of it.

Reducing file size has two obvious benefits:

  1. Reduce storage space

  2. Transmission time can be reduced when transmitted over the network

The principle behind Gzip compression is to find recurring strings in a text file and temporarily replace them, making the whole file smaller. Because of this principle, the more repetition a file contains, the more effective Gzip compression is and the greater the benefit of enabling it; and vice versa.

Code level: Reduce data requests

The page-level techniques above optimize the initial load, but in some scenarios they are not enough: pages often need to request fresh information from the service frequently while they are being displayed and used.

For example, when developing an Excel-like online collaboration system, the cells are independent of one another, so a full-screen refresh cannot meet the requirements; we can only periodically fetch each cell's value from the server and update the page when a change is detected. If every cell calls the API separately to get its content, a flood of network requests results: the sheer number of requests slows down loading, and the page slows down as well.

In this scenario, WebSocket is a good choice: a long-lived connection stays in sync with the server, and the server actively pushes updates to the client, reducing network overhead. But WebSocket has its own downsides: higher development cost, and both the client and the server must deal with disconnection and reconnection, frequent pushes, resource consumption, and similar issues. So we still need to optimize and minimize the request frequency.

Optimization idea

How can we reduce the number of data requests? We can optimize the logic by queuing requests.

(Optimize Web requests through request queues)

After optimization, the data-fetching logic of the Excel-like online collaboration system becomes the following:

  • When a cell issues a request, the request is first assigned an ID and its callback method is cached under that ID. The request then joins a queue, and the queue manager sends a batched request packet to the server periodically, or once the queue reaches a certain size.
  • After receiving the batched request packet, the server processes the items in one pass and encapsulates a combined response packet.
  • After receiving the response packet, the front end uses each request's unique ID to call the corresponding callback method, completing the cell's request.

The advantages of using this method for optimization are obvious:

  • Simple to implement, with small code changes: the original Ajax request becomes a queued call, and the request callback needs no modification. On the server, a new interface that splits the batched request is all that is required.
  • The request frequency, or the amount of data per batched request, can be tuned to the actual scenario, balancing update frequency against response time.

Application example

The following code implements GETNUMBERFROMSERVER, which calls the server's getData interface with the given parameters, fetches the content to display, and shows it in the cell. To keep the user experience smooth while cells update asynchronously, the function is built on SpreadJS's asynchronous function (AsyncFunction).

var GetNumberFromServer = function () { };
GetNumberFromServer.prototype = new GC.Spread.CalcEngine.Functions.AsyncFunction("GETNUMBERFROMSERVER", 1, 2);
GetNumberFromServer.prototype.evaluate = function (context, arg1, arg2) {
    fetch("/spread/getData?data=" + arg1)
        .then(function (response) {
            return response.text();
        })
        .then(function (text) {
            context.setAsyncResult(text);
        });
};
GC.Spread.CalcEngine.Functions.defineGlobalCustomFunction("GETNUMBERFROMSERVER", new GetNumberFromServer());

To reduce the number of requests, we first need a cache object that stores the queued request data, plus a timer that periodically calls the interface to process it.

let callStack = {};     // requests waiting to be sent
let callingStack = {};  // requests currently in flight
let callStackCount = 0; // incrementing id assigned to each request
let timingId = 0;       // non-zero when a timer is already waiting to send the queue

Then, instead of calling the API directly inside the function, we define a new queued request method.

// Queue a request instead of sending it immediately
function stackCall(data, context, callback) {
    let id = callStackCount++;
    callStack[id] = {};
    callStack[id].data = data;
    callStack[id].context = context;
    callStack[id].callback = callback;
    // Start a timer only if one is not already waiting
    if (!timingId) {
        timingId = setTimeout(function () {
            callingStack = callStack;
            callStack = {};
            let newData = ""; // merge the queued request data into one packet
            for (let cId in callingStack) {
                newData += (cId + "," + callingStack[cId].data + ";");
            }
            fetch("/spread/getData?data=" + newData)
                .then(function (response) {
                    return response.text();
                })
                .then(function (text) {
                    let resData = text.split(";");
                    let spread = designer.getWorkbook();
                    spread.suspendPaint();
                    for (let resId in resData) {
                        if (resData[resId]) {
                            let res = resData[resId].split(",");
                            // Retrieve the cached context and callback by id
                            callingStack[res[0]].callback.call(null, callingStack[res[0]].context, res[1]);
                        }
                    }
                    spread.resumePaint();
                    timingId = 0;
                });
        }, 1000);
    }
}

Finally, we update the asynchronous function's implementation: evaluate now calls the stackCall queuing function, and the callback passed to it runs setAsyncResult once the batched call succeeds, completing the business logic.

GetNumberFromServer.prototype.evaluate = function (context, arg1, arg2) {
    stackCall(arg1, context, function (context, text) {
        context.setAsyncResult(text);
    });
};

After this optimization, when the page issues a large number of asynchronous requests, they are placed in the queue and processed together on a timer, refreshing in a single pass.

In addition, we can use SpreadJS's doNotRecalculateAfterLoad import option to skip recalculation on first load and use the original values stored in the JSON, and enable calcOnDemand for on-demand calculation, further optimizing page initialization speed and experience.


json.calcOnDemand = true;
spread.fromJSON(json, { doNotRecalculateAfterLoad: true });

Conclusion

This article has categorized several methods of front-end performance optimization; these best practices cover page loading and data requests. In the second half, we detailed the implementation of request queuing through an Excel-like online collaborative editing example. We hope it is helpful for your front-end development.