Preface:
In the same network environment, two websites can both meet your needs, but one goes "Duang!" and loads immediately while the other leaves you waiting. Which would you choose? Research shows that users are most satisfied when a page opens within 2 to 5 seconds, and 99% of users will close a page if they have to wait more than 10 seconds. If that doesn't impress you, here are some statistics: every extra 400 ms of latency costs Google 0.59% of its search requests; every extra 100 ms of latency costs Amazon 1% of its revenue; a 400 ms delay at Yahoo! cuts traffic by 5-9%. A website's loading speed seriously affects the user experience and can make the difference between life and death.
Some might argue that website performance is the back-end engineers' problem, not the front end's. All I can say is: too young, too simple. In fact, only 10-20% of end-user response time is spent fetching the HTML document from the web server and delivering it to the browser. Where does the rest of the time go? Look at the golden rule of performance:
Only 10-20% of end-user response time is spent downloading the HTML document. The remaining 80-90% is spent downloading all the components referenced by the page.
Next we'll look at how front-end engineers can improve page loading speed.
1. Reduce HTTP requests
As the rule above says, 80-90% of the time goes to the HTTP requests for all the components in the page. The easiest way to improve response time is therefore to reduce the number of HTTP requests.
Image maps:
Suppose the navigation bar holds five images, each a link to somewhere; loading those five navigation images generates five HTTP requests. Using an image map instead is more efficient, because only one HTTP request is needed.
Server-side image map: every click is submitted to the same URL together with the x and y coordinates of the click, and the server maps the coordinates to an action.
Client-side image map: maps clicks to actions directly:
<img src="planets.jpg" border="0" usemap="#planetmap" alt="Planets" />
<map name="planetmap" id="planetmap">
  <area shape="circle" coords="180,139,14" href="venus.html" alt="Venus" />
  <area shape="circle" coords="129,161,10" href="mercury.html" alt="Mercury" />
  <area shape="rect" coords="0,0,110,260" href="sun.html" alt="Sun" />
  <area shape="rect" coords="140,0,110,260" href="star.html" alt="Star" />
</map>
A disadvantage of image maps: coordinates are easy to specify for rectangles and circles, but hard to define manually for other shapes.
CSS Sprites
CSS sprites work by combining multiple images into a single image and then showing the right part of it on the page with CSS layout tricks (background-image plus background-position). Wherever sprites can cut down the number of images, they speed the page up.
<div>
<span id="image1" class="nav"></span>
<span id="image2" class="nav"></span>
<span id="image3" class="nav"></span>
<span id="image4" class="nav"></span>
<span id="image5" class="nav"></span>
</div>
.nav {
width: 50px;
height: 50px;
display: inline-block;
border: 1px solid #000;
background-image: url('E:/1.png');
}
#image1 {
background-position: 0 0;
}
#image2 {
background-position: -95px 0;
}
#image3 {
background-position: -185px 0;
}
#image4 {
background-position: -275px 0;
}
#image5 {
background-position: -366px -3px;
}
PS: Using CSS sprites can also reduce the total download size. You might expect the combined image to be larger than the sum of the separate images because of added whitespace, but in practice it is smaller, because combining removes the per-image overhead of color tables, format headers, and so on.
Icon fonts
Use icon fonts wherever they fit. They can replace many images, reducing HTTP requests, and they can be styled with CSS (color, size, and so on). Why not?
Merge scripts and stylesheets
Combining multiple stylesheets or script files into a single file reduces the number of HTTP requests and thus shortens response time.
However, merging everything is hard to swallow for many people, especially those with modular code. A merged stylesheet or script can also make a page load more styles or scripts than it actually needs, increasing the download for visitors who only view one (or a few) pages. So weigh the pros and cons yourself.
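To make the merge step concrete, here is a minimal sketch of a build-time concatenation script in Node.js. The file names and the join convention are illustrative assumptions, not part of any particular build tool:

```javascript
// Hypothetical build step: concatenate several JavaScript sources into
// one bundle so the page makes a single HTTP request instead of many.
function concatSources(sources) {
  // Join with ';\n' so a file that ends without a semicolon cannot
  // accidentally merge into the first statement of the next file.
  return sources.map((s) => s.trimEnd()).join(';\n') + '\n';
}

// In a real build you would read the files from disk first, e.g.:
//   const fs = require('fs');
//   const bundle = concatSources(paths.map((p) => fs.readFileSync(p, 'utf8')));
//   fs.writeFileSync('bundle.js', bundle);
```

Build tools (Grunt, Gulp, webpack and the like) do this, and much more, for you; the point is simply that one merged file means one request.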
2. Use a CDN
If the application web servers are closer to the user, the response time of a single HTTP request improves. Likewise, if the component web servers are closer to the user, the response times of many HTTP requests improve.
A CDN (Content Delivery Network) is a set of web servers distributed across multiple geographic locations that delivers content to users more efficiently. The server used to deliver content to a particular user is chosen based on a measure of network proximity, for example the server with the fewest network hops or the one with the quickest response time.
A CDN can also back up data, extend storage capacity, and cache content, and it helps absorb peaks in web traffic.
Disadvantages of CDN:
1. Response time may be affected by traffic from other sites, since the CDN provider shares its web servers among all its customers.
2. If the CDN's service quality declines, so does the quality of your site.
3. You cannot directly control the component servers.
3. Add an Expires header
A first-time visitor to a page makes many HTTP requests, but with a far-future Expires header those components become cacheable, avoiding unnecessary HTTP requests on later visits and making pages load faster.
The web server uses the Expires header to tell the client that it may use its current copy of a component until the specified time. For example:
Expires: Fri, 18 Mar 2016 07:41:53 GMT
Drawbacks of Expires: it requires the server and client clocks to be strictly synchronized, and the expiration date has to be checked and pushed forward regularly.
HTTP/1.1 introduced Cache-Control to overcome these limitations; its max-age directive specifies how long (in seconds) a component may stay cached:
Cache-Control: max-age=12345600
If both Cache-Control max-age and Expires are present, max-age takes precedence over the Expires header.
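As a sketch (assuming a Node.js server; the lifetime below reuses the article's example number), the two headers can be emitted together, with max-age taking precedence in any HTTP/1.1 client:

```javascript
// Build both freshness headers for a response. Cache-Control: max-age
// is relative, so it avoids the clock-synchronization problem that
// Expires has; Expires is kept for older HTTP/1.0 clients.
function cacheHeaders(nowMs, maxAgeSeconds) {
  return {
    'Cache-Control': 'max-age=' + maxAgeSeconds,
    'Expires': new Date(nowMs + maxAgeSeconds * 1000).toUTCString(),
  };
}

// e.g. with Node's http module:
//   res.writeHead(200, cacheHeaders(Date.now(), 12345600));
```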
4. Compress components
Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request:
Accept-Encoding: gzip,deflate
If the web server sees this header, it may compress the response using one of the methods the client listed, and it notifies the client with the Content-Encoding response header:
Content-Encoding: gzip
Proxy caching
Things change when the browser's requests travel through a proxy. Suppose the first request the proxy receives for a URL comes from a browser that does not support gzip. The proxy's cache is empty, so it forwards the request to the server, caches the (uncompressed) response, and sends it to the browser. Now suppose a second request for the same URL arrives from a browser that does support gzip: the proxy serves the uncompressed copy from its cache, losing the chance to compress. Conversely, if the first browser supports gzip and the second does not, the compressed copy in the proxy's cache is served to later browsers whether they support gzip or not.
Workaround: add the Vary header to the web server's response. Vary tells the proxy to cache different responses keyed on one or more request headers; since the compression decision is based on the Accept-Encoding request header, the response should include Vary: Accept-Encoding.
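A sketch of that negotiation, reduced to pure header logic (a real server wires this to an actual gzip stream; the parsing here ignores q-values for simplicity):

```javascript
// Decide response headers from the client's Accept-Encoding header.
// Always emit Vary: Accept-Encoding so proxies key their cache on it.
function negotiateEncoding(acceptEncoding) {
  const accepted = (acceptEncoding || '')
    .split(',')
    .map((token) => token.trim().split(';')[0].toLowerCase());
  const headers = { Vary: 'Accept-Encoding' };
  if (accepted.includes('gzip')) {
    headers['Content-Encoding'] = 'gzip'; // body would be gzip-compressed
  }
  return headers;
}
```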
5. Put stylesheets at the top
First of all, putting stylesheets in the head doesn't much affect total page load time, but it shortens the time until the first screen renders, lets content appear progressively, improves the user experience, and prevents a "white screen".
We always want pages to show content as quickly as possible to give visual feedback, which matters especially for users on slow connections.
Placing stylesheets at the bottom of the document keeps content from rendering. To avoid repainting page elements when styles change, browsers block progressive rendering, producing the white screen. This follows from browser behavior: while stylesheets are still loading, building the render tree would be wasted work, so nothing is drawn until every stylesheet has been downloaded and parsed.
6. Put scripts at the bottom
As with stylesheets, putting scripts at the bottom doesn't much affect total page load time, but it shortens the time until the first screen appears and lets the page content render progressively.
Downloading and executing JavaScript blocks construction of the DOM tree (more precisely, it interrupts its incremental building), so a script tag placed inside the first screen's HTML cuts off the first-screen content.
While a script downloads, parallel downloading is disabled, even for components on other host names. The browser waits because the script might modify the page, and because scripts must run in order: a later script may depend on an earlier one, and running them out of order can cause errors.
7. Avoid CSS expressions
CSS expressions are a powerful but dangerous way to set CSS properties dynamically. They were supported from IE5 onward and dropped starting with IE8.
p {
width: expression(func(),document.body.clientWidth > 400 ? "400px" : "auto");
height: 80px;
border: 1px solid #f00;
}
<p><span></span></p>
<p><span></span></p>
<p><span></span></p>
<p><span></span></p>
<p><span></span></p>
<script>
var n = 0;
function func() {
n++;
// alert();
console.log(n);
}
</script>
Move the mouse around a few times and the function easily runs several thousand times, because the expression is re-evaluated on every mouse move, scroll, and resize. The danger is obvious.
How to solve:
One-time expression:
p {
width: expression(func(this));
height: 80px;
border: 1px solid #f00;
}
<p><span></span></p>
<p><span></span></p>
<p><span></span></p>
<p><span></span></p>
<p><span></span></p>
<script>
var n = 0;
function func(elem) {
n++;
elem.style.width = document.body.clientWidth > 400 ? '400px' : "auto";
console.log(n);
}
</script>
Event handlers
Use JavaScript event handlers to change element styles dynamically, keeping the number of function calls within a controllable range.
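For the width example above, the same effect can be had with an ordinary resize handler, so the computation runs once per resize instead of on every mouse move. A sketch, with the decision factored into a pure function and the DOM wiring in comments:

```javascript
// Pure decision extracted from the CSS expression:
// body wider than 400px -> clamp to 400px, otherwise auto.
function widthFor(bodyClientWidth) {
  return bodyClientWidth > 400 ? '400px' : 'auto';
}

// Browser wiring (assumes the <p> elements from the example above):
//   function applyWidths() {
//     var w = widthFor(document.body.clientWidth);
//     document.querySelectorAll('p').forEach(function (p) {
//       p.style.width = w;
//     });
//   }
//   window.addEventListener('resize', applyWidths);
//   applyWidths();
```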
8. Use external JavaScript and CSS
Inlining scripts and styles reduces HTTP requests and in theory speeds up page loading. In practice, though, when scripts and styles come from external files the browser can cache them and reuse them on later loads, and the HTML document itself shrinks, which speeds loading up.
Influencing factors:
1. The fewer page views per user, the stronger the argument for inlining. If a user visits your site only once or twice a month, inlining wins. If users generate many page views, cached external styles and scripts greatly reduce download time and speed up page loads.
2. If your pages use roughly the same components, external files increase the reuse of those components.
Post-onload download
Sometimes we want inline styles and scripts on the first page but external files for subsequent pages. We can then wait until the page has finished loading and dynamically download the external components, so that the user's later visits find them in the cache.
function doOnload() {
    setTimeout("downloadFile()", 1000);
}

window.onload = doOnload;

function downloadFile() {
    downloadCss("http://abc.com/css/a.css");
    downloadJS("http://abc.com/js/a.js");
}

function downloadCss(url) {
    var ele = document.createElement('link');
    ele.rel = "stylesheet";
    ele.type = "text/css";
    ele.href = url;
    document.body.appendChild(ele);
}

function downloadJS(url) {
    var ele = document.createElement('script');
    ele.src = url;
    document.body.appendChild(ele);
}
On such a page the JavaScript and CSS exist twice (inline and external), so you must guard against double definitions for this to work properly. A better solution is to load these components in an invisible iframe.
9. Reduce DNS lookups
What happens between typing a URL (www.linux178.com, say) into the browser's address bar, pressing Enter, and seeing the page?
Roughly: resolve the domain name -> the browser sends the HTTP request -> the server responds -> the browser receives and parses the HTML and requests the resources it references (JS, CSS, images, and so on) -> the browser renders the page for the user.
Domain name resolution is the first step of page loading. How is a name resolved? Take Chrome as an example:
1. Chrome first searches its own DNS cache (entries live only about a minute, and the cache holds at most 1000 of them) for an unexpired entry for www.linux178.com. If one exists, resolution ends here. (You can inspect Chrome's cache at chrome://net-internals/#dns.)
2. If the browser's cache has no entry, Chrome searches the operating system's DNS cache; if an entry is found, the search stops and resolution ends. (On Windows, run ipconfig /displaydns to view the OS DNS cache.)
3. If the OS cache yields no address either, the system reads the hosts file (on Windows, under C:\Windows\System32\drivers\etc) looking for a matching entry. If one exists, resolution succeeds.
4. If the hosts file has no entry, the system makes a DNS call to the configured preferred DNS server (usually the ISP's, though public resolvers such as Google's can also be used) over UDP port 53. This is a recursive request: the ISP's DNS server must return the final IP for the name. That server first checks its own cache; if an unexpired entry is found, resolution succeeds. Otherwise it resolves the name iteratively on our behalf. It asks a root server (every resolver has the addresses of the 13 root servers built in) for www.linux178.com. The root server replies that it does not know the address, but it knows the name servers for the com top-level domain: go ask them. The ISP's DNS then asks a com server, which replies that it does not know the address of www.linux178.com either, but it knows the authoritative DNS for linux178.com (usually run by the domain registrar): go ask that. The ISP's DNS finally asks linux178.com's DNS server, which does have the record and returns the answer. The ISP's DNS server now has the IP for www.linux178.com and returns it to the operating system kernel, the kernel returns it to the browser, and the browser at last has the IP address.
Note: resolution normally ends with step 4. Only if all four steps fail do the following steps run:
5. The operating system searches the NetBIOS name cache, which stores the names and IP addresses of computers this machine has successfully communicated with recently. If the target was reached within the last few minutes, this step resolves the name.
6. If step 5 fails, the client queries the configured WINS server (the NetBIOS name-to-IP server).
7. If step 6 fails, the client broadcasts a name query on the local network.
8. If step 7 fails, the client reads the LMHOSTS file (in the same directory as the hosts file). If that fails too, there is no way to communicate with the target computer. If any one of these eight steps succeeds, the target computer can be reached.
DNS lookups have a cost. It typically takes 20-120 ms for the browser to look up the IP address of a given host name, and the browser cannot download anything from that host until the lookup completes. So how do we cut resolution time and speed up page loading?
When the client-side DNS caches (browser and operating system) are empty, the number of DNS lookups equals the number of unique host names in the page: the host names in the page's URL, scripts, stylesheets, images, Flash objects, and so on. Reducing the number of unique host names reduces the number of DNS lookups.
But reducing unique host names also potentially reduces parallel downloading (the HTTP/1.1 specification suggests two parallel downloads per host name, though browsers actually allow more), so fewer lookups and more parallelism pull in opposite directions. The recommendation is to spread components across at least two but no more than four host names: few DNS lookups, while still permitting a high degree of parallel downloading.
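A quick way to audit this on a real page is to count the unique host names among its resource URLs. The counting logic is sketched below; in a browser you could feed it performance.getEntriesByType('resource').map(e => e.name):

```javascript
// Count unique host names: each one costs (at worst) one DNS lookup.
function uniqueHostnames(urls) {
  const hosts = new Set();
  for (const u of urls) {
    try {
      hosts.add(new URL(u).hostname);
    } catch (err) {
      // ignore relative or malformed URLs: they add no new host name
    }
  }
  return hosts;
}
```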
10. Minify JavaScript
Minification
Minification removes unnecessary characters from code to reduce its file size and load time. When code is minified, unneeded whitespace characters (spaces, newlines, tabs) are stripped, shrinking the overall file size.
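A toy minifier makes the idea concrete. Real minifiers (UglifyJS, for example) parse the code; this naive regex sketch does not, and would mangle string literals that happen to contain comment markers:

```javascript
// Naive minification: strip comments, collapse whitespace runs.
function naiveMinify(src) {
  return src
    .replace(/\/\*[\s\S]*?\*\//g, '') // remove /* block comments */
    .replace(/\/\/[^\n]*/g, '')       // remove // line comments
    .replace(/\s+/g, ' ')             // collapse whitespace to one space
    .trim();
}
```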
Obfuscation
Obfuscation is another transformation applied to source code. Like minification it removes comments and whitespace, but it also rewrites the code: function and variable names are converted to shorter strings, making the code smaller and harder to read. This is usually done to make reverse engineering harder, and it improves performance as a side effect.
Disadvantages:
Obfuscation itself is complex and can introduce bugs.
Symbols that must not be renamed (such as JavaScript keywords and reserved words) have to be marked so they are not modified.
Obfuscated code is hard to read, which makes debugging problems in a production environment more difficult.
As mentioned above, even if you gzip your files, minifying the code is still worthwhile. Gzip generally saves more than minification does, and in production the two are used together for maximum savings.
Minifying CSS
The savings from minifying CSS are usually smaller than from minifying JavaScript, because CSS contains fewer comments and less whitespace.
In addition to removing whitespace and comments, CSS can be optimized for additional savings:
Merge identical rules;
Remove unused rules;
Use shorthand properties, for example:
.right {
color: #fff;
padding-top: 0;
margin: 0 10px;
border: 1px solid #111
}
.wrong {
color: #ffffff;
padding-top: 0px;
margin-top: 0;
margin-bottom: 0;
margin-left: 10px;
margin-right: 10px;
border-color: #111;
border-width: 1px;
border-style: solid;
}
Above, .right is the preferred form: it abbreviates the color value, writes 0 instead of 0px, and combines related properties into shorthands. When minifying, the ';' after the last declaration in each rule can also be dropped.
Let's look at a minification example:
The minified file is 155 KB smaller than the source file. The minified jQuery build also applies obfuscation, such as using e in place of window, to squeeze out maximum savings.
11. Avoid redirects
What is redirection?
Redirection is used to reroute a user from one URL to another.
Common redirection types
301: permanent redirect, used mainly when a site's domain name changes. It tells search engines the domain has moved so that the data and link weight accumulated under the old domain transfer to the new one, and the site's ranking is not hurt by the change.
302: temporary redirect, commonly used to move the browser to a new URL after a POST request.
304: Not Modified is not really a redirect. It is used when the browser holds a copy of a component in its cache and the copy has expired: the browser issues a conditional GET, and if the component has not changed on the server, the server returns a bodyless 304 telling the browser it can reuse its copy, which shrinks the response.
How does redirection hurt performance?
When a page is redirected, delivery of the whole HTML document is delayed. Nothing renders on the page and no component downloads until the HTML document arrives.
A typical wasteful pattern (in ASP.NET, for example) is a Button whose click handler merely calls Response.Redirect("") to move to another URL. Clicking the button sends a POST to the server, the server runs Response.Redirect and returns a 302 to the browser, and the browser then issues a GET for the URL in the response. Using a plain <a> tag in the HTML instead avoids both the unnecessary POST and the redirect.
Application scenarios of redirection
1. Track internal traffic
Redirects are often used to track where user traffic flows. If you run a portal home page and want to know where users go after leaving it, redirection works: for example, clicking the link http://a.com/r/news produces a 301 response with Location set to http://news.a.com, and analyzing a.com's web server logs shows where people went after the front page.
Since we know how redirects hurt performance, a more efficient option is Referer logging. Every HTTP request carries a Referer identifying the page it originated from (except in cases such as opening a bookmark or typing a URL directly), so recording the Referer of each request tracks internal traffic without sending the user a redirect, improving response time.
2. Track outbound traffic
Sometimes links take users away from your site, and then the Referer approach is impractical, since you cannot read the destination site's logs.
Redirects can solve outbound tracking too. Baidu search, for example, wraps every result link in a 302 redirect: searching for "front-end performance optimization" yields result URLs of the form https://www.baidu.com/link?url=pDjwTfa0... and although the results themselves don't change, the wrapped string varies dynamically. (My guess: the string encodes the target URL; clicking it triggers a 302 that redirects to the target page, letting the search engine log the click.)
Besides redirects, we can choose beacons: an HTTP request whose URL carries the tracking information, which is later extracted from the beacon web server's access logs. A beacon is usually a transparent 1x1 image, but a 204 response is even better, being smaller, never cached, and incapable of changing the browser's state.
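A beacon is trivial to build: encode the tracking data into a query string and request it as an image. A sketch, where the endpoint name and parameters are hypothetical:

```javascript
// Build the beacon URL carrying the tracking data.
function beaconUrl(endpoint, data) {
  const qs = Object.keys(data)
    .map((k) => encodeURIComponent(k) + '=' + encodeURIComponent(data[k]))
    .join('&');
  return endpoint + '?' + qs;
}

// In the browser, firing the beacon is one line:
//   new Image().src = beaconUrl('/beacon', { link: 'news', t: Date.now() });
// (Modern pages can use navigator.sendBeacon instead.)
```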
12. Remove duplicate scripts
When a team develops a project, the same script may be added more than once, because different developers add pages or components independently.
Duplicate scripts cause unnecessary HTTP requests (when the script isn't cached), waste time executing the same JavaScript repeatedly, and can trigger errors.
How do you avoid repeating scripts?
1. Organize scripts well. Duplicates creep in when different scripts include the same script; some of those includes are necessary and some are not, so scripts need good organization.
2. Implement the script manager module.
Such as:
function insertScript($file) {
    if (hadInserted($file)) {
        return;
    }
    exeInsert($file);

    if (hasDependencies($file)) {
        $deps = getDependencies($file);
        foreach ($deps as $script) {
            insertScript($script);
        }
    }

    echo "<script type='text/javascript' src='" . getVersion($file) . "'></script>";
}
The manager first checks whether the script has already been inserted and returns if so. If the script depends on other scripts, the dependencies are inserted first. Finally, when the script is written to the page, getVersion returns the file name with a version number appended, so that when the script's version changes, browsers' previously cached copies are invalidated.
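The same idea works on the client side. A minimal sketch, where the insert callback stands in for whatever actually appends the script tag:

```javascript
// Remember which script URLs were already inserted so a module
// requested twice produces only one <script> tag.
const insertedScripts = new Set();

function insertScriptOnce(url, insert) {
  if (insertedScripts.has(url)) {
    return false; // duplicate: skip
  }
  insertedScripts.add(url);
  insert(url); // e.g. create and append the <script> element
  return true;
}

// Browser usage:
//   insertScriptOnce('/js/menu.js', function (u) {
//     var s = document.createElement('script');
//     s.src = u;
//     document.body.appendChild(s);
//   });
```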
13. Configure ETags
What is ETag?
An ETag (entity tag) is a string that uniquely identifies a specific version of a component. It is the mechanism web servers use to validate cached components, and it is usually constructed from some attributes of the file.
Conditional GET request
If a cached component has expired, the browser must check that it is still valid before reusing it. It sends a conditional GET request to the server; if the server determines the cache is still valid, it sends a 304 response telling the browser to reuse the cached component.
What does the server use to decide whether the cache is still valid? There are two mechanisms:
ETag (entity tag);
Date of last modification;
Last Modified Date
The origin server returns the component's last modification date in the Last-Modified response header.
Here’s an example:
When we visit www.google.com.hk with an empty cache and need to download the Google logo, the browser sends an HTTP request like this:
Request:
GET /googlelogo_color_272x92dp.png HTTP/1.1
Host: www.google.com.hk
Response:
HTTP/1.1 200 OK
Last-Modified:Fri, 04 Sep 2015 22:33:08 GMT
When the same component is needed again and the cached copy has expired, the browser sends a conditional GET request:
Request:
GET /googlelogo_color_272x92dp.png HTTP/1.1
If-Modified-Since:Fri, 04 Sep 2015 22:33:08 GMT
Host: www.google.com.hk
Response:
HTTP/1.1 304 Not Modified
Entity tags
The ETag provides another way to check whether the component in the browser's cache matches the one on the origin server. An example from the book:
Requests without caching:
Request:
GET /i/yahoo.gif HTTP/1.1
Host: us.yimg.com
Response:
HTTP/1.1 200 OK
Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
ETag: "10c24bc-4ab-457e1c1f"
Request the same component again:
Request:
GET /i/yahoo.gif HTTP/1.1
Host: us.yimg.com
If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
If-None-Match: "10c24bc-4ab-457e1c1f"
Response:
HTTP/1.1 304 Not Modified
Why ETag?
ETag was designed to solve problems that Last-Modified cannot:
1. Some files change periodically without their content changing (only the modification time changes); we don't want the client to treat the file as modified and fetch it again.
2. Some files change very frequently, for example several times within one second. If-Modified-Since has one-second granularity (UNIX mtime is accurate only to the second), so such modifications cannot be detected.
3. Some servers cannot determine a file's last modification time accurately.
Problems with ETag
The problem with ETags is that they are usually built from attributes specific to the one server hosting the site. With a cluster of servers, the browser may fetch the original component from one server and later send its conditional GET to a different one, and the ETags will not match. For example, when an ETag is generated from inode-size-timestamp, it embeds the file's inode (the file system structure storing type, owner, group, access mode, and so on); even when size, permissions, and timestamp are identical across servers, the inodes differ.
Best practices
1. If Last-Modified alone causes no problems, you can simply remove the ETag. Google, for one, uses no ETag on its search home page.
2. If you do use ETags, configure them to drop the attributes that break validation across a server cluster, for example generating the ETag from size and timestamp only.
14. Make Ajax cacheable
Wikipedia defines Ajax as follows:
AJAX (Asynchronous JavaScript and XML) refers to a set of combined browser-side web development techniques. The concept was coined by Jesse James Garrett.
In a traditional web application, the user fills out a form and submits it, sending a request to the web server; the server processes the form and sends back an entire new page. This wastes bandwidth, since much of the HTML is identical between the two pages, and because every interaction requires a round trip to the server, the application's responsiveness is bound to the server's response time, leaving the interface far slower than a native application.
An Ajax application, by contrast, sends to the server and receives back only the necessary data, and handles the server's response with JavaScript on the client. Because far less data is exchanged between server and browser (reportedly around 5% of before), the server responds faster; and since much of the processing happens on the client machine, the web server's load drops.
Like DHTML or LAMP, AJAX is not a single technology but an organic combination of related technologies. Although the name contains XML, the data format can just as well be JSON, which shrinks the payload further (sometimes dubbed AJAJ), and the client-server exchange does not strictly have to be asynchronous. Derivative and composite techniques built on Ajax, such as AFLAX, have also emerged.
Ajax was meant to break through the fundamental web interaction model: halting interaction, showing the user a blank screen, and redrawing the entire page is not a good user experience.
Asynchrony and immediacy
One obvious advantage of Ajax is that it gives the user immediate feedback, because it requests information from the back-end web server asynchronously.
The key factor in whether the user must wait is whether the Ajax request is passive or active: passive requests are made in advance for future use, while active requests are made in response to the user's current action.
Which Ajax requests can be cached?
POST requests are not cached on the client; each one has to travel to the server for processing, and each returns status 200. (The server can still cache the data to speed up processing.)
GET requests can be (and by default are) cached on the client. Unless a different URL is requested, a repeated Ajax GET to the same address need not be fully re-served: the server can answer the revalidation with a 304.
Using caching for Ajax requests
When making Ajax requests, prefer the GET method wherever possible so that the client's cache is used and the request is faster.
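The flip side is also worth knowing: when a cached Ajax GET response must not be reused, vary the URL. A common convention (the parameter name _t below is arbitrary) appends a timestamp:

```javascript
// Cache-busting for Ajax GETs: a changing query parameter makes the
// URL unique, so the browser cannot serve a stale cached response.
function bustCache(url, now) {
  const sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + '_t=' + now;
}

// e.g. xhr.open('GET', bustCache('/api/data', Date.now()));
```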
This is an original article; when reposting, please credit the source: www.cnblogs.com/MarcoHan/
Study notes for High Performance Web Sites (published in Chinese as the High Performance Website Building Guide).