1. Minimize the number of HTTP requests. 80% of the end-user response time is spent downloading the components in the page: images, stylesheets, scripts, Flash, and so on. Reducing the number of components in the page reduces the number of HTTP requests, and this is the key to faster pages. One way to reduce page components is to simplify the page design. But is there a way to keep pages content-rich and still speed up response time? Here are a few techniques for reducing the number of HTTP requests while keeping the page rich.

Combining files reduces HTTP requests by merging all scripts into a single script and, likewise, all CSS into a single stylesheet. This can be a hassle when scripts and stylesheets vary from page to page and must be modified separately, but making this combination part of your release process is an important step toward improving page performance.

CSS sprites are an effective way to reduce image requests. Combine your background images into a single image file, then use the CSS background-image and background-position properties to display the desired segment of the image.
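As a sketch (the icons.png sprite file and the icon classes here are invented for illustration, not taken from the text):

```css
/* icons.png is assumed to contain several 16x16 icons stacked
   vertically; each class shifts the background to reveal one icon. */
.icon        { background-image: url('icons.png'); width: 16px; height: 16px; }
.icon-home   { background-position: 0 0;     }
.icon-search { background-position: 0 -16px; }
```

Every element draws from the single sprite file, so the page costs one image request instead of one per icon.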

An image map combines multiple images into a single image. The overall file size is about the same, but the number of HTTP requests drops. Image maps only work when the images are contiguous in the page, as in a navigation bar. Defining the coordinates of an image map can be tedious and error-prone, and using image maps for navigation is not accessible, so this method is not recommended.

Inline images use the data: URL scheme to embed the image data in the page itself. This can increase the size of the HTML document. Putting inline images into a (cacheable) stylesheet instead reduces HTTP requests without bloating the page file. Inline images are not yet supported across all major browsers.
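For instance (a minimal sketch; the payload below is the well-known 1×1 transparent GIF, base64-encoded):

```css
/* Embedding a tiny image directly in a cacheable stylesheet avoids
   a separate HTTP request for it. */
.spacer {
  background-image: url("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");
}
```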

Reducing the number of HTTP requests in your page is the place to start. It is the most important way to improve wait times for first-time visitors. As Tenni Theurer describes in her blog post Browser Cache Usage – Exposed!, HTTP requests account for 40 to 60 percent of the response time for users with an empty cache. Make your site a faster experience for those visiting it for the first time!

2. Reduce the number of DNS lookups. The Domain Name System (DNS) maps domain names to IP addresses, much as a telephone book maps people's names to their phone numbers. When you type www.dudo.org into the browser's address bar, a DNS server returns the IP address for that domain name. This resolution takes time: it typically takes 20 to 120 milliseconds to return the IP address for a given host name, and the browser can do nothing else until the DNS lookup completes.

Caching DNS lookups improves page performance. DNS information can be cached on special caching servers, typically run by the user's ISP or local area network, but caching also occurs on the user's own computer. DNS information is stored in the operating system's DNS cache (the DNS Client service on Microsoft Windows). In addition, most browsers keep their own caches, separate from the operating system's, so a browser can answer a lookup from its own records without troubling the operating system.

Internet Explorer caches DNS lookups for 30 minutes by default, controlled by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting (the Fasterfox extension changes this to 1 hour).

When the client's DNS caches are empty (in both the browser and the operating system), the number of DNS lookups equals the number of unique host names in the page. This includes the host names used in the page's URLs, images, script files, stylesheets, Flash objects, and so on. Reducing the number of unique host names reduces the number of DNS lookups.

Reducing the number of unique host names, however, also reduces the amount of parallel downloading in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase them. My guideline is to split page components across at least two but no more than four host names. The result is a good compromise between reducing DNS lookups and maintaining a high degree of parallel downloads.

3. Avoid redirects. Redirects are accomplished with the 301 and 302 status codes. Here is an example of the HTTP headers in a 301 response:

HTTP/1.1 301 Moved Permanently
Location: http://example.com/newuri
Content-Type: text/html

The browser sends the user to the URL named in the Location header. All of the header information is required in a redirect; the body can be empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate that it should be. The meta refresh tag and JavaScript can also send users to a different URL, but if you must do a redirect, the preferred technique is the standard 3xx HTTP status codes, primarily to make sure the back button works correctly.

But keep in mind that redirects degrade the user experience. Inserting a redirect between the user and the HTML document delays everything in the page, since nothing can be rendered and no components (images, Flash, etc.) can be downloaded until the HTML document arrives.

One redirect that web developers often overlook, and that wastes response time, happens when a trailing slash (/) is missing from a URL that should have one. For example, visiting astrology.yahoo.com/astrology actually returns a 301 redirect pointing to astrology.yahoo.com/astrology/ (note the trailing slash). This can be avoided on Apache servers by using Alias or mod_rewrite, or the DirectorySlash directive.
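One way the mod_rewrite variant of this fix can look, as a sketch (the path is taken from the example above; the rule itself is an assumption, not quoted from the text):

```apache
# Internally rewrite the slashless URL to the slashed one instead of
# sending a 301 back to the browser.
RewriteEngine On
RewriteRule ^/astrology$ /astrology/ [PT]
```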

Connecting an old web site to a new one is another common use of redirects. In this situation redirects often connect different parts of a site and direct users according to factors such as browser type and account type. Redirecting between two sites is simple and requires little code. Although this approach reduces complexity for developers, it also degrades the user experience. Alternatives include Alias and mod_rewrite if both sites are on the same server; if the redirect is because of a different domain name, you can create a CNAME (a DNS record that maps one domain name to another) and use it in combination with Alias or mod_rewrite instead.

4. Make Ajax cacheable. One oft-cited benefit of Ajax is the immediacy of feedback to the user, thanks to the asynchronous exchange of information with backend servers. However, using Ajax is no guarantee that users won't spend time waiting for those asynchronous JavaScript and XML responses. In many applications, whether the user waits depends on how Ajax is used. In a web-based email client, for example, the user must wait for Ajax to return the messages that match their query. It is important to remember that "asynchronous" does not mean "instantaneous."

To improve performance, it is important to optimize these Ajax responses. The most important way to improve Ajax performance is to make the responses cacheable, as discussed under Add an Expires or Cache-Control Header. Several of the other rules apply to Ajax as well: gzip components, reduce DNS lookups, minify JavaScript, avoid redirects, and configure ETags.

Let's look at an example: a Web 2.0 email client uses Ajax to download the user's address book automatically. If the user has not modified the address book since the last time they used the email application, and the Ajax response is made cacheable with an Expires or Cache-Control header, the address book can be read straight from the previous cache. The browser must be told when to use the cached address book versus sending a new request. This can be done by adding a timestamp of the last edit to the Ajax URL that reads the address book, for example &t=11900241612. If the address book has not been edited since the last download, the timestamp is unchanged, the URL matches, and the address book is loaded from the browser's cache, eliminating an HTTP round trip. If the user has changed the address book, the timestamp produces a URL that does not match the cached response, and the browser requests the updated address book. Even though your Ajax responses are created dynamically, and even if they apply to only a single user, they should be cached. Doing so makes your Web 2.0 applications faster.
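A minimal sketch of this timestamp scheme in JavaScript (the URL and the helper name are illustrative, not taken from the original application):

```javascript
// Build a cache-friendly Ajax URL by embedding the address book's
// last-edit timestamp. If the book is unchanged, the URL is identical
// to the previous one and the browser can serve the cached response.
// `lastEdited` is assumed to come from the server-rendered page.
function addressBookUrl(baseUrl, lastEdited) {
  var sep = baseUrl.indexOf('?') === -1 ? '?' : '&';
  return baseUrl + sep + 't=' + encodeURIComponent(lastEdited);
}

// Same timestamp -> same URL -> cache hit; new timestamp -> new request.
var url1 = addressBookUrl('/ajax/addressbook', 11900241612);
var url2 = addressBookUrl('/ajax/addressbook', 11900241612);
var url3 = addressBookUrl('/ajax/addressbook', 11900241999);
```

Because the timestamp is part of the URL, the standard HTTP cache does all the work; no extra client logic is needed.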

5. Post-load components. Take a close look at your page and ask yourself: "What content is absolutely required to render the page initially? What content and components can wait until later?" JavaScript is an ideal candidate for splitting into two parts around the onload event. For example, if you have JavaScript that implements drag and drop and animations, it can wait, because dragging elements around the page happens only after the initial rendering. Other candidates for post-loading include hidden content (content that appears only after a user action) and images below the fold. Tools can save you work here: the YUI Image Loader lets you delay loading images below the fold, and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example, open the Firebug Net tab while looking at Yahoo!'s home page. Performance goals work best when they align with other web development practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience, but you have to make sure your site works without it. After you are sure the page works properly, enhance it with post-loaded scripts for fancier effects such as drag and drop and animation.
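A common way to hook deferred work onto the onload event looks roughly like this (addLoadEvent is a conventional helper, not code from the text):

```javascript
// Defer non-essential scripts until after the page has rendered.
// addLoadEvent chains onto any existing window.onload handler
// instead of overwriting it.
function addLoadEvent(fn) {
  var oldHandler = window.onload;
  window.onload = function () {
    if (typeof oldHandler === 'function') oldHandler();
    fn();
  };
}

// Usage: run drag-and-drop or animation setup only after onload, e.g.
// addLoadEvent(initDragDrop);
```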

6. Preload components. Preloading may look like the opposite of post-loading, but it has a different goal. Preloading uses the browser's idle time to request page components (such as images, stylesheets, and scripts) that may be needed in the future. By the time the user goes to the next page, most of that page's components are already in the cache, so access is much faster.

There are several types of preloading. Unconditional preload: additional page components are loaded as soon as the onload event fires. Google.com is an example; its sprite image is not needed on the home page, but it is loaded in onload for use on the search results page. Conditional preload: a guess is made from the user's actions about where they may go next, and the corresponding components are preloaded. On search.yahoo.com you can see extra components being requested as you type in the search box. Anticipated preload: preloading is applied ahead of launching a redesigned page. After a redesign, users often complain that "the new site is cool, but it's slower than before." The problem may be that users had a full cache of your old site and nothing cached for the new one. You can mitigate this by preloading components before the new site even launches: use the old site's idle browser time to load the images and scripts used by the new site, speeding up its first visits.
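Unconditional preload can be sketched in a few lines of JavaScript (the image URLs are placeholders, not real assets):

```javascript
// Preload images during idle time after onload so they are already
// in the cache when the user navigates to the next page.
function preloadImages(urls) {
  var images = [];
  for (var i = 0; i < urls.length; i++) {
    var img = new Image();   // browser Image object; assigning src starts the fetch
    img.src = urls[i];
    images.push(img);        // keep references so the fetches aren't abandoned
  }
  return images;
}

// window.onload = function () {
//   preloadImages(['/img/next-page-sprite.png', '/img/logo-large.png']);
// };
```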

7. Reduce the number of DOM elements. A complex page means more bytes to download, and it also means slower DOM access in JavaScript. It makes a difference, for example, whether you loop over 500 or 5,000 DOM elements when attaching an event handler. A large number of DOM elements is a sign that there is something in the page's markup that could be simplified without removing content. Are you using tables for layout? Are you adding more <div> elements only to fix layout issues? There may be a better, more semantic tag for the job. The YUI CSS Utilities are a great help with layout: grids.css for the overall layout, and fonts.css and reset.css to strip away the browser's default formatting. They offer a chance to re-examine the tags in your page, for instance to use a <div> only when it makes sense semantically, and not just because it renders a new line. The number of DOM elements is easy to check: just type document.getElementsByTagName('*').length into the Firebug console. And how many DOM elements is too many? Compare against similar pages that have good markup. The Yahoo! home page, for example, is a content-rich page that uses only about 700 elements (HTML tags).

8. Split components across domains. Splitting your page components across domains lets you maximize parallel downloads. Because of the DNS lookup penalty, first make sure you use between two and four domain names. For example, you could host your HTML document and dynamic content on www.example.org, and split the page's components (images, scripts, CSS) between statics1.example.org and statics2.example.org. You can find more information in Maximizing Parallel Downloads in the Carpool Lane by Tenni Theurer and Patty Chi.

9. The iframe element lets you insert a new HTML document within the parent document. It is important to understand how iframes work before using them, so that they can be used effectively.

Advantages of iframes: they provide a security sandbox, and they allow scripts to be downloaded in parallel.

Drawbacks of iframes: an iframe has a cost even when it is empty, it blocks the page's onload event, and it is non-semantic. 10. No 404s. HTTP requests are expensive, so making an HTTP request and getting a useless response (such as 404 Not Found) is completely unnecessary; it degrades the user experience without any benefit. Some sites replace their 404 response with a helpful "Were you looking for ***?" page, which improves the user experience but also wastes server resources (database queries, etc.). Particularly bad is when a link to an external JavaScript file is wrong and returns a 404. First, this download blocks parallel downloads; second, the browser tries to parse the body of the 404 response as JavaScript code, looking for anything usable in it.

11. Use a content delivery network. The user's proximity to your web server affects response times. Deploying your content across multiple servers in different geographic locations makes pages download faster. But where should you start? Don't begin by trying to redesign your web application to work in a distributed architecture. Changing the architecture could involve daunting tasks such as synchronizing session state across servers and replicating database transactions. Fortunately, those architectural steps may be unnecessary for closing the distance between users and your content. Remember that 80% to 90% of the end-user response time is spent downloading the page's components: images, stylesheets, scripts, Flash, and so on. This is the golden rule of web performance. Rather than starting with the difficult task of rearchitecting your application, it is better to distribute your static content first. This not only cuts response times, it is also far easier to do thanks to content delivery networks. A content delivery network (CDN) is a collection of web servers distributed across multiple geographic locations to deliver content to users more quickly. The server chosen to deliver content to a given user is typically the one closest to the user on the network, for example the one with the fewest network hops or the fastest response time. Some large Internet companies own their own CDN, but using a CDN service such as Akamai Technologies, Mirror Image Internet, or Limelight Networks is expensive. A start-up or a personal web site may not have the budget for a CDN, but as the target audience grows larger and more global, a CDN becomes necessary for fast responses. In Yahoo!'s case, moving static content to a CDN improved end-user response times by 20% or more. Switching to a CDN is a relatively easy change that dramatically improves the speed of your site with little modification to your code.

12. Add an Expires or Cache-Control header. This rule has two aspects: for static content, set a far-future Expires header ("never expire"); for dynamic content, use an appropriate Cache-Control header to help the browser issue conditional requests. Web page design is getting richer and richer, which means more scripts, stylesheets, images, and Flash in your pages. A first-time visitor to your page makes many HTTP requests, but by using Expires headers you make those components cacheable, avoiding unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on everything: scripts, stylesheets, Flash, and so on. Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making pages load faster. The web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far-future Expires header, telling the browser that the response won't be stale until April 15, 2010: Expires: Thu, 15 Apr 2010 20:00:00 GMT. If your server is Apache, you can use the ExpiresDefault directive to set an expiration date relative to the current date. Keep in mind that if you use a far-future Expires header, you must change the component's filename whenever its content changes. At Yahoo! we often make this a build step: a version number is embedded in the component's filename, for example yahoo_2.0.6.js. Using a far-future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests for first-time visitors, whose browser cache is empty. The performance impact of this technique therefore depends on how often users hit your pages with a "primed" cache (one that already contains all of the page's components). We built a set of metrics at Yahoo! and found that 75 to 85% of all page views are made with a primed cache. By using a far-future Expires header, you increase the number of components the browser caches and reuses on subsequent page views, without sending a single byte over the network.
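On Apache, the far-future policy described above might look like this (mod_expires must be enabled; the ten-year interval is an illustration, not a recommendation from the text):

```apache
# Serve components with an Expires date far in the future.
ExpiresActive On
ExpiresDefault "access plus 10 years"
```

Remember that this makes versioned filenames (like yahoo_2.0.6.js) mandatory whenever a component changes.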

13. Gzip components. The time spent transferring HTTP requests and responses across the network can be significantly reduced by front-end decisions. True, the end user's bandwidth, their Internet provider, their proximity to peering exchange points, and so on are beyond the control of the site developer. But other factors affect response times too: you can cut response times by reducing the size of the HTTP response. Since HTTP/1.1, web clients have indicated support for compression with the Accept-Encoding header in the request: Accept-Encoding: gzip, deflate. If the web server sees this header in the request, it may compress the response using one of the methods the client listed. The web server notifies the client via the Content-Encoding response header: Content-Encoding: gzip. Gzip is currently the most popular and most effective compression method. It was developed by the GNU project and standardized in RFC 1952. The only other compression format you're likely to see is deflate, but it is less effective and less popular. Gzipping generally reduces the response size by about 70%, and roughly 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module for gzipping depends on your version: Apache 1.3 uses mod_gzip, while Apache 2.x uses mod_deflate. There are known issues with browsers and proxies that can cause a mismatch between what the browser expects and what it actually receives with respect to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers declines, and the Apache modules help by automatically adding the appropriate Vary response header. Servers choose what to gzip based on file type, but many limit what gets compressed too severely. Most web sites gzip their HTML documents. It is also worthwhile to gzip your scripts and stylesheets, but many sites miss this opportunity. In fact, it's worthwhile to gzip any text response, including XML and JSON. Image and PDF files should not be gzipped because they are already compressed; trying to gzip them wastes CPU and can even increase file sizes. Gzipping as many file types as possible is an easy way to reduce page weight and improve the user experience.
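For Apache 2.x, a minimal mod_deflate setup along these lines compresses only the text responses (the MIME types shown are illustrative and can be extended):

```apache
# Gzip HTML, CSS, and JavaScript, but leave images and PDFs alone.
AddOutputFilterByType DEFLATE text/html text/css application/x-javascript
```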

14. Configure ETags. Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the original on the server ("entity" is another word for "component": images, scripts, stylesheets, etc.). ETags were added to provide a validation mechanism more flexible than the Last-Modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string be enclosed in double quotes. The origin server specifies the component's ETag using the ETag response header:

HTTP/1.1 200 OK
Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
ETag: "10c24bc-4ab-457e1c1f"
Content-Length: 12195

Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, saving the 12195 bytes of the response body in this example:

GET /i/yahoo.gif HTTP/1.1
Host: us.yimg.com
If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
If-None-Match: "10c24bc-4ab-457e1c1f"
HTTP/1.1 304 Not Modified

The problem with ETags is that they are typically constructed from attributes unique to the specific server hosting the site. ETags won't match when a browser downloads a component from one server and later validates it against a different server, which is all too common on sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding across multiple servers. The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Even if a file lives in the same directory on different servers, with the same size, permissions, timestamp, and so on, its inode differs from one server to the next. IIS 5.0 and 6.0 have a similar issue: the IIS ETag format is Filetimestamp:ChangeNumber, where ChangeNumber is a counter that tracks IIS configuration changes, and it varies across the IIS servers behind a site. The result is that the ETags produced by Apache and IIS for exactly the same component differ from server to server, so users don't receive the small, fast 304 response; instead, they get a normal 200 response and download the entire component again. If your site is hosted on a single server, this isn't a problem. But if your site runs on multiple servers with Apache's or IIS's default ETag configuration, your users get slower pages, your servers transfer more content and consume more bandwidth, and proxies can't cache your content effectively. Worse, even if your components have a far-future Expires header, a conditional GET is still made whenever the user clicks Reload or Refresh. If you're not taking advantage of the flexible validation that ETags provide, it's better to remove the ETag altogether. Last-Modified validation is based on the component's timestamp, and removing the ETag header reduces the size of both the response and subsequent request headers. A Microsoft support document describes how to remove ETags in IIS. In Apache, simply add this line to your configuration file: FileETag None

15. Flush the buffer early. When users request a page, it can take the backend anywhere from 200 to 500 milliseconds to stitch the HTML together. During that time the browser sits idle, waiting for data to arrive. In PHP you have the flush() function, which lets you send your partially ready HTML response to the browser, so the browser can start fetching components (scripts, stylesheets) while your backend works on the rest of the page. The benefit is greatest on busy backends or lightweight frontends. A good place to flush is right after the closing tag of the HTML head, because the head is usually easy to produce and typically references the page's CSS and JavaScript files, which the browser can then download in parallel while the backend assembles the rest of the HTML. Example:

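The example itself appears to have been dropped from the text; a minimal sketch of what it likely showed (file names are illustrative):

```php
<html>
<head>
  <title>Example</title>
  <link rel="stylesheet" href="styles.css" />
  <script src="menu.js"></script>
</head>
<?php flush(); /* send the head now so CSS/JS download in parallel */ ?>
<body>
  <?php /* ...slow, expensive page generation continues here... */ ?>
</body>
</html>
```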

To prove the benefits of this technique, Yahoo! Search took the lead in researching it and conducting real user tests.

16. Use GET for Ajax requests. The Yahoo! Mail team found that for XMLHttpRequest, POST is implemented in browsers as a two-step process: the headers are sent first, then the data. So it is best to use GET, which sends everything in a single TCP packet (unless you have a lot of cookies). The maximum URL length in IE is 2K, so you cannot use GET if you need to send more than 2K of data. An interesting side effect is that a POST without any data to send behaves like a GET anyway. Still, based on the HTTP specification, GET means "get" data, so it makes more sense (semantically) to use GET when you are only retrieving data, and to use POST when sending data to be saved on the server.
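A minimal sketch of issuing such a single-packet GET (the /ajax/mail endpoint and parameters are hypothetical):

```javascript
// Serialize parameters into a query string so the request can be
// sent as a GET instead of a two-step POST.
function toQueryString(params) {
  var parts = [];
  for (var key in params) {
    if (Object.prototype.hasOwnProperty.call(params, key)) {
      parts.push(encodeURIComponent(key) + '=' + encodeURIComponent(params[key]));
    }
  }
  return parts.join('&');
}

// In the browser you would then do:
// var xhr = new XMLHttpRequest();
// xhr.open('GET', '/ajax/mail?' + toQueryString({ folder: 'inbox', page: 2 }));
// xhr.send(null);
```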

17. Put stylesheets at the top. While researching performance at Yahoo! we found that moving stylesheets into the document's head makes pages appear to load faster. This is because putting stylesheets in the head allows the page to render progressively. Front-end engineers who care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slow connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case, the HTML page itself is the progress indicator. When the browser loads the page progressively, the header, the navigation bar, the logo at the top, and so on all serve as visual feedback for the user waiting for the page. The problem with putting stylesheets near the bottom of the document is that in many browsers, including Internet Explorer, it interrupts progressive rendering: the browser refrains from rendering to avoid having to redraw elements if their styles change, leaving the user staring at a blank page. The HTML specification clearly states that stylesheets are to be included in the head of the page: "Unlike A, LINK may only appear in the HEAD section of a document, although it may appear any number of times." Neither a blank white screen nor a flash of unstyled content is worth the risk. The best solution is to follow the HTML specification and load your stylesheets in the document head.

18. Avoid CSS expressions. CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. Internet Explorer has supported them since version 5. For example, the background color could be set to alternate every hour: background-color: expression( (new Date()).getHours() % 2 ? "#B8D4FF" : "#F08A00" ); As shown here, the expression method accepts a JavaScript expression, and the CSS property is set to the result of evaluating it. The expression method is ignored by other browsers, so it can be useful for setting properties in Internet Explorer alone as part of a consistent cross-browser design. The problem with expressions is that they are evaluated more frequently than most people expect: not only when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to a CSS expression lets you track how often it is evaluated; moving the mouse around a page can easily produce more than 10,000 evaluations. One way to reduce the number of times a CSS expression is evaluated is to use a one-time expression: on its first evaluation it sets the style property to an explicit value, which replaces the CSS expression. If the style property must change dynamically throughout the life of the page, using event handlers instead of CSS expressions is a viable alternative. If you must use CSS expressions, remember that they may be evaluated thousands of times and can affect the performance of your page.
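The one-time expression technique can be sketched like this (altBgcolor is a name chosen for illustration):

```javascript
// Called once from a CSS expression. It writes an explicit value into
// the element's inline style, which overrides (and thus retires) the
// CSS expression, so it is never evaluated again.
function altBgcolor(elem) {
  elem.style.backgroundColor =
    (new Date()).getHours() % 2 ? '#B8D4FF' : '#F08A00';
}
```

paired with the CSS: background-color: expression( altBgcolor(this) );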

19. Use external JavaScript and CSS. Many of these performance rules deal with how external components are managed. Before those considerations arise, however, you should ask a more basic question: should JavaScript and CSS live in external files, or inline within the page itself? In practice, using external files generally produces faster pages, because the JavaScript and CSS files are cached by the browser. JavaScript and CSS inlined in the HTML document get downloaded again with the HTML document on every request. This reduces the number of HTTP requests but increases the size of every HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the HTML document shrinks without increasing the number of HTTP requests. The key factor, then, is how often external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged with various metrics. If users of your site browse multiple pages per session, and many of your pages reuse the same scripts and stylesheets, cached external files deliver the greater benefit. Many sites lack the means to build such metrics; for them, the best bet is to reference JavaScript and CSS as external files. The one exception where inlining is preferable is a site's home page, such as the Yahoo! front page and My Yahoo!. Home pages that have few (perhaps only one) page views per session may find that inlining JavaScript and CSS results in faster end-user response times. For a front page with high traffic, there is a technique that balances the reduction in HTTP requests from inlining against the caching benefits of external files: inline the JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading, so they are already in the browser's cache by the time they are needed on secondary pages.

20. Minify JavaScript and CSS. Minification is the practice of removing unnecessary characters from code to reduce its size, thereby saving download time. When code is minified, all comments are removed, as are unneeded whitespace characters (spaces, newlines, tab indents). With JavaScript this improves response time because the size of the downloaded file shrinks. The two most popular tools for minifying JavaScript are JSMin and the YUI Compressor; the YUI Compressor can also minify CSS. Obfuscation is an alternative optimization that can be applied to source code. It is more complex than minification, and the extra complexity of the obfuscation process makes it more likely to introduce bugs. In a survey of the ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation shrinks code further, minifying JavaScript carries less risk. In addition to minifying external scripts and stylesheet files, inlined script and style blocks can and should be minified as well.

21. Choose link over @import. The previous best practice stated that CSS should be at the top to allow progressive rendering. In IE, @import behaves the same as putting the link tag at the bottom of the page, so it's best not to use it.
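Concretely, prefer the first form below over the second (styles.css is a placeholder name):

```html
<link rel="stylesheet" type="text/css" href="styles.css" />

<!-- avoid: in IE this behaves like a link tag at the bottom of the page -->
<style type="text/css">
  @import url("styles.css");
</style>
```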

22. Avoid AlphaImageLoader. The AlphaImageLoader filter is used to fix the rendering of semi-transparent PNG images in IE versions below 7. The problem with this filter is that it blocks rendering and freezes the browser while the image is being downloaded. It is also applied once per element, not per image, which increases memory consumption, so its costs are multiplied. The best approach is to avoid AlphaImageLoader entirely and use gracefully degrading PNG8 images instead, which work fine in IE. If you really do need AlphaImageLoader, use the underscore hack _filter so as not to penalize users of IE7 and later.
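If the filter truly cannot be avoided, the underscore hack confines it to IE6 and below, roughly like this (the image paths are illustrative):

```css
.logo {
  background: url(logo.png);  /* PNG8 or other fallback for modern browsers */
  /* only IE <= 6 parses underscore-prefixed properties: */
  _background: none;
  _filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(
             src='logo.png', sizingMethod='crop');
}
```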

23. Put scripts at the bottom. The problem with scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers open no more than two concurrent connections per hostname. If your images are spread across multiple hostnames, more than two downloads can proceed in parallel; but while a script is downloading, the browser will not start any other downloads, even from different hostnames.

In some cases it is not easy to move a script to the bottom of the page. If the script uses document.write to insert page content, for example, it cannot be moved down; there can also be scoping issues. An often-used alternative is a deferred script: the DEFER attribute indicates that the script contains no document.write, and tells the browser to continue rendering. Unfortunately, Firefox does not support the DEFER attribute, and in Internet Explorer the script may be deferred, but not as much as one would hope. If a script can be deferred, it can also be moved to the bottom of the page, and that will make your pages load faster.
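Where a script contains no document.write, the DEFER attribute is added directly to its tag (the file name here is hypothetical):

```html
<!-- Downloads without blocking other components; runs after parsing.
     Ignored by the Firefox versions discussed above, so the script
     must also be safe to execute immediately. -->
<script type="text/javascript" src="widgets.js" defer="defer"></script>
```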

24. Eliminate duplicate scripts. Referencing the same JavaScript file twice on one page hurts performance. You might assume this is rare, but a survey of the top 10 US websites revealed that two of them contained duplicate script references. Two main factors drive this strange phenomenon: team size and the number of scripts. Duplicate scripts cause unnecessary HTTP requests and wasted JavaScript execution, both of which degrade site performance.

The unnecessary HTTP requests occur in Internet Explorer but not in Firefox. In Internet Explorer, if a script is referenced twice and is not cacheable, two HTTP requests are generated during page loading; even if the script is cacheable, extra HTTP requests occur when the user reloads the page. Beyond the extra requests, time is wasted evaluating the script multiple times, and this redundant execution happens in both Internet Explorer and Firefox, regardless of whether the script is cacheable.

One way to avoid accidentally referencing the same script twice is to use a script-management module in your templating system. The most common way to reference a script in an HTML page is with the script tag's src attribute; in PHP, this can be replaced by a method such as insertScript() that keeps track of which scripts have already been included.
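The idea behind such a script manager, sketched here in JavaScript rather than the article's PHP (function and variable names are illustrative), is to record every script already emitted and silently skip duplicates:

```javascript
// Sketch of a duplicate-filtering script manager.
var insertedScripts = {};

function insertScript(src, doc) {
  if (insertedScripts[src]) {
    return false;                 // already referenced once: skip it
  }
  insertedScripts[src] = true;
  var el = doc.createElement("script");
  el.src = src;
  doc.body.appendChild(el);
  return true;
}
```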

25. Minimize DOM access. Accessing DOM elements with JavaScript is slow, so to keep pages as responsive as possible: cache references to elements you have already accessed; update nodes "offline" and then add them to the tree; and avoid fixing layout problems with JavaScript. For more information, see Julien Lecomte's YUI article "High-Performance Ajax Applications".
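An "offline" update can be sketched with a DocumentFragment: the nodes are assembled outside the live tree, so the document itself is touched only once (the function name is illustrative):

```javascript
// Build list items off-DOM, then attach them in a single operation.
function appendItems(listEl, labels, doc) {
  var frag = doc.createDocumentFragment(); // not part of the live tree
  labels.forEach(function (label) {
    var li = doc.createElement("li");
    li.textContent = label;
    frag.appendChild(li);
  });
  listEl.appendChild(frag);                // one touch of the real DOM
}
```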

26. Develop smart event handlers. Sometimes pages feel unresponsive because too many event handlers are attached to elements of the DOM tree and are triggered too often. That is why event delegation is a good approach: if you have 10 buttons inside a div, attach a single event handler to the div instead of one per button, then catch the event as it bubbles up and determine which element fired it.

You also do not need to wait for the onload event before manipulating the DOM tree; you only need the elements you want to access to be present in the tree, without waiting for all the images to load. You might want to use the DOMContentLoaded event in place of onload, but until it is supported by all browsers you can use the onAvailable method of the YUI Event utility.
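The button example can be sketched as follows (a minimal illustration; the function name is hypothetical):

```javascript
// One click handler on the container instead of one per button.
// The handler inspects the bubbled event to find which button fired,
// and keeps working for buttons added to the container later.
function delegateClicks(container, tagName, handler) {
  container.addEventListener("click", function (e) {
    var target = e.target;
    if (target && target.tagName === tagName) {
      handler(target);
    }
  });
}

// Usage: delegateClicks(document.getElementById("toolbar"), "BUTTON",
//                       function (btn) { /* handle one button */ });
```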

27. Reduce cookie size. HTTP cookies are used for a variety of purposes, such as authentication and personalization. Cookie data is exchanged between the web server and the browser in HTTP headers, so keeping cookies as small as possible reduces user response time. For more information, see Tenni Theurer and Patty Chi's article "When the Cookie Crumbles." The main findings of that research include:

- Eliminate unnecessary cookies.
- Keep cookies as small as possible to minimize the impact on user response time.
- Set cookies at the appropriate domain level so that subdomains are not affected.
- Set a reasonable Expires date: an earlier Expires date removes cookies sooner and improves response time.

28. Use cookie-free domains for page content. When the browser requests a static image and sends cookies along with the request, the server has no use for those cookies; they are network traffic for no good reason. You should therefore make sure requests for static content are cookie-free. Create a subdomain and host all your static content there. If your domain is www.example.org, you can host static content on static.example.org. However, if you have set cookies on the top-level domain example.org rather than on www.example.org, then all requests to static.example.org will include those cookies. In that case you can buy a whole new domain for static content and keep it cookie-free: Yahoo! uses yimg.com, YouTube uses ytimg.com, Amazon uses images-amazon.com, and so on.

Another benefit of hosting static content on a cookie-free domain is that some proxies may refuse to cache components that are requested with cookies. On a related note, if you are deciding whether to use example.org or www.example.org for your home page, consider the cookie impact: omitting the www leaves you no choice but to set cookies on *.example.org, so for performance reasons it is better to use the www subdomain and set cookies there.

29. Optimize images. After the designer has finished creating the pages, don't rush to upload them to the web server. A few things should be done first:

- Check whether the number of colors in your GIFs matches the palette size. This is easy to check with ImageMagick on the command line: identify -verbose image.gif. If you find that an image uses only 4 colors but lists 256 colors in its palette, the image has room for compression.
- Try converting GIFs to PNGs to see whether that saves space; in most cases it does. Designers were often reluctant to use PNG images due to limited browser support, but that is a thing of the past. The only real issue is alpha-channel translucency in true-color PNG; GIF, however, is not true color and does not support translucency either, so anything a GIF can do, a palette PNG (PNG8) can do too (except animation). The simple ImageMagick command convert image.gif image.png is usually enough. "What we're saying is: Give PNG a chance!"
- Run pngcrush (or another PNG optimization tool) on all your PNG images, for example: pngcrush image.png -rem alla -reduce -brute result.png
- Run jpegtran on all your JPEG images. This tool losslessly optimizes them and strips comments and other useless information (such as EXIF data): jpegtran -copy none -optimize -perfect src.jpg dest.jpg

30. Optimize CSS sprites. Arrange the images in the sprite horizontally; arranging them vertically slightly increases the file size. Combining similar colors in a sprite keeps the color count low, ideally under 256 colors so that the PNG8 format can be used. Be mobile-friendly and don't leave large gaps between the images in the sprite. This does not affect the file size much, but it lets the user agent decompress the image into a pixel map with less memory: a 100×100 image is 10 thousand pixels, while a 1000×1000 image is 1 million pixels.

31. Don't scale images in HTML. Don't use a larger image than you need just because you can set its width and height in HTML. If you need an image displayed at 100×100 pixels, e.g. `<img width="100" height="100" src="mycat.jpg" alt="My Cat" />`, then mycat.jpg should actually be 100×100 pixels, not a scaled-down 500×500 image.

32. Make favicon.ico small and cacheable. favicon.ico is an image file that lives in the server's root directory. You can't avoid dealing with it, because the browser will request it whether or not you care about it, so it is best not to return a 404 Not Found response. Since it is on the same server, cookies are sent with every request for it. The file also interferes with the download sequence: in IE, for example, if you request extra components in onload, the favicon is downloaded before those components. To reduce the drawbacks of favicon.ico:

- Keep the file small, preferably under 1K.
- Set an Expires header for it when appropriate (that is, when you don't plan to change favicon.ico again, since you cannot rename the file when you replace it). You can safely set the Expires header several months in the future; checking the last modified time of your current favicon.ico can help you judge.

ImageMagick can help you create small favicons.

33. Keep components under 25K. This limit exists mainly because the iPhone cannot cache files larger than 25K. Note that this is the uncompressed size, which is why minification matters: gzip compression alone may not be enough to get a file below the limit. For more information, see Wayne Shea and Tenni Theurer's document "Performance Research, Part 5: iPhone Cacheability - Making It Stick".

34. Pack components into a multipart document. Packing your page components into a multipart document is like an email with multiple attachments: it lets you fetch several components with a single HTTP request (remember: HTTP requests are expensive). When you use this technique, first check whether the user agent supports it (the iPhone does not).