Preface

When it comes to performance tuning, many things come to mind: how to tweak the Webpack configuration to speed up builds and optimize the output, how to improve page performance, and so on.

In fact, there are many more places where performance optimization can pay off. Today I'll cover some areas we don't often pay attention to, following the chain from local development through the CI build stage to final deployment.

The build phase

The build phase can be divided into two parts:

  • Local development
  • CI build

The two parts have different focuses. The former cares more about build speed and has no requirements on the output artifacts: local development builds happen very frequently, and slowness directly reduces development efficiency. The latter cares more about the artifacts themselves, such as file size and count; build speed still matters, but the goal is to improve it without compromising the output.

I’ll talk about performance optimizations we can do in each of these sections.

Local development

Taking Webpack as an example, the local development build is one of the things we run into most often in daily work, and any speedup here is immediately noticeable.

This part is mainly divided into two stages:

  • Starting the project, e.g. `yarn start`
  • Modifying and saving code, which triggers a hot update

The first stage happens relatively infrequently, and slower speed there is tolerable in most cases.

But the second stage is a high-frequency operation: a front-end developer performs it dozens or even hundreds of times a day, and for medium to large projects each run can take more than 10 seconds. Suppose a developer changes the code 100 times a day; that adds up to 15 minutes or more of waiting every single day.

Is there any way to improve this? Yes, and many readers will already know the product: Vite.

Vite is NoBundle, also called UnBundle or Bundleless; they all mean the same thing. There are similar competitors such as Snowpack and WMR, though none as well known as Vite.

If you've already used Vite, you'll have a sense of its startup and hot updates measured in seconds, which is a huge improvement over Webpack.

The data above comes from an article about migrating a project with 150,000 lines of code to Vite. It's a very large project, and both startup and hot updates improved roughly ten-fold, making for a much smoother development experience. I also worked on my company's internal NoBundle solution before and saw comparable numbers.

So why does Vite deliver such an experience, and how does it work? Let me walk through it roughly.

First let's look at how a Webpack build works, using Webpack 4 and a medium-to-large project as the reference. After we run `yarn start`, Webpack starts bundling everything: it builds the dependency graph and splits the code into a number of files. Building the dependency graph and bundling can take several minutes.

When a hot update is triggered, Webpack again has to find the affected dependency chain and bundle it once more, so this process is also time-consuming. Of course, with Webpack 5's persistent caching things should be considerably faster.

But for Vite, that’s not necessary.

First, Vite uses esbuild to pre-build dependencies. esbuild is a blazing-fast bundler written in Go, tens or even hundreds of times faster than JS-based tools. Vite uses it to process dependency modules and set up the ESM environment.

Thanks to ESM's on-demand loading, we don't need to build a dependency graph and bundle files at startup. Instead, the dev server compiles whatever the browser requests on the fly (compiling TS, injecting hot-update code, and so on), which lets the project start almost instantly.
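To make the on-demand model concrete, here is a deliberately simplified sketch of the bare-import rewrite a NoBundle dev server performs. Real Vite parses modules with es-module-lexer and is far more robust; the regex and output path below are purely illustrative:

```javascript
// Illustrative only: rewrite bare imports (e.g. 'vue') to a served path,
// the way a NoBundle dev server does so the browser can fetch them as ESM.
function rewriteBareImports(code) {
  return code.replace(
    /from\s+['"]([^./'"][^'"]*)['"]/g, // skip relative ('./x') and absolute ('/x') paths
    (_, id) => `from '/node_modules/.vite/deps/${id}.js'`
  );
}

console.log(rewriteBareImports(`import { createApp } from 'vue'`));
// → import { createApp } from '/node_modules/.vite/deps/vue.js'
```

The browser then requests each module individually, and the server only compiles the files actually asked for.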

Finally, when a change triggers a hot update, Vite doesn't need to do what Webpack does. It simply finds the smallest affected chain of modules (usually just the edited file), invalidates it, and lets the browser re-fetch those modules, using a hash query parameter to bust the browser cache.

Thanks to these characteristics plus esbuild, Vite rarely slows down noticeably as the codebase grows, whereas for Webpack, project size clearly slows down builds.

Readers may see this, think it sounds great, and get ready to adopt it. But let me pour some cold water on that. Judging from our internal NoBundle experience and my conversations with people at large companies who have built similar solutions, the cost of migrating existing business code is huge. The improvement is real, but given the migration cost, the return on investment may not be so attractive. There is currently no good off-the-shelf migration path, and different projects will hit different pitfalls, the biggest source of which is the ESM environment itself.

That said, I believe NoBundle will become the mainstream local development build in the future, simply because the development experience is so smooth.

Readers may also ask whether Vite will kill off Webpack. I don't think the two really compete: Webpack handles complex, highly customized scenarios that Vite can't (and probably won't) cover, while Vite is designed to improve the development experience; at best it replaces Webpack's role during development.

Here are some resources for interested readers:

  • Yunqian big toy Vite
  • 150,000 lines of code migrated to Vite series of articles

Finally, if you do want to migrate in your own business, be sure to read the migration articles out there; they can help you get past the pitfalls quickly.

CI

A CI build can be roughly divided into three steps:

  1. Install dependencies
  2. Code quality checks
  3. Build

The first two steps don't involve much, so I'll go through them quickly.

Install dependencies

Dependency installation is time-consuming; we can speed it up by:

  • Switching the registry to the Taobao mirror or your own private registry
  • Definitely caching node_modules
  • Trying Yarn 2 if you can

A few words about Yarn 2. There are two options when upgrading: keep the node_modules linker, or abandon node_modules in favor of PnP (Plug'n'Play).

The former requires almost no code changes during migration and still improves install speed. The latter requires changes in many places, so there may be some internal resistance to rolling it out, but it can greatly reduce the dependency footprint and speed up installation even further; evaluate the cost-benefit for yourself.

Code quality assurance

Code quality checks generally fall into two parts:

  • ESLint
  • Unit tests

Of course, there are other quality assurance measures, which I won't list here.

ESLint itself doesn't leave much room for optimization in CI, but most projects should already be running it incrementally when committing locally, though many people overlook this: husky + lint-staged lets you lint only the staged files rather than the whole repository. This is an incremental approach, and incremental thinking is something we'll lean on repeatedly for performance tuning.

As for unit tests, most front-end readers probably haven't written many. But if you've built npm packages or Node services, you'll find unit testing quite necessary.

Large projects can have a lot of test cases. Our internal component library, for example, has more than 1,000. Running `yarn test` locally takes two or three minutes, let alone in the cloud. Yet most commits don't actually affect that many test cases, so running the full suite every time wastes a lot of time.

At this point you'll recall the incremental idea mentioned above. Yes, we can use it here to speed up unit testing. If you use Jest, look at the `--onlyChanged` flag and its relatives (such as `--changedSince`) to run tests incrementally.
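As a sketch of how this might look in CI, the script below builds a Jest invocation that only runs tests affected by changes since a base ref. `--changedSince` is a real Jest CLI flag; the environment variable name is an assumption about how your CI exposes the last release commit:

```javascript
// Sketch: incremental unit testing in CI with Jest's --changedSince.
// LAST_RELEASE_REF is a hypothetical env var your pipeline would set.
const base = process.env.LAST_RELEASE_REF || 'origin/main';
const cmd = `jest --changedSince=${base} --passWithNoTests`;
console.log(cmd); // your CI step would then execute this command
```

`--passWithNoTests` keeps the step green when a commit touches no testable code at all.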

Build

When it comes to build optimization, many readers will say they already know this. After all, optimizing the Webpack configuration is a worn-out interview topic, and there are endless articles about it.

So I won't go over how to configure Webpack this way or that; look it up online if you need to.

Besides tuning the Webpack configuration for performance, upgrading Webpack itself can also bring big surprises.

For example, after upgrading from 4 to 5, we can use these new features to improve performance:

  • Persistent caching, mentioned above, which speeds up secondary startup and HMR
  • Better tree shaking, which removes more unused exports and further reduces artifact size
  • Prepack-style optimization, which shrinks code through compile-time evaluation
  • Module Federation, which loads remote modules or dependencies at runtime, reducing build time

I've listed some of the benefits of upgrading Webpack above; if a feature interests you, search for the details.
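As one example, here is a minimal Module Federation sketch for a consuming ("host") app; the remote name, URL, and shared dependency are illustrative assumptions, not values from any real project:

```javascript
// webpack.config.js — minimal Module Federation sketch (Webpack 5).
// The remote name/URL below are hypothetical placeholders.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        // Load `ui` from a separately deployed build at runtime
        // instead of bundling it into this app's artifacts.
        ui: 'ui@https://cdn.example.com/ui/remoteEntry.js',
      },
      shared: { react: { singleton: true } }, // avoid duplicate React copies
    }),
  ],
};
```

Because the `ui` modules are fetched at runtime, they drop out of this app's build entirely, which is where the build-time saving comes from.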

Besides the usual build-speed advice, there's one step few people pay attention to that has a large impact on build time: code minification.

If you have used the Speed Measure Plugin (speed-measure-webpack-plugin), you'll have seen this for yourself. For large applications, minification can still take 20 to 30 seconds even with multiple threads.

Of course, we can optimize this phase with the tool mentioned earlier: esbuild. esbuild is all about speed, and its official benchmark chart shows it crushing other bundlers.

A terrifying improvement of hundreds of times (though I couldn't reproduce numbers like that in my own tests). Even so, using esbuild for the entire production build is still unrealistic: some problems remain, the biggest being its CSS handling. However, esbuild can also be used just for minification, where the risk is basically negligible. In my business projects this sped the step up by a measured 30% to 40%, which is considerable.
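For reference, swapping Terser for esbuild in the minification step can be done with the esbuild-loader package. The sketch below uses its v2 API, which exported `ESBuildMinifyPlugin`; newer versions expose `EsbuildPlugin` instead, so check the version you install:

```javascript
// webpack.config.js — replace the default Terser minifier with esbuild.
// ESBuildMinifyPlugin is the esbuild-loader v2 export; newer versions
// rename it to EsbuildPlugin.
const { ESBuildMinifyPlugin } = require('esbuild-loader');

module.exports = {
  optimization: {
    minimize: true,
    minimizer: [
      // Only the minification step changes; bundling stays with Webpack,
      // so the risk surface is much smaller than a full esbuild build.
      new ESBuildMinifyPlugin({ target: 'es2015' }),
    ],
  },
};
```

This is the "low risk, decent win" trade-off described above: Webpack still produces the bundles, and esbuild only compresses them.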

Beyond all of the above, we can also apply incremental thinking to the build itself.

For multi-page applications, the code changed in a given release usually doesn't affect every entry. The unaffected entries don't need to be rebuilt; we can simply reuse the previous artifacts. Following this idea, before each build we find all files changed since the last release and the entries they affect, then dynamically modify Webpack's entry configuration to achieve an incremental build.

Here’s the general idea of incremental building.

The first step is to find the files that have changed since the last release, which takes a single command:

git diff --name-only {git tag / commit sha}

Don't forget to tag the release or record the commit SHA after each deployment, and pass it in the next time you run the command.

Once we have the changed file names, we need to figure out which entries they affect, which means building a dependency tree. Webpack can build this for us, but we don't need anything that heavy; just pick a library focused on dependency trees, such as madge.

Then we just match the changed files against the tree to find the affected entries, and dynamically set the `entry` property of the Webpack configuration before building.
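The matching step can be sketched as a pure function. The graph shape below mimics what madge's `res.obj()` returns (a map from file to its dependencies); the file names are a made-up example:

```javascript
// Sketch: given a dependency graph { file: [deps...] }, a list of entries,
// and the changed files, return only the entries that need rebuilding.
function findAffectedEntries(graph, entries, changedFiles) {
  const changed = new Set(changedFiles);
  // An entry is affected if any file reachable from it has changed.
  const reaches = (file, seen = new Set()) => {
    if (changed.has(file)) return true;
    if (seen.has(file)) return false;
    seen.add(file); // guard against circular dependencies
    return (graph[file] || []).some((dep) => reaches(dep, seen));
  };
  return entries.filter((entry) => reaches(entry));
}

// Hypothetical example data:
const graph = {
  'pages/a.js': ['shared/util.js'],
  'pages/b.js': ['shared/other.js'],
  'shared/util.js': [],
  'shared/other.js': [],
};
console.log(
  findAffectedEntries(graph, ['pages/a.js', 'pages/b.js'], ['shared/util.js'])
);
// → [ 'pages/a.js' ]
```

The resulting list becomes the new `entry` value, so Webpack only rebuilds what actually changed.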

Finally, after the build, we replace only the artifacts of the rebuilt entries; the unchanged entries keep their previous artifacts and need no attention.

This incremental build for multi-page applications is quite similar to incremental deployment in a monorepo. If we only want to deploy the packages whose code changed, the logic is almost identical: find the affected packages (the counterpart of entries in a multi-page app), then build and publish only those.

If you have a multi-page application in your business, give this solution a try; the benefit should be considerable.

Summary

That was a lot of ground, so let me summarize the optimizations above:

  • In the development phase, try replacing Webpack with a NoBundle solution; it works well, but the migration cost is a real consideration
  • esbuild is great for both bundling and minification; the former still has risks and poorly handled cases, while the latter carries little risk and delivers a decent speedup
  • Speed up dependency installation via the registry source, caching, and upgrading to Yarn 2
  • The code quality stage of large projects can take too long; consider incremental solutions to speed it up, though running everything is fine if it makes you more comfortable
  • At the CI build level, Webpack configuration has been covered to death elsewhere, so learn it as needed; beyond that, upgrading Webpack brings unexpected benefits, though with migration costs
  • Multi-page applications don't have to build every entry every time, only the ones affected by the change
  • Incremental thinking comes up again and again in performance optimization

After the launch

Post-launch performance optimization is also a topic done to death, covered from the angles of network protocols, CSS, Webpack configuration, and so on, so I'd like to talk about something different.

Since we're talking about performance optimization, we first need to know where the performance problem actually is; otherwise we're optimizing blindly. I often ask candidates how to measure performance and which metrics to use (when their résumés claim such work), and in my view most answers miss the mark, leaving it unclear whether they really did the optimization themselves.

For example, when asked about performance metrics, nine out of ten people say white screen time. But white screen time alone is no longer a sufficient metric: most applications show a loading indicator or skeleton screen first, and it takes additional time before the content users actually care about appears. Judging "the user sees the page" purely by white screen time is therefore wrong, and that metric alone isn't enough to optimize the first-screen experience; we must also measure when users see the real, meaningful DOM.

For that we can collect the LCP (Largest Contentful Paint) metric, which records the timestamp at which the largest content element on the page is painted.

With LCP plus white screen time, we can optimize first-screen rendering correctly. You don't have to use LCP here either; you can instrument key DOM nodes yourself and collect custom timings.
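Collecting LCP in the browser is a few lines with the standard PerformanceObserver API; in the sketch below, `report` is a hypothetical upload function you would implement yourself:

```javascript
// Browser-only sketch: observe LCP and hand the value to a reporter.
// `report` is a hypothetical callback (e.g. it could beacon to your backend).
function observeLCP(report) {
  const observer = new PerformanceObserver((list) => {
    const entries = list.getEntries();
    // The browser may emit several LCP candidates; the latest one wins.
    const last = entries[entries.length - 1];
    report({ metric: 'LCP', value: last.startTime });
  });
  // `buffered: true` also delivers entries recorded before observation started.
  observer.observe({ type: 'largest-contentful-paint', buffered: true });
  return observer;
}
```

In practice you would stop observing and send the final value when the page is hidden, since the LCP candidate can still change until then.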

Besides LCP, there are many other new metrics. If you're interested, see my earlier article, which describes several of them and explains how to optimize for them.

Conclusion

That's all for this article. Performance optimization is a big topic, and there are many solutions beyond the familiar ones.

If you have any questions, please feel free to share them in the comments section.