preface

This article is based on an open course from the Geek Time front-end boot camp.

About the author

Sang Shilong ("Uncle Wolf"), a front-end technology expert at Alibaba and author of the Node.js "Wolf Book".

Background: a fast-moving field

Front-end development moves very fast. Before 2004, mastering the "Web Three Musketeers" (a powerful suite of web-page editing tools from Macromedia: Dreamweaver, Fireworks, and Flash) was enough to make you impressive; the front end was still relatively "pure". In the Web 2.0 era, with Ajax improving user experience through asynchronous refresh, a large number of libraries emerged, such as Prototype, jQuery, MooTools, YUI, Ext JS, and KISSY, all of which were essentially wrappers over browser compatibility quirks plus utility functions. Then came Backbone and Angular 1.x; MVC and MVVM; IoC (Inversion of Control, a concept from the Java framework Spring); front-end routing (similar to routing in server frameworks like Express and Koa); Virtual DOM (reducing DOM manipulation via a DOM diff algorithm); JSON APIs (interface specifications); and compile-to-JavaScript toolchains (CoffeeScript, Babel, TypeScript). Frameworks that once took six months or more to emerge are now popping up everywhere; a new framework can be born in a few weeks, and the front end has entered an unprecedented period of explosive growth.

Figure: changes from 2004 to 2016

The front end has gone through several stages

Front-end development is now so complex that it can be called modern Web development. To summarize, the front end has gone through four stages: the raw HTML/CSS/JavaScript stage; the library stage (jQuery and its peers); the MVC/MVVM framework stage (Backbone, Angular 1.x); and the componentized stage of React and Vue.js (the current trend).

The current state of the front end

At present, the front end as a whole is still in a period of development and has not settled into a fixed pattern, so the trend is upward. Its complexity, chaos, and diversity have also made it popular; people jokingly call it the "money end". Many students ask: should we learn front-end technology progressively, or jump straight into frameworks like React and Vue.js? I think both paths are valid. If you are under financial pressure, go for the "money": learn React or Vue.js directly, but afterwards be sure to go back and fill in the other three stages. If you have no financial pressure, plus time and patience, learning step by step is best. There are no shortcuts in programming; either way, you need to put in down-to-earth practice.

Once you write enough code, you will see the pattern: when people get tired of writing HTML, they introduce template engines. When they get tired of writing CSS (nesting, for example), they introduce CSS preprocessing tools such as Sass, Less, Stylus, and PostCSS. When they get tired of writing JavaScript, they introduce friendlier languages: CoffeeScript for those who know Ruby, TypeScript for those who know Java or C#.

As a result, front-end development became complex and entered the era of modern Web development.

Node.js

From the Node.js point of view, all front-end frameworks are just presentation technologies for the View layer; they can be integrated with various stacks and tailored to business requirements. Moreover, every front-end framework uses Node.js and npm as auxiliary development tools, drawing on a large number of Node.js modules, and different frameworks use very similar ones. If you have a good command of Node.js and npm, then when learning front-end technologies you only need to study the purely front-end parts; the reuse value is very high.

A great many front-end development modules are associated with Node.js.

These modules lead us to today's topic: the practice of, and thinking behind, big front-end engineering.

Build tools

Let's start with build tools. Preprocessors are where the front end's "advanced play" lives.

I always joke that the old days of plain HTML/JavaScript/CSS were too pure; now everything you write has to be compiled or transpiled. The advantage is that advanced features improve development efficiency. The disadvantage is just as obvious: your brain has to make a mental shift. That is genuinely painful — if you are not yet fluent in HTML/JavaScript/CSS, a newcomer needs time to adapt. So view this dialectically: fortune and misfortune are intertwined.

Beyond classic build tools such as Make, Ant, Rake, and Gradle, the Node.js world has spawned many more implementations of its own. The source code of a build tool is not especially complex, which is why so many exist; someone has even written a version of Make for Node.js.

The primary reason for choosing a build tool is automation. For repetitive tasks such as minification, compilation, unit testing, and linting, automation makes your work easier and easier; the more complex the project, the greater the value of automation. "Compilation" here covers templates, CSS preprocessing languages, JavaScript-friendly languages, and so on — the advanced play written into the source code.

Beyond code compilation, testing, style checking, and pre-launch optimization (merging, compression, obfuscation), build systems are ubiquitous throughout software engineering.

Grunt

Grunt was the first popular DSL-style (domain-specific language) build tool in the front-end world, and it played a leading role in front-end engineering. It describes tasks in a Gruntfile and extends its engineering capabilities through a plugin mechanism. Once tasks are correctly configured in the Gruntfile, the task runner automatically does most of the boring work for you or your team.

Besides its complex configuration, Grunt has a performance problem that many readers may never have hit: Grunt reads and writes intermediate files on disk, so it becomes a bottleneck when dealing with many files or large files.

Grunt's design was pretty neat, but the configuration was maddeningly hard to get right, and Gulp rose to the top.
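To make the Gruntfile style concrete, here is a minimal sketch (the file paths and plugin options are illustrative, not a drop-in config):

```javascript
// Gruntfile.js — a minimal sketch of Grunt's DSL-style configuration.
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      // merge all sources into one file
      dist: { src: ['src/**/*.js'], dest: 'build/app.js' }
    },
    uglify: {
      // minify the merged file; note each step reads/writes disk
      dist: { files: { 'build/app.min.js': ['build/app.js'] } }
    }
  });

  // each grunt-contrib-* plugin contributes one configurable task
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // running `grunt` with no arguments executes this pipeline
  grunt.registerTask('default', ['concat', 'uglify']);
};
```

The disk round-trip between `concat` and `uglify` is exactly the performance bottleneck mentioned above.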

Gulp

In short, Gulp is a build tool written in Node.js, based on streams, with a large ecosystem of plugins. Orchestrator is the task library Gulp depends on: it defines task execution order and dependencies and runs tasks with maximum concurrency, which is why Gulp is efficient. Streams themselves are great for reading and writing large files, and with all of the above, Gulp's popularity was inevitable.
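Compare the Gruntfile above with a Gulp equivalent: files flow through the pipeline as a stream, with no intermediate files written to disk between steps. A sketch (paths and plugin choices are illustrative):

```javascript
// gulpfile.js — a minimal stream-based pipeline sketch.
const gulp = require('gulp');
const concat = require('gulp-concat');
const uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  // src() produces a stream of virtual files; each pipe() transforms
  // them in memory, and dest() writes only the final result to disk.
  return gulp.src('src/**/*.js')
    .pipe(concat('app.js'))
    .pipe(uglify())
    .pipe(gulp.dest('build'));
});
```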

Gulp can be used in a wide range of scenarios, covering both front-end and Node projects; even Webpack can be driven from Gulp through the gulp-webpack module. To learn more about Gulp, check out stuq-gulp (github.com/i5ting/stuq…), which uses Gulp in WeUI as its running example and works from shallow to deep, from usage to principles.

Modularity

Decoupling is an eternal theme in software development, and modularity is currently the best way to decouple, so the evolution from nothing to maturity was bound to be a long road. It all began with the creation of HTML in 1993, JavaScript in 1995, and CSS1 in 1996, followed by a phase of wild growth; Web technology did not truly take off until after the bubble burst in 2000. The field was especially pure back then — anyone who could write code was impressive — and later Flash became extremely hot. Then came the Web 2.0 era: Ajax appeared, browser-compatibility libraries proliferated, and modularization began. In 2009 Node.js was born, completely changing how JavaScript and the front end were developed, leading to MVVM as popularized by AngularJS, and now to React/Vue and Webpack.

Here I take a broader view and classify the evolution into five stages: the original stage, package managers, module specifications, module loaders, and module packers. Let me explain each.

The original stage

Script loading was fairly primitive: multiple script tags, with the load order — and the choice of which scripts to load — managed entirely by hand.

After much trial and error, Web developers came up with all sorts of ugly workarounds: dynamically created script tags, XHR Eval, XHR Injection, $.getScript(), Script in Iframe, Script DOM Element, Script Defer, and so on.

If loading was ugly, script ordering was worse: JavaScript has the "feature" that one error can break everything after it, so maintaining script load order by hand was a real pain.

Package managers

What if a bug forces you to upgrade your jQuery UI version? jQuery UI depends on jQuery, so you download the new jQuery UI code, find the jQuery version it depends on, replace the existing files, test everything, and ship if it works.

Obviously, handling versions and dependencies through direct file manipulation is very inefficient. So package managers such as Bower and npm emerged; package management took over all module upgrades and dependency resolution, eliminating a great deal of repetitive front-end work.

The module specification

The essence of module loading is loading on demand. Three module specifications are common: AMD, CommonJS, and ES6 Modules. The approach: use a standard module system to handle dependencies, export each file as a module, and let a module loader or packer do the processing.

Module loader

A module loader needs to provide two basic capabilities: implementing the module definition specification, which is the foundation of the module system, and loading the modules themselves.

Common loaders include RequireJS, Sea.js, and SystemJS. Take RequireJS, an AMD loader, as an example: normally you only need to include RequireJS itself; you don't explicitly include other JS libraries, because RequireJS loads them for you.
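To make the loader's contract concrete, here is a toy, synchronous sketch of the AMD idea: `define` registers a module with its dependency names, and the loader resolves them recursively. (Real RequireJS fetches scripts asynchronously over the network; this in-memory registry only illustrates the contract. The function is named `requireMod` to avoid shadowing Node's own `require`.)

```javascript
// A toy AMD-style registry — illustration only, not RequireJS itself.
const registry = new Map();   // name -> { deps, factory }
const cache = new Map();      // name -> evaluated exports

function define(name, deps, factory) {
  registry.set(name, { deps, factory });
}

function requireMod(name) {
  if (cache.has(name)) return cache.get(name);   // each module evaluates once
  const mod = registry.get(name);
  if (!mod) throw new Error('module not found: ' + name);
  const args = mod.deps.map(requireMod);         // resolve dependencies first
  const exports = mod.factory(...args);          // then run the factory
  cache.set(name, exports);
  return exports;
}

// Usage: declare dependencies by name; the loader wires them up.
define('math', [], () => ({ add: (a, b) => a + b }));
define('calc', ['math'], (math) => ({ double: (n) => math.add(n, n) }));
```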

Module packer

Module loaders provide a runtime environment on which code that adheres to the module specification can run, so the benefits of modularity are obvious. But for a real project, you still need to build, package, and so on.

Ideally, developers would focus only on business modules without having to understand either the module loader or the build process. That wish led to module packers like Webpack.

Gulp is fine as a general-purpose build tool. But technology changes fast: preprocessors everywhere plus front-end componentization made the front end extremely complex, and Webpack appeared at just the right time with a near-perfect answer to front-end packaging.

Webpack basics

Webpack is a well-known bundler written in Node.js. It supports not only CommonJS modules but also the more fashionable ES6 modules. It is the most widely used packer; componentized front-end frameworks (React, Vue) mostly build with Webpack.

Loaders and plugins

It provides two excellent mechanisms: loaders and plugins.

Loaders: Webpack treats every file as a resource module; loaders process source files (JSX, SCSS, Less, and so on) during the packaging build.

Plugins: plugins can do things loaders cannot. A plugin does not operate on individual files; it hooks into the entire build process, and most of Webpack's functionality runs on this plugin system. You can also develop and use open-source Webpack plugins to meet a wide variety of needs, such as autoprefixer, html-webpack-plugin, webpack-dev-middleware, and webpack-hot-middleware.
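The two mechanisms meet in the configuration file. A minimal sketch (the loader and plugin choices are illustrative, not a recommended setup):

```javascript
// webpack.config.js — a minimal sketch showing where loaders and plugins live.
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: './src/index.js',
  output: { path: __dirname + '/dist', filename: 'bundle.js' },
  module: {
    rules: [
      // loaders transform individual files as they are imported
      { test: /\.jsx?$/, exclude: /node_modules/, use: 'babel-loader' },
      { test: /\.scss$/, use: ['style-loader', 'css-loader', 'sass-loader'] }
    ]
  },
  plugins: [
    // plugins hook into the whole build, e.g. emitting an index.html
    new HtmlWebpackPlugin({ template: './src/index.html' })
  ]
};
```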

The Webpack packaging process

(1) Find the entry point from the configuration file; (2) resolve the module system; (3) resolve dependencies; (4) manage dependencies (read, parse, resolve); (5) merge all used modules; (6) merge in the module system's runtime; (7) generate the packaged file.

Browser loading process

(1) Load the Webpack bundle (via a script tag); the bundle's built-in module runtime then resolves and executes modules starting from the entry.

If you set the Webpack packaging process against the browser loading process, you get a clearer picture of how front-end modularity is realized. The evolution from module specifications, to the runtime environments of module loaders, to the more advanced packers has made front-end development easier and easier, which is why packers keep growing in popularity.

Understanding Webpack's capabilities

The ultimate goal of engineering is to let developers focus on writing business modules and let the packer do everything else. In this regard, Webpack has been more successful than Gulp.

But Webpack also has a steep learning curve, and you learn it best by poking and digging. From basic usage and concepts, to tree-shaking and code splitting, to how to package, how to split bundles for the browser, to engineering practice, to optimizing large-scale builds — it all tests whether you have a solid foundation, understand the principles, and grasp the optimization ideas and source code. The same goes for Gulp as a comparable module: from streams (Gulp's core), to events (the stream base class), to HTTP (which uses streams), to the event loop, down to libuv and V8 implemented in C++ — the scope is daunting, covering almost the entire "Wolf Book".

Building large projects with Webpack

Webpack encapsulation

There are many encapsulation practices around Webpack, such as af-webpack, ykit, and easywebpack.

af-webpack is Alipay's customized Webpack wrapper: it bundles Node.js modules such as webpack-dev-server directly into itself, while providing better configuration handling and a plugin mechanism.

ykit is Qunar's open-source Webpack wrapper, with Connect built in as a Web server, combined with dev and hot middleware. It is a good practice for multi-project builds and releasing versioned files.

easywebpack is also plugin-based, but it integrates many boilerplate-style solutions, such as Egg's SSR. It is well thought out, although I don't entirely agree with the approach.

CRA and UMI

With frameworks stable and the basics explored, people started thinking about how to use them better and more easily. Every big company is thinking about how to converge technology choices, reduce the cost of the front-end stack, and unify it.

In the Create React App (CRA) project, react-scripts serves as the startup script. Like egg-scripts, it uses convention to hide the implementation details, so developers do not need to pay attention to the build at all.

UMI is similar to CRA: a set of best practices distilled from Ant Financial's technology stack. It is a zero-configuration, convention-over-configuration front-end framework that works out of the box: the full React family plus dva + Jest + antd (mobile) + Less + ESLint. At UMI's heart is webpack-chain, which wraps Webpack's complex configuration behind plugins, letting you switch freely between projects and sidestep all sorts of Webpack configuration issues.

UMI's thinking is relatively comprehensive: it covers technology selection, builds, multi-terminal output, performance optimization, and release, so its boundaries are clear. As a front-end best practice there is nothing wrong with it, and most front-end teams have built something similar. In other words, it is a combination of existing stacks that encapsulates the details and makes life easier for developers. There will be more packages of this kind in the future, as innovation at the framework level slows and attention shifts to the application level.

Scaffolding

CLI parameter processing

One of the more famous early modules is Commander, written by TJ — actually a Ruby library ported to Node — and the well-known express-generator scaffold is built on top of it.

The problem is that its dependency footprint is a bit large for fastidious developers. So modules like yargs emerged: powerful, with few dependencies.

Template method

Yeoman is the famous scaffolding module: you install templates (generators) that extend the yo command — clearly a centralized design. The advantage is that generators are easy to develop, which helps converge engineering practice.

Git repositories as templates

Templates are good, but maintaining them is a bit of a hassle. If the template lives on GitHub, it is easy to update and maintain. Typical examples are Vue CLI (earlier versions) and SAO.

The core is download-git-repo, plus templating of the files inside — a highly flexible approach, and now the mainstream one. To sum up, scaffolding has evolved step by step, but apart from some convergence of engineering practice there has been no dramatic optimization, for the simple reason that scaffolding's complexity cannot grow much: UMI is complex enough, yet its CLI contains little and needs no especially flexible configuration. As for how to build a scaffold with Node, it is actually quite simple; see How to use a Node.js Sketcher in 10 Minutes from Zero: You Only need 5 Steps (github.com/i5ting/writ…).
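The "templating the files inside" step boils down to substituting the answers a scaffold collects into placeholder files. A toy sketch of that substitution (the `{{name}}` syntax mimics the mustache-style placeholders such template repos commonly use; the package.json template here is made up):

```javascript
// Render a scaffold template by substituting {{key}} placeholders —
// a toy version of what happens after download-git-repo fetches a template repo.
function renderTemplate(source, answers) {
  return source.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, key) =>
    key in answers ? String(answers[key]) : match  // leave unknown keys untouched
  );
}

// Example: a package.json template from a hypothetical scaffold repo.
const pkgTemplate = '{"name": "{{name}}", "author": "{{author}}"}';
const rendered = renderTemplate(pkgTemplate, { name: 'my-app', author: 'i5ting' });
```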

The future of rendering and builds

Here's the evolution of the front end. First came the Java/PHP era of pure server-side rendering.

Then came front-end/back-end separation: JavaScript runs on the client, fetches interface data from the server, and uses a front-end framework such as jQuery, Angular, React, or Vue to manipulate or generate the page DOM, making full use of client resources and reducing server pressure. The division of labor is clear, and it remains the most common development mode.

Then isomorphic development: frameworks such as Meteor, Next.js, and Nuxt.js each target different scenarios and development styles, but the goal is the same — one codebase that runs on both the server and the client.

The evolution of SSR

First, JSP/ASP — and Node template rendering counts too; the Node world has the richest collection of template engines.

BigPipe, though old, has the obvious benefit of chunked delivery and is browser-friendly. Facebook, Weibo, and Qunar were all beneficiaries. Node supports it naturally — res.write makes it easy.
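The essence of BigPipe is writing the page skeleton first, then flushing each "pagelet" as it becomes ready, rather than waiting for the whole page. A sketch against a generic writable response (in a real server the pagelets arrive asynchronously, and the content here is made up):

```javascript
// BigPipe sketch: flush the layout first, then each pagelet as it is ready.
// `res` only needs write()/end(), so a Node http.ServerResponse fits.
function renderBigPipe(res, pagelets) {
  res.write('<html><body><div id="layout">loading...</div>');  // first flush: skeleton
  for (const { id, html } of pagelets) {
    // each later flush carries a fragment plus a tiny script to slot it in
    res.write(`<script>document.getElementById(${JSON.stringify(id)})` +
              `.innerHTML=${JSON.stringify(html)};</script>`);
  }
  res.end('</body></html>');
}

// Collect output with a fake response object to show the flush order.
const chunks = [];
const fakeRes = { write: (c) => chunks.push(c), end: (c) => chunks.push(c) };
renderBigPipe(fakeRes, [{ id: 'layout', html: '<p>pagelet ready</p>' }]);
```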

Next, component-based SSR, such as React SSR. Times change and SSR keeps up: virtual DOM plus hydration opens up great possibilities, and it can even be combined with BigPipe. UMI SSR and Rax SSR can be expected in the future.

Finally, true isomorphism, meaning CSR and SSR are written the same way; in the future the two concepts will not even be distinguished. In Serverless, both APIs and rendering are just functions. Front-end technology has changed dramatically in recent years; before choosing a stack, look at your own application scenario. There is no best framework, only the framework best suited to the scenario, and isomorphic development is no exception. Below I introduce the advantages of isomorphic development and the problems to watch out for.

For a concrete project example, see github.com/ykfe/egg-re… .

Its usage centers on the getInitialProps convention for fetching data before a page renders.

Beidou is another SSR solution. On the build side, UMI SSR is worth watching: because UMI has Webpack built in, developers only need to focus on writing React and deploying it to Egg.

In the future, we want everything to be Serverless, so users only need to care about writing React. The build happens locally, as with UMI, and Egg is no longer involved, because the code runs in a Serverless runtime environment.

With CSR and SSR written identically and deployable on Serverless, the advantages are: for consumer-facing applications, direct SSR improves efficiency, guarantees performance, and is SEO-friendly; for admin applications, it improves efficiency and reduces the difficulty of development and code maintenance. Each page is independent, and refactoring becomes extremely simple.

Once you're on Serverless, you can offer cloud builds in addition to local CLI builds. For example, if each page is a function, how do you combine multiple pages? In a typical UMI multi-page application, react-loadable + react-router loads code on demand. In that case, you can publish the individual pages to Serverless first, then configure them in a cloud build, and finally assemble a multi-page application.

Making the rendering layer light is also a great convenience for future Web IDEs. Wouldn't it be nice to complete the whole development process in the browser someday? Alibaba has made Serverless and IDEs its two major directions — a very deliberate choice indeed.

Front-end and back-end collaboration

Next, let's talk about collaboration with the back end. The front end was originally defined as development targeting the browser; as the browser's reach has widened, that definition has blurred. Today the front end covers PC, H5, and mobile components — almost every interface a user touches counts as front end. Hence the concept of the "big front end": the development of every terminal that interacts with users.

From Native to Hybrid, to React Native/Weex, to Electron, PWA, and mini programs, you can see everyone moving toward lightweight solutions, unified technology stacks, and lower costs. In other words, people hope the terminals converge, with front-end development techniques at the core — hence the collective name "big front end".

JavaScript already spans three ends, and Node covers both tooling and server capabilities. Many big companies have merged their mobile and front-end teams into one group; the big front end is a foregone conclusion.

Front-end/back-end separation

As noted above, front-end/back-end separation runs JavaScript on the client, fetches interface data from the server, and uses a front-end framework such as jQuery, Angular, React, or Vue to manipulate or generate the page DOM. It makes full use of client resources, reduces server pressure, gives the two sides a clear division of labor, and remains the most common development mode.

BFF

BFF (Backend for Frontend), a separate API proxy layer, came about for several reasons:

With the rise of mobile, everyone started developing against APIs, but neglected that the big front end is not just mobile: it also includes PC clients and PC/H5 Web, where the main UI/UE difference is screen size.

When developing back-end APIs, nobody likes maintaining multiple API variants simultaneously.

Beyond the communication costs, misunderstandings also arise when the back end does not understand the front-end implementation.

In simple terms, a BFF starts out as an API provided independently for each client — but that is clearly uneconomical. So should the architecture be upgraded? Yes: the separate BFFs become a unified server-side API service. This rhymes with Ajax, where an extra XHR layer enabled asynchronous work on both front and back ends and ushered in Web 2.0. Upgrading BFF to a unified server-side API service is essentially the same move: front end and back end develop asynchronously, each doing its own job, with all APIs registered in the API layer. This was in fact the last really big upgrade to front-end architecture.
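The core job of that API layer is aggregating several back-end services into one response shaped for the page. A sketch of that shaping step (the service response fields are hypothetical; in a real BFF the two inputs would come from `Promise.all` over HTTP calls to the upstream services):

```javascript
// BFF sketch: shape two upstream service responses into one payload for a page.
function profileViewModel(user, orders) {
  return {
    name: user.name,                                   // only the fields the view needs
    orderCount: orders.length,
    latestOrder: orders.length ? orders[0].title : null
  };
}

const vm = profileViewModel(
  { id: 1, name: 'Alice', passwordHash: 'x' },         // internal fields are dropped
  [{ title: 'Book' }, { title: 'Pen' }]
);
```

Because the BFF owns the shaping, each terminal (PC, H5, mobile) can get its own view model without the back-end services multiplying their APIs.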

GraphQL

GraphQL is a data query language developed internally by Facebook in 2012 and open-sourced in 2015, offering an alternative to RESTful architecture. It is both a query language for APIs and a runtime that fulfills those queries against your data.

In a traditional Web application, the server exposes interfaces to clients. When requirements or data change, the application must modify old interfaces or create new ones; over time the server code keeps growing and the interface logic becomes complex and hard to maintain. GraphQL solves this with the following features:

Declarative. The shape of the query result is decided by the requester (the client), not the responder (the server), so the server does not need many extra interfaces to accommodate client requests.

Composable. GraphQL query structures can be freely combined to satisfy requirements.

Strongly typed. Every GraphQL query must conform to its declared types before it is executed.

In other words, with the above three features, when the requirements change, the client only needs to write the query structure to meet the new requirements, and if the server can provide the data to meet the requirements, the server code needs to make almost no changes.
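The declarative idea — the client names the fields, the server returns exactly those — can be illustrated with a toy field selector. (Real GraphQL parses a typed query language and runs resolvers; this only mimics the field-picking behavior, and the user record is made up.)

```javascript
// Toy illustration of GraphQL's field selection: the requester decides the
// shape of the result. Not real GraphQL — no parser, type system, or resolvers.
function select(data, selection) {
  const result = {};
  for (const [field, sub] of Object.entries(selection)) {
    if (!(field in data)) continue;       // unknown fields are skipped
    result[field] = sub === true
      ? data[field]                       // leaf field: copy as-is
      : select(data[field], sub);         // nested selection: recurse
  }
  return result;
}

const user = { id: 7, name: 'Ada', address: { city: 'London', zip: 'NW1' } };
// The query "{ name address { city } }" expressed as a plain object:
const picked = select(user, { name: true, address: { city: true } });
```

When requirements change, only the selection object changes; `select` (the "server") stays the same — which is the point of the three features above.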

GraphQL is model-driven and friendly to both front and back ends, so it is certainly a fine solution. Of course it has problems: the conventions themselves are not easy, and adopting them costs both sides effort.

Serverless

So how do you really free the front end? One: no operations and maintenance. Two: no scaling worries. Three: no Web framework. As a front-end developer, I just want an interface, or to wrap one — why should I have to learn a Node Web framework?

The event loop is a black box: you can never fully control what ends up on it, it occasionally blocks, and tracking the blockage down is painful.

Using Serverless can effectively prevent event-loop blocking. Encryption, for example, is a common scenario, but it executes slowly; if encryption and decryption run alongside your other tasks, they can easily block the event loop.

Developing a function locally and publishing it to the Serverless cloud via a CLI is so easy that it is surely the future.
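A function in this model is just an exported handler: event in, response out, with no server, routing, or framework code. A sketch (the event/response shape follows the common FaaS convention rather than any one vendor's exact API, and real handlers are usually async):

```javascript
// A Serverless-style function: the platform owns the server, routing, and
// scaling; the developer writes only this handler. Event shape is illustrative.
function handler(event) {
  const name = (event.queryStringParameters || {}).name || 'world';
  return {
    statusCode: 200,
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ message: `hello ${name}` })
  };
}

module.exports = { handler };
```

Because the handler is a plain function, it can also be invoked directly in tests — no HTTP server needed — which is part of the appeal.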

You can see the picture below.

A view of the future of front-end engineering

CRA and UMI free us from caring about Webpack configuration; that is the future of the local CLI.

Unified CSR and SSR — isomorphic development: package and build with the local CLI, publish to the Serverless cloud. Simple and efficient.

Serverless-based APIs are extremely simple: no Web framework to learn, straightforward API composition, no operations and maintenance, and no fear of high-concurrency scenarios.

It is worth noting that in a Serverless environment, once the API question is solved, the front end can handle the entire business. The front end can do more and more, and in the future there will be ever more application-layer work for it.

In the future, as the economy and business grow — more applications, more terminals — things look good for the front end: the more you can do, the more value you create.

So, as a strong believer in the Web, I believe the front end's future looks bright.

Further reading

Original article on Yuque: www.yuque.com/robinson/fe…

If you want to learn more about front-end engineering, you are welcome to follow my Yuque collection, which I keep updating: www.yuque.com/robinson/fe…

Disclaimer: this is the last post published on the Juejin platform; from now on I am moving to Yuque and my blog.