This article offers some analysis and understanding of front-end website architecture from the perspective of an intern.

Preface

When I first started with front-end development, I began with simple HTML, CSS, and JS. The prevailing web concept at the time was the separation of structure, style, and behavior, that is, keeping HTML, CSS, and JS separate, developing them independently, and connecting them through link and script tags.

In the small projects I first worked on, static resources such as HTML, CSS, and JS were deployed directly to the server, which then responded to each request route with the corresponding HTML file.

Even after learning Webpack, I still thought it merely compressed JS and CSS files to improve server response speed and optimize user experience, with the compressed min.css and min.js files still deployed straight to the server.

Later, when I joined Company A as an intern, that was indeed the development mode there: we built H5 pages by writing HTML, CSS, and JS files directly, deploying them to the server, and accessing the HTML directly.

It was only after I joined Company B that I learned what real front-end engineering looks like.

Front-back end separation

Many traditional business-oriented PC websites use the MVC architecture: at the start of each project, the front end and back end agree on an interface and develop in parallel; the front end then hands the finished pages to the back end (typically Java or PHP), which responds to client requests by returning the specific HTML pages.

The drawback of this mode is that it is time-consuming and labor-intensive: communication and joint debugging costs are high, and even a small front-end change requires the front end and back end to go online together, which greatly increases the total workload.

Therefore, modern large PC websites generally adopt a front-back end separated architecture, in which front-end and back-end responsibilities converge into their own domains and each side can develop and deploy independently, greatly improving efficiency.

Front-back end separation generally comes in two types:

  • Separation without a middle layer;
  • Separation with a middle layer.

React, one of the most popular frameworks, is used in the examples below.

Separation without a middle layer is the simplest form: we place the unified HTML/template and the related static resources such as CSS and JS on a CDN; on each page visit the template is returned directly to the user, and all DOM nodes and data are then generated by JS at runtime.
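A minimal sketch of this mode, assuming a React entry bundle and a CDN template that contains only an empty root node:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './containers/app'; // hypothetical root component

// The template shipped from the CDN contains only <div id="root"></div>;
// every visible DOM node is generated here, on the client.
ReactDOM.render(<App />, document.getElementById('root'));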

However, the separation of the front and back ends without an intermediate layer has many disadvantages:

  • The first-screen rendering time is too long;
  • It is not SEO-friendly.

With a middle layer

Front-back end separation with a middle layer is what large projects generally adopt.

Since the emergence of Node in 2009, the front end has gradually taken on some back-end work. However, Node's robustness limitations make it unsuitable as the back-end server for large projects, so over time it has instead become the middle layer connecting the traditional front end and back end. We also call this front-end + Node architecture the “big front end”.

There are a number of things we can do in the Node layer, the most basic of which is to return different front-end templates.

I have worked with several front-end template engines, and Pug (formerly known as Jade) is the one I like best.

Pug supports JS in its syntax, which is very friendly to front-end engineers, and it is powerful enough to serve as HTML middleware, with excellent Node support. See its Get Started guide to learn the basics.
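For instance, wiring Pug into a Koa app and returning different templates per request might look like this (a sketch assuming the koa-views middleware and a views directory containing h5.pug and pc.pug):

const Koa = require('koa');
const views = require('koa-views');

const app = new Koa();
// Register ctx.render so handlers can respond with Pug templates
app.use(views('./views', { extension: 'pug' }));

app.use(async (ctx) => {
    // Pick a template per device; the mobile/desktop split is illustrative
    const isMobile = /Mobile/.test(ctx.get('User-Agent'));
    await ctx.render(isMobile ? 'h5' : 'pc', { title: 'demo' });
});

app.listen(3000);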

Data aggregation

As a website grows, data volume increases and the service layer gets split up, producing many service modules; our data may be scattered across different databases and servers, and our pages may need to embed third-party APIs such as ads. The Node layer can take on the work of aggregating this data.

Since the server can call interfaces far faster than the browser, it is very efficient to call multiple interfaces in Node and merge the data; the merged data can be passed directly into the template for JS to use.
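A minimal sketch of such aggregation in a Koa handler (the /api/* endpoints and the fetchJSON helper are illustrative):

// Hypothetical wrapper around your HTTP client of choice
const fetchJSON = url => request.get(url).then(res => res.body);

module.exports = async (ctx) => {
    // The three server-to-server calls run in parallel, then get merged
    const [user, feed, ads] = await Promise.all([
        fetchJSON('/api/user'),
        fetchJSON('/api/feed'),
        fetchJSON('/api/ads')
    ]);
    await ctx.render('demo.pug', { __props: JSON.stringify({ user, feed, ads }) });
};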

In general, though, it is more appropriate for the back end to access the underlying services or the database; there are always exceptions, and as a last resort we can aggregate data in the Node middle layer.

This mode is generally not recommended because of its poor maintainability and lower security. Normally a dedicated data module in the back end accesses the databases and services and joins the data together, so in Node we only need to call a single back-end API to get the data we want.

Monitoring service

The Node layer can capture abnormal requests or events and report them to a third-party monitoring platform such as Sentry. At the same time, Node can take on part of the data-statistics work, reporting user data to third-party analytics platforms for PMs and data analysts to review.

Node also monitors the entire instance and triggers an alarm when an exception causes CPU or memory usage to exceed a threshold.
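As a sketch, error reporting in a Koa app with the @sentry/node SDK could look like this (the DSN is a placeholder):

const Sentry = require('@sentry/node');
Sentry.init({ dsn: 'https://<key>@sentry.io/<project>' });

// `app` is the Koa instance; this error-capturing middleware
// reports the exception, then rethrows it
app.use(async (ctx, next) => {
    try {
        await next();
    } catch (err) {
        Sentry.captureException(err);
        throw err;
    }
});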

On the other hand, adding a Node layer means the site has one more back-end layer to monitor and be on call for, which adds operational complexity.

Server-side rendering

SSR (server-side rendering) is arguably one of the most important features of the middle layer: it solves the problems mentioned earlier of long first-screen rendering time and poor SEO support.

Modern search engine crawlers generally come in two types:

  • Crawlers that can execute JS, a small minority, represented by Google;
  • Crawlers that cannot execute JS, the majority, essentially all non-Google search engine crawlers.

For Google's crawlers, SSR makes no difference to SEO, because they can eventually crawl a fully rendered page either way.

For non-Google search engines, we need SSR to render the actual DOM nodes for crawlers to crawl.

This approach has another advantage: users see page content while it loads instead of a blank screen. Without SSR, users stare at a blank page while resources load, which can drive them away.

This picture shows what I saw within the first second of visiting Bilibili at a network speed of 50 KB/s. If Bilibili did not use SSR, it might take five or six seconds before users saw any first-screen content.

Here is a simple Node layer code using SSR:

// The code uses ES6 syntax; if it's unfamiliar, see Ruan Yifeng's "ES6 Introduction".
// If Node is not running Babel, `import` will throw an error; replace it with `require`.
import { renderToString } from 'react-dom/server';
import Demo from 'containers/demo';

// Take the Koa2 framework as an example
module.exports = (ctx) => {
    const props = { /* ... */ };
    // `html` holds the markup of our component after rendering
    const html = renderToString(<Demo {...props} />);
    // Render the Pug template, exposing the serialized props and the rendered HTML
    ctx.render('demo.pug', { __props: JSON.stringify(props), html });
};

Here is the corresponding Pug snippet:

//- Pug code snippet
body
    #root !{html}
    script.
        window.__props = '!{__props}'

When using SSR, make sure the props on the front end and the server are identical. I therefore usually pass the props onto the window object in the Node layer, and the front-end component reads them back from window.
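On the client that might look like this (a sketch assuming the Demo component and the __props variable injected by the Pug template above):

import React from 'react';
import ReactDOM from 'react-dom';
import Demo from 'containers/demo';

// Read back the exact props the Node layer serialized into the template
const props = JSON.parse(window.__props);
ReactDOM.hydrate(<Demo {...props} />, document.getElementById('root'));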

During SSR, a React component only runs two lifecycle steps (componentWillMount and render) to generate the DOM structure; the remaining lifecycle methods and event mounting happen on the front end.

The Node layer does more than that, and I won’t go into more detail here.

Although SSR has many advantages, it also has drawbacks. Using SSR means your own servers take over work that originally belonged to the user's client, so server load rises and costs may grow. Small teams, or teams without deep pockets, should weigh carefully whether to adopt SSR.

Front-back end isomorphism

Speaking of SSR, we have to mention front-back end isomorphism. Isomorphism means the front end and Node execute the same set of code: the server renders the first screen and hands the rendered HTML straight to the browser, while the client loads the JS, runs the remaining component lifecycles, and mounts custom events. A good set of isomorphic code greatly reduces the work of maintaining the codebase and executes very efficiently; writing isomorphic code elegantly is itself a skilled job, and it requires planning the front-end architecture in advance.

For example:

import React, { Component } from "react";
import { Provider } from "react-redux";
import ReactDOM from "react-dom";
import Loadable from "react-loadable";
import { BrowserRouter, StaticRouter } from "react-router-dom";

// server-side render
const SSR = App =>
  class SSR extends Component<{
    store: any;
    url: string;
  }> {
    render() {
      const context = {};
      return (
        <Provider store={this.props.store}>
          <StaticRouter location={this.props.url} context={context}>
            <App />
          </StaticRouter>
        </Provider>
      );
    }
  };

// client-side render
const CLIENT = configureState => Component => {
  const initStates = window.__INIT_STATES__;
  const store = configureState(initStates);
  Loadable.preloadReady().then(() => {
    ReactDOM.hydrate(
      <Provider store={store}>
        <BrowserRouter>
          <Component />
        </BrowserRouter>
      </Provider>,
      document.getElementById("root")
    );
  });
};

export default function entry(configureState) {
  return IS_NODE ? SSR : CLIENT(configureState);
}

On the subject of isomorphism, Alibaba has a degradation strategy. When server load is normal, the server performs SSR to improve the user experience; when user traffic surges, such as during Double Eleven, the service automatically degrades: Node skips SSR and everything switches to client-side rendering to reduce server load.
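A minimal sketch of such a degradation switch, reusing the SSR handler from earlier (the 0.8 threshold and the fetchProps helper are illustrative):

import os from 'os';
import { renderToString } from 'react-dom/server';
import Demo from 'containers/demo';

module.exports = async (ctx) => {
    const props = await fetchProps(ctx); // hypothetical data-fetching helper
    // Normalized 1-minute load average; above the threshold, skip SSR
    const overloaded = os.loadavg()[0] / os.cpus().length > 0.8;
    const html = overloaded ? '' : renderToString(<Demo {...props} />);
    // With an empty #root, the client renders everything itself
    ctx.render('demo.pug', { __props: JSON.stringify(props), html });
};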

Choosing a framework

Now that we understand front-back end separation, it's time to choose a framework for the Node layer.

Currently there are three mainstream choices: Express, Koa 1.x, and Koa 2.x.

For beginners, I recommend going straight to Koa 2 for middle-layer learning and development.

The disadvantages of Express are:

  • It is too heavy, bundling many modules we will probably never use;
  • Callback hell, which even Promises can only alleviate.

The disadvantages of Koa 1.x are:

  • It must be used together with the co library and generators, which is cumbersome to configure.

Since Node 7.6 added the async/await syntactic sugar, we can use Koa 2 syntax directly in native Node without any third-party libraries.

Koa 2 can be seen as Express's successor from the same team: many modules are migrated directly from Express, while rarely used ones have been stripped out, and developers import them only when they actually need them.

And with Koa 2's async/await syntactic sugar, the code reads as if it executes synchronously, which fits a front-end engineer's mental model perfectly.

Here is sample code in the Express, Promise, and Koa 2 styles:

// Express version (callback style)
module.exports = (req, res) => {
    request.get('/api/demo1', (err, data1) => {
        request.get('/api/demo2', (err, data2) => {
            request.get('/api/demo3', (err, data3) => {
                res.send(data1 + data2 + data3);
            });
        });
    });
};

// Promise version
module.exports = (req, res) => {
    new Promise((resolve, reject) => {
        request.get('/api/demo1', (err, data1) => resolve(data1));
    }).then(data1 => new Promise(resolve => {
        request.get('/api/demo2', (err, data2) => resolve(data1 + data2));
    })).then(sum => new Promise(resolve => {
        request.get('/api/demo3', (err, data3) => resolve(sum + data3));
    })).then(data => {
        res.send(data);
    });
};

The Promise version looks a bit cleaner, but it is still cumbersome.

// Koa 1 and Koa 2 are written almost identically, except that Koa 1 requires
// tedious configuration of the co library and generators before use.
// Wrap awaits in try-catch so a failed asynchronous request doesn't crash Node.
module.exports = async (ctx) => {
    try {
        const data1 = await request.get('/api/demo1');
        const data2 = await request.get('/api/demo2');
        const data3 = await request.get('/api/demo3');
        ctx.body = {
            data: data1 + data2 + data3
        };
    } catch (err) {
        ctx.throw(500, err);
    }
};

Koa 2 is very comfortable to use and fits a front-end engineer's way of thinking.

While Koa 2 code looks like it executes synchronously, an async function actually compiles down to a function returning a promise, and all the code after an await runs inside the promise callback.
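Roughly speaking, the async handler above desugars into a promise chain; a simplified illustration:

module.exports = (ctx) => {
    return request.get('/api/demo1').then(data1 =>
        request.get('/api/demo2').then(data2 =>
            request.get('/api/demo3').then(data3 => {
                // Everything after the awaits ends up inside the innermost callback
                ctx.body = { data: data1 + data2 + data3 };
            })
        )
    );
};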

Development structure

Once the framework is selected, all that remains is development. A typical Node layer uses the following directory structure:

node

  • lib // third-party plug-ins
  • util // your own utility functions
  • middleware // middleware
  • routes // routes
  • controller // route handler functions
  • app.js // Node layer entry file

That covers the basic Node layer architecture; the rest is the front end most of us are already familiar with.
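A minimal sketch of app.js tying those directories together (the required paths mirror the layout above; the logger middleware and router module are hypothetical):

const Koa = require('koa');
const views = require('koa-views');
const logger = require('./middleware/logger'); // hypothetical logging middleware
const router = require('./routes');            // assumed to export a koa-router instance

const app = new Koa();
app.use(views('./template', { extension: 'pug' }));
app.use(logger);
app.use(router.routes());
app.listen(3000);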

Front-end directory structure:

public

  • static
  • src
      • js
          • components
          • containers
          • routes
          • stores
          • actions
          • reducers
          • pages
      • css/scss/less

Beyond the directories above, use normal React development logic.

Finally, other folders can be added as needed, such as template for storing templates or scripts for the scripts you write along the way.

Configuration

An online project should have at least two modes: production mode and development mode.

Production mode is the environment in which we operate online.

Development mode is our normal local development environment.

You can even configure more environments if necessary.

The requirements of these environments differ, so we maintain two sets of configuration files and pass them into Node and Webpack to start the appropriate environment.
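As a sketch, a config module keyed by NODE_ENV might look like this (the file name and option values are illustrative):

// config/index.js
const configs = {
    development: { port: 3000, apiBase: 'http://localhost:8080' },
    production:  { port: 8080, apiBase: 'https://api.example.com' }
};

// Export the config matching the current environment
module.exports = configs[process.env.NODE_ENV || 'development'];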

Automated testing

Automated testing is essential in a large, mature website.

Although rapid business iteration in the fast-growing front-end field makes business-layer automated tests hard to keep stable, basic testing is still necessary.

During normal development, we should keep unit tests for utility libraries and UI components automated.

These test files are best kept in a dedicated test directory, or you can add a component.test.js file to each base UI component's directory so the .test files are discovered automatically when the test runner starts.
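A minimal Jest-style unit test sketch (the formatPrice utility is hypothetical):

// util/format.test.js
const formatPrice = cents => `$${(cents / 100).toFixed(2)}`; // stand-in for the real util

test('formatPrice formats cents as dollars', () => {
    expect(formatPrice(1999)).toBe('$19.99');
});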

Before each release, run an integration test to check that routing works, that the Webpack build and Node module installation succeed, and that simulated user login and page access behave normally.

Occasionally we also need stress tests, disaster-recovery tests, and so on.

For beginners, testing is a crucial concept and habit to build; make a point of writing unit tests regularly.