preface

Recently I interviewed with several companies, including Ant and Toutiao, and received offers from both. Personally, I plan to join Ant.

The following two articles were shared by Ziyi from Alibaba. His write-ups on Ali front-end interviews are excellent, and I only received my offers after reading them:

How I became an interviewer at Alibaba

Interview sharing: two years of work experience successful interview Ali P6 summary

I will not repeat the questions Ziyi has already covered. Instead, I will share 8 less common questions and my approach to them, in the hope that they help you in your own interviews.

Q1. What is a CDN? What is the CDN back-to-origin mechanism? What is the difference between a CDN and OSS?

Answer:

(1) Definition of CDN

The full name of CDN is Content Delivery Network. A CDN is an intelligent virtual network built on top of the existing network. Relying on edge servers deployed in various locations, together with a central platform providing load balancing, content distribution, and scheduling, it lets users fetch content from nearby nodes, reducing network congestion and improving response speed and hit ratio. The key technologies of a CDN are content storage and content distribution. Its purpose is to reduce propagation delay by finding the nearest node, a typical "trade space for time" technique. For example, most of the static resources of a site like Taobao are deployed on CDN nodes.

For example, if a web server is deployed only in Beijing, then traffic from all over the country converges on Beijing. The latency from Shanghai to Beijing differs from that from Hong Kong to Beijing, and the origin server can easily go down under heavy load. The solution is to deploy CDN nodes in multiple regional centers across the country: requests from Hong Kong are served by the Hong Kong CDN node, and requests from Nanjing by the Nanjing node, which effectively reduces transmission latency and greatly lightens the load on the origin server.

CDN access process:

1. When a user clicks a content URL on a web page, the local DNS system resolves the URL. The DNS system delegates resolution of the domain to the dedicated CDN DNS server pointed to by the CNAME record.

2. The CDN's DNS server returns the IP address of the CDN's global load balancer to the user.

3. The user sends a URL access request to the global load balancer of the CDN.

4. The CDN global load balancer selects a regional load balancer in the user's region based on the user's IP address and the URL of the requested content, and forwards the request to it.

5. The regional load balancer selects an appropriate cache server to serve the user. The selection criteria are: which server is closest to the user, based on the user's IP address; which server has the content the user needs, based on the content name carried in the requested URL; and which servers currently have spare capacity, based on each server's load. Based on this analysis, the regional load balancer returns the IP address of a cache server to the global load balancer.

6. The global load balancer returns that cache server's IP address to the user.

7. The user sends a request to the cache server, which responds and delivers the required content to the user's terminal. If the cache server does not have the requested content but the regional load balancer has still assigned it to the user, the server requests the content from its upper-level cache server, and so on up the chain, until it is ultimately pulled from the website's origin server to the local cache.

(2) CDN's back-to-origin mechanism

When a CDN cache server does not have a resource that satisfies the client's request, it asks its upper-level cache server, and so on, until the resource is found. If no cache level has it, the request finally goes back to the origin server to fetch the resource.

Back-to-origin conditions:

1. If there is no cache on the node during user access, the system pulls resources back to the source.

2. The file on the CDN node expires, and resources are pulled back to the source.

3. If the file is configured not to be cached, user requests for it go directly back to the origin.
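To make the back-to-origin chain concrete, here is a small sketch in plain JavaScript of the lookup logic an edge node follows: serve from the local cache if the entry is fresh, otherwise try the upper-level cache, and finally fall back to the origin. The in-memory maps and the URL are hypothetical, purely for illustration.

```javascript
// Sketch of CDN back-to-origin lookup: edge cache -> parent cache -> origin.
const now = () => Date.now();

const origin = { '/logo.png': 'logo-bytes' };  // the source site
const parentCache = new Map();                 // upper-level cache node
const edgeCache = new Map();                   // edge node cache

function fetchWithBackToOrigin(url, ttlMs = 60000) {
  // Serve from the edge only if a copy exists and has not expired
  const hit = edgeCache.get(url);
  if (hit && hit.expires > now()) {
    return { body: hit.body, from: 'edge' };
  }
  // Otherwise ask the upper-level cache
  const parentHit = parentCache.get(url);
  if (parentHit && parentHit.expires > now()) {
    edgeCache.set(url, parentHit);
    return { body: parentHit.body, from: 'parent' };
  }
  // Back to origin: pull the resource from the source site and cache it on the way down
  const body = origin[url];
  const entry = { body, expires: now() + ttlMs };
  parentCache.set(url, entry);
  edgeCache.set(url, entry);
  return { body, from: 'origin' };
}

console.log(fetchWithBackToOrigin('/logo.png').from); // origin (first access, no cache)
console.log(fetchWithBackToOrigin('/logo.png').from); // edge (cached now)
```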

(3) Differences between OSS and CDN

OSS (Object Storage Service) separates the data channel (the data being accessed) from the control channel (the metadata, i.e., the index). Data is located via the index (metadata) and then accessed through the underlying storage interface. This gives object storage both access performance close to block storage and the sharing convenience of file storage; it gets the best of both worlds. Object storage is typically used for unstructured data such as images, audio, and video.

Difference: the core of OSS is storage and compute capability, while the core of a CDN is distribution. A CDN provides no storage interface, so the two are generally used together: resource files stored in object storage are exactly what CDN acceleration is suited for. Object storage + CDN has become an indispensable part of Internet applications.

Reference article:

What is CDN? What are the advantages of using CDN?

On CDN, back-to-origin and related issues

Can’t distinguish OSS object storage from CDN?

Q2. How to optimize Webpack packaging?

Answer:

First, use webpack-bundle-analyzer to analyze the size structure of the packaged project. It shows every third-party package used in the project and the proportion each module takes up in the whole bundle.

1. Load as required

(1) Routes are loaded on demand

In Vue, routes can be lazy-loaded: if a route component is written as () => import('xxx.vue'), the bundle is automatically split by route.

import VueRouter from 'vue-router'

Vue.use(VueRouter)

export default new VueRouter({
  routes: [
    {
      path: '/a',
      component: () => import('../views/A.vue')
    },
    {
      path: '/b',
      component: () => import('../views/B.vue')
    }
  ]
})

In React, use the React.lazy function to treat dynamically imported components the same way as regular components, and render lazy components inside Suspense; this works well together with routing.

import React, { lazy, Suspense } from 'react';
import { Switch, Route, Redirect } from 'react-router-dom';

const Home = lazy(() => import('../views/Home'));
const About = lazy(() => import('../views/About'));

// Wrap lazy components in Suspense with a loading fallback
const WrappedComponent = (component) => {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      {component}
    </Suspense>
  );
};

const Main = () => (
  <Switch>
    <Route path="/home" render={() => WrappedComponent(<Home />)} />
    <Route path="/about" render={() => WrappedComponent(<About />)} />
    <Redirect to="/home" />
  </Switch>
);

export default Main;

(2) Load the third-party library as required

For example, when using lodash or the Element-UI component library, load on demand rather than packing the entire library into your project.

// Import only the lodash function you need
import get from 'lodash/get';

// Import Element-UI components on demand
import { Button } from 'element-ui';
Vue.component(Button.name, Button);

2. File parsing optimization

Loader resolution optimization: Include and exclude are configured to reduce the number of files to be processed, and cacheDirectory is used to cache compiled results.

module: {
  rules: [
    {
      test: /\.js$/,
      loader: 'babel-loader?cacheDirectory',
      include: [
        path.resolve(__dirname, 'src')
      ],
      exclude: /node_modules/
    }
  ]
}

File resolution optimization: configure alias, extensions, and modules under resolve.

Alias: Create an alias for import or require to speed up Webpack lookup.

Extensions: the file extensions that are resolved automatically. The default is:

extensions: [".js", ".json"]

Using this option overrides the default array. Webpack resolves in the order the extensions are listed, so put the extension used by most modules at the front of the array to speed up lookup.

Modules: the directories to search when resolving a module. It is generally recommended to use absolute paths to avoid searching ancestor directories level by level.

resolve: {
  alias: {
    '@': path.resolve(__dirname, "src")
  },
  extensions: [".js", ".vue"],
  mainFields: ["index", "main"],
  modules: [path.resolve(__dirname, "src"), "node_modules"]
}

3. Split common modules

Use splitChunks to extract common modules. The default configuration of splitChunks is as follows:

splitChunks: {
  // Which chunks to split: "async", "initial", or "all"
  chunks: "async",
  // A newly split chunk must be at least minSize bytes; default 30000, about 30KB
  minSize: 30000,
  // A module is split only if it is shared by at least minChunks chunks; default 1
  minChunks: 1,
  // Maximum number of parallel requests when loading files on demand; default 5
  maxAsyncRequests: 5,
  // Maximum number of parallel requests when loading an entry file; default 3
  maxInitialRequests: 3,
  // Delimiter in generated chunk names, e.g. vendors~main.js; default '~'
  automaticNameDelimiter: '~',
  // Chunk name; true (the default) derives the name from the chunk key and the cacheGroups key
  name: true,
  // Multiple groups can be configured under cacheGroups; a module is assigned to a group
  // when it matches the group's test condition. A module may match several groups, but
  // which group it is ultimately packaged into depends on priority. By default, modules
  // from node_modules go into the vendors group, and modules shared by two or more
  // chunks go into the default group.
  cacheGroups: {
    vendors: {
      test: /[\\/]node_modules[\\/]/,
      priority: -10
    },
    default: {
      minChunks: 2,
      priority: -20,
      reuseExistingChunk: true
    }
  }
}

To sum it up:

  • 1. The module is reused, or it comes from the node_modules folder
  • 2. The module must be larger than 30KB before splitting
  • 3. When loading chunks on demand, the number of parallel requests must not exceed 5
  • 4. On initial page load, the number of parallel requests must not exceed 3

Here is how to split react and moment out of node_modules to avoid an overly large vendors bundle.

splitChunks: {
  chunks: 'all',
  minSize: 30000,
  minChunks: 1,
  cacheGroups: {
    lib: {
      name: 'vendors',
      test: /[\\/]node_modules[\\/]/,
      priority: 10,
      chunks: 'initial'
    },
    react: {
      name: 'react',
      priority: 20,
      test: /[\\/]node_modules[\\/]react[\\/]/,
      chunks: 'all'
    },
    moment: {
      name: 'moment',
      priority: 20,
      test: /[\\/]node_modules[\\/]moment[\\/]/
    },
    default: {
      minChunks: 2,
      priority: -20,
      reuseExistingChunk: true
    }
  }
}

4. DllPlugin and DllReferencePlugin

Usually, third-party library code changes far less often than business code, so it can be separated from the business code during packaging. DllPlugin and DllReferencePlugin, both built-in Webpack plugins, split the bundle this way and greatly improve build speed.

Configure webpack.dll.js to extract lodash, jquery, and antd:

const path = require("path");
const webpack = require("webpack");
const {CleanWebpackPlugin} = require("clean-webpack-plugin");

module.exports = {
  mode: "production",
  entry: {
    lodash: ["lodash"],
    jquery: ["jquery"],
    antd: ["antd"]
  },
  output: {
    filename: "[name].dll.js",
    path: path.resolve(__dirname, "dll"),
    library: "[name]"
  },
  plugins: [
    new CleanWebpackPlugin(),
    new webpack.DllPlugin({
      name: "[name]",
      path: path.resolve(__dirname, "manifest/[name].manifest.json")
    })
  ]
};

Add a dll script to the package.json configuration:

"scripts": {
  ...,
  "dll": "webpack --config webpack.dll.js"
}

Run npm run dll to generate the DLL files and the corresponding manifest.json files.
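For reference, each generated manifest is a JSON map from module paths to module ids, roughly like this (a hand-written sketch; the actual ids and buildMeta depend on your webpack version and build):

```json
{
  "name": "lodash",
  "content": {
    "./node_modules/lodash/lodash.js": {
      "id": 1,
      "buildMeta": { "providedExports": true }
    }
  }
}
```

DllReferencePlugin reads this map at build time so that requests for these modules are resolved against the prebuilt DLL instead of being recompiled.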

The packaged DLLs are injected into the HTML via add-asset-html-webpack-plugin, and DllReferencePlugin maps the compiled code's dependencies to the prebuilt DLLs.

webpack.config.js configuration:

const manifests = ['antd', 'jquery', 'lodash'];
const dllPlugins = manifests.map(item => {
  return new webpack.DllReferencePlugin({
    manifest: require(`./manifest/${item}.manifest`)
  });
});

module.exports = {
  ...,
  plugins: [
    ...dllPlugins,
    new AddAssetHtmlPlugin({
      filepath: path.resolve(__dirname, "./dll/*.dll.js")
    })
  ]
}

Reference article:

Summary of several webpack optimization methods

Optimize the Webpack package to the extreme _20180619

Q3. How to implement a scaffold like vue-cli/create-react-app?

Answer:

Related technologies: Node, Webpack, Commander, Download-git-repo

The scaffold needs to contain 3 core functions (this is the bronze version):

  • Initialize the project;
  • Local development;
  • Pack locally.

1. Initialize project: xx create

Set up a repository of project templates on Github

Use commander to register the create command:

const program = require('commander');
const download = require('download-git-repo');

// Register the create command
program.command('create <app-name>')
  .description('create a new project by react-cli')
  .action((name) => {
    require('../packages/create')(name);
  });

create.js implements downloading the template, modifying the package.json contents, and installing the dependency packages.

// Change the name field in package.json
function editPackageName(appName) {
  return new Promise((resolve, reject) => {
    const packageJsonPath = path.resolve(process.cwd(), `${appName}/package.json`);
    const packageJson = require(packageJsonPath);
    packageJson.name = appName;
    fs.writeFile(packageJsonPath, JSON.stringify(packageJson), (err) => {
      if (err) {
        return reject(err);
      }
      resolve();
    });
  });
}

// Install the dependency packages
function installPackages(appName) {
  const appPath = path.resolve(process.cwd(), appName);
  return new Promise((resolve, reject) => {
    const spinner = ora('Install dependencies');
    spinner.start();
    child_process.exec('npm install', {cwd: appPath}, (err) => {
      spinner.stop();
      if (err) {
        return reject(err);
      }
      successLog('Dependency packages installed successfully');
      console.log(`cd ${appName}`);
      console.log(`npm run start`);
      resolve();
    });
  });
}

// Download the project template
function downloadTemplate(appName) {
  return new Promise((resolve, reject) => {
    const spinner = ora('Start building project');
    spinner.start();
    download(templateUrl, `./${appName}`, {clone: false}, err => {
      spinner.stop();
      if (err) {
        return reject(err);
      }
      successLog('Project generated successfully');
      resolve();
    });
  });
}

async function create(appName) {
  try {
    await downloadTemplate(appName);
    await editPackageName(appName);
    await installPackages(appName);
  } catch (err) {
    errorLog(err);
    process.exit(1);
  }
}

module.exports = create;

2. Local project development

Local development is supported mainly through webpack-dev-server; the webpack configuration is set up first.

The common configuration is in webpack.com.config.js (only parts are listed):

module.exports = {
  entry: {
    app: appIndexJs
  },
  output: {
    filename: "[name].[hash:7].js",
    path: appBuild
  },
  module: {
    rules: [
      {
        test: /\.js[x]?$/,
        exclude: /node_modules/,
        use: ["babel-loader"]
      },
      ...
    ]
  },
  plugins: [ ... ]
}

webpack.dev.config.js configuration:

const path = require("path");
const merge = require("webpack-merge");
const baseConfig = require("./webpack.com.config");
const {appBuild} = require("./pathConfig");

module.exports = merge(baseConfig, {
  mode: "development",
  devtool: "cheap-module-eval-source-map",
  devServer: {
    contentBase: appBuild,
    publicPath: '',
    host: "localhost",
    port: 3000,
    open: true,      // automatically open the browser
    compress: true,  // enable gzip compression
    hot: true,       // enable hot module replacement
    inline: true     // enable inline mode
  }
});

To register the dev command with commander:

program.command('dev')
  .description('Start app development.')
  .action(() => {
    require('../packages/dev')();
  });

Dev.js starts the local development server primarily by calling webpack-dev-server:

// Start the app in development mode
function development() {
  const compiler = webpack(webpackConfig);
  const server = new WebpackDevServer(compiler, {
    contentBase: webpackConfig.devServer.contentBase,
    publicPath: webpackConfig.devServer.publicPath
  });
  server.listen(webpackConfig.devServer.port, (err) => {
    if (err) {
      errorLog(err);
      process.exit(1);
    }
    console.log(`\nApp is running: ${underlineLog('http://localhost:3000/')}`);
  });
};

module.exports = development;

3. Pack locally

Local packaging is performed by configuring WebPack as follows:

const merge = require("webpack-merge");
const baseConfig = require("./webpack.com.config");

module.exports = merge(baseConfig, {
  mode: "production",
  devtool: "source-map"
});

Register the build command with commander:

// Register the build command
program.command('build')
  .description('Build app bundle.')
  .action(() => {
    require('../packages/prod')();
  });

Prod.js executes the Webpack production packaging process:

#!/usr/bin/env node

const webpack = require('webpack');
const {errorLog, successLog} = require('../utils/index');
const webpackConfig = require('../config/webpack.prod.config');

// Package in production mode
function production() {
  const compiler = webpack(webpackConfig);
  compiler.run((err, stats) => {
    if (err) {
      errorLog(err);
      process.exit(1);
    }
    process.stdout.write(stats.toString({
      colors: true,
      modules: false,
      children: false,
      chunks: false,
      chunkModules: false
    }));

    if (stats.hasErrors()) {
      errorLog('Build failed with errors.\n');
      process.exit(1);
    }
    successLog('Build completed.');
  });
}

module.exports = production;

React-fe-cli complete code: github.com/dadaiwei/re…

Q4. What are the differences between CSR and SSR? How can they be combined?

Answer:

1.SSR

SSR stands for Server Side Rendering, meaning rendering is done on the server. Once the browser receives the complete structure, it can directly parse it, build the DOM, load resources, and render.

Advantages:

  • 1. The first screen loads quickly
  • 2. Friendly to search engines and conducive to SEO

Disadvantages:

  • 1. When the traffic volume is large, the server is under great pressure
  • 2. The experience of frequent refreshing between pages is not very good

2.CSR

The full name of CSR is Client Side Rendering. The server returns the initial HTML content and then asynchronously loads the data through JS to complete the Rendering of the page. SPA applications developed based on Vue or React are typical CSR cases.

Advantages:

  • 1. Page routing is placed on the client, and switching between pages is quick
  • 2. Data rendering is placed on the client, greatly reducing the pressure on the server

Disadvantages:

  • 1. The rendering of the first screen is slow, which may lead to a blank screen
  • 2. It’s bad for SEO

3. A combination of the two

The home page is based on SSR, and the interaction of subsequent clicks and other events is based on CSR rendering, which can avoid the slow loading of the home page and solve the SEO problem.

Isomorphism is used for client and server code.

Use the renderToString method of react-dom/server to render the Index component directly:

const React = require('react');
const { renderToString } = require('react-dom/server');
const http = require('http');

// The component
class Index extends React.Component {
  constructor(props) {
    super(props);
  }
  render() {
    return <h1>{this.props.data.title}</h1>;
  }
}

// The server (the data prop here is a placeholder for data fetched on the server)
http.createServer((req, res) => {
  if (req.url === '/') {
    res.writeHead(200, {
      'Content-Type': 'text/html'
    });
    const html = renderToString(<Index data={{title: 'Hello SSR'}} />);
    res.end(html);
  }
}).listen(8080);

The client uses ReactDOM.hydrate instead of ReactDOM.render. hydrate attaches to HTML content already rendered by ReactDOMServer: React tries to bind event listeners to the existing markup.

import ReactDOM from 'react-dom';

// Events bound in the Index component are attached to the server-rendered page
ReactDOM.hydrate(<Index />, document.getElementById('root'));

Reference article:

React SSR server rendering and isomorphism

React SSR: + 2 projects

Q5. Redux source code design

Answer:

Concepts:

Store: the state manager; it exposes the current state and dispatches actions.

State: the global state tree shared by all components.

Action: a plain object describing an operation (asynchronous, click, or other event types); it must contain a type field.

Reducer: processes data differently according to the action type and current state and returns a new state; actions are handled by reducers.

Methods:

dispatch: modifies the corresponding state according to the type of the action object passed in.

subscribe: adds a new subscriber and returns an unsubscribe function.

replaceReducer: replaces the current reducer.

The store part: createStore returns the store object:

export default function createStore(reducer, initialState) {
  let currentState = initialState;   // the state tree
  let currentReducer = reducer;      // the current reducer
  let currentListeners = [];         // the array of listeners

  // Get the current state
  function getState() {
    return currentState;
  }

  // Subscribe; returns an unsubscribe function
  function subscribe(listener) {
    currentListeners.push(listener);
    return function unsubscribe() {
      const index = currentListeners.indexOf(listener);
      currentListeners.splice(index, 1);
    };
  }

  // Replace the current reducer
  function replaceReducer(nextReducer) {
    currentReducer = nextReducer;
    dispatch({ type: 'REPLACE' });
  }

  // Dispatch an action
  function dispatch(action) {
    currentState = currentReducer(currentState, action);
    for (let i = 0; i < currentListeners.length; i++) {
      const listener = currentListeners[i];
      listener();
    }
    return action;
  }

  // Initialize the state
  dispatch({ type: 'INIT' });

  return {
    getState,
    subscribe,
    replaceReducer,
    dispatch
  };
}

combineReducers: splits the reducer so that each reducer is responsible for only one part of the state. The state parameter of each sub-reducer corresponds to the slice it manages, and a new combined state is returned at the end.

function combineReducers(reducers) {
  const reducerKeys = Object.keys(reducers);
  // The final reducer object
  const finalReducers = {};
  for (let i = 0; i < reducerKeys.length; i++) {
    const key = reducerKeys[i];
    finalReducers[key] = reducers[key];
  }
  const finalReducerKeys = Object.keys(finalReducers);

  // Run every reducer for the action and combine the results into the final state
  return function combination(state = {}, action) {
    const nextState = {};
    // Flag marking whether the state changed
    let hasChanged = false;
    for (let j = 0; j < finalReducerKeys.length; j++) {
      const key = finalReducerKeys[j];
      const reducer = finalReducers[key];
      const previousStateForKey = state[key];
      const nextStateForKey = reducer(previousStateForKey, action);
      nextState[key] = nextStateForKey;
      hasChanged = hasChanged || nextStateForKey !== previousStateForKey;
    }
    // Return the new state if anything changed, otherwise the previous state
    return hasChanged ? nextState : state;
  };
}

applyMiddleware: enhances the store to support middleware such as redux-thunk or redux-saga.

export default function applyMiddleware(...middlewares) {
  return createStore => (reducer, initialState) => {
    const store = createStore(reducer, initialState);
    let dispatch = store.dispatch;
    const middlewareAPI = {
      getState: store.getState,
      dispatch: action => dispatch(action)
    };
    // Give each middleware access to getState and dispatch
    const chain = middlewares.map(middleware => middleware(middlewareAPI));
    // Wrap dispatch layer by layer; each layer may call dispatch itself,
    // or call next to hand over to the next layer; the innermost next is
    // store.dispatch itself
    dispatch = compose(...chain)(store.dispatch);
    // Return a new store object whose dispatch has been wrapped by the middleware
    return {
      ...store,
      dispatch
    };
  };
}

Reference article:

10 lines of code to see the redux implementation

Deep into the source code: Unpack redux’s design and usage

Q6. What is the difference between Redux and Vuex?

Flow of VUex:

View — > Commit — > Mutations — > State changes — > View changes (synchronous); View — > Dispatch — > Actions — > Mutations — > State changes — > View changes (asynchronous)

Redux flow: View — > Dispatch — > Actions — > Reducer — > State changes — > View changes

Difference:

  • 1. Vuex replaces Redux's reducer with mutation functions; you only need to change the state in the corresponding mutation function.
  • 2. In Vuex, the state is directly bound to component instances, and the view re-renders automatically when the state changes, without subscribing to a re-render function. Redux stores the whole application's state in a store object; when the state changes it is passed down from the top level, and each level updates by comparing the old and new state.
  • 3. Vuex supports asynchronous processing in actions, while Redux itself only handles synchronous flows; for asynchronous processing, use redux-thunk or redux-saga.
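As an illustration of the last point, the core of a redux-thunk-style middleware is only a few lines: intercept function-type actions and invoke them with dispatch and getState. This is a sketch from memory, not the real redux-thunk package (which also supports an extra argument); the mini store below is hand-rolled purely to demonstrate the middleware without Redux itself.

```javascript
// A minimal redux-thunk-style middleware: if the dispatched action is a
// function, call it with (dispatch, getState) instead of passing it on.
const thunk = ({ dispatch, getState }) => next => action => {
  if (typeof action === 'function') {
    return action(dispatch, getState);
  }
  return next(action);
};

// Tiny hand-rolled store to demonstrate the middleware
function createMiniStore(reducer, middleware) {
  let state = reducer(undefined, { type: 'INIT' });
  const store = {
    getState: () => state,
    dispatch: action => {
      state = reducer(state, action);
      return action;
    }
  };
  // Wrap the raw dispatch with the middleware
  store.dispatch = middleware({
    dispatch: a => store.dispatch(a),
    getState: store.getState
  })(store.dispatch);
  return store;
}

const counter = (state = 0, action) => (action.type === 'ADD' ? state + 1 : state);
const store = createMiniStore(counter, thunk);

// A thunk action creator: the returned function receives dispatch and getState
const addTwice = () => (dispatch, getState) => {
  dispatch({ type: 'ADD' });
  dispatch({ type: 'ADD' });
};

store.dispatch(addTwice());
console.log(store.getState()); // 2
```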

Q7. The principle of Express/Koa middleware

Answer:

Node itself has no concept of "middleware". Middleware in Express/Koa is usually a function that receives request (the request object), response (the response object), and next (a function provided by the framework that passes control from one middleware to the next). It receives the request, processes the response, and then passes control to the next middleware.

function middlewareA(req, res, next) {
  next();
}

For example, a custom print log middleware:

const app = express();

const myLogger = function (req, res, next) {
  console.log('LOGGED');
  next(); // Pass control to the next middleware
};

app.use(myLogger); // Register the middleware
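Koa differs in that its middleware receive (ctx, next) and next() returns a promise, which produces the "onion model": code after await next() runs on the way back out. Below is a minimal hand-rolled sketch of Koa-style composition, not the real koa-compose implementation.

```javascript
// Minimal Koa-style middleware composition (the "onion model"):
// each middleware can run code before and after the rest of the chain.
function compose(middlewares) {
  return function (ctx) {
    function dispatch(i) {
      const fn = middlewares[i];
      if (!fn) return Promise.resolve();
      // next() invokes the following middleware and returns a promise
      return Promise.resolve(fn(ctx, () => dispatch(i + 1)));
    }
    return dispatch(0);
  };
}

const ctx = { trace: [] };

const mw1 = async (ctx, next) => {
  ctx.trace.push('1 in');
  await next();
  ctx.trace.push('1 out'); // runs after all inner middleware finish
};

const mw2 = async (ctx, next) => {
  ctx.trace.push('2 in');
  await next();
  ctx.trace.push('2 out');
};

compose([mw1, mw2])(ctx).then(() => {
  console.log(ctx.trace.join(' -> ')); // 1 in -> 2 in -> 2 out -> 1 out
});
```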

Q8. Nginx: forward proxy, reverse proxy, and load balancing

Answer:

(1) Forward proxy

A forward proxy serves the client: it lets the client access servers that cannot be reached directly by proxying the client's requests. Circumvention ("over the wall") tools are the most common forward proxies. The forward proxy is transparent to the client and opaque to the server.

Example nginx configuration:

server {
    resolver 8.8.8.8;                # Specify the DNS server IP address
    listen 80;
    location / {
        proxy_pass http://$host;     # Set the protocol and address of the proxied server
        proxy_set_header HOST $host;
        proxy_buffers 256 4k;
        proxy_max_temp_file_size 0k;
        proxy_connect_timeout 30;
        proxy_send_timeout 60;
        proxy_read_timeout 60;
        proxy_next_upstream error timeout invalid_header http_502;
    }
}

(2) Reverse proxy

A reverse proxy serves the server: it receives requests from clients, forwards them to the servers, and performs load balancing. Reverse proxies are also commonly used to solve front-end cross-domain problems. The reverse proxy is transparent to the server and opaque to the client.

Example nginx configuration:

server {
    listen 80;
    server_name xx.a.com;              # Listening address
    location / {
        root html;
        proxy_pass http://xx.b.com;    # Forwarding address
        index index.html;              # Set the default page
    }
}

(3) Load balancing

Load balancing means that Nginx distributes a large number of client requests reasonably across the servers, making full use of server resources and responding to requests faster.

Example nginx configuration:

upstream balanceServer {
    server 1.1.2.3:80;
    server 1.1.2.4:80;
    server 1.1.2.5:80;
}

server {
    server_name xx.a.com;
    listen 80;
    location /api {
        proxy_pass http://balanceServer;
    }
}

Common load balancing policies:

  • 1. Round-robin (the default policy): traverse the server node list and assign requests one by one. If a server goes down, it is automatically removed.
  • 2. Weighted round-robin: each server is given a different weight. Generally, a larger weight means a better-performing server that can handle more requests.
upstream balanceServer {
    server 1.1.2.3:80 weight=5;
    server 1.1.2.4:80 weight=10;
    server 1.1.2.5:80 weight=20;
}
  • 3. Least connections: requests are assigned first to the servers under the least pressure, preventing heavily loaded servers from receiving even more requests.
upstream balanceServer {
    least_conn;
    server 1.1.2.3:80;
    server 1.1.2.4:80;
    server 1.1.2.5:80;
}
  • 4. Fastest response time (fair): requests are assigned first to the server with the shortest response time.
upstream balanceServer {
    fair;
    server 1.1.2.3:80;
    server 1.1.2.4:80;
    server 1.1.2.5:80;
}
  • 5. Client IP binding (ip_hash): requests from the same IP address are always assigned to the same server.
upstream balanceServer {
    ip_hash;
    server 1.1.2.3:80;
    server 1.1.2.4:80;
    server 1.1.2.5:80;
}

Reference article:

Understand Nginx from principle to practice

Essential Nginx knowledge for front-end developers

conclusion

The above is my summary of these interview questions and my experience with them. If you found it rewarding, please follow and give it a like. Writing all this up was not easy. Thank you very much.