Fairy – A front-end/back-end separation framework
A complete framework that supports both front-end/back-end separation and middle-layer isomorphism. It may not be perfect yet, but the problems I ran into while building it are all listed here, so that others who hit the same issues no longer need to search everywhere. I hope it helps anyone building a similar framework. The documentation will be updated and improved regularly — watch the project to follow the updates — and I hope it eventually grows into a complete, polished framework. If these notes help you, please give the repo a star, thanks!
Why is it called that?
It was a gift for a cat called Fairy Mo
- Route synchronization: react-router and koa-router share the same route definitions
- Template synchronization: the view layer is shared, implemented with react-dom/server
- Data synchronization: implemented with Redux and react-redux
- CSS Modules synchronization: the CSS Modules class names generated on the front and back ends are guaranteed to match
- Webpack hot reloading and component optimization
How to install
Create a MySQL database with a tool such as phpMyAdmin, then configure the database name, user name and password in server/config/db.json
npm i
npm start
- Route synchronization (the front and back ends share the same routes)
- Template synchronization (the front and back ends share one set of templates)
- Data synchronization (the front and back ends share one set of state machines)
Comparing isomorphic loading with the previous non-isomorphic loading, you can clearly see that the white-screen time is shorter and the overall page load is faster.
Non-isomorphic vs. isomorphic
Want to build a framework like this from start to finish? Let me write down the build process in as much detail as possible.
For a framework, the most important thing is the architecture. Since we are building something that is isomorphic between the front end and the middle layer, both need to live in one project. To keep the environments used by the front and back ends relatively independent — so that the front-end part can be extracted separately for production, and the back-end part stays clear to read and manage — the framework is split into two parts, a client folder and a server folder. The front-end directory structure is fairly conventional, so we simply follow the officially recommended layout and separate view, router and store. On the server side we stick with the classic MVC architecture, separating the control layer, data layer and presentation layer. This decouples the back-end business logic and makes later maintenance and the addition of new features easier.
The directory structure is as follows:
├── assets              front-end build output
│   └── dist
│       ├── css
│       ├── img
│       └── js
├── client              front-end code
│   └── src
│       ├── actions     redux actions
│       ├── css
│       ├── img
│       ├── js
│       ├── reducers    redux reducers
│       ├── route       front-end route definitions
│       ├── store       front-end redux store (state control)
│       └── view        front-end views (React components)
├── public              static files served by the back end
└── server              back-end code (MVC)
    ├── auth            authentication
    ├── config          configuration (e.g. db.json)
    ├── containers      controller layer
    ├── models          data layer (Sequelize models)
    ├── route           back-end routes
    └── view            the page "shell" – thanks to template synchronization, the back end only generates the outer shell of the page
Preface
Why use middle-layer isomorphism? What is middle-layer isomorphism?
Before isomorphism, the back end exposed JSON APIs and the front end received that data, assembled it and rendered the interface. The problem is that whenever the business changes, the interface changes too, and both front-end and back-end developers have to modify their own code and business logic. The main problems were:
- 1. The front and back ends are developed separately and communicate through JSON interfaces
The back end implements the business logic and assembles the JSON data, while the front end receives the JSON and turns it into the interface — the communication cost is high
- 2. SEO problems
With asynchronous loading and no server-side rendering, search-engine spiders cannot crawl the data, so there is no SEO
In short, the goal is to reduce development cost and improve the overall maintainability of the project. On top of that, isomorphism effectively shortens page load time and provides excellent SEO. It also plays to Node.js' strength in handling high concurrency, which increases the load capacity of the overall architecture, saves hardware cost and improves the project's capacity.
Building such a framework has to happen in a Node.js environment, since Node.js is responsible for the back-end server as well as code bundling and optimization.
How do you install it? Simply download Node.js from the official site.
Windows: install with the official installer. Use the latest version so that newer features such as async/await, which you will need later, are supported.
Mac: I don't recommend the installer — upgrading, switching versions and uninstalling are all inconvenient. Install with brew or nvm instead; guides are easy to find.
2. Build webpack and Babel environment tools
With Node.js in place, the next big hurdle is the build tooling, mainly webpack and Babel. We use webpack 2.0, which had only recently been released at the time.
Webpack is currently the most popular front-end module bundler. It takes many loosely coupled modules and, following their dependencies and a set of rules, packages them into assets suitable for production deployment. It can also split code into modules that are loaded on demand, fetched asynchronously only when they are actually needed. Through loaders, any kind of resource can be treated as a module: CommonJS modules, AMD modules, ES6 modules, CSS, images, JSON, CoffeeScript, Less and so on.
As the official introduction says, webpack is defined as a module management and bundling tool, and that is exactly how we use it: in our front-end workflow it takes care of automated bundling — merging and packaging JS, removing duplicate code, processing style files and so on — automating work that previously had to be done by hand.
Babel is a conversion compiler that converts ES6 into code that can be run in a browser. Babel was created by Sebastian McKenzie, a developer from Australia. His goal was to make Babel able to handle all of ES6’s new syntaxes, and to build in the React JSX extension and support for flow-type annotations.
Babel will automatically convert the new ES6 and ES7 features we use into ES5 syntax that all browsers are compatible with, and will make your code more formal and tidy.
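For example, here is roughly what that conversion looks like — a minimal illustration, not the exact output, which depends on the Babel version and presets:

```js
// ES6 source
const greet = name => `hello ${name}`;

// Roughly the ES5 that Babel emits
"use strict";
var greet = function greet(name) {
  return "hello " + name;
};
```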
When I first used these two tools, webpack's configuration felt very fiddly — lots of loaders and all kinds of options — but once I got used to it, it turned out to be quite simple.
How to configure webpack 2 and make the environment support hot reloading
Since this part is client-side tooling configuration, I won't explain it in exhaustive detail — the official Chinese site is good for reading and learning — but there are a few pitfalls, so let's look at what the webpack 2 configuration looks like. The current environment uses two webpack configuration files.
Why two configuration files? And what is devServer.js?
Development environment configuration files (webpack.config.dev.js & devServer.js)
webpack.config.dev.js is the configuration file for the development environment. It supports hot reloading and hot module replacement, so you can see changes at any time without refreshing, and it enables source maps for debugging pages and scripts. Of course, to get all of that we also need to run it together with a dev server.
Let’s take a look at the code snippet:
// Load the webpack module
var webpack = require('webpack'),
    // Load the plug-in that compiles the HTML template automatically
    HtmlWebpackPlugin = require('html-webpack-plugin'),
    autoprefixer = require('autoprefixer'),
    precss = require('precss'),
    postcsseasysprites = require('postcss-easysprites'),
    // Load the common-chunk plug-in
    CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin;
The header basically lists the plug-ins we need, and the comments explain what each one does. Let me go through them one by one.
webpack = require('webpack'),
This one needs no explanation.
HtmlWebpackPlugin = require('html-webpack-plugin'),
With this plug-in, the template is automatically turned into an HTML page, and the CSS and JS bundles produced by webpack are automatically injected into the page source.
new HtmlWebpackPlugin({
    // Page template path; special templates such as Jade, EJS and Handlebars are also supported
    template: 'src/index.html',
    // Where to inject the assets: body or head
    inject: true,
    // Whether to append a hash to the page's asset URLs to prevent stale caches
    hash: true,
    // Minification: strip comments, keep line breaks
    minify: {
        removeComments: true,
        collapseWhitespace: false
    },
    // Entry chunks to inject into the page. Each must be declared in `entry` or be a chunk
    // extracted by CommonsChunkPlugin – in short, the JS modules this page needs
    chunks: ['index', 'vendor', 'manifest'],
    // Name of the generated HTML file; a path can be included
    filename: 'index.html'
}),
CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin
When webpack bundles, if you don't split the code it produces a single JS file with whatever name you give it, for example:
entry: {
    index: './src/index.js'
}
Without this plug-in, the build would only produce an index.js. We still want to split the bundle and extract the common libraries into their own chunk. Why? To make use of the browser cache: when a new version of the project is deployed, only the updated bundle is replaced while the common chunk stays the same, so users don't have to download the shared JS again. That takes pressure off the server or CDN — and saves money; servers and CDN traffic aren't free, after all.
How is it used? Let's go straight to the code.
Entry configuration
entry: {
    index: './src/index.js',
    vendor: [
        'react', 'react-dom', 'redux', 'react-redux', 'react-router', 'axios'
    ]
}
You can see that the vendor entry lists the common libraries used here; to make the plug-in configuration easier, they are packaged into a separate chunk. The index entry is the script for our single-page application — the code we write ourselves.
The plug-in configuration:
new webpack.optimize.CommonsChunkPlugin({
    // Names of the extracted chunks
    names: ['vendor', 'manifest'],
    // Output name pattern: generated from the path and the chunk name
    filename: jsDir + '[name].js'
}),
Why is there a manifest? Webpack 2 uses it to store module relationships, references and so on. If we didn't extract it, the vendor chunk would change on every build and we would lose the whole point of keeping the vendor bundle unchanged across deployments. With it extracted, every time the project is updated we only need to replace index.js and manifest.js.
autoprefixer = require('autoprefixer'),
// Adds browser compatibility prefixes automatically, mainly for CSS3
precss = require('precss'),
// Lets PostCSS support some of Sass's syntax features
postcsseasysprites = require('postcss-easysprites'),
// CSS sprites: background images are automatically stitched into one image to reduce requests
PostCSS, Less and Sass are all CSS pre-processors; their main purpose is to make CSS easier and faster to write and to add programming features such as loops and variables. Here we chose PostCSS, for simple reasons — among them, its plug-ins can provide the same features as Sass and Less.
Let's look at how webpack processes files. Webpack uses **loaders**: different loaders refine different file types and carry out specific features. Look at the code:
module: {
    // Loader configuration
    rules: [
        {
            test: /\.css$/,
            use: [
                {
                    loader: "style-loader"
                }, {
                    loader: "css-loader",
                    options: {
                        modules: true,
                        camelCase: true,
                        localIdentName: "[name]_[local]_[hash:base64:3]",
                        importLoaders: 1,
                        sourceMap: true
                    }
                }, {
                    loader: "postcss-loader",
                    options: {
                        sourceMap: true,
                        plugins: () => [
                            precss(),
                            autoprefixer({
                                browsers: ['last 3 version', 'ie >= 10']
                            }),
                            postcsseasysprites({imagePath: '../img', spritePath: './assets/dist/img'})
                        ]
                    }
                }
            ]
        }, {
            test: /\.css$/,
            exclude: [path.resolve(srcDir, cssDir)],
            use: [
                {
                    loader: "style-loader"
                }, {
                    loader: "css-loader",
                    options: {
                        importLoaders: 1,
                        sourceMap: true
                    }
                }, {
                    loader: "postcss-loader",
                    options: {
                        sourceMap: true,
                        plugins: () => [
                            precss(),
                            autoprefixer({
                                browsers: ['last 3 version', 'ie >= 10']
                            }),
                            postcsseasysprites({imagePath: '../img', spritePath: './assets/dist/img'})
                        ]
                    }
                }
            ]
        }, {
            test: /\.js$/,
            exclude: /node_modules/,
            use: [
                {
                    loader: "babel-loader",
                    options: {
                        presets: ['react-hmre']
                    }
                }
            ]
        }, {
            test: /\.(png|jpeg|jpg|gif|svg)$/,
            use: [
                {
                    loader: "file-loader",
                    options: {
                        name: 'dist/img/[name].[ext]'
                    }
                }
            ]
        }
    ]
},
Very long. Let’s break it down:
CSS handling — I won't repeat the exact syntax; instead let's look at what each option is and why it's configured this way.
{
    test: /\.css$/,
    use: [
        {
            loader: "style-loader"   // Injects the basic CSS styles into the page
        }, {
            loader: "css-loader",
            options: {
                modules: true,        // Enable CSS Modules
                camelCase: true,      // Expose dashed class/id names in camelCase as well
                localIdentName: "[name]_[local]_[hash:base64:3]", // Format of the generated CSS Modules names
                importLoaders: 1,     // Support @import inside CSS
                sourceMap: true       // Generate CSS source maps, mainly for easier debugging
            }
        }, {
            loader: "postcss-loader", // PostCSS loader; PostCSS plug-in modules can be used here
            options: {
                sourceMap: true,
                plugins: () => [
                    precss(),                                      // Supports some Sass features
                    autoprefixer({
                        browsers: ['last 3 version', 'ie >= 10']   // Automatic CSS3 compatibility
                    }),
                    postcsseasysprites({imagePath: '../img', spritePath: './assets/dist/img'}) // CSS sprites support
                ]
            }
        }
    ]
}, {
    test: /\.css$/,
    exclude: [path.resolve(srcDir, cssDir)],
    use: [
        {
            loader: "style-loader"
        }, {
            loader: "css-loader",
            options: {
                importLoaders: 1,
                sourceMap: true
            }
        }, {
            loader: "postcss-loader",
            options: {
                sourceMap: true,
                plugins: () => [
                    precss(),
                    autoprefixer({
                        browsers: ['last 3 version', 'ie >= 10']
                    }),
                    postcsseasysprites({imagePath: '../img', spritePath: './assets/dist/img'})
                ]
            }
        }
    ]
},
Having looked at the code, let's focus on CSS Modules. The main reason for introducing CSS Modules is that class names are automatically scoped and hashed, so styles can never collide — the obvious benefit is that you no longer worry about naming conflicts when working in a team. CSS Modules also discourages deep cascading and reduces reliance on parent selectors; flatter styles are more general and easier to reuse.
The reason there are two CSS rules is simple: we must prevent CSS Modules from being applied to styles that come from other packages. The second rule, with exclude: [path.resolve(srcDir, cssDir)] and with the CSS Modules options removed, makes sure that CSS outside the specified folder is not affected by CSS Modules.
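To make the effect concrete, here is a minimal sketch of how a component consumes a CSS Modules file — the file name, class names and hash shown are made up for illustration; the real usage appears in the login component later in this article:

```js
// banner.css (hypothetical):
//   .banner { height: 300px; }
//   .text_logo { margin: 0 auto; }

import React from 'react';
// The imported object maps local class names to their hashed versions,
// e.g. Banner.banner === "banner_banner_3xK" with the localIdentName configured above
import Banner from './banner.css';

const BannerView = () => (
    <div className={Banner.banner}>
        <p className={Banner.text_logo}>logo</p>
    </div>
);

export default BannerView;
```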
Let’s look at the configuration of the output
output: {
    path: assetsDir,                 // Output directory for the bundled JS files
    filename: jsDir + '[name].js',   // Output file-name pattern
    publicPath: '/'                  // Public path prefixed to all assets; its exact purpose is covered with the production config
},
Finally, let's look at the devtool option. With it, source maps are generated automatically, which makes debugging and development much easier — compressed code can hardly be debugged, whereas a source map maps it back to the original code and points to the exact location.
devtool: 'source-map',
Since this is the development environment, one important feature we need is React hot reloading. What is hot reloading? Using webpack-dev-server and react-hot-loader 3, changes to the React code and to the Redux state logic are applied without refreshing the page — modules are hot-reloaded and replaced in place. Let's look at the configuration:
Create devServer.js to configure and start the dev server and attach the react-hot-loader 3 plug-in. The code is as follows:
// Load Node's path module
const path = require('path');
// Load the webpack module
const webpack = require('webpack');
// const express = require('express');
const WebpackDevServer = require('webpack-dev-server');
// Load the webpack configuration file
const config = require('./webpack.config.dev');

// Configure and initialize the dev server
var createServer = () => {
    // Initialize the webpack compiler
    let compiler = webpack(config);
    // Create the hot-reloading dev server with its options
    let app = new WebpackDevServer(compiler, {
        publicPath: config.output.publicPath, // Output path; the files live in memory, so you won't see them on disk
        hot: true,                            // Enable hot module replacement
        historyApiFallback: true,             // Fall back to the index page so history-based routing (react-router) works
        stats: {
            colors: true                      // Colored output
        }
    });
    // Listen on port 5000 for testing and development
    app.listen(5000, function (err) {
        if (err) {
            console.log(err);
        }
        console.log('Listening at localhost:5000');
    });
};
// Start the dev server
createServer();
webpack-dev-server is a small Express-style server: it lets you run a simple local server with Node.js and supports hot replacement. It watches the webpack build and hot-reloads the application, but on its own it doesn't give us everything we need. For example, we hit problems with Redux: hot replacement does not preserve state, so every time we saved, the React components lost their state. To solve this we bring in another plug-in, react-hot-loader. There are several ways to use it; here is one.
Step 1: add the first three entries at the top of the entry configuration
entry: {
    index: [
        'react-hot-loader/patch',
        'webpack-dev-server/client?http://0.0.0.0:5000',
        'webpack/hot/only-dev-server',
        './src/index.js'
    ]
}
Step 2: Add hot update plug-ins to webpack
new webpack.HotModuleReplacementPlugin(),
Step 3: add the corresponding hot-reloading plug-in to Babel
We add a .babelrc file in the root directory to hold the Babel configuration.
All we need to add is the hot-reloading plug-in:
{
    "plugins": ["react-hot-loader/babel"]
}
So let’s look at the configuration of Babel
{
"presets": ["react"."es2015"."stage-0"."stage-1"."stage-2"."stage-3"]."plugins": ["transform-class-properties"."transform-es2015-modules-commonjs"."transform-runtime"."react-hot-loader/babel"]}Copy the code
We use quite a few presets and plug-ins with Babel, and they're easy to install. The packages we use are the following:
"babel-cli": "^ 6.23.0"."babel-core": "^ 6.6.5"."babel-eslint": "^ 6.1.0"."babel-loader": "^ 6.2.4"."babel-plugin-transform-class-properties": "^ 6.11.5"."babel-plugin-transform-es2015-modules-commonjs": "^ 6.23.0"."babel-plugin-transform-react-jsx": "^ 6.23.0"."babel-plugin-transform-require-ignore": "^ hundreds"."babel-plugin-transform-runtime": "^ 6.23.0"."babel-polyfill": "^ 6.23.0"."babel-preset-es2015": "^ 6.3.13"."babel-preset-react": "^ 6.3.13"."babel-preset-react-hmre": "^ 1.1.1"."babel-preset-stage-0": "^ 6.22.0"."babel-preset-stage-1": "^ 6.22.0"."babel-preset-stage-2": "^ 6.13.0"."babel-preset-stage-3": "^ 6.22.0"."babel-register": "^ 6.23.0"."babel-runtime": "^ 6.23.0".Copy the code
Finally, let's take a look at the production webpack configuration file.
ExtractTextPlugin = require('extract-text-webpack-plugin'),
When webpack bundles the code, by default the styles end up injected into the page. If you want to reference the styles as a separate file, you need this plug-in: it lets the page load the CSS through a link tag. For larger stylesheets that's preferable — the browser can cache the CSS, which reduces the data the server has to send. The configuration is as follows:
new ExtractTextPlugin('dist/css/style.css'),
Here all styles are extracted into a single style.css, but you can of course split them per chunk:
new ExtractTextPlugin(cssDir + '[name].css'),
// Load the JS compression plug-in
UglifyJsPlugin = webpack.optimize.UglifyJsPlugin,
This plug-in minifies the JS down to the most compact form and greatly reduces the generated file size. The configuration is as follows:
new UglifyJsPlugin({
    // Most compact output
    beautify: false,
    // Remove all comments
    comments: false,
    compress: {
        // Don't print warnings when UglifyJS removes unused code
        warnings: false,
        // Remove all `console` statements (also keeps the output IE-compatible)
        drop_console: true,
        // Inline variables that are defined but used only once
        collapse_vars: true,
        // Extract static values that occur several times into variables
        reduce_vars: true
    }
})
OK, the webpack and Babel configuration is done and we can start developing. The main difficulty with this part used to be the scarcity of material; now that the official Chinese site exists, things are much better.
React is the core of the framework, and the main "black magic" is built on the methods React provides. Let's first look at the React life cycle.
ReactJS life cycle
The life cycle of ReactJS can be divided into three phases: instantiation, existence, and destruction
Instantiation
When a component is first instantiated, the following are called in order:
- getDefaultProps
- getInitialState
- componentWillMount
- render
- componentDidMount
getDefaultProps => getInitialState => componentWillMount => render => componentDidMount
Existence
When a component already exists and its state changes, the following are called:
- componentWillReceiveProps
- shouldComponentUpdate
- componentWillUpdate
- render
- componentDidUpdate
componentWillReceiveProps => shouldComponentUpdate => componentWillUpdate => render => componentDidUpdate
Destruction
- componentWillUnmount
The role of these 10 APIs in the life cycle:
- getDefaultProps: called only once on the component class; it returns the object used as the default props. Reference values returned here are shared by all instances.
- getInitialState: called once for each instance when it is created, to initialize that instance's state; this.props is already accessible here.
- componentWillMount: called before the first render; the component's state can still be modified at this point.
- render: mandatory; creates the virtual DOM. Special rules apply: data may only be accessed through this.props and this.state; it may return null, false or any React component; and only one top-level component may be returned.
- componentDidMount: called after the real DOM has been rendered. The real DOM element can be reached via this.getDOMNode(), so other libraries can manipulate the DOM here. This method is not called on the server.
- componentWillReceiveProps: called when the component receives new props, which are passed in as nextProps. The component's props and state can be adjusted here.
- shouldComponentUpdate: decides whether the component should re-render for new props or state; returning false skips the remaining lifecycle methods. Usually you don't need it (and misuse can cause bugs), but it is a useful optimization point when performance becomes a bottleneck.
- componentWillUpdate: called after new props or state are received, just before rendering. Updating props or state is not allowed here.
- componentDidUpdate: called after rendering with the new props or state; the DOM elements can be accessed at this point.
- componentWillUnmount: called before the component is removed; used for cleanup. Everything set up in componentDidMount — timers, event listeners and so on — must be torn down here.
var React = require("react");
var ReactDOM = require("react-dom");
var NewView = React.createClass({
//1. Creation phase
getDefaultProps:function() {
console.log("getDefaultProps");
return {};
},
//2. Instantiation phase
getInitialState:function() {
console.log("getInitialState");
return {
num:1
};
},
//This is where the business logic should be called before render, such as operations on state, etc
componentWillMount:function() {
console.log("componentWillMount");
},
//Render and return a virtual DOM
render:function() {
console.log("render");
return(
<div>
hello <strong> {this.props.name} </strong>
</div>
);
},
//This method occurs after the Render method. In this method, ReactJS uses render to generate the returned virtual DOM object to create the real DOM structure
componentDidMount:function() {
console.log("componentDidMount");
},
//3. Update phase
componentWillReceiveProps:function() {
console.log("componentWillReceiveProps");
},
//Whether it needs to be updated
shouldComponentUpdate:function() {
console.log("shouldComponentUpdate");
return true;
},
//State and props cannot be updated in this method
componentWillUpdate:function() {
console.log("componentWillUpdate");
},
//Updated completely
componentDidUpdate:function() {
console.log("componentDidUpdate");
},
//4. Destruction stage
componentWillUnmount:function() {
console.log("componentWillUnmount");
},
    // Handle click events
    handleAddNumber: function () {
        this.setProps({name: "newName"});
    }
});
ReactDOM.render(<NewView name="ReactJS"></NewView>, document.body);
Now let's talk about the "black magic": React isomorphism.
The official website provides us with two methods for the server to render the page
- React.renderToString converts a React element into an HTML string. Because the markup already carries the react-id attributes from the server render, the browser does not have to render it again, which gives a high-performance first page load. The isomorphic "black magic" comes primarily from this API.
- React.renderToStaticMarkup is a simplified version of renderToString. If your page is essentially static content, use this method instead: it drops the many react-id attributes, which slims down the DOM tree and saves some bytes on the wire.
With renderToString, the server render produces an HTML string carrying identifiers. When the page's JS runs in the browser, it detects those identifiers, skips re-rendering and only binds the events, saving the React rendering work on the client.
So when the user refreshes, the back end renders the page and sends the HTML string to the front end, which only attaches the event bindings; when the user clicks a link inside the React app, React renders and fills the page on the client. The result: without a refresh, page switches are instant, and on a refresh the user sees the interface immediately instead of waiting for the JS to finish loading. That is one of the great advantages of isomorphism.
Now that we understand the principle, we also need to look at how the front-end and back-end views are kept in sync. So how do we do that?
The view layer is very simple. We create a file in the client view folder and write the React component as follows:
"use strict";
import React, {Component} from 'react';
import ReactDOM, {render} from 'react-dom';
import Nav from '../view/nav.js';
import LoginForm from '../view/components/login_form';
import logo_en from '../dist/img/text_logo.png';
import logo_cn from '../dist/img/text_logo_cn.png';
import '../dist/css/reset.css';
import Login from '../dist/css/login.css';
import Style from '../dist/css/style.css';
class App extends Component {
constructor(props) {
super(props);
//Initialize method, inheriting the parent props method
}
render() {
//Save the HTML code structure in memory and render a piece of HTML code
return (
<div>
<Nav/>
<div className={Login.banner}>
<p className={Login.text_logo}>
<img width="233" src={logo_en}/>
</p>
<p className={Login.text_logo_cn}>
<img width="58" src={logo_cn}/>
</p>
</div>
<LoginForm/>
<div className={Login.form_reg}>No account yet?
<a href="#">Sign up on ListenLite now</a>
</div>
            </div>
        );
    }
}

export default App;
The component above is very simple. How does the back end render it on the server? That is just as simple — let's look at the code:
"use strict";
import React from 'react';
import {renderToString.renderToStaticMarkup} from 'react-dom/server';
import {match.RouterContext} from 'react-router';
import {layout} from '../view/layout.js';
import {Provider} from 'react-redux';
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';
import passport from 'koa-passport';
import routes from '. /.. /client/src/route/router.js';
import configureStore from '. /.. /client/src/store/store.js';
import db from '../config/db.js';
import common from '. /.. /common.json';
const User = db.User;
//get page and switch json and html
export async function index(ctx.next) {
console.log(ctx.state.user.ctx.isAuthenticated());
if (ctx.isAuthenticated()) {
ctx.redirect('/');
}
switch (ctx.accepts("json"."html")) {
case "html":
{
match({
routes,
location: ctx.url},error.redirectLocation.renderProps) = > {
if (error) {
console.log(500)}else if (redirectLocation) {
console.log(302)}else if (renderProps) {
//iinit store
let loginStore = {
user: {
logined: ctx.isAuthenticated()}};const store = configureStore(loginStore);
ctx.body = layout(renderToString(
<Provider store={store}>
<RouterContext {.renderProps}/>
</Provider>
), store.getState());
} else {
console.log(404); }})}break;
case "json":
{
let callBackData = {
'status': 200.'message': 'This is the login page'.'data':{}};ctx.body = callBackData;
}
break;
default:
{
// allow json and html only
ctx.throw(406."allow json and html only");
return; }}};Copy the code
First create a controller in the server/containers folder corresponding to the page above, then import the required methods:
import {renderToString, renderToStaticMarkup} from 'react-dom/server';
At this point we can use server-side rendering. In the snippet below you can see the rendering part: the React component is rendered straight to HTML. I won't go into every detail here — the other isomorphic pieces are explained in their own sections — for now you only need to understand how it is invoked:
renderToString(
    <Provider store={store}>
        <RouterContext {...renderProps}/>
    </Provider>
), store.getState()
The generated string looks like this
<div data-reactroot="" data-reactid="1" data-react-checksum="978259924"><ul class="style_nav_2Lm" data-reactid="2"><li class="style_fl_10U" data-reactid="3"><a href="/" data-reactid="4">Home</a></li><li class="style_fl_10U" data-reactid="5"><a href="/404" data-reactid="6">404</a></li>
<!-- ...the rest of the nav, banner and login form markup, every node carrying a data-reactid attribute... -->
<div class="login_form_reg_32l" data-reactid="45"><!-- react-text: 46 -->No account yet?<!-- /react-text --><a href="#" data-reactid="47">Sign up on ListenLite now</a></div></div>
You can see that only the page component itself is rendered — the document wrapper that index.html provides on the client is missing. So we write a document "shell" on the server to wrap the generated markup; that is the layout.js file in server/view:
'use strict';
import common from './../common.json';
exports.layout = function (content, data) {
    return `
    <html>
    <head>
        <meta charSet='utf-8'/>
        <meta httpEquiv='X-UA-Compatible' content='IE=edge'/>
        <meta name='renderer' content='webkit'/>
        <meta name='keywords' content='demo'/>
        <meta name='description' content='demo'/>
        <meta name='viewport' content='width=device-width, initial-scale=1'/>
        <link rel="stylesheet" href="/dist/css/style.css">
    </head>
    <body>
        <div id="root"><div>${content}</div></div>
        <script>
            window.__REDUX_DATA__ = ${JSON.stringify(data)};
        </script>
        <script src="${common.publicPath}dist/js/manifest.js"></script>
        <script src="${common.publicPath}dist/js/vendor.js"></script>
        <script src="${common.publicPath}dist/js/index.js"></script>
    </body>
    </html>
    `;
};
Put simply, the generated content is poured into this shell.
4. Synchronize front-end and back-end routes
Now that we understand the React life cycle and have seen how the server uses the two official rendering methods, let's look at how to share one set of routes between the front and back ends.
What is a React-router? Take a look at the official introduction
React Router is the complete React routing solution
React Router keeps the UI and the URL in sync. It has a simple API with powerful features such as lazy code loading, dynamic route matching and location transition handling built in. Make the URL your first thought, not an afterthought.
Traditionally the browser sends a request to the back-end server and the back end returns the content. Now react-router takes over and navigation is handled on the front end. Why is this possible? Thanks to HTML5's new History API.
The HTML5 History API can change the address-bar URL without refreshing, and combined with AJAX this allows navigation without a full reload. Simply put: assuming the current page is renfei.org/, executing the JavaScript statement window.history.pushState(null, null, "/profile/"); changes the address bar to renfei.org/profile/, but the browser does not refresh the page or even check whether the target page exists.
So what does react-router do in middle-layer isomorphism? It is there to synchronize the routes between the front and back ends:
- When a user first accesses a page, the server route handles the request and outputs the page content
- When the user clicks a link, the client route handles it, rendering and displaying the related components
- When the user refreshes after a client-side navigation, the server route intercepts the request and the server renders and returns the page content
Let's look at the code. The front-end routes are configured as follows:
const Routers = (
    <Router history={browserHistory}>
        <Route path="/" component={Home}/>
        <Route path="/user" component={User}/>
        <Route path="/login" component={Login}/>
        <Route path="/reg" component={Reg}/>
        <Route path="/logout" component={Logout}/>
        <Route path="*" component={Page404}/>
    </Router>
);

export default Routers;
You can see we use browserHistory, which relies on the HTML5 History API and therefore needs browser support (IE6–8 don't have it). The alternative is hashHistory: it puts the route in the URL hash, in the form site.com/#/index, using the anchor mechanism so the browser history still records every page switch.
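For comparison, a minimal sketch of the same router wired to hashHistory instead — Home here is just a stand-in for the components defined in this project:

```js
import React from 'react';
import { Router, Route, hashHistory } from 'react-router';

const Home = () => <div>Home</div>;   // placeholder component for illustration

// URLs will look like site.com/#/ instead of site.com/
const Routers = (
    <Router history={hashHistory}>
        <Route path="/" component={Home}/>
    </Router>
);

export default Routers;
```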
Take a look at the configuration snippet on the server side:
const router = new Router();

// Index page route
router.get('/', require('../containers/index.js').index);
// User page routes
router.get('/user', require('../containers/user.js').index);
router.get('/get_user_info', require('../containers/user.js').getUserInfo);
// 404 page route
router.get('/404', require('../containers/404.js').index);
// Login page routes
router.get('/login', require('../containers/login.js').index);
router.post('/login', require('../containers/login.js').login);
router.get('/logout', require('../containers/login.js').logout);
// Reg page routes
router.get('/reg', require('../containers/reg.js').index);
router.post('/reg_user', require('../containers/reg.js').reg);
router.post('/vaildate_user', require('../containers/reg.js').vaildate_user);
router.post('/vaildate_email', require('../containers/reg.js').vaildate_email);

// export the routes
module.exports = router.routes();
Our server uses Koa 2, so the routing layer is the corresponding koa-router. Since the back end follows an MVC structure, let's look at a snippet of the controller layer that the routes point to:
import routes from './../client/src/route/router.js';

export async function index(ctx, next) {
    console.log(ctx.state.user, ctx.isAuthenticated());
    switch (ctx.accepts("json", "html")) {
        case "html": {
            match({
                routes,
                location: ctx.url
            }, (error, redirectLocation, renderProps) => {
                if (error) {
                    console.log(500);
                } else if (redirectLocation) {
                    console.log(302);
                } else if (renderProps) {
                    // init store
                    let loginStore = {user: {logined: ctx.isAuthenticated()}};
                    const store = configureStore(loginStore);
                    console.log(store.getState());
                    ctx.body = layout(renderToString(
                        <Provider store={store}>
                            <RouterContext {...renderProps}/>
                        </Provider>
                    ), store.getState());
                } else {
                    console.log(404);
                }
            });
            break;
        }
        case "json": {
            let callBackData = {
                'status': 200,
                'message': 'This is the home page',
                'data': {}
            };
            ctx.body = callBackData;
            break;
        }
        default: {
            // allow json and html only
            ctx.throw(406, "allow json and html only");
            return;
        }
    }
}
Here we use react-router's match method. It reads the front-end route definitions, matches the requested path to the corresponding module, and hands the result to React for direct server rendering.
You can also see the route design: with Koa 2 on the back end we can check the type the request accepts, which makes full use of a single URL — the same address can return either HTML or JSON depending on the request type, giving the routes much richer uses.
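To make that concrete, here is a minimal sketch of how the client could ask the same URL for JSON instead of HTML — axios (already in the vendor bundle) only needs to send an Accept header for ctx.accepts() to pick the JSON branch:

```js
import axios from 'axios';

// Same address as the page itself; the Accept header selects the JSON case
axios.get('/login', {
    headers: { 'Accept': 'application/json' }
}).then(res => {
    // e.g. { status: 200, message: 'This is the login page', data: {} }
    console.log(res.data);
}).catch(err => {
    console.log(err);
});
```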
What is Redux?
Redux provides a one-way data flow similar to Flux, maintains only one store for the whole application, and takes a functional approach, which makes it friendly to server-side rendering.
The official advice is: if you don't need it, don't use it. But in development I've found that once the application gets complicated, Redux really earns its keep. As we know, with React we don't manipulate the DOM directly when the interface needs to change — we only change state and let the DOM update. Once the program grows complex, however, state scattered across components becomes hard to maintain and modify. With Redux, the state of all components is controlled uniformly at the top level, which reduces state passing between components, lowers coupling and makes the program easier to maintain.
Redux is hard to grasp at first, especially its Flux-style architecture, but after a careful read of the official docs it turns out to be very simple data handling: all state changes go through actions and reducers, and data may not be mutated anywhere else, which guarantees consistency. You can also keep every state along the way and return to any previous one — genuinely refreshing for complex applications, even for some games.
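As a reference for how the pieces fit together, here is a minimal sketch of the kind of reducer and configureStore that client/src/store/store.js and the reducers folder could contain — the action name and state shape are illustrative, apart from the user.logined flag used later in this article:

```js
import { createStore, combineReducers } from 'redux';

// Illustrative action creator
const login = () => ({ type: 'USER_LOGIN' });

// The `user` slice mirrors the { logined } state the server render passes in
const user = (state = { logined: false }, action) => {
    switch (action.type) {
        case 'USER_LOGIN':
            return { ...state, logined: true };
        default:
            return state;
    }
};

// configureStore takes an initial state, so it works on the server
// (state read from the session) and on the client (window.__REDUX_DATA__)
export default function configureStore(initialState) {
    return createStore(combineReducers({ user }), initialState);
}
```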
Let's see how Redux and react-redux unify the state between client and server.
The client entry code that binds the state handed over by the server:
let store = configureStore(window.__REDUX_DATA__);

const renderIndex = () => {
    render((
        <div>
            {/* The Provider wrapper keeps the store at the top of the React tree,
                managing the state of all of its children */}
            <Provider store={store}>
                {routes}
            </Provider>
        </div>
    ), document.getElementById('root'));
};

renderIndex();
// Subscribe the render function to the store: when the state changes, the page re-renders
store.subscribe(renderIndex);
To keep the state in sync on the server side, and to make sure the page reads the latest state when the user refreshes, the server creates its store with the same method the client uses:
server/containers/login.js
// Reuse the client-side method that creates the initial store
import configureStore from './../client/src/store/store.js';

let loginStore = {
    user: {
        // Read the user's login status on the server and save it as the initial state
        logined: ctx.isAuthenticated()
    }
};
const store = configureStore(loginStore);

// The initial state is rendered into the page together with the markup
ctx.body = layout(renderToString(
    <Provider store={store}>
        <RouterContext {...renderProps}/>
    </Provider>
), store.getState());
Finally, the state is written onto a window object so the front-end page can read it directly and render the matching interface, as the code shows:
server/view/layout.js
<script>
    window.__REDUX_DATA__ = ${JSON.stringify(data)};
</script>
6. Server selection – KOA2
Now that we've covered the three kinds of synchronization — what they are, why we do them and how — let's look at the server-side architecture and the components it uses.
For the server framework I chose Koa 2 over Express. The reasons are simple: a lighter architecture, a better middleware mechanism and strong performance. It is written against the ES6 standard, which makes it very friendly to the ES6 features I use.
To start a server, we create an app.js file in the root directory, and then write the corresponding code to create a KOA server
const Koa = require('koa');
const app = new Koa();

// response
app.use(ctx => {
    ctx.body = 'Hello Koa';
});

app.listen(3000);
For the rest of the Koa documentation, refer to the official Chinese docs. The middleware used here is listed below.
router = require('koa-router')(),
An essential Koa middleware: it lets you define back-end routes on the server, handle the different request methods (GET, POST, DELETE, PUT and so on) for each route, and return the response body.
logger = require('koa-logger'),
A logging middleware for Koa that prints requests and error information, mainly used for debugging and for monitoring the state of the server.
bodyParser = require('koa-bodyparser')
This one deserves a mention: when I handled form requests, Koa could not read the form data by itself, so this middleware is needed to parse the request body. Form fields, JSON data or files uploaded via POST are not easy to get at in bare Koa; once koa-bodyparser has parsed them, they are available directly on ctx.request.body.
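A minimal sketch of how these middleware could be wired together in app.js — the route file path follows this project's layout, the rest is illustrative:

```js
const Koa = require('koa');
const logger = require('koa-logger');
const bodyParser = require('koa-bodyparser');
const router = require('./server/route/index.js'); // exports router.routes(), as shown earlier

const app = new Koa();

app.use(logger());       // request logging for debugging and monitoring
app.use(bodyParser());   // parse form/JSON bodies into ctx.request.body
app.use(router);         // mount the back-end routes

app.listen(3000);
```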
For database access we use Sequelize, which can talk to several databases with a uniform API; working with it feels much like working with MongoDB, which is very convenient. For details, see server/models.
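As an illustration of what a model in server/models might look like — the fields and the config key names are assumptions, not the project's actual schema:

```js
const Sequelize = require('sequelize');
// db.json holds the database name, user name and password configured during installation
const config = require('../config/db.json');

// Connect to the MySQL database
const sequelize = new Sequelize(config.database, config.username, config.password, {
    host: config.host,
    dialect: 'mysql'
});

// A hypothetical User model
const User = sequelize.define('user', {
    username: { type: Sequelize.STRING, unique: true },
    password: Sequelize.STRING,
    email: Sequelize.STRING
});

module.exports = { sequelize, User };
```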
User authentication – passport
For authentication we chose Node.js' most common auth library, passport (used here through koa-passport), which also supports OAuth, OAuth2 and OpenID standard logins.
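A minimal sketch of how the local strategy could be set up with koa-passport — the verification logic and field names are placeholders; the real checks (bcrypt, JWT) live in the project's auth code:

```js
const passport = require('koa-passport');
const LocalStrategy = require('passport-local').Strategy;
const User = require('../models').User; // hypothetical model, see the Sequelize sketch above

// Store only the user id in the session cookie
passport.serializeUser((user, done) => done(null, user.id));
passport.deserializeUser(async (id, done) => {
    try {
        done(null, await User.findById(id));
    } catch (err) {
        done(err);
    }
});

passport.use(new LocalStrategy(async (username, password, done) => {
    try {
        const user = await User.findOne({ where: { username } });
        // Placeholder check – the real project compares bcrypt hashes
        if (user && user.password === password) {
            done(null, user);
        } else {
            done(null, false);
        }
    } catch (err) {
        done(err);
    }
}));

// In app.js, after the session middleware:
// app.use(passport.initialize());
// app.use(passport.session());
```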
React related problems
The problem
Error: setState(...) : Can only update a mounted or mounting component. This usually means you called setState() on an unmounted component. This is a no-op. Please check the code for the App component.
Why
A timer or variable was not cleaned up in time, which raises this error and can leak memory.
The solution
Keep the timer on this, then clear it in componentWillUnmount(), as the official timer demo shows:
class Timer extends React.Component {
    constructor(props) {
        super(props);
        this.state = {secondsElapsed: 0};
    }
    tick() {
        this.setState((prevState) => ({
            secondsElapsed: prevState.secondsElapsed + 1
        }));
    }
    componentDidMount() {
        this.interval = setInterval(() => this.tick(), 1000);
    }
    componentWillUnmount() {
        clearInterval(this.interval);
    }
    render() {
        return (
            <div>Seconds Elapsed: {this.state.secondsElapsed}</div>
        );
    }
}
ReactDOM.render(<Timer />, mountNode);
The problem
Chrome reports: ReactDOMComponentTree.js:113 Uncaught TypeError: Cannot read property '__reactInternalInstance$xvrt44g6a8' of null at Object.getClosestInstanceFromNode, and Uncaught RangeError: Maximum call stack size exceeded
Why
Unknown — possibly image reuse or a stack overflow causing the error.
The solution
Change

render((
    <Provider store={store}>
        {routes}
    </Provider>
), document.getElementById('root'));

to

render((
    <div>
        <Provider store={store}>
            {routes}
        </Provider>
    </div>
), document.getElementById('root'));
The problem
How to ignore CSS files on the back end so the Node.js server doesn't throw errors.
Why
The Node server cannot interpret CSS files, so requiring them causes errors.
The solution
Use the asset-require-hook plug-in to filter out the CSS (and Sass) files so Node.js never tries to read them.
The problem
The CSS Modules class names generated on the front end and on the back end differ; after deployment the styles cannot be found and the page flashes.
Why
The css-modules-require-hook component generates the hash from the file path, and because its working directory differs from webpack's, the generated hashes do not match.
The solution
Set rootDir in the css-modules-require-hook options so that the two directories are consistent.
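A minimal sketch of that hook configuration — register it on the server before any component is required; the rootDir value here is an assumption, the point being that it must match webpack's context and that generateScopedName must match the localIdentName used by css-loader, so the hashes come out identical:

```js
const path = require('path');

require('css-modules-require-hook')({
    // Must be identical to css-loader's localIdentName in the webpack config
    generateScopedName: '[name]_[local]_[hash:base64:3]',
    // Must point at the same directory webpack resolves the CSS from,
    // otherwise the generated hashes differ between front and back end
    rootDir: path.join(__dirname, 'client')
});
```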
The problem
When the back end renders with React's renderToString, image paths are turned into hashed names.
Why
When Node.js loads the image files, their names are automatically converted to hashes.
The solution
Use the asset-require-hook plug-in to return the correct image names:
require('asset-require-hook')({
    extensions: [
        'jpg', 'png', 'gif', 'webp'
    ],
    name: '[name].[ext]',
    limit: 2000
});
Back-end permission validation
The problem
When using passport, the cookie is never written, so authentication cannot complete.
Why
The code did not await the authentication call, so the subsequent operations ran before verification had finished.
The solution
Just add await so the asynchronous authentication completes before the successful result is written to the HTTP body, as follows:
await passport.authenticate('local', function (err, user, info, status) { ... })(ctx, next);