Preface
Recently I decided to build my own blog system to record my daily study and improve my writing, which means building my own front-end and back-end projects. I searched online but couldn't find a suitable project, so I decided to build one myself. This article mainly describes how to build a Node API service.
A brief introduction to the technology stack
There are plenty of Node frameworks out there — Egg, Express, Koa, and so on — each with its own strengths and weaknesses. In the end I chose the highly extensible Koa2, and settled on Koa + TypeScript + MySQL + MongoDB as the project's stack.
Why use Node
The main reason is that I don't really know any other back-end languages…
Joking aside, Node.js is a server-side runtime built on the V8 engine, which makes it very fast, and it has other attractions as a back-end language for front-end developers:
- Asynchronous I/O — because Node performs all I/O asynchronously, it handles high-concurrency scenarios well
- Event-driven
- Single-threaded
- Cross-platform
And, most importantly, Node applications are written in JavaScript, which greatly reduces the learning cost for front-end developers.
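To make the asynchronous I/O point concrete, here is a tiny self-contained sketch (no real I/O — `fakeIO` merely simulates a 50 ms request) showing how three waits overlap instead of adding up:

```typescript
// Simulate an I/O-bound request that takes ~50ms
const fakeIO = (id: number): Promise<number> =>
  new Promise((resolve) => setTimeout(() => resolve(id), 50))

async function handleAll(): Promise<number[]> {
  const start = Date.now()
  // The three waits run concurrently: total time is ~50ms, not ~150ms
  const results = await Promise.all([fakeIO(1), fakeIO(2), fakeIO(3)])
  console.log(`handled 3 fake requests in ~${Date.now() - start}ms`)
  return results
}
```

This is the same property that lets a single Node thread juggle many concurrent connections.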
Koa
Koa is a newer framework created by the original Express team. Koa is smaller, more expressive, and more robust than Express — or so the official pitch goes. What really attracted me is that Koa escapes Express-style callback hell by building on async/await. Besides, Koa ships with far less middleware than Express, which is ideal for a personal project. Another highlight is Koa's distinctive middleware flow control, known as the Koa onion model.
The onion model boils down to two things:
- Saving and passing the context
- Middleware management and the implementation of next
(The two diagrams here, taken from the internet, illustrated the onion model's working process.) The underlying implementation details are out of scope for this article, so I won't describe them in depth; if you are interested, you can search for them online.
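To make the onion concrete without pulling in Koa itself, here is a dependency-free sketch (a simplified take on what koa-compose does; all names are illustrative) showing why code after `await next()` runs on the way back out:

```typescript
type Middleware = (ctx: string[], next: () => Promise<void>) => Promise<void>

// Compose middlewares so each one wraps the rest — the onion
function compose(middlewares: Middleware[]) {
  return (ctx: string[]): Promise<void> => {
    const dispatch = (i: number): Promise<void> => {
      if (i >= middlewares.length) return Promise.resolve()
      return middlewares[i](ctx, () => dispatch(i + 1))
    }
    return dispatch(0)
  }
}

const order: string[] = []
const run = compose([
  async (ctx, next) => { ctx.push('A in'); await next(); ctx.push('A out') },
  async (ctx, next) => { ctx.push('B in'); await next(); ctx.push('B out') },
])
// Running this records the order: A in, B in, B out, A out
```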
TypeScript
There are plenty of debates and articles on the web about why you should develop in TypeScript, the pros and cons of doing so, and why you should not. This project uses TS mainly for the following reasons:
- I am still learning TS — as the proverb goes, "what you learn on paper always feels shallow; to truly understand something you must practice it" — and I need more hands-on TS work to deepen my understanding
- It's my own project, so I can use whatever I want
- It feels a bit more high-end
- TS has plenty of things JS lacks, such as generics, interfaces, abstract classes, and so on
- Good module management
- A strongly typed language, which I personally feel suits server-side development better than JS
- A good error-reporting mechanism catches many low-level mistakes at development time
- It disciplines development habits and makes the code more elegant and consistent
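As a tiny illustration of the generics-plus-interfaces point (the names here are made up for the example):

```typescript
interface Entity { id: number }

// A generic helper constrained by an interface: the compiler guarantees
// every element carries a numeric `id` — something plain JS cannot check
function pluckIds<T extends Entity>(items: T[]): number[] {
  return items.map((item) => item.id)
}

const users = [
  { id: 1, nick_name: 'a' },
  { id: 2, nick_name: 'b' },
]
// pluckIds(users) returns the ids; pluckIds([{ name: 'x' }]) fails to compile
```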
Finally, remember: whatever suits you best is the right choice.
MySQL
MySQL is the most popular relational database management system (RDBMS) and one of the best RDBMS choices for web applications.
MongoDB
Why use MongoDB as well as MySQL? The main reason is that JWT is used for authentication: the middleware provides no API for refreshing the expiration time, and I wanted an automatic token-renewal feature, so MongoDB is used to support it. In addition, user identity information and tracking data can be stored in Mongo.
PM2
PM2 is a Node process management tool. It simplifies many Node application management tasks, such as performance monitoring, automatic restarts, load balancing, and so on.
Project structure
I divided the project into these modules: framework, logging, configuration, routing, request handling, and data models.
Here is a directory structure for a project:
```
├─ app                 # compiled project files
├─ node_modules        # dependencies
├─ static              # static resource files
├─ logs                # service logs
├─ src                 # source
│  ├─ abstract         # abstract classes
│  ├─ config           # configuration
│  ├─ controller
│  ├─ database
│  ├─ middleware
│  ├─ models
│  ├─ router
│  ├─ utils
│  └─ app.ts           # Koa2 entry
├─ .eslintrc.js        # ESLint config
├─ .gitignore          # paths ignored by Git
├─ .prettierrc         # code formatting
├─ file.config.js      # PM2 config
├─ nodemon.json        # nodemon config
├─ package.json
├─ tsconfig.json       # TypeScript config
└─ readme.md
```
Without further ado, let's walk through the project with code.

Create a Koa application

As the saying goes, nothing moves without a head — and a project needs its head too: the entry file app.ts. Let's see what this entry does:
```typescript
import Koa, { ParameterizedContext } from 'koa'
import logger from 'koa-logger'
import config from './config'

// Instantiate Koa
const app = new Koa()
app.use(logger())

// Log every request and its response time
app.use(async (ctx, next) => {
  const start = Date.now()
  try {
    await next()
  } finally {
    const ms = Date.now() - start
    console.log(`method: ${ctx.method}, url: ${ctx.url} - ${ms}ms`)
  }
})

// Listen on the port and start
app.listen(config.PORT, () => {
  console.log(`Server running on http://localhost:${config.PORT || 3000}`)
})

// Application-level error handling
app.on('error', (error: Error, ctx: ParameterizedContext) => {
  ctx.body = error
})

export default app
```
At this point, we can start a simple version of the project:

```shell
npm run tsc   # compile the TS files
node app.js   # start the project
```
Next, open http://localhost:3000 in your browser and you can see the access log in the console. Of course, this is not enough — debugging is always part of the development process, so we need a more convenient development environment.
Local development environment
Local development uses nodemon for automatic restarts. Since Node cannot run TS directly, ts-node is required to execute the TS files.
```json
// nodemon.json
{
  "ext": "ts",
  "watch": [                  // files to watch for changes
    "src/**/*.ts",
    "config/**/*.ts",
    "router/**/*.ts",
    "public/**/*",
    "view/**/*"
  ],
  "exec": "ts-node --project tsconfig.json src/app.ts"   // run TS files with ts-node
}

// package.json
"scripts": {
  "start": "cross-env NODE_ENV=development nodemon -x"
}
```
Local debugging
Sometimes we need to inspect a request's data, and sprinkling console.log through the code is inefficient and inconvenient, so we use the editor's debugger instead. Here is a brief description of how to debug with VS Code.
- Enable sourceMap in tsconfig.json
- Register a VS Code debug task for ts-node: modify the project's launch.json and add a new launch configuration
- launch.json

```json
{
  "name": "Current TS File",
  "type": "node",
  "request": "launch",
  "args": ["${workspaceRoot}/src/app.ts"],   // entry file
  "runtimeArgs": ["--nolazy", "-r", "ts-node/register"],
  "sourceMaps": true,
  "cwd": "${workspaceRoot}",
  "protocol": "inspector",
  "console": "integratedTerminal",
  "internalConsoleOptions": "neverOpen"
}
```
- Press F9 to set a breakpoint
- Press F5 to start debugging
Importing interface routes
Now that we have created a Koa application, we need to import the routes:

```typescript
// app.ts
import router from './router'
import requestMiddleware from './middleware/request'

app
  .use(requestMiddleware)      // common request-handling middleware
  .use(router.routes())
  .use(router.allowedMethods())

// router/index.ts
import { ParameterizedContext } from 'koa'
import Router from 'koa-router'

const router = new Router()

// Interface docs and feature modules each register their own sub-router:
// router.use(docRouter.routes())
// ...
// router.use(otherRouter.routes())

// Test the route connection
router.get('/test-connect', async (ctx: ParameterizedContext) => {
  ctx.body = 'Hello Frivolous'
})

// Match any other undefined route
router.get('*', async (ctx: ParameterizedContext) => {
  await ctx.render('error')
})

export default router
```
Defining the database model
- Use Sequelize as the MySQL middleware
```typescript
// Instantiate sequelize (db, user, pwd, host, port, options come from config)
import { Sequelize } from 'sequelize'

const sequelizeManager = new Sequelize(db, user, pwd, Utils.mergeDefaults({
  dialect: 'mysql',
  host: host,
  port: port,
  define: {
    underscored: true,
    charset: 'utf8',
    collate: 'utf8_general_ci',
    freezeTableName: true,
    timestamps: true,
  },
  logging: false,
}, options))

// Define the table structure
import { Model, ModelAttributes, DataTypes } from 'sequelize'

// Field attributes of the user table model
const UserModel: ModelAttributes = {
  id: {
    type: DataTypes.INTEGER,
    allowNull: false,
    primaryKey: true,
    unique: true,
    autoIncrement: true,
    comment: 'id'
  },
  avatar: { type: DataTypes.INTEGER, allowNull: true },
  nick_name: { type: DataTypes.STRING(50), allowNull: true },
  email: { type: DataTypes.STRING(50), allowNull: true },
  mobile: { type: DataTypes.INTEGER, allowNull: false },
  gender: { type: DataTypes.STRING(35), allowNull: true },
  age: { type: DataTypes.INTEGER, allowNull: true },
  password: { type: DataTypes.STRING(255), allowNull: false }
}

// Define the table model
sequelizeManager.define(modelName, UserModel, {
  freezeTableName: true,   // the table name exactly matches the model name
  tableName: modelName,
  timestamps: true,
  underscored: true,
  paranoid: true,
  charset: 'utf8',
  collate: 'utf8_general_ci',
})
```
With the code above we have defined a user table; other tables can be defined the same way. Besides MySQL, this project also uses Mongo — let's see how MongoDB is wired up.
- Use Mongoose as the MongoDB middleware
```typescript
// mongoose entry
import mongoose from 'mongoose'

const uri = `mongodb://${DB.host}:${DB.port}`
mongoose.connect(uri)

mongoose.connection.on('connected', () => {
  log('Mongoose connection success')
})
mongoose.connection.on('error', (err: Error) => {
  log('Mongoose connection error: ' + err.message)
})
mongoose.connection.on('disconnected', () => {
  log('Mongoose connection disconnected')
})

export default mongoose

// Define the schema
import mongoose from '../database/mongoose'

const { Schema } = mongoose
const AccSchema = new Schema({}, {
  strict: false,      // allow undefined fields to be saved
  timestamps: true,   // add createdAt/updatedAt by default
  versionKey: false   // no version key
})

export default AccSchema

// Register the model
mongoose.model('AccLog', AccSchema)
```
Implementing an interface
Ok, now that we’ve defined the table model, it’s time for the exciting interface implementation. We use a simple buried point interface to achieve, first need to analyze the buried point tool implementation logic:
- Because buried information is non-relational, mongodb is used to store buried information
- Because this is a pure recording interface, it needs to be designed to be generic – i.e. the caller saves what he passes except for a few key fields
- The behavior of buried spot is not perceptive to the user, so no feedback information is designed. If the buried spot error is handled internally
Ok, now that you understand the functionality of this buried point, it’s time to implement this simple interface:
```typescript
// route.ts — register the addAccLog route
router.post('/addAccLog', AccLogController.addAccLog)

// AccLogController.ts — implement addAccLog
class AccLogRoute extends RequestControllerAbstract {
  constructor() {
    super()
  }

  public async addAccLog(ctx: ParameterizedContext): Promise<void> {
    try {
      const data = ctx.request.body
      const store = Mongoose.model(tableName, AccSchema, tableName)
      // disposeAccInsertData cleans the log payload: it flattens fields
      // that are nested too deeply and strips empty, redundant ones
      const info = super.disposeAccInsertData(data.logInfo)
      // Insert the log
      const res = await store.create(info)
      // No response body is needed
      // super.handleResponse('success', ctx, res)
    } catch (e) {
      // Error handling — e.g. record the tracking failure itself and
      // inspect what caused it (depends on actual requirements)
    }
  }
  // ...
}

export default new AccLogRoute()
```
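The article doesn't show `disposeAccInsertData` itself; a plausible sketch of its "strip empty, redundant fields" step might look like this (the function name and rules here are assumptions, not the project's actual code):

```typescript
// Drop null/undefined/empty-string fields from a flat log payload
function stripEmptyFields(payload: Record<string, any>): Record<string, any> {
  return Object.fromEntries(
    Object.entries(payload).filter(
      ([, value]) => value !== null && value !== undefined && value !== ''
    )
  )
}
```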
Speaking of which, I have to mention that routing can be written with decorators, which reduces repetitive work and improves efficiency. If you are interested, please read my previous blog post. Here is the decorator version:
```typescript
@Controller('/AccLogController')
class AccLogRoute {
  @post('/addAccLog')
  @RequestBody({})
  async addAccLog(ctx: ParameterizedContext, next: Function) {
    const res = await store.create(info)
    handleResponse('success', ctx, res)
  }
}
```
With this contrast, the benefit of decorators is easy to see.
JWT authentication
Here, the jsonwebtoken package is used for JWT verification:
```typescript
import { ParameterizedContext } from 'koa'
import { sign, decode, verify } from 'jsonwebtoken'
import uuid from 'node-uuid'
import IController from '../interface/controller'
import config from '../config'
import rsaUtil from '../util/rsaUtil'
import cacheUtil from '../util/cacheUtil'

interface ICode {
  success: string
  unknown: string
  error: string
  authorization: string
}

interface IPayload {
  iss: number | string       // user id
  login_id: number | string  // id of the login log
  sub?: string
  aud?: string
  nbf?: string
  jti?: string
  [key: string]: any
}

abstract class AController implements IController {
  // Server response states
  // status code reference: https://www.cnblogs.com/zqsb/p/11212362.html
  static STATE = {
    success: { code: 200, message: 'Operation successful!' },
    unknown: { code: -100, message: 'Unknown error!' },
    error: { code: 400, message: 'Operation failed!' },
    authorization: { code: 401, message: 'Authentication failed!' }
  }

  /**
   * @description build (and optionally send) a response
   * @param {keyof ICode} type
   * @param {ParameterizedContext} [ctx]
   * @param {*} [data]
   * @param {string} [message]
   * @returns {object}
   */
  public handleResponse(
    type: keyof ICode,
    ctx?: ParameterizedContext,
    data?: any,
    message?: string
  ): object {
    const res = AController.STATE[type]
    const result = {
      message: message || res.message,
      code: res.code,
      data: data || null,
    }
    if (ctx) ctx.body = result
    return result
  }

  /**
   * @description sign a token
   * @param {IPayload} payload
   * @returns {string}
   */
  public jwtSign(payload: IPayload): string {
    const { TOKENEXPIRESTIME, JWTSECRET, RSA_PUBLIC_KEY } = config.JWT_CONFIG
    const noncestr = uuid.v1()
    const iss = payload.iss
    // Create the token with JWT
    const token = sign({ ...payload, noncestr }, JWTSECRET, {
      expiresIn: TOKENEXPIRESTIME,
      algorithm: 'HS256'
    })
    // Encrypt the token
    const result = rsaUtil.pubEncrypt(RSA_PUBLIC_KEY, token)
    const isSave = cacheUtil.set(`${iss}`, noncestr, TOKENEXPIRESTIME * 1000)
    if (!isSave) {
      throw new Error('Save authorization noncestr error')
    }
    return `Bearer ${result}`
  }

  /**
   * @description middleware that verifies token validity
   */
  public async verifyAuthMiddleware(ctx: ParameterizedContext, next: Function): Promise<any> {
    // Verify the token
    const { JWTSECRET, RSA_PRIVATE_KEY, IS_AUTH, IS_NONCESTR } = config.JWT_CONFIG
    if (!IS_AUTH && process.env.NODE_ENV === 'development') {
      await next()
    } else {
      // No authorization field in the header: verification fails
      if (!ctx.header || !ctx.header.authorization) {
        ctx.response.status = 401
        return
      }
      // Extract the token and check its pieces for consistency
      const authorization: string = ctx.header.authorization
      const scheme = authorization.substr(0, 6)
      const credentials = authorization.substring(7)
      if (scheme !== 'Bearer') {
        ctx.response.status = 401
        this.handleResponse('authorization', ctx, null, 'Wrong authorization prefix')
        return
      }
      if (!credentials) {
        ctx.response.status = 401
        this.handleResponse('authorization', ctx, null, 'Request header authorization cannot be empty')
        return
      }
      const token = rsaUtil.priDecrypt(RSA_PRIVATE_KEY, credentials)
      if (typeof token === 'object') {
        ctx.response.status = 401
        this.handleResponse('authorization', ctx, null, 'authorization is not an object')
        return
      }
      const isAuth = verify(token, JWTSECRET)
      if (!isAuth) {
        ctx.response.status = 401
        this.handleResponse('authorization', ctx, null, 'authorization token expired')
        return
      }
      const decoded: string | { [key: string]: any } | null = decode(token)
      if (typeof decoded !== 'object' || !decoded) {
        ctx.response.status = 401
        this.handleResponse('authorization', ctx, null, 'authorization parsing failed')
        return
      }
      const noncestr = decoded.noncestr
      const exp = decoded.exp
      const iss = decoded.iss
      const cacheNoncestr = cacheUtil.get(`${iss}`)
      if (IS_NONCESTR && noncestr !== cacheNoncestr) {
        ctx.response.status = 401
        this.handleResponse('authorization', ctx, null, 'authorization signature "noncestr" error')
        return
      }
      // Renew the token when it is within 60s of expiring
      if (exp - Date.now() / 1000 < 60) {
        const options = { ...decoded }
        Reflect.deleteProperty(options, 'exp')
        Reflect.deleteProperty(options, 'iat')
        Reflect.deleteProperty(options, 'nbf')
        const newToken = AController.prototype.jwtSign(options as IPayload)
        ctx.append('token', newToken)
      }
      ctx.jwtData = decoded
      await next()
    }
  }
}

export default AController
```
```typescript
// The authorization decorator
public auth() {
  return (target: any, name?: string, descriptor?: IDescriptor) => {
    // Applied as a class decorator
    if (typeof target === 'function' && name === undefined && descriptor === undefined) {
      target.prototype.baseAuthMidws = super.verifyAuthMiddleware
    }
    // Applied as a method decorator
    else if (typeof target === 'object' && name && descriptor) {
      descriptor.value.prototype.baseAuthMidws = super.verifyAuthMiddleware
    }
  }
}
```
With that, we have a JWT authorization module, and it is very simple to use — take the addAccLog interface as an example:
```typescript
class AccLogRoute {
  @auth()   // just one line added ➕
  @post('/addAccLog')
  // ...
}
```
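For intuition about what `decode` in the middleware above is doing: a JWT is just three base64url segments, with the JSON payload in the middle. Here is a dependency-free sketch (the token is hand-built and unsigned — signature verification is what `verify` adds on top):

```typescript
// Decode the payload segment of a JWT without any library
function decodePayload(token: string): Record<string, any> {
  const payloadSegment = token.split('.')[1]
  const json = Buffer.from(payloadSegment, 'base64url').toString('utf8')
  return JSON.parse(json)
}

// Hand-build a fake token just to demonstrate the structure
const payload = { iss: 42, exp: 1700000000 }
const fakeToken = [
  Buffer.from(JSON.stringify({ alg: 'HS256', typ: 'JWT' })).toString('base64url'),
  Buffer.from(JSON.stringify(payload)).toString('base64url'),
  'signature-goes-here',
].join('.')
```

This also shows why a JWT payload must never hold secrets: anyone can read it without the key.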
Interface documentation
Now that the interfaces are written, there should be documentation to refer to. Swagger comes to mind, so let's introduce Swagger into the project.
- The entry

```typescript
// swagger entry
import swaggerJSDoc from 'swagger-jsdoc'
import config from '../config'

const { OPEN_API_DOC } = config

// swagger definition
const swaggerDefinition = {
  // ...
}

const createDOC = (): object => {
  const options = {
    swaggerDefinition: swaggerDefinition,
    apis: ['./src/controller/*.ts']
  }
  return OPEN_API_DOC ? swaggerJSDoc(options) : null
}

export default createDOC
```
- Configuration examples — be sure to pay close attention to indentation here

```yaml
# @swagger
definitions:
  Login:              # interface name
    required: []      # required parameters
    properties:       # optional parameters
      username:
        type: string
      password:
        type: string
      path:
        type: string
```
- Swagger's official configuration tool
- The FACILITY plug-in is recommended for quickly generating annotations
Mock data

Use mocks to generate test data.
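The article doesn't show its mock setup; as a minimal sketch, test records can even be generated by hand without a library (field names follow the user table defined earlier):

```typescript
interface MockUser { id: number; nick_name: string; age: number }

// Generate `count` deterministic fake users for tests
function mockUsers(count: number): MockUser[] {
  return Array.from({ length: count }, (_, i) => ({
    id: i + 1,
    nick_name: `user_${i + 1}`,
    age: 18 + (i % 30),
  }))
}
```

In practice a library such as Mock.js or faker gives richer random data.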
Logging

The log module was originally going to be built with log4js, but the result did not meet expectations, so for now PM2's log system is used instead of log4js. I won't post the log4js code here.
Deployment

PM2 is used to deploy the project. Here is the configuration file.
Tips

- error_file: where error logs are written
- out_file: where normal logs are written
- script: the entry file — use the compiled JS file as the entry
```json
// pm2.json
{
  "apps": [{
    "name": "xxx",
    "script": "./app/server.js",
    "cwd": "./",
    "args": "",
    "interpreter_args": "",
    "watch": true,
    "ignore_watch": ["node_modules", "logs", "app/lib"],
    "exec_mode": "fork_mode",
    "instances": 1,
    "max_memory_restart": 8,
    "error_file": "./logs/pm2-err.log",
    "out_file": "./logs/pm2-out.log",
    "merge_logs": true,
    "log_date_format": "YYYY-MM-DD HH:mm:ss",
    "max_restarts": 30,
    "autorestart": true,
    "cron_restart": "",
    "restart_delay": 60,
    "env": {
      "NODE_ENV": "production"
    }
  }]
}

// package.json
"scripts": {
  "prod": "pm2 start pm2.json"   // production
}
```
Run npm run prod and PM2 will start the process according to pm2.json.
Conclusion
Although this is a simple interface server, there was a lot to consider, and because many of the plugins were new to me, the implementation was quite bumpy — very much "crossing the river by feeling for the stones". Despite the difficulties, I learned a lot of new things along the way, got a rough sense of the weight even a simple back-end service carries, and came to understand how the back end operates in a small project. Let me end with a lament: once you step into front-end, the sea is deep. Don't chase the big shots — you can't keep up…
As for open-sourcing the project: the whole thing is still under development and has many imperfections, so once the whole system is complete, I plan to open-source the full series of front-end + back-end + other code as soon as possible.