Preface

This blog post is a further extension and refinement of multi-spa-webpack-cli:

  1. Integrate mongoose.
  2. Integrate a Docker development environment.

multi-spa-webpack-cli has been published to npm and can be installed in a Node.js environment.

npm install multi-spa-webpack-cli -g

The steps are as follows:

#1. Initialize the project

multi-spa-webpack-cli init spa-project

#2. Enter the project directory

cd spa-project

#Without Docker
#3. Bundle the shared parts (webpack DLL)

npm run build:dll

#4. Start the project (open localhost:8090 in the browser manually)

npm start

#5. Start MongoDB

mongod

#6. Start the service

cd server
npm install
npm start

#With Docker (requires a Docker installation)
#3. Start the project (open localhost:8090 in the browser manually)

npm run docker:dev

As the steps above show, with Docker only three steps are needed to start the project.

mongoose

Mongoose is an object modeling tool that makes it convenient to work with MongoDB in a Node.js environment.

Before you start, install MongoDB. Installation can be tricky, especially on a corporate computer (no one knows what is already configured on it). For the installation process, refer to the official website: Install MongoDB.

It also helps to know a few concepts about MongoDB first.

SQL term/concept     MongoDB term/concept     Explanation
database             database                 database
table                collection               database table / collection
row                  document                 data record / document
column               field                    data field
index                index                    index
table joins          (none)                   MongoDB does not support table joins
primary key          primary key              MongoDB automatically sets the _id field as the primary key

Database services and clients:

SQL                  MongoDB
mysqld / oracle      mongod
mysql / sqlplus      mongo

Using Mongoose is simple: define a Schema, compile it into a Model, operate on the Model, and generate instances (documents).

/* model.js */
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

// Define the Schema
const UserSchema = new Schema({
  username: {
    type: String,
    unique: true,
    required: true
  },
  password: {
    type: String,
    required: true
  }
});

// Compile the Schema into a Model
const UserModel = mongoose.model('User', UserSchema);

module.exports = UserModel;

/* user.js */
// Operate on the Model
let user = await UserModel.findOne({ username });
if (!user) {
  try {
    // Generate an instance (document) and save it
    await new UserModel({
      username,
      password
    }).save();
    ctx.body = {
      "success": true,
      "message": "Registration successful"
    };
  } catch (error) {
    ctx.body = {
      "success": false,
      "message": "Registration failed"
    };
  }
} else {
  ctx.body = {
    "success": false,
    "message": "Username already exists"
  };
}

Docker

The steps above show that starting the project without Docker is cumbersome, and that setting up the MongoDB environment locally is prone to interference.

The following uses Docker to build the development environment and improve the development experience.

Before using Docker, a few concepts are worth introducing.

  • Image: an image is a virtual concept; its actual embodiment is not a single file, but a set of file systems, or a combination of multiple file systems.

An image is built layer by layer, each layer on top of the previous one. Once a layer is built, it never changes again; any change in the next layer happens only in that layer. For example, deleting a file from a previous layer does not actually delete it; it only marks the file as deleted in the current layer. The file is not visible when the final container runs, but it still ships with the image. Therefore, extra care should be taken when building an image: each layer should contain only what needs to be added at that layer, and anything extra should be cleaned up before the layer is finished.

So, in a production deployment, make sure each layer is clean and eliminate unnecessary files, such as files needed only at build time (for example, node_modules). This also avoids unnecessary bloat in the image.
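
For example, cleanup is only effective when it happens in the same layer as the installation. A hypothetical Dockerfile fragment (not taken from the project's actual files):

# Hypothetical fragment: the cleanup runs inside the same RUN instruction,
# i.e. the same layer. If rm -rf ran in a later RUN, the deleted files
# would still ship with the image in the lower layer.
RUN npm install \
    && npm run build \
    && rm -rf node_modules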

  • Containers: a container is essentially a process, but unlike processes that execute directly on the host, container processes run in their own separate namespaces.

The processes inside a container run in an isolated environment and behave as if they were operating on a system separate from the host. This property makes applications wrapped in containers more secure than those running directly on the host.

Each container runs on top of an image, with a storage layer for that container created above it. This storage layer, prepared for the container's runtime reads and writes, can be called the container storage layer.

The container storage layer has the same lifecycle as the container: when the container dies, the container storage layer dies with it. Therefore, any information stored in the container storage layer is lost when the container is deleted.

As per Docker best practices, containers should not write any data to their storage layer; the container storage layer should remain stateless. All file writes should use data volumes or bind-mounted host directories. Reads and writes in these locations skip the container storage layer and go directly to the host (or network storage), giving higher performance and stability.

A development environment changes files frequently, so here data volumes are used to bind host directories.
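
With the plain Docker CLI, such a bind mount looks like this (the paths and image name are illustrative; the project below declares the same thing in docker-compose):

docker run -v "$(pwd)/src:/app/client/src" -p 8090:8090 client-image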

  • Context: The directory of files passed to the Docker engine.

At runtime, Docker is divided into the Docker engine (the server-side daemon) and the client tools. When an image is built, the context is copied to the Docker engine; the client then issues instructions, and the instructions execute inside the engine. The scope of the context should therefore be reasonable: if it is too large, copying the files to the Docker engine takes a long time; if it is too small, files outside of it cannot be used in the build.
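
A common way to keep the context small is a .dockerignore file; a hypothetical example for a project of this shape:

# .dockerignore - hypothetical; excludes files the build does not need
node_modules
dist
.git
data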

Deploying the development environment with Docker

Deploying the development environment is simple: just configure a Dockerfile for each service and a docker-compose file to tie them together.

docker-compose is configured in the YAML language (docker-compose.yml):

version: '3.6'

services:
  client:
    container_name: "client"
    build:
      context: ../
      dockerfile: Dockerfile.client.dev
    volumes:
      - ../src:/app/client/src
    ports:
      - "8090:8090"
    depends_on:
      - server

  server:
    container_name: "server"
    build:
      context: ../server
      dockerfile: Dockerfile.server.dev
    volumes:
      - ../server:/app/server
    ports:
      - "8080:8080"
    depends_on:
      - database


  database:
    container_name: mongo
    image: mongo
    volumes:
      - ../data:/data/db
    ports:
      - "27017:27017"
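
The compose file references Dockerfile.client.dev and Dockerfile.server.dev, which are not shown here. As an illustration only, a minimal sketch of what the client one might look like (the base image and commands are assumptions, not the generated file):

# Dockerfile.client.dev - hypothetical sketch
FROM node:12
WORKDIR /app/client
# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install
# Copy the rest of the context; src/ is bind-mounted over it at runtime
COPY . .
EXPOSE 8090
CMD ["npm", "start"]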

What a development environment needs is real-time feedback, both on the front-end page and from the back-end service. As mentioned above, the context has already been copied into the image, so how can the front-end project achieve hot replacement inside the container? It is actually very simple: the volumes configuration bind-mounts the source directory into the container. The same applies to the back end, with the help of the nodemon tool.
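
For reference, running the server under nodemon means changes in the bind-mounted ../server directory restart the Node process automatically. A hypothetical excerpt (the generated project's actual scripts may differ):

/* package.json (server) - hypothetical excerpt */
"scripts": {
  "start": "nodemon app.js"
}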

There is one more deployment problem: localhost is not usable inside the containers and needs to be replaced with an address that works there, as the two snippets below show.

// Front-end project
/* webpack.dev.js */
  devServer: {
    publicPath: '/',
    contentBase: path.resolve(__dirname, '..', 'dist'),
    port: APP_CONFIG.port,
    host: '0.0.0.0', // Must be specified so the dev server is reachable from outside the container
    hot: true,
    historyApiFallback: {
      index: '/'
    }
  }

// Back-end project
/* config.js */
module.exports = {
  'database': 'mongodb://database:27017/yexiaochen' // Matches the database service name in docker-compose
}