This article was written by mRc, a member of the Tuture community. You are welcome to join the Tuture community and create great free technical tutorials together to help the programming industry grow.

If you think we did a good job, remember to like, follow, and comment to encourage us to write even better tutorials 💪

This is the final article in the series, in which we deploy the application. This tutorial will first use Docker to containerize your application, then teach you how to configure MongoDB's authentication mechanism to add a layer of security to your database, and finally walk you through using Alibaba Cloud's container registry service to deploy the whole full-stack application to the cloud, making your site accessible to users on the Internet. Hopefully this tutorial will solve some of the problems that have plagued your cloud deployments for years!

Welcome to the Series from Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express:

  • From Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express (Part 1)
  • From Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express (Part 2)
  • From Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express (Part 3)
  • From Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express (Part 4)
  • From Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express (Part 5)
  • From Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express (Part 6)
  • From Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express (Part 7)
  • From Zero to Deployment: Implementing Mini Full Stack E-commerce Applications with Vue and Express (Final)

Apply containerization and Docker Compose configurations

First, if you have been following the previous seven tutorials, place the entire Vue front-end project in a newly created client directory and the entire Express back-end project in a newly created server directory. If you want to start from this article, you can download the code directly from us:

git clone -b deploy-start https://github.com/tuture-dev/vue-online-shop-frontend.git

The source code for this article is available on GitHub. If you think it is well written, please give this article a like ❤️ and the GitHub repository a star ❤️

Before we begin the containerization of the entire full-stack application, let’s take a look at the following diagram:

As you can see, we will use three containers:

  • The nginx container runs the Nginx server (which serves the front-end static pages built with the Vue framework)
  • The api container runs the API server that we implemented with the Express framework
  • The db container runs the MongoDB database

We will reverse-proxy the entire application through Nginx. In other words, every user request to our application must go through Nginx first. Nginx directly serves all requests for front-end resources (static files such as HTML, CSS, and JS), while all requests to API endpoints (for example, /api/v1/products) are forwarded to the API server, which then returns JSON data to the user.

This classic architecture has the following advantages:

  • Nginx can perform access control to filter out illegal requests
  • The cross-origin problem between front end and back end disappears, because both the front-end pages and the back-end API are accessed through the same endpoint
  • The application architecture is transparent to users, easy to configure and scale, and Nginx provides built-in load balancing
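As a sketch of the load-balancing point above — purely illustrative, not part of this tutorial's actual configuration — Nginx could distribute API traffic across several replicas with an upstream block (the api-1/api-2 hostnames are hypothetical):

```nginx
# Hypothetical: two replicas of the api service behind one upstream group
upstream api_servers {
    least_conn;          # send each request to the replica with the fewest connections
    server api-1:3000;
    server api-2:3000;
}

server {
    listen 80;
    location /api/v1 {
        proxy_pass http://api_servers;
    }
}
```

With a setup like this, scaling the back end is just a matter of adding more server lines (or more service replicas), without any change visible to users.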

Containerization of front-end applications

First, let's containerize the front-end application that we previously built with Vue. Go to the client directory and build the Vue project into static pages:

npm run build
# or: yarn build

Then add the client/config/nginx.conf configuration file as follows:

server {
    listen 80;
    root /www;
    index index.html;
    sendfile on;
    sendfile_max_chunk 1M;
    tcp_nopush on;
    gzip_static on;

    location /api/v1 {
      proxy_pass http://api:3000;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}

There are two location rules to focus on:

  • Requests to /api/v1 are passed on to the api container
  • Requests to / are served directly from the front-end static pages (index.html)

Then we need to make a small change to how the front end calls the back end. Open the client/src/store/actions.js file and modify the API_BASE constant as follows:

// ...
import { Message } from 'element-ui';

const API_BASE = '/api/v1';

export const productActions = {
  // ...
};

export const manufacturerActions = {
  // ...
}

With this modification, the API that the front end actually calls depends on the current URL of the page, rather than the hard-coded localhost:3000.
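To see why a relative API_BASE works, here is a small sketch (using Node's standard URL class; the origins below are placeholders) of how a relative path resolves against whatever origin the page is served from:

```javascript
// A relative API base resolves against the page's current origin,
// so the same front-end build works on any host and port.
const API_BASE = '/api/v1';

function resolveApi(pageOrigin, endpoint) {
  // WHATWG URL resolution — the same rules a browser applies to fetch('/api/...')
  return new URL(`${API_BASE}${endpoint}`, pageOrigin).href;
}

console.log(resolveApi('http://localhost:8080', '/products'));
// -> http://localhost:8080/api/v1/products
console.log(resolveApi('http://192.168.1.1:8080', '/products'));
// -> http://192.168.1.1:8080/api/v1/products
```

Whatever host the user visits — localhost, a LAN IP, or a domain — the requests land on the same origin and are routed by Nginx.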

After the preparation, we will begin containerization in earnest.

Tip

If you are not familiar with the core concepts of Docker, we recommend reading "It's Time for a Cup of Tea, Get Started with Docker" from our Tuture community. It will help you quickly master the two important concepts of image and container, and guide you through containerizing your first application.

Create a Dockerfile in the client directory as follows:

FROM nginx:1.13

# Delete the default Nginx configuration
RUN rm /etc/nginx/conf.d/default.conf

# Add our custom Nginx configuration
COPY config/nginx.conf /etc/nginx/conf.d/

# Copy the front-end static files to the /www directory of the container
COPY dist /www
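As an optional variant — a sketch, not the approach this tutorial follows — a multi-stage Dockerfile could run the Vue build inside Docker itself, so you would not need to run npm run build on the host first:

```dockerfile
# Stage 1 (hypothetical): build the Vue app inside a Node image
FROM node:10 AS builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the built files with Nginx, as in the Dockerfile above
FROM nginx:1.13
RUN rm /etc/nginx/conf.d/default.conf
COPY config/nginx.conf /etc/nginx/conf.d/
COPY --from=builder /app/dist /www
```

The final image contains only the Nginx stage, so it stays just as small while making the build reproducible on any machine with Docker.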

Create the client/.dockerignore file and make sure to ignore node_modules when building the image:

node_modules

Containerization of back-end applications

With the front-end application containerized, it's time to containerize the back-end application. The first step is to change the hard-coded MongoDB connection string so that it can be injected via an environment variable. Modify the server/app.js file as follows:

// ...
// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');

// Database connection here
mongoose.connect(process.env.MONGO_URI || `mongodb://localhost:27017/test`);

// ...

Create server/Dockerfile as follows:

FROM node:10

# Create and use the working directory /usr/src/app
WORKDIR /usr/src/app

# Copy package.json into the working directory
COPY package.json .

# Install NPM dependencies
RUN npm config set registry https://registry.npm.taobao.org && npm install

# Copy the source code
COPY . .

# Set environment variables
ENV NODE_ENV=production
ENV MONGO_URI=mongodb://db:27017/test
ENV HOST=0.0.0.0
ENV PORT=3000

# Expose port 3000
EXPOSE 3000

# Set the image's startup command
CMD [ "node", "./bin/www" ]

As with the front end, create the server/.dockerignore file to make sure node_modules is not packaged into the image:

node_modules

Docker Compose configuration

Docker Compose is a powerful multi-container management tool that lets you configure all containers (services) in a YAML file and start them with a single command. Create docker-compose.yml in the project root directory as follows:

version: '3'

services: 
  db:
    image: mongo
    restart: always
  api:
    build: server
    restart: always
  nginx:
    build: client
    restart: always
    ports:
      - "8080:80"

As you can see, we created one service for each of our three containers (db, api, and nginx):

  • The db service uses the mongo image and sets restart: always to make sure the container automatically restarts if it stops for any reason
  • The api service builds its image from the server directory, also with automatic restart
  • The nginx service builds its image from the client directory, with the port mapping rule 8080:80

Note

When defining a service, if the image field is used, the image is pulled directly from an image registry, as with our db service. If the build field is used instead, the image is built from the Dockerfile in the specified directory; here the server and client directories are used to build the images for api and nginx respectively.

Tip

By default, Docker Compose creates a Docker network for all the services, allowing containers to reach each other through service discovery (rather than fixed IPs). This is why we can specify http://api:3000 directly in the Nginx configuration and set the MongoDB connection string to mongodb://db:27017/test. For a better understanding of Docker networking, check out our previous article, "Dreams Can Connect: Connecting Containers with Networks".

With everything in place, we can build and run the entire application with a single command in the project root directory:

docker-compose up --build

The initial build may take quite a while (mainly pulling the base images), so order yourself a cup of coffee ☕️ while you wait. Once the console output shows that all images were built successfully, the build is done.

Each container then outputs its own log messages. We can run

docker ps

to further confirm the status of the three containers:

OK, we can now access our site at localhost:8080!

In addition, we can also access the site from the local network (for example, from other devices on the same Wi-Fi). Find your computer's LAN IP address (say, 192.168.1.1), then enter it in your phone's browser: the site is reachable at 192.168.1.1:8080. If you don't know how to find your LAN IP address, a quick web search will tell you.

Summary

In this section, we learned:

  • Served the front-end static pages through the Nginx container and forwarded back-end requests to the api container
  • Containerized the back-end application and connected it to the database
  • Built and launched the whole application with a single Docker Compose command

Configure MongoDB authentication

In the previous deployment configuration, there was a major security flaw: our MongoDB database was not configured with any authentication, which meant that any client able to reach the database could make arbitrary changes to it! Next, we will set up MongoDB authentication to safeguard our data.

Modify the MongoDB connection settings

First, we modify the MongoDB connection settings in server/app.js as follows:

// ...

// Database connection here
mongoose.connect(process.env.MONGO_URI || `mongodb://localhost:27017/test`, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  user: process.env.MONGO_USER,
  pass: process.env.MONGO_PASSWORD,
});

// ...

The meanings of the four options are as follows:

  • useNewUrlParser: Uses the new MongoDB driver URL resolver
  • useUnifiedTopology: The new connection management engine greatly improves connection stability and supports reconnection
  • user: Connection user name, injected through environment variables
  • pass: Connection password, injected through environment variables
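Equivalently, MongoDB also accepts credentials embedded directly in the connection string (user:pass@host). As a hedged sketch — the helper below is illustrative, not part of the project's code — note that special characters in credentials must be percent-encoded:

```javascript
// Build a MongoDB URI with embedded credentials.
// Illustrative helper only; the tutorial itself passes user/pass
// as separate mongoose options instead.
function buildMongoUri(user, pass, host, db) {
  // encodeURIComponent guards against characters like '@' or ':' in passwords
  return `mongodb://${encodeURIComponent(user)}:${encodeURIComponent(pass)}@${host}/${db}`;
}

console.log(buildMongoUri('mongoadmin', 'secret', 'db:27017', 'admin'));
// -> mongodb://mongoadmin:secret@db:27017/admin
```

Keeping user and pass as separate options, as we did above, avoids the encoding pitfall and keeps secrets out of the URI.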

Inject environment variables into Dockerfile

Then add these environment variables to the server/Dockerfile:

# ...

# Set environment variables
ENV NODE_ENV=production
ENV MONGO_URI=mongodb://db:27017/admin
ENV MONGO_USER=mongoadmin
ENV MONGO_PASSWORD=secret
ENV HOST=0.0.0.0
ENV PORT=3000

# ...

Note that we adjusted MONGO_URI so that the database is admin (which MongoDB creates by default) instead of test, in order to use admin as the authentication database.

Configure the initial password in Docker Compose

Then add the corresponding environment variables to the db service in docker-compose.yml as follows:

# ...
  db:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongoadmin
      MONGO_INITDB_ROOT_PASSWORD: secret
  api:
    build: server
    restart: always
  # ...

Delete the container group completely:

docker-compose down --volumes

The down command is the opposite of up: it removes the containers and networks that up created. The --volumes flag additionally deletes the data volumes created by the MongoDB container.

Note

If you don't delete the previous MongoDB container's volume, the new MongoDB container with authentication enabled will reuse that volume and skip the user initialization step (a pitfall that cost me nearly two hours). If you're worried about deleting the wrong data volume, you can run docker volume prune instead, which removes only unused volumes.

Then rebuild and start the container group:

docker-compose up --build

At this point, visit our application again (localhost:8080) and check that everything still works. Now we can finally breathe a sigh of relief — our database is no longer running without any protection!

Summary

In this section, we walked through how to configure authentication for the MongoDB container. To be fair, our approach is fairly primitive, with secrets written directly in code files. Large container orchestration systems such as Kubernetes and Docker Swarm integrate sophisticated, enterprise-grade secret management solutions. Given the introductory nature of this series, we'll leave it at that.

We also did not cover the details of backing up and restoring a MongoDB database. If you want to learn more, you can read our earlier article "Refuse to Delete the Database and Run! Get a Handle on Docker Container Data Management".

Use an image registry service

At this point, we are practically ready to deploy the application. Connect to the remote host via SSH (or another method), then run the following commands:

# Clone the repository
git clone https://github.com/tuture-dev/vue-online-shop-frontend.git
cd vue-online-shop-frontend

# Build the front-end code
cd client && npm install && npm run build && cd ..

# Start all containers with Docker Compose, running them in detached (daemon) mode
docker-compose up -d --build

Now you can access the site through the remote host's IP address (or domain name) plus the port number (8080 in this case; of course you can change the port configuration of the nginx service in docker-compose.yml). For example, if our remote host's IP is 1.2.3.4, the site is available at 1.2.3.4:8080.

In fact, there is a more efficient way to distribute and deploy images — a cloud image registry service.

Docker Hub and image naming rules

Docker, the company, runs an image registry called Docker Hub, which provides a rich set of officially maintained images as well as storage and distribution for custom images. The images we have used (mongo, nginx, node, and so on) are all official images from Docker Hub (possibly served through a proxy accelerator).

An image name follows this naming rule:

<registry_name>/<username>/<image_name>

Among them:

  • registry_name is the hostname of the image registry; if omitted, it defaults to Docker Hub
  • username is the registry account name; if it is omitted together with registry_name, the image is an official Docker image
  • image_name is the name of the image itself
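As an illustrative sketch of this rule — a toy parser, not Docker's actual reference-normalization logic — splitting a name on / recovers the three parts (docker.io and library are Docker's real defaults for the omitted fields):

```javascript
// Toy parser for <registry_name>/<username>/<image_name>.
// Not Docker's real normalization code, just an illustration of the rule.
function parseImageName(name) {
  const parts = name.split('/');
  if (parts.length === 3) {
    const [registry, username, image] = parts;
    return { registry, username, image };
  }
  if (parts.length === 2) {
    // registry omitted -> defaults to Docker Hub
    return { registry: 'docker.io', username: parts[0], image: parts[1] };
  }
  // registry and username omitted -> an official Docker image
  return { registry: 'docker.io', username: 'library', image: parts[0] };
}

console.log(parseImageName('mongo'));
// -> { registry: 'docker.io', username: 'library', image: 'mongo' }
console.log(parseImageName('registry.cn-shanghai.aliyuncs.com/vue-online-shop/api'));
// -> { registry: 'registry.cn-shanghai.aliyuncs.com', username: 'vue-online-shop', image: 'api' }
```

This is why a bare name like mongo pulls the official image, while the fully qualified names we create below point at our own cloud registry.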

Docker Hub may be the official registry, but it has the following problems:

  1. Free accounts get only one private image repository
  2. Upload and pull speeds are unstable within China
  3. There is no image security scanning

The Alibaba Cloud image registry service we are about to try solves all of the above problems nicely.

Try Alibaba's cloud image registry service

For some reason, the publishing platform does not allow cloud vendors to be named directly, so "Alibaba's cloud" is used below.

First, visit the official website of the image registry service, find "Container Registry" in the product list, and click to activate it. Then enter the console and create an image namespace, as shown below:

You can choose any name; here we used vue-online-shop. After creation it looks like the following:

With the namespace created, we can create one image repository for each of our application's images (except the MongoDB database image, which we use as-is). Click the "Create Repository" button, as shown below:

Step 1: fill in the image repository information:

Step 2: select the code source; here we choose "Local Repository":

After creating the two image repositories (api and nginx), you will see the following image list:

OK, then click the "Manage" button on a repository and follow the instructions to push your images. Here is sample code (follow the instructions shown in your own console):

# Log in to the Alibaba Cloud registry; replace aliyunUser with your own account name
docker login --username=aliyunUser registry.cn-shanghai.aliyuncs.com

# Build and push the api image
docker build -t registry.cn-shanghai.aliyuncs.com/vue-online-shop/api server
docker push registry.cn-shanghai.aliyuncs.com/vue-online-shop/api

# Build and push the nginx image
docker build -t registry.cn-shanghai.aliyuncs.com/vue-online-shop/nginx client
docker push registry.cn-shanghai.aliyuncs.com/vue-online-shop/nginx

Tip

In a real deployment, it is recommended to tag each image with the current Git commit hash, for example:

docker build -t registry.cn-shanghai.aliyuncs.com/vue-online-shop/api:9ca500a server
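A small sketch of how that tag could be derived automatically (a hedged example — it assumes you run it inside the project's Git repository; the "dev" fallback is just a safety net outside one):

```shell
# Derive an image tag from the current Git commit hash
# (falls back to "dev" when not inside a Git repository).
TAG=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
IMAGE="registry.cn-shanghai.aliyuncs.com/vue-online-shop/api:${TAG}"
echo "$IMAGE"

# Then build and push with that exact tag:
#   docker build -t "$IMAGE" server
#   docker push "$IMAGE"
```

Tagging by commit hash lets you roll back to any previously deployed version just by changing the tag in docker-compose.yml.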

After the images are pushed, we change the api and nginx services in docker-compose.yml to use the cloud images:

# ...
      MONGO_INITDB_ROOT_USERNAME: mongoadmin
      MONGO_INITDB_ROOT_PASSWORD: secret
  api:
    image: registry.cn-shanghai.aliyuncs.com/vue-online-shop/api
    restart: always
  nginx:
    image: registry.cn-shanghai.aliyuncs.com/vue-online-shop/nginx
    restart: always
    ports:
      - "8080:80"

Once that is done, we only need to put the docker-compose.yml file on the remote host and start the container group in that directory:

# Pull the latest version of all images
docker-compose pull

# Start all containers
docker-compose up -d

Summary

In this step, we:

  • Learned how to pull code with Git and deploy it on a remote host
  • Understood Docker Hub and the image naming rules, and looked at Docker Hub's shortcomings
  • Walked step by step through Alibaba's cloud image registry service to easily distribute and deploy images

After eight full tutorials, our mini full-stack e-commerce series is coming to an end. Since its release on December 21, 2019, it has run for 86 days and has been warmly received by readers. Some asked the Tuture community to publish faster so they could finish the whole series before an interview; others joined the Tuture community study group to discuss the technology... We hope this series has been both enjoyable and practical for you.

And with that, our mini full-stack e-commerce series comes to a close 🎉🎉. Thank you for sticking with us all the way — we love you, learners 😘! See you in the next exciting article 👋~

Want to learn more practical hands-on tutorials? Come visit the Tuture community.
