preface

It has been more than half a year since I last wrote an article on the Nuggets (mainly because my skills are limited and I didn't know what to write about).

But with the New Year here, I wanted to write up some kind of summary.

I've been using Docker a lot lately, so I want to share what I've learned. I don't know whether it counts as best practice, so feel free to point out anything wrong.

1. Docker

Plenty of articles on the Nuggets platform already explain what Docker is, probably better than I could. So this article assumes you already have some basic knowledge of Docker.

This article focuses on hands-on practice; for the underlying theory, you can search other articles or read the official documentation.

1.1 installation

For Windows or macOS, you can download Docker Desktop from hub.docker.com/?overlay=on…

The installation itself is just clicking through the installer, so I won't cover it here.

Note: on Windows, installation requires the Enterprise edition; the Home edition cannot install it (or at least not easily)

1.2 docker-compose

docker-compose is installed automatically along with Docker Desktop.

You can check the installed versions yourself from the command line:

docker-compose -v
docker -v

2. Preparing the project

2.1 Initializing a node project

  1. Initialize the package.json
npm init -y
  2. Set up an HTTP server; either express or koa will do, pick whichever you like
npm install express
  3. Write the code
let express = require('express')
let os = require('os')
let app = express()
// Obtain the local IP address
function getLocalIpAddress () {
    let ip = ''
    let netInfo = os.networkInterfaces()
    let osType = os.type()
    if (osType === 'Windows_NT') { 
        for (const dev in netInfo) {
            // On Windows 7 the interface is named 'Local connection'; on Windows 10 it is 'Ethernet'
            if (dev === 'Local connection' || dev === 'Ethernet') {
                for (let j = 0; j < netInfo[dev].length; j++) {
                    if (netInfo[dev][j].family === 'IPv4') {
                        ip = netInfo[dev][j].address;
                        break;
                    }
                }
            }
        }
    } else if (osType === 'Linux') {
        ip = netInfo.eth0[0].address;
    }
    return ip
}
app.get('/getJson', (request, response) => {
    response.send({
        title: 'Hello Express、Hello Docker',
        ip: getLocalIpAddress(),
        env: process.env.NODE_ENV
    })
})
// Listen on port 3000
app.listen(3000, () => { console.log('server is started') })
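
The Dockerfile below ends with CMD npm run start, so package.json needs a matching start script. A minimal sketch, assuming the server code above is saved as index.js:

"scripts": {
    "start": "node index.js"
}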
  4. Prepare a Dockerfile
  • In Docker, the relationship between an image and a container is like the relationship between a class and an instance
  • An image is generated from a Dockerfile, and containers are created from images

Dockerfile is used to generate the image

# specify a base image
FROM node:latest
# Working directory
WORKDIR /www/node-server/  
# copy package.json to your working directory
COPY package.json /www/node-server/package.json
# install dependencies
RUN npm install
# Copy files from the current directory to the working directory
# Files that don't need to be copied (such as the node_modules folder) can be listed in the .dockerignore file
COPY . /www/node-server/
# Expose port 3000
EXPOSE 3000
# The command executed when the container starts
CMD npm run start
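The Dockerfile comment above mentions .dockerignore; a minimal sketch of what it could contain (the entries are only examples):

# .dockerignore
node_modules
npm-debug.log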
  5. Build the image
# Tag the image as node-server (the name used when creating the container below)
docker build -t node-server .

  6. Once the build succeeds, create the container
docker run --name node-server-1 -p 3000:3000 node-server
  7. Visit port 3000 in a browser to check whether it started successfully
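
You can also check from the command line with curl (assuming the port mapping above):

curl http://localhost:3000/getJson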

3. Building with docker-compose

So far, before using a container we have had to define a Dockerfile and then operate on the container with commands such as docker build and docker run.

However, a real system usually contains many services, each with multiple instances. If they all had to be started and stopped manually, the workload would be enormous.

docker-compose, a tool for defining and running multi-container Docker applications, makes managing containers easy and efficient.

3.1 Configuring the docker-compose.yml file

  • Create a docker-compose.yml file in the project directory
version: "3"
services: # Service list
    node: # node service
        build: . # Dockerfile directory used to build the image
        container_name: node-server-1 # container name
        ports: # Exposed port
            - "3000:3000"
        restart: always # Automatic restart
        environment: # Set environment variables
            - NODE_ENV=production
        command: npm run start # Overrides the default command executed when the container starts
  • Build the image
docker-compose build
  • Run the container
docker-compose up -d
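As a quick check (optional), you can list the services managed by this compose file:

docker-compose ps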

Barring accidents, port 3000 should again be accessible from the browser.

3.2 Orchestrating Multiple services

For example, if we now need an nginx service to proxy requests to our node-server, we have to orchestrate two services.

So here’s the question

  • How does the nginx container use my own nginx.conf configuration file?

    • This can be done with a volumes file mapping
  • How do the nginx container and the node-server container communicate?

    1. Use the docker inspect command to look up the IP address of the node-server container, then put that address into the nginx.conf configuration
    2. Use networks and links

    When a Docker container is rebuilt, its IP address may change, so with scheme 1 the nginx.conf configuration would have to be modified every time; scheme 1 is therefore clearly less efficient.

  • Add a new nginx.conf configuration file

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream node-server {
        server node:3000;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://node-server/;
        }
    }
}
  • Revise the docker-compose.yml file
# docker-compose.yml
version: "3"
services: # service
    node: # node service
        build: . # Dockerfile directory used to build the image
        container_name: node-server-1 # container name
        ports: # Exposed port
            - "3000:3000"
        restart: always # Automatic restart
        environment: 
            - NODE_ENV=production
        networks: # Join the network
            - "my-network"
        command: npm run start # Overrides the default command executed when the container starts
    nginx:
        image: nginx:latest # Use the nginx image
        ports: # map port 8080 on the host to port 80 in the container
            - "8080:80"            
        container_name: nginx-node
        restart: always
        volumes: # map the host's F:/nginx.conf file to /etc/nginx/nginx.conf in the container (read-only)
            - "F:/nginx.conf:/etc/nginx/nginx.conf:ro"
        networks: 
            - "my-network"
        links: # add an alias for the node service to /etc/hosts
            - "node"
        depends_on: # specify which service this one depends on
            - node
networks: # network
    my-network: # network name
        driver: bridge
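Before rebuilding, you can optionally sanity-check the file; docker-compose config parses the compose file and prints the resolved configuration, reporting any syntax errors:

docker-compose config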
  • To rebuild
# Delete the containers that were built last time
docker-compose down
# --force-rm removes intermediate containers created during the build
docker-compose build --force-rm
# Run the containers
docker-compose up -d

Barring accidents, the node-server should now be reachable in the browser through local port 8080.

You can continue adding services such as redis or mysql in the same way, joining them to the same network (and adding links) so they can communicate with each other; a rough sketch of a redis entry is shown below for reference.

I won't go into more detail than that here.
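
For reference only, a minimal sketch of what a redis service entry in docker-compose.yml might look like (the service name and options here are illustrative, not part of the project above):

    redis:
        image: redis:latest
        container_name: redis-server
        restart: always
        expose:
            - "6379"
        networks:
            - "my-network"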

4. Horizontally scaling the node service

When the number of users is small, a single node service is enough, but once traffic grows, one node service is no longer sufficient.

The usual approach is to upgrade the machine, add more service instances, and load-balance across them with nginx.

So how do we quickly scale the service horizontally with docker-compose?

4.1 scale

docker-compose provides a scale option for quickly starting multiple instances of a service on a single machine.

# Delete the containers that were built last time
docker-compose down
# --force-rm removes intermediate containers created during the build
docker-compose build --force-rm
# Run the containers, adding --scale node=5
docker-compose up -d --scale node=5 

We start five instances of the node service with --scale node=5

However, this time the startup will fail, reporting a port-in-use error

That is because every node service instance tries to bind port 3000 on the host.

So we need to modify the docker-compose.yml file so that port 3000 is only exposed inside the container network and not published to the host

# docker-compose.yml
version: "3"
services: # service
    node: # node service
        build: . # Dockerfile directory used to build the image
        # container_name: node-server-1
        # ports: # Exposed ports
        # - "3000:3000"
        expose:
            - "3000"
        restart: always # Automatic restart
        environment: 
            - NODE_ENV=production
        networks: # Join the network
            - "my-network"
        command: npm run start # Overrides the default command executed when the container starts
    nginx:
        image: nginx:latest # Use the nginx image
        ports: # map port 8080 on the host to port 80 in the container
            - "8080:80"            
        container_name: nginx-node
        restart: always
        volumes: # map the host's F:/nginx.conf file to /etc/nginx/nginx.conf in the container (read-only)
            - "F:/nginx.conf:/etc/nginx/nginx.conf:ro"
        networks: 
            - "my-network"
        links: # add an alias for the node service to /etc/hosts
            - "node"
        depends_on: # specify which service this one depends on
            - node
networks: # network
    my-network: # network name
        driver: bridge
  • Rerun the containers
# Run the containers, adding --scale node=5
docker-compose up -d --scale node=5 

If nothing goes wrong, the startup will succeed this time.

docker ps -a lets you view the currently running containers

docker ps -a

If nothing unexpected happens, you will find that accessing through nginx does not load-balance: the ip field in the response is always the same.

4.2 Modifying the Nginx Configuration File

Use docker inspect nginx-node to inspect the nginx container

docker inspect nginx-node

In the inspect output you can see a link such as "node-server_node_3:node", which means the hostname node used in nginx.conf resolves to the node-server_node_3 container, so nginx proxies every request to that single node service.

upstream node-server {  
    server node:3000;
} 

So let's modify the nginx.conf configuration so that requests are proxied to the different instances.

upstream node-server {
    server node_1:3000 weight=3; # weight
    server node_2:3000;
    server node_3:3000;
    server node_4:3000;
    server node_5:3000;
}
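
After modifying nginx.conf, the nginx container needs to pick up the new configuration. One way, assuming the nginx-node container name from the compose file above, is to restart the service or reload nginx inside the running container:

# restart the nginx service defined in docker-compose.yml
docker-compose restart nginx
# or reload the configuration inside the running container
docker exec nginx-node nginx -s reload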

5. To summarize

So far we have only scaled within a single machine: no matter how many instances we add, they all run on one host. But a single server's resources are limited, so how do we scale across multiple servers? That is where Swarm comes in, which I will share in a later article.

My skills are limited, so if anything here is badly written, please feel free to point it out, and go easy on me!

Welcome to follow

Follow the official account "code development", which shares the latest technical content every day