Background and technology selection
In my previous “Django series” articles, the back-end architecture used Django + Celery + RabbitMQ. That raises a few questions:
- How can these three applications be deployed quickly using containers?
- How can performance be improved?
- How can back-end availability be guaranteed?
Docker Compose vs Swarm vs K8s
In my previous practice, container orchestration was handled with docker-compose, which solved the immediate problem. However, docker-compose only orchestrates containers on a single host: it simply brings up the three services, one container each, so performance and high availability may not meet the requirements.
For performance and high availability, Kubernetes (K8s) is currently the best choice for a large project, but my project is not big enough to be called a “large project”, so I am looking at how to improve performance and high availability on a single host.
Docker Swarm is the official clustering service integrated into the Docker CLI. Although it is outperformed by K8s, its design and features are mature, and its architecture has much in common with K8s. Swarm can join multiple hosts into a cluster, and it also supports deploying a container cluster on a single node. Deploying to a Swarm cluster still requires creating services, secrets, and so on by hand, which is tedious; the Stack tool solves this by parsing a compose file, so that the creation of the various services is described in a YAML script, which is much easier to manage.
Of course, the compose file syntax that Stack parses differs slightly from the one docker-compose supports, but it is mostly compatible, as described below.
To sum up, the relationships and differences between these tools are listed below (the advantages/disadvantages are judged only against my back-end architecture):
Tool/Service | Advantages | Disadvantages | Meets my needs? |
---|---|---|---|
docker-compose | Container orchestration | No high availability | No |
Kubernetes | Container orchestration, high availability, suited to large projects | None | Yes |
Swarm | Container orchestration, high availability | Slightly less mature than K8s | Yes |
Stack | Swarm command that applies a compose file | N/A | N/A |
The table deliberately includes Stack because I initially thought Stack and Swarm sat at the same level. In fact, Swarm is the cluster environment that coordinates the various service components, while Stack is more of a command-line tool that calls Swarm commands to start the different services.
Swarm Architecture Introduction
(Figure: Docker Swarm architecture)
Each service consists of a replica set of tasks, and each task is a container. For a back-end service, for example, we can improve performance by running three replica tasks. The service is the single entry point: even though three tasks are running, we access them through the service name, and requests are distributed across the tasks in round-robin (RR) fashion.
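As a rough sketch of that service/task relationship (the service and image names here are illustrative, not the ones used later in this article), a service with three replica tasks could be created by hand like this:

```bash
# create a service backed by 3 replica tasks; port 8000 is published through the routing mesh
docker service create --name demo-web --replicas 3 -p 8000:8000 myweb:latest

# list the tasks (one container each) that back the service
docker service ps demo-web
```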
Implementation
Initializing the Swarm cluster
Since I deployed my application on a single machine, extending the swarm with additional nodes is not covered here. Initialize the cluster with:

```bash
docker swarm init
```
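To confirm the initialization worked on a single-node setup, the node list should show this host as the (only) manager:

```bash
# the current host should appear as a manager with status Ready
docker node ls

# "Swarm: active" should appear in the output
docker info
```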
Once the cluster is initialized successfully, `docker network ls` shows two newly created networks:
- An overlay network called `ingress`, which handles the control commands and data traffic related to swarm services.
- A bridge network called `docker_gwbridge`, which connects the swarm's overlay networks (including `ingress`) to an individual Docker daemon's physical network, i.e. to the individual containers in the swarm.
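These two networks can be inspected directly; a quick check, assuming nothing else has been created on the host:

```bash
# ingress is a swarm-scoped overlay network
docker network ls --filter driver=overlay

# docker_gwbridge is a local bridge network on each node
docker network inspect docker_gwbridge --format '{{.Driver}} / {{.Scope}}'
```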
Troubleshooting
After initializing Swarm on one of the servers, only the `ingress` network had been created, and as a result tasks could not start once the application was deployed.
Running

```bash
docker service ps --no-trunc {serviceName}
```

produced error messages like:

```
ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
```
This pointed to a problem with the Docker networks. Comparing the networks across several servers showed that `docker_gwbridge` was missing on the broken one. Creating the bridge manually resolved the problem:
```bash
docker network create \
  --subnet 172.20.0.0/20 \
  --gateway 172.20.0.1 \
  -o com.docker.network.bridge.enable_icc=false \
  -o com.docker.network.bridge.name=docker_gwbridge \
  docker_gwbridge
```
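Afterwards it is worth double-checking that the bridge exists with the expected settings before redeploying; for example:

```bash
# confirm the bridge was created with the intended subnet and options
docker network inspect docker_gwbridge

# re-check the previously failing service (substitute the actual service name); its tasks should now start
docker service ps --no-trunc {serviceName}
```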
Application deployment
The application could be deployed by manually creating the services one by one and attaching them to the same Docker network so that they can communicate with each other. But Docker provides a more convenient way to deploy and scale an application: the docker-compose.yml configuration file.
The compose file
My docker-compose.yml looks like this:
```yaml
version: '3'
services:
  rabbit:
    image: rabbitmq:3
    ports:
      - "5672:5672"
    networks:
      - webnet
  web:
    image: myweb:latest
    command: python manage.py runserver 0.0.0.0:8000
    environment:
      - DJANGO_SETTINGS_MODULE=bd_annotation_proj.settings.staging
    deploy:
      replicas: 3
    depends_on:
      - rabbit
      - celery-worker
    ports:
      - "8000:8000"
    networks:
      - webnet
  celery-worker:
    image: myweb:latest
    command: celery -A bd_annotation_proj worker -l info
    environment:
      - DJANGO_SETTINGS_MODULE=bd_annotation_proj.settings.staging
    deploy:
      replicas: 2
    depends_on:
      - rabbit
    networks:
      - webnet
networks:
  webnet:
```
The meanings of the important parameters are as follows:
- `version`: docker stack only supports version 3 of the compose file format
- `services`: the list of services
- `image`: the image each service runs
- `command`: the command used to start the container
- `environment`: environment variables inside the container; here it points Django at the `staging` settings module
- `deploy`: deployment constraints such as the size of the replica set (`replicas`), CPU limits, and so on
- `depends_on`: dependencies between services, so that services start in order when the application is deployed (note that `docker stack deploy` actually ignores `depends_on`, so services should tolerate dependencies that are not ready yet)
- `networks`: the network each service joins; services on the same network can reach each other by service name
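Note that the stack references a locally built image, myweb:latest. On a single-node swarm no registry is needed as long as the image exists locally; a hedged sketch of preparing it (the Dockerfile location is an assumption, and the compose validation step is optional):

```bash
# build the Django/Celery image that the stack refers to as myweb:latest
docker build -t myweb:latest .

# optional sanity check: let docker-compose parse and print the resolved configuration
docker-compose -f docker-compose.yml config
```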
Creating the application and services
With docker-compose.yml in place, creating the application and its services is easy. Assuming the application is named myapp, just run:
```bash
docker stack deploy -c docker-compose.yml myapp
```
Checking the services
```bash
# View all services
docker service ls

# View the services related to myapp
docker stack services myapp
```
The tasks running inside a service are ordinary containers, so `docker ps` shows their container IDs and other details directly.
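For example, to narrow the output down to the web replicas (the containers of a stack are named with the `<stack>_<service>` prefix, so myapp_web here):

```bash
# show only the containers that belong to the myapp stack's web service
docker ps --filter name=myapp_web
```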
Scaling the application
To extend a replica set, modify the `replicas` setting in docker-compose.yml:
```yaml
...
    deploy:
      replicas: 5
```
Apply the new configuration after modification:
```bash
docker stack deploy -c docker-compose.yml myapp
```
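For a quick, one-off adjustment there is also `docker service scale`, which changes the replica count without editing the compose file (myapp_web follows the `<stack>_<service>` naming convention):

```bash
# scale the web service of the myapp stack to 5 replicas directly
docker service scale myapp_web=5
```

Because docker-compose.yml remains the source of truth, the change should also be reflected there, otherwise the next `docker stack deploy` will set the replica count back.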
Updating the image
If the image is updated, the service needs to be updated accordingly:
```bash
docker service update myapp_web --image myweb:latest --force
```
A progress bar displays the update progress of tasks in the current service.
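If the newly deployed image turns out to be broken, Swarm can also roll the service back to its previous definition; a minimal example:

```bash
# revert myapp_web to the previously deployed task specification
docker service update --rollback myapp_web
```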
Removing the application
Running

```bash
docker stack rm myapp
```

ends the application's life cycle: all of its tasks, services, and networks are removed.
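Removing the stack does not take the host out of swarm mode; if that is also desired (for example, to start over), the node can leave the swarm, assuming nothing else depends on it:

```bash
# disband swarm mode on this single-node manager
docker swarm leave --force
```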
Conclusion
After deploying my app with Docker Swarm, I tested high availability by deleting a container and watching it be restarted, which worked exactly as expected. Swarm's ingress routing mesh does the load balancing for us, much like Nginx would, and we just get to enjoy the convenience.
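For reference, that availability check was essentially the following, with hypothetical names matching the myapp stack used above:

```bash
# forcibly remove one of the web containers
docker rm -f $(docker ps -q --filter name=myapp_web | head -n 1)

# Swarm notices the lost task and schedules a replacement within seconds
docker service ps myapp_web
```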
In terms of performance, with three web containers running, response times for some requests improved roughly tenfold compared with the previous setup of running the service under uWSGI in a single container, which was quite satisfying.
Containers are a boon to developers, and learning to apply them is essential for front-end, back-end, testing, and operations work alike. While the container orchestration battle has arguably been won by K8s, Swarm is up to the job in many scenarios. Once again: technology is not right or wrong, only appropriate or inappropriate!