Introduction

DevOps is a complete, IT-operations-oriented workflow based on IT automation, continuous integration (CI), and continuous deployment (CD), which optimizes all aspects of program development, testing, and system operations and maintenance.

There are many interpretations of DevOps. As I understand it, DevOps is a software engineering philosophy that enables rapid development and delivery of software and products, with every team in the system communicating and cooperating efficiently.

This concept is best embodied by continuous integration (CI) and continuous delivery (CD). In this article, we will not expand on the theory of DevOps, but mainly introduce how to quickly build a basic DevOps system, mainly suitable for start-up teams and projects without historical burden.

Solution Architecture Diagram

Enterprise architecture diagram

System composition

Code versioning

GitLab is often used in enterprise development to host code repositories, and it can be viewed as the starting point of a DevOps architecture.

When building a CI process, code branch management needs to be standardized; the rest of the build process is based on these branches.

Here we outline a simple management model: the GitLab repository is divided into three branches (dev, test, and master), managed by three roles: development, test, and operations.

  • Dev: after a developer merges a feature branch into the dev branch, the build process is triggered: code packaging, image building, and so on. After the build completes, the new image is published through the container management platform.
  • Test: when developers deliver code for testing, a tester merges the code into the test branch, triggering the test branch's build process. After the build, the test environment is released through the management platform.
  • Master: after test acceptance passes, the code is delivered to the operations team for the online upgrade; the code is merged into the master branch, the release version is built, and the application is released once the build completes.
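The branch flow above can be sketched with plain git commands (a minimal sketch; the repository location, user identity, and feature-branch name are assumptions for illustration):

```shell
# Minimal sketch of the feature -> dev -> test -> master flow (names are illustrative)
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com" && git config user.name "dev"
git commit -q --allow-empty -m "initial commit"
git branch -M master                          # operations-owned release branch
git checkout -q -b dev                        # development integration branch
git checkout -q -b feature/login              # a developer's feature branch
git commit -q --allow-empty -m "add login feature"
git checkout -q dev
git merge -q --no-ff -m "merge feature into dev" feature/login   # would trigger the dev pipeline
git checkout -q -b test master
git merge -q --no-ff -m "deliver to test" dev                    # would trigger the test pipeline
git checkout -q master
git merge -q --no-ff -m "release" test                           # would trigger the release pipeline
git log --oneline | head -n 3
```

In a real setup, each of the three merges is what fires the corresponding branch's pipeline via the GitLab hook.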

The above is branch management based on GitLab, with builds for the three branches. Of course, the actual production environment is not so simple, and different projects may run into different problems: for example, switching environment configuration, rollback, configuration file management, database SQL management, and so on.

Continuous integration

Continuous integration is a software development practice in which team development members integrate their work frequently, usually at least once a day per member, which means that multiple integrations may occur per day. Each integration is verified by an automated build (including compilation, release, and automated testing) to find integration errors as quickly as possible. Many teams find that this process greatly reduces integration problems, allowing teams to develop cohesive software more quickly.

This definition is somewhat academic and confusing. In layman's terms, continuous integration replaces the manual process of integrating, managing, distributing, and updating new code with machine automation, following agreed-upon specifications.
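The steps being automated can be written down as an ordinary script (a sketch; the file name and commands below stand in for a real project's fetch, compile, and test steps):

```shell
# What a CI job automates, spelled out by hand (stand-in project)
set -e
workdir=$(mktemp -d) && cd "$workdir"
echo 'print("hello")' > app.py        # stand-in for pulling the latest code
python3 -m py_compile app.py          # stand-in for the compile step
python3 app.py > out.txt              # stand-in for the automated test/run step
grep -q hello out.txt && echo "build OK"
```

A CI system runs exactly this kind of sequence on every integration, so a broken step is caught immediately instead of at release time.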

There are many tools that implement such a process, of which Jenkins is the best known. However, Jenkins is written in Java and requires JDK 8.0 or above to deploy, and its feature set is quite complex. A simpler continuous integration tool, GitLab CI, is selected here.

GitLab CI is integrated by default in GitLab 8.0 and later releases.

When using GitLab CI, you need a tool called GitLab Runner to execute the work. The diagram is as follows:


GitLab CI acts as a scheduler, distributing the work that needs to be done to Runners for execution. For the build, you need a .gitlab-ci.yml file describing the build tasks; it defines the Pipeline workflow.
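After a Runner is registered against the GitLab instance, it stores its settings in a config.toml file. A minimal sketch (the name, URL, token, and image below are placeholders, not real values) looks like:

```toml
concurrent = 1

[[runners]]
  name = "docker-runner"                # placeholder name
  url = "https://gitlab.example.com/"   # your GitLab instance (placeholder)
  token = "RUNNER_TOKEN"                # issued at registration (placeholder)
  executor = "docker"                   # run jobs inside Docker containers
  [runners.docker]
    image = "docker:latest"             # default image for jobs
```

The `tags` a job declares in .gitlab-ci.yml are matched against the tags given to the Runner at registration, which is how work is routed to a specific Runner.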

  • Pipeline

When code changes on any of the three branches defined in GitLab, a hook triggers the pipeline process.

  • Stages

Stages are the phases of a pipeline (for example build, test, deploy). Stages run in sequence, and a failed stage fails the pipeline.

  • Jobs

A job is the smallest unit of work. Each job belongs to a stage, and jobs in the same stage can run in parallel on available Runners.

Here’s a basic template for .gitlab-ci.yml:

```yaml
# define jobs
job1:
  stage: test
  script:
    - echo "I am job1"
    - echo "I am in test stage"

job2:
  stage: build
  script:
    - echo "I am job2"
    - echo "I am in build stage"
```

Based on the build process above, here is a template for the Pipeline:

```yaml
variables:
  REPOSITORY: "xxxx/xxxxxx"

stages:
  - deploy

build:
  stage: deploy
  only:
    - master
  script:
    - docker build -t $REPOSITORY:prod .
    # <private-registry-address> is a placeholder for your registry host
    - docker tag $REPOSITORY:prod <private-registry-address>/$REPOSITORY:prod
    - docker push <private-registry-address>/$REPOSITORY:prod
  tags:
    - label

test:
  stage: deploy
  only:
    - test
  script:
    - docker build -t $REPOSITORY:testing .
    - docker tag $REPOSITORY:testing <private-registry-address>/$REPOSITORY:testing
    - docker push <private-registry-address>/$REPOSITORY:testing
  tags:
    - label

dev:
  stage: deploy
  only:
    - dev
  script:
    - docker build -t $REPOSITORY:dev .
    - docker tag $REPOSITORY:dev <private-registry-address>/$REPOSITORY:dev
    - docker push <private-registry-address>/$REPOSITORY:dev
  tags:
    - label
```

Continuous integration is now commonly combined with container technology: the end state is that the code to be deployed is packaged into an image and published to an image registry.

As an important part of continuous integration, the private image registry is where the built images are ultimately stored.

For this platform, the star open-source project Harbor is selected as the private image registry. We will not expand on it here; a dedicated article will cover this content.

Continuous delivery/continuous deployment

In continuous integration, we go from code to image, and the generated image is delivered to the private image registry. In continuous delivery/continuous deployment, the finished image is published to the deployment environment.

Deployment is also an important part of a DevOps environment. In short, the goal of this step is "docker run image": turning static image files into a running Docker environment.

The simplest application is to docker run the built image. However, systems are often composed of multiple components, such as Redis, Nginx, MySQL, and other subsystems integrated into a complete project. In this case, container orchestration is needed.

The purpose of orchestration is to run containers according to the specification we define.

The dominant technology here is Google's Kubernetes. If the project is simple, you can also use docker-compose directly for orchestration.
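For comparison, running the same kind of image under Kubernetes is declared as a Deployment. A minimal sketch (the name, image, and replica count below are illustrative, not from the platform described here) might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # illustrative name
spec:
  replicas: 2                    # desired number of running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25      # illustrative image
          ports:
            - containerPort: 80
```

Either way, the input is the image produced by the CI stage; orchestration only decides how it runs.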

Here’s a docker-compose template, taking Harbor as an example:

```yaml
version: '2'
services:
  log:
    image: vmware/harbor-log:v1.1.2
    container_name: harbor-log
    restart: always
    volumes:
      - /var/log/harbor/:/var/log/docker/:z
    ports:
      - 127.0.0.1:1514:514
    networks:
      - harbor
  registry:
    image: vmware/registry:2.6.1-photon
    container_name: registry
    restart: always
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
    networks:
      - harbor
    environment:
      - GODEBUG=netdns=cgo
    command: ["serve", "/etc/registry/config.yml"]
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "registry"
  mysql:
    image: vmware/harbor-db:v1.1.2
    container_name: harbor-db
    restart: always
    volumes:
      - /data/database:/var/lib/mysql:z
    networks:
      - harbor
    env_file:
      - ./common/config/db/env
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "mysql"
  adminserver:
    image: vmware/harbor-adminserver:v1.1.2
    container_name: harbor-adminserver
    env_file:
      - ./common/config/adminserver/env
    restart: always
    volumes:
      - /data/config/:/etc/adminserver/config/:z
      - /data/secretkey:/etc/adminserver/key:z
      - /data/:/data/:z
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "adminserver"
  ui:
    image: vmware/harbor-ui:v1.1.2
    container_name: harbor-ui
    env_file:
      - ./common/config/ui/env
    restart: always
    volumes:
      - ./common/config/ui/app.conf:/etc/ui/app.conf:z
      - ./common/config/ui/private_key.pem:/etc/ui/private_key.pem:z
      - /data/secretkey:/etc/ui/key:z
      - /data/ca_download/:/etc/ui/ca/:z
    networks:
      - harbor
    depends_on:
      - log
      - adminserver
      - registry
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "ui"
  jobservice:
    image: vmware/harbor-jobservice:v1.1.2
    container_name: harbor-jobservice
    env_file:
      - ./common/config/jobservice/env
    restart: always
    volumes:
      - /data/job_logs:/var/log/jobs:z
      - ./common/config/jobservice/app.conf:/etc/jobservice/app.conf:z
      - /data/secretkey:/etc/jobservice/key:z
    networks:
      - harbor
    depends_on:
      - ui
      - adminserver
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "jobservice"
  proxy:
    image: vmware/nginx:1.11.5-patched
    container_name: nginx
    restart: always
    volumes:
      - ./common/config/nginx:/etc/nginx:z
    networks:
      - harbor
    ports:
      - 80:80
      - 443:443
      - 4443:4443
    depends_on:
      - mysql
      - registry
      - ui
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "proxy"
networks:
  harbor:
    external: false
```

As for Kubernetes, a dedicated topic will expand on this content.

Conclusion

From the above, we can build a simple closed-loop DevOps system, but there is still a lot to be done to achieve a complete platform: for example, automated testing, a configuration center, release processes, agile development, and so on. This blueprint should be improved gradually, driven by needs and pain points.