Microservices have become the mainstream architecture for server-side development, and Go is increasingly favored by developers because it is simple to learn, has built-in support for high concurrency, compiles quickly, and has a small memory footprint. This microservice practice series looks at microservices from a hands-on perspective: using a "blog system" as the running example, we will go from shallow to deep and build a complete microservice system step by step

This article is the first in the microservice practice series. We will build a continuous integration and automated build-and-release system for our microservices based on go-zero + GitLab + Jenkins + K8s. First, a brief introduction to each of these components:

  • go-zero is a web and RPC framework that integrates a variety of engineering practices. Its resilient design guarantees the stability of services under heavy concurrency, and it has been thoroughly tested in real-world use
  • GitLab is a fully integrated software development platform based on Git. It also provides a wiki, online editing, issue tracking, CI/CD, and other features
  • Jenkins is a Java-based continuous integration tool for automating repetitive tasks. It aims to provide an open, easy-to-use platform that makes continuous integration of software possible
  • Kubernetes, commonly abbreviated as K8s, is an open-source system for automatically deploying, scaling, and managing containerized applications. It was designed by Google and donated to the Cloud Native Computing Foundation. Its goal is to provide "a platform for automating deployment, scaling, and operations of application containers across clusters of hosts"

The hands-on part is divided into five steps, each of which is explained in detail below:

  1. Set up the environment. Here I use two Ubuntu 16.04 servers to install GitLab and Jenkins respectively, and an elastic K8s cluster from XXX Cloud
  2. Generate the project. I use the goctl tool provided by go-zero to generate the project quickly, then make small modifications to it for testing
  3. Generate the Dockerfile and K8s deployment files. Writing K8s deployment files by hand is complex and error-prone; goctl can generate both the Dockerfile and the K8s deployment file, which is very convenient
  4. Build with Jenkins Pipeline using declarative syntax, create the Jenkinsfile, and keep it under version control in GitLab
  5. Finally, test the project to verify that the service works

Environment setup

First, we set up the experiment environment. I use two Ubuntu 16.04 servers and install GitLab and Jenkins on them respectively. GitLab is installed directly with apt-get (a minimal install sketch is shown below). After installation, start the service and check its status with gitlab-ctl; if every component is in the run state, the service has started. In my setup GitLab listens on port 9090
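For reference, a minimal sketch of the apt-based installation is shown below. It assumes the official GitLab package repository script and the gitlab-ce package; the external URL and port are just examples matching this setup:

# add GitLab's official package repository (assumes curl is available)
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | bash
# install the Community Edition; EXTERNAL_URL sets the address GitLab listens on
EXTERNAL_URL="http://your-server-ip:9090" apt-get install -y gitlab-ce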

gitlab-ctl start     # start the service
gitlab-ctl status    # check the service status

run: alertmanager: (pid 1591) 15442s; run: log: (pid 2087) 439266s
run: gitaly: (pid 1615) 15442s; run: log: (pid 2076) 439266s
run: gitlab-exporter: (pid 1645) 15442s; run: log: (pid 2084) 439266s
run: gitlab-workhorse: (pid 1657) 15441s; run: log: (pid 2083) 439266s
run: grafana: (pid 1670) 15441s; run: log: (pid 2082) 439266s
run: logrotate: (pid 5873) 1040s; run: log: (pid 2081) 439266s
run: nginx: (pid 1694) 15440s; run: log: (pid 2080) 439266s
run: node-exporter: (pid 1701) 15439s; run: log: (pid 2088) 439266s
run: postgres-exporter: (pid 1708) 15439s; run: log: (pid 2079) 439266s
run: postgresql: (pid 1791) 15439s; run: log: (pid 2075) 439266s
run: prometheus: (pid 10763) 12s; run: log: (pid 2077) 439266s
run: puma: (pid 1816) 15438s; run: log: (pid 2078) 439266s
run: redis: (pid 1821) 15437s; run: log: (pid 2086) 439266s
run: redis-exporter: (pid 1826) 15437s; run: log: (pid 2089) 439266s
run: sidekiq: (pid 1835) 15436s; run: log: (pid 2104) 439266s

Jenkins is also installed directly with apt-get. Note that Java must be installed before Jenkins; the process is fairly simple, so it is not shown in detail here. The initial admin password is located at /var/lib/jenkins/secrets/initialAdminPassword. Installing the recommended plug-ins during initialization is a good default; other plug-ins can be installed later as needed
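As a rough sketch (the exact repository key and JDK package depend on your Jenkins version and Ubuntu release), the installation boils down to the following:

# install a JDK first, then add the Jenkins apt repository and install Jenkins
apt-get install -y openjdk-8-jdk
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add -
echo "deb https://pkg.jenkins.io/debian-stable binary/" > /etc/apt/sources.list.d/jenkins.list
apt-get update && apt-get install -y jenkins
# print the initial admin password used by the setup wizard
cat /var/lib/jenkins/secrets/initialAdminPassword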

Setting up a K8s cluster from scratch is complicated. Tools such as kubeadm make it quick, but the result still falls short of a real production cluster, and since our services will ultimately run in production, I chose an elastic K8s cluster (version 1.16.9) from XXX Cloud. An elastic cluster is billed on demand with no extra cost: when the experiment is finished we can release the resources immediately with kubectl delete, so the total cost is very small. The XXX Cloud K8s cluster also provides a friendly monitoring dashboard where we can view all kinds of statistics. After the cluster is created, we need to create cluster access credentials in order to access it

  • If the client has no cluster access credentials configured yet, i.e. ~/.kube/config is empty, paste the access credentials into ~/.kube/config

  • If the client already has access credentials for another cluster, run the following commands to merge the credentials

    KUBECONFIG=~/.kube/config:~/Downloads/k8s-cluster-config kubectl config view --merge --flatten > ~/.kube/config
    export KUBECONFIG=~/.kube/config

After the access permission is configured, run the following command to view the current cluster

kubectl config current-context

View the cluster version. The following output is displayed

kubectl version

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.9", GitCommit:"a17149e1a189050796ced469dbd78d380f2ed5ef", GitTreeState:"clean", BuildDate:"2020-04-16T11:44:51Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.9-eks.2", GitCommit:"f999b99a13f40233fc5f875f0607448a759fc613", GitTreeState:"clean", BuildDate:"2020-10-09T12:54:13Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

At this point our experiment environment is set up, and GitLab is available for version management

Generating the project

The directory structure of the whole project is shown below. The outermost project is named blog, and the app directory is divided into different microservices by business. For example, the user service is split into an API service and an RPC service; the RPC service serves internal communication and provides high-performance operations such as data caching

├── blog
│   ├── app
│   │   ├── user
│   │   │   ├── api
│   │   │   └── rpc
│   │   ├── article
│   │   │   ├── api
│   │   │   └── rpc
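At this stage these are just plain directories; one way to create the skeleton is:

mkdir -p blog/app/user/api blog/app/user/rpc \
         blog/app/article/api blog/app/article/rpc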

After the project directories are created, we go into the user api directory and create a user.api file with the content below. It defines the service port as 2233 and a /user/info interface

type UserInfoRequest struct {
    Uid int64 `form:"uid"`
}

type UserInfoResponse struct {
    Uid   int64  `json:"uid"`
    Name  string `json:"name"`
    Level int    `json:"level"`
}

@server(
    port: 2233
)
service user-api {
    @doc(
        summary: get user info
    )
    @server(
        handler: UserInfo
    )
    get /user/info(UserInfoRequest) returns(UserInfoResponse)
}

After defining the API file, we run the following command to generate the API service code. One-click generation can greatly improve our productivity

goctl api go -api user.api -dir .

After the code is generated, we modify the code slightly to facilitate testing after deployment. The modified code returns the local IP address

func (ul *UserInfoLogic) UserInfo(req types.UserInfoRequest) (*types.UserInfoResponse, error) {
    addrs, err := net.InterfaceAddrs()
    if err != nil {
        return nil, err
    }

    // record a non-loopback IPv4 address of the host as the name,
    // so we can tell which pod served the request
    var name string
    for _, addr := range addrs {
        if ipnet, ok := addr.(*net.IPNet); ok && !ipnet.IP.IsLoopback() && ipnet.IP.To4() != nil {
            name = ipnet.IP.String()
        }
    }

    return &types.UserInfoResponse{
        Uid:   req.Uid,
        Name:  name,
        Level: 666,
    }, nil
}

At this point the service-generation part is complete. Since this article only builds the basic framework, we just add some test code here; the project code will be fleshed out in later articles
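Before wiring everything into CI, it is worth a quick local smoke test of the generated service. A minimal check, assuming you are in the api directory and port 2233 is free, looks like this (the name field should contain your local non-loopback IP):

go run user.go -f etc/user-api.yaml
# in another terminal
curl "http://localhost:2233/user/info?uid=1"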

Generate images and deployment files

Common images such as mysql and memcache can be pulled directly from an image registry, but our service image needs to be customized. There are several ways to customize an image, and a Dockerfile is the most widely used. Defining an image with a Dockerfile is not hard, but it is easy to get wrong, so here we again use tooling to generate it automatically. The goctl tool really shines here: it generates the Dockerfile with one command. In the api directory, run

goctl docker -go user.go

The generated file, slightly modified to fit our directory structure, is shown below. It is a two-stage build: the first stage builds the executable so that the build is independent of the host, and the second stage uses the result of the first stage to build a minimal image

FROM golang:alpine AS builder

LABEL stage=gobuilder

ENV CGO_ENABLED 0
ENV GOOS linux
ENV GOPROXY https://goproxy.cn,direct

WORKDIR /build/zero

RUN go mod init blog/app/user/api
RUN go mod download
COPY . .
COPY /etc /app/etc
RUN go build -ldflags="-s -w" -o /app/user user.go


FROM alpine

RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
ENV TZ Asia/Shanghai

WORKDIR /app
COPY --from=builder /app/user /app/user
COPY --from=builder /app/etc /app/etc

CMD ["./user", "-f", "etc/user-api.yaml"]

Then run the following command to create an image

docker build -t user:v1 -f app/user/api/Dockerfile .

At this point, running the docker images command shows that the user image has been created with tag v1

REPOSITORY                            TAG                 IMAGE ID            CREATED             SIZE
user                                  v1                  1c1f64579b40        4 days ago          17.2MB

Similarly, writing the K8s deployment file by hand is complex and error-prone, so we also generate it with goctl. In the api directory, run the following command

goctl kube deploy -name user-api -namespace blog -image user:v1 -o user.yaml -port 2233

The generated yaml file is as follows

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
  namespace: blog
  labels:
    app: user-api
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      containers:
      - name: user-api
        image: user:v1
        lifecycle:
          preStop:
            exec:
              command: ["sh","-c","sleep 5"]
        ports:
        - containerPort: 2233
        readinessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 15
          periodSeconds: 10
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1024Mi

This concludes the image and K8s deployment file generation step. It was shown here mainly for demonstration; in a real production environment, images are built automatically by the continuous integration tool

Jenkins Pipeline

Jenkins is a widely used continuous integration tool that offers several ways to define builds, and Pipeline is the most common one. Pipeline supports two syntaxes: declarative and scripted. Scripted syntax is flexible and extensible, but it is also more complex and requires learning Groovy, which raises the learning curve. Declarative syntax is simpler and more structured, so that is what we will use

A Jenkinsfile is a plain text file that expresses the deployment pipeline in Jenkins, just as a Dockerfile does for Docker: all of the pipeline logic can be defined in the Jenkinsfile. Note that Jenkins does not support Jenkinsfiles out of the box; we need to install the Pipeline plug-in via Manage Jenkins -> Manage Plugins

It is possible to type the build script directly into the Pipeline UI, but then it cannot be version-controlled, so this is not recommended except for ad hoc tests. It is more common to have Jenkins pull the Jenkinsfile from the Git repository and execute it

First install the Git plug-in. We will use SSH to clone the code, so the Git private key needs to be added to Jenkins so that Jenkins has permission to pull code from the Git repository

The steps to add the Git private key to Jenkins are: Manage Jenkins -> Manage Credentials -> Add credentials, set the type to SSH Username with private key, and then fill in the fields as prompted, as shown below
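If you do not already have a key pair dedicated to Jenkins, a sketch of preparing one is shown below; the key path and comment are only examples:

# generate a dedicated key pair for Jenkins
ssh-keygen -t rsa -b 4096 -C "jenkins" -f ~/.ssh/jenkins_gitlab -N ""
# add the public key to the GitLab account (Settings -> SSH Keys) ...
cat ~/.ssh/jenkins_gitlab.pub
# ... and paste the private key into the Jenkins credential created above
cat ~/.ssh/jenkins_gitlab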

Then create a new project in our GitLab with just a Jenkinsfile file

In the user-api project, select Pipeline script from SCM for the Pipeline definition, then add the GitLab SSH address and the corresponding credential, as shown below

Then we can write the Jenkinsfile following the steps above

  • Pull code from GitLab. We pull the code from our GitLab repository and use the short commit_id to distinguish versions

    stage('pull code from gitlab') {
        steps {
            echo 'pull code from gitlab'
            git credentialsId: 'xxxxxxxx', url: 'http://xxx.xxx.xxx.xxx:xxx/blog/blog.git'
            script {
                commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
            }
        }
    }
  • Build a docker image. Use the Dockerfile generated by goctl to build the image

    stage('build image') {
        steps {
            echo 'build image'
            sh "docker build -t user:${commit_id} app/user/api/"
        }
    }
  • Push the image to the image registry

    Steps {echo "upload images to the database" sh "docker login -u XXX -p XXXXXXX "sh "docker tag user:${commit_id} xxx/user:${commit_id}" sh "docker push xxx/user:${commit_id}" } }Copy the code
  • Deploy to K8s: replace the version tag in the deployment file so that K8s pulls the new image from the remote registry, then deploy with kubectl apply

    stage('deploy to k8s') {
        steps {
            echo 'deploy to k8s'
            sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
            sh "kubectl apply -f app/user/api/user.yaml"
        }
    }

    The complete Jenkinsfile is shown below

    pipeline {
        agent any
        stages {
            stage('pull code from gitlab') {
                steps {
                    echo 'pull code from gitlab'
                    git credentialsId: 'xxxxxx', url: 'http://xxx.xxx.xxx.xxx:9090/blog/blog.git'
                    script {
                        commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                    }
                }
            }
            stage('build image') {
                steps {
                    echo 'build image'
                    sh "docker build -t user:${commit_id} app/user/api/"
                }
            }
            stage('push image to registry') {
                steps {
                    echo 'push image to registry'
                    sh "docker login -u xxx -p xxxxxxxx"
                    sh "docker tag user:${commit_id} xxx/user:${commit_id}"
                    sh "docker push xxx/user:${commit_id}"
                }
            }
            stage('deploy to k8s') {
                steps {
                    echo 'deploy to k8s'
                    sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
                    sh "kubectl apply -f app/user/api/user.yaml"
                }
            }
        }
    }

At this point the configuration is basically complete and our basic framework is in place, so we can run the pipeline. Click Build Now on the left, and a new serial number appears under Build History. Click that number and then Console Output on the left to view the details of the build process; any errors during the build will also be printed there

The detailed build output is shown below; each stage of the pipeline produces its own output

Started by user admin
Obtained Jenkinsfile from git [email protected]:gitlab-instance-1ac0cea5/pipelinefiles.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/user-api
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
The recommended git tool is: NONE
using credential gitlab_token
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url [email protected]:gitlab-instance-1ac0cea5/pipelinefiles.git # timeout=10
Fetching upstream changes from [email protected]:gitlab-instance-1ac0cea5/pipelinefiles.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
using GIT_SSH to set credentials 
 > git fetch --tags --progress [email protected]:gitlab-instance-1ac0cea5/pipelinefiles.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 77eac3a4ca1a5b6aea705159ce26523ddd179bdf (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
Commit message: "add"
 > git rev-list --no-walk 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (pull code from gitlab)
[Pipeline] echo
pull code from gitlab
[Pipeline] git
The recommended git tool is: NONE
using credential gitlab_user_pwd
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://xxx.xxx.xxx.xxx:9090/blog/blog.git # timeout=10
Fetching upstream changes from http://xxx.xxx.xxx.xxx:9090/blog/blog.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
using GIT_ASKPASS to set credentials 
 > git fetch --tags --progress http://xxx.xxx.xxx.xxx:9090/blog/blog.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision b757e9eef0f34206414bdaa4debdefec5974c3f5 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
 > git branch -a -v --no-abbrev # timeout=10
 > git branch -D master # timeout=10
 > git checkout -b master b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
Commit message: "Merge branch 'blog/dev' into 'master'"
 > git rev-list --no-walk b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ git rev-parse --short HEAD
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (build image)
[Pipeline] echo
build image
[Pipeline] sh
+ docker build -t user:b757e9e app/user/api/
Sending build context to Docker daemon  28.16kB

Step 1/18 : FROM golang:alpine AS builder
alpine: Pulling from library/golang
801bfaa63ef2: Pulling fs layer
ee0a1ba97153: Pulling fs layer
1db7f31c0ee6: Pulling fs layer
ecebeec079cf: Pulling fs layer
63b48972323a: Pulling fs layer
ecebeec079cf: Waiting
63b48972323a: Waiting
1db7f31c0ee6: Verifying Checksum
1db7f31c0ee6: Download complete
ee0a1ba97153: Verifying Checksum
ee0a1ba97153: Download complete
63b48972323a: Verifying Checksum
63b48972323a: Download complete
801bfaa63ef2: Verifying Checksum
801bfaa63ef2: Download complete
801bfaa63ef2: Pull complete
ee0a1ba97153: Pull complete
1db7f31c0ee6: Pull complete
ecebeec079cf: Verifying Checksum
ecebeec079cf: Download complete
ecebeec079cf: Pull complete
63b48972323a: Pull complete
Digest: sha256:49b4eac11640066bc72c74b70202478b7d431c7d8918e0973d6e4aeb8b3129d2
Status: Downloaded newer image for golang:alpine
 ---> 1463476d8605
Step 2/18 : LABEL stage=gobuilder
 ---> Running in c4f4dea39a32
Removing intermediate container c4f4dea39a32
 ---> c04bee317ea1
Step 3/18 : ENV CGO_ENABLED 0
 ---> Running in e8e848d64f71
Removing intermediate container e8e848d64f71
 ---> ff82ee26966d
Step 4/18 : ENV GOOS linux
 ---> Running in 58eb095128ac
Removing intermediate container 58eb095128ac
 ---> 825ab47146f5
Step 5/18 : ENV GOPROXY https://goproxy.cn,direct
 ---> Running in df2add4e39d5
Removing intermediate container df2add4e39d5
 ---> c31c1aebe5fa
Step 6/18 : WORKDIR /build/zero
 ---> Running in f2a1da3ca048
Removing intermediate container f2a1da3ca048
 ---> 5363d05f25f0
Step 7/18 : RUN go mod init blog/app/user/api
 ---> Running in 11d0adfa9d53
go: creating new go.mod: module blog/app/user/api
Removing intermediate container 11d0adfa9d53
 ---> 3314852f00fe
Step 8/18 : RUN go mod download
 ---> Running in aa9e9d9eb850
Removing intermediate container aa9e9d9eb850
 ---> a0f2a7ffe392
Step 9/18 : COPY . .
 ---> a807f60ed250
Step 10/18 : COPY /etc /app/etc
 ---> c4c5d9f15dc0
Step 11/18 : RUN go build -ldflags="-s -w" -o /app/user user.go
 ---> Running in a4321c3aa6e2
go: finding module for package github.com/tal-tech/go-zero/core/conf
go: finding module for package github.com/tal-tech/go-zero/rest/httpx
go: finding module for package github.com/tal-tech/go-zero/rest
go: finding module for package github.com/tal-tech/go-zero/core/logx
go: downloading github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/conf in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest/httpx in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/logx in github.com/tal-tech/go-zero v1.1.1
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/justinas/alice v1.2.0
go: downloading github.com/dgrijalva/jwt-go v3.2.0+incompatible
go: downloading go.uber.org/automaxprocs v1.3.0
go: downloading github.com/spaolacci/murmur3 v1.1.0
go: downloading github.com/google/uuid v1.1.1
go: downloading google.golang.org/grpc v1.29.1
go: downloading github.com/prometheus/client_golang v1.5.1
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/golang/protobuf v1.4.2
go: downloading github.com/prometheus/common v0.9.1
go: downloading github.com/cespare/xxhash/v2 v2.1.1
go: downloading github.com/prometheus/client_model v0.2.0
go: downloading github.com/prometheus/procfs v0.0.8
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: downloading google.golang.org/protobuf v1.25.0
Removing intermediate container a4321c3aa6e2
 ---> 99ac2cd5fa39
Step 12/18 : FROM alpine
latest: Pulling from library/alpine
801bfaa63ef2: Already exists
Digest: sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
Status: Downloaded newer image for alpine:latest
 ---> 389fef711851
Step 13/18 : RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
 ---> Running in 51694dcb96b6
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
v3.12.3-38-g9ff116e4f0 [http://dl-cdn.alpinelinux.org/alpine/v3.12/main]
v3.12.3-39-ge9195171b7 [http://dl-cdn.alpinelinux.org/alpine/v3.12/community]
OK: 12746 distinct packages available
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/2) Installing ca-certificates (20191127-r4)
(2/2) Installing tzdata (2020f-r0)
Executing busybox-1.31.1-r19.trigger
Executing ca-certificates-20191127-r4.trigger
OK: 10 MiB in 16 packages
Removing intermediate container 51694dcb96b6
 ---> e5fb2e4d5eea
Step 14/18 : ENV TZ Asia/Shanghai
 ---> Running in 332fd0df28b5
Removing intermediate container 332fd0df28b5
 ---> 11c0e2e49e46
Step 15/18 : WORKDIR /app
 ---> Running in 26e22103c8b7
Removing intermediate container 26e22103c8b7
 ---> 11d11c5ea040
Step 16/18 : COPY --from=builder /app/user /app/user
 ---> f69f19ffc225
Step 17/18 : COPY --from=builder /app/etc /app/etc
 ---> b8e69b663683
Step 18/18 : CMD ["./user", "-f", "etc/user-api.yaml"]
 ---> Running in 9062b0ed752f
Removing intermediate container 9062b0ed752f
 ---> 4867b4994e43
Successfully built 4867b4994e43
Successfully tagged user:b757e9e
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (push image to registry)
[Pipeline] echo
push image to registry
[Pipeline] sh
+ docker login -u xxx -p xxxxxxxx
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[Pipeline] sh
+ docker tag user:b757e9e xxx/user:b757e9e
[Pipeline] sh
+ docker push xxx/user:b757e9e
The push refers to repository [docker.io/xxx/user]
b19a970f64b9: Preparing
f695b957e209: Preparing
ee27c5ca36b5: Preparing
7da914ecb8b0: Preparing
777b2c648970: Preparing
777b2c648970: Layer already exists
ee27c5ca36b5: Pushed
b19a970f64b9: Pushed
7da914ecb8b0: Pushed
f695b957e209: Pushed
b757e9e: digest: sha256:6ce02f8a56fb19030bb7a1a6a78c1a7c68ad43929ffa2d4accef9c7437ebc197 size: 1362
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (deploy to k8s)
[Pipeline] echo
deploy to k8s
[Pipeline] sh
+ sed -i s/<COMMIT_ID_TAG>/b757e9e/ app/user/api/user.yaml
[Pipeline] sh
+ kubectl apply -f app/user/api/user.yaml
deployment.apps/user-api created
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

After the pipeline finishes, we check the status of the pods with the following command; the -n flag specifies the namespace

kubectl get pods -n blog

NAME                       READY   STATUS    RESTARTS   AGE
user-api-84ffd5b7b-c8c5w   1/1     Running   0          10m
user-api-84ffd5b7b-pmh92   1/1     Running   0          10m

We specified the namespace blog in the K8s deployment file, so the namespace has to be created before the pipeline runs

kubectl create namespace blog

Now that the service is deployed, how do we access it from outside the cluster? Here we use a Service of type LoadBalancer. The Service definition is shown below: port 80 is mapped to the container's port 2233, and the selector matches the label defined in the Deployment

apiVersion: v1
kind: Service
metadata:
  name: user-api-service
  namespace: blog
spec:
  selector:
    app: user-api
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 2233

Create the Service with the command below and then view it; note that the -n flag must be added to specify the namespace

kubectl apply -f user-service.yaml
kubectl get services -n blog

NAME               TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
user-api-service   LoadBalancer   <none>       xxx.xxx.xxx.xx   80:32470/TCP   79m

EXTERNAL-IP is the address exposed for access from outside the cluster, and the exposed port is 80

At this point all of the deployment work is done; it is well worth trying it out yourself

Testing

Finally, we test whether the deployed service works correctly by accessing it through the EXTERNAL-IP

curl "http://xxx.xxx.xxx.xxx:80/user/info? Uid = 1 "{" uid" : 1, "name" : "172.17.0.5", "level" : 666} curl http://xxx.xxx.xxx.xxx:80/user/info\? {uid \ = 1 "uid" : 1, "name" : "172.17.0.8", "level" : 666}Copy the code

The two requests returned different IP addresses, meaning they were handled by different pods, so load balancing is working as expected; the LoadBalancer uses the round robin load-balancing policy by default
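To see the alternation more clearly, you can issue several requests in a row; a quick sketch (replace the address with your own EXTERNAL-IP):

for i in $(seq 1 6); do
    curl -s "http://xxx.xxx.xxx.xxx/user/info?uid=1"
    echo
done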

Conclusion

Above, we implemented the entire process from code development to version management to build and deployment, completing an infrastructure that is still rudimentary. In the rest of this series we will use this blog system as a basis to gradually improve the whole architecture, for example by refining the CI/CD process, adding monitoring, enriching the blog system's features, and applying high-availability best practices and principles

Good tools can greatly improve productivity and reduce the chance of mistakes. We used goctl heavily above, and it really is hard to put down. See you next time!

My abilities are limited, so there are bound to be mistakes in this article; criticism and corrections are very welcome!

The project address

Github.com/tal-tech/go…

You are welcome to use it, and to star the repo to support us! 👏