Preface:
In earlier posts, Jenkins pipelines handled all of the CI/CD work in Kubernetes. Now it is time to split the CD half out into Spinnaker! The normal first step would be to integrate the Jenkins and Spinnaker user accounts with LDAP; Spinnaker's account system is already integrated with LDAP, and I have experimented with Jenkins LDAP integration before, so that part is omitted here. After all, the goal is to split up the pipeline practice, and account interoperability is not that urgent! What I do think is missing is an image scanning step, so let's do a wave of image scanning first. After all, safety comes first.
Image scan of Jenkins pipeline
Note: the image registry used is Harbor
Trivy
Harbor’s default image scanner is Trivy. In the early days it was Clair, if I remember correctly.
A look at Harbor’s API (the scan report cannot be pulled into the pipeline)
Took a look at Harbor’s API. Harbor exposes a scan endpoint that can trigger a scan directly:
But there was a catch: I wanted the report delivered straight into the Jenkins pipeline, but Harbor's API only lets Jenkins trigger the scan; there is no convenient GET for Jenkins to retrieve the report, so after the scan completes you still have to log in to Harbor to check whether the image has vulnerabilities. As an external integration this is quite weak. Still, in the spirit of learning, let's experience automatic image scanning in a Jenkins pipeline. First, refer to Zeyang's example of automatic image cleanup:
```groovy
import groovy.json.JsonSlurper

// Docker image repository information
registryServer = "harbor.layame.com"
projectName    = "${JOB_NAME}".split('-')[0]
repoName       = "${JOB_NAME}"
imageName      = "${registryServer}/${projectName}/${repoName}"
HarborAPI      = ""
// Note: ${data} (the image tag, a date string) is defined elsewhere in the original script

// Pipeline
pipeline {
    agent { node { label "build01" } }
    triggers {
        GenericTrigger(
            causeString: 'Generic Cause',
            genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
            printContributedVariables: true,
            printPostContent: true,
            regexpFilterExpression: '',
            regexpFilterText: '',
            silentResponse: true,
            token: 'spinnaker-nginx-demo')
    }
    stages {
        stage("CheckOut") {
            steps {
                script {
                    srcUrl = "https://gitlab.layabox.com/zhangpeng/spinnaker-nginx-demo.git"
                    branchName = branchName - "refs/heads/"
                    currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM',
                              branches: [[name: "${branchName}"]],
                              doGenerateSubmoduleConfigurations: false,
                              extensions: [],
                              submoduleCfg: [],
                              userRemoteConfigs: [[credentialsId: 'gitlab-admin-user', url: "${srcUrl}"]]])
                }
            }
        }
        stage("Push Image") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                        sed -i -- "s/VER/${branchName}/g" app/index.html
                        docker login -u ${username} -p ${password} ${registryServer}
                        docker build -t ${imageName}:${data} .
                        docker push ${imageName}:${data}
                        docker rmi ${imageName}:${data}
                        """
                    }
                }
            }
        }
        stage("Trigger File") {
            steps {
                script {
                    sh """
                    echo IMAGE=${imageName}:${data} > trigger.properties
                    echo ACTION=DEPLOY >> trigger.properties
                    cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }
    }
}
```
Transforming the spinnaker-nginx-demo pipeline
To verify, use the spinnaker-nginx-demo example (see the earlier Jenkins configuration for spinnaker-nginx-demo) and modify the pipeline as follows:
```groovy
// Docker image repository information
registryServer = "harbor.xxxx.com"
projectName    = "${JOB_NAME}".split('-')[0]
repoName       = "${JOB_NAME}"
imageName      = "${registryServer}/${projectName}/${repoName}"

// Pipeline
pipeline {
    agent { node { label "build01" } }
    triggers {
        GenericTrigger(
            causeString: 'Generic Cause',
            genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
            printContributedVariables: true,
            printPostContent: true,
            regexpFilterExpression: '',
            regexpFilterText: '',
            silentResponse: true,
            token: 'spinnaker-nginx-demo')
    }
    stages {
        stage("CheckOut") {
            steps {
                script {
                    srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
                    branchName = branchName - "refs/heads/"
                    currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM',
                              branches: [[name: "${branchName}"]],
                              doGenerateSubmoduleConfigurations: false,
                              extensions: [],
                              submoduleCfg: [],
                              userRemoteConfigs: [[credentialsId: 'gitlab-admin-user', url: "${srcUrl}"]]])
                }
            }
        }
        stage("Push Image") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                        sed -i -- "s/VER/${branchName}/g" app/index.html
                        docker login -u ${username} -p ${password} ${registryServer}
                        docker build -t ${imageName}:${data} .
                        docker push ${imageName}:${data}
                        docker rmi ${imageName}:${data}
                        """
                    }
                }
            }
        }
        stage("scan Image") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        harborAPI = "https://harbor.xxxx.com/api/v2.0/projects/${projectName}/repositories/${repoName}"
                        apiURL    = "artifacts/${data}/scan"
                        sh """
                        curl -X POST "${harborAPI}/${apiURL}" -H "accept: application/json" -u ${username}:${password}
                        """
                    }
                }
            }
        }
        stage("Trigger File") {
            steps {
                script {
                    sh """
                    echo IMAGE=${imageName}:${data} > trigger.properties
                    echo ACTION=DEPLOY >> trigger.properties
                    cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }
    }
}
```
This takes Yangming's image-cleanup pipeline script and adds a scan Image stage. It is all based on the Harbor API documentation; for more detail, refer to the official Harbor API.
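For reference, the scan stage above boils down to Harbor v2.0 API calls: a POST to trigger the scan, and (if you want the summary afterwards) a GET with `with_scan_overview=true`, which is exactly the part that never reaches Jenkins. A minimal sketch, with the host, project, repo and tag as placeholders matching the pipeline:

```shell
# Harbor v2.0 API endpoints used for scanning (placeholder host/project/repo/tag)
harbor="https://harbor.xxxx.com/api/v2.0"
project="spinnaker"
repo="spinnaker-nginx-demo"
tag="202111192008"

scan_url="${harbor}/projects/${project}/repositories/${repo}/artifacts/${tag}/scan"
report_url="${harbor}/projects/${project}/repositories/${repo}/artifacts/${tag}?with_scan_overview=true"

# Trigger the scan (this is what the pipeline's curl does):
#   curl -X POST -u "$user:$pass" -H 'accept: application/json' "$scan_url"
# Fetch the scan summary afterwards, outside the pipeline:
#   curl -u "$user:$pass" \
#     -H 'X-Accept-Vulnerabilities: application/vnd.security.vulnerability.report; version=1.1' \
#     "$report_url"
echo "$scan_url"
```

The POST returns immediately; the scan runs asynchronously inside Harbor, which is why the result has to be checked in the Harbor UI (or via the GET above) rather than in the build log.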
Trigger Jenkins build
The spinnaker-nginx-demo pipeline is triggered from GitLab: updating any file on the master branch of the GitLab repository triggers a Jenkins build:
Log in to the Harbor registry to verify:
OK, verified successfully. If you have other requirements, refer to Harbor's API documentation (provided Harbor supports them, of course). Since the scan report could not be integrated into Jenkins, I gave up on Harbor's built-in Trivy. That may well be down to my unfamiliarity with Trivy: I only skimmed the Harbor API instead of reading the Trivy documentation in depth.
anchore-engine
Installing anchore-engine with Helm
I stumbled on anchore-engine while searching for "Jenkins scan image" online: cloud.tencent.com/developer/a… . A very good article. Then, on the official site, there is a Helm installation guide: engine.anchore.io/docs/instal… . Install it and test it:
```shell
[root@k8s-master-01 anchore-engine]# helm repo add anchore https://charts.anchore.io
[root@k8s-master-01 anchore-engine]# helm repo list
```
Note: hahaha, I had done this before, so the Helm repo was already added and version 1.14.6 was already installed. But since I never got a successful integration with Jenkins, I wanted to try the latest version. Reality got the better of me: the Jenkins plugin is too old. (The step-by-step verification below is my own deeper dig; it actually does work.) A chance to review helm commands along the way!
```shell
[root@k8s-master-01 anchore-engine]# helm search repo anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# helm repo update
[root@k8s-master-01 anchore-engine]# helm search repo anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# helm fetch anchore/anchore-engine
```
```shell
[root@k8s-master-01 anchore-engine]# ls
[root@k8s-master-01 anchore-engine]# tar zxvf anchore-engine-1.15.1.tgz
[root@k8s-master-01 anchore-engine]# cd anchore-engine
```
```shell
vim values.yaml
```
I only changed the storage size and set a password and email!
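For the record, those edits amount to something like the fragment below. The key names are from my recollection of the anchore-engine chart and may differ between chart versions, so check the chart's own values.yaml before copying:

```yaml
# Hypothetical values.yaml excerpt; verify key names against your chart version
anchoreGlobal:
  defaultAdminPassword: "xxxxxx"        # admin password for the engine API
  defaultAdminEmail: "admin@xxxx.com"   # admin contact email

postgresql:
  persistence:
    size: 20Gi                          # storage size for the bundled PostgreSQL
```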
```shell
helm install anchore-engine -f values.yaml . -n anchore-engine
```
And then a pitfall: why does the chart assume the Kubernetes cluster domain is cluster.local by default? I went through the configuration file and could not find where to change it.
Jenkins configuration
Install the Anchore Container Image Scanner plugin in Jenkins first, then configure it under Manage Jenkins - Configure System:
Build the pipeline:
Using the Anchore Engine demo from the DevSecOps toolchain (with the build node, GitHub repository and DockerHub credentials changed):
Create a Jenkins pipeline job for anchore-engine
```groovy
pipeline {
    agent { node { label "build01" } }
    environment {
        registry = "duiniwukenaihe/spinnaker-cd"   // repository the image is pushed to
        registryCredential = 'duiniwukenaihe'      // credential used to log in to the registry
    }
    stages {
        // Jenkins checks out the code
        stage('Cloning Git') {
            steps {
                git 'https://github.com.cnpmjs.org/duiniwukenaihe/docker-dvwa.git'
            }
        }
        // Build the image
        stage('Build Image') {
            steps {
                script {
                    app = docker.build(registry + ":$BUILD_NUMBER")
                }
            }
        }
        // Push the image
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        app.push()
                    }
                }
            }
        }
        // Image scan
        stage('Container Security Scan') {
            steps {
                sh 'echo "' + registry + ':$BUILD_NUMBER `pwd`/Dockerfile" > anchore_images'
                anchore engineRetries: "240", name: 'anchore_images'
            }
        }
        stage('Cleanup') {
            steps {
                sh script: "docker rmi " + registry + ":$BUILD_NUMBER"
            }
        }
    }
}
```
Note: github.com was changed to github.com.cnpmjs.org to speed things up. Behind the firewall, the code simply cannot be pulled otherwise.
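The `anchore_images` file the Anchore plugin consumes is nothing magical: one image per line, optionally followed by the path to its Dockerfile. A sketch of what the scan stage writes (names are the demo's; `BUILD_NUMBER` is supplied by Jenkins in the real pipeline):

```shell
# Build the manifest the Anchore Jenkins plugin reads: "<image> [dockerfile]" per line
registry="duiniwukenaihe/spinnaker-cd"
BUILD_NUMBER=42   # stand-in for the Jenkins build number

echo "${registry}:${BUILD_NUMBER} $(pwd)/Dockerfile" > anchore_images
cat anchore_images
```

The `anchore` step then submits every image listed in that file to the engine and waits (up to `engineRetries`) for the analysis and policy evaluation to finish.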
Running the Pipeline task
Anyway, it ended in failure several times in a row; slowly peeling back the layers to find the problem...
Installing anchore-engine with docker-compose
Since the Helm deployment was suspect, I decided to try the official docker-compose quickstart instead. My cluster's default CRI is containerd, but the k8s-node-06 node runs Docker and does not participate in scheduling, so anchore-engine will be installed on that server. Internal IP: 10.0.4.18.
Docker-compose installation:
The default YAML file was used without further modification; after all, this is just a test for comparison.
```shell
# curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
# docker-compose up -d
```
```yaml
# This is a docker-compose file for development purposes. It references unstable developer
# builds from the HEAD of master branch in https://github.com/anchore/anchore-engine
# For a compose file intended for use with a released version, see
# https://engine.anchore.io/docs/quickstart/
---
version: '2.1'
volumes:
  anchore-db-volume:
    # Set this to 'true' to use an external volume. In which case, it must be created
    # manually with "docker volume create anchore-db-volume"
    external: false

services:
  # The primary API endpoint service
  api:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    ports:
      - "8228:8228"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=api
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "apiext"]

  # Catalog is the primary persistence and state manager of the system
  catalog:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    expose:
      - 8228
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=catalog
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "catalog"]

  queue:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=queue
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "simplequeue"]

  policy-engine:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=policy-engine
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
      - ANCHORE_VULNERABILITIES_PROVIDER=grype
    command: ["anchore-manager", "service", "start", "policy_engine"]

  analyzer:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=analyzer
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    volumes:
      - /analysis_scratch
    command: ["anchore-manager", "service", "start", "analyzer"]

  db:
    image: "postgres:9"
    volumes:
      - anchore-db-volume:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
    expose:
      - 5432
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]

# Uncomment this section to add a prometheus instance to gather metrics. This is mostly
# for quickstart to demonstrate prometheus metrics exported
#  prometheus:
#    image: docker.io/prom/prometheus:latest
#    depends_on:
#      - api
#    volumes:
#      - ./anchore-prometheus.yml:/etc/prometheus/prometheus.yml:z
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
#    ports:
#      - "9090:9090"
#
# Uncomment this section to run a swagger UI service, for inspecting and interacting with
# the anchore engine API via a browser (http://localhost:8080 by default, change if
# needed in both sections below)
#  swagger-ui-nginx:
#    image: docker.io/nginx:latest
#    depends_on:
#      - api
#      - swagger-ui
#    ports:
#      - "8080:8080"
#    volumes:
#      - ./anchore-swaggerui-nginx.conf:/etc/nginx/nginx.conf:z
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
#  swagger-ui:
#    image: docker.io/swaggerapi/swagger-ui
#    environment:
#      - URL=http://localhost:8080/v1/swagger.json
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
```
```shell
[root@k8s-node-06 anchore]# docker-compose ps
          Name                        Command                  State               Ports
------------------------------------------------------------------------------------------------
anchore_analyzer_1        /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
anchore_api_1             /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8228->8228/tcp
anchore_catalog_1         /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
anchore_db_1              docker-entrypoint.sh postgres    Up (healthy)   5432/tcp
anchore_policy-engine_1   /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
anchore_queue_1           /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
```
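With all services healthy, it is worth smoke-testing the engine from anchore-cli before wiring Jenkins back in. A sketch: the URL and credentials match the compose file above, and the commented commands are standard anchore-cli usage (they need the live engine, so they are not executed here):

```shell
# Point anchore-cli at the compose deployment (10.0.4.18 is this node's internal IP)
export ANCHORE_CLI_URL="http://10.0.4.18:8228/v1"
export ANCHORE_CLI_USER="admin"
export ANCHORE_CLI_PASS="foobar"   # the compose file's ANCHORE_ADMIN_PASSWORD

# With the engine up, the usual smoke test is:
#   anchore-cli system status                          # all services should report "up"
#   anchore-cli image add docker.io/library/nginx:latest
#   anchore-cli image wait docker.io/library/nginx:latest
#   anchore-cli image vuln docker.io/library/nginx:latest all
echo "$ANCHORE_CLI_URL"
```

Exporting the three `ANCHORE_CLI_*` variables saves repeating `--url/--u/--p` on every command.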
Modifying Jenkins Configuration
Pipeline Test:
Guesses that turned out wrong:
Is it because my CRI is containerd rather than Docker? Or is the server version too new?
Remaining questions:
How do I scan a private registry image?
But then there was a problem: the anchore-engine pipeline assumes the image registry is DockerHub, while mine is a private Harbor registry, so adding the scan stage to the spinnaker-nginx-demo application pipeline would not run...
```groovy
// Docker image repository information
registryServer = "harbor.xxxx.com"
projectName    = "${JOB_NAME}".split('-')[0]
repoName       = "${JOB_NAME}"
imageName      = "${registryServer}/${projectName}/${repoName}"

// Pipeline
pipeline {
    agent { node { label "build01" } }
    triggers {
        GenericTrigger(
            causeString: 'Generic Cause',
            genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
            printContributedVariables: true,
            printPostContent: true,
            regexpFilterExpression: '',
            regexpFilterText: '',
            silentResponse: true,
            token: 'spinnaker-nginx-demo')
    }
    stages {
        stage("CheckOut") {
            steps {
                script {
                    srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
                    branchName = branchName - "refs/heads/"
                    currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM',
                              branches: [[name: "${branchName}"]],
                              doGenerateSubmoduleConfigurations: false,
                              extensions: [],
                              submoduleCfg: [],
                              userRemoteConfigs: [[credentialsId: 'gitlab-admin-user', url: "${srcUrl}"]]])
                }
            }
        }
        stage("Push Image") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                        sed -i -- "s/VER/${branchName}/g" app/index.html
                        docker login -u ${username} -p ${password} ${registryServer}
                        docker build -t ${imageName}:${data} .
                        docker push ${imageName}:${data}
                        docker rmi ${imageName}:${data}
                        """
                    }
                }
            }
        }
        stage('Container Security Scan') {
            steps {
                script {
                    sh """
                    echo "Start scanning"
                    echo "${imageName}:${data} ${WORKSPACE}/Dockerfile" > anchore_images
                    """
                    anchore engineRetries: "360", forceAnalyze: true, name: 'anchore_images'
                }
            }
        }
        stage("Trigger File") {
            steps {
                script {
                    sh """
                    echo IMAGE=${imageName}:${data} > trigger.properties
                    echo ACTION=DEPLOY >> trigger.properties
                    cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }
    }
}
```
A GitHub issue provided inspiration:
What was going on? The anchore GitHub repository's issues (github.com/anchore/anc… ) pointed to a solution...
Add the private registry configuration
```shell
[root@k8s-node-06 anchore]# docker exec -it d21c8ed1064d bash
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli registry add harbor.xxxx.com zhangpeng xxxxxx
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar --debug image add harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008
```
Well, I added my Harbor registry. Running the Jenkins pipeline again, it looks like the report is finally produced. Log in to the anchore_api_1 container to verify:
```shell
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli image list
```
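Beyond `image list`, the report itself can be pulled from inside the same container. A sketch using the same image tag as above; the anchore-cli calls are commented because they need the live engine:

```shell
# The image added to the engine above
image="harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008"

# anchore-cli image wait "$image"       # block until analysis finishes
# anchore-cli image vuln "$image" all   # full vulnerability report (os + non-os)
# anchore-cli evaluate check "$image"   # pass/fail against the active policy bundle
echo "$image"
```

`evaluate check` is the same pass/fail verdict the Jenkins plugin uses to decide whether the build stage succeeds.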
Modify the Helm anchore-engine
By the same logic, I now suspected that the anchore-engine deployed by Helm had been wrongly blamed as well. Time to test that doubt: apply the same fix and try!
```shell
[root@k8s-master-01 anchore-engine]# kubectl get pods -n anchore-engine
NAME                                                        READY   STATUS    RESTARTS   AGE
anchore-engine-anchore-engine-analyzer-fcf9ffcc8-dv955      1/1     Running   0          10h
anchore-engine-anchore-engine-api-7f98dc568-j6tsz           1/1     Running   0          10h
anchore-engine-anchore-engine-catalog-754b996b75-q5hqg      1/1     Running   0          10h
anchore-engine-anchore-engine-policy-745b6778f7-hbsvx       1/1     Running   0          10h
anchore-engine-anchore-engine-simplequeue-695df4498-wgss4   1/1     Running   0          10h
anchore-engine-postgresql-9cdbb5f7f-4dcnk                   1/1     Running   0          10h
[root@k8s-master-01 anchore-engine]# kubectl get svc -n anchore-engine
NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
anchore-engine-anchore-engine-api           ClusterIP   172.19.255.231   <none>        8228/TCP   10h
anchore-engine-anchore-engine-catalog       ClusterIP   172.19.254.163   <none>        8082/TCP   10h
anchore-engine-anchore-engine-policy        ClusterIP   172.19.254.91    <none>        8087/TCP   10h
anchore-engine-anchore-engine-simplequeue   ClusterIP   172.19.253.141   <none>        8083/TCP   10h
anchore-engine-postgresql                   ClusterIP   172.19.252.126   <none>        5432/TCP   10h
[root@k8s-master-01 anchore-engine]# kubectl run -i --tty anchore-cli --restart=Always --image anchore/engine-cli \
    --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=xxxxxx --env ANCHORE_CLI_URL=http://172.19.255.231:8228/v1
[anchore@anchore-cli anchore-cli]$ anchore-cli registry add harbor.xxxx.com zhangpeng xxxxxxxx
[anchore@anchore-cli anchore-cli]$ anchore-cli --url http://172.19.255.231:8228/v1/ --u admin --p xxxx --debug image add harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111192008
```
Looks like it worked too, which overturns my guesses about the container runtime or version. Modify Jenkins' configuration to point at the Helm anchore-engine API address; I don't like using the in-cluster service address directly because of the cluster.local issue:
Run the Jenkins spinnaker-nginx-demo pipeline task
Modifying a file in GitLab triggers the pipeline task as before. Unfortunately the high-risk vulnerability check fails the build, haha, but the pipeline finally runs end to end:
Compare Trivy with Anchore-Engine
Take build 107 of spinnaker-nginx-demo, the artifact tagged harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111201116, and compare: the anchore-engine report flags vulnerabilities, while Harbor's Trivy scan reports none. I instantly fell into an obsessive loop of vulnerability fixing...
To sum up:
- Harbor's image scan plugin is swappable: Trivy by default, Clair is also available, and it seems anchore-engine can be plugged in as well
- anchore-engine requires private registries to be added explicitly; for Helm installs, adjust the addresses if your cluster domain is not the default cluster.local
- anchore-engine's scanning is stricter than Trivy's
- Make good use of the --help command: anchore-cli --help