Application management

Application
Server Group
Cluster
Load Balancer
Firewall
Page preview

Deployment management
Pipeline

  • Strong pipeline capability: pipelines can be arbitrarily complex, and the expression language is powerful (later stages can reference the parameters and outputs of earlier stages through expressions).

  • Rich triggers: cron schedules, manual triggers, Jenkins jobs, Docker image pushes, or stages in other pipelines.

  • Notifications: email, SMS, or HipChat.

  • All operations are integrated into the pipeline, such as rollback, canary analysis, CI integration, and so on.
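As a sketch of the expression capability, a downstream stage can reference trigger data or the output of an earlier stage with `${ ... }` expressions (the stage name and context key below are illustrative, not from the original):

```text
# Jenkins build number from the pipeline trigger
${ trigger['buildInfo']['number'] }

# A value produced by a previous stage named "Deploy" (hypothetical key)
${ #stage('Deploy')['context']['someOutputKey'] }
```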


Deployment strategy

  • Recreate: stop all the old pods first, then start the new ones.

  • RollingUpdate: a rolling upgrade that gradually replaces old pods with new ones.

  • Canary is described separately below.
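The first two strategies map directly onto the `strategy` field of a Kubernetes Deployment; a minimal sketch (image and names are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  # Either stop everything and restart:
  # strategy:
  #   type: Recreate
  # ...or replace pods gradually (the default):
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down at a time
      maxSurge: 1         # at most one extra pod during the update
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
        - name: demo
          image: nginx:1.25
```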

Installation methods

  • Halyard installation (the officially recommended method)

  • Helm chart installation

  • Installation from source (development version)

Notes on Halyard-mode installation
Halyard proxy configuration

```shell
vim /opt/halyard/bin/halyard

DEFAULT_JVM_OPTS='-Dhttp.proxyHost=192.168.102.10 -Dhttps.proxyPort=3128'
```

Deployment machine requirements

```
18 GB of RAM
A 4-core CPU
Ubuntu 14.04, 16.04, or 18.04
```

Spinnaker installation steps

  1. Download and install Halyard.

  2. Select the cloud provider: I chose Kubernetes Provider V2 (Manifest Based), which requires completing Kubernetes cluster authentication and permission setup on the machine where Spinnaker is deployed.

  3. Choose the deployment environment: I chose the local Debian package.

  4. Select storage: Minio is officially recommended, and it is what I chose.

  5. Select the version to install: the latest at the time was v1.8.0.

  6. Then deploy. The initial deployment takes a long time, since each component package is downloaded through the proxy.

  7. Once the deployment finishes, check the logs and access the UI at localhost:9000.
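Steps 1-6 above can be sketched with halyard commands (the account name, Minio endpoint, and kubectl context are placeholders, not taken from the original):

```shell
# 1. Install Halyard (Debian/Ubuntu)
curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/debian/InstallHalyard.sh
sudo bash InstallHalyard.sh

# 2. Kubernetes Provider V2 (manifest based), using the current kubectl context
hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
  --provider-version v2 \
  --context "$(kubectl config current-context)"

# 3. Install Spinnaker as local Debian packages on this machine
hal config deploy edit --type localdebian

# 4. Minio is S3-compatible, so it is configured through the s3 storage type
hal config storage s3 edit --endpoint http://127.0.0.1:9000 \
  --access-key-id minio --secret-access-key   # prompts for the secret
hal config storage edit --type s3

# 5. Pin the Spinnaker version
hal config version edit --version 1.8.0

# 6. Deploy
sudo hal deploy apply
```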

Components involved

  • Deck: the UI component, providing an intuitive interface for operating and visualizing the release and deployment process.

  • API: for callers that skip the UI; tasks such as publishing can be driven directly through the API, with the backend executing them for us.

  • Gate: the API gateway; effectively a proxy through which all requests are forwarded.

  • Rosco: the image bakery component; it bakes machine images and needs Packer templates configured.

  • Orca: the core orchestration engine that manages pipelines and tasks.

  • Igor: integrates external CI systems, such as Jenkins.

  • Echo: the eventing and notification component; sends email and other messages.

  • Front50: the metadata storage component; requires backing stores such as Redis or Cassandra.

  • Clouddriver: adapts Spinnaker to different cloud platforms, such as Kubernetes, Google Cloud, AWS EC2, and Microsoft Azure.

  • Fiat: the authentication and authorization component for permission management; supports OAuth, SAML, LDAP, GitHub Teams, Azure Groups, Google Groups, etc.

| Component | Port |
| --- | --- |
| Clouddriver | 7002 |
| Fiat | 7003 |
| Front50 | 8080 |
| Orca | 8083 |
| Gate | 8084 |
| Rosco | 8087 |
| Igor | 8088 |
| Echo | 8089 |
| Deck | 9000 |
| Kayenta | 8090 |

External dependency components: Minio, Jenkins, LDAP, GitHub.
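Assuming the services run on the deployment machine, their `/health` endpoints (on the ports from the table) give a quick liveness check; a sketch:

```shell
# Ports from the table above; assumes Spinnaker runs on this host
for port in 7002 7003 8080 8083 8084 8087 8088 8089 8090; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:${port}/health")
  echo "port ${port}: HTTP ${code}"
done
```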

Example Pipeline deployment

Stage-configuration

Stage-jenkins

Stage-deploy

```yaml
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: '${ parameters.deployName }-deployment'
    namespace: dev
  spec:
    replicas: 2
    template:
      metadata:
        labels:
          name: '${ parameters.deployName }-deployment'
      spec:
        containers:
          - image: >-
              192.168.105.2:5000/${parameters.imageSource}/${parameters.deployName}:${parameters.imageversion}
            name: '${ parameters.deployName }-deployment'
            ports:
              - containerPort: 8080
        imagePullSecrets:
          - name: registrypullsecret
- apiVersion: v1
  kind: Service
  metadata:
    name: '${ parameters.deployName }-service'
    namespace: dev
  spec:
    ports:
      - port: 8080
        targetPort: 8080
    selector:
      name: '${ parameters.deployName }-deployment'
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: '${ parameters.deployName }-ingress'
    namespace: dev
  spec:
    rules:
      - host: '${ parameters.deployName }-dev.ingress.dev.yohocorp.com'
        http:
          paths:
            - backend:
                serviceName: '${ parameters.deployName }-service'
                servicePort: 8080
              path: /
```

Stage-Webhook

Stage-Manual Judgment

Stage-Check Preconditions

Stage-undo Rollout (Manifest)

Stage-Canary Analysis

  • Data validation

  • Data cleanup

  • Metric comparison

  • Score calculation

  • The Canary configuration must be enabled for the application.

  • Create a Baseline deployment and a Canary deployment, with the same Service pointing to both.

  • We read metrics from Prometheus, so a Prometheus metric store must be added to the Halyard configuration; a metric in the canary config can then match a Prometheus metric name directly.

    You need to configure the metrics to collect and their weights:

  • The collection and analysis frequency and the metric source are specified in the pipeline, and the scoring thresholds configured there can override those in the template.

  • Each analysis run keeps an execution record:

  • The result is shown below. Since we set a passing score of 75, this pipeline run was judged a failure.
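Enabling canary analysis and wiring in the Prometheus metric store can be sketched with halyard (the account name and Prometheus URL below are placeholders):

```shell
# Enable Kayenta-based canary analysis
hal config canary enable

# Add Prometheus as the metrics source
hal config canary prometheus enable
hal config canary prometheus account add my-prometheus \
  --base-url http://127.0.0.1:9090

# Apply the new configuration
sudo hal deploy apply
```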

  1. Self-built Kubernetes clusters usually rely on an overlay network. On Tencent Cloud, the VPC is itself a software-defined network, so container networking can run as fast as the native VM network, with no performance sacrifice.

  2. The cloud load balancer is tied to Kubernetes Ingress, so services that need external access can be created quickly.

  3. Tencent Cloud storage can be managed by Kubernetes, which makes persistence easy to operate.

  4. Tencent Cloud monitoring and alerting also expose services and interfaces, giving a better view of Node- and Pod-level metrics.

  5. The Tencent Cloud log service integrates well with containers, making log collection and retrieval convenient.