1 Background

Backing up a Kubernetes cluster is a significant challenge. Although a cluster can be backed up through etcd, it is difficult to recover a single namespace from an etcd backup.

Velero makes it convenient to back up and restore K8s cluster data, and to replicate the current cluster's data to other clusters. You can clone applications and namespaces between two clusters, for example to create a temporary development environment.

2 Velero Overview

2.1 What is Velero

Velero is a cloud-native disaster recovery and migration tool. It is itself open source, written in Go, and can safely back up, restore, and migrate Kubernetes cluster resources and persistent volumes.

Velero is Spanish for "sailboat", which fits the naming style of the Kubernetes community. Velero was developed by Heptio, which was later acquired by VMware; Heptio's founders worked at Google in 2014 and were considered core members of the Kubernetes project at the time.

Velero is optimized for cloud-native environments and supports standard K8s clusters, whether in private or public clouds. In addition to disaster recovery, it can perform resource migration, moving container applications from one cluster to another.

Heptio Velero (formerly known as ARK) is an open source tool for backup, migration, and disaster recovery of Kubernetes cluster resources and persistent storage volumes (PV).

Velero can be used to back up and restore clusters, reducing the impact of a cluster disaster. Its basic principle is to back up cluster data to object storage and to pull the data back from object storage during recovery. The supported object stores are listed in the official documentation, and Minio can be used for local storage. The following sections demonstrate backups to Minio and to Ali Cloud OSS.

2.2 Velero Workflow

2.2.1 Flow Chart

2.2.2 Backup Process

  1. The local Velero client sends a backup command.
  2. A Backup object is created in the Kubernetes cluster.
  3. The BackupController watches the Backup object and begins the backup process.
  4. The BackupController queries the API Server for the relevant data.
  5. The BackupController uploads the queried data to the remote object storage.
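The Backup object created in step 2 is an ordinary CRD instance. A minimal sketch (velero.io/v1 fields; the name and target namespace values here are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: nginx-backup        # illustrative name
  namespace: velero         # Velero watches its own namespace
spec:
  includedNamespaces:       # which namespaces to back up
  - nginx-example
  storageLocation: default  # BackupStorageLocation to write to
  ttl: 720h0m0s             # retention before garbage collection
```

The `velero backup create` commands shown later generate objects of this kind.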

2.3 Velero’s features

Velero currently includes the following features:

  • Back up and restore Kubernetes cluster data
  • Replicate resources of the current Kubernetes cluster to other Kubernetes clusters
  • Replicate the production environment to development and test environments

2.4 Velero Components

Velero consists of two components: a server and a client.

  • Server: runs in your Kubernetes cluster
  • Client: a command-line tool that runs locally, on a machine with kubectl and the cluster's kubeconfig configured

2.5 Backup storage

  • AWS S3 and S3-compatible storage such as Minio
  • Azure Blob storage
  • Google Cloud storage
  • Aliyun OSS Storage (github.com/AliyunConta…)

Project address: github.com/heptio/vele…

2.6 Application Scenario

  • Disaster recovery: back up and recover K8s clusters
  • Migration: copy cluster resources to other clusters (synchronize development, test, and production cluster configuration; simplify environment setup)

2.7 Differences with ETCD

Compared with a direct etcd backup, which dumps all resources of the cluster at once, Velero backs up objects at the Kubernetes API level. Besides backing up the entire cluster, Velero can back up or restore by category, such as by Type, Namespace, or Label.

Note: Objects created during the backup process are not backed up.
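This category-level granularity maps directly onto CLI flags (a sketch; the flags are listed in section 4.3, the backup names are illustrative):

```shell
# back up a single namespace
velero backup create ns-backup --include-namespaces nginx-example

# back up only resources carrying a given label
velero backup create label-backup --selector app=nginx

# back up only one resource type
velero backup create type-backup --include-resources deployments.apps
```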

3 Backup Process

Velero creates a number of CRDs and related controllers in the Kubernetes cluster; backup and restore operations are essentially operations on these CRDs.

# CRDs created by Velero in the Kubernetes cluster
$ kubectl -n velero get crds -l component=velero
NAME                                CREATED AT
backups.velero.io                   2019-08-28T03:19:56Z
backupstoragelocations.velero.io    2019-08-28T03:19:56Z
deletebackuprequests.velero.io      2019-08-28T03:19:56Z
downloadrequests.velero.io          2019-08-28T03:19:56Z
podvolumebackups.velero.io          2019-08-28T03:19:56Z
podvolumerestores.velero.io         2019-08-28T03:19:56Z
resticrepositories.velero.io        2019-08-28T03:19:56Z
restores.velero.io                  2019-08-28T03:19:56Z
schedules.velero.io                 2019-08-28T03:19:56Z
serverstatusrequests.velero.io      2019-08-28T03:19:56Z
volumesnapshotlocations.velero.io   2019-08-28T03:19:56Z


3.1 Ensuring data consistency

The data in object storage is the single source of truth: the controller in the Kubernetes cluster checks the remote OSS storage and, when it finds a backup there, creates the corresponding CRD objects in the cluster. If the remote storage does not hold the data for a CRD present in the current cluster, that CRD is deleted from the cluster.

3.2 Supported back-end Storage

Velero defines two CRDs for backend storage: BackupStorageLocation and VolumeSnapshotLocation.

3.2.1 BackupStorageLocation

BackupStorageLocation defines where Kubernetes cluster resource data is stored, that is, cluster object data, not PVC data. The main supported backends are S3-compatible stores, such as Minio and Ali Cloud OSS.

3.2.1.1 Minio

apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  # currently supported providers: aws, gcp, azure
  provider: aws
  # main object storage configuration
  objectStorage:
    # bucket name
    bucket: myBucket
    # prefix (directory) within the bucket
    prefix: backup
  # provider-specific configuration
  config:
    # region of the bucket
    region: us-west-2
    # S3 credentials profile
    profile: "default"
    # AWS S3 supports two styles of bucket URL:
    # 1. Path-style URL: http://s3endpoint/BUCKET
    # 2. Virtual-hosted-style URL: http://BUCKET.s3endpoint (bucket name in the Host header)
    # Default is "false"; Minio requires "true",
    # while Ali Cloud OSS returns a 403 error if set to "true"
    s3ForcePathStyle: "false"
    # S3 address, in the format http://minio:9000
    s3Url: http://minio:9000

3.2.1.2 Ali Cloud OSS

apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  labels:
    component: velero
  name: default
  namespace: velero
spec:
  config:
    region: oss-cn-beijing
    s3Url: http://oss-cn-beijing.aliyuncs.com
    s3ForcePathStyle: "false"
  objectStorage:
    bucket: build-jenkins
    prefix: ""
  provider: aws

3.2.2 VolumeSnapshotLocation

VolumeSnapshotLocation is used to create snapshots of PVs, which requires a plugin from the cloud provider. Ali Cloud provides such a plugin, which relies on storage mechanisms such as CSI. Alternatively, the dedicated backup tool Restic can back up PV data to Ali Cloud OSS (a custom option is required during installation).

# Custom option required during installation
--use-restic

# Here we store PV data in OSS (via the BackupStorageLocation), so no VolumeSnapshotLocation is needed
--use-volume-snapshots=false

Restic is a data encryption backup tool written in Go. As the name implies, Restic encrypts local data and sends it to a specific repository. Supported repositories include local, SFTP, AWS S3, Minio, OpenStack Swift, Backblaze B2, Azure Blob Storage, Google Cloud Storage, and REST Server.
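For a standalone taste of Restic outside Velero (a sketch; the repository path and password are illustrative, and restic must be installed):

```shell
# initialize an encrypted repository on local disk
export RESTIC_PASSWORD='example-password'
restic init --repo /srv/restic-repo

# back up a directory and list the resulting snapshots
restic -r /srv/restic-repo backup /etc/kubernetes
restic -r /srv/restic-repo snapshots
```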

Project address: github.com/restic/rest…

4 Practice: Velero Backup with Minio

4.1 Environment Requirements

  • Kubernetes > 1.7;

4.2 Deploying Velero

4.2.1 Download Velero

wget https://github.com/vmware-tanzu/velero/releases/download/v1.4.2/velero-v1.4.2-linux-amd64.tar.gz
tar zxvf velero-v1.4.2-linux-amd64.tar.gz

4.2.2 Install Minio

cd velero-v1.4.2-linux-amd64
[root@master velero-v1.4.2-linux-amd64]# cat examples/minio/00-minio-deployment.yaml 
---
apiVersion: v1
kind: Namespace
metadata:
  name: velero

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: minio
  template:
    metadata:
      labels:
        component: minio
    spec:
      volumes:
      - name: storage
        emptyDir: {}
      - name: config
        emptyDir: {}
      containers:
      - name: minio
        image: minio/minio:latest
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /storage
        - --config-dir=/config
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: "/storage"
        - name: config
          mountPath: "/config"

---
apiVersion: v1
kind: Service
metadata:
  namespace: velero
  name: minio
  labels:
    component: minio
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    component: minio

---
apiVersion: batch/v1
kind: Job
metadata:
  namespace: velero
  name: minio-setup
  labels:
    component: minio
spec:
  template:
    metadata:
      name: minio-setup
    spec:
      restartPolicy: OnFailure
      volumes:
      - name: config
        emptyDir: {}
      containers:
      - name: mc
        image: minio/mc:latest
        imagePullPolicy: IfNotPresent
        command:
        - /bin/sh
        - -c
        - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
        volumeMounts:
        - name: config
          mountPath: "/config"

As you can see from the resource list above, the Minio installation sets:

MINIO_ACCESS_KEY: minio

MINIO_SECRET_KEY: minio123

The service address is http://minio:9000 and the Service type is ClusterIP; to view the Minio console from outside the cluster, you can map it to a NodePort.

Finally, a Job runs `mc mb -p velero/velero` to create a bucket named velero.

  • Install
[root@master velero-v1.4.2-linux-amd64]# kubectl apply -f examples/minio/00-minio-deployment.yaml 
namespace/velero created
deployment.apps/minio created
service/minio created
job.batch/minio-setup created
[root@master velero-v1.4.2-linux-amd64]# kubectl get all -n velero 
NAME                       READY   STATUS              RESTARTS   AGE
pod/minio-fdd868c5-xv52k   0/1     ContainerCreating   0          14s
pod/minio-setup-hktjb      0/1     ContainerCreating   0          14s


NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio   ClusterIP   10.233.39.204   <none>        9000/TCP   14s


NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio   0/1     1            0           14s

NAME                             DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-fdd868c5   1         1         0       14s


NAME                    COMPLETIONS   DURATION   AGE
job.batch/minio-setup   0/1           14s        14s

After all services have started, log in to Minio and check whether the bucket named velero was created successfully.

Change the Service type to NodePort and log in to the Minio console:
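One way to do the switch is a type-only patch (a sketch; `kubectl edit` works as well):

```shell
kubectl -n velero patch svc minio -p '{"spec": {"type": "NodePort"}}'
```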

[root@master velero-v1.4.2-linux-amd64]# kubectl get svc -n velero minio
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
minio   NodePort   10.233.39.204   <none>        9000:30401/TCP   2m26s

4.2.3 Install Velero

4.2.3.1 Creating a Key

Installing Velero requires creating a credentials file so that it can log in to Minio:

cat > credentials-velero <<EOF
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
EOF
# Install the velero client
cp velero /usr/bin/

4.2.3.2 Installing Velero in the K8s Cluster

# Enable command completion
velero completion bash

# Install the server components
velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.0.0 \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

[root@master velero-v1.4.2-linux-amd64]# velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/resticrepositories.velero.io: attempting to create resource
CustomResourceDefinition/resticrepositories.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero: attempting to create resource
Namespace/velero: already exists, proceeding
ClusterRoleBinding/velero: attempting to create resource
ClusterRoleBinding/velero: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.

[root@master velero-v1.4.2-linux-amd64]# kubectl api-versions | grep velero
velero.io/v1
[root@master velero-v1.4.2-linux-amd64]# kubectl get pod -n velero
NAME                      READY   STATUS      RESTARTS   AGE
minio-fdd868c5-xv52k      1/1     Running     0          56m
minio-setup-hktjb         0/1     Completed   0          56m
velero-56fbc5d69c-8v2q7   1/1     Running     0          32m

At this point velero is fully deployed.

4.3 Velero Commands

$ velero create backup NAME [flags]

# exclude namespaces from the backup
--exclude-namespaces stringArray                  namespaces to exclude from the backup

# exclude resource types from the backup
--exclude-resources stringArray                   resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io

# include cluster-scoped resources
--include-cluster-resources optionalBool[=true]   include cluster-scoped resources in the backup

# include namespaces
--include-namespaces stringArray                  namespaces to include in the backup (use '*' for all namespaces) (default *)

# include resource types
--include-resources stringArray                   resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources)

# apply labels to this backup
--labels mapStringString                          labels to apply to the backup
-o, --output string                               Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. 'table' is not valid for the install command.

# only back up resources matching the given label selector
-l, --selector labelSelector                      only back up resources matching this label selector (default <none>)

# take snapshots of PVs
--snapshot-volumes optionalBool[=true]            take snapshots of PersistentVolumes as part of the backup

# specify the backup storage location
--storage-location string                         location in which to store the backup

# how long to keep the backup before it is garbage collected
--ttl duration                                    how long before the backup can be garbage collected (default 720h0m0s)

# specify volume snapshot locations (i.e. which cloud provider driver)
--volume-snapshot-locations strings               list of locations (at most one per provider) where volume snapshots should be stored
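Putting several of these flags together, a hypothetical invocation backing up one namespace with a custom retention period might look like this (names and values are illustrative):

```shell
velero backup create web-backup \
  --include-namespaces web \
  --selector app=nginx \
  --ttl 240h0m0s \
  --storage-location default
```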

4.4 Testing

Velero is very user-friendly: the installation package ships with a demo application that we can use for verification.

4.4.1 Creating a Test Application

[root@master velero-v1.4.2-linux-amd64]# kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
[root@master velero-v1.4.2-linux-amd64]# kubectl get all -n nginx-example
NAME                                   READY   STATUS              RESTARTS   AGE
pod/nginx-deployment-f4769bfdf-8jrsz   0/1     ContainerCreating   0          12s
pod/nginx-deployment-f4769bfdf-sqfp4   0/1     ContainerCreating   0          12s

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/my-nginx   LoadBalancer   10.233.10.49   <pending>     80:32401/TCP   13s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   0/2     2            0           14s

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-f4769bfdf   2         2         0       14s

4.4.2 Performing backup

[root@master velero-v1.4.2-linux-amd64]# velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
[root@master velero-v1.4.2-linux-amd64]# velero backup describe nginx-backup
Name:         nginx-backup
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.15.5
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=15
Phase:  Completed
Errors:    0
Warnings:  0
Namespaces:
  Included:  nginx-example
  Excluded:  <none>
Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto
Label selector:  <none>
Storage Location:  default
Velero-Native Snapshot PVs:  auto
TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2020-07-21 19:12:16 +0800 CST
Completed:  2020-07-21 19:12:24 +0800 CST

Expiration:  2020-08-20 19:12:16 +0800 CST

Total items to be backed up:  23
Items backed up:              23

Velero-Native Snapshots: <none included>


4.4.3 Viewing Backup Information

  • Log in to Minio to view the backup information

  • View the directory structure

4.4.4 Performing a Recovery Test

4.4.4.1 Deleting the Nginx Service

[root@master velero-v1.4.2-linux-amd64]# kubectl delete -f examples/nginx-app/base.yaml 
namespace "nginx-example" deleted
deployment.apps "nginx-deployment" deleted
service "my-nginx" deleted


4.4.4.2 Restoring the Nginx Service

[root@master velero-v1.4.2-linux-amd64]# velero restore create --from-backup nginx-backup --wait
Restore request "nginx-backup-20200722134728" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.

Restore completed with status: Completed. You may check for more information using the commands `velero restore describe nginx-backup-20200722134728` and `velero restore logs nginx-backup-20200722134728`.
[root@master velero-v1.4.2-linux-amd64]# kubectl  get pods -n nginx-example
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-f4769bfdf-8jrsz   1/1     Running   0          7s
nginx-deployment-f4769bfdf-sqfp4   1/1     Running   0          7s


Note: velero restore does not overwrite existing resources; it only restores resources that are missing from the current cluster. Existing resources cannot be rolled back to a previous version. If a rollback is required, delete the existing resources before restoring.
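Rolling back the demo app therefore means deleting the live resources first, then restoring (a sketch reusing the backup created above):

```shell
# remove the current (possibly modified) resources
kubectl delete namespace nginx-example

# restore the previous state from the backup
velero restore create --from-backup nginx-backup --wait
```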

5 Practice: Velero Backup with OSS

This section demonstrates how to use Velero for backup and migration on Alibaba Cloud Container Service for Kubernetes (ACK).

ACK plugin address: github.com/AliyunConta…

5.1 Creating an OSS Bucket

Set the storage class to Infrequent Access and the ACL to private.

  • Create a directory in the bucket, to be used as Velero's prefix

  • Configure the object storage life cycle

5.2 Creating an Aliyun RAM User

Here it is best to create a dedicated Ali Cloud RAM user for operating OSS and ACK resources, so that permissions are separated and security is improved.

5.2.1 Creating a Permission Policy

Policy content:

{
    "Version": "1",
    "Statement": [
        {
            "Action": [
                "ecs:DescribeSnapshots",
                "ecs:CreateSnapshot",
                "ecs:DeleteSnapshot",
                "ecs:DescribeDisks",
                "ecs:CreateDisk",
                "ecs:Addtags",
                "oss:PutObject",
                "oss:GetObject",
                "oss:DeleteObject",
                "oss:GetBucket",
                "oss:ListObjects"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow"
        }
    ]
}


5.2.2 Creating a User

When creating the user, select programmatic access to obtain the AccessKeyID and AccessKeySecret. Create a new user dedicated to backup; do not reuse an existing user's AccessKey.

5.3 Deploying the Server

5.3.1 Pull the Velero Plugin

git clone https://github.com/AliyunContainerService/velero-plugin


5.3.2 Setting Parameters

  • Modify the install/credentials-velero file, filling in the AccessKeyID and AccessKeySecret obtained for the new user.

ALIBABA_CLOUD_ACCESS_KEY_ID=<ALIBABA_CLOUD_ACCESS_KEY_ID>
ALIBABA_CLOUD_ACCESS_KEY_SECRET=<ALIBABA_CLOUD_ACCESS_KEY_SECRET>

  • Set up the backup OSS Bucket and availability zones and deploy Velero

1. Create velero namespace and Ali cloud secret

# Create the velero namespace
$ kubectl create namespace velero

# Create Ali Cloud Secret
$ kubectl create secret generic cloud-credentials --namespace velero --from-file cloud=install/credentials-velero

Copy the code

2. Replace the object storage information in CRD and deploy CRD and Velero

# OSS bucket name
$ BUCKET=devops-k8s-backup

# OSS region
$ REGION=cn-shanghai

# Prefix (directory) within the bucket
$ prefix=velero

# Deploy the Velero CRDs
$ kubectl apply -f install/00-crds.yaml

# Replace the OSS bucket and region placeholders
$ sed -i "s#<BUCKET>#$BUCKET#" install/01-velero.yaml
$ sed -i "s#<REGION>#$REGION#" install/01-velero.yaml

# Replace the prefix in the bucket (command inferred from the diff below)
$ sed -i "s#prefix: \"\"#prefix: \"$prefix\"#" install/01-velero.yaml
# Check the differences
[root@master velero-plugin]# git diff install/01-velero.yaml
diff --git a/install/01-velero.yaml b/install/01-velero.yaml
index 5669860..7dd4c5a 100644
--- a/install/01-velero.yaml
+++ b/install/01-velero.yaml
@@ -31,10 +31,10 @@ metadata:
   namespace: velero
 spec:
   config:
-    region: <REGION>
+    region: cn-shanghai
   objectStorage:
-    bucket: <BUCKET>
-    prefix: ""
+    bucket: devops-k8s-backup
+    prefix: "velero"
   provider: alibabacloud
@@ -47,7 +47,7 @@ metadata:
   namespace: velero
 spec:
   config:
-    region: <REGION>
+    region: cn-shanghai
   provider: alibabacloud


# Deploy Velero
$ kubectl apply -f install/

# Check the Velero pod
[root@master velero-plugin]# kubectl get pods -n velero
NAME                     READY   STATUS      RESTARTS   AGE
velero-fcc8d77b8-569jz   1/1     Running     0          45s


# Check the backup location
[root@master velero-plugin]# velero get backup-locations
NAME      PROVIDER       BUCKET/PREFIX              ACCESS MODE
default   alibabacloud   devops-k8s-backup/velero   ReadWrite

5.4 Backup and Restoration

5.4.1 Backup

$ velero backup create nginx-example --include-namespaces nginx-example


5.4.2 Restore

$ velero restore create --from-backup nginx-example


5.4.3 Periodic Tasks

# Create a backup every 6 hours
velero create schedule NAME --schedule="0 */6 * * *"

# Create a backup every 6 hours with the @every notation
velero create schedule NAME --schedule="@every 6h"

# Create a daily backup of the web namespace
velero create schedule NAME --schedule="@every 24h" --include-namespaces web

# Create a weekly backup, each retained for 90 days (2160 hours)
velero create schedule NAME --schedule="@every 168h" --ttl 2160h0m0s


# Back up the anchnet-devops-dev / anchnet-devops-test / anchnet-devops-prod / xxx-devops-common-test namespaces daily
velero create schedule anchnet-devops-dev --schedule="@every 24h" --include-namespaces xxxxx-devops-dev 
velero create schedule anchnet-devops-test --schedule="@every 24h" --include-namespaces xxxxx-devops-test
velero create schedule anchnet-devops-prod --schedule="@every 24h" --include-namespaces xxxxx-devops-prod 
velero create schedule anchnet-devops-common-test --schedule="@every 24h" --include-namespaces xxxxx-devops-common-test 


6 Notes

  • When Velero backs up, objects created during the backup process are not backed up.

  • velero restore does not overwrite existing resources; it only restores resources that are missing from the current cluster. Existing resources cannot be rolled back to a previous version; if a rollback is required, delete the existing resources before restoring.

  • Velero can also run on a schedule (like a CronJob) to back up data regularly.

  • On newer clusters (1.16.x), restores may fail with errors such as `unable to recognize "filebeat.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"`. The cause is that APIs deprecated between Kubernetes 1.14.x and 1.16.x were removed in the newer version.

References

  • www.hi-linux.com/posts/60858…
  • zhuanlan.zhihu.com/p/92853124
  • www.cnblogs.com/charlieroro…
  • bingohuang.com/heptio-vele…
  • mp.weixin.qq.com/s/2WEgLm717…
  • velero.io/
  • github.com/heptio/vele…
  • github.com/heptio/vele…
  • www.cncf.io/webinars/ku…
  • developer.aliyun.com/article/705…
  • github.com/AliyunConta…
  • developer.aliyun.com/article/726…
  • cloud.tencent.com/developer/a…