Preface

Six months ago, I wrote about building a K8s cluster from scratch, so why write another article now?

Well… this time we'll build the K8s cluster with Kubespray, which is officially recommended for production environments.

A comparison of the common options:

| Deployment option | Advantages | Disadvantages |
| --- | --- | --- |
| Kubeadm | Official | Harder to deploy and upgrade, fewer GitHub stars |
| Kubespray | Official, production-ready, simple to deploy | Pulling the images requires getting past the firewall |
| Kops | Production-ready, the most GitHub stars | Not cross-platform |
| Rancher | Production-ready, polished, great UI | Paid |

  • After a week of struggling I stepped in just about every pit; this article will help you avoid them.

Preparation

Ideally you would prepare four machines; I only prepared three (don't ask why: I'm poor, and my machine can't run any more).

| Host | OS | Spec | IP |
| --- | --- | --- | --- |
| Ansible | CentOS 8.3 | 2 cores, 2 GB | 192.168.1.103 |
| Master | CentOS 8.3 | 2 cores, 2 GB | 192.168.1.104 |
| Node1 | CentOS 8.3 | 2 cores, 2 GB | 192.168.1.105 |
| Node2 | CentOS 8.3 | 2 cores, 2 GB | 192.168.1.106 |

I didn't actually add Node2; you can add it yourself.

This deployment uses Kubespray 2.15.0.

Perform operations on Master and Node nodes

Disable the firewall, SELinux, and swap partitions on the Ansible, Master, and Node machines.

Disabling the Firewall
# Stop the firewall
[root@k8s-master k8s]$ systemctl stop firewalld.service
# Prevent the firewall from starting on boot
[root@k8s-master k8s]$ systemctl disable firewalld.service
# Check the firewall status
[root@k8s-master k8s]$ firewall-cmd --state
not running
Disabling SELinux
# Edit /etc/selinux/config and set SELINUX=disabled
[root@k8s-master k8s]$ vim /etc/selinux/config
SELINUX=disabled
# Check the SELinux status
[root@k8s-master k8s]$ sestatus
SELinux status: disabled

Disabling Swap Partitions
  1. Method 1: modify the configuration (survives reboots)
# Edit /etc/fstab and comment out the swap line
[root@k8s-master k8s]$ vim /etc/fstab
#/dev/mapper/cl-swap swap defaults 0 0
  2. Method 2: turn swap off with a command (reverts after a reboot)
[root@k8s-master k8s]$ swapoff -a
[root@k8s-master k8s]$ free -m
              total        used        free      shared  buff/cache   available
Mem:           1816        1488          88          17         239         158
Swap:             0           0           0
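The two methods can also be combined into a single pair of commands; a minimal sketch, assuming the swap entry in /etc/fstab is not already commented out:

# Turn swap off immediately and comment out the swap line for future boots
swapoff -a
sed -ri '/swap/s/^([^#])/#\1/' /etc/fstab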

If swap is not disabled, the deployment reports the error below:

[ERROR Swap]: running with swap on is not supported. Please disable swap

Configure Ansible to connect to the other machines

Ansible runs scripted operations on the Master and Node nodes, so the Ansible machine needs passwordless SSH access to them.

# Generate an SSH public/private key pair
[root@k8s-master k8s]$ ssh-keygen
# Copy the public key to each node
[root@k8s-master k8s]$ ssh-copy-id [email protected]
[root@k8s-master k8s]$ ssh-copy-id [email protected]
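With more machines this gets repetitive; a small loop (just a sketch) copies the key to each node in turn:

# Distribute the public key to every node; each prompts for its root password once
for ip in 192.168.1.104 192.168.1.105 192.168.1.106; do
  ssh-copy-id root@$ip
done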

Install Kubespray

Preparations before Installation

Kubespray only needs to be installed on the Ansible machine, so the following steps are all performed there.

- EPEL (Extra Packages for Enterprise Linux): an additional yum software repository
- Ansible: an automated operations tool
- Jinja2: a Python-based template engine

# Install the EPEL repository
[root@k8s-master k8s]$ sudo yum -y install epel-release
# Install Python 3.6
[root@k8s-master k8s]$ sudo yum install python36 -y
# Install pip
[root@k8s-master k8s]$ yum install python-pip
# Install Ansible
[root@k8s-master k8s]$ sudo yum install -y ansible
# Upgrade Jinja2
[root@k8s-master k8s]$ pip install jinja2 --upgrade
Alternatively, install pip via the official bootstrap script

If yum install python-pip is not available on your system, use the script below instead:

# Download pip
[root@k8s-master k8s]$ wget https://bootstrap.pypa.io/get-pip.py
# Install pip
[root@k8s-master k8s]$ python3 get-pip.py

Start installing KubeSpray


Download KubeSpray source code

KubeSpray official source code

# I used the 2.15.0 release; you can git clone, or download the zip directly (I did the latter)
[root@k8s-master k8s]$ git clone https://github.com/kubernetes-sigs/kubespray.git
# Rename the extracted folder
[root@k8s-master k8s]$ mv kubespray-2.15.0 kubespray

Download zip directly

Once the source code is in place, start the installation:

[root@k8s-master k8s]$ cd kubespray
# Install the dependencies (use a domestic PyPI mirror; it's much faster and avoids timeout errors)
[root@k8s-master k8s]$ sudo pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# Copy the sample configuration
[root@k8s-master k8s]$ cp -rfp inventory/sample inventory/mycluster
# Generate the Ansible inventory file from your node IPs
[root@k8s-master k8s]$ declare -a IPS=(192.168.1.104 192.168.1.105)
[root@k8s-master k8s]$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# View or modify the default parameters, e.g. the Kubernetes version (optional)
[root@k8s-master k8s]$ cat inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
kube_version: v1.19.7
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_proxy_mode: ipvs

Congratulations, Kubespray is installed; now let's deploy the cluster.

Deploy the K8S cluster

cd kubespray
vim inventory/mycluster/hosts.yaml

all:
  hosts:
    node1:
      ansible_host: 192.168.1.104
      ip: 192.168.1.104
      access_ip: 192.168.1.104
    node2:
      ansible_host: 192.168.1.105
      ip: 192.168.1.105
      access_ip: 192.168.1.105
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}

kube-master defines the master nodes; kube-node defines the worker nodes.
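As mentioned in the preparation section, I never added Node2 (192.168.1.106). If you do, the inventory gains entries like the sketch below; the hostname node3 is hypothetical, and re-running inventory.py with all three IPs produces the same result:

all:
  hosts:
    node3:                      # hypothetical name for the 192.168.1.106 machine
      ansible_host: 192.168.1.106
      ip: 192.168.1.106
      access_ip: 192.168.1.106
  children:
    kube-node:
      hosts:
        node3:                  # worker only: not listed under kube-master or etcd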

cd kubespray
# Start the deployment
ansible-playbook -i inventory/mycluster/hosts.yaml --become -vvv --become-user=root cluster.yml

-vvv prints verbose logs; the flag is optional.
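Before launching the full playbook, it's worth a quick check that Ansible can actually reach every host in the inventory (an optional sanity check, not part of the official procedure):

# Ping all hosts over SSH via Ansible
ansible -i inventory/mycluster/hosts.yaml all -m ping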

Replace the image source

When you see images failing to pull, congratulations: it means everything up to this point succeeded.

Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

k8s.gcr.io is a registry that can only be reached from behind a proxy or VPN.

If you can't pull the images, you can build your own mirrors; see my earlier article on using Alibaba Cloud to mirror images hosted abroad.

Trying to pull the images directly fails.

For some reason, the registry.aliyuncs.com/google_containers mirror is not complete.

While I was writing a script to replace the images, I found a better way (there are plenty of image-replacement scripts online, but most of those articles are old, many from 2018).

Alibaba Cloud also hosts these images, at registry.aliyuncs.com/google_containers, so we just need to replace the image source:

[root@k8s-master k8s]$ vim inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
kube_image_repo: "registry.aliyuncs.com/google_containers"
The honest, brute-force method

After Docker has been installed successfully, manually pull the images and re-tag them (PS: we'll optimize this later).

[root@node1 k8s]$ vim pull_images.sh

docker pull nginx:1.19

docker pull registry.cn-hangzhou.aliyuncs.com/owater/coredns:1.7.0
docker tag registry.cn-hangzhou.aliyuncs.com/owater/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/coredns:1.7.0

docker pull registry.cn-hangzhou.aliyuncs.com/owater/k8s-dns-node-cache:1.16.0
docker tag registry.cn-hangzhou.aliyuncs.com/owater/k8s-dns-node-cache:1.16.0 k8s.gcr.io/dns/k8s-dns-node-cache:1.16.0
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/k8s-dns-node-cache:1.16.0

docker pull registry.cn-hangzhou.aliyuncs.com/owater/cluster-proportional-autoscaler-amd64:1.8.3
docker tag registry.cn-hangzhou.aliyuncs.com/owater/cluster-proportional-autoscaler-amd64:1.8.3 k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/cluster-proportional-autoscaler-amd64:1.8.3

docker pull registry.cn-hangzhou.aliyuncs.com/owater/kube-apiserver:v1.19.7
docker tag registry.cn-hangzhou.aliyuncs.com/owater/kube-apiserver:v1.19.7 k8s.gcr.io/kube-apiserver:v1.19.7
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/kube-apiserver:v1.19.7

docker pull registry.cn-hangzhou.aliyuncs.com/owater/kube-controller-manager:v1.19.7
docker tag registry.cn-hangzhou.aliyuncs.com/owater/kube-controller-manager:v1.19.7 k8s.gcr.io/kube-controller-manager:v1.19.7
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/kube-controller-manager:v1.19.7

docker pull registry.cn-hangzhou.aliyuncs.com/owater/kube-scheduler:v1.19.7
docker tag registry.cn-hangzhou.aliyuncs.com/owater/kube-scheduler:v1.19.7 k8s.gcr.io/kube-scheduler:v1.19.7
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/kube-scheduler:v1.19.7

docker pull registry.cn-hangzhou.aliyuncs.com/owater/kube-proxy:v1.19.7
docker tag registry.cn-hangzhou.aliyuncs.com/owater/kube-proxy:v1.19.7 k8s.gcr.io/kube-proxy:v1.19.7
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/kube-proxy:v1.19.7

docker pull registry.cn-hangzhou.aliyuncs.com/owater/etcd:v3.4.13
docker tag registry.cn-hangzhou.aliyuncs.com/owater/etcd:v3.4.13 quay.io/coreos/etcd:v3.4.13
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/etcd:v3.4.13

docker pull registry.cn-hangzhou.aliyuncs.com/owater/cni:v3.16.5
docker tag registry.cn-hangzhou.aliyuncs.com/owater/cni:v3.16.5 quay.io/calico/cni:v3.16.5
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/cni:v3.16.5

docker pull registry.cn-hangzhou.aliyuncs.com/owater/kube-controllers:v3.16.5
docker tag registry.cn-hangzhou.aliyuncs.com/owater/kube-controllers:v3.16.5 quay.io/calico/kube-controllers:v3.16.5
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/kube-controllers:v3.16.5

docker pull registry.cn-hangzhou.aliyuncs.com/owater/node:v3.16.5
docker tag registry.cn-hangzhou.aliyuncs.com/owater/node:v3.16.5 quay.io/calico/node:v3.16.5
docker rmi registry.cn-hangzhou.aliyuncs.com/owater/node:v3.16.5

Execute the script

# Make the script executable
[root@node1 k8s]$ chmod +x pull_images.sh
# Run it
[root@node1 k8s]$ ./pull_images.sh

OK, that covers all the images this version of Kubespray needs.
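Incidentally, the repetitive pull/tag/rmi pattern in pull_images.sh can be factored into a small helper; a minimal sketch, assuming the same owater mirror namespace:

#!/bin/bash
# Sketch: the same pull -> tag -> rmi dance, with the mirror prefix factored out
MIRROR=registry.cn-hangzhou.aliyuncs.com/owater

retag() {
  local src="$MIRROR/$1" dst="$2"
  docker pull "$src" && docker tag "$src" "$dst" && docker rmi "$src"
}

retag coredns:1.7.0           k8s.gcr.io/coredns:1.7.0
retag kube-apiserver:v1.19.7  k8s.gcr.io/kube-apiserver:v1.19.7
retag etcd:v3.4.13            quay.io/coreos/etcd:v3.4.13
retag cni:v3.16.5             quay.io/calico/cni:v3.16.5
# ...and so on for the remaining images in the list above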

GitHub downloads are slow

Some resources have to be downloaded from GitHub, so if you can't get past the firewall, or your GitHub access is slow, you'll need to wait patiently.

While these downloads run, the console output sits completely still; wait patiently, and don't be surprised by the occasional timeout error.

Verify that the deployment succeeded

Check from any node:

[root@node1 k8s]$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   23m   v1.19.7
node2   Ready    master   22m   v1.19.7
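It's also worth confirming that the system pods came up; all of them should eventually reach Running status (output omitted here):

# Check the control-plane and network pods
kubectl get pods -n kube-system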

Pits I stepped in

Timeouts while installing the Kubespray dependencies

The error below shows that a download timed out, so either raise pip's timeout or switch the pip source (for a persistent fix, see the sketch after the log):

# Raise the default timeout
pip3 --default-timeout=50 install -r requirements.txt
# Or use a domestic mirror
pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
[root@k8s-master k8s]$ pip3 install -r requirements.txt
Requirement already satisfied: ansible==2.9.16 in /usr/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (2.9.16)
Requirement already satisfied: PyYAML in /usr/lib64/python3.6/site-packages (from ansible==2.9.16->-r requirements.txt (line 1)) (3.12)
Requirement already satisfied: cryptography in /usr/lib64/python3.6/site-packages (from ansible==2.9.16->-r requirements.txt (line 1)) (2.3)
Collecting jinja2==2.11.1
  Using cached Jinja2-2.11.1-py2.py3-none-any.whl (126 kB)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/lib64/python3.6/site-packages (from jinja2==2.11.1->-r requirements.txt (line 2)) (0.23)
Collecting jmespath==0.9.5
  Using cached jmespath-0.9.5-py2.py3-none-any.whl (24 kB)
Collecting netaddr==0.7.19
  Downloading netaddr-0.7.19-py2.py3-none-any.whl (1.6 MB)
     |██████████▌                     | 532 kB 7.7 kB/s eta 0:02:23ERROR: Exception:
     
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/response.py", line 438, in _error_catcher
    yield
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/response.py", line 519, in read
    data = self._fp.read(amt) if not fp_closed else b""
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 62, in read
    data = self.__fp.read(amt)
  File "/usr/lib64/python3.6/http/client.py", line 459, in read
    n = self.readinto(b)
  File "/usr/lib64/python3.6/http/client.py", line 503, in readinto
    n = self.fp.readinto(b)
  File "/usr/lib64/python3.6/socket.py", line 586, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib64/python3.6/ssl.py", line 971, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib64/python3.6/ssl.py", line 833, in read
    return self._sslobj.read(len, buffer)
  File "/usr/lib64/python3.6/ssl.py", line 590, in read
    v = self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 224, in _main
    status = self.run(options, args)
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 180, in wrapper
    return func(self, options, args)
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 321, in run
    reqs, check_supported_wheels=not options.target_dir
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 122, in resolve
    requirements, max_rounds=try_to_avoid_resolution_too_deep,
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 445, in resolve
    state = resolution.resolve(requirements, max_rounds=max_rounds)
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 339, in resolve
    failure_causes = self._attempt_to_pin_criterion(name, criterion)
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 207, in _attempt_to_pin_criterion
    criteria = self._get_criteria_to_update(candidate)
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 198, in _get_criteria_to_update
    for r in self._p.get_dependencies(candidate):
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/provider.py", line 172, in get_dependencies
    for r in candidate.iter_dependencies(with_requires)
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/provider.py", line 171, in <listcomp>
    r
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 257, in iter_dependencies
    requires = self.dist.requires() if with_requires else ()
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 239, in dist
    self._prepare()
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 226, in _prepare
    dist = self._prepare_distribution()
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 319, in _prepare_distribution
    self._ireq, parallel_builds=True,
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 480, in prepare_linked_requirement
    return self._prepare_linked_requirement(req, parallel_builds)
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 505, in _prepare_linked_requirement
    self.download_dir, hashes,
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 257, in unpack_url
    hashes=hashes,
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 130, in get_http_url
    from_path, content_type = download(link, temp_dir.path)
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/network/download.py", line 163, in __call__
    for chunk in chunks:
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/progress_bars.py", line 168, in iter
    for x in it:
  File "/usr/local/lib/python3.6/site-packages/pip/_internal/network/utils.py", line 88, in response_chunks
    decode_content=False,
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/response.py", line 576, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/response.py", line 541, in read
    raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
  File "/usr/lib64/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/response.py", line 443, in _error_catcher
    raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
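Rather than passing -i on every command, the mirror and a longer timeout can be persisted in pip's config; a sketch, which requires a reasonably recent pip:

# Persist the Tsinghua mirror and a longer default timeout
pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
pip3 config set global.timeout 60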

Images cannot be pulled

fatal: [node1 -> 192.168.1.104]: FAILED! => {
    "attempts": 4,
    "changed": true,
    "cmd": ["/usr/bin/docker", "pull", "k8s.gcr.io/coredns:1.7.0"],
    "delta": "0:00:15.116121",
    "end": "2021-01-16 11:35:22.398178",
    "invocation": {
        "module_args": {
            "_raw_params": "/usr/bin/docker pull k8s.gcr.io/coredns:1.7.0",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2021-01-16 11:35:07.282057",
    "stderr": "Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)",
    "stderr_lines": [
        "Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
    ],
    "stdout": "",
    "stdout_lines": []
}

Solution: run the pull_images.sh script described above.

Problems during deployment

SSH Timeout Problem

After the images were in place and the deployment resumed, an SSH timeout occurred:

Timeout (12s) waiting for privilege escalation prompt:

Solution:

Increase the timeout in /etc/ansible/ansible.cfg:

# SSH timeout
timeout = 30
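The timeout setting lives in the [defaults] section; while you're in the file, a couple of standard SSH options can also speed up long runs (optional, generic Ansible settings, not Kubespray-specific):

[defaults]
timeout = 30

[ssh_connection]
# Reuse SSH connections between tasks instead of reconnecting each time
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s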
containerd.io version conflicts with podman

My CentOS already had containerd.io-1.2.6-3.3.el7.x86_64.rpm installed; it's best to start from a clean CentOS without Docker preinstalled.

"msg": "Depsolve Error occured: \n Problem: Problem with installed package Podman-2.0.5-5.module_EL8.3.0 +512+b3b58dca.x86_64\ n-package Podman-2.0.5-5.module_el8.3.0 +512+b3b58dca.x86_64 requires runc >= 1.0.0-57, But none of the providers can be installed\ n-package containerd.io-1.2.6-3.3.el7.x86_64 conflicts with containerd Provided by containerd.io-1.3.9-3.1.el8.x86_64\ n-package containerd.io-1.3.9-3.1.el8.x86_64 conflicts with containerd Provided by containerd.io-1.2.6-3.3.el7.x86_64\ n-package containerd. io-1.3.9-3.3.el8.x86_64 conflicts with runc Provided by containerd.io-1.2.6-3.3.el7.x86_64\n - Cannot install both containerd.io-1.3.9-3.3.el8.x86_64 and Containerd. io-1.2.6-3.3.el7.x86_64\ n-package containerd. io-1.3.9-3.3.el8.x86_64 conflicts with runc provided by containerd. io-1.3.9-3.3.el8.x86_64 conflicts with runc provided by Runc-1.0.0-68.rc92.module_el8.3.0 +475+ c50CE30b.x86_64 \ N-package ContainerD.iO-1.3.9-3.1.el8.x86_64 Obsoletes runc Provided by Runc-1.0.0-68.rc92.module_EL8.3.0 +475+ c50CE30b.x86_64 \ n-conflicting requests\ n-package Runc-1.0.0-64.rc10.module_el8.3.0 +479+ 69E2AE26.x86_64 is filtered out by Modular filtering",Copy the code

Solution:

# Check whether podman is installed
[root@k8s-node1 k8s]$ rpm -q podman
# Remove podman
[root@k8s-node1 k8s]$ dnf remove podman
buildah conflict

Same cause as above.

"msg": "Depsolve Error occured: \n Problem: Problem with installed package buildah-1.15.1-2. Module_el8.3.0 +475+ c50CE30b.x86_64 \ n-package Buildah-1.15.1-2.module_el8.3.0 +475+c50ce30b.x86_64 requires Runc >= 1.0.0-26, But none of the providers can be installed\ n-package containerd.io-1.2.6-3.3.el7.x86_64 conflicts with containerd Provided by containerd.io-1.3.9-3.1.el8.x86_64\ n-package containerd.io-1.3.9-3.1.el8.x86_64 conflicts with containerd Provided by containerd.io-1.2.6-3.3.el7.x86_64\ n-package containerd. io-1.3.9-3.3.el8.x86_64 conflicts with runc Provided by containerd.io-1.2.6-3.3.el7.x86_64\n - Cannot install both containerd.io-1.3.9-3.3.el8.x86_64 and Containerd. io-1.2.6-3.3.el7.x86_64\ n-package containerd. io-1.3.9-3.3.el8.x86_64 conflicts with runc provided by containerd. io-1.3.9-3.3.el8.x86_64 conflicts with runc provided by Runc-1.0.0-68.rc92.module_el8.3.0 +475+ c50CE30b.x86_64 \ N-package ContainerD.iO-1.3.9-3.1.el8.x86_64 Obsoletes runc Provided by Runc-1.0.0-68.rc92.module_EL8.3.0 +475+ c50CE30b.x86_64 \ n-conflicting requests\ n-package Runc - 1.0.0-56. Rc5. Dev. Git2abd837. Module_el8. 3.0 + 569 + 1 bada2e4 x86_64 is filtered out by modular filtering \ n - package Runc-1.0.0-64.rc10.module_el8.3.0 +479+ 69E2AE26.x86_64 is filtered out by Modular filtering",Copy the code

Solution:

# Check whether buildah is installed
rpm -q buildah
# Remove buildah
dnf remove buildah
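Since the two conflicts share a cause, it can be simpler to remove both packages in one go before re-running the playbook (a sketch):

# Remove both packages that conflict with the containerd.io Kubespray installs
dnf remove -y podman buildah
# Verify nothing conflicting remains
rpm -qa | grep -E 'podman|buildah'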
Jinja2 needs to be upgraded
With the stock Jinja2, the 'succeeded' filter could not be found and the EPEL check task failed:

<192.168.1.105> (0, b'\n{"cmd": "rpm -qa | grep epel-release || rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm", "stdout": "epel-release-7-13.noarch", "stderr": "", "rc": 0, "start": "2021-01-16 01:21:40.213172", "end": "2021-01-16 01:21:42.432345", "delta": "0:00:02.219173", "changed": true, "invocation": {"module_args": {"_raw_params": "rpm -qa | grep epel-release || rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm", "_uses_shell": true, "warn": true, "stdin_add_newline": true, "strip_empty_ends": true, "argv": null, "chdir": null, "executable": null, "creates": null, "removes": null, "stdin": null}}, "warnings": ["Consider using the yum, dnf or zypper module rather than running \'rpm\'. If you need to use command because yum, dnf or zypper is insufficient you can add \'warn: false\' to this command task or set \'command_warnings=False\' in ansible.cfg to get rid of this message."]}\n', b'')
fatal: [node2]: FAILED! => {
    "msg": "The conditional check 'epel_task_result|succeeded' failed. The error was: template error while templating string: no filter named 'succeeded'. String: {% if epel_task_result|succeeded %} True {% else %} False {% endif %}"
}
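The fix is what the heading says; a quick sketch to upgrade Jinja2 and confirm the installed version:

# Upgrade Jinja2, then confirm the version
pip3 install --upgrade jinja2
python3 -c "import jinja2; print(jinja2.__version__)"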
Binary downloads time out

Downloading components such as kubeadm from storage.googleapis.com can also time out; Kubespray retries a few times:

FAILED - RETRYING: download_file | Download item (3 retries left). Result was: {
    "attempts": 2,
    "changed": false,
    "elapsed": 10,
    "invocation": {
        "module_args": {
            "attributes": null,
            "backup": null,
            "checksum": "sha256:c63ef1842533cd7888c7452cab9f320dcf45fc1c173e9d40abb712d45992db24",
            "client_cert": null,
            "client_key": null,
            "content": null,
            "delimiter": null,
            "dest": "/tmp/releases/kubeadm-v1.19.7-amd64",
            "directory_mode": null,
            "follow": false,
            "force": false,
            "force_basic_auth": false,
            "group": null,
            "headers": null,
            "http_agent": "ansible-httpget",
            "mode": "0755",
            "owner": "root",
            "regexp": null,
            "remote_src": null,
            "selevel": null,
            "serole": null,
            "setype": null,
            "seuser": null,
            "sha256sum": "",
            "src": null,
            "timeout": 10,
            "tmp_dest": null,
            "unsafe_writes": null,
            "url": "https://storage.googleapis.com/kubernetes-release/release/v1.19.7/bin/linux/amd64/kubeadm",
            "url_password": null,
            "url_username": null,
            "use_proxy": true,
            "validate_certs": true
        }
    },
    "msg": "failed to create temporary content file: The read operation timed out",
    "retries": 5
}
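If it keeps failing, one workaround (a sketch, not an official Kubespray feature) is to pre-download the binary into the destination shown in the log, via a proxy or a faster mirror, so the task finds a file that already matches the checksum:

# Pre-fetch kubeadm into the path Kubespray's download task uses (from the log above)
wget -O /tmp/releases/kubeadm-v1.19.7-amd64 \
  https://storage.googleapis.com/kubernetes-release/release/v1.19.7/bin/linux/amd64/kubeadm
# Verify it against the sha256 from the same log
echo "c63ef1842533cd7888c7452cab9f320dcf45fc1c173e9d40abb712d45992db24  /tmp/releases/kubeadm-v1.19.7-amd64" | sha256sum -c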

Conclusion

Building a cluster with Kubespray is a bit fiddly overall, mainly because of images that can't be pulled; apart from that, it really is just a few commands.

Next I'll keep tinkering, and keep sharing more K8s-related tips.