Author’s brief introduction

Janakiram MSV is principal analyst at Janakiram & Associates and an adjunct faculty member at the International School of Information Technology. He is also a Google Qualified Developer, Amazon Certified Solution Architect, Amazon Certified Developer, Amazon Certified SysOps Administrator and Microsoft Certified Azure Professional.

Janakiram is an ambassador for the Cloud Native Computing Foundation and one of the first Kubernetes certified administrators and Kubernetes certified application developers. He has worked at Microsoft, AWS, Gigaom Research and other well-known companies.

In previous articles, we learned about the core components of the cloud native edge computing stack: K3s, Project Calico, and Portworx.

This tutorial will take you through installing and configuring the software on an edge cluster, a group of Intel NUC minis running Ubuntu 18.04. This infrastructure can be used to run AI and IoT workloads that are reliable, scalable, and secure on the edge.

Custom K3s installation for Calico

By default, K3s uses Flannel as the container network interface (CNI), with VXLAN as its default backend. In this tutorial, we’ll replace it with Calico.

To integrate K3s with Calico, we need a custom installation that skips the default CNI so that Calico can be installed in its place.

Note that at the edge you need at least three server nodes in the K3s cluster for high availability.

On the first node designated as the server, run the following command:

export K3S_TOKEN="secret_edgecluster_token"
export INSTALL_K3S_EXEC="--flannel-backend=none --disable=traefik --cluster-cidr=172.16.2.0/24 --cluster-init"
curl -sfL https://get.k3s.io | sh -

If 172.16.2.0/24 is already in use in your network, you must choose a different pod network CIDR and substitute it for 172.16.2.0/24 in the command above.

On the remaining server nodes, run the following command. Notice that we added the --server switch, pointing it to the IP address of the first node.

export K3S_TOKEN="secret_edgecluster_token"
export INSTALL_K3S_EXEC="--flannel-backend=none --disable=traefik --cluster-cidr=172.16.2.0/24 --server https://10.0.0.60:6443"
curl -sfL https://get.k3s.io | sh -

Run the following command to configure the worker node or agent:

export K3S_URL=https://10.0.0.60:6443
export K3S_TOKEN="secret_edgecluster_token"
curl -sfL https://get.k3s.io | sh -

Replace the IP address in K3S_URL with that of your own K3s server.

At the end of this step, you should have a cluster with four nodes.

None of the nodes will be in the Ready state yet because the network has not been configured. As soon as we apply the Calico specs to the cluster, the nodes will become Ready.

Before the next step, copy /etc/rancher/k3s/k3s.yaml from the first server node to your local workstation and point the KUBECONFIG environment variable to it. Don’t forget to update the master URL in the YAML file. This provides remote access to the K3s cluster through the kubectl CLI.
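As a sketch, assuming the first server’s IP is 10.0.0.60 and you log in as the user ubuntu (both are assumptions; adjust them to your environment), the copy and rewrite can look like this:

```shell
# Copy the kubeconfig from the first K3s server (hypothetical host and user).
scp ubuntu@10.0.0.60:/etc/rancher/k3s/k3s.yaml ~/k3s.yaml

# The file points at 127.0.0.1 by default; rewrite it to the server's IP.
sed -i 's/127.0.0.1/10.0.0.60/' ~/k3s.yaml

# Point kubectl at the edge cluster.
export KUBECONFIG=~/k3s.yaml
kubectl get nodes
```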

Install Calico on a multi-node K3s cluster

We will start by downloading the Calico manifests and modifying them:

wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml

Open the custom-resources.yaml file and change the CIDR to the pod network range used during the K3s installation (172.16.2.0/24 in our example).
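For reference, the part to edit is the IP pool inside the Installation resource; a sketch of how it might look after the change (fields other than cidr follow Calico’s defaults and may differ in your downloaded copy):

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 172.16.2.0/24   # must match the --cluster-cidr passed to K3s
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
```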

Apply both manifests to configure the Calico network for the K3s cluster:

kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

In a few minutes, the cluster will be ready.

Finally, modify cni-config configmap in the calico-system namespace to enable IP forwarding:

kubectl edit cm cni-config -n calico-system

Change the values shown below to enable IP forwarding:

"container_settings": {
              "allow_ip_forwarding": true
          }

Verify that Calico is up and running with the following command:

kubectl get pods -n calico-system

Install Portworx on K3s

Portworx 2.6 and above supports K3s, and the installation process is no different from that of other Kubernetes distributions. If you haven’t installed Portworx before, you can follow the tutorial at the link below:

thenewstack.io/tutorial-in…

If you don’t have an etcd cluster handy, you can choose the built-in KVDB option in the PX-Central installation wizard.

I chose NVMe disks attached to each host as the storage option. You can modify it according to your storage configuration.

One of the most important prerequisites for K3s is CSI support, so make sure you select the Enable CSI option in the last step.


Copy the generated specification and apply it to your cluster.
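Applying the spec is an ordinary kubectl apply; the URL below is only a placeholder for the one PX-Central generates for your cluster:

```shell
# Placeholder URL -- use the exact spec URL produced by the PX-Central wizard.
kubectl apply -f 'https://install.portworx.com/<version>?<your-generated-parameters>'
```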

Within minutes, the Portworx cluster on K3s will be up and running:

kubectl get pods -l name=portworx -n kube-system

The CSI driver is attached as a sidecar to each pod of the Portworx DaemonSet, which is why we see two containers per pod.

SSH into one of the nodes and check the Portworx cluster status with the following command:

sudo /opt/pwx/bin/pxctl status

We now have a fully configured edge infrastructure based on K3s, Calico, and Portworx. In the next article, we will deploy an AIoT workload on this edge cluster. Stay tuned!

Original link:

thenewstack.io/tutorial-co…