An important part of my job is attending technical conferences, most recently KubeCon North America last November. By the last day of the conference, everyone was exhausted, and I found myself mechanically repeating the same conversations with different people. Eventually I grew frustrated enough to escape the crowd and sit in on a talk. I stumbled upon a session by Darren Shepherd, Rancher’s CTO, on “Behind the K3s: Building a Production-grade Lightweight Kubernetes Distribution.” I was fascinated by the presentation, and afterward I began to learn more about K3s.
K3s is a lightweight Kubernetes distribution for Internet of Things (IoT) and edge computing, created by Rancher Labs, the maker of the most widely used Kubernetes management platform in the industry. It is 100% open source, ships as a small binary, and is optimized for ARM, making it perfect for my IoT home project. That got me thinking about how to expose services inside a K3s server through a Kong Gateway also running on K3s.
To my surprise, K3s comes with an Ingress controller by default. While the default proxy/load balancer works, I needed some plug-in functionality that it doesn’t support unless I use Kong Gateway. So, let’s go through a quick guide on how to start K3s on Ubuntu, configure it to support Kong for Kubernetes, and deploy some services and plug-ins.
Configure K3s to deploy the Kong Ingress Controller
First, install K3s as a service on systemd- and openrc-based systems using the installation script from get.k3s.io. But we need to pass some additional flags to configure the installation. The first, --no-deploy traefik, turns off the bundled Ingress controller (Traefik), because we want to deploy Kong instead to take advantage of some of its plug-ins. The second, --write-kubeconfig-mode 644, sets the file permissions of the generated kubeconfig so that unprivileged users can read it. This is useful for allowing the K3s cluster to be imported into Rancher.
$ curl -sfL https://get.k3s.io | sh -s - --no-deploy traefik --write-kubeconfig-mode 644
[INFO]  Finding release for channel stable
[INFO]  Using v1.18.4+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
To check that both nodes and pods are up and running, use k3s kubectl, which accepts the same commands as kubectl.
$ k3s kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
ubuntu-xenial   Ready    master   4m38s   v1.18.4+k3s1

$ k3s kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   metrics-server-7566d596c8-vqqz7          1/1     Running   0          4m30s
kube-system   local-path-provisioner-6d59f47c7-tcs2l   1/1     Running   0          4m30s
kube-system   coredns-8655855d6-rjzrq                  1/1     Running   0          4m30s
Install Kong for Kubernetes on K3s
Once K3s is up and running, you can follow the normal steps to install Kong for Kubernetes, such as applying the manifest shown below:
$ k3s kubectl create -f https://bit.ly/k4k8s
namespace/kong created
customresourcedefinition.apiextensions.k8s.io/kongclusterplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/tcpingresses.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created
After the Kong proxy and Ingress Controller are installed on the K3s server, check the services in the kong namespace to find the external IP of the kong-proxy LoadBalancer.
$ k3s kubectl get svc --namespace kong
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kong-validation-webhook   ClusterIP      10.43.157.178   <none>        443/TCP                      61s
kong-proxy                LoadBalancer   10.43.63.117    10.0.2.15     80:32427/TCP,443:30563/TCP   61s
Export that IP as a variable by running the following command:
$ PROXY_IP=$(k3s kubectl get services --namespace kong kong-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Finally, before we put any services behind the proxy, check that the proxy responds:
$ curl -i $PROXY_IP
HTTP/1.1 404 Not Found
Date: Mon, 29 Jun 2020 20:31:16 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 0
Server: kong/2.0.4

{"message":"no Route matched with those values"}
It should return a 404, because we haven’t added any services to K3s yet. But as you can see in the headers, the request is being proxied by the latest version of Kong, which also reports additional information such as response latency.
Set up your K3s application to test the Kong Ingress Controller
Now, let’s set up an echo server application in K3s to demonstrate how to use the Kong Ingress Controller:
$ k3s kubectl apply -f https://bit.ly/echo-service
service/echo created
deployment.apps/echo created
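For context, the manifest behind that URL defines an echo Service backed by a Deployment. A rough sketch is below; the image name and port numbers here are assumptions for illustration, not necessarily what the linked manifest actually contains:

```yaml
# Sketch of a typical echo Service + Deployment pair.
# Image and ports are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        # echoserver replies with details about the request and pod
        image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
        ports:
        - containerPort: 8080
```

The important part for the Ingress step that follows is simply that a Service named echo listens on port 80.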
Next, create an Ingress rule to proxy the echo server created above:
$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
" | k3s kubectl apply -f -
ingress.extensions/demo created
Test the Ingress rule:
$ curl -i $PROXY_IP/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Mon, 29 Jun 2020 20:31:07 GMT
Server: echoserver
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

Hostname: echo-78b867555-jkhhl

Pod Information:
	node name:	ubuntu-xenial
	pod name:	echo-78b867555-jkhhl
	pod namespace:	default
	pod IP:	10.42.0.7
<-- clipped -->
If everything is deployed correctly, you should see the response above. This verifies that Kong can correctly route traffic to the application running in Kubernetes.
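The same Ingress can fan out to multiple backends by path. As a sketch (the second path and its backend, other-service, are hypothetical and not part of this walkthrough), the rule above could be extended like this:

```yaml
# Sketch: one Ingress routing two paths to two Services.
# "other-service" is an assumed name for illustration only.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
      - path: /bar
        backend:
          serviceName: other-service
          servicePort: 80
```

Kong matches the request path against each entry and forwards to the corresponding Service.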
Install a rate-limiting plug-in using the Kong Ingress Controller
Kong Ingress allows plug-ins to execute at the service level: whenever a request is sent to a particular K3s service, Kong executes the plug-in regardless of which Ingress path the request came from. You can also attach plug-ins to an Ingress path instead. In the following steps, I’ll use the rate-limiting plug-in to prevent any single IP from making too many requests to one particular service.
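For the Ingress-level alternative, the same konghq.com/plugins annotation goes on the Ingress object rather than the Service. A sketch, assuming a KongPlugin named rl-by-ip already exists (we create exactly such a resource in the next step):

```yaml
# Sketch: attaching a plugin to an Ingress instead of a Service.
# Requires a KongPlugin named "rl-by-ip" in the same namespace.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    konghq.com/plugins: rl-by-ip
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
```

With this placement, the plug-in only runs for requests that matched this Ingress, not for every request reaching the echo service.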
Create a KongPlugin resource:
$ echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-ip
config:
  minute: 5
  limit_by: ip
  policy: local
plugin: rate-limiting
" | k3s kubectl apply -f -
kongplugin.configuration.konghq.com/rl-by-ip created
Next, apply the konghq.com/plugins annotation to the K3s service that needs rate limiting:
$ k3s kubectl patch svc echo -p '{"metadata":{"annotations":{"konghq.com/plugins": "rl-by-ip\n"}}}'
service/echo patched
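In declarative form, the patch above is equivalent to carrying the annotation in the Service’s metadata. A sketch, showing only the metadata (the rest of the echo Service’s spec is unchanged):

```yaml
# Equivalent declarative form of the kubectl patch: the annotation
# tells the Kong Ingress Controller to run the rl-by-ip plugin
# for all traffic proxied to this Service.
apiVersion: v1
kind: Service
metadata:
  name: echo
  annotations:
    konghq.com/plugins: rl-by-ip
# spec unchanged from the original echo Service
```

Keeping the annotation in the manifest (rather than patching live objects) makes the plug-in binding survive re-applies of the Service.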
Now, any request sent to the service will be protected by the rate limit that Kong enforces:
$ curl -i $PROXY_IP/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Connection: keep-alive
Date: Mon, 29 Jun 2020 20:35:40 GMT
Server: echoserver
X-RateLimit-Remaining-Minute: 4
X-RateLimit-Limit-Minute: 5
RateLimit-Remaining: 4
RateLimit-Limit: 5
RateLimit-Reset: 1
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 2
Via: kong/2.0.4
As this little exercise shows, the possibilities with K3s are endless, since you can add any plug-in to any Ingress path or service. You can find all the plug-ins on the Kong Hub. This comes in handy for home automation projects, where you can run K3s on a Raspberry Pi and extend it further with plug-ins.