Introduction

This article introduces how Kubernetes can use kube-router for pod-to-pod communication, service proxying, network policy isolation, and related functions.

Kube-router is a newer Kubernetes network plug-in that uses IPVS (LVS) for service proxying and load balancing, and iptables for network isolation policy. Deployment is simple: a single DaemonSet runs one pod on each node. It performs well, is easy to maintain, and supports pod-to-pod communication as well as service proxying.

Environment

This experiment is conducted on a Kubernetes cluster that has already been installed and configured. For Kubernetes installation, refer to other articles on this blog.

Experimental architecture

lab1: master 11.11.11.111
lab2: node   11.11.11.112
lab3: node   11.11.11.113

Installation

# This experiment uses a recreated cluster; an environment previously used to test other
# network plug-ins did not work, possibly due to leftover interference, so take care when reproducing

# Create a kube-router directory and download the related files
mkdir kube-router && cd kube-router
rm -f generic-kuberouter-all-features.yaml
wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/generic-kuberouter-all-features.yaml

# Enable all features: pod networking, network isolation policy, and service proxy
# CLUSTERCIDR matches the kube-controller-manager startup parameter --cluster-cidr
# APISERVER matches the kube-apiserver startup parameter --advertise-address
CLUSTERCIDR='10.244.0.0/16'
APISERVER='https://11.11.11.111:6443'
sed -i "s;%APISERVER%;$APISERVER;g" generic-kuberouter-all-features.yaml
sed -i "s;%CLUSTERCIDR%;$CLUSTERCIDR;g" generic-kuberouter-all-features.yaml
kubectl apply -f generic-kuberouter-all-features.yaml
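The manifest uses `%APISERVER%` and `%CLUSTERCIDR%` placeholders, and `;` is chosen as the sed delimiter because the API server URL itself contains `/`. A self-contained sketch of the substitution, using an invented mini-template rather than the real manifest:

```shell
# Hypothetical mini-template (not the real manifest) illustrating the
# placeholder substitution; ';' is the sed delimiter since the URL has '/'.
cat > /tmp/kr-demo.yaml <<'EOF'
args:
- --master=%APISERVER%
- --cluster-cidr=%CLUSTERCIDR%
EOF
APISERVER='https://11.11.11.111:6443'
CLUSTERCIDR='10.244.0.0/16'
sed -i "s;%APISERVER%;$APISERVER;g" /tmp/kr-demo.yaml
sed -i "s;%CLUSTERCIDR%;$CLUSTERCIDR;g" /tmp/kr-demo.yaml
cat /tmp/kr-demo.yaml
```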

# remove kube-proxy
kubectl -n kube-system delete ds kube-proxy

# Execute on each node
# For a binary installation, use the following command
systemctl stop kube-proxy

# Execute on each node
# Delete rules left by kube-proxy
docker run --privileged --net=host registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.10.2 kube-proxy --cleanup

# check
kubectl get pods -n kube-system
kubectl get svc -n kube-system

Test

Please install and configure kube-dns or CoreDNS before testing.

# Start a Deployment for testing
kubectl run nginx --replicas=2 --image=nginx:alpine --port=80
kubectl expose deployment nginx --type=NodePort --name=example-service-nodeport
kubectl expose deployment nginx --name=example-service

# check
kubectl get pods -o wide
kubectl get svc -o wide

# DNS and access tests
kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes
nslookup example-service
curl example-service
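The short names above resolve via the pod's DNS search domains. Assuming the default cluster domain `cluster.local` and the `default` namespace (both assumptions about this cluster's configuration), they expand roughly as sketched below:

```shell
# Sketch of the fully qualified names the short service names expand to,
# assuming the default "cluster.local" domain and "default" namespace.
ns=default
domain=cluster.local
for svc in kubernetes example-service; do
  echo "$svc -> $svc.$ns.svc.$domain"
done
```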

# clean up
kubectl delete svc example-service example-service-nodeport
kubectl delete deploy nginx curl

Network Isolation Policy

Deploy the application

# Create the production and staging namespaces
kubectl create namespace production
kubectl create namespace staging

# Deploy a set of services in each namespace
cd kube-router
wget https://raw.githubusercontent.com/mgxian/istio-test/master/service/node/v1/node-v1.yml
wget https://raw.githubusercontent.com/mgxian/istio-test/master/service/go/v1/go-v1.yml
kubectl apply -f node-v1.yml -n production
kubectl apply -f go-v1.yml -n production
kubectl apply -f node-v1.yml -n staging
kubectl apply -f go-v1.yml -n staging

# check status
kubectl get pods --all-namespaces -o wide

Test pod communication

# Get the relevant pod information
PRODUCTION_NODE_NAME=$(kubectl get pods -n production | grep Running | grep service-node | awk '{print $1}')
STAGING_NODE_NAME=$(kubectl get pods -n staging | grep Running | grep service-node | awk '{print $1}')
PRODUCTION_GO_IP=$(kubectl get pods -n production -o wide | grep Running | grep service-go | awk '{print $6}')
STAGING_GO_IP=$(kubectl get pods -n staging -o wide | grep Running | grep service-go | awk '{print $6}')
echo $PRODUCTION_NODE_NAME $PRODUCTION_GO_IP
echo $STAGING_NODE_NAME $STAGING_GO_IP
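The pipelines above pick the pod name from column 1 and the pod IP from column 6 of `kubectl get pods -o wide`. A self-contained sketch with simulated output (the pod names and IPs below are invented for illustration):

```shell
# Simulated `kubectl get pods -o wide` output (invented names/IPs) showing
# what the grep/awk pipelines extract: column 1 = name, column 6 = pod IP.
sample='NAME READY STATUS RESTARTS AGE IP NODE
service-go-abc12 1/1 Running 0 5m 10.244.1.7 lab2
service-node-de345 1/1 Running 0 5m 10.244.2.9 lab3'
echo "$sample" | grep Running | grep service-go | awk '{print $6}'   # pod IP
echo "$sample" | grep Running | grep service-node | awk '{print $1}' # pod name
```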


# Pod communication within the same namespace
kubectl exec -it $PRODUCTION_NODE_NAME --namespace=production -- ping -c4 $PRODUCTION_GO_IP 
kubectl exec -it $STAGING_NODE_NAME --namespace=staging -- ping -c4 $STAGING_GO_IP 

# Pod communication across different namespaces
kubectl exec -it $PRODUCTION_NODE_NAME --namespace=production -- ping -c4 $STAGING_GO_IP
kubectl exec -it $STAGING_NODE_NAME --namespace=staging -- ping -c4 $PRODUCTION_GO_IP

# Conclusion: pods in any namespace can communicate with each other directly

Set a default policy and test

# Set the default policy to deny all inbound traffic
cat >default-deny.yml<<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
kubectl apply -f default-deny.yml -n production
kubectl apply -f default-deny.yml -n staging
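In the policy above, the empty `podSelector` selects every pod in the namespace, and listing `Ingress` under `policyTypes` with no `ingress` rules denies all inbound traffic. For contrast, a hypothetical inverse policy (not applied in this experiment) re-allows all ingress with a single empty rule:

```shell
# Hypothetical inverse of default-deny (not applied in this experiment):
# one empty ingress rule matches all sources, re-allowing all inbound traffic.
cat >allow-all.yml<<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
spec:
  podSelector: {}
  ingress:
  - {}
EOF
cat allow-all.yml
```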

# Test communication
# Pod communication within the same namespace
kubectl exec -it $PRODUCTION_NODE_NAME --namespace=production -- ping -c4 $PRODUCTION_GO_IP 
kubectl exec -it $STAGING_NODE_NAME --namespace=staging -- ping -c4 $STAGING_GO_IP 

# Pod communication across different namespaces
kubectl exec -it $PRODUCTION_NODE_NAME --namespace=production -- ping -c4 $STAGING_GO_IP
kubectl exec -it $STAGING_NODE_NAME --namespace=staging -- ping -c4 $PRODUCTION_GO_IP

# Conclusion: no pods can communicate with each other

Set allow rules

# Allow service-go to be accessed by service-node
cat >service-go-allow-service-node.yml<<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-go-allow-service-node
spec:
  podSelector:
    matchLabels:
      app: service-go
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: service-node
EOF
kubectl apply -f service-go-allow-service-node.yml -n production
kubectl apply -f service-go-allow-service-node.yml -n staging
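Note that a bare `podSelector` under `from` only matches pods in the policy's own namespace, which is why cross-namespace traffic stays blocked. If cross-namespace access were wanted, a `namespaceSelector` could be combined with the `podSelector` (a hypothetical variant, not applied here; combining both selectors in one `from` entry is supported on newer Kubernetes versions):

```shell
# Hypothetical variant (not applied in this experiment): an empty
# namespaceSelector matches all namespaces, so service-node pods from any
# namespace would be admitted.
cat >service-go-allow-any-ns.yml<<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-go-allow-any-ns
spec:
  podSelector:
    matchLabels:
      app: service-go
  ingress:
  - from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          app: service-node
EOF
cat service-go-allow-any-ns.yml
```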

# Allow service-node to be accessed on TCP port 80
cat >service-node-allow-tcp-80.yml<<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: service-node-allow-tcp-80
spec:
  podSelector:
    matchLabels:
      app: service-node
  ingress:
  - ports:
    - protocol: TCP
      port: 80
EOF
kubectl apply -f service-node-allow-tcp-80.yml -n production
kubectl apply -f service-node-allow-tcp-80.yml -n staging

# Test communication
# Pod communication within the same namespace
kubectl exec -it $PRODUCTION_NODE_NAME --namespace=production -- ping -c4 $PRODUCTION_GO_IP 
kubectl exec -it $STAGING_NODE_NAME --namespace=staging -- ping -c4 $STAGING_GO_IP 

# Pod communication across different namespaces
kubectl exec -it $PRODUCTION_NODE_NAME --namespace=production -- ping -c4 $STAGING_GO_IP
kubectl exec -it $STAGING_NODE_NAME --namespace=staging -- ping -c4 $PRODUCTION_GO_IP

# Test access through the service
PRODUCTION_GO_SVC=$(kubectl get svc -n production | grep service-go | awk '{print $3}')
STAGING_GO_SVC=$(kubectl get svc -n staging | grep service-go | awk '{print $3}')
echo $PRODUCTION_GO_SVC $STAGING_GO_SVC
curl $PRODUCTION_GO_SVC
curl $STAGING_GO_SVC

# Conclusion: with the rules applied, pods in the same namespace can communicate only as the policy allows,
# while pods in different namespaces cannot; only traffic permitted by a network policy rule gets through,
# and the isolation cannot be bypassed by going through a service

Clean up

# Deleting a namespace automatically deletes its related resources
kubectl delete ns production
kubectl delete ns staging

Reference documentation

  • https://github.com/cloudnativelabs/kube-router/blob/master/docs/generic.md
  • https://kubernetes.io/docs/concepts/services-networking/network-policies/
  • https://cloudnativelabs.github.io/post/2017-05-1-kube-network-policies/