MetalLB provides load-balancer (traffic steering) functionality for a Kubernetes cluster, offering both Layer2 mode and BGP mode. This article shows how MetalLB is used in different scenarios.
Installation in the experimental environment
Experimental environment
During the experiment, four virtual machines were used as test hosts. The list of VM hosts is as follows:
IP | Use |
---|---|
192.168.56.15 | Master node |
192.168.56.16 | Node |
192.168.56.17 | VM host running Quagga/Zebra to simulate a router that supports BGP |
192.168.56.18 | Node |
Two Nginx deployments are also created, as shown below:
Name | Description |
---|---|
nginxv1 | Nginx service, 3 replicas |
nginxv2 | Nginx service, 1 replica |
As shown in the table above, requests to the root path of the nginxv1 service return v1_1, v1_2, or v1_3, while requests to nginxv2 return v2. This makes it easy to verify the result after forwarding.
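The original deployment manifests are not shown here; a minimal sketch of equivalent Deployments (plain nginx images, with the per-replica v1_1/v1_2/v1_3 index pages omitted) could be created like this:

```bash
# Hypothetical recreation of the two test workloads; kubectl create deployment
# labels the Pods app=<name>, which matches the Service selectors used later.
kubectl create deployment nginxv1 --image=nginx
kubectl scale deployment nginxv1 --replicas=3

kubectl create deployment nginxv2 --image=nginx   # one replica by default
```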
MetalLB installation
Before installing MetalLB, on Kubernetes 1.14.2 or later with kube-proxy running in IPVS mode, you need to enable strictARP for kube-proxy by executing the following command:
kubectl edit configmap -n kube-system kube-proxy
Then add the following values in IPVS mode:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
strictARP: true
As shown in the snippet above, add strictARP: true if it is not present, or make sure the existing value is true. If the cluster was installed beforehand, you can delete the kube-proxy Pods with kubectl delete po so that the new configuration takes effect.
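A minimal sketch of that restart, assuming the default k8s-app=kube-proxy label used by kubeadm:

```bash
# Recreate the kube-proxy Pods so the edited ConfigMap (strictARP: true) is reloaded
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```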
With the preparation complete, install the MetalLB service. The manifests can be obtained from the official GitHub repository; the version used in this experiment is 0.9.5. The installation commands are as follows:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
After executing the above commands, you can view the MetalLB namespace and the components running in it. The details are as follows:
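A quick way to check is shown below; on a first install of the 0.9.x manifests a memberlist secret is also expected (skip that step if it already exists):

```bash
# Create the memberlist secret used by the v0.9.x speakers (first install only)
kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"

# Confirm the controller and speaker Pods are running
kubectl get pods -n metallb-system
```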
Virtual Router Installation
To verify BGP mode, a router that supports BGP is required. Common home routers do not support this protocol, so Quagga is used to simulate a BGP-capable router. In this article, yum is used for the installation. Quagga (including its zebra and bgpd daemons) was installed and configured as follows:
- Install Quagga
yum install quagga
- Set SELinux
setsebool -P zebra_write_config 1
- Copy the sample Zebra configuration and enable the service at startup
cp /usr/share/doc/quagga-0.99.22.4/zebra.conf.sample /etc/quagga/zebra.conf
systemctl start zebra
systemctl enable zebra
BGP installation procedure:
- Start the BGP service
cp /usr/share/doc/quagga-0.99.22.4/bgpd.conf.sample /etc/quagga/bgpd.conf
systemctl start bgpd
systemctl enable bgpd
systemctl status bgpd
- Configure the BGP neighbor nodes
vtysh
configure terminal
no router bgp 7675
router bgp 200
no auto-summary
no synchronization
network 10.1.245.0/24
neighbor 192.168.56.15 remote-as 100
neighbor 192.168.56.15 description "provider 15"
neighbor 192.168.56.16 remote-as 100
neighbor 192.168.56.16 description "provider 16"
The configuration above changes the local AS number from the default 7675 to 200, and declares 192.168.56.15 and 192.168.56.16 as neighbors in remote AS 100 (the AS number the MetalLB speakers will use).
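As an optional sanity check, Quagga's running configuration can be dumped to confirm the settings were applied:

```bash
# Show the running BGP configuration on the router host (192.168.56.17)
vtysh -c 'show running-config'
```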
Layer2 mode
Layer2 mode is simple both to configure and in how it works. In Layer2 mode, no protocol-specific configuration is required; you only need to configure an IP address pool. It exposes the service by responding to ARP requests for the service IP. For this reason, Layer2 mode is generally limited to the local network segment, and the exposed service is unreachable from beyond the local switch.
If Layer2 mode is used, the ConfigMap configuration information is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.56.20-192.168.56.30
Once created, create the Service:
apiVersion: v1
kind: Service
metadata:
name: appv1-lb
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginxv1
type: LoadBalancer
#type: ClusterIP
Once created, run kubectl get svc. The resulting Service looks like this:
In the figure above, Kubernetes assigns both a ClusterIP and an External-IP to the LoadBalancer Service. The External-IP is allocated by MetalLB, and a NodePort is also exposed. The IPVS rules are as follows:
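These rules can be inspected on a node with ipvsadm (assuming the ipvsadm tool is installed there):

```bash
# List the IPVS virtual servers and their real-server (Pod) backends
ipvsadm -Ln
```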
As the two figures above show, kube-proxy creates forwarding rules for both the ClusterIP and the External-IP, and those rules forward directly to the Pods rather than going through the NodePort. Issue the following command:
for ((i=1; i<=10; i++)); do curl "http://192.168.56.20/"; done
The following information is displayed:
Then run ip neigh on 192.168.56.17 to view the ARP result, and run ip addr show dev enp0s8 on 192.168.56.18. The output is as follows:
In the preceding figure, 192.168.56.20 has been resolved to 192.168.56.18.
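As an extra check from any other host on the same Layer2 segment, arping shows which MAC address answers for the service IP (enp0s8 is the interface name used in this environment; adjust as needed):

```bash
# Ask who answers ARP for the Layer2 service IP; the reply should come from 192.168.56.18
arping -I enp0s8 -c 3 192.168.56.20
```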
BGP mode
Simple configuration
To configure BGP mode, you need a router that supports BGP and an IP address segment that does not conflict with the existing network. Before configuring, make sure you know:
- the IP address of the router MetalLB should connect to;
- the router's AS number;
- the AS number MetalLB should use;
- the IP address range, expressed as a CIDR prefix.
The installation of the simulated software router was described in the experimental environment section above. After the configuration is complete, run the following command on the router (in vtysh) to view the active AS information:
show ip bgp summary
Modify the MetalLB configuration for BGP mode as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.56.17
      peer-asn: 200
      my-asn: 100
    address-pools:
    - name: default
      protocol: bgp
      avoid-buggy-ips: true
      addresses:
      - 10.1.245.0/24
As shown in the preceding configuration, peer-address is the address of the remote peer, peer-asn is the peer's AS number, and my-asn is the AS number MetalLB itself uses when peering. In addition, avoid-buggy-ips: true means that addresses ending in .0 and .255 are not allocated. After the configuration is created, create the Service and run the following command to view the result:
kubectl get svc
The following information is displayed.
You can see that the external-IP address has switched to the virtual IP address defined by BGP. In this case, run the following command on 192.168.56.17 to view BGP synchronization messages:
show ip bgp summary
Run the following command on host 192.168.56.17 to view the forwarding result:
for ((i=1; i<=10; i++)); do curl "http://10.1.245.1/"; done
As shown above, the service is already accessible through the virtual IP. To check the IPVS state, run the following command:
ipvsadm -Ln
As shown in the figure above, kube-proxy forwards traffic arriving at the External-IP directly to specific Pods: MetalLB's role is to attract traffic to the host, and IPVS then performs the actual forwarding.
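To confirm on the router side that the service address really was learned via BGP, the BGP routes can also be inspected in Quagga:

```bash
# On 192.168.56.17: list routes learned through BGP; the service IP should point at the cluster node(s)
vtysh -c 'show ip route bgp'
```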
Node selection
When using BGP, you may want to restrict which nodes peer with the router, for example for security reasons, to limit the impact of a node failure, or to keep traffic balanced across a fixed set of machines. This can be done by pinning peering to fixed nodes or to nodes carrying particular labels. MetalLB supports two selection methods, match-labels and match-expressions. The configuration information is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.56.17
      peer-asn: 200
      my-asn: 100
      node-selectors:
      - match-labels:
          rack: frontend
        match-expressions:
        - key: network-speed
          operator: NotIn
          values: [slow]
      - match-expressions:
        - key: kubernetes.io/hostname
          operator: In
          values: [hostA, hostB]
    address-pools:
    - name: default
      protocol: bgp
      avoid-buggy-ips: true
      addresses:
      - 10.1.245.0/24
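One practical note on the example above: labels such as rack or network-speed are not present on nodes by default, so a custom label has to be applied before the selector can match it, for example:

```bash
# Label a node so the rack=frontend selector above can match it
# (the node name 192.168.56.16 follows the hostnames used elsewhere in this article)
kubectl label node 192.168.56.16 rack=frontend
kubectl get nodes --show-labels
```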
As shown above, the node-selectors field controls node selection and offers two options, match-labels and match-expressions. The following takes match-expressions as an example, selecting a node by hostname. The ConfigMap content is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.56.17
      peer-asn: 200
      my-asn: 100
      node-selectors:
      - match-expressions:
        - key: kubernetes.io/hostname
          operator: In
          values: [192.168.56.16]
    address-pools:
    - name: default
      protocol: bgp
      avoid-buggy-ips: true
      addresses:
      - 10.1.245.0/24
As shown above, the 192.168.56.16 node is used as the externally exposed node. Re-create the Service to see the result, then run show ip bgp summary on the router to check the synchronization status:
As shown in the figure above, both neighbors are active, but only 192.168.56.16 advertises the route.
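Which node is doing the announcing can also be cross-checked from the speaker logs (assuming the component=speaker label from the v0.9 manifests):

```bash
# Show recent logs from the MetalLB speaker Pods; the announcing node logs the BGP advertisement
kubectl logs -n metallb-system -l component=speaker --tail=50
```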
Configure multiple IP address pools
In many scenarios it is useful to configure multiple IP address pools; the pool a Service uses can then be selected through an annotation on the Service. In this way, Layer2 mode and BGP mode can coexist. In the example below, the default pool uses BGP, and each Service decides which pool to use.
Take a look at the configuration file, which looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.56.17
      peer-asn: 200
      my-asn: 100
      node-selectors:
      - match-expressions:
        - key: kubernetes.io/hostname
          operator: In
          values: [192.168.56.16]
    address-pools:
    - name: default
      protocol: bgp
      avoid-buggy-ips: true
      addresses:
      - 10.1.245.0/24
    - name: layer20-30
      protocol: layer2
      addresses:
      - 192.168.56.20-192.168.56.30
As shown above, there are two address pools, default and layer20-30. The default pool uses BGP with the virtual range 10.1.245.0/24, while the layer20-30 pool uses Layer2 mode with the address range 192.168.56.20-192.168.56.30. Next, create two Services as follows:
Service1:
apiVersion: v1
kind: Service
metadata:
name: appv1-lb
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginxv1
type: LoadBalancer
Service2:
apiVersion: v1
kind: Service
metadata:
name: appv1-1-lb
annotations:
metallb.universe.tf/address-pool: layer20-30
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: nginxv1
type: LoadBalancer
As shown above, two Services are created, appv1-lb and appv1-1-lb. appv1-lb does not specify an address pool and therefore uses the default pool, while appv1-1-lb selects the layer20-30 pool through the metallb.universe.tf/address-pool annotation. Run the following command to check the Service status:
kubectl get svc
The following information is displayed:
As shown in the preceding figure, appv1-1-lb is assigned 192.168.56.20, which belongs to the layer20-30 pool, while appv1-lb is allocated an IP from the default address pool.
Conclusion
Over the course of the experiment, MetalLB demonstrated both Layer2 and BGP modes for steering traffic into the cluster; once traffic reaches a host, IPVS performs the actual load balancing.
During the experiment, it was also observed that configuration changes were sometimes not reloaded and applied by the controller and speaker, even though the ConfigMap itself had changed. This is an issue that still needs to be looked into.