1. Install Helm

1.1. Install Helm

  • Project address: github.com/helm/helm
  • Installation:
# download
wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz
# unpack
tar zxvf helm-v3.6.1-linux-amd64.tar.gz
# install
mv linux-amd64/helm /usr/local/bin/
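To verify that Helm is on the PATH, check the client version (output will reflect the version you downloaded):
helm version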

1.2. Basic Command Reference

# update the chart repository indexes
helm repo update
# view the currently installed charts
helm list -A
# install / uninstall / upgrade a release
helm install
helm uninstall
helm upgrade

2. Install RabbitMQ

2.1. Download the Chart package

# add the bitnami repository
helm repo add bitnami https://charts.bitnami.com/bitnami
# query the chart
helm search repo bitnami
# create a working directory
mkdir -p ~/test/rabbitmq
cd ~/test/rabbitmq
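The chart itself still needs to be pulled into the working directory; a minimal sketch, assuming the latest chart version from the bitnami repository is acceptable:
# download and unpack the chart into ./rabbitmq
helm pull bitnami/rabbitmq --untar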

2.2. Set parameters

2.2.1. Edit the configuration file

Official configuration reference: github.com/bitnami/cha…

  • Go to the working directory and configure persistent storage and the number of replicas
  • It is recommended to edit values.yaml directly for the first deployment rather than passing options with --set, so that the same settings do not have to be repeated on every upgrade.
cd  ~/test/rabbitmq/rabbitmq
vim values.yaml

2.2.2. Set the administrator password

  • Method 1: Specify it in the configuration
auth:
  username: admin
  password: "admin@mq"
  existingPasswordSecret: ""
  erlangCookie: secretcookie

  • Method 2: Specify the password with --set during installation (so it is not written into values.yaml)
--set auth.username=admin,auth.password=admin@mq,auth.erlangCookie=secretcookie

2.2.3. Enable forceBoot

If all RabbitMQ pods go down at the same time, the cluster cannot restart on its own. Therefore, clustering.forceBoot must be enabled in advance.

clustering:
  enabled: true
  addressType: hostname
  rebalance: false
  forceBoot: true

2.2.4. Simulate RabbitMQ cluster downtime (optional, can be skipped)

  • With clustering.forceBoot not set, deleting all pods of the RabbitMQ cluster leaves the first node not ready when the cluster restarts

  • The following error message is displayed:

  • After enabling clustering.forceBoot and upgrading the RabbitMQ release, the cluster restarts properly
helm upgrade rabbitmq -n test .
kubectl delete pod -n test rabbitmq-0
kubectl get pod -n test -w

2.2.5. Specify the time zone

extraEnvVars: 
  - name: TZ
    value: "Asia/Shanghai"

2.2.6. Specify the number of replicas

replicaCount: 3

2.2.7. Set persistent storage

  • If persistence is not required, set enabled to false
  • Persistence requires block storage. This article uses the AWS EBS CSI driver to create the storageClass; a self-built block-storage storageClass also works

Note: the storageClass should preferably support volume expansion (see the sketch after the persistence configuration below)

persistence:
  enabled: true
  storageClass: "ebs-sc"
  selector: {}
  accessMode: ReadWriteOnce
  existingClaim: ""
  size: 8Gi
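For reference, a minimal StorageClass sketch with volume expansion enabled, assuming the AWS EBS CSI driver is installed (the name ebs-sc and the parameters are illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true   # allows PVCs to be resized later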

2.2.8. Set up the service

  • By default, a ClusterIP service exposes ports such as 5672 (AMQP) and 15672 (web management interface) for use inside the cluster. External access methods are explained in chapter 3
  • Configuring a NodePort in values.yaml is not recommended; the default service section looks roughly like the sketch below
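For orientation, the relevant part of values.yaml looks roughly like this sketch; field names differ between chart versions, so check the values file you actually pulled:
service:
  type: ClusterIP
  port: 5672          # AMQP
  managerPort: 15672  # web management interface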

2.3. Deploy RabbitMQ

2.3.1. Creating a namespace

cd  ~/test/rabbitmq/rabbitmq
kubectl create ns test 

2.3.2. Install

  • Method 1: Specify the administrator account password in the configuration file
helm install rabbitmq -n test .

  • Method 2: Specify the password with --set
helm install rabbitmq -n test . \
  --set auth.username=admin,auth.password=admin@mq,auth.erlangCookie=secretcookie

2.3.3. Check the RabbitMQ installation status

  • View the RabbitMQ installation progress
kubectl get pod -n test -w

  • After all nodes are started, view the SVC
kubectl get svc -n test

Currently, RabbitMQ is exposed through ClusterIP for internal cluster access. External access is described in the next chapter.

  • Viewing Cluster Status
kubectl exec -it -n test rabbitmq-0 -- bash
# view the cluster status
rabbitmqctl cluster_status
# rename the cluster if needed
rabbitmqctl set_cluster_name [cluster_name]

3. Configure the RabbitMQ cluster external access mode

3.1. Suggested methods

  • It is not recommended to specify a NodePort during the default installation; create the exposure resources separately instead
  • 5672: recommended to expose through a Service with a private network load balancer to other applications on the private network
  • 15672: recommended to expose through an Ingress or a Service with a public network load balancer for external access

| Port  | Exposure mode (see Mode 3 below)                                    | Access                                                                   |
| ----- | ------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| 5672  | Service-LoadBalancer (configured as a private network load balancer) | Inside K8s: rabbitmq.test:5672; on the private network: <private LB IP>:5672 |
| 15672 | Ingress-ALB (configured as a public network load balancer)           | URL of the public network load balancer                                  |

Note: This article uses an Amazon-managed Kubernetes cluster (EKS) with the aws-load-balancer-controller already installed and configured
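Before creating any external exposure, in-cluster reachability of the ClusterIP service can be checked from a throwaway pod; a hedged example that simply fetches the management UI page with busybox's wget:
kubectl run rabbitmq-test -n test --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://rabbitmq.test.svc.cluster.local:15672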

3.2. Mode 1: Service-NodePort (5672, 15672)

  • Export the YAML of the original ClusterIP service:
cd ~/test/rabbitmq
kubectl get svc -n test rabbitmq -o yaml > service-clusterip.yaml
  • Copy service-clusterip.yaml to create service-nodeport.yaml:
cp service-clusterip.yaml service-nodeport.yaml
  • Configure service-nodeport.yaml (remove the redundant fields):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-nodeport
  namespace: test
spec:
  ports:
  - name: amqp
    port: 5672
    protocol: TCP
    targetPort: amqp
    nodePort: 32672
  - name: http-stats
    port: 15672
    protocol: TCP
    targetPort: stats
    nodePort: 32673
  selector:
    app.kubernetes.io/instance: rabbitmq
    app.kubernetes.io/name: rabbitmq
  type: NodePort
  • Create a service
kubectl apply -f service-nodeport.yaml
kubectl get svc -n test

  • The service can now be accessed via NodeIP:NodePort, for example:
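With the nodePort values configured above (replace <node-ip> with the reachable IP of any worker node):
# web management interface
curl http://<node-ip>:32673
# AMQP clients connect to <node-ip>:32672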

3.3. Mode 2: Service - public network LoadBalancer (5672, 15672)

  • Create service-loadbalancer.yaml:
vim service-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-loadbalance
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
  - name: amqp
    port: 5672
    protocol: TCP
    targetPort: amqp
  - name: http-stats
    port: 15672
    protocol: TCP
    targetPort: stats
  selector:
    app.kubernetes.io/instance: rabbitmq
    app.kubernetes.io/name: rabbitmq
  type: LoadBalancer
  • Create a service:
kubectl apply -f service-loadbalancer.yaml
kubectl get svc -n test
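Once AWS has provisioned the load balancer, its address can be read from the service status (on AWS the address appears in the hostname field):
kubectl get svc -n test rabbitmq-loadbalance \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'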

3.4. Mode 3: Service - private network LoadBalancer (5672) + Ingress - public network ALB (15672)

3.4.1. Create the Service - private network LoadBalancer

vim service-lb-internal.yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-lb-internal
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing   # commented out so the load balancer stays on the private network
spec:
  ports:
  - name: amqp
    port: 5672
    protocol: TCP
    targetPort: amqp
  selector:
    app.kubernetes.io/instance: rabbitmq
    app.kubernetes.io/name: rabbitmq
  type: LoadBalancer
kubectl apply -f service-lb-internal.yaml

3.4.2. Create the Ingress - public network ALB

vim ingress-alb.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rabbitmq
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app: rabbitmq
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "rabbitmq"
              servicePort: 15672
kubectl apply -f ingress-alb.yaml
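Once the controller has reconciled the Ingress, the assigned ALB address can be checked with:
kubectl get ingress -n test rabbitmq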

4. Configure the mirroring mode to implement high availability

4.1. Mirroring Mode

Mirrored mode: queues that need to be consumed are mirrored across multiple nodes to achieve RabbitMQ high availability. Message bodies are actively synchronized between mirror nodes, rather than being fetched on demand when a consumer reads them, as in normal mode. The drawback is that this synchronization consumes considerable network bandwidth inside the cluster.

4.2. Set the mirroring mode with rabbitmqctl

kubectl exec -it -n test rabbitmq-0 -- bash
# list current policies
rabbitmqctl list_policies
# mirror all queues to all nodes with automatic synchronization
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all", "ha-sync-mode":"automatic"}'
# list policies again to confirm
rabbitmqctl list_policies

The policy can also be viewed in the management console.

5. Clean up the RabbitMQ cluster

5.1. Uninstall RabbitMQ

helm uninstall rabbitmq -n test

5.2. Delete the PVC

kubectl delete pvc -n test data-rabbitmq-0 data-rabbitmq-1 data-rabbitmq-2

5.3. Clear manually created Services and ingress

kubectl delete -f service-nodeport.yaml
kubectl delete -f service-loadbalancer.yaml
kubectl delete -f ingress-alb.yaml