Background

Since all of the company's internal services run on Alibaba Cloud Kubernetes (K8s), the IP that Dubbo providers report to the registry is, by default, the Pod IP. That means Dubbo services cannot be invoked from a network environment outside the K8s cluster. When local development needs to call a Dubbo provider running in K8s, the service has to be manually exposed to the external network. We did this by exposing an SLB IP plus a custom port for each provider service, and using the DUBBO_IP_TO_REGISTRY and DUBBO_PORT_TO_REGISTRY environment variables provided by Dubbo to register that SLB IP and port in the registry, thereby connecting the local network with the Dubbo services in K8s. However, this approach is painful to manage: every service needs its own port, ports must not conflict across services, and once there are many services it becomes very hard to keep track of.
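Concretely, the old approach meant adding something like the following to every provider's Deployment (a sketch only: the image name, the SLB IP 47.100.1.2, and port 21001 are placeholders, not real values):

```yaml
# Hypothetical fragment: each provider registers its own SLB IP + a unique port.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-rpc
spec:
  selector:
    matchLabels:
      app: user-rpc
  template:
    metadata:
      labels:
        app: user-rpc
    spec:
      containers:
        - name: user-rpc
          image: user-rpc:latest        # placeholder image
          env:
            - name: DUBBO_IP_TO_REGISTRY
              value: "47.100.1.2"       # SLB public IP (placeholder)
            - name: DUBBO_PORT_TO_REGISTRY
              value: "21001"            # manually assigned, must be unique per service
```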

So I wondered whether I could build a layer-7 proxy with virtual domain names, like Nginx Ingress, so that a single port could be multiplexed and requests forwarded based on the target Dubbo provider's application.name. Then every service would only need to register the same SLB IP and port, which would be far more convenient. After some investigation I found it was feasible, so I got straight to coding!

The project is open source: github.com/monkeyWie/d…

Technical pre-research

Approach

  1. First, Dubbo RPC calls use the Dubbo protocol by default, so I needed to check whether the protocol carries any information that can be used for forwarding, i.e. something similar to the Host request header in the HTTP protocol. If it does, I can implement a reverse proxy and virtual domain names based on that information, and on top of that build an nginx-like Dubbo gateway.
  2. The second step is to implement a dubbo ingress controller that uses the K8s Ingress watcher mechanism to dynamically update the Dubbo gateway's virtual-domain forwarding configuration. Every provider service is then forwarded through the gateway, and the address each one reports to the registry is the same gateway address.

Architecture diagram

The Dubbo protocol

Here’s the official protocol:

As you can see, the Dubbo protocol header is fixed at 16 bytes. It contains no extensible fields like HTTP headers, nor does it carry the target provider's application.name, so I opened an issue with the Dubbo team. The official reply was to add the target provider's application.name to the attachments via a custom consumer-side Filter. I have to grumble a little about the Dubbo protocol here: its extensible fields are actually carried in the body, so the proxy must decode the body to route. Still, since this is mainly for the development environment, that is acceptable.

k8s ingress

K8s Ingress was designed for HTTP, but its fields are sufficient for our purposes.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: user-rpc-dubbo
  annotations:
    kubernetes.io/ingress.class: "dubbo"
spec:
  rules:
    - host: user-rpc
      http:
        paths:
          - backend:
              serviceName: user-rpc
              servicePort: 20880
            path: /

The forwarding rules are configured the same way as for HTTP, except that host is set to the target provider's application.name and the backend is the target provider's corresponding Service. One special detail is the kubernetes.io/ingress.class annotation, which specifies which ingress controller an Ingress applies to; our Dubbo Ingress Controller only processes Ingress resources whose annotation value is dubbo.

Development

All went well with the pre-technical research, and then we entered the development phase.

Consumer-side custom Filter

As mentioned above, each request must carry the target provider's application.name, so the consumer defines a Filter as follows:

@Activate(group = CONSUMER)
public class AddTargetFilter implements Filter {

  @Override
  public Result invoke(Invoker<?> invoker, Invocation invocation) throws RpcException {
    String targetApplication = StringUtils.isBlank(invoker.getUrl().getRemoteApplication()) ?
        invoker.getUrl().getGroup() : invoker.getUrl().getRemoteApplication();
    // The target provider's application.name goes into the attachments
    invocation.setAttachment("target-application", targetApplication);
    return invoker.invoke(invocation);
  }
}

Note that the first time a Dubbo consumer calls a service it issues a metadata request, for which invoker.getUrl().getRemoteApplication() returns nothing; in that case the name is obtained via invoker.getUrl().getGroup() instead.

Dubbo gateway

The next step is to develop an nginx-like Dubbo gateway that performs layer-7 proxying and virtual-domain forwarding. I chose Go as the language: network programming in Go carries little mental overhead, the dubbo-go project provides a decoder I can use directly, and Go has first-class K8s SDK support. Perfect!

The idea is to start a TCP server, parse each Dubbo request packet, read the target-application attribute from the attachments, and then reverse-proxy the request to the real Dubbo provider. The core code is as follows:

routingTable := map[string]string{
  "user-rpc": "user-rpc:20880",
  "pay-rpc":  "pay-rpc:20880",
}

listener, err := net.Listen("tcp", ":20880")
if err != nil {
  return err
}
for {
  clientConn, err := listener.Accept()
  if err != nil {
    logger.Errorf("accept error:%v", err)
    continue
  }
  go func() {
    defer clientConn.Close()
    var proxyConn net.Conn
    defer func() {
      if proxyConn != nil {
        proxyConn.Close()
      }
    }()
    scanner := bufio.NewScanner(clientConn)
    scanner.Split(split)
    // Get one complete request at a time
    for scanner.Scan() {
      data := scanner.Bytes()
      // Deserialize []byte into a Dubbo request struct via the library provided by dubbo-go
      buf := bytes.NewBuffer(data)
      pkg := impl.NewDubboPackage(buf)
      pkg.Unmarshal()
      body := pkg.Body.(map[string]interface{})
      attachments := body["attachments"].(map[string]interface{})
      // Get the application.name of the target provider from the attachments
      target := attachments["target-application"].(string)
      if proxyConn == nil {
        // Reverse proxy to the real back-end service
        host := routingTable[target]
        proxyConn, _ = net.Dial("tcp", host)
        go func() {
          // Forward responses back to the client verbatim
          io.Copy(clientConn, proxyConn)
        }()
      }
      // Forward the original packet to the real back-end service
      proxyConn.Write(data)
    }
  }()
}

func split(data []byte, atEOF bool) (advance int, token []byte, err error) {
	if atEOF && len(data) == 0 {
		return 0, nil, nil
	}

	buf := bytes.NewBuffer(data)
	pkg := impl.NewDubboPackage(buf)
	err = pkg.ReadHeader()
	if err != nil {
		if errors.Is(err, hessian.ErrHeaderNotEnough) || errors.Is(err, hessian.ErrBodyNotEnough) {
			return 0, nil, nil
		}
		return 0, nil, err
	}
	if !pkg.IsRequest() {
		return 0, nil, errors.New("not request")
	}
	requestLen := impl.HEADER_LENGTH + pkg.Header.BodyLen
	if len(data) < requestLen {
		return 0, nil, nil
	}
	return requestLen, data[0:requestLen], nil
}

Dubbo Ingress Controller implementation

We now have a Dubbo gateway, but the routingTable is still hard-coded. The next step is to update this configuration dynamically whenever a K8s Ingress changes.

The implementation idea is borrowed from nginx-ingress-controller, which watches Ingress resources and, when a configuration change is detected, regenerates the nginx configuration and triggers nginx -s reload to reload it.

The core technology used here is Informers, which watch K8s resources for changes. Example code:

// Get the in-cluster K8s access configuration
cfg, err := rest.InClusterConfig()
if err != nil {
  logger.Fatal(err)
}
// Create a K8s SDK client instance
client, err := kubernetes.NewForConfig(cfg)
if err != nil {
  logger.Fatal(err)
}
// Create the informer factory
factory := informers.NewSharedInformerFactory(client, time.Minute)
handler := cache.ResourceEventHandlerFuncs{
  AddFunc: func(obj interface{}) {
    // Add event
  },
  UpdateFunc: func(oldObj, newObj interface{}) {
    // Update event
  },
  DeleteFunc: func(obj interface{}) {
    // Delete event
  },
}
// Watch for ingress changes
informer := factory.Extensions().V1beta1().Ingresses().Informer()
informer.AddEventHandler(handler)
informer.Run(ctx.Done())

Implementing these three event handlers keeps the forwarding configuration up to date. Each event carries the corresponding Ingress object, which is then processed like so:

ingress, ok := obj.(*v1beta12.Ingress)
if ok {
  // Only handle Ingresses annotated with the dubbo ingress class
  ingressClass := ingress.Annotations["kubernetes.io/ingress.class"]
  if ingressClass == "dubbo" && len(ingress.Spec.Rules) > 0 {
    rule := ingress.Spec.Rules[0]
    if len(rule.HTTP.Paths) > 0 {
      backend := rule.HTTP.Paths[0].Backend
      host := rule.Host
      service := fmt.Sprintf("%s:%d", backend.ServiceName+"."+ingress.Namespace, backend.ServicePort.IntVal)
      // Map the host in the Ingress to its Service and notify the dubbo gateway to update
      notify(host, service)
    }
  }
}

Providing a Docker image

All services on K8s run in containers, and this one is no exception, so the Dubbo Ingress Controller needs to be built into a Docker image. A multi-stage build keeps the image small:

FROM golang:1.17.3 AS builder
WORKDIR /src
COPY . .
ENV GOPROXY=https://goproxy.cn
ENV CGO_ENABLED=0
RUN go build -ldflags "-w -s" -o main cmd/main.go

FROM debian AS runner
ENV TZ=Asia/Shanghai
WORKDIR /app
COPY --from=builder /src/main .
RUN chmod +x ./main
ENTRYPOINT ["./main"]

Providing a YAML template

To access the K8s API from inside the cluster, the Pod needs to be authorized via K8s RBAC, and the controller is deployed as a Deployment. The final template is as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dubbo-ingress-controller
  namespace: default

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dubbo-ingress-controller
rules:
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dubbo-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dubbo-ingress-controller
subjects:
  - kind: ServiceAccount
    name: dubbo-ingress-controller
    namespace: default

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: dubbo-ingress-controller
  labels:
    app: dubbo-ingress-controller
spec:
  selector:
    matchLabels:
      app: dubbo-ingress-controller
  template:
    metadata:
      labels:
        app: dubbo-ingress-controller
    spec:
      serviceAccountName: dubbo-ingress-controller
      containers:
        - name: dubbo-ingress-controller
          image: liwei2633/dubbo-ingress-controller:0.0.1
          ports:
            - containerPort: 20880

If needed, it can be managed by Helm.

Afterword

With that, the Dubbo Ingress Controller is complete. Like the proverbial sparrow, it is small but has all the vital organs: it touches the Dubbo protocol, TCP, layer-7 proxying, K8s Ingress, Docker, and more. Much of this knowledge is essential as cloud native becomes ever more widespread, and I learned a great deal building it.

The full tutorial is available on Github.

Reference links:

  • Dubbo protocol
  • dubbo-go
  • Using multiple ingress controllers
  • Use Golang to customize Kubernetes Ingress Controller

This article was first published on my blog: monkeywie.cn, where I share practical knowledge of Java, Golang, front-end, Docker, K8s, and more from time to time.