Problem description

  1. Front-end cross-origin issues: when a page loads or asynchronously requests resources from an origin different from the page itself, the browser's same-origin policy blocks the request. A simple fix is to put an nginx service in front of the back end and add CORS-related headers to the responses.
  2. Certificate problem: when an HTTPS page loads HTTP resources, or resources without a valid certificate, the browser marks the page as not secure. You therefore need a certificate for the domain that serves the resources. Let's Encrypt issues certificates for free, but only for domains reachable from the public Internet; for an internal-only domain you must purchase a certificate and configure it on nginx.
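The same-origin policy mentioned above compares the scheme, host, and port of two URLs; a minimal Python sketch of that check (illustrative only, not part of the deployment):

```python
from urllib.parse import urlparse

def default_port(scheme: str) -> int:
    # Standard default ports for the two schemes we care about
    return {"http": 80, "https": 443}.get(scheme, 0)

def same_origin(url_a: str, url_b: str) -> bool:
    """Two URLs share an origin iff scheme, host, and port all match."""
    a, b = urlparse(url_a), urlparse(url_b)
    return (a.scheme, a.hostname, a.port or default_port(a.scheme)) == \
           (b.scheme, b.hostname, b.port or default_port(b.scheme))

# A page on http://example.com may not call http://api.example.com without
# CORS headers: a different host means a different origin.
print(same_origin("http://example.com/page", "http://example.com:80/api"))   # True
print(same_origin("http://example.com/page", "http://api.example.com/api"))  # False
```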

The following presents two solutions on Kubernetes: one using an Nginx Service, the other using an Nginx Ingress.

The solution

Solution 1: Use the Nginx Service

Deploy two groups of resources: the Service and Deployment of the web app, and nginx. Users access the nginx Service directly, and nginx forwards requests to the Service where the web app lives. The process is similar to a traditional nginx deployment; you only need to point proxy_pass at a domain name that the cluster's internal DNS can resolve, of the form my-svc.my-namespace.svc.cluster-domain.example (for example, app.default.svc.cluster.local). The structure is similar to the following:

This solution requires a separate nginx Deployment, and the configuration file has to be mounted into the pod from the local filesystem via a hostPath, which must be configured per host, so it cannot be deployed fully with one click.

The file layout of this solution is as follows:

app
├── nginx
│   ├── deploy.yaml
│   └── svc.yaml
├── app.conf      # nginx configuration file
└── app.yaml

Nginx configuration

  1. Nginx service and deployment
# app/nginx/deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        volumeMounts:
        - mountPath: /etc/nginx/conf.d/default.conf  # path of the nginx configuration file inside the pod
          name: nginx-volume
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-volume
        hostPath:
          path: /path/to/app/app.conf  # local path of the nginx configuration file; it is mounted at the volumeMounts path above, replacing /etc/nginx/conf.d/default.conf inside the pod

# app/nginx/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx  # must match the pod labels of the nginx Deployment above
  2. Nginx configuration
# app/app.conf
server {
	server_name _;
	listen 80;

	location / {
		# CORS headers. If the browser still reports errors about a
		# disallowed request header, add it to the
		# Access-Control-Allow-Headers list below
		add_header Access-Control-Allow-Origin *;
		add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
		add_header Access-Control-Allow-Headers 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization';

		# Address of the web app, following the cluster-internal DNS rule
		# my-svc.my-namespace.svc.cluster-domain.example
		proxy_pass http://app.default.svc.cluster.local:8018;
	}
}
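Note that browsers send a preflight OPTIONS request before non-simple cross-origin requests. If the upstream app does not answer OPTIONS itself, nginx can short-circuit the preflight; a sketch to merge into the location block above (not verified against your app):

```nginx
location / {
	# Answer CORS preflight requests directly with 204 so they
	# never reach the upstream app
	if ($request_method = 'OPTIONS') {
		add_header Access-Control-Allow-Origin *;
		add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
		add_header Access-Control-Allow-Headers 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization';
		add_header Access-Control-Max-Age 86400;  # let the browser cache the preflight for a day
		return 204;
	}
	...
}
```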

Configure the Web app

# app/app.yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
  - protocol: "TCP"
    port: 8018
    targetPort: 8018
  # Note: in an intranet environment a LoadBalancer cannot be assigned an
  # External IP (it stays pending), so type: LoadBalancer cannot be used;
  # NodePort can be used instead. Here the Service keeps the default
  # ClusterIP type, which is enough for nginx to reach it inside the cluster
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: app:prod
        ports:
        - containerPort: 8018

HTTPS Certificate Configuration

If the server has a public IP address, you can use Let's Encrypt to apply for a free certificate. The general process: run certbot on the server to obtain the .crt and .key files, then use Kubernetes volumes to mount them into the appropriate locations in the nginx container.
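A sketch of the certbot step (the standalone authenticator binds port 80, so stop anything listening there first; the paths below are certbot's defaults, and YOUR.DOMAIN is a placeholder):

```shell
# Request a certificate for the domain using certbot's standalone mode
certbot certonly --standalone -d YOUR.DOMAIN

# certbot writes the results under /etc/letsencrypt/live/YOUR.DOMAIN/:
#   fullchain.pem  -> the certificate (use as the .crt below)
#   privkey.pem    -> the private key (use as the .key below)
```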

# app/nginx/deploy.yaml
...
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        volumeMounts:
        - mountPath: /etc/nginx/conf.d/default.conf  # path of the nginx configuration file inside the pod
          name: nginx-volume
        - mountPath: /etc/ssl/YOUR_SSL.crt  # path of the .crt file inside the pod
          name: ssl-crt
        - mountPath: /etc/ssl/YOUR_DOMAIN_NAME.key  # path of the domain key file inside the pod
          name: domain-key
        ports:
        - containerPort: 80
        - containerPort: 443
      volumes:
      - name: nginx-volume
        hostPath:
          path: /path/to/app/app.conf  # local path of the nginx configuration file; mounted at the volumeMounts path above, replacing /etc/nginx/conf.d/default.conf inside the pod
      - name: ssl-crt
        hostPath:
          path: /path/to/YOUR_SSL.crt  # local path of the .crt file
      - name: domain-key
        hostPath:
          path: /path/to/YOUR_DOMAIN_NAME.key  # local path of the domain key file

The configuration of nginx is as follows:

server {
	listen 443 ssl;

	ssl_certificate /etc/ssl/YOUR_SSL.crt;
	ssl_certificate_key /etc/ssl/YOUR_DOMAIN_NAME.key;

	server_name YOUR.DOMAIN;
	location / {
		proxy_pass http://app.default.svc.cluster.local:8018;
	}
}

This approach also requires setting up automatic certificate renewal on the server, which certbot can handle as well.
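For example, a cron entry might look like the following (a sketch; `certbot renew` only renews certificates that are close to expiry, and after a renewal the nginx pods must be reloaded or restarted to pick up the new files):

```shell
# /etc/crontab entry: attempt renewal twice a day
17 3,15 * * * root certbot renew --quiet
```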

If you purchased the certificate yourself, you can mount and configure its .crt and .key files the same way.

Solution 2: Use the Nginx Ingress

With an Ingress, external requests first reach the Ingress, which then routes them to different Services. This removes the need for a separate nginx Deployment. The structure is as follows:

The file layout of this solution is as follows:

app
├── cloud-generic.yaml
├── mandatory.yaml
├── ingress.yaml
├── deploy.yaml
└── svc.yaml

Configure ingress-nginx

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml

The ingress-nginx Service defined in cloud-generic.yaml uses type: LoadBalancer. If an External IP cannot be allocated in your intranet environment, change it to type: NodePort.
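A sketch of that change in cloud-generic.yaml (field names follow the standard Service spec; the rest of the file stays unchanged):

```yaml
# cloud-generic.yaml (excerpt)
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort  # was LoadBalancer; NodePort exposes the ingress on every node
  ...
```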

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-7dcc95dfbf-zxmc4   1/1     Running   0          11h
$ kubectl exec -it nginx-ingress-controller-7dcc95dfbf-zxmc4 -n ingress-nginx bash
www-data@nginx-ingress-controller-7dcc95dfbf-zxmc4:/etc/nginx$ cat nginx.conf   # shows the default nginx configuration

Configure the Ingress. The cross-origin nginx settings are applied through annotations on the Ingress:

# app/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '30'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '500'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '500'
    nginx.ingress.kubernetes.io/send-timeout: "500"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "*"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app
        backend:
          serviceName: app
          servicePort: 8018

After applying ingress.yaml, check the nginx configuration inside the nginx-ingress pod again:

# nginx.conf
## start server _
server {
	server_name _ ;
	listen 80 default_server reuseport backlog=511 ;
	listen 443 default_server reuseport backlog=511 ssl http2 ;
	...
	location /app {
		set $namespace      "default";
		set $ingress_name   "app-ingress";
		set $service_name   "";
		set $service_port   "";
		set $location_path  "/app";
		...
		more_set_headers 'Access-Control-Allow-Origin: *';
		more_set_headers 'Access-Control-Allow-Credentials: true';
		more_set_headers 'Access-Control-Allow-Methods: GET, PUT, POST, DELETE, PATCH, OPTIONS';
		more_set_headers 'Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization';
	}
	...
}
## end server _

You can see that the CORS-related headers have been added for /app. If additional locations are needed, configure them under spec.rules[].http.paths in ingress.yaml.
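For example, exposing a second service would add another entry under paths (the api name and port 8080 here are hypothetical placeholders):

```yaml
spec:
  rules:
  - http:
      paths:
      - path: /app
        backend:
          serviceName: app
          servicePort: 8018
      - path: /api           # hypothetical second location
        backend:
          serviceName: api   # hypothetical Service name
          servicePort: 8080
```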

The web app's deploy.yaml and svc.yaml are the same as in the previous solution.

HTTPS Certificate Configuration

To configure HTTPS certificates on the Ingress, use cert-manager, a Kubernetes-native certificate management tool. It supports free issuers such as Let's Encrypt as well as purchased certificates. For details, see How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.
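A minimal sketch of the cert-manager pieces (field names follow cert-manager's v1 API, which may be newer than the manifests above; the email, issuer name, and secret name are placeholders):

```yaml
# A ClusterIssuer that requests certificates from Let's Encrypt
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
---
# Additions to app-ingress: reference the issuer and declare the TLS host
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - YOUR.DOMAIN
    secretName: app-tls               # cert-manager stores the issued certificate here
```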

References

  • Browser homology policy and its circumvention
  • Nginx Ingress Controller can’t find nodes on Google Kubernetes Engine
  • Service – Kubernetes Guide with Examples
  • How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes
  • Kubernetes Ingress with Nginx Example
  • Kubernetes Ingress simply visually explained