Undeniably, Nginx has a long history as a veteran load-balancing tool and is still the best choice in many situations, but in the cloud-native era, Nginx is no longer always the best fit.

Because microservices architecture, Docker, and Kubernetes orchestration have only become popular in recent years, reverse proxies such as Nginx and Apache did not support them at first; that is why an Ingress Controller is needed to bridge Kubernetes and Nginx. Traefik, by contrast, has built-in Docker and Kubernetes support: Traefik itself can talk to the Kubernetes API to detect backend changes, so the Ingress Controller plus Nginx combination becomes unnecessary when using Traefik.

At the current stage, where docker-compose is used to orchestrate all of our microservices, Traefik can automatically detect service changes and dynamically adjust its load-balancing configuration on its own.

1. Example

Here’s an example of how Traefik excels in the containerization era.

Version: "3.8" Services: Image :v2.4.8 Ports: -18181:80-18182 :8080 Volumes: - "/ var/run/docker. The sock: / var/run/docker. The sock" # trafik docker. Through the sock to monitor the backend change command: -- "--api.insecure=true" -- "api.dashboard=true" Providers. Docker =true # providers. Docker =true "-- entrypoints. Testapi. Address = : 80" # setting-up a testapi entrance to monitor port 80 whoami: image: containous/whoami labels: - "traefik. HTTP. Routers. App. Rule = Host (` 192.168.1.178 `)" # use localhost user access, This service to respond - "traefik. HTTP. Routers. App. Entrypoints = testapi" for testapi # entryCopy the code

Start the stack with docker-compose up --scale whoami=3, then use curl to access http://192.168.1.178:18181/; the response looks like this:

Hostname: 21274e83b339
IP: 127.0.0.1
IP: 172.30.0.2
RemoteAddr: 172.30.0.3:56066
GET / HTTP/1.1
Host: 192.168.1.178:18181
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.100.1.7
X-Forwarded-Host: 192.168.1.178:18181
X-Forwarded-Port: 18181
X-Forwarded-Proto: http
X-Forwarded-Server: 89a1ed50d8e4
X-Real-Ip: 10.100.1.7

Repeated requests return a different Hostname in each response, showing that Traefik is load balancing requests across the multiple whoami containers.

If you want routing decisions to be based on multi-level URL paths, you can add the following configuration:

- "traefik. HTTP. Routers. App. Rule = Path (` / test `)" # / test specified Path to the responseCopy the code

At this point we need to access http://192.168.1.178:18181/test to get the correct response.
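Note that with the Docker provider, labels are key-value pairs, so defining the rule label twice typically means only the last value takes effect. If the router should match both the host and the path, the conditions can be combined with && in a single rule. A minimal sketch, reusing the app router and testapi entrypoint from the example above:

labels:
  - "traefik.http.routers.app.rule=Host(`192.168.1.178`) && Path(`/test`)"   # match host AND path in one rule
  - "traefik.http.routers.app.entrypoints=testapi"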

Cross Docker-compose load demo

The above are basic configuration demos; here are a few more interesting configurations that span multiple docker-compose projects.

We start a new whoami service in another docker-compose file:

Version: "3.8" services: whoami2: image: containous/whoami labels: - "traefik. HTTP. Routers. 1. The rule = Host (` 192.168.1.178 `)" # use localhost user access, This service to respond - "traefik. HTTP. Routers. 1. The rule = Path (` / test2 `)" - "traefik. HTTP. Routers. 1. Entrypoints = testapi" for testapi # entryCopy the code

When we access http://192.168.1.178:18181/test, the service in the first docker-compose responds; when we access http://192.168.1.178:18181/test2, the service in the second docker-compose responds.
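One caveat the snippets above gloss over: Traefik can only route to containers it can reach over the network, so when the second whoami service lives in a separate docker-compose project, the two projects usually need to share a Docker network. A hypothetical sketch, assuming the first project's default network is named traefik_default:

# added to the second docker-compose file; the network name is an assumption
services:
  whoami2:
    networks:
      - traefik_default
networks:
  traefik_default:
    external: true    # reuse the network created by the first docker-compose project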

Load balancing demo

What if you want to load balance requests to the /test path across two services within docker-compose?

Version: "3.8" Services: Image :v2.4.8 Ports: -18181:80-18182 :8080 Volumes: - "/ var/run/docker. The sock: / var/run/docker. The sock" # trafik docker. Through the sock to monitor the backend change command: -- "--api.insecure=true" -- "api.dashboard=true" Providers. Docker =true # providers. Docker =true "-- entrypoints. Testapi. Address = : 80" # setting-up a testapi entrance to monitor port 80 whoami: image: containous/whoami labels: - "traefik. HTTP. Routers. App. Entrypoints = testapi" # entry for testapi - "traefik. HTTP. Routers. App. Rule = Host (` 192.168.1.178 `)" # When users use localhost to access, This service to respond - "traefik. HTTP. Routers. App. Rule = Path (` / test `)" - "traefik. HTTP. Services. Whoami. Loadbalancer. Server scheme = HTTP"  - "traefik.http.services.whoami.loadbalancer.server.port=80" whoami_2: image: containous/whoami labels: - "traefik.http.services.whoami.loadbalancer.server.scheme=http" - "traefik.http.services.whoami.loadbalancer.server.port=80"Copy the code

Now a curl to http://192.168.1.178:18181/test is answered by the two containers in turn, i.e. requests are load balanced between them. From this point on, container and service changes no longer require editing an Nginx configuration file.

Observability

In the Traefik 2.x ecosystem, observability is divided into the following components (a configuration sketch for the log-related options follows the list):

  • Service logs: Operation logs of the Traefik process
  • Access log: Access log of the proxy service taken over by Traefik
  • Metrics: Detailed Metrics data provided by Traefik
  • Tracing: Traefik also provides a distributed tracing interface that can be used to visualize calls across distributed systems and microservices
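As a reference for the first two items, the service log and the access log can be enabled with static-configuration flags in the Traefik command: section of the compose file. A minimal sketch; the log level and file path are illustrative:

command:
  - "--log.level=INFO"                                   # verbosity of Traefik's own service log
  - "--accesslog=true"                                   # enable the access log for proxied requests
  - "--accesslog.filepath=/var/log/traefik/access.log"   # write the access log to a file (illustrative path)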

We will focus on the monitoring capabilities offered by Metrics, starting by exposing Prometheus data. Add the following flags to the Traefik command section:

- "-- the metrics. Prometheus = true" - "-- the metrics. Prometheus. Buckets = 0.100000, 0.300000, 1.200000, 5.000000,"Copy the code

Then configure a scrape job for Prometheus:

static_configs:
  - targets: ["192.168.1.178:18182"]
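For context, a complete scrape job might look like the sketch below; the job name traefik and the scrape interval are assumptions, and the target is the Traefik API/metrics port (18182) mapped in the compose file:

scrape_configs:
  - job_name: "traefik"          # assumed job name
    scrape_interval: 15s         # assumed interval
    static_configs:
      - targets: ["192.168.1.178:18182"]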

The scraped metrics can then be observed in a monitoring panel.

6. Other issues

  • How to solve CORS (cross-origin) issues: doc.traefik.io/traefik/mid…
  • How to use HTTPS: doc.traefik.io/traefik/http…
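As a rough illustration of the CORS case, Traefik 2.x ships a headers middleware that can be configured through labels and attached to a router. This is only a hedged sketch; the middleware name test-cors and the allowed origin are assumptions, and the documentation linked above is authoritative:

labels:
  - "traefik.http.middlewares.test-cors.headers.accesscontrolallowmethods=GET,OPTIONS,PUT"          # allowed methods
  - "traefik.http.middlewares.test-cors.headers.accesscontrolalloworiginlist=https://example.com"   # assumed origin
  - "traefik.http.middlewares.test-cors.headers.accesscontrolmaxage=100"                            # preflight cache time (seconds)
  - "traefik.http.middlewares.test-cors.headers.addvaryheader=true"                                 # add a Vary: Origin header
  - "traefik.http.routers.app.middlewares=test-cors"                                                # attach to the app router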

For more configuration options and usage, please consult the related documentation!