Everyone is familiar with the ELK logging stack. In the Istio world, Zipkin and Jaeger cover distributed tracing and Prometheus with Grafana covers metrics collection, but actual log storage calls for a dedicated solution: EFK (Fluentd + Elasticsearch + Kibana), the stack used in the official Istio documentation. Fluentd is an open source log collector with a pluggable architecture that supports many data outputs; Elasticsearch is a popular log storage backend, and Kibana is used to view the logs.
Attached:
A meow blog: w-blog.cn
Istio official site: preliminary.istio.io/zh
Istio Chinese documentation: preliminary.istio.io/zh/docs/
PS: This section is based on the latest Istio version, 1.0.3
1. Prepare the environment
We will deploy Fluentd, Elasticsearch, and Kibana as a non-production set of Services and Deployments in a new Namespace called logging.
> vim logging-stack.yaml

# Logging Namespace. All below are a part of this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    app: elasticsearch
---
# Elasticsearch Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1
        name: elasticsearch
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch
          mountPath: /data
      volumes:
      - name: elasticsearch
        emptyDir: {}
---
# Fluentd Service
apiVersion: v1
kind: Service
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
spec:
  ports:
  - name: fluentd-tcp
    port: 24224
    protocol: TCP
    targetPort: 24224
  - name: fluentd-udp
    port: 24224
    protocol: UDP
    targetPort: 24224
  selector:
    app: fluentd-es
---
# Fluentd Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config-volume
        configMap:
          name: fluentd-es-config
---
# Fluentd ConfigMap, contains config files.
kind: ConfigMap
apiVersion: v1
data:
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      type forward
    </source>
  output.conf: |-
    <match **>
      type elasticsearch
      log_level info
      include_tag_key true
      host elasticsearch
      port 9200
      logstash_format true
      # Set the chunk limits.
      buffer_chunk_limit 2M
      buffer_queue_limit 8
      flush_interval 5s
      # Never wait longer than 5 minutes between retries.
      max_retry_wait 30
      # Disable the limit on the number of retries (retry forever).
      disable_retry_limit
      # Use multiple threads for processing.
      num_threads 2
    </match>
metadata:
  name: fluentd-es-config
  namespace: logging
---
# Kibana Service
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    app: kibana
---
# Kibana Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.1.1
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
Create the resources:

kubectl apply -f logging-stack.yaml
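Before moving on, it helps to confirm that all three components came up (a quick check with plain kubectl; the generated pod names will differ in your cluster):

# The elasticsearch, fluentd-es and kibana pods should all reach STATUS Running
kubectl -n logging get pods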
2. Configure Istio
Now that there is a running Fluentd daemon, configure Istio with a new log type and send those logs to the listening daemon.
Create a new YAML file to hold the configuration for the log stream that Istio will generate and collect automatically.
> vim fluentd-istio.yaml

# Configuration for logentry instances
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
  name: newlog
  namespace: istio-system
spec:
  severity: '"info"'
  timestamp: request.time
  variables:
    source: source.labels["app"] | source.workload.name | "unknown"
    user: source.user | "unknown"
    destination: destination.labels["app"] | destination.workload.name | "unknown"
    responseCode: response.code | 0
    responseSize: response.size | 0
    latency: response.duration | "0ms"
  monitored_resource_type: '"UNSPECIFIED"'
---
# Fluentd handler configuration
apiVersion: "config.istio.io/v1alpha2"
kind: fluentd
metadata:
  name: handler
  namespace: istio-system
spec:
  address: "fluentd-es.logging:24224"
---
# Rule to send logentry instances to the fluentd handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: newlogtofluentd
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
  - handler: handler.fluentd
    instances:
    - newlog.logentry
---
PS: The line address: "fluentd-es.logging:24224" in the handler configuration points to the Fluentd daemon in the example stack we set up above.
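Log entries are delivered to that address through the cluster DNS name, so a quick sanity check (ordinary kubectl, nothing Istio-specific) is that the Service really exists in the logging namespace and exposes port 24224:

# Should show a ClusterIP service with ports 24224/TCP and 24224/UDP
kubectl -n logging get svc fluentd-es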
Apply it to take effect:

kubectl apply -f fluentd-istio.yaml
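Once some traffic has passed through the mesh (see the next step), you can confirm that entries are reaching the collector by reading the Fluentd pod's logs (a simple check; the pod name is looked up dynamically):

kubectl -n logging logs $(kubectl -n logging get pod -l app=fluentd-es -o jsonpath='{.items[0].metadata.name}')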
3. View the collected logs
Let's start with our sample application, Bookinfo, and then access Kibana the usual way via port forwarding:
kubectl -n logging port-forward $(kubectl -n logging get pod -l app=kibana -o jsonpath='{.items[0].metadata.name}') 5601:5601
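To actually produce some newlog entries, send a few requests to the Bookinfo productpage through the ingress gateway (a sketch that assumes GATEWAY_URL has been exported as in the Bookinfo task), then open http://localhost:5601 in a browser and create an index pattern such as logstash-* to browse the collected entries:

# GATEWAY_URL is assumed to be set as in the Bookinfo task (ingress host:port)
for i in $(seq 1 10); do
  curl -s -o /dev/null "http://${GATEWAY_URL}/productpage"
done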
PS: It is recommended to deploy Elasticsearch and Kibana outside the cluster, since Elasticsearch is demanding in terms of storage and resources.
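One way to follow that advice without touching the Fluentd configuration is to replace the in-cluster Elasticsearch Deployment with an ExternalName Service, so the name elasticsearch keeps resolving while the data lives outside the cluster. A minimal sketch, assuming a hypothetical external host es.example.com that serves Elasticsearch on port 9200:

kubectl apply -f - <<EOF
# Hypothetical ExternalName Service: keeps "elasticsearch" resolvable
# inside the logging namespace while pointing at an external host.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
spec:
  type: ExternalName
  externalName: es.example.com  # placeholder for your external ES host
EOF

Note that ExternalName only returns a DNS CNAME, so the external Elasticsearch must listen on the same port 9200 that the Fluentd output configuration expects.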