Introduction to Prometheus

Prometheus is an open-source monitoring system written in Go, similar in design to Google's internal Borgmon monitoring system.

The basic principle of Prometheus is to periodically scrape the status of monitored components over HTTP; any component can be monitored as long as it exposes a suitable HTTP interface. This makes Prometheus well suited to environments such as Docker and Kubernetes. A component that exposes monitoring information over such an HTTP interface is called an exporter.
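To make the pull model concrete, here is a minimal sketch in Python (using only the standard library, not the official client library; all metric and handler names are illustrative) of what an exporter does: it renders current measurements in the Prometheus text exposition format and serves them over HTTP so Prometheus Server can pull them.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(metrics):
    """Render {name: (help_text, type, value)} in the Prometheus text exposition format."""
    lines = []
    for name, (help_text, mtype, value) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            # Hypothetical metric; a real exporter would read live measurements here.
            body = render_metrics({
                "app_requests_total": ("Total requests served.", "counter", 42),
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)

# To expose the endpoint for Prometheus to scrape:
# HTTPServer(("", 8000), MetricsHandler).serve_forever()
```

Prometheus would then be configured to scrape `http://<host>:8000/metrics` on its scrape interval.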

Prometheus architecture

Prometheus uses the Pull model, in which Prometheus Server pulls monitoring data from targets through HTTP.

  • Retrieval: defines where Prometheus Server pulls data from.
    • Jobs/Exporters: Prometheus can pull monitoring data from jobs or exporters. An exporter exposes its data-collection interface as a Web API.
    • Prometheus Server: Prometheus can also pull data from other Prometheus servers.
    • Pushgateway: components that run as short-lived jobs may have exited before Prometheus gets a chance to pull from them, so such jobs push their monitoring data to the Pushgateway at runtime. Prometheus then pulls from the Pushgateway, preventing the loss of monitoring data.
    • Service discovery: Prometheus can dynamically discover targets through DNS, Kubernetes, Consul, and so on, and pull data from them.
  • Storage: Prometheus Server stores the collected data in its local time-series database.
  • PromQL: the Prometheus query language, which can be integrated with web UIs such as Grafana.
  • AlertManager: an external component, independent of Prometheus, that handles alerting. Alerting rules are configured in a configuration file; Prometheus evaluates them and pushes alerts to the AlertManager.
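As a sketch of this division of labor: alerting rules live in a rules file loaded by Prometheus, which evaluates them and pushes firing alerts to the AlertManager. The `up` metric below is real (Prometheus sets it for every scrape target); the group name, thresholds, and annotation text are illustrative assumptions.

```yaml
groups:
  - name: example
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```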

Characteristics of Prometheus

  • A multi-dimensional data model: a time series is identified by a metric name and a set of label key-value pairs;
  • A flexible query language for slicing and aggregating the collected time series data;
  • Powerful data visualization: in addition to the built-in expression browser, it integrates with Grafana;
  • Efficient storage: memory plus local disk, with functional sharding and federation to scale out;
  • Simple operation and maintenance: it depends only on local disk, and the Go binary installs without any other library dependencies;
  • Precise alerting;
  • Many client libraries;
  • Many exporters provided out of the box to collect common system metrics;
  • Time series data can be pushed through an intermediate gateway (the Pushgateway);
  • Targets are discovered via service discovery or static configuration.

Core concepts

Data model

Fundamentally, all data stored by Prometheus is time series data: timestamped streams of values, each belonging to a metric and a set of labels under that metric.

  • Metric: describes a feature of the system being measured. A metric name consists of ASCII letters, digits, underscores (_), and colons (:), and must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*.
  • Label: for the same metric, different label values combine to identify series of specific dimensions; labels are what give Prometheus its multi-dimensional data model. Prometheus's query language filters and aggregates time series by metric name and labels. A label name may contain ASCII letters, digits, and underscores, and must match the regular expression [a-zA-Z_][a-zA-Z0-9_]*. Label names beginning with a double underscore (__) are reserved for internal use. Label values may contain any Unicode characters.
  • Sample: time series data is a sequence of samples. Each sample consists of:
    • a 64-bit floating-point value
    • a timestamp with millisecond precision
  • Notation: a time series is identified by a metric name together with a set of label key-value pairs.
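For example, using this notation, a series for a metric with two labels is commonly written as follows (the metric and label names here are illustrative):

```
api_http_requests_total{method="POST", handler="/messages"}
```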

Metric types

Metrics in Prometheus are of the following types:

  • Counter: a cumulative metric whose value can only increase. Counters are used to count things like requests served, tasks completed, and errors.
  • Gauge: a metric whose value can go up and down. Gauges measure instantaneous values such as temperature or memory usage.
  • Histogram: samples observations (typically request durations or response sizes) and counts them in configurable buckets. A histogram with base metric name <basename> exposes several series:
    • cumulative counters for the observation buckets: <basename>_bucket{le="<upper inclusive bound>"}
    • the total sum of all observed values: <basename>_sum
    • the count of observed values: <basename>_count, which is identical to the bucket that counts all observations, <basename>_bucket{le="+Inf"}
  • Summary: also samples observations. In addition to the sum and count of observed values, it reports configurable quantiles. A summary with base metric name <basename> exposes:
    • φ-quantiles (the value below which a fraction φ of the observations fall): <basename>{quantile="<φ>"}
    • the total sum of all observed values: <basename>_sum
    • the count of observed values: <basename>_count
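The bucket/sum/count relationship described above can be sketched in a few lines of Python (a hypothetical illustration, not the client library's implementation; the function and metric names are made up):

```python
def histogram_series(basename, observations, upper_bounds):
    """Return the cumulative-bucket, sum and count series a histogram would expose."""
    series = {}
    bounds = sorted(upper_bounds) + [float("inf")]
    for le in bounds:
        label = "+Inf" if le == float("inf") else str(le)
        # Buckets are cumulative: each bucket counts ALL observations <= its bound.
        series[f'{basename}_bucket{{le="{label}"}}'] = sum(1 for o in observations if o <= le)
    series[f"{basename}_sum"] = sum(observations)
    series[f"{basename}_count"] = len(observations)
    return series

# e.g. four request durations against buckets 0.1s / 0.5s / 1.0s:
# histogram_series("req_seconds", [0.05, 0.2, 0.7, 1.5], [0.1, 0.5, 1.0])
```

Note how the "+Inf" bucket always equals `<basename>_count`, exactly as stated above.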

Jobs and instances

In Prometheus, an endpoint that can be scraped for samples is called an instance, and a collection of instances with the same purpose (for example, replicated for scalability or reliability) is called a job. When Prometheus scrapes a target, it automatically attaches two labels to the resulting series:

  • job: the configured job name the scraped target belongs to.
  • instance: the endpoint the samples were scraped from.

Prometheus in practice

Prometheus installation and deployment

```shell
# 1. Download
wget https://github.com/prometheus/prometheus/releases/download/v2.10.0/prometheus-2.10.0.linux-amd64.tar.gz
# 2. Unzip
tar zxvf prometheus-2.10.0.linux-amd64.tar.gz
# 3. Start
cd prometheus-2.10.0.linux-amd64
./prometheus --config.file=prometheus.yml
```

Exporter deployment

node_exporter

```shell
# 1. Download
wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.darwin-amd64.tar.gz
# 2. Unzip
tar zxvf node_exporter-0.18.1.darwin-amd64.tar.gz
# 3. Start
cd node_exporter-0.18.1.darwin-amd64
./node_exporter
```

nginx-vts-exporter

I'm using OpenResty here; the steps are the same if you use stock Nginx.

Download dependent packages

```shell
cd /usr/local/src/openresty-1.15.8.1/bundle
# openssl
wget https://www.openssl.org/source/openssl-1.1.1c.tar.gz
# ngx_cache_purge
wget http://github.com/FRiCKLE/ngx_cache_purge/archive/2.3.zip -O ngx_cache_purge.zip
# nginx-module-vts
wget http://github.com/vozlt/nginx-module-vts/archive/v0.1.18.zip -O nginx-module-vts.zip
# upstream health check
wget https://github.com/yaoweibin/nginx_upstream_check_module/archive/master.zip -O nginx_upstream_check_module.zip
unzip nginx_upstream_check_module.zip
cd nginx-1.15.8
patch -p1 < ../nginx_upstream_check_module-master/check_1.14.0+.patch
```

Nginx build and installation

```shell
./configure --prefix=/opt/openresty \
  --with-http_auth_request_module \
  --with-http_realip_module \
  --with-http_v2_module \
  --with-debug \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --with-http_gzip_static_module \
  --with-http_gunzip_module \
  --with-http_random_index_module \
  --with-threads \
  --with-pcre \
  --with-luajit \
  --with-mail \
  --with-file-aio \
  --with-http_flv_module \
  --with-http_mp4_module \
  --with-http_dav_module \
  --with-http_sub_module \
  --with-http_addition_module \
  --with-stream \
  --with-stream_ssl_module \
  --with-stream_realip_module \
  --with-http_secure_link_module \
  --with-stream_ssl_preread_module \
  --with-openssl=./bundle/openssl-1.1.1c \
  --add-module=./bundle/ngx_cache_purge-2.3 \
  --add-module=./bundle/nginx-module-vts-0.1.18 \
  --add-module=./bundle/nginx_upstream_check_module-master \
  -j2
gmake && gmake install
```

Nginx configuration

```nginx
http {
    vhost_traffic_status_zone;
    # If enabled, traffic statistics are kept per server_name;
    # otherwise all traffic is attributed to the first server_name by default.
    vhost_traffic_status_filter_by_host on;
    ...
    server {
        ...
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
        # vhost_traffic_status off;
    }
}
```

If you do not want to monitor a particular domain name, configure vhost_traffic_status off in that server block.
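For example, to exclude one virtual host from the statistics (the server_name here is a hypothetical placeholder):

```nginx
server {
    listen 80;
    server_name internal.example.com;  # hypothetical host to exclude
    vhost_traffic_status off;
}
```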

nginx-vts-exporter installation

```shell
# 1. Download
wget https://github.com/hnlq715/nginx-vts-exporter/releases/download/v0.10.0/nginx-vts-exporter-0.10.0.linux-amd64.tar.gz
# 2. Unzip
tar zxvf nginx-vts-exporter-0.10.0.linux-amd64.tar.gz
# 3. Start
cd nginx-vts-exporter-0.10.0.linux-amd64
./nginx-vts-exporter
```

redis_exporter

```shell
# 1. Download
wget https://github.com/oliver006/redis_exporter/releases/download/v1.0.3/redis_exporter-v1.0.3.linux-amd64.tar.gz
# 2. Unzip
tar zxvf redis_exporter-v1.0.3.linux-amd64.tar.gz
# 3. Start
cd redis_exporter-v1.0.3.linux-amd64
./redis_exporter -redis.addr 192.168.102.55:7000 -redis.password test -web.listen-address 0.0.0.0:9121
```

Adding exporter targets to the Prometheus configuration

After the exporters are installed, add them to the Prometheus configuration as follows:

```yaml
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  - job_name: "node"
    static_configs:
    - targets:
      - '192.168.26.18:9100'
      - '192.168.102.51:9100'
      - '192.168.102.58:9100'
      - '192.168.102.59:9100'
      #labels:
      #  instance: "192.168.26.18:9100"
      #  env: "pro"
      #  name: "192.168.26.18"

  - job_name: 'nginx'
    static_configs:
    - targets:
      - '192.168.102.51:9913'
      - '192.168.102.58:9913'
      - '192.168.102.59:9913'

  - job_name: 'redis-exporter'
    file_sd_configs:
    - files: ['./redis.json']
```

The redis.json configuration file is as follows:

```json
[{
    "targets": [
        "192.168.102.53:9121", "192.168.102.53:9122",
        "192.168.102.54:9121", "192.168.102.54:9122",
        "192.168.102.55:9121", "192.168.102.55:9122",
        "192.168.102.70:9121", "192.168.102.70:9122",
        "192.168.102.71:9121", "192.168.102.71:9122",
        "192.168.102.72:9121", "192.168.102.72:9122"
    ],
    "labels": {
        "service": "redis"
    }
}]
```

Restart Prometheus. The last step is to configure the Grafana visualization.
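Before building dashboards, you can sanity-check the scraped data in Prometheus's built-in expression browser. A few illustrative PromQL queries against the exporters above (metric names are those exposed by node_exporter 0.18.x and redis_exporter 1.x; adjust to your versions):

```
# Per-instance CPU usage (fraction busy) from node_exporter
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))

# Memory in use per node
node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes

# Redis commands processed per second
rate(redis_commands_processed_total[5m])
```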

Grafana installation and configuration

Grafana is a cross-platform, open-source metric analysis and visualization tool that lets you query and visualize collected data, with support for timely notifications.

  1. Grafana Installation Guide

  2. Grafana adds Prometheus data source

  3. Import the Dashboard

    • Nginx VTS Stats Dashboard
    • Redis Dashboard
    • Node Exporter Dashboard

The resulting dashboards:

  • Nginx monitoring

  • Redis monitoring

  • Node monitoring

