One day I noticed abnormal access records for my service. Although my little 1-core, 2 GB box is hardly worth attacking, it still gave me the idea of deploying a monitoring system.
↓ These are just the access records from a free CDN's static acceleration.
After some quick research I found that the mainstream log monitoring systems are the ELK stack (Elasticsearch, Logstash, Kibana), the Prometheus ecosystem, and so on. These carry a fairly high learning curve and resource footprint and are aimed at complex, enterprise-level scenarios, so they are not a good fit for personal use. I therefore chose my old friend Grafana as the foundation for quickly building a lightweight, extensible log monitoring system.
Grafana itself supports many data sources and is highly extensible; this article will not go into those details. All of the monitoring metrics here revolve around Nginx: since all of my services are proxied through Nginx, Nginx provides the log data source. The logs could be parsed as plain text, or parsed and shipped into a dedicated log store for better processing power. Here I use Loki, developed by the Grafana team, as that log store; Loki also comes with an agent that parses and pushes the logs.
After deployment, even without any extra tuning, the default dashboard already looks quite interesting:
The Docker environment
Docker provides an official convenience script that installs the stable version of Docker automatically:
curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh --mirror Aliyun
Configuring mirror acceleration:
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://kfwkfulq.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl start docker
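To double-check that the daemon is running and the mirror was picked up (a quick verification of my own, not part of the original steps), the following should list the configured mirror and pull a test image:

sudo docker info | grep -A 1 'Registry Mirrors'
sudo docker run --rm hello-world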
Commonly used registry mirrors:
Docker official (China): registry.docker-cn.com
Alibaba Cloud: kfwkfulq.mirror.aliyuncs.com
NetEase: hub-mirror.c.163.com
Loki installation
Pick a directory that will not change very often and download the required configuration file into it, because it will later be mapped into the container:
wget https://raw.githubusercontent.com/grafana/loki/v2.4.2/cmd/loki/loki-local-config.yaml -O loki-config.yaml
Loki Docker installation
docker run -d --name loki -u root -v $(pwd):/mnt/config -v /data/nginx-home/loki:/tmp/loki -v /etc/localtime:/etc/localtime -p 3002:3100 grafana/loki:2.4.2 -config.file=/mnt/config/loki-config.yaml
Wait a moment, then visit http://xxxxxx:3002/ready to check that Loki is up.
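If you prefer the command line, a quick check against the same mapped port should eventually return the word "ready" (Loki reports that the ingester is not ready for a short while right after startup):

curl http://localhost:3002/ready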
Grafana installation
docker run -d -p 3001:3000 -v /etc/localtime:/etc/localtime --name=grafana grafana/grafana
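Note that this command keeps Grafana's dashboards and data source settings inside the container's writable layer. If you want them to survive recreating the container, one option (my own addition, not from the original setup) is to mount a named volume at /var/lib/grafana, where the Grafana image keeps its data:

docker run -d -p 3001:3000 -v /etc/localtime:/etc/localtime -v grafana-storage:/var/lib/grafana --name=grafana grafana/grafana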
Visit http://xxxxxx:3001 with the default username and password (admin/admin). The first initialization takes a while. After that, configure a data source: choose Loki, fill in the URL (the Loki address from above), and click the Save & test button at the bottom.
Next, import dashboard template 12559 (an Nginx monitoring template; more templates are available on the Grafana website).
However, there is no data yet; the logs still need to be shipped into Loki by an agent.
Promtail installation
Now install and run Promtail.
Again, pick a directory and download the default configuration first:
wget https://raw.githubusercontent.com/grafana/loki/v2.4.2/clients/cmd/promtail/promtail-docker-config.yaml -O promtail-config.yaml
After downloading, open the file and modify the configuration:
server:
  http_listen_port: 0
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://xxxxxxx:3002/loki/api/v1/push   # change this to your Loki push address

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx_access_log
          agent: promtail
          __path__: /usr/local/nginx/logs/host.access.log
My Nginx logs live under /data/nginx-home/extra/logs on the host; mount that directory into the container as /usr/local/nginx/logs:
docker run -d --name promtail -v $(pwd):/mnt/config -v /etc/localtime:/etc/localtime -v /data/nginx-home/extra/logs:/usr/local/nginx/logs grafana/promtail:2.4.2 -config.file=/mnt/config/promtail-config.yaml
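To confirm Promtail started and picked up the configuration (an extra check, not in the original steps), look at its container logs; you should see it register the host.access.log target once the file exists:

docker logs -f promtail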
Modify the Nginx configuration
log_format json_analytics '{'
    '"msec": "$msec", ' # request unixtime in seconds with a milliseconds resolution
    '"connection": "$connection", ' # connection serial number
    '"connection_requests": "$connection_requests", ' # number of requests made in connection
    '"pid": "$pid", ' # process pid
    '"request_id": "$request_id", ' # the unique request id
    '"request_length": "$request_length", ' # request length (including headers and body)
    '"remote_addr": "$remote_addr", ' # client IP
    '"remote_user": "$remote_user", ' # client HTTP username
    '"remote_port": "$remote_port", ' # client port
    '"time_local": "$time_local", '
    '"time_iso8601": "$time_iso8601", ' # local time in the ISO 8601 standard format
    '"request": "$request", ' # full path no arguments if the request
    '"request_uri": "$request_uri", ' # full path and arguments if the request
    '"args": "$args", ' # args
    '"status": "$status", ' # response status code
    '"body_bytes_sent": "$body_bytes_sent", ' # the number of body bytes exclude headers sent to a client
    '"bytes_sent": "$bytes_sent", ' # the number of bytes sent to a client
    '"http_referer": "$http_referer", ' # HTTP referer
    '"http_user_agent": "$http_user_agent", ' # user agent
    '"http_x_forwarded_for": "$http_x_forwarded_for", ' # http_x_forwarded_for
    '"http_host": "$http_host", ' # the request Host: header
    '"server_name": "$server_name", ' # the name of the vhost serving the request
    '"request_time": "$request_time", ' # request processing time in seconds with msec resolution
    '"upstream": "$upstream_addr", ' # upstream backend server for proxied requests
    '"upstream_connect_time": "$upstream_connect_time", ' # upstream handshake time incl. TLS
    '"upstream_header_time": "$upstream_header_time", ' # time spent receiving upstream headers
    '"upstream_response_time": "$upstream_response_time", ' # time spent receiving upstream body
    '"upstream_response_length": "$upstream_response_length", ' # upstream response length
    '"upstream_cache_status": "$upstream_cache_status", ' # cache HIT/MISS where applicable
    '"ssl_protocol": "$ssl_protocol", ' # TLS protocol
    '"ssl_cipher": "$ssl_cipher", ' # TLS cipher
    '"scheme": "$scheme", ' # http or https
    '"request_method": "$request_method", ' # request method
    '"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
    '"pipe": "$pipe", ' # "p" if request was pipelined, "." otherwise
    '"gzip_ratio": "$gzip_ratio", '
    '"http_cf_ray": "$http_cf_ray"'
    '}';

access_log /etc/nginx/extra/logs/host.access.log json_analytics;
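Before restarting, it is worth validating the new configuration. Assuming Nginx also runs in a container named nginx (which the restart commands below imply), the built-in syntax check can be run like this:

docker exec nginx nginx -t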
Restart each service to apply the configuration
docker restart nginx
docker restart loki
docker restart promtail
You can then go back to Grafana and look at the data in the dashboard.
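If the dashboard stays empty, you can query Loki directly using the labels set in the Promtail configuration above, either with the selector {job="nginx_access_log"} in Grafana's Explore view or straight against the Loki HTTP API (a sketch assuming the same host and port as before):

curl -G -s 'http://localhost:3002/loki/api/v1/query_range' --data-urlencode 'query={job="nginx_access_log"}' --data-urlencode 'limit=5'

If this returns log lines but the dashboard does not, the problem is usually the data source URL or the dashboard's label filters.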
Log cleaning
Log entries that are too old have little value, so they need to be cleaned up regularly or they quickly become a disk-space killer. Loki's design goal is to keep log storage cheap, so I think its default retention policy is fine to keep (definitely not because I was too lazy to read the documentation), while the Nginx log file itself can simply be deleted to avoid piling up.
Start by creating an auto_clear.sh file:
#!/bin/bash
rm -rf /data/nginx-home/extra/logs/host.access.log  # change to your log path
docker restart nginx  # restart nginx, simple and crude
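Deleting the file and restarting Nginx works, but it briefly interrupts the service. A gentler variant (my own sketch, using the same assumed path) is to truncate the log in place; the file keeps its inode, so Nginx can continue writing to it without a restart:

#!/bin/bash
# empty the log file instead of deleting it
: > /data/nginx-home/extra/logs/host.access.log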
Open the scheduled tasks (crontab) editor:
crontab -e
Write an entry so the script runs automatically every day at 4 a.m.:
00 04 * * * /bin/bash /data/nginx-home/extra/logs/auto_clear.sh
Save and exit with :wq, then restart the cron service so the change takes effect:
service crond restart
View scheduled tasks:
crontab -l
Banning IPs in Nginx
Blocking IPs in Nginx is very simple. First create a blockips.conf file:
# each line represents one blacklisted IP
deny 1.2.3.4;
deny 110.191.215.8;
deny 110.191.214.214;
Then include the file inside the http block of nginx.conf:
include /etc/nginx/extra/blockips.conf;
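Since the access log is JSON, finding candidates for this blacklist is straightforward. A hedged sketch (requires jq, and assumes the host-side log path used earlier) that lists the top client IPs by request count:

jq -r '.remote_addr' /data/nginx-home/extra/logs/host.access.log | sort | uniq -c | sort -rn | head -n 20

After adding new deny lines, restart Nginx (docker restart nginx, as above) for them to take effect.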