Abstract: By standardizing compilation, packaging, and deployment, containers greatly improve application iteration speed. For architects, containers deliver minute-level deployments, second-level scaling and recovery, an order-of-magnitude increase in iteration speed, and infrastructure cost savings of around 50%.
Introduction
By standardizing compilation, packaging, and deployment, containers greatly improve application iteration speed. For architects, containers deliver minute-level deployments, second-level scaling and recovery, an order-of-magnitude increase in iteration speed, and infrastructure cost savings of around 50%. But for developers putting containers into practice, 80% of the work deals with issues before and after containerization: "before" refers to how to develop, integrate, test, and deploy locally into the container environment; "after" refers to how to monitor, operate, maintain, alert on, and tune what has been deployed into it. Today we will focus on how to monitor the resource dimension in a container environment.
Let’s start with containers and monitoring
There are many container monitoring solutions, including Prometheus, Telegraf, InfluxDB, cAdvisor, Heapster, and so on, but in principle they fall into two collection models: push and pull. In push-mode collection, an agent is deployed alongside the workload and pushes the monitored metrics to a server for aggregation and alerting; Telegraf is representative of this mode. Prometheus embodies pull-mode collection, in which a centralized server pulls resource utilization directly from containers via APIs or scripts.

Compared with traditional application monitoring, container monitoring faces greater challenges. First, because containers are mostly scheduled out of a shared resource pool, statically configuring a traditional monitoring agent becomes very cumbersome, and an agent deployed only on the host lacks the information needed to identify the monitored objects. Second, the life cycle of a container is much shorter than that of a traditional application, and the higher-level abstractions built on containers, such as Services in Swarm mode or ReplicaSets and Deployments in Kubernetes, cannot easily be reconstructed from the collected data alone. As a result, raw per-container monitoring data cannot be aggregated effectively into meaningful alerts, and once a new version of the application is released, the original monitoring and alert rules may no longer apply. Finally, container monitoring spans more dimensions: physical resources, logical resources, the application itself, and so on.
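To make the pull model concrete, here is a minimal sketch of the consumer side of pull-mode collection: parsing the Prometheus text exposition format that endpoints such as cAdvisor serve on `/metrics`. The metric names and label values in the sample payload are illustrative, not real cAdvisor output, and the parser deliberately ignores edge cases such as labels containing spaces.

```python
# A minimal sketch of pull-mode collection: parse the Prometheus text
# exposition format into a dictionary keyed by (metric name, labels).
# The sample payload is illustrative, not real cAdvisor output.

def parse_metrics(text):
    """Parse Prometheus text format into {(name, labels): value}."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip HELP/TYPE comments
            continue
        metric, _, value = line.rpartition(" ")
        if "{" in metric:
            name, labels = metric.split("{", 1)
            labels = labels.rstrip("}")
        else:
            name, labels = metric, ""
        samples[(name, labels)] = float(value)
    return samples

sample = """\
# HELP container_cpu_usage_seconds_total Cumulative CPU time consumed.
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="app"} 12.5
container_memory_usage_bytes{container="app"} 104857600
"""

metrics = parse_metrics(sample)
print(metrics[("container_cpu_usage_seconds_total", 'container="app"')])  # 12.5
```

In a real deployment, the central server would fetch this text over HTTP from each discovered container endpoint on a fixed scrape interval, then aggregate and evaluate alert rules over the parsed samples.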
How to monitor resources on Container Service
In fact, the main reason containers are hard to monitor is that logical concepts and physical resources cannot be unified in monitoring data and life cycle. Alibaba Cloud Container Service for Kubernetes is deeply integrated with CloudMonitor and uses application groups to model the logical concepts. Today we will look at how to perform Kubernetes resource monitoring and alerting on this basis.
First, Kubernetes nodes are functionally divided into Worker nodes and Master nodes. The management and control components are usually deployed on Master nodes, so the key requirement there is stability and robustness; Worker nodes are responsible for the actual Pod scheduling, so the key metric there is schedulable capacity. When you create a Kubernetes cluster, Container Service automatically creates two resource groups for you: a Master group and a Worker group. The Master group contains the Master nodes and their associated load balancers; the Worker group contains all the worker nodes.
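The split into the two groups can be pictured with a short sketch that partitions nodes by the standard Kubernetes master role label. The node data below is invented for illustration; the grouping logic of the actual Container Service backend is not documented here.

```python
# A hypothetical sketch of splitting cluster nodes into the two resource
# groups described above: Master (management plane) and Worker (Pod
# scheduling capacity). Node records below are made up for illustration.

def group_nodes(nodes):
    """Partition nodes into Master/Worker groups by role label."""
    groups = {"Master": [], "Worker": []}
    for node in nodes:
        is_master = "node-role.kubernetes.io/master" in node["labels"]
        groups["Master" if is_master else "Worker"].append(node["name"])
    return groups

nodes = [
    {"name": "master-1", "labels": {"node-role.kubernetes.io/master": ""}},
    {"name": "worker-1", "labels": {}},
    {"name": "worker-2", "labels": {}},
]
groups = group_nodes(nodes)
print(groups)  # {'Master': ['master-1'], 'Worker': ['worker-1', 'worker-2']}
```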
You can click the list view to display the resources in the current resource group; for example, the Master group contains three Master nodes and two SLB instances. In addition, alert rules on a resource group are automatically inherited by every resource under it, so you can see the health status of all resources on the topology overview page.
In the monitoring view, you can view detailed monitoring data at both the group level and the instance level.
For Master nodes, the health of the components running on them matters most, so health checks for the core components of every Master node are configured in the Master group. When a component health check reports a fault, the Kubernetes cluster status can be delivered through DingTalk, email, or SMS.
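The Master-group health rule amounts to scanning component check results across nodes and alerting on any failure. Here is a minimal sketch of that aggregation step; the component names and statuses are illustrative, and the fan-out to DingTalk, email, or SMS is left as a stub.

```python
# A minimal sketch of the Master-group health rule: collect the
# (node, component) pairs whose health check is failing, so an alert
# can be fanned out to DingTalk, email, or SMS. Data is illustrative.

def unhealthy_components(checks):
    """Return (node, component) pairs whose status is not 'ok'."""
    return [(node, comp)
            for node, comps in checks.items()
            for comp, status in comps.items()
            if status != "ok"]

checks = {
    "master-1": {"kube-apiserver": "ok", "etcd": "ok"},
    "master-2": {"kube-apiserver": "ok", "etcd": "unhealthy"},
}
failed = unhealthy_components(checks)
print(failed)  # [('master-2', 'etcd')]

if failed:
    # Stub: in a real setup this would call the notification channels.
    print(f"ALERT: {len(failed)} component(s) unhealthy")
```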
For existing clusters running version 1.8.4 or later, you can quickly set up resource alert groups by upgrading the monitoring service. You can then define custom alert rules on a resource group; the rules apply automatically to every resource in the group, including nodes added later by cluster auto-scaling.
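The value of a group-level rule is that it is defined once and evaluated against every instance in the group, including nodes that auto-scaling adds later. A small sketch, with an invented metric and threshold:

```python
# A sketch of a group-level alarm rule: defined once on the resource
# group, evaluated against every instance, including nodes added later
# by cluster auto-scaling. Metric names and thresholds are invented.

def evaluate_rule(rule, instances):
    """Return names of instances whose metric breaches the threshold."""
    return [name for name, metrics in instances.items()
            if metrics.get(rule["metric"], 0) > rule["threshold"]]

rule = {"metric": "cpu_percent", "threshold": 80}
instances = {
    "worker-1": {"cpu_percent": 45},
    "worker-2": {"cpu_percent": 92},
    "worker-3": {"cpu_percent": 88},  # newly added by auto-scaling
}
breaching = evaluate_rule(rule, instances)
print(breaching)  # ['worker-2', 'worker-3']
```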
Conclusion
In this article, we explained how to monitor and alert through resource groups. Monitoring for Kubernetes Pods and Services will be released in April; stay tuned.