Application and Practice of Kubernetes Events in Monitoring Scenarios
- 2022-10-30 15:44:59
- Han Tao
I. Preface
Monitoring is an important part of ensuring system stability. In the Kubernetes open-source ecosystem, resource monitoring tools and components are flourishing: besides metrics-server, incubated by the community, and Prometheus, which has graduated from CNCF, many other options are available. However, resource-based monitoring alone is not enough, because it has two main disadvantages:
1. Resource monitoring is not sufficiently real-time or accurate
Most resource monitoring collects data periodically, in either push or pull mode, so data points are only captured at fixed intervals. If a spike or other anomaly occurs between two collection points and has recovered by the time the next point is collected, most collection systems simply swallow it. For spike scenarios in particular, the peak is effectively clipped during collection, which reduces accuracy.
2. Inadequate scenario coverage of resource monitoring
Some monitoring scenarios cannot be expressed in terms of resource usage, such as the start and stop of a Pod. These cannot be measured simply by resource utilization, because when utilization is zero we cannot tell the real cause of that state.
Based on the above two problems, how does Kubernetes solve them?
To better expose its internal state, Kubernetes introduces the Events system, which records changes to Kubernetes resources as events in the APIServer; they can be viewed through the API or kubectl commands. By collecting events, users can diagnose cluster exceptions and problems in real time and promptly catch problems that resource monitoring easily misses.
II. Introduction to Kubernetes Events
1. What are Kubernetes Events?
A Kubernetes Event is a Kubernetes resource object that records an action taken by a component at a certain time, showing what has happened in the cluster. When the state of a resource in the Kubernetes cluster changes, new events are generated.
Each component in the Kubernetes system reports various events at runtime (such as what decisions the scheduler makes and why some Pods are evicted from a node) to the Kubernetes API Server, which stores them in etcd. To avoid filling etcd's disk space, the default retention policy is to delete an event one hour after its last occurrence.
We can use the kubectl get events command, or the Events section of kubectl describe for a specific resource, to view what events have occurred in the Kubernetes cluster or to view the event information of related resources. Only events that occurred in the last hour are displayed.
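A couple of illustrative invocations (the namespace and resource names here are placeholders):
kubectl get events -n <namespace>
kubectl describe pod <pod-name> -n <namespace>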
2. Events in Kubectl Describe
By executing kubectl describe pod dubbo-ops-1-bccd87fb8-6zh5f, we can view the events of this Pod.
Here we see output like 28s (x240979 over 167d), which means this type of event has occurred 240979 times over 167 days and the most recent occurrence was 28 seconds ago. However, if you run kubectl get event directly, you will not see 240979 repeated events, which indicates that Kubernetes automatically merges repeated events.
No events are found when we describe the Deployment to which this Pod belongs, as shown in the following figure.
We can see that when we describe different resource objects, the events shown are the ones directly related to the object being described. This indicates that an Event object carries the information of the resource object it describes and is directly associated with it.
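If you only want the events of one object without the rest of the describe output, the events API also supports filtering by the involved object via a field selector. A sketch, reusing the Pod name from above (the namespace is a placeholder):
kubectl get events -n <namespace> --field-selector involvedObject.kind=Pod,involvedObject.name=dubbo-ops-1-bccd87fb8-6zh5f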
3. Viewing All Events of a Specified Namespace
We can view all events under the bigdata namespace by executing kubectl -n bigdata get event. However, by default kubectl get event does not sort events in the order they occurred, so we often need to add the --sort-by='{.metadata.creationTimestamp}' parameter to sort the output by time.
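Putting the two together, the sorted listing for the bigdata namespace looks like this:
kubectl -n bigdata get event --sort-by='{.metadata.creationTimestamp}'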
Since an event is a resource in the Kubernetes cluster, its name is included in its metadata.name and can be used for operations on individual events. So we can use the following command to get the names of all events under a specified namespace.
kubectl -n bigdata get event --sort-by='{.metadata.creationTimestamp}' -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
4. Single Event Object
We randomly select an event and output its contents in YAML format:
kubectl get event aitm-customerservice-chatbot-2-65cd489c97-j4scv.1708265c3fe14fcb -n bigdata -o yaml
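The exact output depends on the event, but an abridged Event object looks roughly like the following. The field values below are illustrative examples, not the actual output of the command above:
apiVersion: v1
kind: Event
metadata:
  name: aitm-customerservice-chatbot-2-65cd489c97-j4scv.1708265c3fe14fcb
  namespace: bigdata
type: Warning                # illustrative
reason: BackOff              # illustrative
message: Back-off restarting failed container   # illustrative
count: 3
firstTimestamp: "2022-10-30T07:00:00Z"
lastTimestamp: "2022-10-30T07:05:00Z"
involvedObject:
  apiVersion: v1
  kind: Pod
  namespace: bigdata
  name: aitm-customerservice-chatbot-2-65cd489c97-j4scv
source:
  component: kubelet
  host: node-1               # illustrative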
The meanings of the main fields are as follows:
- count: indicates how many times the current event of the same type has occurred
- firstTimestamp and lastTimestamp: indicate, respectively, the first and the most recent time this event occurred
- involvedObject: the resource object that this event belongs to (the object that triggered the event). In the Kubernetes source code it is an ObjectReference, which identifies the object through fields such as Kind, Namespace, Name, and UID.
- reason: indicates the reason for this action, as a brief, fixed-form summary. It is mainly machine-readable and well suited for use as a filtering condition; there are currently more than 50 such reason codes.
- message: provides a more detailed, human-readable description.
- source: indicates the event's reporter, including Host (the hostname of the reporting node) and Component (the name of the reporting component).
- type: currently there are only two types, Normal (normal event) and Warning (warning event), whose meanings are also documented in the source code: a Warning event indicates a transition into an unexpected state, while a Normal event indicates that the current state matches the expected state.
Let's take the life cycle of a Pod as an example. When we create a Pod, it first enters the Pending state and then waits for scheduling, image pulling, container startup, and so on. When the health check passes, the Pod's state becomes Running, and Normal events are generated. If the Pod later enters the Failed state due to OOM or other reasons, that state is not expected, so a Warning event is generated. If we can monitor the occurrence of such events, we can promptly spot problems that resource monitoring easily overlooks.
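Since Warning events are usually the ones that deserve attention, a quick way to surface just those (the events API supports a field selector on type; the namespace is a placeholder):
kubectl get events -n <namespace> --field-selector type=Warning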
III. Node-Problem-Detector
Several Kubernetes components (e.g., kubelet, deployment-controller, job-controller) generate events. However, the built-in components only focus on container-management issues; they provide no additional detection for the node's operating system, the container runtime, or dependency systems such as network and storage. When a Kubernetes node becomes abnormal, no node-related event is generated. The stability of containers depends strongly on the stability of nodes, yet node management in Kubernetes is relatively weak. Perhaps in Kubernetes' initial design, node management was considered a matter for the IaaS layer; but as Kubernetes has developed into something like the operating system of the cloud-native era, it manages more and more, and the NPD project (GitHub: node-problem-detector) was created to enhance the monitoring capabilities of Kubernetes nodes.
NPD is a DaemonSet responsible for node diagnostics in Kubernetes and must be deployed separately. Its check output fully complies with the Kubernetes event specification. It converts abnormal node conditions (such as Docker Engine hangs, Linux kernel hangs, network anomalies, and file-descriptor anomalies) into node events, which are pushed to the APIServer for unified event management. When NPD detects that a node is abnormal, an event about that node is generated, and operations personnel can quickly view the node's abnormal information and causes through kubectl describe node $nodeName.
The NPD architecture is shown below:
NPD supports multiple exception checks, such as:
- Basic service problem: NTP service has not started
- Hardware problem: CPU, memory, disk, and network card are damaged
- Kernel problem: KernelDeadlock, file system corruption
- Kubelet problem: KubeletUnhealthy, Kubelet restarts frequently
- Container runtime problem: ContainerRuntimeUnhealthy, Docker restarts frequently, Containerd restarts frequently.
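As a sketch of the additional deployment mentioned above: the upstream kubernetes/node-problem-detector repository ships DaemonSet manifests (the exact file names and paths may differ between releases, so check the repository), and once NPD is running, node events appear alongside the node's conditions:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/node-problem-detector/master/deployment/node-problem-detector-config.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/node-problem-detector/master/deployment/node-problem-detector.yaml
kubectl describe node <node-name>
kubectl get events --field-selector involvedObject.kind=Node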
IV. Persistent Storage of Kubernetes Events
Events in Kubernetes are only kept in etcd for one hour, and etcd does not support complex analytical operations. By default, Kubernetes only provides simple filtering, such as by Reason, time, or type. Moreover, these events are only passively stored in etcd and are not actively pushed to other systems; usually, they can only be viewed manually with the kubectl describe $resourceName or kubectl get event commands.
We have very high demands on Kubernetes events, such as:
- Querying events over a longer time range when troubleshooting issues.
- Real-time alerting on abnormal events such as Failed, Evicted, FailedMount, FailedScheduling, etc.
- Subscribing to these events for custom monitoring.
- Filtering and screening by various dimensions.
- Classified statistics on events, such as calculating event trends and comparing them with the previous period, to support judgments and decisions based on statistical indicators.
To use Kubernetes events more conveniently, we need an event export tool to store Kubernetes events persistently. kubernetes-event-exporter and kube-eventer are frequently used.
Among them, kube-eventer is open-sourced by Alibaba Cloud and can be deployed in a Kubernetes cluster as a Deployment. kube-eventer supports exporting Kubernetes events to DingTalk robots, Alibaba Cloud Log Service (SLS), Kafka, the time-series database InfluxDB, Elasticsearch, and more. The architecture is shown in the figure below.
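As a rough sketch of how such an export might be wired up (the image tag, sink URI, and its parameters below are illustrative; consult the kube-eventer README for the exact syntax of each sink):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-eventer
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-eventer
  template:
    metadata:
      labels:
        app: kube-eventer
    spec:
      # assumes a ServiceAccount bound to RBAC that can read events (not shown)
      serviceAccountName: kube-eventer
      containers:
      - name: kube-eventer
        image: registry.aliyuncs.com/acs/kube-eventer-amd64:v1.2.0   # illustrative tag
        command:
        - "/kube-eventer"
        - "--source=kubernetes:https://kubernetes.default"
        # example sink: a DingTalk robot webhook; token and parameters are placeholders
        - "--sink=dingtalk:https://oapi.dingtalk.com/robot/send?access_token=<token>&level=Warning"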
V. Kubernetes Event Center
The overall logic of the Kubernetes Event Center is to install node-problem-detector and kube-eventer in all Kubernetes clusters. node-problem-detector converts node anomalies into Kubernetes events, and kube-eventer exports Kubernetes events to Alibaba Cloud's logging service SLS. The event center then relies on SLS's query, analysis, visualization, and alerting capabilities for dashboard display and real-time alerting on abnormal events.
1. Visual Report
- Event Overview
- Node Event Query
- Pod Event Query
2. Real-time Alarm
We analyze common abnormal events and alert on them in real time to better manage and monitor the cluster. When an error occurs, SLS notifies us immediately; supported notification methods include DingTalk groups, SMS, phone calls, custom webhooks, and more.
The following monitoring alerts have been created so far:
- Risk-control Flink OOM: when a risk-control Flink Pod is killed by OOM, we send an alert to the risk-control DingTalk group.
- K8S Pod eviction: when a Pod is evicted in the Kubernetes cluster, we send an alert to the operations DingTalk group.
- K8S Pod scheduling failure: when a Pod fails to be scheduled in the Kubernetes cluster, we send an alert to the operations DingTalk group.
- Conntrack table full: when the conntrack table of a node in the Kubernetes cluster is full, we send an alert to the relevant functional DingTalk group.
VI. Summary
Kubernetes events contain a lot of useful information and help operations personnel observe changes to Kubernetes resources and locate problems in daily work. This article briefly introduced the Kubernetes Event resource object, the NPD component that generates node events, and the event export tool kube-eventer, and shared the visual reports and real-time alerts of the Bixin Kubernetes event center. Event monitoring is another monitoring method in Kubernetes, one that compensates for resource monitoring's shortcomings in real-time responsiveness, accuracy, and scenario coverage.