Hao Liang's Blog

Embrace the World with Cloud Native and Open-source

【Scheduling】Priority and preemption mechanism, affinity scheduling, in-tree scheduling algorithm (new features in version 1.19)

1. Priority and preemption mechanism During scheduling, kube-scheduler pops a Pod from the scheduling queue (SchedulingQueue) and runs one round of scheduling for it. So in what order are Pods arranged in the scheduling queue? The Pod resource object supports a Priority attribute: Pods with higher priority are placed at the front of the scheduling queue and scheduled first. If a high-priority Pod fails to schedule because no suitable node is found, it is moved to the UnschedulableQueue and enters the preemption phase.
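As a rough illustration of how a Pod opts into a priority, here is a minimal sketch using the standard k8s.io/api types; the class name and value below are illustrative, not taken from the post:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A PriorityClass maps a name to an integer priority; the scheduler
	// orders the scheduling queue so higher values are dequeued first.
	pc := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, // illustrative name
		Value:      1000000,
	}

	// A Pod references the class by name; admission resolves it into
	// pod.Spec.Priority before the Pod enters the scheduling queue.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "important-app"},
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"},
			},
		},
	}
	fmt.Printf("pod %s uses priority class %s (value %d)\n",
		pod.Name, pc.Name, pc.Value)
}
```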

【Scheduling】kube-scheduler architecture design and startup process code breakdown

1. kube-scheduler architecture design The core function of the scheduler is to find the most suitable node for each Pod to run on. For small clusters, every scheduling cycle traverses all nodes in the cluster to find the best fit; for large clusters, each cycle traverses only a subset of nodes and picks the best fit among them. The scheduling process is divided into three phases: filtering (predicates), scoring (priorities), and binding.
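The three phases can be sketched as a toy loop; this deliberately ignores the real kube-scheduler's plugin framework, and Node, Pod, and the memory-based score are simplified stand-ins:

```go
package main

import "fmt"

// Simplified stand-ins for real cluster objects.
type Node struct {
	Name          string
	AllocatableMB int
}

type Pod struct {
	Name      string
	RequestMB int
}

// Filtering ("predicates"): drop nodes that cannot run the pod at all.
func filter(pod Pod, nodes []Node) []Node {
	var feasible []Node
	for _, n := range nodes {
		if n.AllocatableMB >= pod.RequestMB {
			feasible = append(feasible, n)
		}
	}
	return feasible
}

// Scoring ("priorities"): rank feasible nodes; here, prefer the most free memory.
func score(pod Pod, n Node) int {
	return n.AllocatableMB - pod.RequestMB
}

func main() {
	pod := Pod{Name: "web", RequestMB: 512}
	nodes := []Node{{"node-a", 256}, {"node-b", 2048}, {"node-c", 1024}}

	feasible := filter(pod, nodes)
	if len(feasible) == 0 {
		fmt.Println("unschedulable: no node passed filtering")
		return
	}

	best := feasible[0]
	for _, n := range feasible[1:] {
		if score(pod, n) > score(pod, best) {
			best = n
		}
	}
	// Binding: the real scheduler posts a Binding object to the apiserver.
	fmt.Printf("bind %s -> %s\n", pod.Name, best.Name)
}
```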

【Troubleshooting】Analysis of kube-apiserver log report 'has no resources'

1. Problem description In a production environment, the kube-apiserver instances kept restarting one after another. Monitoring showed that CPU, memory, and network traffic on the master node hosting the APIServer were jittering. The APIServer log contained a 'has no resources' warning: {"log":"W0814 03:04:44.058851 1 genericapiserver.go:342] Skipping API image.openshift.io/1.0 because it has no resources.

Client-go code breakdown (4): Work Queue

1. Introduction to WorkQueue In an Informer, the Delta FIFO queue triggers the Add, Update, and Delete callbacks. The callbacks put the key of each changed resource object into the WorkQueue; the control loop then retrieves keys from the WorkQueue and fetches the full resource object from the Indexer local cache through the Lister for processing. Image source: Geek Time – “Kubernetes in a Simple and In-depth Manner”. The main functions of WorkQueue are marking and deduplication, and it supports the following features:
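For reference, a minimal sketch of that enqueue-and-deduplicate behavior using client-go's workqueue package; the namespace/name key format mirrors what informer callbacks typically enqueue:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// The basic work queue marks items as "processing" and deduplicates:
	// a key added twice before being processed is stored only once.
	queue := workqueue.New()
	defer queue.ShutDown()

	queue.Add("default/pod-a")
	queue.Add("default/pod-a") // duplicate, collapsed into one entry

	for queue.Len() > 0 {
		key, shutdown := queue.Get()
		if shutdown {
			return
		}
		fmt.Println("processing", key)
		queue.Done(key) // mark finished so the key can be re-added later
	}
}
```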

Client-go code breakdown (3): Informer mechanism

1. Introduction In Kubernetes, controllers need to monitor the state of resource objects in the cluster in order to reconcile their actual state with the desired state defined in YAML. So how does a controller watch a resource object and react to its state changes? This is implemented through the Informer mechanism in the client-go package. Image source: Geek Time – “Kubernetes in a Simple and In-depth Manner”. From the diagram above, we can roughly follow the entire flow of the Informer mechanism:
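A minimal sketch of wiring those callbacks up with a SharedInformerFactory; the kubeconfig path and the 30-second resync period are assumptions for illustration:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at this path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The factory shares one List/Watch connection per resource type.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	// These handlers correspond to the Add/Update/Delete deltas popped
	// from the Delta FIFO queue described above.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("added:", obj.(*corev1.Pod).Name)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			fmt.Println("updated:", newObj.(*corev1.Pod).Name)
		},
		DeleteFunc: func(obj interface{}) {
			if pod, ok := obj.(*corev1.Pod); ok {
				fmt.Println("deleted:", pod.Name)
			}
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)            // runs List/Watch in the background
	factory.WaitForCacheSync(stopCh) // blocks until the Indexer is primed
	select {}                        // keep the process alive
}
```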

Client-go code breakdown (2): Resync mechanism analysis in Informer

1. Informer workflow diagram in client-go The Reflector in an Informer obtains change events for all watched resource objects from the apiserver through List/Watch and puts them into the Delta FIFO queue (stored as key/value pairs), which triggers the onAdd, onUpdate, and onDelete callbacks. The callbacks put the key into the WorkQueue and, at the same time, update the object in the Indexer local cache. The control loop takes a key from the WorkQueue, looks up the corresponding value in the Indexer, and processes it.
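One commonly used way to tell a periodic resync apart from a real change inside onUpdate is to compare ResourceVersions, since a resync replays the cached object unchanged; a small sketch (the handler name is illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// onPodUpdate distinguishes periodic resyncs from real changes. During a
// resync, the Informer replays the cached object, so the ResourceVersion
// of oldObj and newObj is identical.
func onPodUpdate(oldObj, newObj interface{}) {
	oldPod := oldObj.(*corev1.Pod)
	newPod := newObj.(*corev1.Pod)

	if oldPod.ResourceVersion == newPod.ResourceVersion {
		// Resync: nothing actually changed; handle it only if you want
		// to periodically re-reconcile the desired state.
		fmt.Println("resync for pod:", newPod.Name)
		return
	}
	fmt.Println("real update for pod:", newPod.Name)
}

func main() {
	// Wired into an informer as:
	//   informer.AddEventHandler(cache.ResourceEventHandlerFuncs{UpdateFunc: onPodUpdate})
	_ = onPodUpdate // referenced here so the sketch compiles standalone
}
```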

Client-go code breakdown (1): Client Object

1. Source code structure 2. Client objects RESTClient encapsulates RESTful-style HTTP requests and is used to exchange request data with the apiserver over HTTP. The process of obtaining Kubernetes resource objects through a RESTClient is: read the kubeconfig configuration → build and send the HTTP request …
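A minimal sketch of that flow with client-go's rest package; the kubeconfig path is an assumption, and the request here corresponds to GET /api/v1/namespaces/default/pods:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Read kubeconfig (path is illustrative) and fill in the fields
	// RESTClient needs to build request URLs and decode responses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	config.APIPath = "/api"
	config.GroupVersion = &corev1.SchemeGroupVersion
	config.NegotiatedSerializer = scheme.Codecs.WithoutConversion()

	restClient, err := rest.RESTClientFor(config)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/namespaces/default/pods, decoded into a PodList.
	var pods corev1.PodList
	err = restClient.Get().
		Namespace("default").
		Resource("pods").
		Do(context.TODO()).
		Into(&pods)
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name)
	}
}
```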

Implementation of zero-disruption rolling updates in Kubernetes

In a Kubernetes cluster, applications usually expose themselves through a Deployment plus a LoadBalancer-type Service. The typical deployment architecture is shown in Figure 1. This architecture is simple to deploy and operate, but service interruptions can occur when the application is updated or upgraded, causing problems online. Today we will analyze in detail why this architecture causes service interruption during application updates, and how to avoid it.
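As a sketch of the two container-level settings usually combined to avoid such interruptions (a readiness probe so new Pods receive traffic only once ready, plus a preStop delay so the load balancer can drain an old Pod before it exits), assuming a recent k8s.io/api where the handler types are named ProbeHandler and LifecycleHandler; all values are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "app",
		Image: "nginx", // illustrative
		// Only Pods that pass this probe are added to Service endpoints,
		// so a new replica never receives traffic before it can serve.
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(80),
				},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       5,
		},
		// Delay SIGTERM so endpoint removal propagates to the load
		// balancer before the old process stops accepting requests.
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.LifecycleHandler{
				Exec: &corev1.ExecAction{
					Command: []string{"sleep", "15"},
				},
			},
		},
	}
	fmt.Printf("container %q: readiness probe + preStop configured\n", container.Name)
}
```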

Analysis on the principle of seamless reloading using Route's HA Proxy in Openshift

1. Background In an OpenShift cluster (hereinafter OCP), external traffic forwarding works as follows: the Router controller watches the routing rules in Route objects and reloads the HAProxy configuration file on the Infra node accordingly. In step 3 of the figure above, while the Router reloads the HAProxy configuration, the HAProxy service is briefly unavailable. So how can we ensure that user requests are not lost during the HAProxy reload?

Detailed explanation of Raft algorithm

1. Leader election process In the Raft protocol, a node is in one of three states at any time: leader (the master node), follower (a slave node), or candidate (a node campaigning to become leader). 1. Election at startup: when a node starts, it is in the follower state. If it receives no heartbeat from the leader within a period of time, it switches from follower to candidate and initiates an election. If it receives a majority of the votes in the cluster (including its own), it switches to the leader state; if it discovers that another node has already become leader, it voluntarily switches back to the follower state.
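A toy sketch of those state transitions (real Raft additionally tracks votedFor, log indexes, and randomized election timeouts):

```go
package main

import "fmt"

// The three Raft node states named above.
type State int

const (
	Follower State = iota
	Candidate
	Leader
)

func (s State) String() string {
	return [...]string{"follower", "candidate", "leader"}[s]
}

// node is a toy model of election state, not a full Raft implementation.
type node struct {
	state State
	term  int
}

// onElectionTimeout: a follower that hears no heartbeat becomes a
// candidate, increments its term, and starts an election.
func (n *node) onElectionTimeout() {
	n.state = Candidate
	n.term++
}

// onVotesCounted: a candidate holding a majority of votes becomes leader.
func (n *node) onVotesCounted(votes, clusterSize int) {
	if n.state == Candidate && votes > clusterSize/2 {
		n.state = Leader
	}
}

// onHeartbeat: seeing a leader with an equal or newer term means someone
// else won; step back to follower.
func (n *node) onHeartbeat(leaderTerm int) {
	if leaderTerm >= n.term {
		n.term = leaderTerm
		n.state = Follower
	}
}

func main() {
	n := &node{state: Follower, term: 0}
	n.onElectionTimeout()  // no heartbeat: follower -> candidate, term 1
	n.onVotesCounted(3, 5) // 3 of 5 votes (incl. its own): candidate -> leader
	fmt.Println("state:", n.state, "term:", n.term)

	n.onHeartbeat(2) // a leader with a newer term appears: step down
	fmt.Println("state:", n.state, "term:", n.term)
}
```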