Whether in early hardware load balancers or in today's client-side load balancing in microservice frameworks, the most basic algorithm is round-robin: requests are distributed evenly across a set of backends. Today we look at how kube-proxy builds on that basis to implement affinity-aware round-robin: the core data structures, the affinity policy implementation, and the retry mechanism.
1. Background
1.1 Service and Endpoints
Service and Endpoints are Kubernetes concepts. A Service usually fronts a group of Pods; because Pod IP addresses are not fixed, the Service provides a stable, unified entry point for accessing them. Endpoints is the set of ip:port pairs of the backends that currently implement the Service. If you already know these concepts, feel free to skim this section.
1.2 Round-robin algorithm
Round-robin is probably the simplest balancing algorithm. Most Go implementations store all currently reachable backend addresses in a slice, plus an index pointing at the backend to hand out to the next request; after each pick the index advances by one and wraps around at the end of the slice.
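As a minimal illustrative sketch (our own toy code, not kube-proxy's):

package main

import "fmt"

type roundRobin struct {
	endpoints []string // all currently reachable backend addresses
	index     int      // slice index of the backend for the next request
}

func (rr *roundRobin) next() string {
	ep := rr.endpoints[rr.index]
	rr.index = (rr.index + 1) % len(rr.endpoints) // advance and wrap
	return ep
}

func main() {
	rr := &roundRobin{endpoints: []string{"10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.next()) // prints .1, .2, .3, then wraps back to .1
	}
}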
1.3 Affinity
Affinity (session affinity) is also relatively simple: when the same client IP repeatedly invokes a backend service, its requests keep being forwarded to the host it was given before. In Kubernetes this is enabled by setting spec.sessionAffinity: ClientIP on the Service (with the timeout tunable via spec.sessionAffinityConfig.clientIP.timeoutSeconds).
2. Core data structure implementation
2.1 Affinity Implementation
2.1.1 Affinity policy
The affinity policy is built from three parts: affinityType, the affinity type, i.e. which piece of client information affinity is based on (here the client IP); affinityMap, a hash map keyed according to the type defined in the policy (here by client IP), storing each client's affinity information; and ttlSeconds, the affinity expiration time — once an entry is older than this, the plain round-robin algorithm selects a backend again.
type affinityPolicy struct {
	affinityType v1.ServiceAffinity        // the type field is just a string
	affinityMap  map[string]*affinityState // map client IP -> affinity info
	ttlSeconds   int
}
2.1.2 Affinity state
As mentioned above, affinityMap stores per-client affinity state. The key information in an affinityState entry is two fields: endpoint (the backend endpoint this client should be sent to) and lastUsed (the last time this affinity entry was used).
type affinityState struct {
clientIP string
//clientProtocol api.Protocol //not yet used
//sessionCookie string //not yet used
endpoint string
lastUsed time.Time
}
2.2 Per-Service load-balancing state
balancerState stores the current load-balancing state for a Service: endpoints holds the ip:port set of the backend Pods, index is the cursor for the round-robin algorithm, and affinity holds the affinity policy data described above.
type balancerState struct {
endpoints []string // a list of "ip:port" style strings
index int // current index into endpoints
affinity affinityPolicy
}
2.3 The round-robin load balancer
The top-level data structure, LoadBalancerRR, maps each service port to its balancerState via the services field, and protects that map with a read/write mutex.
type LoadBalancerRR struct {
lock sync.RWMutex
services map[proxy.ServicePortName]*balancerState
}
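Construction is trivial; the constructor (roughly as it appears in kube-proxy's roundrobin.go) just initializes the map:

// NewLoadBalancerRR returns an empty round-robin balancer
// (lightly paraphrased from the kube-proxy source).
func NewLoadBalancerRR() *LoadBalancerRR {
	return &LoadBalancerRR{
		services: map[proxy.ServicePortName]*balancerState{},
	}
}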
2.4 Load balancing algorithm implementation
We focus only on round-robin selection and affinity assignment; the code that watches Services and Endpoints (update and deletion logic, and so on) is omitted. The following subsections walk through NextEndpoint, the method that picks a backend for each new connection. Note that its callers can pass a sessionAffinityReset flag to force re-selection when a connection to the previously chosen endpoint has failed — this is the retry mechanism mentioned at the start.
2.4.1 Locking and validity verification
The validity checks confirm that the requested service exists in the map and that it has at least one endpoint.
// A full write lock is taken (not RLock): the method mutates index
// and the affinity map further down.
lb.lock.Lock()
defer lb.lock.Unlock()
// Check whether the service exists.
state, exists := lb.services[svcPort]
if !exists || state == nil {
	return "", ErrMissingServiceEntry
}
// Check that it has at least one endpoint.
if len(state.endpoints) == 0 {
	return "", ErrMissingEndpoints
}
klog.V(4).Infof("NextEndpoint for service %q, srcAddr=%v: endpoints: %+v", svcPort, srcAddr, state.endpoints)
2.4.2 Checking whether affinity is enabled
Whether affinity is enabled is determined by checking the affinity type field: anything other than an empty string or None means affinity is in effect.
sessionAffinityEnabled := isSessionAffinity(&state.affinity)
func isSessionAffinity(affinity *affinityPolicy) bool {
// Should never be empty string, but checking for it to be safe.
if affinity.affinityType == "" || affinity.affinityType == v1.ServiceAffinityNone {
return false
}
return true
}
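For reference, the only non-None affinity type defined in the core/v1 API is v1.ServiceAffinityClientIP, which is what a Service gets when it sets spec.sessionAffinity: ClientIP.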
2.4.3 Affinity matching and last-used update
If affinity matching succeeds, the matching endpoint is returned directly and its lastUsed timestamp is refreshed. However, if the caller reports that access to the previous endpoint failed (sessionAffinityReset), the match is skipped: a new node must be chosen and the affinity entry rebuilt, because the old one can no longer be used.
var ipaddr string
if sessionAffinityEnabled {
	// Caution: don't shadow ipaddr.
	var err error
	// Extract the source IP from the client address.
	ipaddr, _, err = net.SplitHostPort(srcAddr.String())
	if err != nil {
		return "", fmt.Errorf("malformed source address %q: %v", srcAddr.String(), err)
	}
	// sessionAffinityReset is false by default, but it is set when the
	// connection to the current endpoint failed: a new machine must be
	// chosen, so the existing affinity cannot continue to be used.
	if !sessionAffinityReset {
		// If a live affinity entry exists, return its endpoint.
		sessionAffinity, exists := state.affinity.affinityMap[ipaddr]
		if exists && int(time.Since(sessionAffinity.lastUsed).Seconds()) < state.affinity.ttlSeconds {
			// Affinity wins.
			endpoint := sessionAffinity.endpoint
			sessionAffinity.lastUsed = time.Now()
			klog.V(4).Infof("NextEndpoint for service %q from IP %s with sessionAffinity %#v: %s", svcPort, ipaddr, sessionAffinity, endpoint)
			return endpoint, nil
		}
	}
}
2.4.4 Round-robin selection and recording affinity by client IP
If no affinity entry matched (or affinity is disabled), the next endpoint is taken in round-robin order; when affinity is enabled, a fresh affinity entry is then created or updated for this client IP.
// Take the next endpoint in round-robin order.
endpoint := state.endpoints[state.index]
state.index = (state.index + 1) % len(state.endpoints)

if sessionAffinityEnabled {
	// Save the affinity state.
	var affinity *affinityState
	affinity = state.affinity.affinityMap[ipaddr]
	if affinity == nil {
		affinity = new(affinityState) //&affinityState{ipaddr, "TCP", "", endpoint, time.Now()}
		state.affinity.affinityMap[ipaddr] = affinity
	}
	affinity.lastUsed = time.Now()
	affinity.endpoint = endpoint
	affinity.clientIP = ipaddr
	klog.V(4).Infof("Updated affinity key %s: %#v", ipaddr, state.affinity.affinityMap[ipaddr])
}

return endpoint, nil
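To see the whole flow end to end, here is a condensed, self-contained sketch — simplified from the structures above, with locking, error handling, and the reset flag stripped out; names like nextEndpoint are ours, not kube-proxy's — that you can run to watch round-robin rotate and affinity stick:

package main

import (
	"fmt"
	"time"
)

type affinityState struct {
	clientIP string
	endpoint string
	lastUsed time.Time
}

type balancerState struct {
	endpoints   []string
	index       int
	ttlSeconds  int
	affinityMap map[string]*affinityState
}

func (s *balancerState) nextEndpoint(clientIP string) string {
	// Prefer a live affinity entry for this client.
	if a, ok := s.affinityMap[clientIP]; ok && int(time.Since(a.lastUsed).Seconds()) < s.ttlSeconds {
		a.lastUsed = time.Now()
		return a.endpoint
	}
	// Otherwise advance round-robin and record fresh affinity.
	ep := s.endpoints[s.index]
	s.index = (s.index + 1) % len(s.endpoints)
	s.affinityMap[clientIP] = &affinityState{clientIP: clientIP, endpoint: ep, lastUsed: time.Now()}
	return ep
}

func main() {
	s := &balancerState{
		endpoints:   []string{"10.0.0.1:80", "10.0.0.2:80"},
		ttlSeconds:  10,
		affinityMap: map[string]*affinityState{},
	}
	fmt.Println(s.nextEndpoint("1.2.3.4")) // 10.0.0.1:80 (round-robin pick)
	fmt.Println(s.nextEndpoint("1.2.3.4")) // 10.0.0.1:80 again (affinity hit)
	fmt.Println(s.nextEndpoint("5.6.7.8")) // 10.0.0.2:80 (rotation continues)
}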
OK, that's all for today's analysis. I hope it helps you understand how the affinity-aware round-robin algorithm is implemented, the design of its core data structures, and how the design copes with endpoint failures. Thanks for reading and sharing.
WeChat ID: Baxiaoshi2020
Follow the official account to read more source-code analysis articles
More articles can be found at www.sreguide.com