Introduction
After surveying the basic algorithms in the previous article, we observed that stackability is an important property of a basic algorithm. How factors are constrained matters too, because it determines how a stacked LB performs its secondary selection.
Construction of the library
So eventually, I built a Golang LB library: here.
There is no shortage of LB libraries out there, but I had specific goals (as mentioned above and in the previous article):
- Thread safety
- Stackable design
- Constrainable factors
Thread safety is nothing more than resolving data races among goroutines.
Stackable means that the basic algorithms can be composed to form custom LB algorithms. For example, WRR and random can be stacked into a weighted random algorithm; adding weights to consistent hashing is not out of the question either.
Constrainable factors are another design goal: the caller can pass in an interpreter for a particular second-level algorithm, which decides what results to extract when the constraints are satisfied. This is also one of the building blocks of full stackability.
In addition, the algorithms are meant to be extensible.
Finally, since this is an informal research exercise, we are not optimizing for extreme performance this time; that often leaves the code unrecognizable. Each algorithm's selection path is cheap anyway, so to sidestep pointless arguments, no benchmarks are included.
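Before diving in, here is a rough, self-contained sketch of the kinds of contracts involved. The names echo the lbapi package from the previous article, but the shapes below are simplified assumptions for illustration, not the library's actual source:

```go
package main

import "fmt"

// Peer is one back-end candidate; a Stringer is enough for a sketch.
type Peer interface{ String() string }

// Factor carries the caller's context into Next; algorithms like
// round-robin simply ignore it.
type Factor interface{}

// Balancer is the common contract every basic algorithm satisfies.
// (Simplified: the real lbapi Next also returns a Constrainable.)
type Balancer interface {
	Next(factor Factor) (next Peer)
	Add(peers ...Peer)
}

type strPeer string

func (s strPeer) String() string { return string(s) }

// rrS is a toy round-robin Balancer illustrating the contract.
type rrS struct {
	peers []Peer
	count int
}

func (s *rrS) Add(peers ...Peer) { s.peers = append(s.peers, peers...) }
func (s *rrS) Next(Factor) Peer {
	next := s.peers[s.count%len(s.peers)]
	s.count++
	return next
}

func main() {
	var b Balancer = &rrS{}
	b.Add(strPeer("a"), strPeer("b"))
	fmt.Println(b.Next(nil), b.Next(nil), b.Next(nil)) // a b a
}
```

With these toy shapes in mind, the real library's snippets below should be easier to follow.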
Nested design
How can we design our LB library so that basic algorithms, especially features like weighting, can be nested on top of other basic algorithms to form composite LB algorithms?
A weighted random algorithm, for example, really does differ from a weighted round-robin algorithm.
A fully general nesting-doll implementation sounds like a problem for strong AI, so let me stop talking nonsense. In fact, the key is simply to have the child interface embed the parent interface, so the two can be used interchangeably: arguably the ultimate architectural trick in Golang.
That's a little convoluted, so concretely: in our LB design, we assume you Add a bunch of peers to a Balancer, which then balances over them, taking a Factor as one input. Of course, round-robin and random don't need a Factor. So when we want to stack a Weighted algorithm on top of a random Balancer, we need a connecting interface that is both the Factor of the weighted algorithm and the Peer of the random algorithm, and that solves the problem.
Then, in each Next implementation, we check whether the picked peer is itself a BalancerLite; if so, we recurse into it, achieving the nesting-doll effect.
For example, our final random algorithm looks like this (Next in WRR is similar):
func (s *randomS) Next(factor lbapi.Factor) (next lbapi.Peer, c lbapi.Constrainable) {
	next = s.miniNext()
	if fc, ok := factor.(lbapi.FactorComparable); ok {
		next, c, ok = fc.ConstrainedBy(next)
	} else if nested, ok := next.(lbapi.BalancerLite); ok {
		next, c = nested.Next(factor)
	}
	return
}

func (s *randomS) miniNext() (next lbapi.Peer) {
	s.rw.RLock()
	defer s.rw.RUnlock()
	l := int64(len(s.peers))
	ni := atomic.AddInt64(&s.count, inRange(0, l)) % l
	next = s.peers[ni]
	return
}
On the one hand, it does a BalancerLite type assertion and recurses into the child's Next, so when you call balancer.Add(peers), if a Peer object is also a Balancer, the nested stack cascades down smoothly.
On the other hand, Next also invokes ConstrainedBy for factors that can be constrained, which you can use for enhanced LB purposes.
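The recursion branch can be seen end-to-end in a self-contained sketch. The interface names imitate lbapi, but everything here is a local, simplified stand-in (an assumption, not the library's source); the point is that Balancer embeds BalancerLite, and a peer that is itself a BalancerLite gets recursed into:

```go
package main

import "fmt"

// Local stand-ins for the lbapi shapes (assumed, simplified).
type Peer interface{ String() string }
type Factor interface{}

// BalancerLite is the parent interface; Balancer embeds it, which is
// exactly the child-embeds-parent trick described above.
type BalancerLite interface {
	Next(factor Factor) (next Peer)
}
type Balancer interface {
	BalancerLite
	Add(peers ...Peer)
}

type strPeer string

func (s strPeer) String() string { return string(s) }

// rrS is a toy round-robin balancer; it also satisfies Peer (String),
// so an rrS can be Add-ed into another rrS and Next cascades down.
type rrS struct {
	name  string
	peers []Peer
	count int
}

func (s *rrS) String() string { return s.name }
func (s *rrS) Add(ps ...Peer) { s.peers = append(s.peers, ps...) }

func (s *rrS) Next(f Factor) (next Peer) {
	next = s.peers[s.count%len(s.peers)]
	s.count++
	if nested, ok := next.(BalancerLite); ok {
		next = nested.Next(f) // recurse into the child balancer
	}
	return
}

func main() {
	inner := &rrS{name: "inner"}
	inner.Add(strPeer("a"), strPeer("b"))

	outer := &rrS{name: "outer"}
	outer.Add(inner, strPeer("c"))

	for i := 0; i < 4; i++ {
		fmt.Print(outer.Next(nil), " ") // a c b c
	}
	fmt.Println()
}
```

Notice that the caller only ever talks to the outer balancer; the inner one is reached purely through the type assertion in Next.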
Next, let's see how to extend and customize this set of LB libraries.
Additional algorithm implementation
Here we'll walk through two examples of such extensions. Both are included in the official LB library release, partly as demonstrations and partly to save you from duplicating code. Don't worry about running out of code to write: I've met all kinds of odd requirements in real production, and a generic library will never do everything for you.
Weighted random algorithm: stacking weighting on random
Since WRR provides weighting and random provides randomness, we can stack the two with the help of the infrastructure already built.
For this requirement, we rely on the capability provided by this branch in Next:
if nested, ok := next.(lbapi.BalancerLite); ok {
	next, c = nested.Next(factor)
}
The approach is to create a new Peer implementation that is itself a random LB and also provides a weight value, because WRR requires that everything added to it be a weighted peer. In the code this is the wpPeer struct, which implements the WBPeer interface.
package wrandom
import (
	"github.com/hedzr/lb/lbapi"
	"github.com/hedzr/lb/wrr"
)

func New(opts ...lbapi.Opt) lbapi.Balancer { return wrr.New(opts...) }

func WithWeightedBalancedPeers(peers ...WBPeer) lbapi.Opt {
	return func(balancer lbapi.Balancer) {
		for _, b := range peers {
			if bp, ok := b.(lbapi.WeightedPeer); ok {
				balancer.Add(bp)
			}
		}
	}
}

func NewPeer(weight int, gen func(opts ...lbapi.Opt) lbapi.Balancer, opts ...lbapi.Opt) WBPeer {
	return &wpPeer{weight: weight, lb: gen(opts...)}
}

// WBPeer is a weighted, balanced peer.
type WBPeer interface {
	lbapi.WeightedPeer
	lbapi.Balancer
}
type wpPeer struct {
	weight int
	lb     lbapi.Balancer
}

func (w *wpPeer) String() string { return "wpPeer" }
func (w *wpPeer) Weight() int    { return w.weight }

func (w *wpPeer) Next(factor lbapi.Factor) (next lbapi.Peer, c lbapi.Constrainable) {
	return w.lb.Next(factor)
}
func (w *wpPeer) Count() int              { return w.lb.Count() }
func (w *wpPeer) Add(peers ...lbapi.Peer) { w.lb.Add(peers...) }
func (w *wpPeer) Remove(peer lbapi.Peer)  { w.lb.Remove(peer) }
func (w *wpPeer) Clear()                  { w.lb.Clear() }
Following the lbapi interface definitions from the previous article, here we construct wpPeer, an lbapi.Peer implementation: String() satisfies Peer, Weight() satisfies WeightedPeer, and the remaining methods complete Balancer. Note that the wpPeer struct holds an instance of the random Balancer in w.lb.
WithWeightedBalancedPeers(peers ...WBPeer) takes a set of Peer objects created by NewPeer, which New eventually uses to construct a WRR LB.
So we can use it like this:
package wrandom_test
import (
	"testing"

	"github.com/hedzr/lb/lbapi"
	"github.com/hedzr/lb/random"
	"github.com/hedzr/lb/wrandom"
)

type exP string

func (s exP) String() string { return string(s) }

func withPeers(peers ...lbapi.Peer) lbapi.Opt {
	return func(balancer lbapi.Balancer) {
		for _, p := range peers {
			balancer.Add(p)
		}
	}
}

func TestWR1(t *testing.T) {
	peer1, peer2, peer3 := exP("172.16.0.7:3500"), exP("172.16.0.8:3500"), exP("172.16.0.9:3500")
	peer4, peer5 := exP("172.16.0.2:3500"), exP("172.16.0.3:3500")

	lb := wrandom.New(
		wrandom.WithWeightedBalancedPeers(
			wrandom.NewPeer(3, random.New, withPeers(peer1, peer2, peer3)),
			wrandom.NewPeer(2, random.New, withPeers(peer4, peer5)),
		),
	)

	sum := make(map[lbapi.Peer]int)
	for i := 0; i < 5000; i++ {
		p, _ := lb.Next(lbapi.DummyFactor)
		sum[p]++
	}

	// results
	for k, v := range sum {
		t.Logf("%v: %v", k, v)
	}
}
Of course, the code here is for testing. In business code you would split the all-in-one initialization above into multiple steps, and Add/Remove wpPeer objects at run time to attach or detach their subordinate back ends, rather than directly adding peer1, peer2, ..., peer5.
One reference run is:

wr_test.go:42: 172.16.0.8:3500: 1018
wr_test.go:42: 172.16.0.3:3500: 1012
wr_test.go:42: 172.16.0.9:3500: 1026
wr_test.go:42: 172.16.0.2:3500: 988
wr_test.go:42: 172.16.0.7:3500: 956
The five back-end nodes end up splitting all requests roughly equally, which is what we want.
By the same token, a random algorithm need not produce an absolutely equal split; some statistical deviation is allowed.
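That "roughly equal, deviation allowed" property can be asserted in a test with a tolerance check rather than exact equality. A sketch (the ±15% tolerance is an arbitrary choice, and the seed is fixed so the run is reproducible):

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickCounts simulates n random picks over k peers and returns the
// per-peer counts; the fixed seed makes the run deterministic.
func pickCounts(n, k int, seed int64) []int {
	r := rand.New(rand.NewSource(seed))
	counts := make([]int, k)
	for i := 0; i < n; i++ {
		counts[r.Intn(k)]++
	}
	return counts
}

// withinTolerance reports whether every count stays within tol
// (a ratio, e.g. 0.15) of the ideal mean n/k.
func withinTolerance(counts []int, n int, tol float64) bool {
	mean := float64(n) / float64(len(counts))
	for _, c := range counts {
		if d := float64(c) - mean; d > mean*tol || d < -mean*tol {
			return false
		}
	}
	return true
}

func main() {
	counts := pickCounts(5000, 5, 1)
	fmt.Println(counts, withinTolerance(counts, 5000, 0.15))
}
```

With 5000 samples over 5 peers, the per-peer standard deviation is around 28, so a ±150 band is comfortably wide; a much tighter tolerance would make the test flaky.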
Weighted version comparison algorithm: constraining the factor
Any combination of two existing Balancer algorithms, as demonstrated in the previous section, is relatively easy to implement.
Now let's build a new weighted algorithm, WV. It holds a set of semver comparators, and we assume the caller passes in, as the factor, a set of host+port entries each tagged with a specific version number. What WV does is:
- pick a semver comparator X in proportion to the weights
- pass the factor into X to select a host+port+version whose version satisfies X's constraints.
We use github.com/Masterminds/semver for the semver-related operations.
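To get a feel for what that library does for us (constraint parsing and checking), here is a toy stand-in that handles only the `^major.minor` caret rule against bare `major.minor` versions. This is purely illustrative; Masterminds/semver covers the full semver grammar, ranges, prereleases, and more:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMM parses a "major.minor" version into two ints. This toy only
// understands that shape; a real semver parser handles much more.
func parseMM(v string) (major, minor int) {
	parts := strings.SplitN(v, ".", 2)
	major, _ = strconv.Atoi(parts[0])
	if len(parts) > 1 {
		minor, _ = strconv.Atoi(parts[1])
	}
	return
}

// checkCaret reports whether version satisfies a "^major.minor"
// constraint: same major component, minor at least the constraint's.
func checkCaret(constraint, version string) bool {
	cM, cm := parseMM(strings.TrimPrefix(constraint, "^"))
	vM, vm := parseMM(version)
	return vM == cM && vm >= cm
}

func main() {
	fmt.Println(checkCaret("^1.2", "1.3")) // true
	fmt.Println(checkCaret("^1.2", "2.0")) // false
}
```

In the real code below, semver.NewConstraint and Constraints.Check play exactly this role, just with full generality.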
New
First we need a dedicated builder for the WRR and its peers:
package version
func New(opts ...lbapi.Opt) lbapi.Balancer { return wrr.New(opts...) }

func WithConstrainedPeers(cs ...lbapi.Constrainable) lbapi.Opt {
	return func(balancer lbapi.Balancer) {
		for _, vp := range cs {
			balancer.Add(vp)
		}
	}
}
It looks unremarkable, except that the peers are taken in via the Constrainable interface.
So we will call version.New(version.WithConstrainedPeers(...)) and pass in a bunch of Constrainable peers. Recall the definition of Constrainable: it itself satisfies the Peer interface.
constrainablePeer
Now implement a special peer like this:
package version
import (
	"fmt"

	"github.com/Masterminds/semver"
	"github.com/hedzr/lb/lbapi"
)

func NewConstrainablePeer(constraints string, weight int) (peer lbapi.Constrainable) {
	p := &constrainablePeer{
		constraints:    constraints,
		weight:         weight,
		constraintsObj: nil,
	}
	vc, err := semver.NewConstraint(constraints)
	if err == nil {
		p.constraintsObj = vc
		peer = p
	}
	return
}

type constrainablePeer struct {
	constraints    string
	weight         int
	constraintsObj *semver.Constraints
}

func (s *constrainablePeer) String() string                   { return s.constraints }
func (s *constrainablePeer) Weight() int                      { return s.weight }
func (s *constrainablePeer) Constraints() *semver.Constraints { return s.constraintsObj }
func (s *constrainablePeer) CanConstrain(o interface{}) (yes bool) {
	_, yes = o.(*semver.Version)
	return
}

func (s *constrainablePeer) Check(factor interface{}) (satisfied bool) {
	if s.constraintsObj == nil {
		var err error
		s.constraintsObj, err = semver.NewConstraint(s.constraints)
		if err != nil {
			fmt.Printf("illegal constraints: %q. %v\n", s.constraints, err)
		}
	}

	if v, ok := factor.(*semver.Version); ok {
		satisfied = s.constraintsObj.Check(v)
	} else if v, ok := factor.(interface{ Version() *semver.Version }); ok {
		satisfied = s.constraintsObj.Check(v.Version())
	}
	return
}
The key to constrainablePeer is its Check function, which tries to extract the *semver.Version value v carried by the factor and then uses the constraint s.constraintsObj to check whether v satisfies it.
I'm version 1.3; am I < 1.3.x? constrainablePeer would say: yes.
NewConstrainablePeer
A public function, NewConstrainablePeer, is provided for constrainablePeer, so:
var testConstraints = []lbapi.Constrainable{
	version.NewConstrainablePeer("<= 1.1.x", 2),
	version.NewConstrainablePeer("^1.2.x", 4),
	version.NewConstrainablePeer("^2.x", 11),
	version.NewConstrainablePeer("^3.x", 3),
}

lb := version.New(version.WithConstrainedPeers(testConstraints...))
TestCase
Now we prepare the factor and run the test:
func initFactors() version.BackendsFactor {
	fa := version.NewBackendsFactor(rr.New)
	fa.AddPeers(
		version.NewBackendFactor("1.1", "172.16.0.6:3500"),
		version.NewBackendFactor("1.3", "172.16.0.7:3500"),
		version.NewBackendFactor("2.0", "172.16.0.8:3500"),
		version.NewBackendFactor("3.13", "172.16.0.9:3500"),
	)
	return fa
}

factor := initFactors()
peer, c := lb.Next(factor)
We've skipped NewBackendsFactor and most of NewBackendFactor's implementation details; the one thing worth highlighting is backendsFactor's ConstrainedBy:
func (fa *backendsFactor) ConstrainedBy(constraints interface{}) (peer lbapi.Peer, c lbapi.Constrainable, satisfied bool) {
	if cc, ok := constraints.(lbapi.Constrainable); ok {
		var lb lbapi.Balancer

		// for this object cc, build a lb and associate with it
		fa.crw.RLock()
		if _, ok := fa.constraints[cc]; !ok {
			fa.crw.RUnlock()
			lb = fa.generator()
			fa.crw.Lock()
			fa.constraints[cc] = lb
			fa.crw.Unlock()
		} else {
			lb = fa.constraints[cc]
			fa.crw.RUnlock()
			lb.Clear()
		}

		// find all satisfied backends/peers and add them into lb
		for _, f := range fa.backends {
			if cc.Check(f) {
				satisfied = true
				lb.Add(f)
			}
		}

		// now, pick up the next peer of them
		peer, c = lb.Next(lbapi.DummyFactor)
		if c == nil {
			c = cc
		}
	}
	return
}
Generally speaking, fa.generator here will be rr.New (via fa := version.NewBackendsFactor(rr.New)), which means lb is created as a round-robin balancer, though any other would do. All backends satisfying the constraint (via Check) then become the set of peers for lb to select from.
Interested readers can dig into the source code.
For clarity of logic, an expensive approach is used here: every call re-Checks all of fa.backends. As you can imagine, that is not a good idea for a large backend set. A cache, updated incrementally on add and delete, could eliminate the check-everything pass as much as possible.
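One direction for that optimization, as a hedged sketch (types and names here are hypothetical stand-ins, not the library's code): cache the satisfied-backend set per constraint and invalidate it only when the backend list changes, so Check runs once per change rather than once per Next:

```go
package main

import "fmt"

// checker is a stand-in for lbapi.Constrainable's Check method.
type checker interface{ Check(backend string) bool }

// prefixChecker satisfies checker by simple prefix matching; it stands
// in for a semver constraint in this sketch.
type prefixChecker string

func (p prefixChecker) Check(b string) bool {
	return len(b) >= len(p) && b[:len(p)] == string(p)
}

// cachedFactor caches the satisfied backends per constraint and drops
// the cache whenever the backend list changes.
type cachedFactor struct {
	backends []string
	cache    map[checker][]string
}

func newCachedFactor() *cachedFactor {
	return &cachedFactor{cache: map[checker][]string{}}
}

func (fa *cachedFactor) AddBackend(b string) {
	fa.backends = append(fa.backends, b)
	fa.cache = map[checker][]string{} // invalidate on change
}

func (fa *cachedFactor) satisfied(c checker) []string {
	if s, ok := fa.cache[c]; ok {
		return s // cache hit: no Check calls at all
	}
	var s []string
	for _, b := range fa.backends {
		if c.Check(b) {
			s = append(s, b)
		}
	}
	fa.cache[c] = s
	return s
}

func main() {
	fa := newCachedFactor()
	fa.AddBackend("v1/a")
	fa.AddBackend("v1/b")
	fa.AddBackend("v2/c")
	fmt.Println(fa.satisfied(prefixChecker("v1"))) // [v1/a v1/b]
	fmt.Println(fa.satisfied(prefixChecker("v1"))) // cached, no re-Check
}
```

A production version would invalidate per-constraint (or incrementally check only the added backend) and guard the maps with the same RWMutex discipline the library already uses.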
A brief summary
What's the point of all this? This weighted version-range comparison algorithm is an idea I had while building an API GW; its purpose was to support gray (canary) release testing across the whole self-developed framework.
Of course, a complete framework achieving gray release needs adaptation across the board. At minimum it includes:
- Weights can be assigned according to version-range constraint expressions
- Incoming requests are dispatched transparently
- Using the automatic config monitoring, loading, merging, and change-trigger mechanisms provided by cmdr to adjust weight-allocation settings in real time while the microservice is online
- Without config-merge capability, consider configuration stores such as a redis cache to keep weight assignment real-time
- Multiple load-balancing algorithms must be integrable, and in particular the properties in the request context must be correctly extracted to make the dispatch decision
Therefore, a truly full-featured API GW probably won't work out of the box: it may require code-level adaptation, or unknown hacks in various corners of the system.
Wrapping up
Finally, to bring all the LB algorithms together in the top-level package, we use a map guarded by an RWMutex so that it is safe in multithreaded environments. In practice, though, we expect you to set up an lb instance and then dynamically add/remove its peers; moreover, we assume you register your custom Balancer algorithm and its generator once at app startup, so this lock matters little.
OK, the code snippet at the top level of the package looks like this:
func New(algorithm string, opts ...lbapi.Opt) lbapi.Balancer {
	kbs.RLock()
	defer kbs.RUnlock()
	if g, ok := knownBalancers[algorithm]; ok {
		return g(opts...)
	}
	log.Fatalf("unknown/unregistered balancer and generator: %q", algorithm)
	return nil // unreachable
}

// WithPeers adds the initial peers.
func WithPeers(peers ...lbapi.Peer) lbapi.Opt {
	return func(balancer lbapi.Balancer) {
		for _, p := range peers {
			balancer.Add(p)
		}
	}
}

// Register assigns an (algorithm, generator) pair.
func Register(algorithm string, generator func(opts ...lbapi.Opt) lbapi.Balancer) {
	kbs.Lock()
	defer kbs.Unlock()
	knownBalancers[algorithm] = generator
}

// Unregister revokes an (algorithm, generator) pair.
func Unregister(algorithm string) {
	kbs.Lock()
	defer kbs.Unlock()
	delete(knownBalancers, algorithm)
}

const (
	// Random algorithm
	Random = "random"
	// RoundRobin algorithm
	RoundRobin = "round-robin"
	// WeightedRoundRobin algorithm
	WeightedRoundRobin = "weighted-round-robin"
	// ConsistentHash algorithm
	ConsistentHash = "consistent-hash"
	// WeightedRandom algorithm
	WeightedRandom = "weighted-random"
	// VersioningWRR algorithm
	VersioningWRR = "versioning-wrr"
)

func init() {
	kbs.Lock()
	defer kbs.Unlock()
	knownBalancers = make(map[string]func(opts ...lbapi.Opt) lbapi.Balancer)
	knownBalancers[Random] = random.New
	knownBalancers[RoundRobin] = rr.New
	knownBalancers[WeightedRoundRobin] = wrr.New
	knownBalancers[ConsistentHash] = hash.New
	knownBalancers[WeightedRandom] = wrandom.New
	knownBalancers[VersioningWRR] = version.New
}

var knownBalancers map[string]func(opts ...lbapi.Opt) lbapi.Balancer
var kbs sync.RWMutex
Usage
This library lets you register your own LB algorithms so that callers get a uniform interface. Using an LB algorithm is simple:
package main
import (
	"fmt"

	lb "github.com/hedzr/lb"
	"github.com/hedzr/lb/lbapi"
)

func main() {
	b := lb.New(lb.RoundRobin)
	b.Add(exP("172.16.0.7:3500"), exP("172.16.0.8:3500"), exP("172.16.0.9:3500"))
	sum := make(map[lbapi.Peer]int)
	for i := 0; i < 300; i++ {
		p, _ := b.Next(lbapi.DummyFactor)
		sum[p]++
	}
	for k, v := range sum {
		fmt.Printf("%v: %v\n", k, v)
	}
}

type exP string

func (s exP) String() string { return string(s) }
Depending on your scenario, you will need your own exP implementation (which implements lbapi.Peer), but for convenience a Peer only needs to implement the String() string method.
Conclusion
So, in this release of the LB library, we provide a set of basic and composite algorithms:
- Random
- Round-robin
- Least connections
- Hashing (consistent hash)
- Weighted round-robin
- Weighted random
- Weighted version-range comparison
Furthermore, we believe enough infrastructure is provided for you to extend load-balancing algorithms within a unified framework to meet your actual needs.
By the way: I visited GitHub yesterday and found the profile page looked quite different, with tabs like Explore and Trending. Today it's back to normal. Was I hallucinating, or did I fall asleep last night while pondering how to write part two, and dream the whole thing?