This article introduces how to use the Divide plug-in in the Soul gateway framework, and briefly analyzes its implementation.
Background
The Soul gateway framework ships with rich built-in plug-in support. The Divide plug-in handles HTTP forward proxying: all HTTP requests are load-balanced and forwarded by it.
Use of the Divide plug-in
- Start soul-admin (the gateway admin console), soul-bootstrap (the gateway service), and soul-examples-http (the back-end service);
- Log in to the gateway admin console; you can see that the APIs in soul-examples-http are registered under the Divide module.
- The SelectorList panel lists a selector configured for the soul-examples-http back-end service;
- The contextPath configuration item is the path prefix used to forward to the back-end service, and corresponds to the Name field in the selector configuration.
- You can control how much traffic each back-end service instance receives by setting weight in the selector.
```yaml
http:
  adminUrl: http://localhost:9095
  port: 8188
  contextPath: /http
  appName: http
  full: false
```
- The RulesList panel lists the API paths and rule options in the soul-examples-http service, such as the conditions for matching requests and the load balancing policy.
- Send HTTP requests from a client to the gateway service to verify forwarding;
- Start two back-end service instances, 127.0.0.1:8188 and 127.0.0.1:8189, and set their weights to 1 and 100 respectively. Observing the gateway log, you can see that most requests are forwarded to 127.0.0.1:8189, because it has the higher weight.
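The effect of these weights can be reproduced with a small standalone simulation. The `WeightDemo` class and `pick` method below are illustrative stand-ins, not Soul's own code; they only model the segmented weighted-random idea:

```java
import java.util.Random;

public class WeightDemo {

    // Pick an index by weighted random: each index occupies a segment
    // of the [0, totalWeight) line proportional to its weight.
    static int pick(int[] weights, Random random) {
        int total = 0;
        for (int w : weights) {
            total += w;
        }
        int offset = random.nextInt(total);
        for (int i = 0; i < weights.length; i++) {
            offset -= weights[i];
            if (offset < 0) {
                return i;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        // index 0 ~ 127.0.0.1:8188 (weight 1), index 1 ~ 127.0.0.1:8189 (weight 100)
        int[] weights = {1, 100};
        int[] hits = new int[2];
        Random random = new Random(42);
        for (int i = 0; i < 10_000; i++) {
            hits[pick(weights, random)]++;
        }
        // the heavier instance should receive roughly 100/101 of the traffic
        System.out.println(hits[0] + " vs " + hits[1]);
    }
}
```

With weights 1 and 100, the second instance receives about 99% of the requests, matching what the gateway log shows.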
Divide plug-in source code parsing
Divide plug-in process
- All plug-ins call the AbstractSoulPlugin#execute method in turn via the chain of responsibility pattern;
- The AbstractSoulPlugin#execute method queries the selectors and rules configured for the plug-in, and then calls the DividePlugin#doExecute method.
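A minimal sketch of this chain-of-responsibility dispatch is shown below. The `Plugin` and `PluginChain` types are simplified stand-ins for illustration, not Soul's actual interfaces (which are reactive and operate on a ServerWebExchange):

```java
import java.util.Iterator;
import java.util.List;

// A plugin handles the request and delegates to the rest of the chain.
interface Plugin {
    String execute(String request, PluginChain chain);
}

class PluginChain {
    private final Iterator<Plugin> it;

    PluginChain(List<Plugin> plugins) {
        this.it = plugins.iterator();
    }

    // Hand the request to the next plugin, or stop when the chain is exhausted.
    String execute(String request) {
        return it.hasNext() ? it.next().execute(request, this) : request;
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        List<Plugin> plugins = List.of(
                (req, chain) -> chain.execute(req + " -> auth"),
                (req, chain) -> chain.execute(req + " -> divide"));
        System.out.println(new PluginChain(plugins).execute("request"));
        // prints: request -> auth -> divide
    }
}
```

Each plug-in decides what to do with the exchange and then passes control down the chain, which is why every request flows through AbstractSoulPlugin#execute for each enabled plug-in.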
Matching of rules
- One plug-in corresponds to multiple selectors, and one selector corresponds to multiple rules. The selector is the first filter of traffic, and the rule is the final filter.
- After a request matches a selector, it proceeds to rule matching.
```java
private RuleData matchRule(final ServerWebExchange exchange, final Collection<RuleData> rules) {
    return rules.stream()
            .filter(rule -> filterRule(rule, exchange))
            .findFirst()
            .orElse(null);
}

private Boolean filterRule(final RuleData ruleData, final ServerWebExchange exchange) {
    return ruleData.getEnabled()
            && MatchStrategyUtils.match(ruleData.getMatchMode(), ruleData.getConditionDataList(), exchange);
}

public class MatchStrategyUtils {

    /**
     * Match boolean.
     *
     * @param strategy          the strategy
     * @param conditionDataList the condition data list
     * @param exchange          the exchange
     * @return the boolean
     */
    public static boolean match(final Integer strategy, final List<ConditionData> conditionDataList, final ServerWebExchange exchange) {
        // get the match mode: "and" or "or"
        String matchMode = MatchModeEnum.getMatchModeByCode(strategy);
        // get the matching strategy implementation class via the SPI extension loader
        MatchStrategy matchStrategy = ExtensionLoader.getExtensionLoader(MatchStrategy.class).getJoin(matchMode);
        return matchStrategy.match(conditionDataList, exchange);
    }
}
```
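The "and"/"or" match modes essentially reduce to all-conditions-pass versus any-condition-passes over the rule's condition list. A simplified sketch under that assumption (the `AndMatchDemo` class and `Predicate`-based conditions are illustrative, not Soul's MatchStrategy implementations):

```java
import java.util.List;
import java.util.function.Predicate;

public class AndMatchDemo {

    // "and" semantics: every condition must match the request.
    static <T> boolean matchAnd(List<Predicate<T>> conditions, T exchange) {
        return conditions.stream().allMatch(c -> c.test(exchange));
    }

    // "or" semantics: at least one condition must match.
    static <T> boolean matchOr(List<Predicate<T>> conditions, T exchange) {
        return conditions.stream().anyMatch(c -> c.test(exchange));
    }

    public static void main(String[] args) {
        List<Predicate<String>> conditions = List.of(
                uri -> uri.startsWith("/http"),   // e.g. a path-prefix condition
                uri -> uri.endsWith("/order"));   // e.g. a path-suffix condition
        System.out.println(matchAnd(conditions, "/http/order")); // true
        System.out.println(matchAnd(conditions, "/http/user"));  // false
    }
}
```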
Load Balancing Policy
- The load balancing policy is implemented through configurable SPI extension points, and comes with three built-in implementations.
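The idea behind name-keyed SPI loading can be sketched with a toy loader. `ToyExtensionLoader` below is purely illustrative: it uses explicit registration, whereas Soul's real ExtensionLoader discovers implementations from configuration files on the classpath; the singleton-per-name caching is the shared idea:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Maps an extension name (e.g. "random", "roundRobin", "hash") to a factory,
// and caches one instance per name, mirroring how an SPI loader resolves
// implementations by key.
public class ToyExtensionLoader<T> {
    private final Map<String, Supplier<T>> factories = new ConcurrentHashMap<>();
    private final Map<String, T> instances = new ConcurrentHashMap<>();

    public void register(String name, Supplier<T> factory) {
        factories.put(name, factory);
    }

    public T getJoin(String name) {
        // lazily instantiate and cache: one singleton per extension name
        return instances.computeIfAbsent(name, k -> factories.get(k).get());
    }

    public static void main(String[] args) {
        ToyExtensionLoader<Runnable> loader = new ToyExtensionLoader<>();
        loader.register("random", () -> () -> System.out.println("random select"));
        loader.getJoin("random").run();
    }
}
```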
- Random: weighted random selection policy
```java
public DivideUpstream doSelect(final List<DivideUpstream> upstreamList, final String ip) {
    // calculate the sum of the weights of the back-end services
    int totalWeight = calculateTotalWeight(upstreamList);
    boolean sameWeight = isAllUpStreamSameWeight(upstreamList);
    if (totalWeight > 0 && !sameWeight) {
        return random(totalWeight, upstreamList);
    }
    // if all weights are the same, or the weights are 0, pick uniformly at random
    return random(upstreamList);
}

// Weighted random selection is like segmenting a line: an upstream with a
// larger weight occupies a longer segment of the line.
private DivideUpstream random(final int totalWeight, final List<DivideUpstream> upstreamList) {
    // weights differ and are greater than 0, so draw a random offset within the total weight
    int offset = RANDOM.nextInt(totalWeight);
    // determine which segment the random value falls on
    for (DivideUpstream divideUpstream : upstreamList) {
        offset -= getWeight(divideUpstream);
        if (offset < 0) {
            return divideUpstream;
        }
    }
    return upstreamList.get(0);
}

// uniform random selection
private DivideUpstream random(final List<DivideUpstream> upstreamList) {
    return upstreamList.get(RANDOM.nextInt(upstreamList.size()));
}
```
- RoundRobin: weighted round-robin (polling) policy
```java
public DivideUpstream doSelect(final List<DivideUpstream> upstreamList, final String ip) {
    String key = upstreamList.get(0).getUpstreamUrl();
    ConcurrentMap<String, WeightedRoundRobin> map = methodWeightMap.get(key);
    if (map == null) {
        methodWeightMap.putIfAbsent(key, new ConcurrentHashMap<>(16));
        map = methodWeightMap.get(key);
    }
    int totalWeight = 0;
    long maxCurrent = Long.MIN_VALUE;
    long now = System.currentTimeMillis();
    DivideUpstream selectedInvoker = null;
    WeightedRoundRobin selectedWRR = null;
    for (DivideUpstream upstream : upstreamList) {
        String rKey = upstream.getUpstreamUrl();
        WeightedRoundRobin weightedRoundRobin = map.get(rKey);
        int weight = getWeight(upstream);
        if (weightedRoundRobin == null) {
            weightedRoundRobin = new WeightedRoundRobin();
            weightedRoundRobin.setWeight(weight);
            map.putIfAbsent(rKey, weightedRoundRobin);
        }
        if (weight != weightedRoundRobin.getWeight()) {
            // weight changed
            weightedRoundRobin.setWeight(weight);
        }
        long cur = weightedRoundRobin.increaseCurrent();
        weightedRoundRobin.setLastUpdate(now);
        if (cur > maxCurrent) {
            maxCurrent = cur;
            selectedInvoker = upstream;
            selectedWRR = weightedRoundRobin;
        }
        totalWeight += weight;
    }
    if (!updateLock.get() && upstreamList.size() != map.size() && updateLock.compareAndSet(false, true)) {
        try {
            // copy -> modify -> update reference
            ConcurrentMap<String, WeightedRoundRobin> newMap = new ConcurrentHashMap<>(map);
            newMap.entrySet().removeIf(item -> now - item.getValue().getLastUpdate() > recyclePeriod);
            methodWeightMap.put(key, newMap);
        } finally {
            updateLock.set(false);
        }
    }
    if (selectedInvoker != null) {
        selectedWRR.sel(totalWeight);
        return selectedInvoker;
    }
    // should not happen here
    return upstreamList.get(0);
}
```
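The core of the algorithm above is smooth weighted round-robin: each candidate accumulates its weight per round, the largest accumulated value wins, and the winner is penalized by the total weight. A standalone sketch of just this core (the `SmoothWrrDemo` class is illustrative, not Soul's code) shows how weights 1 and 2 interleave:

```java
public class SmoothWrrDemo {
    static String[] names = {"A", "B"};
    static int[] weights = {1, 2};
    static long[] current = {0, 0};

    // One round of smooth weighted round-robin selection.
    static String select() {
        int total = 0;
        long max = Long.MIN_VALUE;
        int selected = -1;
        for (int i = 0; i < names.length; i++) {
            current[i] += weights[i];   // accumulate each candidate's weight
            total += weights[i];
            if (current[i] > max) {
                max = current[i];
                selected = i;
            }
        }
        current[selected] -= total;     // penalize the winner by the total weight
        return names[selected];
    }

    public static void main(String[] args) {
        StringBuilder seq = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            seq.append(select());
        }
        System.out.println(seq); // prints: BABBAB
    }
}
```

The sequence B, A, B, B, A, B gives B twice as many picks as A while spreading them evenly, which is the "smooth" property this scheme is known for.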
- Hash: consistent hashing policy
```java
public DivideUpstream doSelect(final List<DivideUpstream> upstreamList, final String ip) {
    final ConcurrentSkipListMap<Long, DivideUpstream> treeMap = new ConcurrentSkipListMap<>();
    for (DivideUpstream address : upstreamList) {
        for (int i = 0; i < VIRTUAL_NODE_NUM; i++) {
            long addressHash = hash("SOUL-" + address.getUpstreamUrl() + "-HASH-" + i);
            treeMap.put(addressHash, address);
        }
    }
    long hash = hash(String.valueOf(ip));
    SortedMap<Long, DivideUpstream> lastRing = treeMap.tailMap(hash);
    if (!lastRing.isEmpty()) {
        return lastRing.get(lastRing.firstKey());
    }
    return treeMap.firstEntry().getValue();
}

private static long hash(final String key) {
    // md5 digest of the key
    MessageDigest md5;
    try {
        md5 = MessageDigest.getInstance("MD5");
    } catch (NoSuchAlgorithmException e) {
        throw new SoulException("MD5 not supported", e);
    }
    md5.reset();
    byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
    md5.update(keyBytes);
    byte[] digest = md5.digest();
    // hash code, truncated to 32 bits
    long hashCode = ((long) (digest[3] & 0xFF) << 24)
            | ((long) (digest[2] & 0xFF) << 16)
            | ((long) (digest[1] & 0xFF) << 8)
            | (digest[0] & 0xFF);
    return hashCode & 0xffffffffL;
}
```
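The ring structure above can be sketched independently to show the key property: the same client IP always lands on the same upstream. `ConsistentHashDemo` below is illustrative; it uses `String.hashCode` as a stand-in for Soul's MD5-based hash, and the addresses are made up:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashDemo {
    static final int VIRTUAL_NODE_NUM = 5;

    // Build the hash ring: each upstream is mapped to several virtual nodes
    // so that keys spread evenly across upstreams.
    static TreeMap<Long, String> buildRing(String... upstreams) {
        TreeMap<Long, String> ring = new TreeMap<>();
        for (String address : upstreams) {
            for (int i = 0; i < VIRTUAL_NODE_NUM; i++) {
                ring.put(hash("SOUL-" + address + "-HASH-" + i), address);
            }
        }
        return ring;
    }

    // Walk clockwise from the key's hash; wrap around to the first node if needed.
    static String select(TreeMap<Long, String> ring, String ip) {
        SortedMap<Long, String> tail = ring.tailMap(hash(ip));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    // stand-in hash; Soul uses an MD5 digest truncated to 32 bits
    static long hash(String key) {
        return key.hashCode() & 0xffffffffL;
    }

    public static void main(String[] args) {
        TreeMap<Long, String> ring = buildRing("192.168.1.1:8188", "192.168.1.2:8189");
        // the same client ip always maps to the same upstream
        System.out.println(select(ring, "10.0.0.7"));
    }
}
```

Because only the keys between a removed node and its predecessor move, adding or removing an upstream remaps only a fraction of client IPs, unlike a plain modulo hash.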
Conclusion
- Having studied the usage and source code of the Divide plug-in, and comparing it with practical experience: the system our company currently uses splits traffic at two levels, with a dedicated platform for applying for domain names and assigning machine weights, and an API gateway platform for registering and forwarding back-end service APIs. Soul's load-balancing strategy is more flexible, since it can be precise down to the request path;
- When analyzing source code, focus on one piece at a time and treat the rest as a black box to analyze later; this lets you control the pace of learning effectively;
Series
- Soul source learning [1] – preliminary exploration