1. Load Balancing
1.1 Load Balancing Policies in Dubbo
- Random LoadBalance (default): random selection, with the probability of choosing each provider set by its weight. Collisions on a single cross-section are more likely, but the distribution becomes more even as call volume grows; with weighted probabilities the distribution also evens out, which makes it easy to adjust provider weights dynamically.
- RoundRobin LoadBalance: round robin, with the rotation ratio set according to the configured weights. It has the problem of slow providers accumulating requests: for example, if the second machine is slow but not down, requests routed to it get stuck, and over time more and more requests pile up on that machine.
- LeastActive LoadBalance: least active calls first; among providers with the same active count, one is chosen at random. Slower providers receive fewer requests, because a slower provider shows a larger difference between the counts taken before and after an invocation (i.e. it has more in-flight requests).
- ConsistentHash LoadBalance: consistent hashing, so requests with the same parameters are always sent to the same provider. When a provider goes down, the requests originally sent to it are spread over the other providers based on virtual nodes, without drastic changes. By default only the first parameter is hashed, and 160 virtual nodes are used; both can be changed through configuration (see the sketch after this list).
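As a hedged illustration of the two ConsistentHash knobs mentioned above, the snippet below uses Dubbo's hash.arguments and hash.nodes parameters; the interface name is a placeholder, not taken from this project.
<dubbo:service interface="com.example.UserService" loadbalance="consistenthash">
    <!-- hash on the first and second method arguments instead of only the first -->
    <dubbo:parameter key="hash.arguments" value="0,1" />
    <!-- use 320 virtual nodes instead of the default 160 -->
    <dubbo:parameter key="hash.nodes" value="320" />
</dubbo:service>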
1.2 Configuration
1.2.1 XML Configuration
Load balancing can be configured on either the provider side or the consumer side, at the service level or the method level. Choose one of the following:
- Provider service level
<dubbo:service interface="..." loadbalance="roundrobin" />
- Provider method level
<dubbo:service interface="...">
<dubbo:method name="..." loadbalance="roundrobin"/>
</dubbo:service>
- Client service level
<dubbo:reference interface="..." loadbalance="roundrobin" />
- Client method level
<dubbo:reference interface="...">
<dubbo:method name="..." loadbalance="roundrobin"/>
</dubbo:reference>
1.2.2 Annotation Configuration
- Provider configuration, via the loadbalance attribute of @Service
@Service(loadbalance = "roundrobin")
public class UserServiceImpl implements UserService {}
- Consumer configuration, via the loadbalance attribute of @Reference
@Reference(loadbalance = "roundrobin")
private UserService userService;
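If the Dubbo Spring Boot starter is used, the same defaults can presumably also be set globally in application.properties. This is a sketch under the assumption that the starter binds dubbo.provider.* and dubbo.consumer.* properties; the values are illustrative.
# assumed provider-side application.properties: default load balancing for all exposed services
dubbo.provider.loadbalance=roundrobin
# assumed consumer-side application.properties: default load balancing for all references
dubbo.consumer.loadbalance=roundrobin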
2. Cluster Fault Tolerance
2.1 Fault Tolerance in Clusters
When an invocation fails in a cluster, Dubbo provides several fault-tolerance schemes; failover with retries is used by default. The available strategies are listed in 2.2 below, followed by a combined configuration sketch.
2.2 Fault Tolerance Strategies in Dubbo
- Failover Cluster: automatic failover. If the invocation fails, another server is retried. Typically used for read operations, but retries introduce extra latency. The number of retries (not counting the first call) can be set with retries="2", on either the provider side or the consumer side:
<!-- Provider configuration -->
<dubbo:service retries="2" />
<!-- Consumer configuration -->
<dubbo:reference retries="2" />
- Failfast Cluster: fail fast. The invocation is made only once; if it fails, an error is reported immediately. Typically used for non-idempotent write operations, such as inserting a new record.
- Failsafe Cluster: fail safe. If an exception occurs, it is simply ignored. Typically used for operations such as writing audit logs.
- Failback Cluster: fail back with automatic recovery. Failed requests are recorded in the background and resent periodically. Typically used for message-notification operations.
- Forking Cluster: calls several servers in parallel and returns as soon as one of them succeeds. Typically used for read operations with strict real-time requirements, at the cost of more service resources. The maximum number of parallel calls can be set with forks="2".
- Broadcast Cluster: broadcasts the call to all providers, invoking them one by one; if any one of them reports an error, the whole call reports an error. Typically used to notify all providers to update local resources such as caches or logs.
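As a hedged sketch of how the strategy and retry settings combine, the snippet below configures failover with two retries at the reference level and overrides the retry count for one method; the interface package is an assumption, while the method name reuses findUserAddressList from the Hystrix example later in this article.
<dubbo:reference interface="com.example.UserService" cluster="failover" retries="2">
    <!-- this particular method tolerates one more retry than the reference default -->
    <dubbo:method name="findUserAddressList" retries="3" />
</dubbo:reference>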
2.3 Configuration
2.3.1 XML Configuration
- Service Provider
<dubbo:service cluster="failsafe" />
- Service consumer
<dubbo:reference cluster="failsafe" />
2.3.2 Annotation Configuration
- Service Provider
@Service(cluster = "failsafe")
- Service consumer
@Reference(cluster = "failsafe")
3. Spring Boot Integration with the Hystrix Circuit Breaker
3.1 Hystrix Overview
Hystrix is an open-source library for handling latency and fault tolerance in distributed systems. It helps a system cope with failures more gracefully by isolating calls to dependent services when they fail, preventing cascading failures, and providing a fallback mechanism. A circuit breaker itself works like a switching device: when a service unit fails, the circuit breaker's failure monitoring (similar to a blown fuse) returns an expected, manageable fallback response to the caller, instead of letting the caller wait for a long time or handle an exception it cannot deal with. This keeps the caller's threads from being tied up unnecessarily and avoids an avalanche of failures across the distributed system.
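The thresholds that drive this circuit-breaker behaviour can be tuned per command. The sketch below is illustrative only: the property names are standard Hystrix command properties, but the method, the fallback, and the values are assumptions rather than settings taken from this project.
// Hypothetical Spring bean method; assumes the javanica annotations pulled in by the Hystrix starter.
@HystrixCommand(
        fallbackMethod = "queryFallback",
        commandProperties = {
                // give up on the remote call after 2 seconds
                @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "2000"),
                // only evaluate the circuit once 20 requests have arrived in the rolling window
                @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "20"),
                // open the circuit when at least 50% of those requests fail
                @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
                // keep the circuit open for 5 seconds before letting a trial request through
                @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "5000")
        })
public List<Address> queryAddresses(String userId) {
    return userService.findUserAddressList(userId);
}

// Fallback: same parameters and return type as the command method.
public List<Address> queryFallback(String userId) {
    return new ArrayList<Address>();
}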
3.2 Integrate Hystrix for fault tolerance
3.2.1 Service Provider
- Add Hystrix starter dependencies in pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Finchley.SR1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
- Enable Hystrix on the startup class with @EnableHystrix
@EnableHystrix
@EnableDubbo
@SpringBootApplication
public class UserApplication {
    public static void main(String[] args) {
        SpringApplication.run(UserApplication.class, args);
    }
}
- Add the @HystrixCommand annotation to the service method exposed by the provider
@Service
public class UserServiceImpl implements UserService {
    /**
     * Returns all shipping addresses by user ID
     * @param userId
     * @return
     */
    @HystrixCommand
    public List<Address> findUserAddressList(String userId) {
        // Simulate a dao query against the database
        System.out.println("UserServiceImpl....");
        List<Address> list = new ArrayList<Address>();
        list.add(new Address(1, "Beijing", "1", "Zhang", "12306"));
        list.add(new Address(2, "Wuhan", "2", "Bill", "18170"));
        return list;
    }
}
3.2.2 Service Consumer
- Add Hystrix starter dependencies in pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Finchley.SR1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
- Enable Hystrix on the startup class with @EnableHystrix
@EnableHystrix
@EnableDubbo
@SpringBootApplication
public class OrderApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}
- Add the @HystrixCommand annotation, with a fallbackMethod, to the method that makes the remote call
@RestController
@RequestMapping("/order")
public class OrderController {
    @Reference
    private UserService userService;

    @HystrixCommand(fallbackMethod = "nativeMethod")
    @RequestMapping("/save")
    public List<Address> save() {
        List<Address> list = userService.findUserAddressList("1");
        System.out.println(list);
        // Call the business...
        return list;
    }

    // Invoked when the remote method call fails
    public List<Address> nativeMethod() {
        List<Address> list = new ArrayList<Address>();
        list.add(new Address(1, "Default address", "1", "Default consignee", "Default phone"));
        return list;
    }
}
4. Zookeeper Cluster
4.1 Zookeeper Cluster Overview
4.1.1 Why Set Up a Zookeeper Cluster
Most distributed applications need a master, coordinator, or controller to manage their physically distributed child processes. Today most systems develop their own private coordinators; this lacks a general mechanism, wastes effort on repeatedly writing coordinators, and makes it hard to build a coordinator that is both general and scalable. Zookeeper provides a general distributed lock service to coordinate distributed applications; in other words, Zookeeper is a coordination service for distributed applications. When Zookeeper acts as the registry, both providers and consumers have to access it, so under heavy concurrency a single instance inevitably becomes a bottleneck. A Zookeeper cluster solves this problem.
4.1.2 Leader election
The leader election is the most important and most complicated step when a Zookeeper cluster starts up. So what is a leader election, why does Zookeeper need one, and how does it work? The idea is easy to understand: like a presidential election, every node casts a vote, and whoever wins a majority is elected. In a Zookeeper cluster, each node votes, and a node becomes the leader once it obtains more than half of the votes; in a 3-node cluster, for example, a node needs at least 2 votes.
4.2 Creating a Zookeeper Cluster
4.2.1 Setup Requirements
A real cluster should be deployed on different servers, but for testing, starting a dozen or more virtual machines at the same time would be too much for memory, so we usually build a pseudo cluster instead: all instances run on one virtual machine and are distinguished by port. Here we set up a three-node Zookeeper pseudo cluster.
4.2.2 Preparations
(1) Install the JDK.
(2) Upload the Zookeeper archive to the server (Docker can also be used).
(3) Extract Zookeeper, create a data directory, and rename the zoo_sample.cfg file under conf to zoo.cfg.
(4) Create the /usr/local/zookeeper-cluster directory and copy the extracted Zookeeper files into /usr/local/zookeeper-cluster/zookeeper-1, /usr/local/zookeeper-cluster/zookeeper-2, and /usr/local/zookeeper-cluster/zookeeper-3:
[root@localhost ~]# mkdir /usr/local/zookeeper-cluster
[root@localhost ~]# cp -r zookeeper-3.4.6 /usr/local/zookeeper-cluster/zookeeper-1
[root@localhost ~]# cp -r zookeeper-3.4.6 /usr/local/zookeeper-cluster/zookeeper-2
[root@localhost ~]# cp -r zookeeper-3.4.6 /usr/local/zookeeper-cluster/zookeeper-3
(5) Configure the dataDir in each Zookeeper's zoo.cfg and set the clientPort to 2181, 2182, and 2183 respectively.
- Modify /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
clientPort=2181
dataDir=/usr/local/zookeeper-cluster/zookeeper-1/data
- Modify /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg
clientPort=2182
dataDir=/usr/local/zookeeper-cluster/zookeeper-2/data
- Modify /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg
clientPort=2183
dataDir=/usr/local/zookeeper-cluster/zookeeper-3/data
4.2.3 Configuring a Cluster
(1) Create a myid file in the data directory of each Zookeeper instance, containing 1, 2, and 3 respectively. This file records the ID of each server.
---- Tip ---- If the text file you need to create has simple content, you can create it quickly with the echo command, in the format: echo content > file path. For example, to give the first Zookeeper the ID 1, run:
echo 1 > /usr/local/zookeeper-cluster/zookeeper-1/data/myid
- Configure the cluster server IP list in the zoo.cfg of every Zookeeper instance:
server.1=192.168.25.140:2881:3881
server.2=192.168.25.140:2882:3882
server.3=192.168.25.140:2883:3883
Format: server.serverID=server IP address:inter-server communication port:inter-server election port
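Putting the pieces together, a complete zoo.cfg for the first instance might look like the sketch below; the tickTime, initLimit, and syncLimit values are the defaults from zoo_sample.cfg and are assumptions rather than values taken from this setup.
# sketch of /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-cluster/zookeeper-1/data
clientPort=2181
# cluster member list: serverID=IP:communication port:election port
server.1=192.168.25.140:2881:3881
server.2=192.168.25.140:2882:3882
server.3=192.168.25.140:2883:3883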
4.2.4 Starting a Cluster
(1) Starting the cluster means starting each instance separately and then checking its status.
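A sketch of the commands, assuming the standard zkServer.sh script and the directory layout created above:
# start the three instances
/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh start
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh start
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh start
# query the status (Mode) of each instance
/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh status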
- Querying the first instance shows Mode: follower
- Querying the second instance shows Mode: leader
- Querying the third instance shows Mode: follower
4.2.5 Simulating Cluster Failures
(1) First, test what happens when a follower goes down.
- Shut down server 3 and observe servers 1 and 2: their status does not change.
Conclusion: in a 3-node cluster, the cluster keeps working normally when a single follower goes down.
(2) Next, also stop server 1 (the other follower) and check the status of server 2 (the leader): it has stopped running as well.
Conclusion: in a 3-node cluster, when both followers are down, the leader stops working too, because the number of machines still running is no longer more than half of the cluster.
(3) Start server 1 again: server 2 resumes working normally and is still the leader.
(4) Start server 3 again as well, then stop server 2 (the leader) and observe servers 1 and 3: a new leader is elected. Conclusion: when the leader of the cluster goes down, the remaining servers automatically hold an election and produce a new leader.
(5) Finally, restart server 2. Will it become the leader again when it starts? Let's see what happens.
Conclusion: once a new leader has been produced, a server that rejoins the cluster does not affect the current leader.
4.3 Dubbo Connects to the Zookeeper Cluster
- Modify the Spring configuration of both the service provider and the service consumer:
<!-- Specify the registry address -->
<dubbo:registry protocol="zookeeper" address="192.168.25.140:2181,192.168.25.140:2182,192.168.25.140:2183" />
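Since the earlier sections use Spring Boot with annotations, the same cluster address can presumably also be set in application.properties when the Dubbo Spring Boot starter is used. This is a sketch under that assumption, passing the remaining cluster nodes through the backup parameter of the registry URL.
# assumed Spring Boot equivalent: register against the whole Zookeeper cluster
dubbo.registry.address=zookeeper://192.168.25.140:2181?backup=192.168.25.140:2182,192.168.25.140:2183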