Introduction

I originally didn't plan to write on this topic, but so that certain readers can grow faster, I'm squeezing in a piece on load balancing. Speaking of load balancing, you've probably heard of layer 3, layer 4, and layer 7 load balancing. For example, nginx works at the application layer, which happens to be layer 7, so nginx can also be called a layer 7 load balancer. I originally wanted to work my way up the layers slowly, starting from the most basic network protocols, but on second thought that would take too long. So I changed my mind and will talk about the evolution of the load balancing architecture directly, so that you can describe the end result in an interview, because this is basically what load balancing architecture looks like today.

Body

DNS

In the beginning, the application has only one web server. What you want is simple: type guduyan.com and reach that server!

That's easy: as long as DNS maps the domain name to your server's IP, you can access it! The process is shown below.

Once you add a second web server, you can do DNS round-robin load balancing simply by adding another record to DNS. As shown in the figure below.
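To make that concrete, here is a hedged sketch of what the DNS side might look like (the addresses and TTL are made-up values, not from the original article): DNS round-robin just means publishing several A records for the same name, and the DNS server rotates the order in its answers.

```
; hypothetical zone entries for guduyan.com with two web servers
guduyan.com.   300   IN   A   203.0.113.11   ; web server 1
guduyan.com.   300   IN   A   203.0.113.12   ; web server 2
```

Note that plain DNS round-robin does no health checking: if one server dies, DNS keeps handing out its address until the record is removed and caches expire, which is one reason the architecture keeps evolving below.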

Nginx+DNS

Now suppose the requirements grow a bit. Your system is split into two functional modules: a user system and an order system. You want guduyan.com/user/ to go to the user system, and guduyan.com/order/ to go to the order system.

DNS alone cannot route by URL path, so DNS + nginx is used for load balancing: DNS resolves the domain name to nginx, and nginx forwards each request to the right backend based on its path. As shown in the figure below.
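A minimal nginx config sketch of that path-based routing might look like the following; the upstream names, IP addresses, and ports are assumptions for illustration only.

```nginx
# minimal sketch: route by URL path to two backend pools (addresses are hypothetical)
upstream user_system {
    server 10.0.0.21:8080;
    server 10.0.0.22:8080;   # nginx round-robins across these servers by default
}

upstream order_system {
    server 10.0.0.31:8080;
}

server {
    listen 80;
    server_name guduyan.com;

    location /user/ {
        proxy_pass http://user_system;   # requests under /user/ go to the user system
    }

    location /order/ {
        proxy_pass http://order_system;  # requests under /order/ go to the order system
    }
}
```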

PS: nginx can also do dynamic/static content separation, serving static files itself and proxying dynamic requests to the backend. You get the idea!

What if the access pressure on the system increases further and nginx goes down? How do you give nginx a hot standby? This is where Keepalived comes in: run two nginx nodes as a small cluster, deploy Keepalived on each of them, and bind them to the same virtual IP, so that if one node crashes the other automatically takes over, as shown in the figure below.
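As a rough sketch (the interface name, router id, priority, and VIP below are assumptions, not values from the article), the Keepalived configuration on the master nginx node could look like this; the backup node uses the same file with state BACKUP and a lower priority.

```
# /etc/keepalived/keepalived.conf on the master nginx node (hypothetical values)
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby node
    interface eth0          # NIC that will carry the virtual IP
    virtual_router_id 51    # must be identical on both nodes
    priority 100            # standby node uses a lower value, e.g. 90
    advert_int 1            # VRRP advertisement interval, in seconds
    virtual_ipaddress {
        10.0.0.100          # the shared virtual IP that DNS points to
    }
}
```

Clients and DNS only ever see the virtual IP; which physical nginx box answers is decided by VRRP, so a crash on one node simply moves the VIP to the other.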

LVS+Nginx+DNS

Then, as the system keeps growing, you will find that nginx can no longer keep up on its own! Nginx works at layer 7 of the network, so it can apply traffic policies based on the HTTP request itself, such as the domain name or the URL path. LVS works at layer 4, has strong load capacity and high performance (reportedly around 60% of an F5), consumes little memory and CPU, and is stable and reliable. It forwards packets inside the Linux kernel and generates no application traffic of its own. The amount of concurrency it can support depends mainly on the machine's memory; generally speaking, hundreds of thousands of concurrent connections are not a problem. Nginx + LVS is basically the standard load balancing architecture today! PS: Think about why nginx and LVS are used together, and pay attention to the evolution above; interviewers will ask! Note that for a relatively small website (daily PV under 10 million), nginx alone is completely fine.
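To make the LVS layer concrete, here is a hedged sketch using ipvsadm; the VIP, real-server addresses, and the choice of direct-routing mode are assumptions for illustration, not the author's exact setup.

```
# create a TCP virtual service on the VIP with round-robin scheduling
ipvsadm -A -t 10.0.0.100:80 -s rr

# register the two nginx nodes as real servers behind the VIP (direct routing mode)
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.11:80 -g
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.12:80 -g
```

Because IPVS forwards inside the kernel at layer 4, LVS never parses HTTP at all; that is exactly why nginx still sits behind it to handle the layer 7 routing, which answers the PS above.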

So, the architecture diagram in this case looks like this

One might wonder why the nginx layer no longer uses Keepalived for hot standby. The main reason is that in this architecture nginx is no longer a single point of failure: if one nginx instance fails, LVS forwards the traffic to another available nginx!

Finally, multiple LVS cluster addresses are configured on the DNS side to cope with PV in the billions. As shown below.
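At the top layer this is again plain DNS round-robin, except the A records now point at LVS virtual IPs instead of individual web servers (the addresses below are hypothetical).

```
; hypothetical zone entries: one A record per LVS cluster VIP
guduyan.com.   300   IN   A   198.51.100.10   ; VIP of LVS cluster 1
guduyan.com.   300   IN   A   198.51.100.20   ; VIP of LVS cluster 2
```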

At this point, there is no need to extend the LVS layer with new nodes. This architecture can already withstand hundreds of millions of PV, assuming your application itself holds up, of course! In addition, if the budget allows, it is feasible to replace LVS with F5 hardware load balancers.

Conclusion

OK, this architecture can withstand PV in the tens of millions and beyond. You can use this article as a reference when an interviewer asks how you would design a high-concurrency architecture.