Architecture Evolution from a High-Concurrency Perspective
Standalone application (WebSite)
Gradually, as the number of users grew, a problem appeared: one server was not enough, and it was not stable. Challenge: high availability / high concurrency. Solution: prepare a second server and form a cluster.
Simple Cluster (WebSite)
With a cluster, if 10 users used to visit one server, the load is now split evenly: 5 users visit one server and 5 visit the other. The user experience improves slightly.
Benefit: simple high availability. If one of the servers goes down, user access is not affected, because users can still reach the other, healthy server.
Problem: with two servers there are two external IP addresses / domain names at the same time. Solution: add a proxy server, so users no longer need to remember the IP address or domain name of two servers, only one.
Load Balancing Cluster (WebSite)
After the proxy server is added, users no longer need to remember the IP addresses or domain names of two servers, only the proxy server's. The proxy server decides whether each request goes to server A or server B; the split is determined by the weight settings in nginx.
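The weight settings mentioned above live in nginx's `upstream` block. A minimal sketch, in which the server addresses and weights are assumptions for illustration:

```nginx
# Hypothetical upstream block: addresses and weights are illustrative.
upstream backend {
    server 192.168.1.11 weight=3;  # server A gets ~3 of every 4 requests
    server 192.168.1.12 weight=1;  # server B gets ~1 of every 4 requests
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;  # the proxy forwards to A or B by weight
    }
}
```

With equal weights (or no `weight` at all) nginx falls back to plain round-robin, which is the 5-users-each split described earlier.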
Advantage: the service is highly available, and users only need to remember one IP address or domain name (the proxy server's).
Problem: data storage is still a weak point. The servers have no separation of roles, so if a disk is corrupted, the data on it is lost; the data is not safe.
Solution: borrow the layering idea of Java's MVC design and extract the data tier. Servers A and B only handle the requests distributed by the proxy, not data storage; the actual data lives on a dedicated database server (application/database separation).
MVC cluster (WebSite)
In the MVC cluster, the proxy server still distributes users to application server A or B, but the application servers now only run the application logic; all data is read from and written to a separate database server.
Advantage: application and data are separated, so a disk failure on an application server no longer loses data, and the application tier can be scaled independently of storage.
Problem: every read and write goes through the single database server, which becomes both a performance bottleneck and a single point of failure.
Solution: a relational database follows the four ACID properties: atomicity, consistency, isolation, durability. Split the database tier as well: a master server handles writes and a slave server handles reads, i.e. master-slave replication with read/write separation, while keeping the data consistent.
Database Cluster
Users are distributed to an application server through the proxy, and the application servers read and write the database servers. Now suppose there are two database servers, A and B. If a write goes through one application server into database server A, but a later read is distributed to an application server that queries database server B, then replication must keep the two database servers holding the same data.
Benefit: a relational database follows the four ACID properties (atomicity, consistency, isolation, durability). The data tier is now split: the master server is responsible for writes and the slave server for reads, which preserves data consistency through master-slave read/write separation.
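The read/write separation above can be sketched on the application side: writes are routed to the master, reads are spread across the slaves. The connection strings below are placeholders, not real servers:

```python
import itertools

# Placeholder connection strings -- assumptions for illustration only.
WRITE_DSN = "mysql://master:3306/app"           # master: all writes
READ_DSNS = ["mysql://slave1:3306/app",         # slaves: reads only
             "mysql://slave2:3306/app"]

_read_cycle = itertools.cycle(READ_DSNS)        # round-robin over the slaves

def pick_dsn(sql):
    """Route writes to the master and spread reads across the slaves."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    if verb in ("INSERT", "UPDATE", "DELETE"):
        return WRITE_DSN
    return next(_read_cycle)
```

Real deployments usually push this routing into a middleware layer (e.g. a database proxy) rather than application code, but the decision rule is the same.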
Problem: during a large promotion such as Double 11, the number of users spikes sharply. Database servers A and B can no longer keep up with user access, because concurrency at that moment is extremely high.
Solution: in this case, hot data is not fetched from the database on every request but served directly from a cache. Cached data has an expiration time; when an entry expires, the next request falls back to the database and the cache is refreshed from it.
Message Queue Architecture
Service-Oriented Architecture (SOA)
Full Service Cluster (WebSite)
A real cluster places both the application servers and the database servers behind a firewall.
Load: distributing requests across different servers. Balancing: keeping the number of requests per server roughly equal.
Hot backup: the database is backed up automatically, on a schedule or when a size threshold is reached, without stopping the database.
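For MySQL, a scheduled hot backup can be as simple as a crontab entry; the paths, credentials, and database name below are assumptions for illustration:

```shell
# Hypothetical crontab entry: dump the "app" database every day at 02:00.
# --single-transaction takes a consistent InnoDB snapshot without
# stopping the database or blocking writers.
0 2 * * * mysqldump --single-transaction -u backup -p'secret' app > /backup/app-$(date +\%F).sql
```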
Benefits: users reach the application servers only through the proxy server. Which external addresses the internal network accepts is decided by the firewall, and the firewall allows only the proxy server's external IP, so the application servers are protected. The two proxy servers at the front are not a load-sharing cluster: one is the active load-balancing proxy, and the other is a hot standby.
Small company server architecture
Accessing by IP means the user's input goes straight to the server that IP points to. Accessing by domain name means the request is first handled by a DNS server, whose database maps each domain name to its corresponding IP address; acting as a middleman, the DNS server lets the request be forwarded to the server at that IP. So access by domain name is, in essence, still access by IP. The example company's architecture diagram then looks like this.
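The name-to-IP lookup described above is a single resolver call; a minimal sketch:

```python
import socket

# Accessing by domain name is, underneath, accessing by IP: the resolver
# maps the name to an address before the TCP connection is ever made.
def resolve(domain):
    """Return the IPv4 address a domain name resolves to."""
    return socket.gethostbyname(domain)

print(resolve("localhost"))  # 127.0.0.1 on a typical machine
```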
Server architecture for mid-sized companies
All three servers run the same code. The page offers three user entrances, each corresponding to one server; if a user cannot get in through one entrance, they switch to another.
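The "multiple entrances" idea amounts to client-side failover: try each entrance in turn and use the first one that responds. A sketch in which the URLs and the probe function are assumptions:

```python
# Hypothetical entrance URLs, one per server.
ENTRANCES = ["http://entry-a.example.com",
             "http://entry-b.example.com",
             "http://entry-c.example.com"]

def first_available(entrances, is_up):
    """Return the first entrance for which is_up(url) is True, else None.

    is_up would be a real health probe (e.g. an HTTP request with a
    timeout); here it is injected so the routing logic stands alone."""
    for url in entrances:
        if is_up(url):
            return url
    return None
```

This pushes failover onto the client; the load-balancing architectures above move that decision to a proxy instead, which is why they scale better.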
1. Load balancing
2. Redis cache
3. Read/write separation
Large Company Server Architecture 1
As the company grows, it develops more and more products with broader and broader functionality, and the server architecture becomes correspondingly complex.
1. Load balancing
2. Redis cache
3. Read/write separation
This architecture is overly complex; it is a direct evolution of the mid-sized company architecture. The following design, Large Company Server Architecture 2, is recommended instead.
Large Company Server Architecture 2
Middle-platform encapsulation
This may be hard to explain at first, so start from the architecture diagram: the product there can be viewed as one entire system; call it system A. We have since developed other products and pages, which can likewise be viewed as new systems B and C. Systems A, B, and C share many duplicated parts, so we pull those duplicated parts out into one place that all three systems call. That extracted, shared part is an encapsulated microservice. Microservices are a form of modular development: the product's functionality is distilled into modules, and assembling the modules at development time saves a great deal of work. To implement a single module, you may again need something like a server cluster inside it.
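The "pull out the duplicated parts" idea can be sketched as one shared service that systems A, B, and C all call instead of each keeping its own copy. Every name below is an illustrative assumption:

```python
class UserService:
    """Shared (micro)service: one implementation of user lookup,
    called by every system instead of being duplicated in each."""

    def __init__(self):
        self._users = {1: "alice"}   # stand-in for the service's own storage

    def get_user(self, uid):
        return self._users.get(uid)

# One shared instance; in reality this would be a remote service behind
# an API, not an in-process object.
shared = UserService()

def system_a_profile_page(uid):
    return f"A: profile of {shared.get_user(uid)}"

def system_b_order_page(uid):
    return f"B: orders for {shared.get_user(uid)}"
```

In a real middle platform the call would cross the network (HTTP/RPC), but the structural point is the same: one implementation, many consumers.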
K8S cluster
A K8S cluster consists of two kinds of nodes: master nodes and worker (node) nodes.
The master node is mainly responsible for controlling the cluster: pod scheduling, token management, and other control-plane functions.
Worker nodes do the work of starting and managing containers. Do not deploy a master and a worker node on the same machine.
The architecture diagram above shows one master node and two worker nodes. In actual production, however, multiple master nodes must be deployed for high availability.
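The master's scheduling role is easiest to see from a Deployment manifest: the control plane places the requested replicas onto the worker nodes. A minimal sketch, in which the names and image are assumptions:

```yaml
# Minimal Deployment: the scheduler on the master places these three
# pod replicas across the worker nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```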
K8S High availability cluster
In a production environment, a Kubernetes cluster must be highly available, which means making the master node and its core components highly available; if the master node fails, the cluster becomes uncontrollable.
Concrete implementation: