There is a lot of web-architecture sharing on the Internet. Much of it is written from the operations and infrastructure perspective (stacking machines, building clusters) and is so buried in technical detail that the average developer struggles to follow it.
The first part of this article focuses on scaling large-scale web infrastructure; the second part looks at the extension and evolution of web architecture from the application's point of view.
In the grassroots period, the site is built quickly and goes online. Usually it is just testing the water: there is no real user base yet, and money and resources are very limited.
Once there is a certain amount of business and a user base, you want the site to respond faster. So, the cache appears.
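The most common pattern at this stage is cache-aside: check the cache first, and only hit the database on a miss. Here is a minimal sketch in Python, where an in-memory dict stands in for a real cache server such as Memcached or Redis (the `db_loader` callback and TTL value are illustrative assumptions, not part of the original article):

```python
import time

class CacheAside:
    """Minimal cache-aside: check the cache first, fall back to the DB on a miss."""
    def __init__(self, db_loader, ttl_seconds=60):
        self._load = db_loader          # function that reads from the real database
        self._ttl = ttl_seconds
        self._store = {}                # stand-in for Memcached/Redis

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None:
            value, expires = entry
            if time.time() < expires:   # cache hit, still fresh
                return value
        value = self._load(key)         # cache miss: hit the database
        self._store[key] = (value, time.time() + self._ttl)
        return value

    def invalidate(self, key):
        """Drop a key after a write so readers do not see stale data."""
        self._store.pop(key, None)
```

The win is that repeated reads of hot data never touch the database; the cost is that every write path must remember to invalidate.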
The market response is good and the user count grows every day; the database is reading and writing madly, and we gradually find that a single server cannot keep up. So, we decide to separate the DB from the application.
Eventually a single database also cannot hold up, so the usual next step is read-write separation. This works because much of the Internet has a "read much more than write" access pattern. The number of slave (replica) instances depends on the read-write ratio of the business.
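The application (or a middleware layer) then needs to route each statement: writes go to the master, reads are spread over the slaves. A minimal sketch of such a router, with naive SQL classification (real middleware also handles transactions, replication lag, and failover):

```python
import itertools

class ReadWriteRouter:
    """Send writes to the master, round-robin reads across the slaves."""
    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)

    def route(self, sql):
        # Naive classification: anything that is not a SELECT goes to the master.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._slaves)
        return self.master
```

Note the caveat baked into the comment: a statement inside a transaction, or a read that must see its own just-committed write, usually has to go to the master too.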
With the database tier relieved, the bottleneck shifts to the application tier. Traffic keeps increasing, the code written by the early programmers is poor, and staff turnover makes it hard to maintain and optimize. So the most common remedy is still to "stack machines".
Anyone can add machines; the key is whether adding them actually helps, because it can also introduce new problems. Two very common ones: page-output and local-cache consistency, and where to store sessions.
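The session problem is that once requests are load-balanced across many app servers, a session kept in one server's local memory is invisible to the others. The usual fix is a centralized session store (Redis or Memcached in production). A sketch under that assumption, with a plain dict standing in for the shared store:

```python
import uuid

class SharedSessionStore:
    """Sessions kept in one central store (e.g. Redis in production) so that
    any app server behind the load balancer can serve any user's request."""
    def __init__(self):
        self._sessions = {}   # stand-in for a networked store shared by all servers

    def create(self, user_data):
        sid = uuid.uuid4().hex
        self._sessions[sid] = dict(user_data)
        return sid

    def load(self, sid):
        return self._sessions.get(sid)

class AppServer:
    """Each server holds no session state of its own; it only knows the store."""
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def handle(self, sid):
        session = self.store.load(sid)
        return None if session is None else session["user"]
```

The alternative, "sticky sessions" at the load balancer, avoids the extra store but loses sessions whenever a server dies.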
At this point, we have basically done horizontal scaling at both the DB tier and the application tier, and can start paying attention to other aspects, such as the accuracy of site search: reducing the dependence on the DB by introducing full-text indexing.
In the Java world, Lucene and Solr are the usual choices; in the PHP world, Sphinx/Coreseek.
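The data structure at the heart of all of these engines is the inverted index: instead of scanning rows with `LIKE '%term%'`, you precompute, for each term, the set of documents containing it. A toy version (real engines add tokenization, ranking, and compressed postings lists):

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted index, the core idea behind Lucene/Solr/Sphinx:
    map each term to the set of document ids containing it."""
    def __init__(self):
        self._postings = defaultdict(set)

    def add(self, doc_id, text):
        for term in text.lower().split():
            self._postings[term].add(doc_id)

    def search(self, query):
        # AND semantics: a document must contain every query term.
        terms = query.lower().split()
        if not terms:
            return set()
        result = self._postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self._postings[term]
        return result
```

Lookups become set intersections over small postings lists, which is why full-text search scales where `LIKE` queries against the DB do not.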
So far we have sketched a medium-sized website architecture that can carry millions of daily visits. Of course, each scaling step hides many implementation details, which I will analyze in a separate article later.
After scaling to meet basic performance requirements, we gradually shift our focus to "availability" (that is, the SLAs, the "nines" we usually hear people brag about). Guaranteeing genuine "high availability" is its own hard problem.
Almost all mainstream large and medium-sized Internet companies use a similar architecture; only the number of nodes differs.
One widely used technique is static-dynamic separation. It can be done with developer cooperation (putting static resources on a separate site) or without it (using a layer-7 reverse proxy to classify requests by file extension and similar cues). Once there is a dedicated static file server, storage itself becomes a scaling problem: how do you keep files consistent across multiple servers, and can you afford shared storage? This is where distributed file systems come in handy.
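The "without developer cooperation" variant is just routing logic at the proxy. The sketch below shows the decision in Python for clarity (in practice this would be an nginx `location` rule or similar; the extension list is an illustrative assumption):

```python
# Illustrative extension list; a real deployment tunes this per site.
STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".gif", ".ico", ".woff2"}

def pick_upstream(path):
    """Decide at the layer-7 proxy whether a request goes to the static
    file cluster or the application cluster, based on the file extension."""
    dot = path.rfind(".")
    ext = path[dot:].lower() if dot != -1 else ""
    return "static-cluster" if ext in STATIC_EXTENSIONS else "app-cluster"
```

Static requests then never consume an application worker, and the static cluster can be cached and scaled independently.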
CDN acceleration is another very common technique, both in China and abroad; competition in the sector is fierce, so it has become cheap. The north-south interconnection problem between Chinese carriers is quite serious, and a CDN effectively solves it.
The basic principle of a CDN is not complicated: think of it as intelligent DNS plus Squid-style reverse-proxy caches, backed by many machine-room nodes close to users.
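The "intelligent DNS" half can be sketched as a lookup from the client's network/region to the nearest edge node; that node then answers from its cache and pulls from the origin only on a miss. The region names and node names below are entirely hypothetical:

```python
# Hypothetical map from client region/carrier to the nearest CDN edge node.
EDGE_NODES = {
    "north-telecom": "edge-beijing",
    "south-telecom": "edge-guangzhou",
    "north-unicom": "edge-tianjin",
}

def resolve(region, default="origin"):
    """'Intelligent DNS': answer the DNS query with the edge node closest
    to the client. The edge serves content from its reverse-proxy cache,
    going back to the origin only on a cache miss."""
    return EDGE_NODES.get(region, default)
```

Because the selection happens at DNS time, the application behind the CDN needs no code changes at all, which is exactly why this step is so popular.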
So far, very little has changed in the architecture of the application itself, or in layman's terms, little code has had to be rewritten.
What if all of the above is exhausted and the site still cannot hold up? Endlessly adding machines is not a real answer.
As the business grows more complex and the site gains more and more features, the deployment tier may be clustered, but the application architecture is still "centralized". That causes heavy coupling: hard to develop, hard to maintain, and prone to failures. The common response is to split the site into independent sub-sites, each hosted separately.
Once the applications are split, the DB tier can be split vertically as well, since a single database is very limited in connection count, QPS, TPS, and I/O capacity.
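A vertical split assigns each business domain its own database, so the data-access layer needs a mapping from table (or module) to the database that now owns it. A minimal sketch, with an entirely hypothetical table-to-database mapping:

```python
# Hypothetical outcome of a vertical split by business domain:
# each table now lives in its own dedicated database.
TABLE_TO_DB = {
    "users": "user_db",
    "orders": "order_db",
    "products": "product_db",
}

def db_for(table):
    """Route a query to the database that owns this table after the split."""
    try:
        return TABLE_TO_DB[table]
    except KeyError:
        raise ValueError(f"table {table!r} has no assigned database")
```

The hidden cost is that cross-domain JOINs (say, orders joined with users) no longer work in SQL and must be reassembled in application code.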
Even after splitting the applications and DBs, problems remain: different sites may duplicate the same logic and functionality in code. For basic functionality we can package DLLs or JARs and reference them everywhere, but that kind of strong dependency brings its own trouble (versioning, dependency management, and so on can be very painful). This is where the fabled value of SOA is realized.
There are still dependencies between applications and services, and this is where high-throughput decoupling via message queues comes in.
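The essence of queue-based decoupling is that the producer publishes an event and returns immediately; consumers pick the work up later, at their own pace. The in-process sketch below stands in for a real broker such as RabbitMQ or Kafka (which would also give persistence, acknowledgements, and delivery across machines):

```python
from collections import deque

class MessageQueue:
    """In-process stand-in for a broker like RabbitMQ/Kafka: the producer
    returns immediately; consumers process messages asynchronously."""
    def __init__(self):
        self._buffer = deque()
        self._handlers = []

    def publish(self, message):
        self._buffer.append(message)   # producer is done; no direct call to consumers

    def subscribe(self, handler):
        self._handlers.append(handler)

    def drain(self):
        """Deliver everything queued so far (a broker does this continuously)."""
        while self._buffer:
            message = self._buffer.popleft()
            for handler in self._handlers:
                handler(message)
```

The upstream service no longer knows or cares which downstream services consume the event, which is exactly the decoupling the text describes: a slow consumer delays its own processing, not the producer's response time.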
Finally, the signature move of the large Internet companies: sub-database, sub-table (sharding). From personal experience, do not take this step lightly unless the business pressure on every front truly demands it.
Anyone can split a database; the key is what you do after the split. At present, there is no completely open-source and free solution that solves the database-sharding problem once and for all.
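To make the idea concrete, here is the routing core of one common scheme: hash the shard key and derive both a database and a table from the slot. The shard counts and naming are illustrative assumptions; real sharding middleware also has to handle resharding, cross-shard queries, and globally unique ids, which is where the pain actually lives:

```python
import zlib

def shard_for(user_id, db_count=4, tables_per_db=8):
    """Hash-based sub-database/sub-table routing: hash the shard key into
    one of db_count * tables_per_db slots, then derive the database and
    table names from the slot number."""
    h = zlib.crc32(str(user_id).encode())       # stable, unsigned 32-bit hash
    slot = h % (db_count * tables_per_db)
    return f"db_{slot // tables_per_db}", f"user_{slot % tables_per_db}"
```

Every query for that user must now carry the shard key, and any query that cannot (reports, full scans, range queries across users) needs a separate path, which is precisely why the article warns against taking this step before you must.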