The app is lagging badly again?

What? The app crashed again?

How do we make our applications withstand high concurrency?

I think you need to pick up some new tricks.


First of all, let's go over the basic concepts this article relies on:

(1) Cluster (the same software deployed on multiple servers, working together as a whole to provide one kind of service)

(2) Distributed (the different modules of a system are deployed on different servers)

(3) High availability (when some nodes in the system fail, the other nodes keep working, or a recovery plan takes over)

(4) Load balancing (requests are distributed evenly across multiple nodes)


The concepts, illustrated

The cluster

A small restaurant starts with a single chef who does everything: washing vegetables, chopping, prepping, and stir-frying. Later there are too many customers for one chef to keep up with, so a second chef is hired. Both chefs can cook the same dishes; the relationship between the two chefs is a cluster.

Distributed

To let the chefs concentrate on cooking and perfect their dishes, a prep cook is hired to wash, chop, and prepare the ingredients. The relationship between the prep cook and the chefs is distributed. When one prep cook can no longer keep up, a second prep cook is hired; the relationship between the two prep cooks is again a cluster. So a distributed architecture may contain clusters, but having a cluster does not by itself make a system distributed.

The evolution of the architecture

Single Application Architecture

When site traffic is low, a single application that bundles all the functions is enough; deploying everything together keeps the number of nodes and the cost down.


Vertical Application Architecture

When traffic gradually grows, adding machines to a single application yields smaller and smaller gains. One way to improve efficiency is to split the application into several independent applications and spread requests across them with load balancing. The same idea applies to database bottlenecks: splitting tables and databases is a good way to improve efficiency.


Microservices Architecture

As the number of vertical applications grows, interaction between applications becomes inevitable. At this point, core business capabilities are extracted as independent services, gradually forming a stable service center, so that front-end applications can respond more quickly to changing market demands.


01

Single machine architecture

Bottleneck: as the number of users grows, the application and the database compete for resources on the same machine, and the performance of a single node can no longer support the business.

Solution: consider deploying the application and the database on separate servers.


02

Application and database deployed separately (the most common architecture)

The application and the database each have a server to themselves, so neither competes with the other for resources and both perform noticeably better.
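Concretely, once the database moves to its own server, the application connects to it over the network instead of via localhost. Below is a minimal sketch assuming a MySQL instance on a hypothetical host `db-server-1` and MySQL Connector/J on the classpath; the schema name and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RemoteDbDemo {
    // Placeholder JDBC URL: the database now lives on its own server (db-server-1),
    // while this application runs on a separate application server.
    private static final String URL =
            "jdbc:mysql://db-server-1:3306/shop?useSSL=false&serverTimezone=UTC";
    private static final String USER = "app_user";
    private static final String PASSWORD = "app_password";

    public static void main(String[] args) throws Exception {
        // Open a connection to the remote database and run a trivial sanity query.
        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            if (rs.next()) {
                System.out.println("Connected to the remote database: " + rs.getInt(1));
            }
        }
    }
}
```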

Bottleneck: As the number of users increases, concurrent database reads and writes, especially reads, become bottlenecks.

Solution: consider introducing a cache.


03

The introduction of the cache

Adding a cache on the application server can absorb most requests, especially query requests, before they ever reach the database.

The cache service can run on a single server, or several cache servers can be set up with master/slave replication (sentinels, clustering, and so on can be added to improve availability).
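A common way to use such a cache is the cache-aside pattern: read from the cache first, fall back to the database on a miss, and write the result back with an expiry. The sketch below only illustrates the idea; `CacheClient` and `UserDao` are hypothetical interfaces standing in for a real Redis client and a real data-access layer.

```java
import java.time.Duration;

public class UserQueryService {

    /** Hypothetical cache client, e.g. a thin wrapper around a Redis connection. */
    interface CacheClient {
        String get(String key);
        void setWithTtl(String key, String value, Duration ttl);
    }

    /** Hypothetical data-access object for the users table. */
    interface UserDao {
        String findUserJsonById(long id);
    }

    private final CacheClient cache;
    private final UserDao userDao;

    public UserQueryService(CacheClient cache, UserDao userDao) {
        this.cache = cache;
        this.userDao = userDao;
    }

    /** Cache-aside read: try the cache first, fall back to the database on a miss. */
    public String getUserById(long id) {
        String key = "user:" + id;
        String cached = cache.get(key);
        if (cached != null) {
            return cached;                      // cache hit: the database is never touched
        }
        String fromDb = userDao.findUserJsonById(id);
        if (fromDb != null) {
            // Write back with an expiry so stale entries eventually age out.
            cache.setWithTtl(key, fromDb, Duration.ofMinutes(10));
        }
        return fromDb;
    }
}
```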

Bottleneck: as the number of users grows, the concurrency pressure falls mainly on the single application server, and responses slow down.

Solution: Consider load balancing


04

Application server cluster

A reverse proxy is used to distribute the large volume of user requests evenly across the application servers.
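The proxy's simplest distribution strategy is round-robin: hand each incoming request to the next server in the list. A reverse proxy such as Nginx implements this for you; the sketch below is only meant to show the idea, and the backend addresses are made up.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

/** Minimal round-robin selection over a fixed list of backend servers. */
public class RoundRobinBalancer {
    private final List<String> backends;
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinBalancer(List<String> backends) {
        this.backends = List.copyOf(backends);
    }

    /** Each call returns the next backend in turn, wrapping around at the end. */
    public String next() {
        long n = counter.getAndIncrement();
        int index = (int) Math.floorMod(n, (long) backends.size());
        return backends.get(index);
    }

    public static void main(String[] args) {
        // Hypothetical application servers sitting behind the proxy.
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"));
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + lb.next());
        }
    }
}
```

Weighted round-robin, least-connections, or IP hashing are common refinements when the servers are unequal or sessions need stickiness.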

Bottleneck: the application servers can now handle far more concurrency, and cache capacity is easy to expand, but higher concurrency also means more requests penetrate to the database, and the single database server becomes the bottleneck.

Solution: optimize the database, for example by splitting tables, separating reads from writes, and similar techniques.


05

Database optimization

Read/write separation

The database is split into read libraries and a write library; there can be more than one read library, and a replication mechanism synchronizes data from the write library to the read libraries.
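In code, the routing rule is simple: statements that modify data go to the write library, queries go to one of the read libraries. The sketch below is a minimal illustration under that assumption; real projects usually rely on database middleware or a framework-level routing DataSource rather than hand-written routing.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import javax.sql.DataSource;

/** Routes writes to the master (write library) and reads to a randomly chosen replica. */
public class ReadWriteRouter {
    private final DataSource master;
    private final List<DataSource> replicas;

    public ReadWriteRouter(DataSource master, List<DataSource> replicas) {
        this.master = master;
        this.replicas = List.copyOf(replicas);
    }

    /** All INSERT/UPDATE/DELETE statements go to the write library. */
    public DataSource forWrite() {
        return master;
    }

    /** SELECT statements are spread across the read libraries. */
    public DataSource forRead() {
        return replicas.get(ThreadLocalRandom.current().nextInt(replicas.size()));
    }
}
```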

Bottleneck: as the number of services grows, their traffic volumes differ widely, and different services compete directly for the same database, so they drag down each other's performance.


The database is split by service

Data for different services is stored in different databases. This reduces resource competition between services, and services with heavy traffic can be given more servers.

Bottleneck: as the number of users grows, the single write library gradually reaches its performance limit.


Database sharding

Splitting a single database into shards like this can be called a distributed database, but logically it is still one complete database.
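The core of sharding is the routing rule that decides which shard a given record lives on, for example a hash of the user ID. The sketch below shows such a rule; the shard names are placeholders, and real systems usually delegate this to sharding middleware.

```java
/** Chooses the physical shard for a record based on its user ID. */
public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    /** Simple hash routing: the same user always lands on the same shard. */
    public String shardFor(long userId) {
        int index = (int) Math.floorMod(userId, (long) shardCount);
        return "order_db_" + index;   // placeholder shard name, e.g. order_db_0 .. order_db_3
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(4);
        System.out.println(router.shardFor(10001L)); // -> order_db_1
        System.out.println(router.shardFor(10002L)); // -> order_db_2
    }
}
```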


Introduce NoSQL databases and search engines

When the amount of data reaches a certain scale, a relational database is no longer a good fit; for queries over very large data sets, a relational database may not be able to produce a result at all.

For massive amounts of data, HDFS can be used for storage.

Data of the key-value type can be processed by HBase or Redis.

Search-style queries can be handled by ES (Elasticsearch) and the like.

For multidimensional analysis, Kylin, Druid, and so on can be used.

Introducing more components greatly increases the complexity of the system, and data also has to be kept in sync across components, which raises consistency problems.


06

Microservices

Applications are divided by business area, which makes the responsibility of each application clear.

Advantages:

1. The coupling between systems drops sharply: each system can be developed, deployed, and tested independently, the boundaries between systems are very clear, troubleshooting becomes much easier, and development efficiency improves significantly.

2. Because the coupling between systems is lower, the system is also easier to scale: we can scale a specific service, that is, a particular subsystem cluster, on its own.

3. Services become more reusable. For example, once the user system runs as a separate service, all of the company's products can use it directly instead of implementing their own (a minimal sketch of such a standalone user service appears at the end of this section).

Disadvantages: different applications share common modules, so when a part containing shared functionality is upgraded, all the applications that depend on it may have to be upgraded and redeployed as well, which increases deployment pressure.
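To make advantage 3 concrete, a standalone user service simply exposes its capability over a network interface that every product can call. The sketch below is a minimal illustration using only the JDK's built-in HttpServer; the endpoint path and the response format are invented for the example.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

/** A minimal standalone "user service" that other applications call over HTTP. */
public class UserServiceApp {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        // Hypothetical endpoint: GET /users/me returns a hard-coded user as JSON.
        server.createContext("/users/me", exchange -> {
            byte[] body = "{\"id\":1,\"name\":\"demo\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("User service listening on http://localhost:8081/users/me");
    }
}
```

In a real microservice setup this interface would more likely be an RPC or REST framework endpoint registered with a service registry, but the division of responsibility is the same: one team owns the user service, every other system calls it.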


07

Conclusion


Looking at this evolution as a whole, architectures keep moving toward lighter and more flexible applications, right up to the serverless architectures that have matured over the last couple of years. The trend from monolithic services to layered services, then to service orientation, to microservices, and even to serverless, places ever higher demands on architecture.