Preface

Today I saw someone discussing how to optimize a system that currently serves 200 million PV. The topic reminded me that I have also run into quite a few high-concurrency scenarios at work, so here is an off-the-cuff summary. Corrections and additions are welcome.

Main content

Read operations

For reads, we generally follow these priorities:

| Priority | Technical solution | Notes | Example |
| --- | --- | --- | --- |
| Highest | Make content static wherever possible | For data with low real-time requirements, use a CDN as much as possible | Reads of basic product information |
| High | Use memory as close as possible | Prefer local server memory, fall back to a remote memory service | Seckill / flash-sale inventory (allocate inventory to local server memory first, to a remote memory service second; the latter also involves additional network IO) |
| Very low | Database (don't read from it if you can avoid it) | Connection pooling, SQL optimization | Ordinary business queries |
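
To make the read chain concrete, here is a minimal Go sketch of the priority order above. The `remoteGet` and `queryDB` helpers are hypothetical placeholders for a Redis GET and a pooled SQL query; they are not real library calls.

```go
// Minimal sketch of the read priority chain: local process memory first,
// then a remote memory service, then the database as a last resort.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var localCache sync.Map // per-instance in-process cache: key -> value

var errMiss = errors.New("cache miss")

// Hypothetical helpers standing in for a Redis GET and a pooled SQL query.
func remoteGet(key string) (string, error) { return "", errMiss }
func queryDB(key string) (string, error)   { return "value-from-db", nil }

// readThrough tries the cheapest tier first and backfills the faster tiers on a hit.
func readThrough(key string) (string, error) {
	if v, ok := localCache.Load(key); ok { // 1. local memory: no network IO
		return v.(string), nil
	}
	if v, err := remoteGet(key); err == nil { // 2. remote memory: one round trip
		localCache.Store(key, v)
		return v, nil
	}
	v, err := queryDB(key) // 3. database: only on a full miss
	if err != nil {
		return "", err
	}
	localCache.Store(key, v)
	return v, nil
}

func main() {
	v, _ := readThrough("item:42")
	fmt.Println(v) // first call hits the DB, later calls hit local memory
}
```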

Write operations

For writes, we generally decide based on the required level of data consistency:

| Data consistency requirement | Technical solution |
| --- | --- |
| Not high | Write to memory (local server memory first, a remote memory service second) and persist asynchronously |
| High | Perform the most critical step synchronously; use asynchronous processing to guarantee that the remaining steps eventually succeed |
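
As an illustration of the second row, here is a minimal Go sketch: the critical step runs synchronously, everything else is pushed onto a queue and retried until it eventually succeeds. `saveOrder`, `sendNotification`, and the in-process channel queue are all hypothetical stand-ins (a real system would typically use a message queue).

```go
// "Critical step synchronous, the rest asynchronous with eventual success."
package main

import (
	"fmt"
	"time"
)

type task func() error

var asyncQueue = make(chan task, 1024) // stand-in for a real message queue

// worker keeps retrying each task until it eventually succeeds.
func worker() {
	for t := range asyncQueue {
		for t() != nil {
			time.Sleep(time.Second) // back off, then retry
		}
	}
}

// Hypothetical business steps used for illustration only.
func saveOrder(id string) error        { fmt.Println("order saved:", id); return nil }
func sendNotification(id string) error { fmt.Println("notified for:", id); return nil }

func placeOrder(id string) error {
	// Critical step: must succeed synchronously before we answer the caller.
	if err := saveOrder(id); err != nil {
		return err
	}
	// Non-critical step: queued, retried until it eventually succeeds.
	asyncQueue <- func() error { return sendNotification(id) }
	return nil
}

func main() {
	go worker()
	_ = placeOrder("order-1")
	time.Sleep(100 * time.Millisecond) // give the worker time to drain the queue
}
```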

Peak shaving and rate limiting

From simple to complex:

| Difficulty | Technical solution |
| --- | --- |
| Simplest | Reject a fixed percentage of traffic (random, not first come first served, unfair) |
| Simple | Rate limiting with atomic counters (local server memory first, a remote memory service second) |
| A bit more involved | Queue-based rate limiting (first come, first served, fair) |
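
Here is a minimal Go sketch of the atomic-counter variant, counting in local server memory: at most `limit` requests per one-second window, the rest rejected. The same idea can be moved to a remote memory service (for example a shared Redis counter) when the limit must apply across instances; that part is not shown.

```go
// Atomic-counter rate limiting in local server memory.
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

type limiter struct {
	limit int64
	count atomic.Int64
}

func newLimiter(limit int64) *limiter {
	l := &limiter{limit: limit}
	go func() { // reset the window once per second
		for range time.Tick(time.Second) {
			l.count.Store(0)
		}
	}()
	return l
}

// allow is a single atomic operation on the hot path: no locks, no queueing.
func (l *limiter) allow() bool {
	return l.count.Add(1) <= l.limit
}

func main() {
	l := newLimiter(3)
	for i := 0; i < 5; i++ {
		fmt.Println("request", i, "allowed:", l.allow()) // the last two are rejected
	}
}
```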

Service stability

In high-concurrency scenarios, we sometimes need to degrade non-critical services to keep the core business running normally. Two simple degradation schemes (a sketch of both follows the list):

  1. Configuration-switch degradation: an operator degrades the service manually by flipping a configuration switch
  2. Timed-switch degradation: the service is degraded automatically during a scheduled time window
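
A minimal Go sketch of both switches, assuming a hypothetical non-critical "recommendations" feature: a manual configuration flag an operator can flip, plus a timed check that degrades automatically inside a known peak window.

```go
// Manual configuration-switch degrade plus timed degrade.
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

var manualDegrade atomic.Bool // flipped by an operator or a config-center push

// timedDegrade returns true inside a pre-planned peak window (here, 20:00-21:00).
func timedDegrade(now time.Time) bool {
	return now.Hour() == 20
}

// recommendations is a hypothetical non-critical feature used for illustration.
func recommendations() []string {
	if manualDegrade.Load() || timedDegrade(time.Now()) {
		return nil // degraded: skip the non-critical work, keep the core path fast
	}
	return []string{"item-1", "item-2"} // normal path
}

func main() {
	fmt.Println("normal:", recommendations())
	manualDegrade.Store(true) // operator flips the switch during an incident
	fmt.Println("degraded:", recommendations())
}
```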

System architecture

For the system architecture, don't overthink it: simply split and decouple the system by business domain.

Operational architecture

At the deployment layer, deploy the high-concurrency services separately from the rest wherever possible.

Tools

“To do a good job, one must first sharpen one's tools.” Naturally, handling high concurrency also calls for good tools. Here are the three musketeers of high concurrency:

| Technique | Notes |
| --- | --- |
| Asynchrony | Asynchronous callbacks; callbacks nested layer upon layer are a disaster (Promise chains are also fairly bloated) |
| epoll | IO multiplexing, the approach used by Nginx/Redis |
| Coroutines | Lightweight, scheduled in user space, capable of very high concurrency |
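
As a small illustration of the coroutine row, here is a Go sketch: goroutines are scheduled in user space, so spawning one per task stays cheap even at very high counts.

```go
// One lightweight goroutine per task instead of one OS thread per task.
package main

import (
	"fmt"
	"sync"
)

func handle(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	// Simulated request handling; blocking here parks only the goroutine,
	// not an OS thread, which is what keeps the model cheap.
	_ = id * id
}

func main() {
	var wg sync.WaitGroup
	const n = 100000 // a hundred thousand concurrent handlers is routine
	for i := 0; i < n; i++ {
		wg.Add(1)
		go handle(i, &wg)
	}
	wg.Wait()
	fmt.Println("handled", n, "tasks concurrently")
}
```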