Author: Lin Guanhong / The Ghost at my Fingertips
Juejin: https://juejin.cn/user/1785262612681997
Blog: http://www.cnblogs.com/linguanh/
GitHub: https://github.com/af913337456/
Tencent Cloud column: https://cloud.tencent.com/developer/user/1148436/activities
For a full source-code analysis of server.go, you can search around; there are already many good articles.
Body:
When we start an HTTP service with http.ListenAndServe(port, router), server.go internally ends up waiting for client connections in an Accept call inside a for loop. Each time Accept returns a new connection, a goroutine is started to handle it: go c.serve(ctx) in the source code. This c.serve(ctx) step is not the simple form:
accept a request --> process the request --> return the result --> close the connection --> end the current goroutine
According to my debugging results and source-code analysis, the actual behavior is as follows:
- The connection's read deadline is cleared with SetReadDeadline(time.Time{}). If no internal error occurs, the connection is closed only when the client disconnects by itself or a NAT timeout occurs.
- After the connection is established, all HTTP requests from that client connection, such as GET and POST, are dispatched and handled inside this one goroutine.
- That is, multiple requests reusing the same established connection will not trigger another Accept and will not start another go c.serve(ctx).
From the above, we can draw the following conclusions:
- If there are 1 million Accepts, then the current server is holding 1 million connections. That is what we call "one million connections".
- One million connections is not one million requests.
- Each connection can issue multiple HTTP requests, all of which are handled in the goroutine started for that connection.
- The for loop in c.serve(...) in the source code reads each request and dispatches it:
```go
for {
    w, err := c.readRequest(ctx) // read one HTTP request
    // ...
    ServeHTTP(...)
}
```
- Within our 1 million connections, many more requests may be produced, say several million, if each client quickly makes multiple calls to a request API.
A diagram to summarize:
Combining the master-worker concurrency pattern
From the analysis above, whenever a new connection arrives, Go starts a goroutine; I see no limit on the order of magnitude in the source code, i.e., the number of accepted connections is unbounded. We also know that servers have processing bottlenecks. So an optimization point suggests itself: impose a limit on the number of connections inside server.go.
The master-worker pattern itself starts multiple worker threads that concurrently read tasks from a bounded queue and execute them.
I have already implemented a Go version of master-worker and tried the following: in server.go, the line go c.serve(ctx) is modified as follows.
```go
if srv.masterWorkerModel {
    // lgh --- dispatch through the worker pool instead of spawning directly
    PoolMaster.AddJob(
        masterworker.Job{
            Tag: "http server",
            Handler: func() {
                c.serve(ctx)
                fmt.Println("finish job") // printed only after the current connection is closed
            },
        })
} else {
    go c.serve(ctx)
}
```
```go
func (m Master) AddJob(job Job) {
    fmt.Println("add a job ")
    m.JobQueue <- job // JobQueue is a buffered channel
}
```
```go
// worker
func (w Worker) startWork(master *Master) {
    go func() {
        for {
            select {
            case job := <-master.JobQueue:
                job.doJob(master)
            }
        }
    }()
}
```
```go
// job
func (j Job) doJob(master *Master) {
    go func() {
        fmt.Println(j.Tag + " --- doing job...")
        j.Handler()
    }()
}
```
The pattern itself is not hard to understand.
Now view this through the producer-consumer pattern: the arrival of a connection is the producer, and <-master.JobQueue is the consumer, since every consumption starts a goroutine to run the job. Because the work between accepting a connection and pushing it into master.JobQueue is not time-consuming, each job leaves the channel quickly. In other words, consumption is fast, so in a real production environment the number of worker goroutines we start does not need to exceed roughly 5~10.
If consumption falls behind, the extra jobs are buffered into the channel. This could happen, for example, when 100,000+ connections are established in a short time and the workers cannot drain the queue fast enough. But even if that happens, the backlog is cleared quickly, because the time spent on each job is almost negligible. That is, when a large number of connections are established in a short time, the bottleneck is the buffer capacity of the queue. But even when this bottleneck occurs, the jobs are quickly dispatched and processed.
- So the significance of my first attempt is actually not great: it is just a different way of dispatching go c.serve(ctx).
- The second way of combining master-worker is to place it in the ServeHTTP dispatch phase. The following code, for example, is a common http handler, so we can nest the pool inside it.
```go
func (x XHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // ...
    if x.MasterWorker {
        poolMaster.AddJob(master_worker.Job{
            Tag:      "normal",
            XContext: xc,
            Handler: func(context model.XContext) {
                x.HandleFunc(w, r)
            },
        })
        return
    }
    x.HandleFunc(w, r)
    // ...
}
```
In this way, we can control the maximum concurrency of request handling across all connections. Any excess requests are queued for later execution, rather than causing the server to hang because of an uncontrolled surge of HTTP requests in a short period.
In addition, this second approach has a problem: since ServeHTTP returns before the queued job runs, the request body and response writer may already have been closed (reads fail, premature closure). Solving this is left to the reader.