The performance bottlenecks of most traditional FPM projects are the overhead of recreating the Zend VM on every request and the frequent context switches caused by blocking I/O. Swoole addresses exactly these problems.

This article will show you how to maximize the performance of Swoole’s HTTP server.

The benchmark machine has a single core, 2 GB of memory, and a 50 GB disk. The test script is as follows:

<?php

use Swoole\Http\Request;
use Swoole\Http\Response;

$process = new Swoole\Process(function (Swoole\Process $process) {
    $server = new Swoole\Http\Server('127.0.0.1', 9501, SWOOLE_BASE);
    $server->set([
        'log_file' => '/dev/null',
        'log_level' => SWOOLE_LOG_INFO,
        'worker_num' => swoole_cpu_num() * 2,
        // 'hook_flags' => SWOOLE_HOOK_ALL,
    ]);
    $server->on('workerStart', function () use ($process, $server) {
        $process->write('1');
    });
    $server->on('request', function (Request $request, Response $response) use ($server) {
        try {
            $redis = new Redis;
            $redis->connect('127.0.0.1', 6379);
            $greeter = $redis->get('greeter');
            if (!$greeter) {
                throw new RedisException('get data failed');
            }
            $response->end("<h1>{$greeter}</h1>");
        } catch (\Throwable $th) {
            $response->status(500);
            $response->end();
        }
    });
    $server->start();
});

if ($process->start()) {
    register_shutdown_function(function () use ($process) {
        Swoole\Process::kill($process->pid);
        Swoole\Process::wait();
    });
    $process->read(1);
    system('ab -c 256 -n 10000 -k http://127.0.0.1:9501/ 2>&1');
}

First, we create a Swoole\Process object that starts a child process. In the child process, we create an HTTP server in BASE mode. Besides BASE mode there is also PROCESS mode. In PROCESS mode, socket connections are held by the Master process, which adds a layer of IPC communication overhead between the Master process and the Worker processes. On the other hand, when a Worker process crashes, connections are not dropped, precisely because the Master process holds them. PROCESS mode is therefore suited to scenarios that maintain a large number of long-lived connections.

In BASE mode, each Worker process maintains its own connections, so performance is better than in PROCESS mode. For an HTTP server, BASE mode is usually the better fit.

Here we set worker_num, the number of Worker processes, to twice the number of CPU cores on the machine. In real projects, this parameter should be tuned through repeated load testing.

In the workerStart callback, which fires when a Worker process starts, the child writes one byte to the parent through the pipe. The parent reads it, knows the server is ready, and then starts the benchmark.

The benchmark requests then enter the onRequest callback. In it, we create a Redis client, connect to the Redis server, and fetch a value. Once we have the data, we call the end method to respond to the request. If anything goes wrong, we return a response with status code 500.

Before starting the benchmark, we need to install the Redis extension:

pecl install redis

Then enable the Redis extension in the php.ini configuration.
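For reference, enabling it usually amounts to one line in php.ini (the exact ini path varies by installation; `php --ini` shows which files are loaded):

```ini
; php.ini (or a conf.d drop-in) — after `pecl install redis`
extension=redis
```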

We also need to insert data into the Redis server:

127.0.0.1:6379> SET greeter swoole
OK
127.0.0.1:6379> GET greeter
"swoole"127.0.0.1:6379 >Copy the code

OK, now let's run the benchmark:

~/codeDir/phpCode/swoole/server # php server.php
Concurrency Level:      256
Time taken for tests:   2.293 seconds
Complete requests:      10000
Failed requests:        0
Keep-Alive requests:    10000
Total transferred:      1680000 bytes
HTML transferred:       150000 bytes
Requests per second:    4361.00 [#/sec] (mean)
Time per request:       58.702 [ms] (mean)
Time per request:       0.229 [ms] (mean, across all concurrent requests)
Transfer rate:          715.48 [Kbytes/sec] received

We found that the current QPS is relatively low, only 4361.00.

That is because the Redis extension we are using is PHP's official synchronous, blocking client, which cannot take advantage of coroutines (or asynchronous features). When a process talks to the Redis server, the entire process may block and cannot handle other connections, so the HTTP server cannot process requests as fast as it otherwise could. Even so, this result beats FPM, because Swoole keeps its processes resident instead of recreating the environment per request.

Now, let's turn on Swoole's Runtime Hook mechanism, which at run time dynamically replaces PHP's synchronous blocking calls with non-blocking, coroutine-scheduled equivalents. We simply add one line to the $server->set configuration:

'hook_flags' => SWOOLE_HOOK_ALL,

At this point, let’s run the script again:

Concurrency Level:      256
Time taken for tests:   1.643 seconds
Complete requests:      10000
Failed requests:        0
Keep-Alive requests:    10000
Total transferred:      1680000 bytes
HTML transferred:       150000 bytes
Requests per second:    6086.22 [#/sec] (mean)
Time per request:       42.062 [ms] (mean)
Time per request:       0.164 [ms] (mean, across all concurrent requests)
Transfer rate:          998.52 [Kbytes/sec] received

QPS has improved somewhat. (In the video's benchmark, requests piled up and QPS dropped sharply; that did not happen in my own test, which is probably related to the Redis server's own connection-limit configuration.)

However, to avoid creating too many connections when there are too many requests, we can use a Redis connection pool. (In synchronous blocking mode, an excess of Redis connections is not actually a problem: once a Worker process blocks, subsequent requests do not proceed and no new Redis connections are created, so the maximum number of Redis connections equals the number of Worker processes.)

Now let’s implement Redis connection pooling:

class RedisQueue
{
    protected $pool;
    public function __construct()
    {
        $this->pool = new SplQueue;
    }
    public function get(): Redis
    {
        if ($this->pool->isEmpty()) {
            $redis = new \Redis();
            $redis->connect('127.0.0.1', 6379);
            return $redis;
        }
        return $this->pool->dequeue();
    }
    public function put(Redis $redis)
    {
        $this->pool->enqueue($redis);
    }
    public function close()
    {
        $this->pool = null;
    }
}

Here the connection pool is implemented with an SPL queue. If the pool is empty, we create a new connection and return it; otherwise we dequeue an existing connection from the front of the queue. When we are done with a connection, we call the put method to return it. This lets us reuse Redis connections to some extent, easing the load on the Redis server and reducing the overhead of frequently creating connections.
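To see the reuse behaviour in isolation, here is a minimal sketch of the same queue-pool idea. The FakeConnection stub is hypothetical, standing in for a real Redis client so we can count how many connections actually get created:

```php
<?php
// Hypothetical stub standing in for a real Redis connection;
// it only counts how many instances were constructed.
class FakeConnection
{
    public static int $created = 0;
    public function __construct()
    {
        self::$created++;
    }
}

class ConnectionQueue
{
    protected SplQueue $pool;
    public function __construct()
    {
        $this->pool = new SplQueue;
    }
    public function get(): FakeConnection
    {
        // Empty pool: create a fresh connection.
        if ($this->pool->isEmpty()) {
            return new FakeConnection;
        }
        // Otherwise reuse a previously returned connection.
        return $this->pool->dequeue();
    }
    public function put(FakeConnection $conn): void
    {
        $this->pool->enqueue($conn);
    }
}

$pool = new ConnectionQueue;
$a = $pool->get();   // pool empty: creates connection #1
$pool->put($a);      // return it to the pool
$b = $pool->get();   // reuses connection #1, nothing new is created
echo FakeConnection::$created, "\n"; // prints 1
```

Because the connection is returned before the next get, only one connection is ever created, no matter how many requests run one after another.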

We now use this connection pool queue:

<?php

use Swoole\Http\Request;
use Swoole\Http\Response;

$process = new Swoole\Process(function (Swoole\Process $process) {
    $server = new Swoole\Http\Server('127.0.0.1', 9501, SWOOLE_BASE);
    $server->set([
        'log_file' => '/dev/null',
        'log_level' => SWOOLE_LOG_INFO,
        'worker_num' => swoole_cpu_num() * 2,
        'hook_flags' => SWOOLE_HOOK_ALL,
    ]);
    $server->on('workerStart', function () use ($process, $server) {
        $server->pool = new RedisQueue;
        $process->write('1');
    });
    $server->on('request', function (Request $request, Response $response) use ($server) {
        try {
            $redis = $server->pool->get();
            // $redis = new Redis;
            // $redis->connect('127.0.0.1', 6379);
            $greeter = $redis->get('greeter');
            if (!$greeter) {
                throw new RedisException('get data failed');
            }
            $server->pool->put($redis);
            $response->end("<h1>{$greeter}</h1>");
        } catch (\Throwable $th) {
            $response->status(500);
            $response->end();
        }
    });
    $server->start();
});

if ($process->start()) {
    register_shutdown_function(function () use ($process) {
        Swoole\Process::kill($process->pid);
        Swoole\Process::wait();
    });
    $process->read(1);
    system('ab -c 256 -n 10000 -k http://127.0.0.1:9501/ 2>&1');
}
class RedisQueue
{
    protected $pool;
    public function __construct()
    {
        $this->pool = new SplQueue;
    }
    public function get(): Redis
    {
        if ($this->pool->isEmpty()) {
            $redis = new \Redis();
            $redis->connect('127.0.0.1', 6379);
            return $redis;
        }
        return $this->pool->dequeue();
    }
    public function put(Redis $redis)
    {
        $this->pool->enqueue($redis);
    }
    public function close()
    {
        $this->pool = null;
    }
}

We create this RedisQueue when the worker process initializes. Then in the onRequest phase, get a Redis connection from this RedisQueue.

Now, let's run the benchmark again:

Concurrency Level:      256
Time taken for tests:   1.188 seconds
Complete requests:      10000
Failed requests:        0
Keep-Alive requests:    10000
Total transferred:      1680000 bytes
HTML transferred:       150000 bytes
Requests per second:    8416.18 [#/sec] (mean)
Time per request:       30.418 [ms] (mean)
Time per request:       0.119 [ms] (mean, across all concurrent requests)
Transfer rate:          1380.78 [Kbytes/sec] received

QPS increased to 8416.18.

However, a connection pool built on SplQueue is flawed, because the queue is unbounded. Under extremely high concurrency the pool may be empty on every request, so a very large number of connections can still be created.
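The failure mode is easy to demonstrate with the same hypothetical FakeConnection stub: when many requests check out connections before any are returned, the queue is empty every time and a new connection is created for each request.

```php
<?php
// Hypothetical stub standing in for a real Redis connection;
// it only counts how many instances were constructed.
class FakeConnection
{
    public static int $created = 0;
    public function __construct()
    {
        self::$created++;
    }
}

$pool = new SplQueue;
$inUse = [];
// Simulate a burst of 1000 concurrent requests, none of which has
// returned its connection yet.
for ($i = 0; $i < 1000; $i++) {
    // Pool is empty on every iteration, so a new connection is
    // created each time — nothing caps the total.
    $inUse[] = $pool->isEmpty() ? new FakeConnection : $pool->dequeue();
}
echo FakeConnection::$created, "\n"; // prints 1000
```

A bounded pool, by contrast, would cap the number of live connections and make the extra requests wait.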

At this point, we can use channels to implement connection pooling. The code is as follows:

class RedisPool
{
    protected $pool;
    public function __construct(int $size = 100)
    {
        $this->pool = new \Swoole\Coroutine\Channel($size);
        for ($i = 0; $i < $size; $i++) {
            while (true) {
                try {
                    $redis = new \Redis();
                    $redis->connect('127.0.0.1', 6379);
                    $this->put($redis);
                    break;
                } catch (\Throwable $th) {
                    usleep(1 * 1000);
                    continue;
                }
            }
        }
    }
    public function get(): \Redis
    {
        return $this->pool->pop();
    }
    public function put(\Redis $redis)
    {
        $this->pool->push($redis);
    }
    public function close()
    {
        $this->pool->close();
        $this->pool = null;
    }
}

As you can see, the constructor sets the Channel capacity to the size parameter passed in, and creates size connections up front, so they are in a ready state as soon as the pool is initialized. This has both advantages and disadvantages. The disadvantage is that every process occupies some connections from the start, even before it has handled a single request. The advantage is that the Redis connections already exist, which reduces the server's response latency.

Otherwise, the code is essentially the same as the RedisQueue implementation, but the underlying behaviour is quite different. When there is no Redis connection left in the Channel, the current coroutine is suspended and other coroutines keep running; the suspended coroutine resumes only once some coroutine returns a connection to the pool. That is how coroutine cooperation works.
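To make the suspend/resume behaviour concrete, here is a minimal, hedged sketch. It assumes the Swoole extension (4.4+) is installed; the $order variable and the pushed string are purely illustrative:

```php
<?php
// Requires the Swoole extension; cannot run without it.
$order = [];
Swoole\Coroutine\run(function () use (&$order) {
    $chan = new Swoole\Coroutine\Channel(1);
    Swoole\Coroutine::create(function () use ($chan, &$order) {
        $order[] = 'pop-start';
        $conn = $chan->pop();   // channel empty: this coroutine suspends here
        $order[] = 'pop-done';  // runs only after another coroutine pushes
    });
    Swoole\Coroutine::create(function () use ($chan, &$order) {
        $order[] = 'push';
        $chan->push('connection'); // hands a value to the waiting coroutine
    });
});
// Coroutine\run() returns after all coroutines finish, so by now
// $order records pop-start, then push, then pop-done.
```

The pop call suspending instead of blocking the whole process is exactly what lets one Worker keep serving other requests while it waits for a free connection.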

Now, let’s modify the server code:

<?php

use Swoole\Http\Request;
use Swoole\Http\Response;

$process = new Swoole\Process(function (Swoole\Process $process) {
    $server = new Swoole\Http\Server('127.0.0.1', 9501, SWOOLE_BASE);
    $server->set([
        'log_file' => '/dev/null',
        'log_level' => SWOOLE_LOG_INFO,
        'worker_num' => swoole_cpu_num() * 2,
        'hook_flags' => SWOOLE_HOOK_ALL,
    ]);
    $server->on('workerStart', function () use ($process, $server) {
        $server->pool = new RedisPool(64);
        $process->write('1');
    });
    $server->on('request', function (Request $request, Response $response) use ($server) {
        try {
            $redis = $server->pool->get();
            // $redis = new Redis;
            // $redis->connect('127.0.0.1', 6379);
            $greeter = $redis->get('greeter');
            if (!$greeter) {
                throw new RedisException('get data failed');
            }
            $server->pool->put($redis);
            $response->end("<h1>{$greeter}</h1>");
        } catch (\Throwable $th) {
            $response->status(500);
            $response->end();
        }
    });
    $server->start();
});

if ($process->start()) {
    register_shutdown_function(function () use ($process) {
        Swoole\Process::kill($process->pid);
        Swoole\Process::wait();
    });
    $process->read(1);
    system('ab -c 256 -n 10000 -k http://127.0.0.1:9501/ 2>&1');
}
class RedisPool
{
    protected $pool;
    public function __construct(int $size = 100)
    {
        $this->pool = new \Swoole\Coroutine\Channel($size);
        for ($i = 0; $i < $size; $i++) {
            while (true) {
                try {
                    $redis = new \Redis();
                    $redis->connect('127.0.0.1', 6379);
                    $this->put($redis);
                    break;
                } catch (\Throwable $th) {
                    usleep(1 * 1000);
                    continue;
                }
            }
        }
    }
    public function get(): \Redis
    {
        return $this->pool->pop();
    }
    public function put(\Redis $redis)
    {
        $this->pool->push($redis);
    }
    public function close()
    {
        $this->pool->close();
        $this->pool = null;
    }
}

Compared with the previous version, only the workerStart callback changes; everything else stays the same. Each process now creates at most 64 Redis connections.

We run the benchmark once more:

Concurrency Level:      256
Time taken for tests:   0.817 seconds
Complete requests:      10000
Failed requests:        0
Keep-Alive requests:    10000
Total transferred:      1680000 bytes
HTML transferred:       150000 bytes
Requests per second:    12234.30 [#/sec] (mean)
Time per request:       20.925 [ms] (mean)
Time per request:       0.082 [ms] (mean, across all concurrent requests)
Transfer rate:          2007.19 [Kbytes/sec] received

QPS has improved again, to 12234.30. Why did my QPS not improve as much as in the video? That comes down to the test environment; my machine has simply hit its ceiling, like a top student who already scores full marks. (In my tests, adjusting the maximum number of connections in the pool does change the QPS.)

I hope the content above helps you.

Original article: Swoole Fundamentals Lecture 3: How to Set up HTTP Services Correctly