The original article is reprinted from “Liu Yue’s technology blog” v3u.cn/a_id_167

The title of this article is a little unfair, because ASGI (Asynchronous Server Gateway Interface) is the successor to WSGI (Web Server Gateway Interface), and FastAPI, after all, stands on Flask's shoulders. Most people have heard of ASGI because the latest version of Django (3.0) announced support for the ASGI web specification. This is obviously exciting news, and in 2020, if ASGI doesn't come up in your web development interview, you are clearly a little behind the curve.

So what are WSGI and ASGI? Rest assured: no CGI and no abstract concepts, just a simple, rough understanding:

WSGI is a synchronous communication service specification: a client requests a service, waits for the service to complete, and only continues working once it receives the result. Of course, you can define a timeout; if the service does not complete within the specified time, the call is considered to have failed and the caller continues its work.

Schematic diagram of how WSGI works:

Simple implementation:

# WSGI example
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, Wsgi\n']  # WSGI expects an iterable of bytes
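For reference, here is a runnable sketch of a minimal WSGI app, served with the standard library's wsgiref server (the 127.0.0.1:8080 address is an arbitrary choice for this example):

```python
from wsgiref.simple_server import make_server


def application(environ, start_response):
    # A WSGI app is a callable taking the request environ and a
    # start_response callback; it returns an iterable of bytes.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, Wsgi\n']


def main():
    # Serve the app on a local port; serve_forever() blocks until Ctrl+C.
    with make_server('127.0.0.1', 8080, application) as server:
        server.serve_forever()
```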

ASGI is an asynchronous communication service specification: the client initiates a service call but does not wait for the result, and immediately continues its own work without caring about it. If the caller is interested in the result, there are mechanisms for it to be delivered later, for example via a callback.

Schematic diagram of how ASGI works:

Simple implementation:

# ASGI example
async def application(scope, receive, send):
    event = await receive()
    ...
    await send({"type": "websocket.send", ...})  # remaining keys elided
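The snippet above sketches a WebSocket handler; for comparison, here is a complete, runnable ASGI application for plain HTTP (a minimal sketch — run it with e.g. `uvicorn module_name:application`, where `module_name` is whatever file you save it in):

```python
async def application(scope, receive, send):
    # An ASGI app is an async callable; `scope` describes the connection,
    # and messages flow through the `receive` and `send` awaitables.
    assert scope['type'] == 'http'  # this sketch handles plain HTTP only
    await send({
        'type': 'http.response.start',
        'status': 200,
        'headers': [(b'content-type', b'text/plain')],
    })
    await send({
        'type': 'http.response.body',
        'body': b'Hello, Asgi\n',
    })
```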

In summary: ASGI is asynchronous and WSGI is synchronous; Flask, built on WSGI, is a synchronous framework, while FastAPI, built on ASGI, is an asynchronous framework. That's it. So what is the difference between a synchronous and an asynchronous framework, and why swap Flask for FastAPI?

Don't decide on a whim, and don't just repeat what you've heard. Technical arguments should be based on data, so let's simply benchmark the performance of both frameworks, starting by installing the dependencies for each.

Flask:

pip install gunicorn  
pip install gevent  
pip install flask  
pip install flask-restful

FastAPI:

pip install fastapi  
pip install uvicorn

The first thing to look at is how Flask and FastAPI handle multiple requests from multiple clients, especially when the code is inefficient (say, a long database query). Here time.sleep() is deliberately used to simulate a time-consuming task. Why not asyncio.sleep()? For the well-known reason: time.sleep() blocks, which is exactly the behavior we want to simulate.

Flask:

from flask import Flask  
from flask_restful import Resource, Api  
from time import sleep  
  
app = Flask(__name__)  
api = Api(app)  
  
class Root(Resource):  
    def get(self):  
        print('Sleep for 10 seconds')  
        sleep(10)  
        print('wake up')  
        return {'message': 'hello'}  
  
api.add_resource(Root, '/')  
  
if __name__ == "__main__":  
    app.run()

FastAPI:

import uvicorn  
from fastapi import FastAPI  
from time import sleep  
app = FastAPI()  
  
@app.get('/')  
async def root():  
    print('Sleep for 10 seconds')  
    sleep(10)  
    print('wake up')  
    return {'message': 'hello'}  
  
if __name__ == "__main__":  
    uvicorn.run(app, host="127.0.0.1", port=8000)

Starting services separately

Flask: python3 manage.py

FastAPI: uvicorn manage:app --reload

At the same time, open multiple browsers and concurrently request the home page.

Flask: http://localhost:5000

FastAPI: http://localhost:8000

Observe the background print results:

Flask:

FastAPI:

As you can see, for the same four requests, Flask blocks for roughly 40 seconds and then returns the results in turn, while FastAPI starts returning as soon as the first request completes. In FastAPI the requests queue up on a single event loop, and time.sleep() blocks that loop — which is why, in an asynchronous framework, asyncio.sleep() should be used instead of time.sleep(), so the loop can serve other requests while waiting. In Flask, each request may be handled in a new thread. The rule of thumb: run I/O-bound tasks asynchronously on the event loop, and move CPU-bound tasks into separate processes.
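The asyncio.sleep versus time.sleep point can be demonstrated outside any web framework (a standalone sketch; the 0.5-second delay and the four simulated "requests" are arbitrary choices):

```python
import asyncio
import time


async def blocking_handler():
    time.sleep(0.5)           # blocks the whole event loop
    return 'done'


async def async_handler():
    await asyncio.sleep(0.5)  # yields to the event loop while waiting
    return 'done'


async def main():
    # Four "requests" with the blocking handler run strictly one after
    # another (~2s total); with the async handler they overlap (~0.5s).
    t0 = time.perf_counter()
    await asyncio.gather(*(blocking_handler() for _ in range(4)))
    blocking = time.perf_counter() - t0

    t0 = time.perf_counter()
    await asyncio.gather(*(async_handler() for _ in range(4)))
    overlapped = time.perf_counter() - t0
    return blocking, overlapped
```

On a typical machine the first batch takes about four times as long as the second, which is exactly why handlers in an async framework should await their I/O rather than block.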

That alone doesn't prove much, of course, so let's go on and stress-test both frameworks with the well-known ApacheBench.

A total of 5,000 requests with a concurrency level of 100 (please forgive my underpowered machine).

ab -n 5000 -c 100 http://127.0.0.1:5000/  
ab -n 5000 -c 100 http://127.0.0.1:8000/

To be fair, Flask is served by Gunicorn with 3 workers, and FastAPI by Uvicorn with 3 workers.

Flask pressure test results:

liuyue:mytornado liuyue$ ab -n 5000 -c 100 http://127.0.0.1:5000/
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests

Server Software:        gunicorn/20.0.4
Server Hostname:        127.0.0.1
Server Port:            5000

Document Path:          /
Document Length:        28 bytes

Concurrency Level:      100
Time taken for tests:   4.681 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      1060000 bytes
HTML transferred:       140000 bytes
Requests per second:    1068.04 [#/sec] (mean)
Time per request:       93.629 [ms] (mean)
Time per request:       0.936 [ms] (mean, across all concurrent requests)
Transfer rate:          221.12 [Kbytes/sec] received

FastAPI pressure test results:

liuyue:mytornado liuyue$ ab -n 5000 -c 100 http://127.0.0.1:8000/
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests

Server Software:        uvicorn
Server Hostname:        127.0.0.1
Server Port:            8000

Document Path:          /
Document Length:        19 bytes

Concurrency Level:      100
Time taken for tests:   2.060 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      720000 bytes
HTML transferred:       95000 bytes
Requests per second:    2426.78 [#/sec] (mean)
Time per request:       41.207 [ms] (mean)
Time per request:       0.412 [ms] (mean, across all concurrent requests)
Transfer rate:          341.27 [Kbytes/sec] received

Obviously, for the same 5,000 requests, Flask takes 4.681 seconds at 1,068.04 requests per second, while FastAPI takes 2.060 seconds at 2,426.78 requests per second.

Conclusion: once upon a time, the performance of Python web frameworks was an easy target for ridicule. Now the asynchronous Python ecosystem is changing dramatically: new frameworks are appearing (Sanic, FastAPI), old frameworks are being reinvented (Django 3.0), and many libraries are gaining async support (HTTPX, SQLAlchemy, Motor). The history of software shows that the arrival of a new technology often brings profound change to its field. As the old saying goes: those who see the trend are wise, those who ride the trend win, and those who command the trend stand alone. Only by embracing the future, embracing new technologies, and keeping up with the times can we stay on a sound and sustainable path.
