Synchronous blocking programming
Most application and service development frameworks are based on multithreading. With one thread per connection, developers can use traditional imperative code.
Multiple threads can live in the same process, perform work in parallel, and share the same memory space.
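As a concrete illustration, here is a minimal thread-per-connection echo server in plain JDK Java. The class and method names, and the trivial echo protocol, are illustrative only, not taken from any framework:

```java
import java.io.*;
import java.net.*;

public class ThreadPerConnectionServer {

    // Starts a server on an ephemeral port, accepts each connection on its own
    // thread, sends one line through it, and returns the echoed line.
    static String echoOnce(String message) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread acceptor = new Thread(() -> {
                try {
                    while (true) {
                        Socket socket = server.accept();          // blocks until a client connects
                        new Thread(() -> handle(socket)).start(); // one thread per connection
                    }
                } catch (IOException ignored) { /* server closed */ }
            });
            acceptor.setDaemon(true);
            acceptor.start();

            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                out.println(message);
                return in.readLine(); // blocks until the echo arrives
            }
        }
    }

    // Traditional imperative handler: each read blocks this thread until data arrives.
    static void handle(Socket socket) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line); // echo the line back
            }
        } catch (IOException ignored) { }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(echoOnce("hello"));
    }
}
```

The imperative style is easy to read, but note how both `accept()` and `readLine()` pin a whole thread while they wait, which is exactly the cost discussed next.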
Problems with synchronous blocking programming
The calling thread may block when it uses a traditional blocking API to do the following:
- Read data from a socket
- Write data to disk
- Send a message to a recipient and wait for the response
- Many other operations
While it waits for the result, the thread can do nothing else.
This means that if you use a blocking API to handle a lot of concurrency, you need a large number of threads to keep your application from stalling.
The memory these threads consume (such as their stacks) and the cost of context switching between them can be significant. (A thread typically takes about 512 KB to 1 MB of memory, and a context switch takes roughly 2.7 to 5.48 µs.)
Disadvantages are as follows:
- Under heavy load, dedicating a thread to each request causes excessive context switching.
- Threads in the thread pool that are blocked on I/O cannot be reused (to process new HTTP requests).
- Memory shared between multiple threads must be made thread-safe.
- The blocking approach is difficult to scale for the level of concurrency required by modern applications.
Vert.x asynchronous non-blocking programming
Using asynchronous I/O, you can handle more connections with fewer threads. When a task performs an I/O operation, asynchronous I/O does not block the thread; instead, the thread works on other pending tasks and resumes the original one once the I/O result is ready.
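A minimal sketch of this idea using plain `CompletableFuture`: the "read" below is a simulated asynchronous operation, and all names are hypothetical, but the shape is the same — the calling thread is free to do other work while the I/O is in flight:

```java
import java.util.concurrent.*;

public class AsyncSketch {

    // Simulated asynchronous I/O: completes on another thread after a delay,
    // so the caller's thread is never blocked waiting for it.
    static CompletableFuture<String> readDataAsync(Executor executor) {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            return "data";
        }, executor);
    }

    static String run() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            CompletableFuture<String> pending = readDataAsync(pool)
                    .thenApply(data -> data + "-processed"); // continuation runs when the I/O is ready

            // The current thread is free to do other pending work here,
            // instead of sitting blocked on the read.
            String otherWork = "did-other-work";

            return otherWork + ":" + pending.join();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```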
Vert.x uses event loop multiplexing to process workloads concurrently.
Code running on an event loop must not perform blocking I/O or long-running computation. Hence Vert.x's golden rule: don't block the event loop.
The Golden Rule – Don’t Block the Event Loop
Think about it:
How long can you afford to block? That depends on your application and the amount of concurrency it requires.
If you have only a single event loop and you want to process 10,000 HTTP requests per second, then each request can take no more than 0.1 milliseconds to process, so you cannot block for longer than 0.1 milliseconds.
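The arithmetic above, as a tiny sketch (the method name is illustrative):

```java
public class LatencyBudget {

    // Per-request time budget on a single event loop at a target rate:
    // budget (ms) = 1000 ms / requests-per-second.
    static double budgetMillis(int requestsPerSecond) {
        return 1000.0 / requestsPerSecond;
    }

    public static void main(String[] args) {
        System.out.println(budgetMillis(10_000)); // 0.1 ms per request
    }
}
```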
Because the Vert.x APIs do not block threads, Vert.x lets you handle a large number of concurrent requests with only a few threads. For example, a single event loop can serve many thousands of HTTP requests very quickly. Each Vertx instance maintains multiple event loop threads; by default, the size of the event loop pool is derived from the number of CPU cores available on the machine.
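Here is a plain-JDK analogue of the golden rule: a single-threaded executor stands in for an event loop, and the blocking call is offloaded to a worker pool so the loop stays responsive. In real Vert.x you would use `executeBlocking` or a worker verticle for this; all names and values below are illustrative:

```java
import java.util.concurrent.*;

public class EventLoopRule {

    static String handleRequest() throws Exception {
        // Single-threaded executor standing in for an event loop: every task
        // shares this one thread, so a blocked task would stall all the others.
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        // Blocking or long-running work belongs on a separate worker pool.
        ExecutorService workerPool = Executors.newFixedThreadPool(4);
        try {
            CompletableFuture<String> result = new CompletableFuture<>();
            eventLoop.submit(() -> {
                // On the event loop: only fast, non-blocking coordination.
                CompletableFuture
                        .supplyAsync(EventLoopRule::slowDatabaseCall, workerPool) // offloaded
                        .thenAcceptAsync(result::complete, eventLoop);            // resume on the loop
            });
            return result.get(5, TimeUnit.SECONDS);
        } finally {
            eventLoop.shutdown();
            workerPool.shutdown();
        }
    }

    // Hypothetical blocking call: run directly on the event loop,
    // it would freeze every other task for 100 ms.
    static String slowDatabaseCall() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
        return "row-42";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleRequest());
    }
}
```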
Best practices for Vert.x
The client initiates a request; the server receives it and dispatches it over the event bus to a method of the appropriate service. That method performs an asynchronous database query, and once the asynchronous result (Future/Promise/Coroutines) arrives, a response is sent back over the event bus and finally written back to the client. The entire flow is asynchronous.
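This flow can be sketched with a toy in-process "event bus" built on `CompletableFuture`. The bus, the address, and the database call below are all stand-ins, not the Vert.x API, but the shape — dispatch to a service, async query, async reply — is the same:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.*;

public class RequestFlow {

    // Toy event bus: maps an address to a handler that replies asynchronously.
    static final Map<String, Function<String, CompletableFuture<String>>> bus =
            new ConcurrentHashMap<>();

    static void consumer(String address, Function<String, CompletableFuture<String>> handler) {
        bus.put(address, handler);
    }

    static CompletableFuture<String> request(String address, String message) {
        return bus.get(address).apply(message);
    }

    // Hypothetical asynchronous database query (a real app would use an async DB client).
    static CompletableFuture<String> queryDatabase(String id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    static String handleClientRequest(String id) {
        // The service registers itself on the bus and answers with an async DB lookup.
        consumer("user.get", userId ->
                queryDatabase(userId).thenApply(row -> "{\"name\":\"" + row + "\"}"));

        // The server receives the client request, dispatches it over the bus,
        // and writes the reply back once the future completes.
        return request("user.get", id).join();
    }

    public static void main(String[] args) {
        System.out.println(handleClientRequest("7"));
    }
}
```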
If you're interested, you can clone my code: code repository