Understand the nature of buffering
A buffer is used to temporarily store data and then transfer or process it in batches, usually in sequence, to smooth out frequent but slow random reads and writes between different devices.
You can think of a buffer as a reservoir. The tap at the bottom is always running: as long as there is water in the pool, it flows at a constant rate without pausing. The faucet filling the pool, on the other hand, supplies water at an uncertain rate, sometimes fast and sometimes very slow. By watching the water level in the pool, it can freely adjust its filling speed.
Or imagine making dumplings, where the filler waits for the wrappers to be rolled. If each wrapper is handed over one at a time, the process is very slow; but if you put a basin in the middle, the roller simply throws wrappers in and the filler takes them out, and everything speeds up. Many factory assembly lines use the same approach, which shows how popular and practical the concept of a “buffer” is.
At a macro level, the JVM’s heap is one large buffer: code constantly produces objects in it, while the garbage collector quietly cleans up in the background.
From the metaphors above, you can see the benefits of buffers:
- Both sides of the buffer can keep their own pace; operations are not disrupted and can proceed one by one in order;
- Batching reduces network round trips and heavy I/O operations, thereby reducing performance loss;
- The user experience improves, as with the familiar audio/video buffering that caches data in advance for smooth playback.
Buffering is widely used in the Java language. If you search for Buffer in IDEA, you’ll see a long list of classes, the most typical of which are the file read/write character streams.
File read/write streams
Next, I’ll use file-reading and file-writing character streams as the example.
Java’s I/O stream design uses the decorator pattern: when new functionality needs to be added to a class, the object to be decorated is passed to the decorator through a constructor parameter, and the decorator wraps it with the new behavior. For adding functionality, the decorator pattern is more flexible than subclassing.
Among the stream APIs, BufferedInputStream and BufferedReader speed up reading, while BufferedOutputStream and BufferedWriter speed up writing.
The following is the code to read the file directly:
```java
int result = 0;
try (Reader reader = new FileReader(FILE_PATH)) {
    int value;
    while ((value = reader.read()) != -1) {
        result += value;
    }
}
return result;
```
To use buffered reading, just decorate the FileReader:
```java
int result = 0;
try (Reader reader = new BufferedReader(new FileReader(FILE_PATH))) {
    int value;
    while ((value = reader.read()) != -1) {
        result += value;
    }
}
return result;
```
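As a quick sanity check, the two versions read the same data and produce the same sum; only the number of calls into the underlying device differs. A small self-contained sketch (using a temporary file in place of FILE_PATH, and a hypothetical `sumChars` helper wrapping the article's read loop):

```java
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferedReadDemo {
    // The article's read loop: sum every character returned by read().
    static int sumChars(Reader reader) throws IOException {
        int result = 0;
        int value;
        while ((value = reader.read()) != -1) {
            result += value;
        }
        return result;
    }

    // Convenience wrapper for in-memory data.
    static int sumString(String s) {
        try (Reader r = new StringReader(s)) {
            return sumChars(r);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("buffer-demo", ".txt");
        Files.writeString(file, "hello buffer".repeat(10_000));

        int direct;    // unbuffered: each read() may hit the device
        try (Reader r = new FileReader(file.toFile())) {
            direct = sumChars(r);
        }

        int buffered;  // buffered: most read() calls hit the in-memory array
        try (Reader r = new BufferedReader(new FileReader(file.toFile()))) {
            buffered = sumChars(r);
        }

        System.out.println(direct == buffered);   // prints "true"
        Files.delete(file);
    }
}
```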
Let’s first look at the read implementation in the BufferedInputStream class:
```java
public synchronized int read() throws IOException {
    if (pos >= count) {
        fill();
        if (pos >= count)
            return -1;
    }
    return getBufIfOpen()[pos++] & 0xff;
}
```
When the contents of the buffer have been fully read, the fill function tries to refill the buffer from the underlying input stream:
```java
private void fill() throws IOException {
    byte[] buffer = getBufIfOpen();
    if (markpos < 0)
        pos = 0;            /* no mark: throw away the buffer */
    else if (pos >= buffer.length)  /* no room left in buffer */
        if (markpos > 0) {  /* can throw away early part of the buffer */
            int sz = pos - markpos;
            System.arraycopy(buffer, markpos, buffer, 0, sz);
            pos = sz;
            markpos = 0;
        } else if (buffer.length >= marklimit) {
            markpos = -1;   /* buffer got too big, invalidate mark */
            pos = 0;        /* drop buffer contents */
        } else if (buffer.length >= MAX_BUFFER_SIZE) {
            throw new OutOfMemoryError("Required array size too large");
        } else {            /* grow buffer */
            int nsz = (pos <= MAX_BUFFER_SIZE - pos) ?
                    pos * 2 : MAX_BUFFER_SIZE;
            if (nsz > marklimit)
                nsz = marklimit;
            byte nbuf[] = new byte[nsz];
            System.arraycopy(buffer, 0, nbuf, 0, pos);
            if (!bufUpdater.compareAndSet(this, buffer, nbuf)) {
                // Can't replace buf if there was an async close.
                // Note: This would need to be changed if fill()
                // is ever made accessible to multiple threads.
                // But for now, the only way CAS can fail is via close.
                // assert buf == null;
                throw new IOException("Stream closed");
            }
            buffer = nbuf;
        }
    count = pos;
    int n = getInIfOpen().read(buffer, pos, buffer.length - pos);
    if (n > 0)
        count = n + pos;
}
```
The program adjusts the read position, updates the buffer bookkeeping, and then reads data through the decorated InputStream:
```java
int n = getInIfOpen().read(buffer, pos, buffer.length - pos);
```
So why do this at all? Can’t we just read and write directly?
Because the targets behind character streams are typically files or sockets, and fetching data from these slow devices through frequent small interactions is very slow. The buffer’s data sits in memory, which significantly improves read and write speed.
With all the benefits, why not read all the data into the buffer?
This is a tradeoff. If the buffer is too large, it increases the latency of a single read or write, and memory is too expensive to use without limit. The default buffer size for buffered streams is 8192 (8 KB for byte streams), which is a compromise value.
It is like moving bricks: carry them one by one and time is wasted walking back and forth; give the worker a small cart and the number of round trips drops sharply, so efficiency naturally improves.
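The default can be overridden: BufferedReader (like BufferedInputStream) accepts an explicit buffer size in its two-argument constructor. A minimal sketch, with arbitrary size values and a hypothetical `readFirst` helper:

```java
import java.io.*;

public class BufferSizeDemo {
    // Read the first character through a BufferedReader with an explicit
    // buffer size; omitting the second argument uses the 8192 default.
    static int readFirst(String data, int bufferSize) {
        try (Reader reader = new BufferedReader(new StringReader(data), bufferSize)) {
            return reader.read();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // A larger buffer means fewer calls into the underlying stream,
        // at the cost of more memory and a longer fill per call.
        System.out.println(readFirst("data", 16));        // small buffer
        System.out.println(readFirst("data", 64 * 1024)); // large buffer
    }
}
```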
Buffer optimization
There is no doubt that buffering improves performance, but it often introduces asynchrony, which complicates the programming model.
Using file read/write streams and Logback as examples, let’s look at some general approaches to buffer design.
Suppose resource A reads from or writes to resource B. This is a normal flow of operations, but inserting an extra storage layer cuts the flow in two, and you must coordinate the two halves manually.
Depending on the resources involved, the split operations fall into synchronous and asynchronous modes.
1. Synchronize operations
The programming model for synchronous operations is relatively simple and can be implemented on a single thread. You only need to control the buffer size and the flush timing: for example, a batch operation is triggered when the buffer reaches a size threshold, or when elements have sat in the buffer past a timeout.
Because everything happens on a single thread (or inside a synchronized block), and resource B’s processing capacity is limited, many operations block and wait on the calling thread. For example, when writing a file, you must wait for the previous data to be written before handling subsequent requests.
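The threshold-triggered flush described above can be sketched in a few lines. This is a hypothetical minimal example (the class and its names are illustrative, and a real implementation would also flush on a timeout):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A minimal synchronous batch buffer: items accumulate in memory, and a
// flush fires on the caller's thread once a size threshold is reached.
public class BatchBuffer<T> {
    private final List<T> buffer = new ArrayList<>();
    private final int threshold;
    private final Consumer<List<T>> flusher;

    public BatchBuffer(int threshold, Consumer<List<T>> flusher) {
        this.threshold = threshold;
        this.flusher = flusher;
    }

    public synchronized void add(T item) {
        buffer.add(item);
        if (buffer.size() >= threshold) {
            flush();   // the calling thread blocks until the batch is written
        }
    }

    public synchronized void flush() {
        if (!buffer.isEmpty()) {
            flusher.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        BatchBuffer<String> logs = new BatchBuffer<>(3,
                batch -> System.out.println("flushing " + batch.size() + " items"));
        logs.add("a");
        logs.add("b");
        logs.add("c");   // threshold reached: flush fires here
    }
}
```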
2. Asynchronous operation
Asynchronous operations are much more complicated.
The buffer’s producer is usually called synchronously, though it can also fill the buffer asynchronously. Once the buffer fills up, the producer needs a response strategy.
These strategies should be abstracted and chosen according to business requirements, such as discarding directly, throwing an exception, or blocking on the user’s thread. You will notice this resembles a thread pool’s saturation policies, which are explained in detail in Lecture 12.
Many applications adopt more complex strategies, such as setting a timeout while the user thread waits, or registering a callback that fires after an element successfully enters the buffer.
Consumption of the buffer is usually driven by dedicated threads. If multiple threads consume the same buffer, information synchronization and ordering problems arise.
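A common way to assemble these pieces in Java is a bounded BlockingQueue with a background consumer thread. The sketch below is hypothetical (class name, capacity, and timeout are illustrative); the timeout on `offer()` stands in for one of the saturation strategies mentioned above:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// An asynchronous buffer: producers offer into a bounded queue, and a
// single daemon consumer thread drains it in the background.
public class AsyncBuffer {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

    public AsyncBuffer() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    process(queue.take());   // blocks until data arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    // Returns false if the buffer stays full past the timeout; the caller
    // can then discard, throw, or fall back to a synchronous write.
    public boolean submit(String item) {
        try {
            return queue.offer(item, 100, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    void process(String item) {
        // write to the slow resource B here
    }

    public static void main(String[] args) {
        System.out.println(new AsyncBuffer().submit("order-1"));   // prints "true"
    }
}
```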
3. Other practices
There are many ways to use buffers to improve performance. Here are a few more examples:
- StringBuilder and StringBuffer speed up string concatenation by accumulating the pieces in an internal buffer before producing the final string.
- For disk writes and network I/O, the operating system maintains its own buffers to improve throughput; you can force data out with a flush call. Socket parameters such as SO_SNDBUF and SO_RCVBUF can likewise be tuned to improve network transfer performance.
- MySQL’s InnoDB engine can reduce page swapping and improve database performance when innodb_buffer_pool_size is configured properly.
- Buffering is also used in lower-level tools. Common ID generators, for example, hand out IDs from a buffered segment so that users avoid frequent, time-consuming interactions.
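The StringBuilder case from the first bullet is easy to demonstrate. A small sketch (`joinDigits` is a hypothetical helper):

```java
public class ConcatDemo {
    // Each `+=` on a String copies the entire accumulated content into a new
    // object; StringBuilder appends into an internal char buffer instead and
    // only materializes the String once, at toString().
    static String joinDigits(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(joinDigits(5));   // prints "0,1,2,3,4,"
    }
}
```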
4. Precautions
While a buffer can greatly improve application performance, it also brings its own problems, and we should anticipate these failure modes at design time.
The most serious is loss of buffered content. Even a graceful shutdown via addShutdownHook cannot cover every case, such as a sudden power failure or the application process being killed outright. At that point, the unprocessed data in the buffer is lost, which is especially serious for financial or e-commerce order data.
Therefore, write a log entry before putting content into the buffer, and recover the data from the log after a failure. The database field, where file buffering is everywhere, generally solves this with write-ahead logging (WAL). Systems that are strict about data integrity may even rely on batteries or a UPS to guarantee that buffered data reaches disk. These are the new problems that performance optimization itself introduces and must solve.
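The write-ahead idea fits in a few lines. This is a toy illustration only, not a real WAL (no fsync, no log compaction; the class and file names are hypothetical): the record is appended to a durable log before it enters the in-memory buffer, so a crash can be recovered by replaying the log.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayDeque;
import java.util.Queue;

public class WalBuffer {
    private final Path log;
    private final Queue<String> buffer = new ArrayDeque<>();

    public WalBuffer(Path log) {
        this.log = log;
    }

    public void write(String record) {
        try {
            // 1. durable first: this line survives a crash
            Files.writeString(log, record + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        // 2. only then buffer in memory for batch processing
        buffer.add(record);
    }

    public int pending() {
        return buffer.size();
    }

    public static void main(String[] args) {
        WalBuffer wal = new WalBuffer(Path.of("wal-demo.log"));
        wal.write("order-1");
        wal.write("order-2");
        System.out.println(wal.pending());   // prints "2"
    }
}
```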