The CPU sends drawing commands to the GPU. If we follow a strict "send one command, perform one drawing operation" model, performance is wasted. If the GPU takes a long time to draw, must the CPU sit idle when it could have issued several more commands in the meantime? Conversely, if the CPU is occupied by another task, must the GPU wait?
This is the problem with a one-to-one connection, and it can be mitigated by introducing a buffer. Don't think of a "buffer zone" in war; think of a pool of water instead: one tap lets water in, another lets water out. When the pool gets too full, the inflow pauses; when the level drops, it opens again. This way the outflow tap can keep running without ever being forced to stop, and the inflow tap is free to control its own rhythm. Mapped onto the scenario above, the water flowing in is the CPU issuing drawing commands, and the water flowing out is the GPU performing drawing operations.
The nice thing about a buffer is that both sides can work at their own pace while the data still flows through strictly in order.
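The pool analogy maps directly onto a bounded queue. Here is a minimal sketch (the command names and sizes are made up for illustration): a "CPU" thread produces drawing commands and a "GPU" thread consumes them, and each side only pauses when the buffer is full or empty.

```python
import queue
import threading
import time

# The bounded queue plays the role of the pool: the producer (CPU) pauses
# only when the buffer is full, the consumer (GPU) only when it is empty.
command_buffer = queue.Queue(maxsize=8)

def cpu_producer():
    for i in range(20):
        command_buffer.put(f"draw-command-{i}")  # blocks if the buffer is full
    command_buffer.put(None)                     # sentinel: no more commands

done = []

def gpu_consumer():
    while True:
        cmd = command_buffer.get()               # blocks if the buffer is empty
        if cmd is None:
            break
        time.sleep(0.001)                        # simulate a slow drawing operation
        done.append(cmd)

t1 = threading.Thread(target=cpu_producer)
t2 = threading.Thread(target=gpu_consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(done))  # all 20 commands executed, still in order
```

Because the queue is FIFO, the GPU side sees the commands in exactly the order the CPU issued them, even though neither side ever waits for the other's individual steps.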
Video buffering
When I first used a computer in junior high school, a stalled video would show a "buffering" prompt. I knew it meant the video was loading, but why not just say "loading"? Why use the damn word "buffering", which evokes a buffer zone in a war, to remind users that the video needed to wait?
First of all, I think user experience was not taken very seriously back then. Many things were named in "programmer style", and quite a few Windows messages genuinely scared computer beginners.
The other reason is the programming logic behind that "programmer style" name. From a program's perspective, where does the concept of "buffering" come from?
Video playback is divided into two parts: decoding and rendering.
- Decoding: fetch packets from the network or a local file, extract the frame data, and decode it.
- Rendering: turn the decoded frame data into image data for display, or hand it to a graphics library such as OpenGL to display.
Doesn't the relationship between these two parts look just like that between the CPU and GPU above? In a "one-to-one" mode, where one frame is fetched and one frame is played, even slight network instability makes playback stutter, alternately slow and fast.
A buffer solves this nicely, which is probably why the "buffering" prompt appears when video playback freezes.
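The same bounded-queue idea applies here. The sketch below is a toy model, not any real player's API: a decoder thread fills a frame buffer, a renderer thread drains it, and when the buffer runs dry (a simulated network hiccup) the renderer is in exactly the state the "buffering..." prompt describes.

```python
import queue
import threading
import time

# Toy frame buffer between a decoder and a renderer; all names and
# timings here are invented for illustration.
frame_buffer = queue.Queue(maxsize=30)   # roughly one second of 30 fps video

def decoder():
    for i in range(60):
        if i == 30:
            time.sleep(0.6)              # simulate a network hiccup mid-stream
        frame_buffer.put(f"frame-{i}")   # blocks when the buffer is full
    frame_buffer.put(None)               # sentinel: end of stream

rendered = []

def renderer():
    while True:
        try:
            frame = frame_buffer.get(timeout=0.5)
        except queue.Empty:
            print("buffering...")        # nothing left to render; wait for more
            continue
        if frame is None:
            break
        rendered.append(frame)           # stand-in for drawing the frame

d = threading.Thread(target=decoder)
r = threading.Thread(target=renderer)
d.start(); r.start()
d.join(); r.join()
```

As long as the decoder stays ahead on average, the renderer plays at a steady pace; only when the buffer is truly empty does the user see "buffering".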
The word "buffer"
Used in the right context, the term "buffer" essentially denotes a chunk of memory managed with an "in at one end, out at the other" dynamic storage pattern. The "cushioning" sense of the word probably comes from the fact that it eases the tight coupling between the two ends of a connection.
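That "chunk of memory with in-out dynamic storage" can be made concrete with a ring (circular) buffer. The class below is an illustrative sketch, not taken from any library: a fixed block of slots is reused over and over as items flow in one end and out the other.

```python
# A minimal ring buffer: a fixed chunk of memory reused in a
# first-in, first-out pattern. All names here are illustrative.
class RingBuffer:
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.capacity = capacity
        self.head = 0   # index of the next item to read
        self.size = 0   # number of items currently stored

    def put(self, item):
        if self.size == self.capacity:
            raise OverflowError("buffer full")
        # write just past the last stored item, wrapping around
        self.data[(self.head + self.size) % self.capacity] = item
        self.size += 1

    def get(self):
        if self.size == 0:
            raise IndexError("buffer empty")
        item = self.data[self.head]
        self.head = (self.head + 1) % self.capacity  # advance, wrapping around
        self.size -= 1
        return item

buf = RingBuffer(3)
for x in "abc":
    buf.put(x)
print(buf.get())                      # prints a  (first in, first out)
buf.put("d")                          # the freed slot is reused: dynamic storage
print([buf.get() for _ in range(3)])  # prints ['b', 'c', 'd']
```

The memory never grows: the same three slots are recycled as data streams through, which is exactly the "in-out dynamic storage" pattern described above.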
To give a vivid example, consider two cases:
- Two people make dumplings together: one rolls the wrappers, the other fills them. Put a basin between them, and the roller just keeps rolling wrappers and dropping them into the basin, while the filler just keeps taking wrappers out and wrapping. The basin is the buffer.
- The two stand on a wire in the air, two meters apart, and the roller has to toss each wrapper directly to the filler. This is the one-to-one model.
Compare how tense the work is in the two situations, and you will see what the "cushioning" in "buffer" really means.