First, the filter principle
In many short-video apps, you often see a filter applied to the live picture during camera capture, which makes the captured image more interesting.
So how do these filters actually work?
In the previous article, we implemented loading an image with GLSL. The custom fragment shader looked like this:
```glsl
precision highp float;

varying lowp vec2 varyTextCoord; // texture coordinates from the vertex shader
uniform sampler2D colorMap;      // texture data

void main() {
    vec4 texture = texture2D(colorMap, varyTextCoord); // read the texel at this coordinate
    gl_FragColor = texture;
}
```
The texture data and texture coordinates are used to generate texels, which are then rendered to the screen. If we intervene on the texels at this point, the rendered result changes. For example, to lower the red channel of the entire image, we can operate on the texel inside the fragment shader:
```glsl
vec4 texture = texture2D(colorMap, varyTextCoord);
texture.r *= 0.5; // halve the red channel
gl_FragColor = texture;
```
In this way, the red channel of the whole image is halved, and the rendered result changes accordingly.
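For completeness, here is a minimal sketch of what the full fragment shader might look like with this tweak folded in (not from the original article, but reusing the varyTextCoord and colorMap names defined above):

```glsl
precision highp float;

varying lowp vec2 varyTextCoord; // texture coordinates
uniform sampler2D colorMap;      // texture data

void main() {
    vec4 texture = texture2D(colorMap, varyTextCoord);
    texture.r *= 0.5; // halve the red channel before output
    gl_FragColor = texture;
}
```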
As we all know, a camera preview is really just a continuous stream of image frames captured by the camera. So when GLSL is used to intervene on the image data of every frame, we get a modified picture each frame, and that is what we commonly see as a filter.
Second, the split-screen filter
In some apps, there are filters that split the current screen into two identical copies of the picture:
If you think about the principle, this filter is actually quite simple to implement: it only displays a certain band of the texels, and when a pixel's position is outside the band that should be displayed, we shift the texture coordinate it samples from by some distance.
The blue part is the band we need to display in both halves of the split screen, and the green parts are the regions we need to remap.
- When a pixel is in the upper green part, its position is outside the range to be displayed, so the texture coordinate it samples from needs to be moved down by 0.25.
- The upper half of the blue band then corresponds to the top green part, so the texture coordinates read by the upper blue part also need to be moved down by 0.25.
- When a pixel is in the lower green part, its position is outside the range to be displayed, so the texture coordinate it samples from needs to be moved up by 0.25.
- The lower half of the blue band then corresponds to the bottom green part, so the texture coordinates read by the lower blue part also need to be moved up by 0.25.
In summary: when a pixel's y coordinate is in the range [0, 0.5], we move the sampled y coordinate down by 0.25; when it is in the range (0.5, 1], we move it up by 0.25. For example, a pixel at y = 0.1 reads the texel at y = 0.35, which lies inside the displayed blue band [0.25, 0.75].
Third, code implementation
```glsl
precision highp float;

varying lowp vec2 varyTextCoord; // texture coordinates
uniform sampler2D colorMap;      // texture data

void main() {
    vec2 uv = varyTextCoord.xy; // current texture coordinate
    float y;                    // new y coordinate
    if (uv.y >= 0.0 && uv.y <= 0.5) {
        y = uv.y + 0.25; // upper half: move the sampled y coordinate down
    } else {
        y = uv.y - 0.25; // lower half: move the sampled y coordinate up
    }
    gl_FragColor = texture2D(colorMap, vec2(uv.x, y)); // sample the remapped texel and render it
}
```
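As a side note (my own observation, not from the article): since both halves display the middle band [0.25, 0.75] of the image, the branch collapses to a single expression, `y = mod(uv.y, 0.5) + 0.25`, which matches the if/else everywhere except exactly at y = 0.5. A minimal sketch:

```glsl
precision highp float;

varying lowp vec2 varyTextCoord; // texture coordinates
uniform sampler2D colorMap;      // texture data

void main() {
    vec2 uv = varyTextCoord.xy;
    // Both halves show the middle band of the image,
    // so the remapping can be written without a branch.
    float y = mod(uv.y, 0.5) + 0.25;
    gl_FragColor = texture2D(colorMap, vec2(uv.x, y));
}
```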
Fourth, summary
There are many very similar filters: three-way split screen, four-way split screen, nine-way split screen, and so on. Now that we know how it works, these essentially come down to a recalculation of the x and y texture coordinates, so that the texture data each coordinate maps to is exactly the part that needs to be displayed (see the sketch below). Think also of mosaic filters, where each pixel and its surrounding area are blurred into a single color, or beauty filters that increase the brightness of the current color. These will be studied in future articles.
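To make that generalization concrete, here is a hedged sketch (my own, following the same pattern as the two-way split above, not code from this series) of a three-way split that repeats the middle third of the image in each band:

```glsl
precision highp float;

varying lowp vec2 varyTextCoord; // texture coordinates
uniform sampler2D colorMap;      // texture data

void main() {
    vec2 uv = varyTextCoord.xy;
    // Each of the three horizontal bands shows the middle third
    // of the image, mirroring the two-way split construction.
    float y = mod(uv.y, 1.0 / 3.0) + 1.0 / 3.0;
    gl_FragColor = texture2D(colorMap, vec2(uv.x, y));
}
```

A nine-way split can be built the same way by remapping x as well, or by tiling the whole image three times in each direction with `gl_FragColor = texture2D(colorMap, fract(uv * 3.0));`.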