Programmable shaders are programs that run on the GPU and are the foundation of modern graphics rendering techniques. Shaders give developers the ability to color "per pixel," and they sit at the core of desktop and mobile graphics APIs such as OpenGL, OpenGL ES, DirectX, and the newer Vulkan.

WebGL makes rendering 3D scenes in a browser a breeze. But WebGL and shaders can do more than render 3D scenes; they can also do some other cool things. A few days ago I used shader techniques to rebuild a business page I had developed earlier, and I gained enough experience and inspiration along the way that it seemed worth writing down.

A quick plug: with the help of GCanvas, front-end developers can apply the techniques in this article in Hybrid environments such as Weex and RN. See GCanvas for details.

Let’s take a look at the effect: links

On the left is the original page (address); on the right is the page rebuilt with shaders (address).

This is actually an activity page from the 2018 Spring Festival Gala project. The page structure is very simple, and since I wrote the page back then, it still looks familiar.

We can see that the page before the rewrite (hereafter "the original page") is static, loading 1 JS file and 6 images for a total of 599K of resources (including one 502K transparent PNG). The rebuilt page has many moving elements, yet loads only 1 JS file and 4 images, 122K in total. The improvement in both visual effect and page weight is obvious.

Let's take this page as an example and analyze how shaders make a page smaller, shinier, and faster.

Reading the following requires some basic knowledge of WebGL and GLSL. Those who have attended the training sessions in our group should feel no pressure; for everyone else, The Book of Shaders is a good starting point. Shaders are easier to use than you think, trust me.

Large transparent background

One of the biggest problems with the original page was that it had a very large transparent background.

This image is a whopping 501K because it is a PNG with an alpha channel. Since the image is advertising content, there may be more than one of them, so it cannot be baked into the background and must stay transparent. So how do we optimize it? PNGs with alpha channels are hard to compress, but images without an alpha channel can be converted to JPG and similar formats with much higher compression ratios, so we can trade a small loss of quality for much greater savings.

My idea is to split the transparent PNG into two opaque images: one inherits the RGB channels of the PNG, and the other stores the PNG's alpha channel in its own R channel. The two images are then stitched together, one above the other, for WebGL to use. Since the stitched image has no alpha component, it can be compressed as a JPG and shrinks dramatically: this one is only 41.5K, about 8.2% of its former size.

It looks something like this:

Note that the first image is 750×571 pixels and the second is 1024×1024. I did not compress the image by shrinking its pixel dimensions.

Also, the second image looks a little stretched: its dimensions were made integer powers of 2 so that WebGL can easily generate mipmaps, and the distortion doesn't affect our use.

In the shader, we sample colors from the image by pixel coordinates. For each output pixel we take one sample from the top half and one from the bottom half of the image, then combine them according to a fixed rule.

```glsl
precision mediump float;
uniform vec2 uResolution;
uniform sampler2D uImage;

void main(){
    vec2 st = gl_FragCoord.xy / uResolution;
    vec4 c2 = texture2D(uImage, vec2(st.x, st.y * 0.5));
    vec4 c1 = texture2D(uImage, vec2(st.x, st.y * 0.5 + 0.5));
    gl_FragColor = vec4(c1.xyz, c2.r > 0.6 ? c2.r : 0.0);
}
```

Converting the PNG into this stitched image can be done easily in the browser with Canvas, or with other tools.
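As a sketch of what that conversion does to the pixels (a hypothetical helper operating on a raw RGBA buffer such as Canvas ImageData; in a browser you would obtain the buffer with getImageData and export the stacked result as a JPG with toDataURL('image/jpeg')):

```javascript
// Split an RGBA pixel buffer (width * height * 4 bytes, as in Canvas
// ImageData) into a stacked buffer twice as tall: one half keeps the
// RGB channels (alpha forced to 255), the other stores the original
// alpha channel in its R channel. Hypothetical sketch, not the exact
// tool used for this article.
function splitAlpha(rgba, width, height) {
  const out = new Uint8ClampedArray(width * height * 2 * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const src = (y * width + x) * 4;
      const rgbHalf = src;                          // first half: RGB
      const aHalf = ((y + height) * width + x) * 4; // second half: A-in-R
      out[rgbHalf] = rgba[src];                     // R
      out[rgbHalf + 1] = rgba[src + 1];             // G
      out[rgbHalf + 2] = rgba[src + 2];             // B
      out[rgbHalf + 3] = 255;                       // fully opaque
      out[aHalf] = rgba[src + 3];                   // A stored as R
      out[aHalf + 1] = 0;
      out[aHalf + 2] = 0;
      out[aHalf + 3] = 255;
    }
  }
  return out;
}
```

The shader then simply reverses this mapping, reading RGB from one half and alpha from the other.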

Moving background

First, we notice that the background of the original page is a red gradient with small yellow atmosphere fragments scattered randomly across it. The red gradient and the fragments are all drawn in a single static JPG image, as shown in (1. the original image).

Before the rewrite, I divided the images used on the original page into two categories: pattern images and content images. Pattern images carry no information themselves and are usually used for shading, atmosphere, and similar purposes; content images are the carriers of information.

This picture is obviously a pattern. Such pictures tend to be large in both dimensions and file size (especially translucent ones). In fact, patterns like this can be "hand-drawn" by a shader without loading any image at all. Another advantage of drawing patterns with a shader is that they can move. In this case, wouldn't it be great if the pieces drifted down like falling petals? Anyone familiar with front-end animation knows that an animation with this many particles performs poorly in pure CSS or Canvas 2D: the more particles, the worse it gets. Particle animation with WebGL and shaders does not suffer this degradation as the particle count grows.

Let's see how to draw these fragments with a shader.

```glsl
vec2 random2(vec2 st){
    st = vec2( dot(st, vec2(127.1, 311.7)),
               dot(st, vec2(269.5, 183.3)) );
    st = -1.0 + 2.0 * fract(sin(st) * 43758.5453123);
    return st;
}

float noise2(vec2 ist, vec2 fst){
    vec2 g1 = random2(ist + vec2(0.0, 0.0));
    vec2 g2 = random2(ist + vec2(1.0, 0.0));
    vec2 g3 = random2(ist + vec2(0.0, 1.0));
    vec2 g4 = random2(ist + vec2(1.0, 1.0));
    vec2 f1 = fst - vec2(0.0, 0.0);
    vec2 f2 = fst - vec2(1.0, 0.0);
    vec2 f3 = fst - vec2(0.0, 1.0);
    vec2 f4 = fst - vec2(1.0, 1.0);
    float p1 = dot(g1, f1);
    float p2 = dot(g2, f2);
    float p3 = dot(g3, f3);
    float p4 = dot(g4, f4);
    fst = smoothstep(0.0, 1.0, fst);
    float p = mix( mix(p1, p2, fst.x),
                   mix(p3, p4, fst.x),
                   fst.y );
    return p;
}

float inFrag(){
    vec2 st = gl_FragCoord.xy / uResolution.xx;
    st = st * 60.0;
    float res = noise2(floor(st), fract(st));
    return res;
}

void main(){
    float pct = inFrag();
    gl_FragColor = vec4(vec3(pct), 1.0);
}
```

First we use a gradient noise function, noise2 (see this tutorial). For each pixel, we feed in the pixel coordinates and the function outputs a grayscale value; its output looks roughly like (2. gradient noise) above. If you are unfamiliar with how noise functions work, it doesn't really matter: the community offers a wide variety of ready-made functions, and you only need to know their effect without getting caught up in the internal mechanics. Obviously there is still a gap between the gradient noise in Figure 2 and what we want, so next we use a sieve to screen out the points brighter than a certain threshold:

```glsl
float inFrag(){
    ...
    res = step(0.5, res);
    return res;
}
```

Here the step function directly sieves out the points greater than 0.5. To keep the edges of the fragments smooth, we use the smoothstep function instead:

```glsl
res = smoothstep(0.35, 0.5, res);
```

Thus, we get the result shown in Figure 3.
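For readers who want to poke at these functions outside of GLSL, the same math ports directly to JavaScript (a sketch mirroring random2, noise2, and the sieve above; fract, mix, and smoothstep are reimplemented since JS lacks them):

```javascript
// JS port of the gradient-noise shader and the smoothstep sieve,
// for experimenting off the GPU. Vectors are [x, y] arrays.
const fract = (x) => x - Math.floor(x);
const clamp = (x, a, b) => Math.min(Math.max(x, a), b);
const mix = (a, b, t) => a * (1 - t) + b * t;
const smoothstep = (e0, e1, x) => {
  const t = clamp((x - e0) / (e1 - e0), 0, 1);
  return t * t * (3 - 2 * t);
};
const dot2 = (a, b) => a[0] * b[0] + a[1] * b[1];

// random2: hashes an integer lattice point to a gradient in [-1, 1)^2
function random2(st) {
  const d = [dot2(st, [127.1, 311.7]), dot2(st, [269.5, 183.3])];
  return d.map((v) => -1 + 2 * fract(Math.sin(v) * 43758.5453123));
}

// noise2: gradient noise; ist = floor(st), fst = fract(st)
function noise2(ist, fst) {
  const corners = [[0, 0], [1, 0], [0, 1], [1, 1]];
  const p = corners.map(([ox, oy]) =>
    dot2(random2([ist[0] + ox, ist[1] + oy]), [fst[0] - ox, fst[1] - oy]));
  const fx = smoothstep(0, 1, fst[0]);
  const fy = smoothstep(0, 1, fst[1]);
  return mix(mix(p[0], p[1], fx), mix(p[2], p[3], fx), fy);
}

// sieve: keep only the bright points, with soft edges
const sieve = (res) => smoothstep(0.35, 0.5, res);
```

One property worth noting: at every lattice corner (fst = [0, 0]) the dot products vanish, so the noise is exactly 0 there, which is why gradient noise looks smooth and zero-centered.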

Figure 3 is only a grayscale image; we use this grayscale value to mix red and yellow into a color image.

```glsl
vec3 bgColor(){
    float y = gl_FragCoord.y / uResolution.y;
    vec3 c1 = vec3(0.96, 0.02, 0.16);
    vec3 c2 = vec3(0.96, 0.25, 0.21);
    return mix(c1, c2, y);
}

void main(){
    ...
    vec3 cRed = bgColor();
    vec3 cYello = vec3(0.96, 0.70, 0.26);
    gl_FragColor = vec4(mix(cRed, cYello, pct), 1.0);
}
```

Here the bgColor function returns the red. Since the red background itself has a slight vertical gradient, it mixes two reds (with a blending coefficient tied to the pixel's y coordinate) to produce the gradient.

At this point, our results are a little different from the original intent:

  • In the original image, the fragments in the bottom half of the page are more transparent, and the further down the page the fragments become more transparent (blending into the red background).
  • In the original image, there are no fragments in the middle of the circle (the area occupied by the red pane).
  • In the original image, the distribution of fragments is not so uniform: small areas are often completely free of fragments, as if some larger-scale random variable were at work.

Starting from these three points, we built three masks and superimposed them in turn (as shown in Figures 5, 6, and 7), finally obtaining Figure 7. This result is superimposed onto Figure 3, meaning that points which passed the sieve in Figure 3 are also dimmed wherever Figure 7 is dark. The result of this step is then used to mix the colors, giving the final result in Figure 8.

```glsl
void main() {
    float pct = inFrag();
    pct = min(pct, yFactor());
    pct = min(pct, rFactor());
    pct = min(pct, mFactor());
    ...
}
```
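The superposition in main is just per-pixel min/max compositing, which is worth internalizing on its own (a tiny JS sketch of the idea, not the shader itself):

```javascript
// min() intersects masks: a fragment survives only as bright as its
// darkest mask allows. max() unions layers: a pixel lights up if any
// layer is lit. The shader combines pct with its factors the same way.
const intersect = (...masks) => masks.reduce((a, b) => Math.min(a, b));
const union = (...layers) => layers.reduce((a, b) => Math.max(a, b));
```

For example, a fragment with noise value 1.0 sitting where a transparency mask reads 0.4 ends up at brightness 0.4.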

Next, let's get the pieces moving. When generating the fragments, we add a time-dependent offset to the coordinates fed into the noise function:

```glsl
float inFrag(){
    vec2 st = gl_FragCoord.xy / uResolution.xx;
    st = st * 60.0;
    st.y += uTime * 2.0; // add a time-dependent offset
    float res = noise2(floor(st), fract(st));
    res = smoothstep(0.35, 0.5, res);
    return res;
}
```

Finally, for a better effect, I drew two layers of fragments falling at different speeds, creating a slight parallax effect.

```glsl
void main() {
    float pct = inFrag();
    pct = min(pct, yFactor());
    pct = min(pct, rFactor());
    pct = min(pct, mFactor());
    float pct2 = inFrag2();
    pct2 = min(pct2, yFactor());
    pct2 = min(pct2, rFactor());
    pct2 = min(pct2, mFactor());
    pct = max(pct, pct2);
    ...
}
```

In this way, the atmosphere-fragment effect is drawn directly by the shader without relying on any external resources.

Hand-drawn pattern

The original page has a circular pane, which is also drawn into the transparent image. You may not have noticed that in the rebuilt page this pane is drawn directly with a shader.

In fact, a pane of this complexity can also be categorized as a pattern and drawn directly with a shader. A pane is made up of lines, so its basic unit is the line. First, let's see how lines are drawn:

```glsl
// draw a line
float line(float e, float w, float d, float p){
    float e1 = e - w / 2.0;
    float e2 = e + w / 2.0;
    return smoothstep(e1 - d / 2.0, e1 + d / 2.0, p)
         * smoothstep(e2 + d / 2.0, e2 - d / 2.0, p);
}

// draw the pane
vec3 veins(){
    float r = uResolution.x * 0.4;
    vec2 center = vec2(uResolution.x / 2.0, uResolution.y - r - 5.0);
    vec2 st = gl_FragCoord.xy - center;
    st /= uResolution.x * 0.5;
    float p = line(0.0, 0.3, 0.2, st.x);
    return mix(veinsBgColor, veinsFgColor, p);
}

// main
void main(){
    vec3 res = veins();
    gl_FragColor = vec4(vec3(res), 1.0);
}
```

The main function calls veins, which in turn calls line to get a gray value and then mixes two colors with it. The result of the program above is shown below.

To explain the parameters: p is the current pixel's x or y value (x for a vertical line, y for a horizontal one); e is the coordinate the line is drawn at; w is the width of the line; and d is the width of the transition zone between line and background, used for smoothing.

In the code above, w is 0.3 and d is 0.2, which makes the line look thick. Later, we will fix these two values at 0.035 and 0.003.
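The behavior of line is easy to sanity-check off the GPU (a JS sketch of the same math; note the second smoothstep deliberately swaps its edges so the value ramps back down past the far edge of the band):

```javascript
const clamp = (x, a, b) => Math.min(Math.max(x, a), b);
const smoothstep = (e0, e1, x) => {
  const t = clamp((x - e0) / (e1 - e0), 0, 1);
  return t * t * (3 - 2 * t);
};

// line: 1.0 inside a band of width w centered at e, falling to 0.0
// across transition zones of width d around each edge
function line(e, w, d, p) {
  const e1 = e - w / 2;
  const e2 = e + w / 2;
  return smoothstep(e1 - d / 2, e1 + d / 2, p) *
         smoothstep(e2 + d / 2, e2 - d / 2, p);
}
```

At the center of the band the product is exactly 1; far outside it is exactly 0; within d of an edge it is somewhere in between, which is what antialiases the line.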

Since the pane pattern contains multiple lines, we call the line function several times and collect a gray value from each call. If any of the calls returns a gray value greater than zero for the current pixel, we consider that pixel part of the pattern. In other words, we take the largest of the gray values returned by the line calls as the final gray value used to compute the color. The code looks like this:

```glsl
float maxList(float list[20]){
    float res = list[0];
    for(int i = 0; i < 20; i++){
        if(list[i] > res){
            res = list[i];
        }
    }
    return res;
}

vec3 veins(){
    ...
    float p = 0.0;
    float pl[20];
    pl[0] = line( 0.29, 0.035, 0.003, st.x);
    pl[1] = line( 0.58, 0.035, 0.003, st.x);
    ...
    pl[7] = line(-0.58, 0.035, 0.003, st.y);
    p = maxList(pl);
    ...
}
```

We compute eight lines, and the result is shown in Figure 2 below.

Disassembling the pattern, we find that straight lines alone are not enough: we also need rays and rectangular frames. So we introduce the ray functions rayV/rayH and the rectangle function box.

```glsl
float rayV(vec2 ep, float w, float d, float dir, vec2 st){
    float pct = line(ep.x, w, d, st.x);
    if((st.y - ep.y) * dir < 0.0){
        pct = 0.0;
    }
    return pct;
}

float rayH(vec2 ep, float w, float d, float dir, vec2 st){
    float pct = line(ep.y, w, d, st.y);
    if((st.x - ep.x) * dir < 0.0){
        pct = 0.0;
    }
    return pct;
}

float box(vec2 center, float width, float height, float w, float d, vec2 st){
    float l1 = line(center.x, width + w, d, st.x);
    float l2 = line(center.y, height + w, d, st.y);
    float inBox = l1 * l2;
    float plist[20];
    plist[0] = line(center.x + width * 0.5, w, d, st.x);
    plist[1] = line(center.x - width * 0.5, w, d, st.x);
    plist[2] = line(center.y + height * 0.5, w, d, st.y);
    plist[3] = line(center.y - height * 0.5, w, d, st.y);
    float p = maxList(plist);
    p *= inBox;
    return p;
}
```
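box composes line calls the same way the pane does: an outline built from four thin lines, masked by a filled region so the lines stop at the corners. A JS sketch of the same logic (reusing the line helper; a plain 4-element array replaces GLSL's fixed float[20]):

```javascript
const clamp = (x, a, b) => Math.min(Math.max(x, a), b);
const smoothstep = (e0, e1, x) => {
  const t = clamp((x - e0) / (e1 - e0), 0, 1);
  return t * t * (3 - 2 * t);
};
function line(e, w, d, p) {
  const e1 = e - w / 2;
  const e2 = e + w / 2;
  return smoothstep(e1 - d / 2, e1 + d / 2, p) *
         smoothstep(e2 + d / 2, e2 - d / 2, p);
}

// box: outline of a width x height rectangle centered at `center`,
// drawn with lines of width w and smoothing d; st = [x, y]
function box(center, width, height, w, d, st) {
  // a filled rectangle slightly larger than the outline, used to
  // clip the four infinite lines down to the box's extent
  const inBox = line(center[0], width + w, d, st[0]) *
                line(center[1], height + w, d, st[1]);
  const edges = [
    line(center[0] + width / 2, w, d, st[0]),  // right edge
    line(center[0] - width / 2, w, d, st[0]),  // left edge
    line(center[1] + height / 2, w, d, st[1]), // top edge
    line(center[1] - height / 2, w, d, st[1]), // bottom edge
  ];
  return Math.max(...edges) * inBox;
}
```

A pixel on an edge scores 1, a pixel in the interior or outside scores 0; the inBox mask is what keeps the edge lines from extending past the corners.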

Then we add content to the pattern step by step, getting the effects shown in Figures 4 through 6; the final overlay gives Figure 7. The code looks like this (don't be put off by the number of floating-point literals: they are just fixed coordinates, only a few distinct values whose plus and minus signs combine to form the pattern):

```glsl
float p = 0.0;
float pl[20];
pl[0]  = line( 0.29, 0.035, 0.003, st.x);
pl[1]  = line( 0.58, 0.035, 0.003, st.x);
pl[2]  = line(-0.29, 0.035, 0.003, st.x);
pl[3]  = line(-0.58, 0.035, 0.003, st.x);
pl[4]  = line( 0.29, 0.035, 0.003, st.y);
pl[5]  = line( 0.58, 0.035, 0.003, st.y);
pl[6]  = line(-0.29, 0.035, 0.003, st.y);
pl[7]  = line(-0.58, 0.035, 0.003, st.y);
pl[8]  = rayV(vec2(0.0,  0.29), 0.035, 0.003,  1.0, st);
pl[9]  = rayV(vec2(0.0, -0.29), 0.035, 0.003, -1.0, st);
pl[10] = rayH(vec2( 0.29, 0.0), 0.035, 0.003,  1.0, st);
pl[11] = rayH(vec2(-0.29, 0.0), 0.035, 0.003, -1.0, st);
p = maxList(pl);

float pl2[20];
pl2[0]  = box(vec2( 0.0,   0.0 ), 0.39, 0.39, 0.035, 0.003, st);
pl2[1]  = box(vec2( 0.29,  0.29), 0.39, 0.39, 0.035, 0.003, st);
pl2[2]  = box(vec2(-0.29,  0.29), 0.39, 0.39, 0.035, 0.003, st);
pl2[3]  = box(vec2( 0.29, -0.29), 0.39, 0.39, 0.035, 0.003, st);
pl2[4]  = box(vec2(-0.29, -0.29), 0.39, 0.39, 0.035, 0.003, st);
pl2[5]  = box(vec2( 0.58,  0.0 ), 0.39, 0.39, 0.035, 0.003, st);
pl2[6]  = box(vec2(-0.58,  0.0 ), 0.39, 0.39, 0.035, 0.003, st);
pl2[7]  = box(vec2( 0.0,   0.58), 0.39, 0.39, 0.035, 0.003, st);
pl2[8]  = box(vec2( 0.0,  -0.58), 0.39, 0.39, 0.035, 0.003, st);
pl2[9]  = box(vec2( 0.58,  0.58), 0.39, 0.39, 0.035, 0.003, st);
pl2[10] = box(vec2(-0.58,  0.58), 0.39, 0.39, 0.035, 0.003, st);
pl2[11] = box(vec2( 0.58, -0.58), 0.39, 0.39, 0.035, 0.003, st);
pl2[12] = box(vec2(-0.58, -0.58), 0.39, 0.39, 0.035, 0.003, st);
p = max(p, maxList(pl2));
```

Once we have the pattern in Figure 7, we also need to add a shadow (compare it with the original) so that the later clipping has some sense of depth.

```glsl
float shadow(){
    float r = uResolution.x * 0.4;
    vec2 center = vec2(uResolution.x / 2.0, uResolution.y - r - 5.0);
    vec2 st = gl_FragCoord.xy - center;
    st /= uResolution.x * 0.5;
    return smoothstep(0.9, 0.3, st.y + 0.5 * st.x * st.x - 0.1);
}

vec3 veins(){
    ...
    return mix(veinsBgColor, veinsFgColor, p) * shadow();
}
```

Here, for convenience, a parabola (st.y + 0.5 * st.x * st.x - 0.1) opening downward, with its axis of symmetry on the y axis, is used to simulate the shadow over the circle. This gives us Figure 8.

Finally, the whole image is clipped twice to reproduce the red border and the transparent background of the original design, filling the clipped parts with red and with transparency. This yields Figures 9 and 10; Figure 10 is the final result.

```glsl
vec3 circle(vec3 veinsColor){
    float r = uResolution.x * 0.4;
    vec2 center = vec2(uResolution.x / 2.0, uResolution.y - r - 5.0);
    vec2 dxy = gl_FragCoord.xy - center;
    float dist = sqrt(dxy.x * dxy.x + dxy.y * dxy.y);
    float p = dist / r;
    p = smoothstep(0.95, 0.96, p);
    return mix(veinsColor, borderColor, p);
}

vec4 clip(vec3 color){
    float r = uResolution.x * 0.4;
    vec2 center = vec2(uResolution.x / 2.0, uResolution.y - r - 5.0);
    vec2 dxy = gl_FragCoord.xy - center;
    float dist = sqrt(dxy.x * dxy.x + dxy.y * dxy.y);
    float p = smoothstep(1.0, 1.02, dist / r);
    return vec4(color, 1.0 - p);
}

void main(){
    vec3 res = veins();
    res = circle(res);
    gl_FragColor = clip(res);
}
```
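The clipping is plain distance-field math and can be checked numerically (a JS sketch of the alpha that clip computes):

```javascript
const clamp = (x, a, b) => Math.min(Math.max(x, a), b);
const smoothstep = (e0, e1, x) => {
  const t = clamp((x - e0) / (e1 - e0), 0, 1);
  return t * t * (3 - 2 * t);
};

// clipAlpha: alpha for a pixel at distance `dist` from the circle
// center, for a circle of radius r; opaque inside, fading to fully
// transparent over the narrow ring [1.00r, 1.02r]
function clipAlpha(dist, r) {
  return 1 - smoothstep(1.0, 1.02, dist / r);
}
```

The 2% ring is what antialiases the circular edge; a hard cutoff at dist equal to r would produce jagged pixels.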

Conclusion

From the above three examples, we can see that using WebGL appropriately can refine and optimize a page: it reduces dependence on images, avoids large transparent layers, and handles global/background animation. Since WebGL gives developers the ability to color "pixel by pixel," shaders leave a lot of room for creativity. Used flexibly, shaders can help you make your pages smaller, shinier, and faster.

In fact, there are a few other gadgets in the rebuilt page done with shaders, such as the dynamic color gradient of the loading bar at the bottom and the highlight sweeping across the text "Meizu mobile phones wish you a happy New Year" in the middle. Since these points are small and the techniques simple, I won't cover them in detail.

(End)

Illustration: unsplash.com/photos/NFs6… by @Ferdinand Stohr