WebGL overview

WebGL is a technology that combines HTML5 and JavaScript to draw and render complex 3D graphics on web pages. It uses JavaScript to drive an OpenGL-based interface standard, displaying three-dimensional images on a two-dimensional screen.

WebGL and OpenGL

OpenGL itself is a specification, not an API; it unifies the graphics and imaging operations that the various graphics card manufacturers implement. WebGL's technical specification is derived from OpenGL ES. Since version 2.0, OpenGL has supported the programmable shader approach: we can write shader programs in a shading language, which means we can precisely control the position and color of each pixel. Under the OpenGL 2.0 specification, the GPU executes shader programs that generate the pixel data eventually displayed on the screen.

The structure of WebGL applications

Compared with traditional web pages, a browser that supports WebGL has access to the OpenGL/OpenGL ES standard underneath, and through that implementation WebGL supports the shading language GLSL ES. In actual development, GLSL ES usually lives inside JavaScript in the form of strings, so we can change the shader program by modifying the GLSL ES string in JavaScript.

Shaders

Shaders are the drawing mechanism WebGL relies on to render images. Shader code runs on the GPU, so it must be code the GPU can execute, and it must provide a pair of programs: a vertex shader and a fragment shader.

Vertex shader

The function of the vertex shader is to compute vertex positions. From the computed series of vertex positions, WebGL can rasterize primitives such as points, lines, and triangles. Objects displayed in WebGL are made up of a series of vertices, each carrying information such as position and color. By default, the colors between vertices are computed by linear interpolation, automatically forming a smooth gradient.

The vertex shader below receives the vertex's position and size via a_Position and a_PointSize respectively, and receives its color from the program via a_Color, passing it on to the fragment shader through v_Color. gl_Position and gl_PointSize are built-in shader variables that represent the position and size of the vertex, so this code sets the vertex's position and size and hands the incoming a_Color color on to the fragment shader.

In a shader, variables whose names start with gl_ are built-in variables. In addition, webgl_ and _webgl are reserved prefixes: custom variable names cannot start with webgl_ or _webgl. A variable declaration takes the form &lt;storage qualifier&gt; &lt;data type&gt; &lt;variable name&gt;. In attribute vec4 a_Position, for example, attribute is the storage qualifier.

```javascript
const vs_source = `
    attribute vec4 a_Position;
    attribute float a_PointSize;
    attribute vec4 a_Color;
    varying vec4 v_Color;
    void main() {
        gl_Position = a_Position;
        gl_PointSize = a_PointSize;
        v_Color = a_Color;
    }
`;
```

Fragment shader

The function of the fragment shader is to compute the color value of each fragment currently being drawn, controlling color and texture fragment by fragment. A fragment carries information such as color, depth, and texture; it holds much more information than a pixel, though visually both correspond to pixels on screen. As for primitives, they are the 7 basic shapes that WebGL can draw directly:

  • Isolated points: gl.POINTS
  • Isolated line segments: gl.LINES
  • Connected line segments: gl.LINE_STRIP
  • Closed line loop: gl.LINE_LOOP
  • Isolated triangles: gl.TRIANGLES
  • Triangle strip: gl.TRIANGLE_STRIP
  • Triangle fan: gl.TRIANGLE_FAN
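For intuition, the number of primitives each mode assembles from a given vertex count can be sketched in plain JavaScript. This is not part of the WebGL API; the `primitiveCount` helper below is made up for illustration:

```javascript
// How many primitives gl.drawArrays assembles from `count` vertices,
// for each of the 7 primitive modes (hypothetical helper, for intuition only).
function primitiveCount(mode, count) {
  switch (mode) {
    case 'POINTS':         return count;                  // one point per vertex
    case 'LINES':          return Math.floor(count / 2);  // isolated segments
    case 'LINE_STRIP':     return Math.max(count - 1, 0); // connected segments
    case 'LINE_LOOP':      return count >= 2 ? count : 0; // strip plus the closing segment
    case 'TRIANGLES':      return Math.floor(count / 3);  // isolated triangles
    case 'TRIANGLE_STRIP': return Math.max(count - 2, 0); // each new vertex adds a triangle
    case 'TRIANGLE_FAN':   return Math.max(count - 2, 0); // fan around the first vertex
    default: throw new Error('unknown mode: ' + mode);
  }
}

console.log(primitiveCount('TRIANGLE_STRIP', 4)); // 2
```

Note the contrast between gl.TRIANGLES, which consumes three vertices per triangle, and gl.TRIANGLE_STRIP / gl.TRIANGLE_FAN, which reuse earlier vertices so each additional vertex yields a new triangle.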

Below is sample code for a fragment shader. It first sets float to medium precision, then receives the color from the vertex shader via v_Color and assigns it to the built-in variable gl_FragColor, which determines the fragment's color.

For details on precision, see WebGL Shader Precision Settings.

```javascript
const fs_source = `
    precision mediump float;
    varying vec4 v_Color;
    void main() {
        gl_FragColor = v_Color;
    }
`;
```

Storage qualifier

In the example above, in addition to specifying each variable's type, we used storage qualifiers when declaring variables. In GLSL, you choose the storage qualifier according to the purpose of the variable. Three storage qualifiers are commonly used:

The attribute qualifier

attribute can only be used in vertex shaders. It stores the per-vertex input to the vertex shader, such as vertex position coordinates, texture coordinates, and colors.

Typically we use buffers, which hold the data the program sends to the GPU; attributes fetch the required data from a buffer and feed it to the vertex shader. The program specifies the rules for reading the buffer each time the vertex shader runs.

Setting vertex information with a buffer

Create the buffer object

```javascript
const vertexBuffer = gl.createBuffer();
```

Write data to the buffer

```javascript
// Buffer data: for each vertex, (x, y, size, r, g, b, a)
const vertices = new Float32Array([
    -0.5,  0.5, 10.0, 1.0, 0.0, 0.0, 1.0,
    -0.5, -0.5, 20.0, 0.0, 1.0, 0.0, 1.0,
     0.5,  0.5, 30.0, 0.0, 0.0, 1.0, 1.0,
]);
// Bind the buffer object
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
// Fill the buffer object with the data
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
```

Set the buffer read rules and enable the buffer attributes

```javascript
const SIZE = vertices.BYTES_PER_ELEMENT;
// Read the vertex buffer: pass 2 floats per vertex to a_Position, stride 7 floats, offset 0
gl.vertexAttribPointer(a_Position, 2, gl.FLOAT, false, SIZE * 7, 0);
// Pass 1 float per vertex to a_PointSize, offset 2 floats
gl.vertexAttribPointer(a_PointSize, 1, gl.FLOAT, false, SIZE * 7, SIZE * 2);
// Pass 4 floats per vertex to a_Color, offset 3 floats
gl.vertexAttribPointer(a_Color, 4, gl.FLOAT, false, SIZE * 7, SIZE * 3);
// Enable the buffered attribute for a_Position, and likewise for the others
gl.enableVertexAttribArray(a_Position);
gl.enableVertexAttribArray(a_PointSize);
gl.enableVertexAttribArray(a_Color);
```
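The stride and offset arguments of gl.vertexAttribPointer are in bytes. As a sanity check, the byte layout of the interleaved (x, y, size, r, g, b, a) format can be computed directly; this is plain arithmetic, not a WebGL call:

```javascript
// Byte layout of one interleaved vertex: 2 position floats,
// 1 point-size float, 4 color floats = 7 floats per vertex.
const FLOAT_BYTES = Float32Array.BYTES_PER_ELEMENT; // 4 bytes per float
const stride = FLOAT_BYTES * 7;          // bytes between consecutive vertices
const positionOffset = 0;                // a_Position starts at float 0
const pointSizeOffset = FLOAT_BYTES * 2; // a_PointSize starts at float 2
const colorOffset = FLOAT_BYTES * 3;     // a_Color starts at float 3
console.log(stride, positionOffset, pointSizeOffset, colorOffset); // 28 0 8 12
```

Getting these byte counts wrong is a common source of silently corrupted attribute data, so it is worth deriving them from BYTES_PER_ELEMENT rather than hard-coding them.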

The uniform qualifier

uniform variables are available to both vertex shaders and fragment shaders, and store values that remain constant during a draw, such as a shared color. The vertex shader and the fragment shader share the uniform variable namespace: if you declare a uniform variable of the same name in both shaders, it must have the same type and precision in each.

Varying variables

varying generally appears in both the vertex shader and the fragment shader; its role is to transfer data from the vertex shader to the fragment shader. After the primitives are assembled, WebGL rasterizes them, interpolating the varying values across each primitive.

A simple example: colored triangles

In this code, the triangle's three points are placed at the upper left, lower left, and upper right of the canvas, colored red, green, and blue respectively. You can run the sample code to see the effect.

```javascript
// Vertex shader
const vs_source = `
    attribute vec4 a_Position;
    attribute float a_PointSize;
    attribute vec4 a_Color;
    varying vec4 v_Color;
    void main() {
        gl_Position = a_Position;
        gl_PointSize = a_PointSize;
        v_Color = a_Color;
    }
`;
// Fragment shader
const fs_source = `
    precision mediump float;
    varying vec4 v_Color;
    void main() {
        gl_FragColor = v_Color;
    }
`;

const canvas = document.getElementById('app');
const gl = canvas.getContext('webgl');

function initShader() {
    // Create and compile the vertex shader
    const vs_shader = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vs_shader, vs_source);
    gl.compileShader(vs_shader);
    if (!gl.getShaderParameter(vs_shader, gl.COMPILE_STATUS)) {
        const error = gl.getShaderInfoLog(vs_shader);
        console.log('Failed to compile vs_shader: ' + error);
        gl.deleteShader(vs_shader);
        return;
    }
    // Create and compile the fragment shader
    const fs_shader = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(fs_shader, fs_source);
    gl.compileShader(fs_shader);
    if (!gl.getShaderParameter(fs_shader, gl.COMPILE_STATUS)) {
        const error = gl.getShaderInfoLog(fs_shader);
        console.log('Failed to compile fs_shader: ' + error);
        gl.deleteShader(fs_shader);
        return;
    }
    // Create and link the program
    const program = gl.createProgram();
    gl.attachShader(program, vs_shader);
    gl.attachShader(program, fs_shader);
    gl.linkProgram(program);
    if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        const error = gl.getProgramInfoLog(program);
        console.log('Failed to link the program object: ' + error);
        gl.deleteProgram(program);
        gl.deleteShader(fs_shader);
        gl.deleteShader(vs_shader);
        return;
    }
    gl.useProgram(program);
    gl.program = program;

    // Get the shader variable locations and assign values
    const a_Position = gl.getAttribLocation(gl.program, 'a_Position');
    if (a_Position < 0) {
        console.log('Failed to get the storage location of a_Position');
        return;
    }
    const a_Color = gl.getAttribLocation(gl.program, 'a_Color');
    if (a_Color < 0) {
        console.log('Failed to get the storage location of a_Color');
        return;
    }
    // Pass several values at once with a buffer: (x, y, r, g, b, a) per vertex
    const vertices = new Float32Array([
        -0.5,  0.5, 1.0, 0.0, 0.0, 1.0,
        -0.5, -0.5, 0.0, 1.0, 0.0, 1.0,
         0.5,  0.5, 0.0, 0.0, 1.0, 1.0,
    ]);
    const SIZE = vertices.BYTES_PER_ELEMENT;
    const vertexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
    gl.vertexAttribPointer(a_Position, 2, gl.FLOAT, false, SIZE * 6, 0);
    gl.vertexAttribPointer(a_Color, 4, gl.FLOAT, false, SIZE * 6, SIZE * 2);
    gl.enableVertexAttribArray(a_Position);
    gl.enableVertexAttribArray(a_Color);
}

initShader();
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, 3);
```

We only set the colors of the three vertices, yet the rendered shape is a multicolored triangle. The reason is that v_Color is interpolated along the way: between any pair of vertices, the three RGB components change linearly toward the target values. During rasterization, this interpolation assigns a color to every fragment, producing the color gradient.
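The interpolation the rasterizer performs on v_Color can be imitated in plain JavaScript. A minimal sketch (the `lerpColor` helper is made up for illustration): a fragment halfway between the red vertex and the green vertex receives the average of the two colors.

```javascript
// Linearly interpolate two RGBA colors, as the rasterizer does for varyings.
// t = 0 gives colorA, t = 1 gives colorB.
function lerpColor(colorA, colorB, t) {
  return colorA.map((a, i) => a * (1 - t) + colorB[i] * t);
}

const red = [1.0, 0.0, 0.0, 1.0];
const green = [0.0, 1.0, 0.0, 1.0];
// Halfway along the edge between the red and green vertices:
console.log(lerpColor(red, green, 0.5)); // [0.5, 0.5, 0, 1]
```

Inside a triangle, WebGL interpolates over three vertices at once (barycentric weights), but each edge behaves exactly like this two-color blend.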

WebGL rendering process

In the example above, the triangle has three vertices, so the vertex shader executes three times, storing the three positions in the primitive assembly area. Once the vertex shader has run, the coordinates of the three points are in the assembly area and primitive assembly begins; because we specified gl.TRIANGLES, they are assembled into a triangle. Next comes rasterization, which converts the assembled primitive into a set of fragments (pixel candidates) and interpolates the varying variables. After rasterization, the fragment shader is invoked: if rasterization produced 10 fragments, the fragment shader executes 10 times, processing one fragment per call. It computes each fragment's color and writes it into the color buffer.

Image rendering

Drawing from vertex information works well for simple shapes, but less well for complex graphics. Usually we need to render images, which requires texture mapping; this is why in WebGL we often describe images as textures.

Texture mapping

The principle of texture mapping is simple: map an image onto a geometric surface. Since the only shapes WebGL can draw directly are points, lines, and triangles, in texture mapping the image is mapped onto a rectangle composed of two triangles.

Specific steps of texture mapping:

  1. Prepare the texture image to be mapped; its width and height should each be a power of 2

If the image resolution is not a power of 2 (e.g. 104 × 104), the image cannot be rendered, and the console hints that the resolution may not be a power of 2

If the resolution is a power of 2 (e.g. 256 × 256), it displays normally
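It can be worth checking the image size up front before creating the texture. A minimal sketch of such a check (`isPowerOfTwo` is a made-up helper, not a WebGL API):

```javascript
// A positive integer is a power of two iff it has exactly one bit set,
// i.e. n & (n - 1) clears the lowest set bit and leaves zero.
function isPowerOfTwo(n) {
  return Number.isInteger(n) && n > 0 && (n & (n - 1)) === 0;
}

console.log(isPowerOfTwo(256)); // true  -> a 256 * 256 image renders normally
console.log(isPowerOfTwo(104)); // false -> a 104 * 104 image cannot be rendered
```

(Strictly, WebGL 1 can still sample non-power-of-two textures if mipmapping is disabled and wrapping is set to CLAMP_TO_EDGE, but power-of-two images avoid the restriction entirely.)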

  2. Configure the mapping for the geometry so that each vertex coordinate corresponds to a texture coordinate. Note that with gl.TRIANGLE_STRIP the assembly order depends on the parity of the newest vertex's index (v0 is the first vertex, so vertex vn has index n):
  • If the index n of the new vertex is odd, the triangle's vertex order is T = [n-1, n-2, n];
  • If the index n of the new vertex is even, the vertex order is T = [n-2, n-1, n].

The triangles below are therefore formed as T1 = [v0, v1, v2] and T2 = [v2, v1, v3].

```javascript
// Vertex coordinates and texture coordinates, drawing order:
// upper left -> lower left -> upper right -> lower right
const vertices = new Float32Array([
    -0.5,  0.5, 0.0, 1.0, // v0
    -0.5, -0.5, 0.0, 0.0, // v1
     0.5,  0.5, 1.0, 1.0, // v2
     0.5, -0.5, 1.0, 0.0, // v3
]);
```
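The parity rule above can be sketched as a small helper that lists the triangles gl.TRIANGLE_STRIP assembles from a run of vertices (`stripTriangles` is made up for illustration):

```javascript
// List the vertex-index triples gl.TRIANGLE_STRIP forms from vertexCount vertices.
// For each new vertex n (n >= 2): odd n -> [n-1, n-2, n], even n -> [n-2, n-1, n].
function stripTriangles(vertexCount) {
  const triangles = [];
  for (let n = 2; n < vertexCount; n++) {
    triangles.push(n % 2 === 1 ? [n - 1, n - 2, n] : [n - 2, n - 1, n]);
  }
  return triangles;
}

// Four vertices v0..v3 give T1 = [v0, v1, v2] and T2 = [v2, v1, v3]:
console.log(stripTriangles(4)); // [[0, 1, 2], [2, 1, 3]]
```

The alternating order keeps every triangle in the strip facing the same way (consistent winding), which matters once face culling is enabled.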

If the texture coordinates of the first and second vertices are interchanged, the drawing comes out flipped:

```javascript
// Vertex coordinates and texture coordinates, drawing order:
// lower left -> upper left -> upper right -> lower right
const vertices = new Float32Array([
    -0.5,  0.5, 0.0, 0.0, // v0
    -0.5, -0.5, 0.0, 1.0, // v1
     0.5,  0.5, 1.0, 1.0, // v2
     0.5, -0.5, 1.0, 0.0, // v3
]);
```

  3. Load the texture image and configure it for use in WebGL, as follows

    1. Create a texture object
    2. Flip the image's Y axis. The Y axis of the image coordinate system points down, while the t axis of the texture coordinate system (the st coordinate system) points up; in our example the vertex coordinates and texture coordinates run in opposite directions, so the Y axis must be flipped when storing the texture's pixels

Here is the result if the Y axis is not flipped:
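The flip itself is just t' = 1 - t: a row measured from the top of the image maps to a texture coordinate measured from the bottom. A sketch of the mapping (`imageYToTexT` is a made-up name for illustration):

```javascript
// Convert an image-space y (0 at the top, in pixels) to a texture-space
// t coordinate (0.0 at the bottom, 1.0 at the top).
function imageYToTexT(y, imageHeight) {
  return 1 - y / imageHeight;
}

console.log(imageYToTexT(0, 256));   // 1 -> the top row of the image maps to t = 1.0
console.log(imageYToTexT(256, 256)); // 0 -> the bottom of the image maps to t = 0.0
```

This is exactly the correction that gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, 1) applies for you when the image is uploaded.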

    3. Activate the specified texture unit

    4. Bind the texture object to the texture unit

    5. Set the texture filters, which determine how the texture color is fetched when the drawing range (the rectangular area formed by the vertex coordinates) does not match the texture size. Note that setting the wrong filter can cause rendering to fail

    1. gl.TEXTURE_MAG_FILTER: magnification method; the texture is stretched to fill the drawing range
    2. gl.TEXTURE_MIN_FILTER: minification method; the texture is shrunk to fit the drawing range
    3. gl.TEXTURE_WRAP_S: horizontal fill; the texture size is unchanged and the area to its left and right is filled
    4. gl.TEXTURE_WRAP_T: vertical fill; the texture size is unchanged and the area above and below it is filled

If the canvas is 400 × 400, the vertex coordinate range -0.5 to 0.5 corresponds to 200 × 200, and the image is 256 × 256, then the texture is larger than the drawing range. If only gl.TEXTURE_MAG_FILTER is set, fetching the texture color fails and the console outputs the familiar error: the image cannot be rendered because appropriate texture filtering is not set.

    6. Bind the image to the texture object
    7. Pass the texture unit to the sampler in the shader
    8. In the fragment shader, extract the corresponding texel from the texture and assign its color to the fragment

The code snippets associated with the operations above are shown below.

```javascript
// Vertex shader
const vs_source = `
    attribute vec4 a_Position;
    attribute vec2 a_TexCoord;
    varying vec2 v_TexCoord;
    void main() {
        gl_Position = a_Position;
        v_TexCoord = a_TexCoord;
    }
`;
// Fragment shader
const fs_source = `
    precision mediump float;
    uniform sampler2D u_Sampler;
    varying vec2 v_TexCoord;
    void main() {
        gl_FragColor = texture2D(u_Sampler, v_TexCoord);
    }
`;

// ...

function initShader() {
    // ...
    // Vertex coordinates and texture coordinates
    const vertices = new Float32Array([
        -0.5,  0.5, 0.0, 1.0,
        -0.5, -0.5, 0.0, 0.0,
         0.5,  0.5, 1.0, 1.0,
         0.5, -0.5, 1.0, 0.0,
    ]);
    // ...
}

const texture = gl.createTexture();
const u_Sampler = gl.getUniformLocation(gl.program, 'u_Sampler');
const img = new Image();
img.onload = function () {
    // Flip the image's Y axis when storing the texture's pixels in WebGL
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, 1);
    // Activate texture unit 0
    gl.activeTexture(gl.TEXTURE0);
    // Bind the texture object to the texture unit
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // Set the filtering method; here only gl.TEXTURE_MIN_FILTER is set
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    // Bind the image to the texture object
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, img);
    // Pass texture unit 0 to the sampler
    gl.uniform1i(u_Sampler, 0);
    gl.clearColor(0.0, 1.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT);
    // Draw the rectangle as a strip of 4 vertices
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
};
img.src = 'power-of-2-image';
```

❤️ Thank you

That is all for this share. I hope you find it helpful.

Don’t forget to share, like and bookmark your favorite things.

Welcome to follow our official account for more good articles from the ELab team ~

We are the front-end team of ByteDance's education business, responsible for the front-end development of all ByteDance education products.

We focus on product quality, development efficiency, creativity, and cutting-edge technology, accumulating and sharing professional knowledge and case studies to contribute value to the industry, including but not limited to performance monitoring, component libraries, multi-platform technology, Serverless, visual page building, audio and video, artificial intelligence, and product design and marketing.

Interested readers are welcome to say hello in the comments section 🤪