This article first appeared on the ZooTeam (Zheng Cai Yun) front-end blog: WebGL Concepts and Basic Introduction.
What is WebGL
Baidu Encyclopedia describes WebGL as a 3D drawing protocol, while Wikipedia describes it as a JavaScript API. Since WebGL is designed to let us build interactive 2D and 3D web effects in any compatible browser without plug-ins, we can think of it as a drawing technology for developing 3D web pages, with a JavaScript API underneath.
The history of WebGL
The development of WebGL dates back to 2006. WebGL originated as a Canvas 3D experiment by Mozilla employee Vladimir Vukićević, who first demonstrated a Canvas 3D prototype in 2006. The technology was implemented in Firefox and Opera browsers by late 2007. In early 2009 the Khronos Group consortium started the WebGL working group, whose initial members included Apple, Google, Mozilla, and Opera. The WebGL 1.0 specification was released in March 2011. Development of the WebGL 2 specification began in 2013 and was completed in January 2017; WebGL 2 was first supported in Firefox 51, Chrome 56 and Opera 43.
Basic concepts in WebGL
WebGL runs on the computer's GPU, so it needs code that can execute on the GPU. Such code comes as a pair of functions: one is called a vertex shader and the other a fragment shader, and both are written in the GLSL language. A vertex shader and a fragment shader linked together form what is called a program.
- Vertex shader
The vertex shader computes vertex positions, that is, it provides the vertex coordinates in clip space.
- Fragment shader
The fragment shader computes color values. Roughly speaking, we can think of each fragment as a pixel on the page that the shader colors.
We have now introduced vertex shaders and fragment shaders; both need data to operate on. Next, let's look at the four ways a shader can receive data:
- Attributes and buffers
A buffer is a sequence of binary data sent to the GPU. Typically a buffer holds positions, normals, texture coordinates, vertex colors, and so on, though you can store whatever data you like in it. An attribute specifies how to pull the needed data out of a buffer and feed it to the vertex shader.
- Uniforms (global variables)
Uniforms are assigned before the shader runs and stay valid throughout the run: within a single draw call, a uniform passes the same value to the shader every time it executes.
- Textures
A texture is a data sequence that the shader can read from at random while it runs. Image data is what is most commonly stored in a texture, but a texture can hold data other than colors as well.
- Varyings
A varying is the mechanism by which a vertex shader passes values to the fragment shader.
Summary
WebGL only cares about two things: clip-space coordinates and color values. To use WebGL, you only need to provide it with those two things, which is why we supply two shaders: a vertex shader that provides the clip-space coordinates and a fragment shader that provides the color values.
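Clip space always spans [-1, 1] on each axis regardless of canvas size, so positions given in pixels have to be mapped into it. A small illustrative sketch (the function name and mapping convention are ours, not part of the WebGL API):

```javascript
// Convert a canvas pixel coordinate to WebGL clip space.
// Clip space spans [-1, 1] on both axes; its y axis points up,
// while canvas pixel coordinates have y pointing down, so y is flipped.
function pixelToClipSpace(x, y, width, height) {
  return {
    x: (x / width) * 2 - 1,
    y: -((y / height) * 2 - 1)
  };
}
```

On a 400×200 canvas, for example, the center (200, 100) maps to the clip-space origin and the top-left corner (0, 0) maps to (-1, 1).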
How WebGL works
After covering some of the basic concepts of WebGL, let's look at what WebGL actually does on the GPU. As described above, WebGL's work on the GPU falls into two parts: the work done by the vertex shader (converting vertices to clip-space coordinates) and the work done by the fragment shader (coloring pixels based on the vertex shader's results). To draw a triangle, the GPU first runs the vertex shader three times, once per vertex, computing the clip-space position of each of the triangle's three vertices and handing each back through the variable gl_Position. The triangle is then rasterized, which determines the pixels it covers, and the fragment shader runs for each of those pixels, writing its color through the variable gl_FragColor. Once that is done, every pixel needed to display the triangle has been produced.
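The rasterization step can be illustrated in plain JavaScript: the core question — which pixels does the triangle cover — reduces to a point-in-triangle test built from edge functions. This is a toy sketch of the idea, not how GPU hardware is actually implemented:

```javascript
// Edge function: signed area-like quantity telling which side of edge a->b
// the point p lies on. a, b, p are {x, y} objects.
function edge(a, b, p) {
  return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A point is covered by the triangle when all three edge functions
// agree in sign (checking both signs handles either vertex winding).
function insideTriangle(a, b, c, p) {
  const w0 = edge(a, b, p);
  const w1 = edge(b, c, p);
  const w2 = edge(c, a, p);
  return (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
         (w0 <= 0 && w1 <= 0 && w2 <= 0);
}
```

A rasterizer effectively runs such a test for every candidate pixel inside the triangle's bounding box, then hands the covered pixels to the fragment shader.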
Drawing a triangle with the native WebGL API
Now that we've covered the history of WebGL, its basic concepts, and how it works, it's time to put that knowledge into practice, so let's look at how to draw a simple triangle on a web page using WebGL. As a 3D drawing technology, WebGL depends on the HTML5 canvas element, so we need a series of preparations before drawing can start:
- First, we need to create a canvas element to serve as the drawing surface for the triangle, and check that the browser supports WebGL on it.
```javascript
function webglInit () {
  const canvasEl = document.createElement('canvas'); // create the canvas element
  canvasEl.width = document.body.clientWidth;   // set the canvas width
  canvasEl.height = document.body.clientHeight; // set the canvas height
  document.body.append(canvasEl); // add the canvas to the page under the body element
  // Next, check the browser's WebGL support; if WebGL is unavailable, stop here
  if (!canvasEl.getContext("webgl") && !canvasEl.getContext("experimental-webgl")) {
    alert("Your Browser Doesn't Support WebGL");
    return;
  }
  // If the browser supports WebGL, get the WebGL context object and assign it to a variable
  const context = canvasEl.getContext("webgl") || canvasEl.getContext("experimental-webgl");
  /* context.viewport(x, y, width, height)
     x: horizontal coordinate of the lower-left corner of the viewport. Default: 0
     y: vertical coordinate of the lower-left corner of the viewport. Default: 0
     width: viewport width. Default: canvas width
     height: viewport height. Default: canvas height
     When a WebGL context is first created the viewport matches the canvas size,
     so this line can be omitted here; it is needed only after resizing the canvas. */
  context.viewport(0, 0, context.canvas.width, context.canvas.height);
  return context;
}
```
- With the canvas ready, the next step is drawing the triangle itself. As with any drawing, we need to prepare the vertex shader that positions the triangle, and the fragment shader that supplies its fill color.
```javascript
const gl = webglInit();
// gl.createShader(type): type is an enum, either gl.VERTEX_SHADER or gl.FRAGMENT_SHADER
const vShader = gl.createShader(gl.VERTEX_SHADER);
// gl.shaderSource(shader, source):
//   shader - the WebGLShader object whose code is being set
//   source - a string containing the GLSL source code
gl.shaderSource(vShader, `
  attribute vec4 v_position;
  void main() {
    gl_Position = v_position; // set the vertex position
  }
`); // GLSL source for the vertex shader
gl.compileShader(vShader); // compile the shader code

const fShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fShader, `
  precision mediump float;
  uniform vec4 f_color;
  void main() {
    gl_FragColor = f_color; // set the fragment color
  }
`); // GLSL source for the fragment shader
gl.compileShader(fShader); // compile the shader code
```
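The code above never checks whether compilation succeeded, so a GLSL typo would silently produce a blank canvas. A small defensive helper might look like this (the helper name is our own; the gl calls used are standard WebGL API):

```javascript
// Compile a shader of the given type; throw with the GLSL error log on failure.
// gl is a WebGLRenderingContext; type is gl.VERTEX_SHADER or gl.FRAGMENT_SHADER.
function compileShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    const log = gl.getShaderInfoLog(shader); // human-readable compiler output
    gl.deleteShader(shader);                 // free the failed shader object
    throw new Error('Shader compile failed: ' + log);
  }
  return shader;
}
```

With it, the two shaders above become `compileShader(gl, gl.VERTEX_SHADER, vertexSrc)` and `compileShader(gl, gl.FRAGMENT_SHADER, fragmentSrc)`, and any GLSL error surfaces immediately in the console.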
- With the vertex shader and fragment shader configured, we need to create a program that links the two together and completes the triangle drawing.
```javascript
// Create a program to connect the vertex shader and the fragment shader
const program = gl.createProgram();
gl.attachShader(program, vShader); // attach the vertex shader
gl.attachShader(program, fShader); // attach the fragment shader
gl.linkProgram(program);           // link the shaders in the program
gl.useProgram(program);            // tell WebGL to render with this program

const color = gl.getUniformLocation(program, 'f_color'); // get the location of f_color
gl.uniform4f(color, 0.93, 0, 0.56, 1);                   // set its value

const position = gl.getAttribLocation(program, 'v_position'); // get the location of v_position
const pBuffer = gl.createBuffer();
// Create a vertex buffer object to hold the triangle's vertex data
gl.bindBuffer(gl.ARRAY_BUFFER, pBuffer);
// Bind the vertex buffer object to gl.ARRAY_BUFFER;
// subsequent operations on gl.ARRAY_BUFFER act on this buffer
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([
    0, 0.5,
    0.5, 0,
    -0.5, -0.5
  ]), // the three vertices of the triangle;
      // a Float32Array is used so the data can be sent without conversion
  gl.STATIC_DRAW // hints that the buffer contents will not change often
); // upload the vertex data into the newly created buffer

gl.vertexAttribPointer( // tell WebGL how to read data out of the buffer
  position, // index of the vertex attribute
  2,        // number of components: must be 1, 2, 3 or 4; we only provide x and y
  gl.FLOAT, // data type of each element
  false,    // whether to normalize to a given range; has no effect on FLOAT data
  0,        // stride: byte length of one row; 0 means tightly packed,
            // letting WebGL determine the step size
  0         // offset: byte offset into the buffer;
            // must be a multiple of the type's byte size
);
gl.enableVertexAttribArray(position);
// enable the attribute so the vertex shader can access the buffer data

gl.clearColor(0, 1, 1, 1);     // set the color used when clearing the color buffer
gl.clear(gl.COLOR_BUFFER_BIT); // clear the color buffer
// Syntax: gl.drawArrays(mode, first, count)
//   first - which vertex to start drawing from; count - how many vertices to draw
gl.drawArrays(gl.TRIANGLES, 0, 3);
```
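The stride and offset arguments to vertexAttribPointer are byte counts, which matters once a buffer interleaves several attributes. As a sketch — the per-vertex [x, y, r, g, b] layout below is a hypothetical example, not part of the triangle above:

```javascript
// Compute byte offsets and stride for a hypothetical interleaved float buffer.
const FLOAT_BYTES = Float32Array.BYTES_PER_ELEMENT; // 4 bytes per float

function interleavedLayout(componentCounts) {
  // componentCounts, e.g. { position: 2, color: 3 } for [x, y, r, g, b]
  let offset = 0;
  const layout = {};
  for (const [name, count] of Object.entries(componentCounts)) {
    layout[name] = { size: count, offset }; // offset: where this attribute starts
    offset += count * FLOAT_BYTES;
  }
  // stride is the total byte length of one vertex, shared by all attributes
  for (const name of Object.keys(layout)) layout[name].stride = offset;
  return layout;
}
```

For `{ position: 2, color: 3 }` this yields a stride of 20 bytes and a color offset of 8 — the values one would pass as the last two arguments of gl.vertexAttribPointer for the color attribute.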
Running the above code together with an HTML file, we can see the triangle shown in the figure, and the triangle adapts its size to the browser window. As you can see, we have already written a long stretch of JavaScript just to draw one simple triangle; the cost of developing a fully dynamic, interactive 3D web page against the native WebGL API would be prohibitive.
Shortcomings of WebGL native API development
The triangle example above illustrates the problems of developing interactive 3D pages against the native WebGL API. While the native API is functionally sufficient for any development need, it is very expensive to learn and to develop with, which can be daunting for beginners: we must write vertex shaders to compute vertex positions, which demands familiarity with matrix math and with the spatial geometry of 3D objects, and designing a scene's lighting, textures and so on also requires a considered sense of color. So, to make things easier for beginners, I'll introduce some common WebGL development frameworks.
Several WebGL development frameworks
- Three.js
- As a comprehensive WebGL library, Three.js has a wide range of applications. However, it lacks comprehensive, detailed official documentation, which is not especially friendly to newcomers.
- Cesium.js
- Cesium.js is a WebGL library dedicated to 3D map development. With a comprehensive 3D map API, it is a good choice for developers who need to build 3D maps, but its coverage of other application scenarios is not as complete.
- Babylon.js
- Babylon.js is one of the most widely used WebGL libraries in the world; you can explore it yourself, and I won't go into detail here.
In short: Three.js is a comprehensive WebGL library and a 3D engine that runs in the browser, capable of creating all kinds of 3D scenes. If you need to build 3D map pages, Cesium.js, a WebGL library dedicated to map development, is the one to use. Babylon.js is among the most popular WebGL libraries worldwide.
Drawing a rotating cube with Three.js
- The first step of drawing a rotating cube with Three.js is the same as with native WebGL: prepare the environment that Three.js needs to run.
```javascript
// Create a renderer variable to store the renderer object
var renderer;
// initThree initializes the environment Three.js needs to run
function initThree() {
  // As with raw WebGL setup, Three.js also needs the canvas size first
  width = document.getElementById('canvas-frame').clientWidth;   // width of the container
  height = document.getElementById('canvas-frame').clientHeight; // height of the container
  // Create a WebGL renderer and assign it to the renderer variable
  renderer = new THREE.WebGLRenderer({ antialias: true });
  // Set the canvas size
  renderer.setSize(width, height);
  // Mount the canvas element onto the page
  document.getElementById('canvas-frame').appendChild(renderer.domElement);
  // Set the clear color to white
  renderer.setClearColor(0xFFFFFF, 1.0);
}
```
- Next, unlike native WebGL with its vertex and fragment shaders, Three.js needs a camera. The three basic elements Three.js requires to draw a 3D page are a camera, a scene and an object; if light and shadow effects are needed, a fourth element, a light source, is added as well. The light source is optional, but the camera, scene and object must exist.
```javascript
// Create the camera variable to store the camera object
var camera;
// Initialize the Three.js camera. Several camera types are available depending on
// the need; here we draw a rotating cube, so a perspective camera is used, whereas
// an orthographic camera would suit scenes that need no perspective distortion
function initCamera() {
  /* PerspectiveCamera(fov: Number, aspect: Number, near: Number, far: Number)
     fov    - vertical field of view
     aspect - aspect ratio of the view
     near   - near clipping plane
     far    - far clipping plane */
  camera = new THREE.PerspectiveCamera(45, width / height, 1, 10000);
  camera.position.x = 0;  // camera x position in 3D space
  camera.position.y = 10; // camera y position in 3D space
  camera.position.z = 5;  // camera z position in 3D space
  camera.up.x = 0;
  camera.up.y = 0;
  camera.up.z = 1;
  camera.lookAt(new THREE.Vector3(0, 0, 0)); // set the point the camera looks at
}
```
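The fov parameter has a direct geometric meaning: a perspective camera with vertical field of view θ sees a slab of height 2·d·tan(θ/2) at distance d from the camera. A quick sketch of that relation (plain math, no Three.js required; the function name is ours):

```javascript
// Height of the view frustum's cross-section at distance d
// for a perspective camera with vertical field of view fovDegrees.
function visibleHeightAt(fovDegrees, d) {
  const fovRad = (fovDegrees * Math.PI) / 180; // convert to radians
  return 2 * d * Math.tan(fovRad / 2);
}
```

With the 45° fov above and the camera about 11.2 units from the origin, the visible height there is roughly 9.3 units, so the unit cube occupies only a modest part of the view.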
- With the camera set up in the previous step, let's now prepare the scene, the second element Three.js needs to draw 3D pages.
```javascript
// Create the scene variable to store the scene object
var scene;
// initScene creates a scene and assigns it to the scene variable
function initScene() {
  scene = new THREE.Scene();
}
```
- Now that camera and scene are ready, we need to create the object to be rendered, and add it to the scene once it is built.
```javascript
// Create a variable named cube to hold the cube mesh
var cube;
// initObject builds the object at the heart of the scene
function initObject() {
  // First create an instance of the geometry class
  var geometry = new THREE.BoxGeometry(1, 1, 1);
  // MeshBasicMaterial creates a simple material unaffected by scene lighting
  var material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
  // Mesh ties a geometry and a material together, somewhat like the
  // program that links shaders in raw WebGL
  cube = new THREE.Mesh(geometry, material);
  // Finally add the cube to the scene
  scene.add(cube);
}
```
- At this point we have completed the basic configuration Three.js needs to draw a 3D page. If a page needs light, shadow, or colored lighting effects, a light can also be added to the scene; since our rotating cube has no special lighting requirements, we go straight to the final step of rendering the scene.
```javascript
// render drives the browser's render loop
function render() {
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
// Finally, combine environment initialization, scene creation, camera creation,
// object creation and rendering to complete the rotating cube
function threeStart() {
  initThree();
  initCamera();
  initScene();
  initObject();
  render();
}
document.addEventListener('DOMContentLoaded', function () {
  threeStart();
});
```
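One design note on the increments above: requestAnimationFrame typically fires at the display's refresh rate, so rotating by a fixed 0.01 radians per frame ties the cube's speed to the frame rate (scaling by elapsed time is the frame-rate-independent alternative). Assuming a nominal 60 fps, a quick sketch of how long one revolution takes:

```javascript
// Frames needed for one full revolution at a fixed per-frame increment.
function framesPerTurn(radiansPerFrame) {
  return Math.ceil((2 * Math.PI) / radiansPerFrame);
}

const frames = framesPerTurn(0.01); // 629 frames per full turn
const seconds = frames / 60;        // a little over ten seconds at 60 fps
```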
- The Three.js rotating-cube code also needs to be paired with an HTML file to see the effect:
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
  <script type="text/javascript" src="../utils/three.js"></script>
  <style type="text/css">
    div#canvas-frame {
      border: none;
      cursor: pointer;
      width: 100%;
      height: 600px;
      background-color: #EEEEEE;
    }
  </style>
</head>
<body>
  <div id="canvas-frame"></div>
</body>
</html>
```
After running the above code with the HTML file, we can see a green cube rotating in the web page
Summary
By comparison, we find that although we built a more complex scene with Three.js, the amount of code was smaller than when drawing a triangle with WebGL's native API. For beginners, then, developing 3D pages directly against the native WebGL API is clearly inappropriate; instead we can use a WebGL wrapper library such as Three.js. Compared with native development, these third-party libraries greatly reduce development cost and help us build better page effects. That is not to say the native API is bad: understanding native WebGL development lets us implement more free-form functionality in 3D pages, and since Three.js's own documentation is not especially complete, a thorough grasp of the native WebGL API is also needed to use it smoothly.
Conclusion
WebGL has been around for quite a while, but despite its ability to create striking 3D pages, it has never really taken off; today it is mostly used for 3D web games. A large part of the reason is network speed: in today's fast-paced world, people have little patience for slow-loading pages, and a WebGL 3D page that takes three or four seconds to open is unfriendly to users. Still, with the rollout of 5G and the rapid development of network communication technology, WebGL may yet see a new opportunity.
Recommended reading
The most familiar stranger RC-form
Vite features and partial source parsing
How do I use Git at work
How to build a suitable build and deployment platform for your team
Open source
- ZooTeam front-end weekly
Open source address: www.zoo.team/openweekly/ (there is a WeChat group link on the weekly's official homepage)
Recruiting
ZooTeam, a young and creative team, belongs to the product R&D department of Zheng Cai Yun, based in picturesque Hangzhou. The team now has more than 40 front-end partners with an average age of 27, and nearly 30% are full-stack engineers. Its members include veterans from Alibaba and NetEase as well as fresh graduates from Zhejiang University, the University of Science and Technology of China, Hangzhou Dianzi University and other schools. Beyond day-to-day business support, the team pursues technical exploration and practice in material systems, engineering platforms, page-building platforms, performance and experience, cloud applications, and data analysis and visualization, has promoted and landed a series of internal technical products, and continues to explore the frontier of the front-end technology system.
If you want to change being pushed around by what you do and start driving things yourself; if you want to change being told you need more ideas yet finding no way to break through; if you want to change having the ability to deliver results but not being needed; if you want to change having something you want to accomplish but no team behind you; if you want to change the treadmill of "three years of experience repeated for five years"; if you want to change a good instinct that is always blocked by a thin layer of window paper… If you believe in the power of belief — that ordinary people can achieve extraordinary things and that you can meet a better version of yourself; if you want to take part in the growth of a front-end team that deeply understands the business, has a sound technology system, creates value through technology, and spreads its influence as the business takes off — then I think we should talk. Any time, we are waiting for your message: [email protected]