Learning WebGL isn’t that difficult once you understand the concepts behind it. Many introductory WebGL articles skip these important concepts and jump straight into WebGL’s complex APIs to start rendering graphics, which easily turns an introductory article into a discouraging one. This article focuses on these concepts and explores, step by step, how WebGL renders images to the screen. Understanding these important concepts will greatly reduce the learning curve.
What is WebGL?
WebGL can be used to draw and render complex graphics or perform heavy computations on a web page. It is fully integrated into the web standards implemented by browsers and requires no plug-in installation. It is designed and maintained by the non-profit Khronos Group. Beyond graphics rendering (games, data visualization, maps, AR/VR, and so on), WebGL can also be used for deep learning and other computation-intensive scenarios.
We know that canvas can be used to draw some 2D graphics in web pages.
const canvas = document.createElement('canvas')
const ctx = canvas.getContext('2d') // Create a 2d rendering context
// Now we can use ctx to draw graphics
ctx.fillRect(0, 0, 100, 100) // Draw a square
We see that the parameter passed to getContext above is '2d', so it is generally assumed that canvas can only draw 2D graphics on web pages while WebGL draws 3D graphics. The truth is that we can draw 3D graphics with the 2D context, and even render 3D graphics with characters in a terminal, all thanks to math. WebGL is really just a rasterization engine; it is so low-level that we can only use it to draw points, lines and triangles.
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl')
Replacing '2d' with 'webgl' gives us the WebGL rendering context. Most browsers currently support WebGL, although some, such as IE11, need the 'experimental-webgl' context name instead.
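For example, a minimal sketch of the usual fallback for those older browsers:
const canvas = document.createElement('canvas')
// Try the standard context name first, then the prefixed name used by older browsers such as IE11
const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl')
if (!gl) {
  console.error('WebGL is not supported in this browser')
}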
Later we’ll write code that runs on the GPU (shaders) and passes data from the CPU to the GPU.
CPUs and GPUs have different design goals and target different application scenarios. GPUs were originally intended for computer graphics and video games. In general, the CPU manages the tasks of the entire system, while computationally heavy but simple tasks that need to be repeated many times are handed to the GPU. With thousands of cores, a GPU can perform a large number of computations concurrently, much faster than a CPU can. This is why WebGL uses the power of the GPU: it dramatically increases the speed of rendering images.
Introduction to OpenGL
WebGL is based on OpenGL. OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. It is often used in CAD, virtual reality, scientific visualization programs and video game development. The actual OpenGL library is usually implemented by the graphics card manufacturer according to the specification.
OpenGL originated as SGI’s IRIS GL API, which was considered the most advanced technology of its time and became a de facto industry standard. SGI later turned it into the open standard OpenGL. In 1992 SGI created the OpenGL Architecture Review Board, and in 2006 control of the OpenGL API standard was handed over to the Khronos Group.
OpenGL is cross-platform. OpenGL ES(OpenGL for Embedded Systems) is a subset of OpenGL commonly used in mobile devices. The figure above shows the timeline of OpenGL and OpenGL ES.
WebGL 1.0 is based on OpenGL ES 2.0, which is itself a subset of OpenGL, and WebGL 2.0 is based on OpenGL ES 3.0. WebGL 2.0 is supported by most modern browsers, but Apple has yet to support it, so the majority of applications still use WebGL 1.0.
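A small sketch of how an application might detect which version is available (getContext simply returns null for an unsupported context type):
const canvas = document.createElement('canvas')
const gl2 = canvas.getContext('webgl2') // null if WebGL 2.0 is unavailable
const gl = gl2 || canvas.getContext('webgl') // fall back to WebGL 1.0
console.log(gl2 ? 'Using WebGL 2.0' : 'Falling back to WebGL 1.0')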
Coordinate system
We know that the origin of the 2D canvas is in the upper-left corner, and its Y axis is positive downward.
The coordinate system in OpenGL seems to be more intuitive.
The origin is in the middle, Y up, X to the right.
Notice that the X, Y, and Z axes in OpenGL have a maximum value of 1 and a minimum value of -1.
const canvasWidth = 500
const x = 100
const zeroToTwo = x / canvasWidth * 2 // 0 -> width becomes 0 -> 2
const clipX = zeroToTwo - 1 // 0 -> 2 becomes -1 -> +1
The coordinates of all points are normalized device coordinates (NDC): a small space in which the X, Y and Z values lie between -1 and 1. Any coordinate that falls outside this range is discarded/clipped and will not show up on your screen.
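Extending the snippet above into a small helper (a sketch; pixelToClip is a made-up name, not part of any API). Since the canvas Y axis points down while the NDC Y axis points up, the Y value also has to be flipped.
// Convert a pixel coordinate on the canvas into normalized device coordinates
function pixelToClip(x, y, canvasWidth, canvasHeight) {
  const clipX = x / canvasWidth * 2 - 1  // 0 -> width becomes -1 -> +1
  const clipY = 1 - y / canvasHeight * 2 // flipped: canvas Y grows down, NDC Y grows up
  return [clipX, clipY]
}
pixelToClip(100, 0, 500, 500) // [-0.6, 1]: 100px from the left, at the very top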
Left-handed and right-handed coordinate systems
We haven’t shown what the Z axis looks like in OpenGL yet, because the Z axis has two possible forms: one points out of the screen (positive values come out of the screen toward you) and the other points into the screen (positive values go into the screen).
When the Z axis points out of the screen we call it a right-handed coordinate system, and when the Z axis points into the screen we call it a left-handed coordinate system.
We can tell the two apart with our right and left hands, with the middle finger pointing along the positive Z axis.
Positive direction of rotation
The positive direction of rotation is also opposite between right-handed and left-handed coordinate systems, and again we can use our left and right hands to determine it.
Use the left hand for the left-handed coordinate system and the right hand for the right-handed one. Point the thumb in the positive direction of an axis; the remaining four fingers curl in the positive direction of rotation. Looking from the positive end of the axis toward the origin, the positive direction of rotation is counterclockwise in the right-handed coordinate system and clockwise in the left-handed coordinate system.
What coordinate system is OpenGL?
So is OpenGL left-handed or right-handed? The answer is neither.
Let’s say we have two points right now.
const point1 = [0.5, 0.5, 0.1] // The X, Y, and Z values respectively
const point2 = [0.5, 0.5, -0.2]
If we draw the two points above in OpenGL, which one ends up in front and which one behind?
It depends on the order in which we render them: point1 covers point2 if point1 is rendered later, and point2 covers point1 if point2 is rendered later.
Depth buffer mapping
If we turn on OpenGL depth testing.
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl')
gl.enable(gl.DEPTH_TEST) // Enable depth tests
Depth testing stores a mapping of Z values in the depth buffer, so that as we draw various shapes OpenGL knows whether each new fragment is closer to us or farther away. A fragment that is closer to us covers one that is farther away; if a fragment is farther away than the value already stored in the depth buffer, it is discarded.
With depth testing enabled, point2 always covers point1 regardless of the order in which point1 and point2 are rendered. The point with the smaller Z value covers the point with the larger Z value, which means that OpenGL behaves as a left-handed coordinate system.
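A rough JS sketch (not real WebGL code) of what the depth test does for each pixel with the defaults, i.e. depthRange(0, 1), a depth clear value of 1, and the LESS comparison function:
// depthBuffer[i] starts at 1 (the far value) after gl.clear(gl.DEPTH_BUFFER_BIT)
function depthTest(depthBuffer, i, fragmentDepth) {
  if (fragmentDepth < depthBuffer[i]) { // default comparison: gl.LESS
    depthBuffer[i] = fragmentDepth // remember the closest depth seen so far
    return true // keep the fragment and write its color
  }
  return false // discard the fragment
}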
OpenGL also has a depthRange function, which takes two arguments, depthRange(zNear, zFar); both are numbers and both must be between 0 and 1. The default is depthRange(0, 1), and it sets the range of the depth buffer.
const canvas = document.createElement('canvas')
const gl = canvas.getContext('webgl')
gl.depthRange(1, 0) // Reverse the default values
gl.enable(gl.DEPTH_TEST)
If we set the depth range as above and then render point1 and point2, we see that point1 always covers point2 regardless of order: OpenGL has become a right-handed coordinate system.
By default, the depth buffer ranges from 0 to 1. Let’s see how OpenGL converts a Z value ([-1, +1]) into a depth value ([0, 1]).
depth = n + (f - n) * (z + 1) / 2
// n and f are set by depthRange; n is near, f is far
The formula above shows how a Z value is turned into a depth value.
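Plugging point1 and point2 into the formula, a quick sketch with the default depthRange(0, 1):
// depth = n + (f - n) * (z + 1) / 2, with n = 0 and f = 1 by default
const toDepth = (z, n = 0, f = 1) => n + (f - n) * (z + 1) / 2
toDepth(0.1)  // point1: 0.55
toDepth(-0.2) // point2: 0.4, the smaller depth, so point2 passes the default LESS test
toDepth(0.1, 1, 0)  // 0.45 with the reversed range
toDepth(-0.2, 1, 0) // 0.6, so point1 would now win instead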
However, if you actually set depthRange(1, 0) in WebGL, you will see no effect. This is a difference between WebGL and OpenGL; according to the WebGL 1.0 specification
www.khronos.org/registry/we…
6.12 Viewport Depth Range
The WebGL API does not support depth ranges where the near plane is mapped to a value greater than that of the far plane. A call to depthRange will generate an INVALID_OPERATION error if zNear is greater than zFar.
That is, in WebGL depthRange’s zNear is not allowed to be greater than zFar.
There is another way to turn WebGL into a right-handed coordinate system.
gl.clearDepth(0)
gl.depthFunc(gl.GREATER)
gl.clear(gl.DEPTH_BUFFER_BIT)
Here we set the depth clear value to 0 (the default is 1) and reset the depth buffer with clear. Then we set the depth comparison function to GREATER (the default is LESS), so that vertices with larger Z values overwrite those with smaller ones.
Common coordinate system
Normally we don’t use depthRange, clearDepth and so on, so by default OpenGL behaves as left-handed. This is where it gets confusing, because in practice you develop with a right-handed coordinate system.
It’s not that the right-handed system is better than the left-handed one; the right-handed system is simply OpenGL’s convention. Microsoft’s DirectX, for example, uses a left-handed coordinate system.
Hello World
Now let’s draw a triangle using WebGL.
const canvas = document.createElement('canvas')
canvas.width = 300
canvas.height = 300
document.body.append(canvas) // Create and add canvas to the page
const gl = canvas.getContext('webgl')
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height)
// Tell WebGL how to convert from clip space (-1 to +1) to on-screen pixel coordinates
const vertexShader = gl.createShader(gl.VERTEX_SHADER)
// Create a vertex shader
gl.shaderSource(vertexShader, `
  attribute vec4 a_position;
  void main() {
    gl_Position = a_position; // Set the vertex position
  }
`) // Write the vertex shader code
gl.compileShader(vertexShader) // Compile the shader code
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER)
// Create a fragment shader
gl.shaderSource(fragmentShader, `
  precision mediump float;
  uniform vec4 u_color;
  void main() {
    gl_FragColor = u_color; // Set the fragment color
  }
`) // Write the fragment shader code
gl.compileShader(fragmentShader) // Compile the shader code
const program = gl.createProgram() // Create a program
gl.attachShader(program, vertexShader) // Add a vertex shader
gl.attachShader(program, fragmentShader) // Add the fragment shader
gl.linkProgram(program) // Connect shaders in program
gl.useProgram(program) // Tell WebGL to render with this program
const colorLocation = gl.getUniformLocation(program, 'u_color')
// Get the u_color variable position
gl.uniform4f(colorLocation, 0.93, 0, 0.56, 1) // Set its value
const positionLocation = gl.getAttribLocation(program, 'a_position')
// Get the a_position position
const positionBuffer = gl.createBuffer()
// Create a vertex buffer object and return its ID to hold the triangle vertex data.
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer)
// Bind this vertex buffer object to gl.ARRAY_BUFFER
// Subsequent operations on gl.ARRAY_BUFFER are applied to this buffer
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
0, 0.5, 0.5, 0,
-0.5, -0.5
]), // Three vertices of a triangle
// The data is sent to the GPU as a Float32Array so it can be used directly without conversion
gl.STATIC_DRAW // Indicates that the contents of the buffer do not change often
)
// Add vertex data to the newly created cache object
gl.vertexAttribPointer( // Tell OpenGL how to get data from Buffer
positionLocation, // Index of vertex attributes
2, // The number of components per vertex; must be 1, 2, 3 or 4. We only provided x and y
gl.FLOAT, // The data type of each element
false, // Whether to normalize the data into a specific range; has no effect for gl.FLOAT
0, // Stride: the byte length of one vertex's data; 0 means the data is tightly packed and OpenGL works out the step size
0 // Offset: where in the buffer to start reading, in bytes; must be a multiple of the byte length of the type
)
gl.enableVertexAttribArray(positionLocation);
// Enable the value of the attribute variable so that the vertex shader can access buffer data
gl.clearColor(0, 1, 1, 1) // Set the color used when clearing the color buffer
gl.clear(gl.COLOR_BUFFER_BIT) // Clear the color buffer, that is, the canvas
gl.drawArrays( // Draw primitives from an array
gl.TRIANGLES, // Draw triangles
0, // Which vertex to start with
3 // How many points are needed
)
Using WebGL to draw a single triangle really does take an exaggerated amount of code, so let’s walk through how the code above actually draws this triangle.
Vertex and fragment shaders
The comments in the code above basically explain what each step does, but some concepts need to be explained in detail.
The core of WebGL is the vertex shader and the fragment shader, the second parameter passed to gl.shaderSource above.
OpenGL shaders are written in GLSL (OpenGL Shading Language), which looks a bit like C.
The vertex shader is primarily used to determine the position of a vertex; it tells OpenGL the vertex’s coordinates in NDC (normalized device coordinates) by setting the built-in gl_Position variable.
The fragment shader is also known as a pixel shader; a fragment can roughly be thought of as a pixel. It is primarily used to determine the color of that pixel, which is written to the built-in gl_FragColor variable.
Our goal with OpenGL is to render an image on the screen, and the image is made up of pixels. First we hand OpenGL a bunch of vertices. OpenGL passes each vertex through the vertex shader, which returns the vertex’s position in NDC. The shapes are then broken up into fragments (pixels), a step called rasterization. These fragments are passed to the fragment shader, which outputs the color of each pixel.
const vertex = [[50, 0], [0, 50], [-50, -50]] // Define vertices
const ndcVertex = vertex.map(v => vertexShader(v)) // The vertex positions in NDC (-1 to +1)
const fragments = rasterization(ndcVertex) // Turn these vertices into fragments
const colors = fragments.map(f => fragmentShader(f)) // Pass each fragment to the fragment shader to determine its color
colors.forEach(color => writePixelToScreen(color)) // Then write the pixels to the screen
The pseudo-JS above briefly describes this process.
The image above shows this process nicely; just ignore the geometry shader, since WebGL only has vertex and fragment shaders.
We can also see from this figure that the fragment shader runs many more times than the vertex shader, so anything that can be computed in the vertex shader should be done there.
Pass data to the shader
Shaders are written in GLSL, so how do we pass data into shaders in JS?
The GLSL code above has the following two variables, which are passed in from outside.
// vertex
attribute vec4 a_position;
// frag
uniform vec4 u_color;
Both variables are of type vec4, which can be understood as an array of 4 floating-point numbers, or a vector with 4 components. For now you can ignore why the vertex position is a vec4 rather than a vec3.
The key to passing in data from outside is the attribute and uniform storage qualifiers. Both kinds of variables must be defined outside of functions, and neither can be reassigned inside the shader.
uniform
Let’s look at uniform first. It can be used in both vertex and fragment shaders, and it is global and unique within a shader program. u_color is a bit like window.u_color: we assign it a value from external JS, it can then be used in both the vertex and fragment shaders, and we can also change its value later from JS.
const colorLocation = gl.getUniformLocation(program, 'u_color')
gl.uniform4f(colorLocation, 0.93, 0, 0.56, 1)
// similar to program.window.u_color = [0.93, 0.0, 0.56, 1.0]
We first get the location of u_color in the shader and then use uniform4f to pass the data; the 4f suffix means 4 floating-point numbers (RGBA). Note that color values in OpenGL range from 0 to 1, not from 0 to 255.
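uniform4f is just one member of a family of setters in the WebGL API; a few other common ones, sketched briefly (loc and matrix16 are assumed to exist):
gl.uniform1f(loc, 0.5) // a single float
gl.uniform3f(loc, 1, 0, 0) // a vec3, e.g. an RGB color
gl.uniform4fv(loc, [0.93, 0, 0.56, 1]) // same as uniform4f but taking an array
gl.uniformMatrix4fv(loc, false, matrix16) // a mat4 as 16 floats; the second argument must be false in WebGL 1.0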
attribute
An attribute can only be used in the vertex shader and represents per-vertex information. In the example above we defined three vertices, which are passed to the a_position variable. The vertex shader does not receive all of these vertices at once, but one at a time.
const points = [p1, p2, p3]
points.forEach(p => vertexShader(p))
The vertex shader is executed roughly like the code above, except that on the graphics card the invocations run concurrently. Passing an attribute from JS is a bit more cumbersome.
const positionLocation = gl.getAttribLocation(program, 'a_position')
const positionBuffer = gl.createBuffer()
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer)
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
0, 0.5, 0.5, 0,
-0.5, -0.5
]), gl.STATIC_DRAW)
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0)
gl.enableVertexAttribArray(positionLocation);
As with uniforms, we first get the variable’s location, and then create a vertex buffer to hold the vertex data. The binding target of a vertex buffer object is gl.ARRAY_BUFFER; we bind the buffer to gl.ARRAY_BUFFER so that subsequent operations on gl.ARRAY_BUFFER act on this buffer. We then use the bufferData method to store the data in the buffer. Once the data is in the buffer, we use vertexAttribPointer to tell OpenGL how to read it, and finally enableVertexAttribArray to enable the vertex attribute.
Code parsing
Now that you know about vertex and fragment shaders, the code above should be mostly understandable; let’s go over it once more.
To render using WebGL, you first need to get the rendering context by changing the usual '2d' parameter to 'webgl', and then set the WebGL viewport so that OpenGL knows how to convert NDC coordinates into on-screen coordinates.
Next we create the vertex and fragment shaders and compile the shader code. We create a shader program, attach the vertex and fragment shaders to it, link the program, and then tell WebGL to use it.
After passing the data to the shaders as described above, we set the WebGL clear color and then clear the color buffer, i.e. clear the canvas with the color we set.
As the last step we start rendering with gl.drawArrays. We chose to render triangles; we could also change the primitive type to lines, in which case we would get the triangle’s three edges instead of a filled triangle. There are three vertices in the vertex buffer, so we tell it to use 3 of them.
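For example, a small sketch: keeping everything else the same and only changing the final call gives the outline instead of a filled surface.
// gl.LINE_LOOP connects the 3 vertices in order and closes the loop,
// producing the three edges of the triangle rather than a filled triangle
gl.drawArrays(gl.LINE_LOOP, 0, 3)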
OpenGL itself is a state machine: we use its APIs to set its state and tell it how to run, and this collection of state is often called the OpenGL context.
Introduction to GLSL ES
If you are unfamiliar with GLSL, let’s take a closer look at the OpenGL shading language. GLSL ES is used in OpenGL ES and WebGL. As you might have guessed, WebGL uses the GLSL ES of OpenGL ES 2.0 (based on GLSL 1.2), and WebGL 2 uses the GLSL ES of OpenGL ES 3.0 (based on GLSL 3.3).
It is a strongly typed language, and every statement must end with a semicolon. Its comment syntax is the same as in JS, and its variable-naming rules are also similar: names cannot be keywords or reserved words, and cannot start with gl_, webgl_, or _webgl_.
There are three main value types in GLSL: floating point, integer, and Boolean. Note that floating-point numbers must contain a decimal point. Type conversions can be performed directly with the float, int, and bool functions.
float f = float(1);
Like JS, it supports the basic operators such as ++, --, +=, &&, || and the ternary operator.
Matrices and vectors
Because it is used for drawing, there is also support for matrices and vectors.
vec2, vec3, and vec4 are vectors of 2, 3, and 4 floating-point numbers.
ivec2, ivec3, and ivec4 are the integer versions.
bvec2, bvec3, and bvec4 are the Boolean versions.
mat2, mat3, and mat4 are 2×2, 3×3, and 4×4 matrices.
vec3 color = vec3(1., 1., 1.); // white
GLSL also has powerful syntax for constructing vectors and reading or assigning their components.
vec4 v4 = vec4(1., 2., 3., 4.);
vec3 color = v4.rgb; // vec3(1., 2., 3.)
vec3 position = v4.xyz; // Components can also be accessed with xyzw
vec3 texture = v4.stp; // Or with stpq
vec2 v2 = vec2(v4); // Use the first two elements of v4
vec4 v41 = vec4(v2, v4.yw); // Construct from v2 plus the y and w elements of v4
vec4 v42 = vec4(1.); // Set all four elements to 1.
v42.g = 2.;
v42[1] = 3.; // Components can also be accessed with []
mat2 m2 = mat2(1., 2., 3., 4.);
vec2 v21 = m2[0]; // vec2(1., 2.)
float f = m2[0].x; // [] and component access can be mixed
Branches and loops
Branches and loops are the same as JS.
if (true) {} else if (true) {} else {}
for (int i = 0; i < 3; i++) {
continue; // or break
}
function
Each shader must have a main function, which is executed automatically. The return type of a function is written before the function name; use void if there is no return value.
float add(float a, float b) {
return a + b;
}
If a function is defined after the point where it is called, it needs to be declared first.
float add(float a, float b); // declare
void main() {
float c = add(1., 1.);
}
float add(float a, float b) {
return a + b;
}
Function parameters also have qualifiers.
in (the default) passes an argument into the function.
const in is the same as in, but the parameter cannot be modified.
out is assigned inside the function and passed back out.
inout passes an argument in, can be reassigned inside the function, and is passed back out.
void add(in float a, in float b, out float answer) {
answer = a + b; // Use out instead of return
}
GLSL also has built-in functions, such as sin, cos, pow, abs, and so on.
Precision qualifier
The precision qualifier controls the precision of numeric values; higher precision means slower performance, so we need to choose precision sensibly. GLSL has three precision levels, highp, mediump, and lowp: high, medium, and low precision respectively.
mediump float size; // Declare a mid-precision floating point number
highp int len; // Declare a high-precision integer
lowp vec4 v; // Low precision vector
Declaring a precision for each variable individually is cumbersome, so we can also declare a default precision for a whole type at once.
precision mediump float; // All floating-point numbers are used with medium precision
GLSL has already set the default variable precision for us.
Both int and float are highp in vertex shaders.
In the fragment shader int defaults to mediump and float has no default.
This is why the first line of the fragment shader above is precision mediump float; since float has no default precision there, we have to set it ourselves.
Also, in both vertex and fragment shaders, sampler2D and samplerCube default to lowp (they are mainly used for rendering images; more on that later).
For more information about GLSL, see OpenGL ES Reference Pages.
The cube
Let’s now explore how to render a cube. As mentioned earlier, WebGL is a very low-level API that can only draw points, lines, and triangles, so how do we draw a cube?
In fact, the beautiful 3D models you see are actually made up of very small triangles.
This refrigerator, for example, is made up of more than 30,000 triangles. Why triangles? Because any polygon can be decomposed into multiple triangles, which makes the triangle the basic unit of a polygon, and a triangle is always planar.
Two triangles can be combined to represent a square. The cube has six faces, so it needs 12 triangles. Each triangle needs 3 vertices, so finally we need 36 vertices!
But a cube is special: it actually has only eight vertices, each shared by three faces. So is there a way to define only eight vertices? Yes: OpenGL can also render triangles using vertex indices that we define. For example, we send 8 vertices plus an array of vertex indices to the GPU, and OpenGL then renders triangles in the order given by the index array.
For example, if we have an indexed array [1,2,3,3,2,0] and we are drawing triangles, this means rendering a triangle using the vertices of the array with subscripts 1,2, and 3, and then rendering another triangle with subscripts 3,2, and 0.
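For instance, a minimal sketch of the square case from above: two triangles share vertices, so 4 vertices plus 6 indices replace 6 duplicated vertices.
// 4 unique vertices of a square (x, y)
const squareVertices = new Float32Array([
  -0.5, 0.5, // 0: top left
  0.5, 0.5, // 1: top right
  0.5, -0.5, // 2: bottom right
  -0.5, -0.5 // 3: bottom left
])
// Two triangles described by vertex indices: (0, 1, 2) and (0, 2, 3)
const squareIndices = new Uint8Array([0, 1, 2, 0, 2, 3])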
const canvas = document.createElement('canvas')
canvas.width = canvas.height = 300
document.body.appendChild(canvas)
const gl = canvas.getContext('webgl')
gl.viewport(0.0, gl.canvas.width, gl.canvas.height)
const program = createProgramFromSource(gl, `
  attribute vec4 aPos;
  attribute vec4 aColor;
  varying vec4 vColor;
  void main() {
    gl_Position = aPos;
    vColor = aColor;
  }
`, `
  precision mediump float;
  varying vec4 vColor;
  void main() {
    gl_FragColor = vColor;
  }
`)
const points = new Float32Array([
-0.5, 0.5, -0.5,
0.5, 0.5, -0.5,
0.5, -0.5, -0.5,
-0.5, -0.5, -0.5,
0.5, 0.5, 0.5,
-0.5, 0.5, 0.5,
-0.5, -0.5, 0.5,
0.5, -0.5, 0.5
])
const colors = new Float32Array([
1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
])
const indices = new Uint8Array([
0, 1, 2, 0, 2, 3, // front
1, 4, 2, 4, 7, 2, // right
4, 5, 6, 4, 6, 7, // back
5, 3, 6, 5, 0, 3, // left
0, 5, 4, 0, 4, 1, // top
7, 6, 3, 7, 3, 2 // bottom
])
const [posLoc, posBuffer] = createAttrBuffer(gl, program, 'aPos', points)
const [colorLoc, colorBuffer] = createAttrBuffer(gl, program, 'aColor', colors)
const indexBuffer = gl.createBuffer()
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer)
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW)
gl.bindBuffer(gl.ARRAY_BUFFER, posBuffer)
gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, 0, 0)
gl.enableVertexAttribArray(posLoc)
gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer)
gl.vertexAttribPointer(colorLoc, 3, gl.FLOAT, false, 0, 0)
gl.enableVertexAttribArray(colorLoc)
gl.enable(gl.DEPTH_TEST)
gl.clearColor(0, 1, 1, 1)
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)
gl.drawElements(
gl.TRIANGLES, // The type of primitive to render
indices.length, // Number of elements to render
gl.UNSIGNED_BYTE, // The type of the value in the element array buffer
0 // The offset in the element array buffer, in bytes
)
function createShader(gl, type, source) {
const shader = gl.createShader(type)
gl.shaderSource(shader, source)
gl.compileShader(shader)
return shader;
}
function createProgramFromSource(gl, vertex, fragment) {
const vertexShader = createShader(gl, gl.VERTEX_SHADER,vertex)
const fragmentShader = createShader(gl, gl.FRAGMENT_SHADER, fragment)
const program = gl.createProgram()
gl.attachShader(program, vertexShader)
gl.attachShader(program, fragmentShader)
gl.linkProgram(program)
gl.useProgram(program)
return program
}
function createAttrBuffer(gl, program, attr, data) {
const location = gl.getAttribLocation(program, attr)
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer)
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW)
return [location, buffer]
}
The code above draws a cube with sides of length 1, with the center of the cube at the origin of the axes.
In addition to the coordinates of each vertex, we also defined a color for each vertex: the first four vertices (the face toward the screen, closer to us) are given colors and the remaining four are black.
The vertex indices are then defined with a Uint8Array (or a Uint16Array if an index value would be greater than 255), as sketched below.
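A brief sketch of the larger-mesh case: once there are more than 256 vertices the indices no longer fit in a byte, so the index buffer uses Uint16Array and drawElements is told to read unsigned shorts.
const indices16 = new Uint16Array([0, 1, 2 /* ... more indices ... */])
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices16, gl.STATIC_DRAW)
gl.drawElements(gl.TRIANGLES, indices16.length, gl.UNSIGNED_SHORT, 0)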
We create a buffer for the color data as well as for the coordinates, and then tell WebGL how to read them. Vertex index data is a little different: its binding target is not gl.ARRAY_BUFFER but gl.ELEMENT_ARRAY_BUFFER, the buffer used for element indices.
Depth testing is also enabled, so that triangles drawn later do not simply cover those drawn first; instead coverage is decided by their Z values. Also, when clearing we don’t have to call the clear function twice: the two buffer bits are combined with the | operator, gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT.
The final step is to replace drawArrays with drawElements, indicating that we use indexes to render the graphics.
The varying storage qualifier
There are three storage qualifiers: attribute, uniform, and varying. The first two were introduced above; both get their data from external JS.
A varying is data passed from the vertex shader to the fragment shader. In the example above we assign aColor to vColor, which can then be used in the fragment shader.
It is called varying for a reason; let’s look at what the code above actually renders.
We set the first 4 vertices to be red, green, blue, and pink. Why is it a gradient?
As mentioned earlier, the fragment shader generally executes far more times than the vertex shader. Before the fragment shader runs, the primitive is rasterized, i.e. broken up into pixels, and the fragment shader then runs once per pixel to determine that pixel’s color.
Varying variables are interpolated by OpenGL as they are passed from the vertex shader to the fragment shader. That is, we defined the colors of the three vertices of each triangle, and the pixels inside the triangle are interpolated from the colors of those three vertices. For example, if one end of a line segment is red and the other is green, the middle of the segment is 50% red and 50% green.
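A rough JS sketch of that interpolation (not actual OpenGL code): for a fragment a fraction t of the way between two vertices, every component of the varying is mixed the same way.
// Interpolate a varying (here an RGB color) between two vertices.
// t is how far the fragment is along the segment: 0 at vertex A, 1 at vertex B.
function lerpVarying(colorA, colorB, t) {
  return colorA.map((a, i) => a * (1 - t) + colorB[i] * t)
}
lerpVarying([1, 0, 0], [0, 1, 0], 0.5) // halfway between red and green: [0.5, 0.5, 0]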
Rotation and perspective
We rendered a cube, so why does it look like a flat square?
Because the front face of the cube is facing us directly, we can only see that face; if we rotate the cube a little, we can see that it really is a cube.
In real life, nearby objects look larger and distant objects look smaller: a perspective effect. 3D graphics should show a similar effect, but our cube is rendered without perspective, which means the front face is exactly the same size as the back face.
How to rotate a shape and make it look like it has perspective will be covered in the next article.
conclusion
This article covered WebGL basics and some important concepts. WebGL’s X, Y, and Z coordinates range from -1 to +1, and any vertex outside this range is clipped; these coordinates are called normalized device coordinates (NDC). WebGL behaves left-handed by default, but we can also change it to right-handed; usually we pick one coordinate system and stick with it, and WebGL’s convention is right-handed. To render graphics, a vertex shader runs for each vertex, the primitives are rasterized (during which varying variables are interpolated), and a fragment shader then returns the color of each pixel. Finally we rendered a cube that looks like a square because we are looking straight at one of its faces; to see the other faces we need to rotate it, and WebGL has no rotation API for us to call. Rotating the cube requires math, usually a rotation matrix. The next article will cover rotation, scaling, and other transformations in detail.
reference
- WebGL Programming Guide
- webglfundamentals.org/
- learnopengl-cn.github.io/
- zh.wikipedia.org/wiki/OpenGL
- zh.wikipedia.org/wiki/WebGL
- Developer.mozilla.org/zh-CN/docs/…
- www.khronos.org/registry/we…