Last year, while working on 3D visualization projects, I built a 3D visualization monitoring tool (V1) with Three.js. My main reference was Guo Longbang's technology blog: www.yanhuangxueyuan.com/3D.html
Three.js editor: threejs.org/editor/
Form generator: mrhj.gitee.io/form-genera…
When I was learning Three.js, reading the official documentation still gave me a headache. There were many things I didn't know how to look up, so I had to browse the demos one by one to find a similar effect and then dig through the demo code for the key APIs. I had no WebGL foundation before reading the official docs, and I plan to learn WebGL this year. I read about WebGL on W3CSchool, but it was dry, there was no complete code, and many terms I didn't understand had to be searched one by one. Today I stumbled upon this illustrated explanation of how WebGL and Three.js work. I think it is well written, so I am reposting it to study it a few more times; it can also help more people understand the principles. OK, let's learn together!
What are we going to talk about? Two things: 1. What is the design philosophy behind WebGL? 2. Taking Three.js as an example, what role does a framework play on top of it?
Why do we need to understand the principles? We assume you already know a bit about WebGL, or have done something with Three.js. At this point you may run into problems: a bug you cannot solve, or cannot even find a direction for; or performance problems with no idea how to optimize. That is when we need to understand more.
1. What is a matrix? Simply put, matrices are used for coordinate transformations, as shown below:
2. How exactly does a matrix transform coordinates? As shown in the figure below:
3. For example, shifting a coordinate by 2, as shown below:
If you still don't understand at this point, it doesn't matter; all you need to know is that matrices are used for coordinate transformations.
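To make the "shift by 2" example concrete, here is a minimal JavaScript sketch (my own illustration, not from the original article) that multiplies a 4x4 translation matrix, stored column-major as WebGL expects, by a point in homogeneous coordinates:

// Multiply a 4x4 column-major matrix by a vec4 point.
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// Translation matrix that shifts x by 2.
const translateX2 = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  2, 0, 0, 1,
];

console.log(transform(translateX2, [1, 1, 0, 1])); // [3, 1, 0, 1]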
4.1 WebGL API: Before learning a new technology, we usually look at its documentation or API first. Looking at Canvas's drawing API, we find that it can draw straight lines, rectangles, circles, arcs, and Bezier curves. So we take a look at WebGL's drawing API and find:
It only knows points, lines, and triangles? I must have read it wrong. No, you read that right.
Even a complex model like this is made up of triangles.
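For reference, these are the only primitive modes a WebGL draw call accepts; a small sketch, assuming a page with a canvas element:

// Grab a WebGL context from a canvas on the page.
const gl = document.querySelector('canvas').getContext('webgl');

// Every draw call uses one of these seven modes: points, line variants,
// and triangle variants. There is nothing else.
console.log(
  gl.POINTS, gl.LINES, gl.LINE_STRIP, gl.LINE_LOOP,
  gl.TRIANGLES, gl.TRIANGLE_STRIP, gl.TRIANGLE_FAN
);

// For example, drawing 3 vertices as one triangle:
// gl.drawArrays(gl.TRIANGLES, 0, 3);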
4.2 WebGL rendering process: Simply put, WebGL rendering consists of the following three steps: 1. obtaining the vertex coordinates; 2. primitive assembly (drawing the triangles one by one); 3. rasterization (generating fragments, i.e. pixels).
Next, we’ll walk you through each step.
4.2.1 Obtaining Vertex Coordinates: Where do vertex coordinates come from? For a cube we could write them by hand, but what about a robot? Right, we are not going to write those coordinates one by one. They usually come from a 3D modeling tool's export, or are generated by a framework, as shown below:
What does "write to a buffer" mean? Right, I skipped that to simplify the process. Since vertex data often numbers in the thousands, after obtaining the vertex coordinates we usually store them in video memory, that is, in a buffer, so the GPU can read them more quickly.
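Writing vertices into a buffer looks roughly like this (a minimal sketch; gl is assumed to be a WebGL context and the coordinates are just an example):

// Vertex coordinates, e.g. exported from a 3D tool or generated by a framework.
const positions = new Float32Array([
   0.0,  0.5, 0.0,
  -0.5, -0.5, 0.0,
   0.5, -0.5, 0.0,
]);

// Create a buffer in video memory and copy the vertex data into it,
// so the GPU can read it quickly while rendering.
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);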
4.2.2 Primitive Assembly: We already know that primitive assembly means generating primitives (i.e. triangles) from the vertices. Is the process automatic? Not entirely. To give us more control over vertex positions, WebGL hands this power over to us; this is the so-called programmable rendering pipeline. WebGL needs us to process the vertices first. How? Let's start with the image below:
This introduces a new term, the "vertex shader": a program written in the OpenGL ES shading language (GLSL), defined in JavaScript as a string and passed to the GPU to be compiled and run. For example, here is a vertex shader:
attribute vec4 position;
void main() {
  gl_Position = position;
}
The attribute qualifier declares a variable whose value is passed in from the browser (JavaScript) to the vertex shader; position is the vertex coordinate we defined; gl_Position is a built-in output variable. This code doesn't really do anything yet. If we are drawing 2D graphics, that is fine; but when drawing 3D graphics, the incoming vertex coordinate is three-dimensional, and we need to convert it to a screen coordinate. For example, v(-0.5, 0.0, 1.0) is converted to p(0.2, -0.4). The process is similar to taking a photo with a camera. (I don't fully understand how v is converted to screen coordinates; if you know, please explain.)
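"Defined in JavaScript as a string and passed to the GPU" means roughly the following; a sketch, assuming a WebGL context named gl:

// The vertex shader source, kept as a plain JavaScript string.
const vertexSource = `
  attribute vec4 position;
  void main() {
    gl_Position = position;
  }
`;

// Hand the string to the driver, which compiles it into a program
// that the GPU will run once per vertex.
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertexSource);
gl.compileShader(vertexShader);

if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(vertexShader));
}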
4.2.2.1 Vertex Shader Processing Flow: Back to the topic, how does a vertex shader process vertex coordinates?
As shown in the figure above, the vertex shader program runs once for every vertex; after the coordinate transformation is done, the GPU carries out primitive assembly. You may notice that the vertex shader has changed to:
attribute vec4 position;
uniform mat4 matrix;
void main() {
  gl_Position = position * matrix;
}
This is where the matrix comes in: matrix converts the 3D world coordinates into screen coordinates. This matrix is called the projection matrix and is passed in from JavaScript; how it is generated is beyond the scope of this article.
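Passing that matrix in from JavaScript uses the uniform mechanism, roughly like this (a sketch; program is assumed to be the linked shader program and projectionMatrix a 16-element Float32Array computed elsewhere):

// Look up the uniform declared as `uniform mat4 matrix;` in the shader
// and upload the 16 matrix values to the GPU.
const matrixLocation = gl.getUniformLocation(program, 'matrix');
gl.useProgram(program);
gl.uniformMatrix4fv(matrixLocation, false, projectionMatrix);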
4.2.3 Rasterization: Similar to primitive assembly, rasterization is also controllable.
After the primitives are generated, we need to "color" the model. This is done by the "fragment shader", which also runs on the GPU and is also an OpenGL ES (GLSL) program. What the model's surface looks like, including its texture (color, diffuse map, etc.) and lighting, is computed by the fragment shader. Here is a simple fragment shader:
precision mediump float;
void main(void) {
  gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
gl_FragColor is the output color value.
4.2.3.1 How does the fragment shader control the color generation?
As shown in the figure above, the vertex shader runs once for each vertex, while the fragment shader runs once for each fragment (pixel) that is generated.
4.3 WebGL's complete workflow: At this point, WebGL has essentially gone through the following process:
1. Preparing the data. In this phase we need to provide the vertex coordinates, the indices (the order in which triangles are drawn), the UV coordinates (which determine the texture mapping), the normals (which determine the lighting effect), and various matrices (such as the projection matrix). The vertex data is stored in a buffer (because there is so much of it) and passed to the vertex shader via the attribute qualifier; the matrices are passed to the vertex shader via the uniform qualifier.
2. Generating the vertex shader. Based on our needs, a vertex shader (OpenGL ES/GLSL) program is defined in JavaScript as a string, compiled into a shader program and passed to the GPU.
3. Primitive assembly. The GPU executes the vertex shader program once per vertex to produce the final vertex coordinates, completing the coordinate conversion.
4. Generating the fragment shader. What texture the model has, its lighting effect and its shadows (this part is complicated, since it requires rendering to a texture first; don't worry about it for now) are all determined at this stage.
5. Rasterization. Using the fragment shader we determine the color of each fragment and, based on the depth buffer, which fragments are occluded and need not be rendered; the fragment information is then stored in the color buffer, completing the whole render.
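Tying the five steps together, a stripped-down "hello triangle" looks roughly like this (my own condensed sketch, not code from the original article; error handling omitted):

const gl = document.querySelector('canvas').getContext('webgl');

// 1. Prepare the data: vertex coordinates go into a buffer.
const positions = new Float32Array([0.0, 0.5, 0.0, -0.5, -0.5, 0.0, 0.5, -0.5, 0.0]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

// 2. The two shaders, defined as strings and compiled for the GPU.
const vsSource = `
  attribute vec4 position;
  void main() { gl_Position = position; }
`;
const fsSource = `
  precision mediump float;
  void main() { gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); }
`;
function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}
const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
gl.useProgram(program);

// 3. Feed the buffer to the `position` attribute, three floats per vertex.
const positionLocation = gl.getAttribLocation(program, 'position');
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);

// 4-5. Primitive assembly, fragment shading and rasterization all happen
// on the GPU when we issue the draw call.
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, 3);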
5. What exactly does Three.js do? We know Three.js does a lot for us, but what exactly does it do, and what role does it play in this process? Let's take a brief look at the stages Three.js participates in:
The yellow and green parts are handled by Three.js: yellow is the JavaScript part and green is the OpenGL ES (GLSL) part. As you can see, Three.js does almost everything for us.
It imports the model data; automatically generates the various matrices; generates the vertex shader; helps us create materials and configure lights; and generates the fragment shader based on the material we set. On top of that, it wraps WebGL's rasterization-based 2D API into a 3D API that we humans can understand.
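For comparison, here is roughly all the code we have to write ourselves when Three.js takes over that work (a minimal sketch using the standard Three.js API; the constructor arguments are just illustrative values):

import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

// One mesh = geometry (the vertex data) + material (which decides the shaders).
const box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(box);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Three.js generates the buffers, matrices and shaders, and issues
// the underlying WebGL calls, inside this one call.
renderer.render(scene, camera);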
5.1 Coordinate Conversion: From how WebGL works, we already know that the vertex shader converts 3D world coordinates into screen coordinates, but the coordinate transformation is not limited to the projection matrix. See the diagram below:
Earlier, the result WebGL produced after primitive assembly looked like this. Since we assume the model is fixed at the coordinate origin and the camera's x and y coordinates are both 0, the normal result is as follows:
5.1.1 Model Matrix
Now we rotate the model clockwise by Math.PI/6, so all the vertex positions necessarily change.
box.rotation.y = Math.PI/6;
However, if we computed the new vertex positions directly in JavaScript, performance would be very poor (vertices often number in the thousands) and the data would be very hard to maintain. So instead we use a matrix, modelMatrix, to record this rotation.
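In Three.js terms, that rotation ends up encoded in the object's matrix rather than in recomputed vertex data; a small sketch (box is the mesh from the earlier example):

box.rotation.y = Math.PI / 6;

// Compose position / rotation / scale into the object's 4x4 local matrix;
// this is the model matrix information, not new vertex coordinates.
box.updateMatrix();
console.log(box.matrix.elements); // 16 numbers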
5.1.2 View Matrix
Then, we move the camera up by 30.
camera.position.y = 30;
Similarly, we use the matrix viewMatrix to record this movement.
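In Three.js, the view matrix is the inverse of the camera's world matrix; a sketch (behavior as in recent Three.js versions):

camera.position.y = 30;

// Updating the camera's world matrix also refreshes its inverse,
// which is what ends up as the viewMatrix in the vertex shader.
camera.updateMatrixWorld();
console.log(camera.matrixWorldInverse.elements);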
5.1.3 Projection Matrix
This is the projection matrix we discussed earlier; we record it as projectionMatrix.
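In Three.js the projection matrix is built from the camera's parameters (fov, aspect, near, far for a PerspectiveCamera); a sketch:

// After changing camera parameters, recompute the projection matrix.
camera.fov = 60;
camera.updateProjectionMatrix();
console.log(camera.projectionMatrix.elements);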
5.1.4 Applying the Matrices
Next, we write the vertex shader:
gl_Position = position * modelMatrix * viewMatrix * projectionMatrix;
Now the final vertex position is computed on the GPU. In fact, Three.js does all of the above for us.
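We can reproduce that chain on the CPU for a single point, which also shows which Three.js matrices correspond to which shader uniforms (a sketch; box and camera are from the examples above):

box.updateMatrixWorld(true);
camera.updateMatrixWorld(true);

const point = new THREE.Vector3(0.5, 0.5, 0.5);

// modelMatrix -> viewMatrix -> projectionMatrix, mirroring the shader line.
point
  .applyMatrix4(box.matrixWorld)            // model: local -> world space
  .applyMatrix4(camera.matrixWorldInverse)  // view: world -> camera space
  .applyMatrix4(camera.projectionMatrix);   // projection (applyMatrix4 also divides by w)

// Vector3.project(camera) is the built-in shortcut for the last two steps.
console.log(point);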
5.2 Materials and Lights: We already know that the fragment shader is responsible for handling information such as the material and the lights, but how does it do that? See the diagram below:
5.3 Three.js's complete workflow:
When we choose a material, Three.js selects the corresponding vertex shader and fragment shader based on that material. The shaders we normally need are already built into Three.js.
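If the built-in materials are not enough, Three.js also lets us supply the two shaders ourselves through ShaderMaterial, which makes the "material decides the shaders" mapping explicit (a sketch; with ShaderMaterial, Three.js injects projectionMatrix, modelViewMatrix and position into the GLSL for us):

const material = new THREE.ShaderMaterial({
  vertexShader: `
    void main() {
      // position, modelViewMatrix and projectionMatrix are provided by Three.js.
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
  `,
});

const whiteBox = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);
scene.add(whiteBox);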
End of the full text. Original author: Wan Technician; source: www.cnblogs.com/wanbo/
SegmentFault link: segmentfault.com/a/119000003…