Hello, my friends. I recently shared some Three.js-related content within my team, and I want to summarize and record it over the weekend.
Without further ado, let's look at the effect of the demo first.
Seeing this effect, many readers may find it familiar; perhaps you are thinking of this: the UP2017 teaser site.
This is a 2017 work from Tencent, and it really captivated me at the time. After constant exploration, I managed to build a close imitation demo. Today I would like to share this project from the implementation perspective, mainly covering the following aspects.
- My exploration of Three.js
- Producing the project prototype
- The implementation scheme
- Detail optimizations
1. My exploration and simple understanding of Three.js
I first discovered this work in college (2019), and it was a real shock: the front end, I realized, can do more than admin systems and online stores; it can also deliver this kind of visual impact.
Later, in the course of learning, I slowly rebuilt this scene. After all, Three.js sits outside the mainstream front-end development track, almost a different industry, so it differs quite a bit from most of the skills we usually learn.
Three.js does not create 3D images out of thin air; it is a layer of abstraction and encapsulation on top of WebGL. WebGL is a drawing standard introduced with HTML5. WebGL does not fight fair: its syntax is very close to C, and wrapped in a JS shell it ambushed me, someone with less than two years of work experience.
Fortunately, the way was paved by a great team that encapsulated this very unprincipled GL syntax into the semantic Three.js, and Three.js follows the ESM specification, so we can happily use it in front-end engineering projects.
Like this,
```javascript
import { PerspectiveCamera, Scene } from 'three';

// Create a scene
const scene = new Scene();
```
It's nice to see the familiar import syntax.
You might ask: what is scene here, and what is this line of code doing?
Let's take a look at the three core concepts of Three.js: scene, camera, and renderer.
The official website explains them in great detail, so instead of repeating it here, I will share my own understanding.
- Scene
I think of it as a container. When we use Vue or React, the project has a root tag, usually a div with the id app or root; our components render real DOM inside that root tag, and the content eventually appears on the page.
Three.js has the same container concept, called the scene. Whatever is in the scene is what users may eventually see, so the essence of the scene is to present a 3D picture.
- Camera
Since Three.js provides a three-dimensional world, we need a tool to observe that world, and this tool is the camera; its job is to observe the scene. As for why it is called a camera rather than a video camera or an eye 🤔🤔🤔, that is explained next.
- Renderer
ReactDOM and ReactNative are two renderers responsible for UI rendering in the browser and in native apps respectively, so a renderer's job is to render UI. In Three.js, the renderer is used to render the scene: each time the render method executes, it produces one frame of the image, just like taking a photo with a camera. That is why the instrument for observing the world is called a camera rather than a video camera or an eye.
If you want to see a continuous image, you need to render continuously. Browsers render frame by frame, so you can have the renderer render the scene once per frame, and you end up seeing a moving picture.
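As a sketch of this loop (the browser version is shown in the comments; the runnable version below stubs the browser pieces so the structure stands out, and the function names are my own, not from the project):

```javascript
// In the browser, the loop typically looks like:
//   function animate() {
//     requestAnimationFrame(animate); // ask the browser for the next frame
//     renderer.render(scene, camera); // draw one "photo" of the scene
//   }
//   animate();
// Below is a stubbed version of the same structure with a fixed frame budget.
function runFrames(renderOnce, budget) {
  let rendered = 0;
  function animate() {
    if (rendered >= budget) return; // a real browser loop just keeps going
    rendered += 1;
    renderOnce(); // stands in for renderer.render(scene, camera)
    animate();    // stands in for requestAnimationFrame(animate)
  }
  animate();
  return rendered;
}

// Each "frame" produces exactly one rendered image.
const drawn = [];
runFrames(() => drawn.push('frame'), 3);
```

The key point is simply that one call to render produces one frame, and scheduling that call once per browser frame produces the moving picture.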
After all this talk, Three.js doesn't seem so strange anymore.
2. Producing the project prototype
As the saying goes, everything is hard at the beginning, and a good start is half the battle. I had plenty of free time during the holiday, so after some simple technical research I started this grand plan.
Here I generously share the project prototype with you; it is this prototype that carried me through to the finished project. Interested readers can run it themselves, and your own creative touches will make this little project even more dazzling and cool ~
The project prototype is here
3. Implementation scheme
Next comes the most exciting part, so grab a small bench and listen up ~
3.1. Technology selection
- Project build: Webpack
Front-end staple, no explanation needed.
- 3D scene: three.js
Renders the 3D scene.
- Particle animation: tween.js
Since the particles are essentially JS objects created by three.js rather than DOM elements, we cannot animate them with CSS, so we need a JS-based animation library (any JS animation library will do here).
3.2. The particle starry sky in the first scene
Our first scene is a starry sky composed of particles, so we first need to initialize the particles.
```javascript
const particles = new Points(geometry, material);
```
Points takes two parameters, geometry and material, which together determine what the particles look like. Here we need to focus on geometry: it determines how the particles are ultimately arranged. It has two important attributes, vertices and colors, and vertices is the most important one: it determines the position of each vertex.
The position of each particle in the starry sky is not hard-coded but randomly generated.
So the implementation idea is clear: randomly generate each vertex position, and we get the final effect.
```javascript
// 1. Create the geometry
const geometry = new Geometry();
// 2. Add vertex and color information to the geometry
for (let i = 0; i < 8000; i++) {
  const vertex = new Vector3();
  // Randomly generate the x, y and z positions
  vertex.x = random...;
  vertex.y = random...;
  vertex.z = random...;
  geometry.vertices.push(vertex);
  geometry.colors.push(new Color(255, 255, 255));
}
// 3. Create the particle system
const particles = new Points(geometry, material);
// 4. Add it to the scene
scene.add(particles);
```
So that was the idea behind the initial scene, and it basically does two things:
- Create the particle system and add it to the scene.
- Make it look like a starry sky. (The final version shows the sky slowly rotating; interested readers can try writing that animation themselves.)
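The random-generation step can be sketched in plain JS (the helper name and coordinate range are my own choices for illustration, not from the original project):

```javascript
// Generate `count` random vertex positions inside a cube of side `range`
// centered on the origin, mimicking the loop that fills geometry.vertices.
function randomVertices(count, range) {
  const vertices = [];
  for (let i = 0; i < count; i++) {
    vertices.push({
      x: (Math.random() - 0.5) * range, // uniform in [-range/2, range/2)
      y: (Math.random() - 0.5) * range,
      z: (Math.random() - 0.5) * range,
    });
  }
  return vertices;
}

// 8000 "stars", matching the loop count used in the scene
const stars = randomVertices(8000, 1000);
```

In the real project each of these positions would be pushed into the geometry as a Vector3; the sketch only shows the random-spread idea.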
3.3. Switching the particles to a model
We mentioned geometry.vertices above, and this property ultimately determines what the model looks like.
Now that we have the star particles, we can switch to a model by moving each particle to a point on the next model.
So first we need to load the model and store its geometry (more precisely, its vertices),
```javascript
const loader = new JSONLoader();
loader.load('assets/1game.json', geo => {
  glist[0] = geo;
});
```
Then move the particles in the current scene to the corresponding model points:
```javascript
// Vertex information for the target model
const nextVertices = glist[index].vertices;
const nextVerticesLength = nextVertices.length;
// Iterate over the particles and animate each one
geometry.vertices.forEach(function (vtx, i) {
  const o = nextVertices[i % nextVerticesLength];
  new TWEEN.Tween(vtx)
    .to(
      {
        x: o.x,
        y: o.y,
        z: o.z,
      },
      1000
    )
    .easing(TWEEN.Easing.Exponential.In)
    .delay(delay * Math.random())
    .start();
});
```
Pay attention to the i % nextVerticesLength part. Imagine we currently have 10,000 particles. Since each model has a different number of points, the particles cannot all find a one-to-one destination as they move, and the leftover particles would look awkward. The idea here is to wrap the extra particles back around from zero, and the way to do that is to take the modulo.
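The wrap-around is easy to see with small numbers (the counts below are made up purely for illustration):

```javascript
// 10 particles but only 4 target vertices: particles 4..9 wrap around
// and share destinations with particles 0..3.
const particleCount = 10;
const nextVerticesLength = 4;

const targets = [];
for (let i = 0; i < particleCount; i++) {
  targets.push(i % nextVerticesLength);
}
// targets: [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```

Every particle gets a valid destination index, and the extras simply stack onto existing model points.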
In addition, delay sets the animation delay time. Giving each particle its own random delay makes the final effect much nicer ~
The following figures compare the effect with and without this optimization.
3.4. Text effect
This mainly uses two important CSS3 properties: filter and animation.
```css
.text {
  animation: activeText 2s 0.5s;
  animation-fill-mode: both;
}

@keyframes activeText {
  0% {
    opacity: 0;
    filter: blur(100px);
  }
  100% {
    opacity: 1;
    filter: blur(0);
  }
}
```
3.5. Post-processing
What is post-processing? Let’s use a set of pictures to illustrate.
The default effect
After applying bloom, focus, and vignette effects
You can see how much more textured it becomes.
However, implementing this effect was not smooth at first, because post-processing belongs to the field of computer graphics, which is a bit harder for us front-end developers and has a high learning cost. In addition, the Three.js documentation barely mentions the concept; I could only get a glimpse of it from the source code. Fortunately, I found several examples shipped with the source, and all of them support the ESM specification.
It looks something like this,
```javascript
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer';

const renderScene = new RenderPass(scene, camera);

// Post-processing effect; many parameters omitted
const bloomPass = new UnrealBloomPass(...);
bloomPass.renderToScreen = true;
bloomPass.threshold = 0;
bloomPass.strength = 0.7;
bloomPass.radius = 0.5;
bloomPass.light = 1;

const composer = new EffectComposer(renderer);
composer.setSize(...size);
composer.addPass(renderScene);
composer.addPass(bloomPass);
// ...
```
The resulting composer is an enhanced renderer, where "enhanced" means the new post-processing effects have been added to it. The composer has its own name too: the effect composer. When rendering the scene, we can let the composer do the rendering and get the post-processed picture.
Like this,
```diff
  const animation = () => {
    requestAnimationFrame(animation);
+   composer.render();
-   renderer.render(config);
  };
```
In my personal opinion, besides the joy of the clouds clearing and the light breaking through, finding familiar things in an unfamiliar field is perhaps also a kind of happiness in learning. We imported the required APIs through import syntax, and the remaining task was just tuning the parameters against the desired effect 🤣🤣🤣.
(PS: My notes on GL syntax are not organized yet; I will share a summary later 😜)
4. Detail optimization
- Web Worker
One drawback of Three.js is performance: 3D scenes involve a lot of computation, which is not very friendly to single-threaded JavaScript. 🤔🤔🤔 So I came up with the idea of using a Web Worker for optimization. Unlike the JS main thread, a Web Worker runs on a separate thread where we can do a lot of the computation, relieving the pressure on the main thread and greatly improving page smoothness.
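A sketch of the split (the file names and message shape are illustrative, not from the original project): keep the heavy per-vertex work in a pure function, run it inside the worker, and send only the results back to the main thread.

```javascript
// The heavy work, kept as a pure function so it can run on any thread.
function computePositions(count, range) {
  const positions = new Float32Array(count * 3); // x, y, z per particle
  for (let i = 0; i < positions.length; i++) {
    positions[i] = (Math.random() - 0.5) * range;
  }
  return positions;
}

// worker.js (browser) would wrap it like this:
//   self.onmessage = (e) => {
//     const positions = computePositions(e.data.count, e.data.range);
//     self.postMessage(positions, [positions.buffer]); // transfer, no copy
//   };
//
// And the main thread would drive it:
//   const worker = new Worker('worker.js');
//   worker.postMessage({ count: 8000, range: 1000 });
//   worker.onmessage = (e) => { /* feed e.data into the geometry */ };
```

Transferring the underlying ArrayBuffer (rather than copying it) keeps the hand-off between threads cheap even with thousands of particles.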
- Object caching
As mentioned above, our models have thousands to around ten thousand particles, and every particle switch creates an animation object per particle, so each run of the animation allocates a large amount of memory. Here we can add a cache optimization,
Like this,
```javascript
geometry.vertices.forEach(function (vtx, i) {
  const o = nextVertices[i % nextVerticesLength];
  // Create the tween only once and cache it on the vertex itself
  let twInstance = vtx.tween;
  if (!twInstance) {
    twInstance = vtx.tween = new TWEEN.Tween(vtx);
  }
  // ...
});
```
After this optimization, the animation objects are created only on the first particle switch; later switches simply reuse the cached objects.
Also, don't worry that having this many objects will cause lag. Every piece of reactive data in Vue 2 also generates several extra objects, and it is not uncommon for a large project to hold more than 10,000 reactive objects without significant performance problems. So we just need to focus on optimizing the process and avoid making the system do meaningless work.
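The caching pattern itself is independent of tween.js. Here is a small self-contained sketch with a stubbed tween factory (all names are illustrative) showing that each vertex only ever creates one animation object, no matter how many switches run:

```javascript
// Stub standing in for `new TWEEN.Tween(vtx)`, counting how many are created.
let created = 0;
function makeTween(target) {
  created += 1;
  return {
    target,
    to() { return this; },    // chainable no-ops, mirroring the real API shape
    start() { return this; },
  };
}

// Create on first use, reuse afterwards: the cache lives on the vertex itself.
function getTween(vtx) {
  if (!vtx.tween) {
    vtx.tween = makeTween(vtx);
  }
  return vtx.tween;
}

const vertices = [{ x: 0 }, { x: 1 }, { x: 2 }];
for (let pass = 0; pass < 5; pass++) {
  vertices.forEach((v) => getTween(v).to().start());
}
// created is 3: one tween per vertex, regardless of how many passes ran
```

Storing the cache directly on the vertex object keeps the lookup O(1) with no extra map to manage.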
5. Closing words
In fact, WebGL differs from my personal development direction; most of the time I simply keep at Three.js as a hobby 😂.
Many of my friends love cool 3D work, but whenever it comes up they complain about the tedious configuration and the unfamiliar programming language. As you may have noticed, in this article I basically look at and think about the problems from a front-end developer's perspective, without introducing too many new concepts. What I really want is to make this unfamiliar territory feel more familiar to everyone 🐳🐳🐳.
In addition, the opening of this work also borrows ideas from the UP2017 teaser site 🤣🤣🤣. That site carries a saying: "only patience can live up to love." I pass this sentence on to you, hoping you can stick with what you love and never forget your original aspiration along the way. Stick to what you love!
Finally, here is the effect picture of the original work: