Preface

During my internship I built an integrated Web3D wind turbine scene demo, mainly as an improvement on Tupu's wind turbine scene.

Perhaps because it is only a demo on Tupu's official website, it was likely built in a short time. As a result, Tupu's wind turbine scene has the following problems:

  1. Slow first load (mainly due to the 3D model)
  2. Inefficient rendering (mainly due to the 3D model)
  3. No scene dynamics (the weather, animation, etc. never change)
  4. Poor interactivity (almost none)

So I made the following improvements to Tupu's demo:

  1. Add scene dynamics – change the blade speed with the wind speed, and simulate the wind farm's weather from the weather forecast
  2. Improve rendering efficiency – use a glTF-format model
  3. Enhance interactivity – add control options and give the user choices


  • Demo video – 3D wind turbine scene demo video (bilibili)
  • Code – LeBron-Li/Wind-3D-web: 3D wind turbine scene (github.com). Drop a Star before you go!





Basics of Web3D

The basic knowledge of Web3D comes mainly from computer graphics,

including the Scene, the Camera, the Renderer, Geometry, Lights, Textures, Materials, and so on.

For details, refer to Guo Longbang's blog; it covers far more than can be expanded on here.





Selection of 3D models

The choice of 3D model format is the easiest step, but probably the most critical to rendering efficiency.

Tupu's scene uses OBJ+MTL as its 3D format, which is one of the main reasons for its slow loading and heavy rendering load. So the 3D model format is where I made the first improvement.


Differences between common Web3D models

Before choosing a 3D format, we need to know what the major 3D formats are and what characterizes each of them.

  1. glTF

glTF is a very comprehensive format that supports almost all common features, including color, animation, CSG, detail meshes, textures, cameras, lighting, relative positioning, and more.

glTF is a transmission format: unlike other formats, it needs no importer or converter to be written for it and can be consumed directly by the applications concerned.

  2. STL

STL only supports a plain description of the geometry; other properties such as color, material, lighting, and texture are all missing, so for our purposes it is almost unusable as a 3D format.

  3. OBJ

OBJ supports certain features such as plain geometry description, CSG, color, and materials, but has no animation, textures, cameras, or other important 3D model properties. For example, for an OBJ model to display the desired texture, an additional MTL file has to be supplied.

  4. FBX

To be more precise, FBX is a generic interchange format rather than a pure 3D model file format like OBJ. It supports all major 3D data elements as well as 2D, audio, and video media elements, along with animation, material properties, textures, skeletal animation, lighting, cameras, and more.

The FBX format supports polygons, curves, surfaces, and point group materials. It does not provide detail meshes, but is otherwise as complete as glTF, and it is better suited to games.

(Comparison chart of the formats above.)


Why I use the glTF format

Here I will only explain its advantages as a Web3D format. If you want to study glTF in detail, see glTF-Tutorials for how to render 3D models on a page extremely fast.

  1. It eliminates extra model-data conversion and speeds up parsing
  • glTF is intended as a transmission format, not just yet another new 3D data format
  • JSON describes the scene structure, which applications can parse easily. The geometry and texture data of the 3D objects are usually not embedded in the JSON; they live in external or binary files that the JSON merely links to.
  • The 3D data is stored (in binary form) in a way that most graphics APIs can use directly, with no decoding or preprocessing in the application.
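To make the last two bullets concrete, here is a hand-written sketch of a glTF file's shape. All field values are illustrative, not taken from the demo's file, and a real file also needs accessors and bufferViews:

// Minimal illustrative glTF skeleton (values assumed): the JSON carries only
// structure and a link; the heavy vertex data stays in an external binary file
const gltfJson = {
    scenes: [{ nodes: [0] }],                         // scene graph: one root node
    nodes: [{ mesh: 0 }],                             // the node points at a mesh
    meshes: [{ primitives: [{ attributes: { POSITION: 0 } }] }], // accessor index
    buffers: [{ uri: 'turbine.bin', byteLength: 44 }] // raw geometry, GPU-ready
};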

With other formats such as OBJ, a runtime application must first parse the scene structure out of each input file format and convert the numeric geometry data into whatever layout the graphics API requires. Every runtime application therefore has to ship importers and converters for all the file formats it supports, which costs memory and reduces rendering efficiency.
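For contrast, a sketch of the two loading paths in Three.js (file names assumed): OBJ needs a second loader and an extra round trip for the MTL, while glTF arrives as one self-describing asset.

import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader';
import { MTLLoader } from 'three/examples/jsm/loaders/MTLLoader';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';

const scene = new THREE.Scene();

// OBJ + MTL: fetch the materials first, bind them by hand, then fetch geometry
new MTLLoader().load('turbine.mtl', (materials) => {
    materials.preload();
    new OBJLoader().setMaterials(materials).load('turbine.obj', (obj) => scene.add(obj));
});

// glTF: one file that already contains the scene graph, materials and animations
new GLTFLoader().load('turbine.glb', (gltf) => scene.add(gltf.scene));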

  2. It makes animations easier to change
  • Tupu's wind turbine demo animates an OBJ-format model, but since OBJ carries no animation of its own, modifying the animation is troublesome
  • glTF animations are very convenient to modify; I go into more detail below.





Changing the glTF 3D model's animation

Tupu's blade speed is fixed. I wanted it driven by the scene, changing with the wind speed data from the back end; in other words, the faster the wind, the faster the blades turn, which means changing the model's animation.


How does glTF animate

glTF 3D models can carry their own animation (very different from OBJ, which cannot be animated).

For how glTF stores animations and how it interpolates them, see glTF format details (example: a simple animation) and glTF format details (animation).

If you don't want to dig deeper, you can skip those two links. This article only covers how to change the animation after Three.js's loader has loaded the glTF model, focusing on the implementation rather than the theory.


Changing the model's animation

  1. First, load the animated model with GLTFLoader
import turbine from '../../assets/3d-gltf-model/turbine.glb';

loadTurbine = () => {
    const loader = new GLTFLoader()
    // remove any previously loaded turbine before re-adding it
    if (this.scene.getObjectByName('turbine')) {
        let removeTurbine = this.scene.getObjectByName('turbine')
        this.scene.remove(removeTurbine)
    }

    loader.load(turbine, (object) => {
        this.matrixTurbine = object;
        let mesh = object.scene;
        mesh.position.set(0, -2, 0);
        this.scene.add(mesh);
    })
}
  2. Inspect the matrixTurbine object produced by the loader

The glTF model in the GitHub example comes with its own animation, so after loading, the animation-related data is stored in the properties of the matrixTurbine object.

  • The animations property of the matrixTurbine object is an array of AnimationClip objects, reusable sets of keyframe tracks that represent the animation bundled with the glTF.

Inside an AnimationClip, duration is the length of the animation, and the tracks array holds the animation's keyframe tracks, i.e. key-value pairs mapping each moment within the clip to the pose of the corresponding part. (Because the turbine model has both a transparent and an opaque display mode, the tracks array here contains two QuaternionKeyframeTrack objects.) Replaying the keyframe tracks over and over produces a continuous animation.

  • The times and values attributes of a QuaternionKeyframeTrack

The times attribute of a QuaternionKeyframeTrack is an array of keyframe times; its values attribute is an array of values corresponding to the entries of the times array.

How should we understand this?

Let's simply print the two properties and see what they hold. We can see that times is a quarter the length of values:

console.log(this.matrixTurbine.animations[0].tracks[1].times)
console.log(this.matrixTurbine.animations[0].tracks[1].values)
// Float32Array(145)
// Float32Array(580)

Think of the values attribute as the set of poses a 3D object has to reach, and times as the moments at which each pose is reached. Here, each entry of the times array corresponds to four entries of values (one quaternion), and glTF's internal interpolation algorithm turns them into one complete, continuous animation. The underlying principle deserves an article of its own, so I will leave that for later; the intuitive picture is enough for now.
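A small sketch of that correspondence, using the arrays printed above: keyframe i of times owns four consecutive floats of values.

// times[i] is the timestamp of keyframe i; values[4i .. 4i+3] is its quaternion
const { times, values } = this.matrixTurbine.animations[0].tracks[1];
const i = 10;                                 // any keyframe index
const quat = values.slice(i * 4, i * 4 + 4);  // [x, y, z, w] reached at times[i]
console.log(`t = ${times[i]}s ->`, quat);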

  3. Change the animation speed by scaling the times array

Earlier we mentioned the clip's duration property, the length of one animation round; the times array is in fact a uniform partition of that duration. With duration = 6, the Float32Array(145) is [0, 0.0416666679084301, 0.0833333358168602, 0.125, ..., 5.958333492279053, 6].

To change the speed of the animation, I only need to scale the times arrays and duration in proportion to the windSpeed value sent by the back end:

// Change the turbine speed (windSpeed comes from the back end; 5 is the baseline speed)
this.matrixTurbine.animations[0].tracks[1].times = this.changeArr(object.animations[0].tracks[1].times) // speed of the transparent-blade track
this.matrixTurbine.animations[0].tracks[0].times = this.changeArr(object.animations[0].tracks[0].times) // speed of the textured-blade track
this.matrixTurbine.animations[0].duration = 6 / (windSpeed / 5)

// compress every keyframe time by the same factor
changeArr = (arr) => arr.map((a) => a / (windSpeed / 5))
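The article does not show the code that actually plays the clip; for the re-timed animation to be visible it must run through an AnimationMixer. A minimal sketch under assumed names (this.clock would be a THREE.Clock created during setup):

// Play the (now re-timed) clip through an AnimationMixer
this.mixer = new THREE.AnimationMixer(this.matrixTurbine.scene);
this.mixer.clipAction(this.matrixTurbine.animations[0]).play();

// ...and advance it once per rendered frame:
// this.mixer.update(this.clock.getDelta());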

Animating glTF models that have no animation

This example has been about changing a glTF model that already carries animation. If you run into a 3D model without one, you can try the following:

If your model does not already hold AnimationClips in its geometry's animations array, then instead of instantiating AnimationClip directly through its constructor, you can create the clips with one of its static methods: from JSON (parse), from morph target sequences (CreateFromMorphTargetSequence, CreateClipsFromMorphTargetSequences), or from animation hierarchies (parseAnimation).
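And for a rigid motion like a spinning blade, assembling a clip by hand from a keyframe track also works. A minimal sketch, assuming mesh is a loaded glTF node without animation:

// Build a 6-second full rotation around the z-axis as a QuaternionKeyframeTrack
const axis = new THREE.Vector3(0, 0, 1);
const keys = [0, Math.PI, 2 * Math.PI].map(
    (angle) => new THREE.Quaternion().setFromAxisAngle(axis, angle)
);
const track = new THREE.QuaternionKeyframeTrack(
    '.quaternion',                    // property path on the target object
    [0, 3, 6],                        // keyframe times in seconds
    keys.flatMap((q) => q.toArray())  // 4 floats (x, y, z, w) per keyframe
);
const clip = new THREE.AnimationClip('spin', 6, [track]);

// Play it exactly like a built-in clip
const mixer = new THREE.AnimationMixer(mesh);
mixer.clipAction(clip).play();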





Simulation of thunderstorms

Another requirement was to render the weather according to the back end's weather data interface. I use thunderstorm weather as the example and expand on textures, materials, and geometry along the way.


Thunderstorm weather demonstration

We can see occasional lightning, and the clouds are thick and flowing.


Implementing the gathering dark clouds

This implementation mainly involves geometry, textures, and materials.

  1. Create several randomly placed plane geometries above the turbine in 3D space, and give them a Lambert material, a non-physically-based material that only computes diffuse reflection

First, a piece of basic 3D knowledge: a Mesh is composed of a geometry and a material, and the objects of a whole 3D scene can be viewed, approximately, as a combination of meshes. A dark cloud can be one such Mesh. In the code, each cloud is a Mesh built from a geometry and a material, representing the class of triangle-based polygon meshes.

The geometry of such a Mesh can be approximated by a plane, so several PlaneBufferGeometry planes scattered randomly across the sky serve as the geometric carriers of the "dark clouds".

Since dark clouds show no specular highlights when lightning hits them, only diffuse reflection, we can use Three.js's MeshLambertMaterial to create a matte, non-shiny surface that mainly models the diffuse reflection of light.

// cloudGeo is the geometry described above; cloudMaterial is the Lambert material carrying the cloud texture, discussed below
const cloudGeo = new THREE.PlaneBufferGeometry(400, 400);
const cloudMaterial = new THREE.MeshLambertMaterial({
    map: texture,
    transparent: true
});
  2. Combine the Mesh's material with a texture, which is commonly called "mapping"

Through let cloud = new THREE.Mesh(cloudGeo, cloudMaterial), the framework's built-in projection equation and mapping function map the Mesh into texture space, the texture space looks up the color values in the configured smoke.png, and a value-transform function applies the retrieved image to change the surface appearance of the geometry. In other words, it "pastes" smoke.png onto the newly created Mesh of geometry plus material.
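Put together, the mapping step looks like this; a minimal sketch with the texture path assumed:

// Load smoke.png and "paste" it onto the plane through the material's map slot
new THREE.TextureLoader().load('assets/smoke.png', (texture) => {
    const cloud = new THREE.Mesh(
        new THREE.PlaneBufferGeometry(400, 400),
        new THREE.MeshLambertMaterial({ map: texture, transparent: true })
    );
    scene.add(cloud); // scene is the existing THREE.Scene
});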

  3. Set the positions of the cloud meshes

This code places 5 meshes at random positions in the space above the 3D turbine, which makes them look like they are "covering" it. Implementing it yourself is a good way to get familiar with Three.js's 3D coordinate system.

for (let p = 0; p < 5; p++) {
    let cloud = new THREE.Mesh(cloudGeo, cloudMaterial);
    cloud.position.set(Math.random() * 10 + 90, Math.random() * 20 + 15, -Math.random() * 50 - 80);
    // each cloud is later added to a Group; see the summary code below
}

Implementing the cloud flow

A mesh's rotation property stores how many radians the object is rotated around the x, y, and z axes. In the animate function, every cloud Mesh is rotated around the z-axis; the rest is the usual requestAnimationFrame animation routine, which I won't go into here.

function animate() {
    cloudParticles.forEach(p => { // cloudParticles is the array of cloud meshes
        p.rotation.z -= 0.002;
    });
    requestAnimationFrame(animate); // schedule the next frame
}
animate();

Implementing the thunderstorm lightning

Lightning is implemented through the lighting model. Light types in Three.js mainly include ambient light (AmbientLight), directional light (DirectionalLight), and point light (PointLight).

  1. The type of light used

The light of a point source is divergent and has no single definable direction, which resembles the way lightning glows, so we use a point light to simulate lightning.

During each animation frame, a point light may be generated at a random position above the cloud meshes. It shines on the cloud and turbine models, and the light is computed against their geometry, textures, and vertex normal vectors, recoloring a large number of vertices that then re-enter the render pipeline, simulating the effect of lightning.

wholeFlashGroup.name = 'flash'
const flash = new THREE.PointLight(0xe0ffff, 10000, 0, 2);
flash.position.set(100, 100, -110);
wholeFlashGroup.add(flash);

function animate() {
    // restrike either at random or once the current flash has peaked
    if (Math.random() > 0.90 || flash.power > 220) {
        if (flash.power < 100)
            flash.position.set(
                Math.random() * 30 + 80,
                Math.random() * 20 + 10,
                Math.random() * 3 - 100
            );
        flash.power = 50 + Math.random() * 500;
    }
    requestAnimationFrame(animate);
}

Summary: Main code for thunderstorm weather section

// Lightning cloud effect
loadFlash = () => {
    const wholeFlashGroup = new THREE.Group();
    wholeFlashGroup.name = 'flash'
    const ambient = new THREE.AmbientLight(0x555555);
    wholeFlashGroup.add(ambient);
    const directionalLight = new THREE.DirectionalLight(0xffeedd);
    directionalLight.position.set(0, 0, 1);
    wholeFlashGroup.add(directionalLight);
    const flash = new THREE.PointLight(0xe0ffff, 10000, 0, 2);
    flash.position.set(100, 100, -110);
    wholeFlashGroup.add(flash);
    this.myRef.current.appendChild(renderer.domElement);

    let loader = new THREE.TextureLoader();
    loader.load(smoke, (texture) => {
        const cloudGeo = new THREE.PlaneBufferGeometry(400, 400);
        const cloudMaterial = new THREE.MeshLambertMaterial({ // a matte material without specular highlights, using the non-physically-based Lambert model for reflectance
            map: texture,
            transparent: true
        });
        for (let p = 0; p < 5; p++) {
            let cloud = new THREE.Mesh(cloudGeo, cloudMaterial);
            cloud.position.set(
                Math.random() * 10 + 90,
                Math.random() * 20 + 15,
                -Math.random() * 50 - 80
            );
            cloud.rotation.x = 1.16;
            cloud.rotation.y = -0.12;
            cloud.rotation.z = Math.random() * 360;
            cloud.material.opacity = 0.4;
            cloudParticles.push(cloud);
            wholeFlashGroup.add(cloud);
        }
        this.scene.add(wholeFlashGroup)
        animate();
    });

    const animate = () => {
        cloudParticles.forEach(p => { // cloudParticles is the array of cloud meshes
            p.rotation.z -= 0.002;
        });
        if (Math.random() > 0.90 || flash.power > 220) {
            if (flash.power < 100)
                flash.position.set(
                    Math.random() * 30 + 80,
                    Math.random() * 20 + 10,
                    Math.random() * 3 - 100
                );
            flash.power = 50 + Math.random() * 500;
        }
        flashAnimation = requestAnimationFrame(animate); // keep the handle so the loop can be cancelled later
    }
}





glTF 3D model interaction

The first step of interaction: picking the 3D object with a mouse click

  1. Convert the mouse's screen coordinates to normalized device coordinates

Picking a 3D object to interact with is not as easy as in a 2D plane. The core difficulty is that model objects live in memory in 3D form, while the coordinates we click are 2D screen coordinates on the XY plane.

So the first step is to convert the 2D screen position of the mouse click into Three.js normalized device coordinates (the reverse of the process described in the note below). From normalized device coordinates, Three.js can cast a picking ray and pick objects in 3D space.

Note: OpenGL expects every vertex to be in normalized device coordinates (NDC) after each vertex shader run, i.e. the coordinate in each direction should lie between -1.0 and 1.0; vertices outside this range are not visible. We usually define our own coordinate range and convert to NDC inside the vertex shader. The NDC are then handed to the rasterizer, which converts them into 2D coordinates or pixels on the screen.

// Get the coordinates of the mouse click position relative to the browser window through the coordinate properties of the event object returned by the mouse event
// Then convert to standard device coordinates
this.mouse.x = (event.clientX / w) * 2 - 1;
this.mouse.y = -(event.clientY / h) * 2 + 1;
  2. Cast the picking ray and collect the intersected objects

The picking ray is like a skewer passing through 3D space, "stringing" every 3D object it intersects into an array; the first element of that array is usually the object we clicked.

Create the picking ray with raycaster.setFromCamera(mouse, camera), from the NDC mouse coordinates and the camera.

raycaster.intersectObject(equipment, true) computes the objects intersected by the ray and returns all of them, sorted by distance. Its first parameter is the Object3D to test against (use intersectObjects for an array of objects), and passing true makes the test recursive over all descendants. Everything hit is returned in the array, so if two meshes overlap at the clicked screen position, both are picked.

  3. Summary: the mouse-click picking code
function onPointerClick(event) {
    const [w, h] = [window.innerWidth, window.innerHeight];
    const { mouse, equipment, raycaster } = this;
    mouse.x = (event.clientX / w) * 2 - 1;
    mouse.y = -(event.clientY / h) * 2 + 1;
    raycaster.setFromCamera(mouse, this.camera);
    const intersects = raycaster.intersectObject(equipment, true);
    if (intersects.length <= 0) {
        return false;
    }
    const selectedObject = intersects[0].object;
    if (selectedObject.isMesh) {
        // Code related to the interaction
    }
}
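One detail the snippet leaves out: the handler only fires once it is registered. A minimal sketch, assuming the class context above:

// Register the pick handler once the renderer's canvas is in the DOM
window.addEventListener('click', (event) => this.onPointerClick(event));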

Interactive effects of 3D models

Here I mainly describe the implementation of the most important interaction effect: click on a part, the part's outer edge lights up, and a label appears showing its details.

  1. The post-processing chain: EffectComposer

There are in fact many kinds of interaction effects, but all of them require creating an EffectComposer, which produces the final visual result: the original scene is rendered first, and the additional interactive rendering effects are layered on top.

  • new EffectComposer(renderer: WebGLRenderer, renderTarget?: WebGLRenderTarget) is used to implement post-processing effects in Three.js. The EffectComposer manages the chain of post-processing passes that produce the final visual effect. The passes execute in the order they were added/inserted, and the last one is automatically rendered to the screen.
let compose = new EffectComposer(this.renderer);
compose.addPass(firstPass);  // the chain runs firstPass first to render the scene, otherwise every other mesh would be lost
compose.addPass(secondPass); // after the scene is rendered, the secondPass stage processes the result
  2. A glowing outline on a 3D part's edges

To add an outline glow to the 3D model, I used the three-outlinepass library: a RenderPass first renders the original scene, and an OutlinePass then adds the outline glow.

  • new RenderPass(scene, camera): a render pass is a single run of the render pipeline. It renders the image into an in-memory framebuffer attachment; each attachment is initialized in a block of memory at the start of the pass and may be written back to memory at the end.

  • To learn about OutlinePass, the official three.js examples (OutlinePass) demo is a good reference; I won't expand on it here.

/* import { RenderPass, EffectComposer, OutlinePass } from "three-outlinepass"; */
function outline(selectedObjects, color = 0x15c5e8) {
    const [w, h] = [window.innerWidth, window.innerHeight];
    let compose = new EffectComposer(this.renderer);
    let renderPass = new RenderPass(this.scene, this.camera);
    let outlinePass = new OutlinePass(
        new THREE.Vector2(w, h),
        this.scene,
        this.camera,
        selectedObjects
    );
    outlinePass.renderToScreen = true;
    outlinePass.selectedObjects = selectedObjects;
    compose.addPass(renderPass);  // render the scene first, otherwise everything except the outline highlight would be lost
    compose.addPass(outlinePass); // after the original scene, run the outline pass
    const params = {
        edgeStrength: 3,
        edgeGlow: 0,
        edgeThickness: 20,
        pulsePeriod: 1,
        usePatternTexture: false
    };
    outlinePass.edgeStrength = params.edgeStrength;
    outlinePass.edgeGlow = params.edgeGlow;
    outlinePass.visibleEdgeColor.set(color);
    outlinePass.hiddenEdgeColor.set(color);
    compose.render();
    this.compose = compose
}
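A caveat worth a sketch: once this.compose exists, it must take over per-frame rendering from the plain renderer, otherwise the outline disappears on the next frame. The loop below is an assumed structure, not the demo's exact code:

const renderLoop = () => {
    requestAnimationFrame(renderLoop);
    if (this.compose) {
        this.compose.render();            // the post-processing chain draws the frame
    } else {
        this.renderer.render(this.scene, this.camera); // plain render before any pick
    }
};
renderLoop();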





Conclusion

This article deliberately does not expand much on theory: 3D has too many underlying principles to cover, and doing so would break the article's flow. The relevant learning material is therefore given as links throughout; please consult it as needed.

I have always felt that 3D and VR will be a very important direction for the front end, and I hope this article helps you.