I am a front-end developer with some exposure to 3D modeling and Unity. Having recently started working with Threejs, I wrote this short summary to share my experience and understanding of 3D. This article won’t teach you much about how to use the Threejs API; instead it covers some basic 3D concepts to give readers without front-end 3D experience a feel for Threejs. Hopefully it will give those of you who are interested in Threejs some basic knowledge and make the Threejs documentation easier to read.
Coordinate system
Let’s start with the most basic concept: the coordinate system in three dimensions. A spatial coordinate system has one more axis, the z-axis, than a planar coordinate system, and any set of x, y, z values determines a point in space. There are two common spatial coordinate systems: the left-handed coordinate system and the right-handed coordinate system. Threejs uses a right-handed coordinate system, so you can picture y pointing up and x pointing right on the screen. The screen lies in the xOy plane, so the z-axis is perpendicular to the screen and points toward you; this is the right-handed coordinate system.
Figure 1 Left-handed coordinate system (left) and right-handed coordinate system (right)
Here are three kinds of coordinate systems commonly used in space:
- World coordinate system
- Object coordinate system
- Camera coordinate system
World coordinate system
Used to describe the absolute position of an object in the scene. All other coordinate systems are defined relative to the world coordinate system, and having an absolute world coordinate system makes it possible to describe the positional relationship between any two objects.
Object coordinate system
The object coordinate system is a very important concept: every object in the scene has one. It is the coordinate system of the object itself, and its origin can be anywhere relative to the object, such as the object’s center, one of its vertices, or even outside the object. The origin of the object coordinate system can be set manually by the modeler. In the figure below, a cube with side length 1 has the origin of its own coordinate system at vertex Q, so the coordinates of point P in the cube’s own coordinate system are (1, 1, 1). From this it is easy to see that the object coordinate system can describe the orientation of an object within a scene.
Figure 2 Object coordinate system
Camera coordinate system
The camera is the observer, and it is also part of the scene. The camera coordinate system can be regarded as a special object coordinate system: its xOy plane is parallel to the screen, and its z-axis corresponds to the camera’s viewing direction.
Coordinates
The xyz values that locate a point in space are its coordinates. The position of a point in space is fixed, but its coordinates depend on the coordinate system you choose. Common kinds of coordinates:
- World coordinates (relative to the world coordinate system)
- Local coordinates (relative to an object coordinate system)
World coordinates
World coordinates refer to the position of a point in the scene relative to the origin of the world coordinate system. No matter where the point is, or which other coordinate systems it belongs to, it always has a world coordinate.
Local coordinates
Local coordinates are the coordinates of a point in the coordinate system of an object. The object coordinate system can be made to coincide with the world coordinate system through rotation and translation, so there is a transformation relationship between local coordinates and world coordinates. The relationship between the world coordinate system and the object coordinate system can be understood with a simple planar example. In the figure below there is a tilted tree whose own coordinate system has its origin at the root. A leaf on the tree has coordinates (x2, y2) in the tree’s coordinate system, but (x1, y1) in the world coordinate system. The leaf has not moved; its coordinates are simply different in different coordinate systems.
Figure 3 Object coordinate system and world coordinate system
Local coordinates matter a great deal for the nesting of 3D objects. Like the DOM, 3D objects have nested relationships: the local coordinates of a child object are expressed relative to its parent object’s own coordinate system (the object coordinate system).
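As a concrete illustration, here is a minimal sketch (using the Object3D class introduced later in this article) of how a child’s local coordinates relate to its world coordinates; the values are arbitrary:
import * as THREE from "three";
const parent = new THREE.Object3D();
parent.position.set(10, 0, 0); // the parent sits at x = 10 in world coordinates
const child = new THREE.Object3D();
child.position.set(1, 2, 0); // local coordinates, relative to the parent
parent.add(child);
// Ask Threejs for the child's world coordinates: (11, 2, 0)
parent.updateMatrixWorld(true); // make sure world matrices are up to date (the renderer normally does this)
const worldPos = new THREE.Vector3();
child.getWorldPosition(worldPos);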
Scenes
All 3D objects need to be placed in a scene. A scene can be thought of as an empty, infinitely large container (game scenes are finite only because of invisible walls 😃) that contains nothing but an invisible world origin and the three coordinate axes. You can place any objects, lights, and cameras in the scene. In Threejs you instantiate a scene with the Scene class, and the scene instance’s add() method adds 3D objects to it.
// threejs creates a scene
import * as THREE from "three";
const scene = new THREE.Scene();
// Add objects to the scene
scene.add(...);
There can be multiple scenes in an application, with different objects and lighting set up in each. The “loading progress” screens you see in games are actually loading different scene instances.
Geometry
Geometry is a mathematical model that describes a set of points and their topological relationships. In Threejs you can use the BufferGeometry class to create a geometry instance from a set of points. Geometry is just data, and data is not visible: an object instantiated from BufferGeometry is not a 3D object and cannot be placed in the scene by itself. Classes that inherit from Object3D can turn this geometry data into 3D objects that can be placed in the scene.
Point
A point can be described by just three values: x, y, and z. In Threejs, Vector3 represents a three-dimensional vector, and the coordinates of a point can be stored in a Vector3. A Vector3 can in fact represent any three-dimensional value, such as a coordinate, a vector, or an RGB color.
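For example, a point, a direction, and an RGB color can all be stored in Vector3 instances (the values below are arbitrary):
import * as THREE from "three";
const point = new THREE.Vector3(1, 2, 3); // a point at x = 1, y = 2, z = 3
const up = new THREE.Vector3(0, 1, 0); // a direction: straight up along +y
const red = new THREE.Vector3(1, 0, 0); // a color stored as (r, g, b)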
Line
It takes two points to define a line segment, so at least two Vector3s are needed. The array of points passed when creating a BufferGeometry in Threejs is interpreted so that each pair of adjacent points forms a connection, that is, a line segment.
Face
At least three points are needed to define a face, and the most basic face in a 3D model is the triangle. A shape enclosed by more than three points is not necessarily planar, because in space those points are quite likely not to lie in one plane. For example, four points connected in sequence may form a spatial (non-planar) quadrilateral. This can happen when you delete points during 3D modeling and is called a “broken face”.
Figure 4 Spatial quadrilateral
But any three points always lie in one plane, and multiple triangles can then be combined into other polygons. Therefore, when drawing with BufferGeometry, after every third point the triangle closes back to its starting point and the next triangle begins. For example, the following set of points describes two triangles that share edge AC, together forming a spatial quadrilateral.
// Each element of the array is a Vector3 instance; a, b, c, d do not coincide
let points = [a, b, c, a, d, c];
Curved surfaces
In 3D, a curved surface is only a concept; there are no true curved surfaces. A “surface” is a solid made of many small flat polygons that approximate the curve and are smoothed in appearance. The more finely the polygons are subdivided, the closer the result is to the surface being simulated. But more triangles cost more performance, which is why the polygon detail of a game’s models is a good indicator of its quality and hardware requirements.
Figure 5 Approximate hemispheres formed by triangles
Basic geometries built into Threejs
In practice you rarely draw complex models with BufferGeometry directly. Complex models are typically created, edited, and exported in modeling software such as 3DSMax and Blender. But Threejs does build in some basic, commonly used geometries that are handy for testing, such as:
- Plane (PlaneGeometry)
- Sphere (SphereGeometry)
- Cube (BoxGeometry)
- Cylinder (CylinderGeometry)
- Cone (ConeGeometry)
- …
Rendering the threejs built-in geometry with a wireframe shows that all the faces are made up of triangles.
Figure 6 Wireframe rendering of Threejs built-in geometries
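As a rough sketch (using the Mesh class and materials covered in the next sections), a built-in sphere can be rendered as a wireframe; the segment counts here are arbitrary and control how finely the sphere is subdivided into triangles:
// Radius 1, 16 width segments and 12 height segments: more segments, more triangles
const sphereGeometry = new THREE.SphereGeometry(1, 16, 12);
// wireframe: true draws only the triangle edges
const wireMaterial = new THREE.MeshBasicMaterial({ color: 0xffffff, wireframe: true });
const wireSphere = new THREE.Mesh(sphereGeometry, wireMaterial);
scene.add(wireSphere); // assumes a scene created as shown earlier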
3D objects
Only 3D objects can be placed in a scene, and all 3D objects in Threejs inherit from Object3D, so they share the same basic properties, such as:
- Position (coordinates)
- Rotation
- Scale
- Up (positive y direction of object coordinate system)
- Parent (parent object)
- Children (Array of child objects)
- …
An empty object
Object3D is a class that can itself be instantiated. If you put an instantiated Object3D in the scene you won’t see anything, because it is an empty object, but you can still change its coordinates or rotate it. This is not useless: empty objects are handy for grouping 3D objects. Because 3D objects can be nested, an empty object can serve as the parent of several objects, which effectively groups them. When the parent is translated or rotated, the children keep their positions relative to the origin of the parent’s own coordinate system, so the group moves as a single whole.
const empty = new THREE.Object3D();
// ... initialization of box and ball omitted
// Put box and ball under empty as its children
empty.add(box, ball);
// Change the parent object's coordinates
empty.position.set(1, 1, 1);
Threejs also has a Group class for grouping; it behaves the same as Object3D and is essentially a more semantic name for it. Thinking in terms of Object3D simply makes the parent-child nesting of 3D objects easier to understand.
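A minimal sketch of the same grouping written with Group instead of Object3D (box and ball as in the previous snippet):
const group = new THREE.Group();
group.add(box, ball); // same nesting behavior as the Object3D example above
group.position.set(1, 1, 1); // moving the group moves both children together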
Common 3D objects
Many classes in Threejs inherit from Object3D, and most of them are instantiated from two kinds of objects: a geometry and a material. Together the geometry and the material determine the properties of the 3D object. The ones most commonly used to render objects in a scene are:
- Points: creates points in space from the point information of a geometry
- Line: creates lines in space from the edge information of a geometry
- Mesh: creates faces in space from the face information of a geometry
Given the same BufferGeometry instance and a corresponding material, these three constructors produce different renderings: the points, the edges, and the faces of the geometry are rendered respectively.
// Create triangle geometry
const triangleGeometry = new THREE.BufferGeometry().setFromPoints([
  new THREE.Vector3(1, 0, 0),
  new THREE.Vector3(0, 1, 0),
  new THREE.Vector3(-1, 0, 0),
  new THREE.Vector3(1, 0, 0),
]);
// Points
const trianglePoints = new THREE.Points(
  triangleGeometry, // geometry
  new THREE.PointsMaterial({ color: 0xffffff, size: 0.05 }) // material
);
// Line
const triangleLines = new THREE.Line(
  triangleGeometry,
  new THREE.LineBasicMaterial({ color: 0xffffff })
);
// Mesh (faces)
const trianglePlane = new THREE.Mesh(
  triangleGeometry,
  new THREE.MeshBasicMaterial({ color: 0xffffff, side: THREE.DoubleSide })
);
Figure 7 Three types of object created from the same triangle geometry
The Mesh is the most common 3D object; most 3D models in a scene are meshes. A Mesh can be understood as a skin stretched over the geometry: a shell that has exactly the geometry’s shape and is made up entirely of triangles. Geometry data plus a material yields a 3D object that is visible in the rendered result.
Figure 8 Creating a 3D object from a mesh (shown with a wireframe render)
Materials
As mentioned earlier, instantiating a renderable, visible 3D object requires geometry data and a material. The geometry determines the shape of the object, while the material determines properties such as its color (diffuse color), reflected color, glossiness, transparency, and refractive index. When an object is rendered, shape and material together determine the final result.
Threejs has many materials, all of which inherit from Material. Some materials are unaffected by lights, such as MeshBasicMaterial, which only ever shows its flat intrinsic color. Others are affected by lights, such as MeshPhysicalMaterial, which computes lighting based on physical principles. Different materials expose different properties, and the same material can be customized into much richer effects by adjusting them.
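For example, here is a rough sketch of one unlit and one lit material side by side; the color and the roughness/metalness values are arbitrary:
// Not affected by lights: always shows its flat intrinsic color
const basicMat = new THREE.MeshBasicMaterial({ color: 0x44aa88 });
// Affected by lights: renders black unless a light is present in the scene
const physicalMat = new THREE.MeshPhysicalMaterial({
  color: 0x44aa88,
  roughness: 0.4, // how rough or glossy the surface is
  metalness: 0.1, // how metallic the surface looks
});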
Single-sided and double-sided materials
We know that solids are made up of many faces, but they are really just shells. For performance, materials by default render only one side of each face; the other side is transparent (you can see this effect when a game character clips into an object). In Threejs, a material’s side property controls which side of a face it renders. The options are:
- THREE.DoubleSide (both sides)
- THREE.FrontSide (front side, the default)
- THREE.BackSide (back side)
// Instantiate a physical material that renders only the front side
const fSideMat = new THREE.MeshPhysicalMaterial({
  color: 0xeeeeee,
  side: THREE.FrontSide,
});
// Instantiate a physical material that renders only the back side
const bSideMat = new THREE.MeshPhysicalMaterial({
  color: 0xeeeeee,
  side: THREE.BackSide,
});
Double-sided materials are useful when both sides of a non-closed shape can be observed in a scene, such as a character’s thin sheet-like clothing, leaves, or grass. If a character only ever moves inside a room, a back-side material can be used on the box that forms the room: the interior is then visible from outside the box, while from inside only the material’s rendering of the walls is seen. In the figure below, a red ball is placed inside each of two boxes, rendered with a front-side and a back-side material respectively.
Figure 9 Front-side material (left) and back-side material (right)
Texture mapping
A map is a property of a material. When a material uses a bitmap as its map, the colors in the bitmap are projected onto the faces of the object so that the image appears to be stuck onto it. Instead of giving a material a single color, a map can assign different colors to different parts of a face, producing much richer effects.
UV mapping
As a map, the image is a rectangle with its own coordinate system: U runs horizontally and V runs vertically, and both range from 0 to 1. The bottom-left corner has coordinates (0, 0) and the top-right corner (1, 1); even if the image is not square, the top-right corner is still (1, 1). By varying U and V from 0 to 1, every part of the image can be sampled.
In Threejs, the default texture mapping for ordinary materials aligns the geometry’s coordinates with the texture coordinates. As shown below, the bottom-left and top-right corners of the rectangular map are aligned with the corners of the square mesh, so the rectangle is squeezed into the square as its colors are mapped onto the plane.
Figure 10 UV mapping
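To see these UV coordinates in Threejs, you can inspect the uv attribute that the built-in PlaneGeometry already carries; a small sketch, with an arbitrary plane size:
// Every vertex of the plane has a (u, v) pair in the range 0..1
const planeGeo = new THREE.PlaneGeometry(2, 1);
const uv = planeGeo.getAttribute("uv");
for (let i = 0; i < uv.count; i++) {
  // e.g. the bottom-left vertex maps to (0, 0) and the top-right to (1, 1)
  console.log(uv.getX(i), uv.getY(i));
}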
In Threejs a map is an instance of Texture. You can use TextureLoader to load a bitmap as a map and then adjust its parameters, including its color, mapping mode, and how many times it repeats horizontally and vertically.
const texture = new THREE.TextureLoader().load("some/bitmap.png");
// Repeat in the horizontal direction
texture.wrapS = THREE.RepeatWrapping;
// Repeat in the vertical direction
texture.wrapT = THREE.RepeatWrapping;
// Set the number of horizontal and vertical repetitions
texture.repeat.set(4, 4);
Figure 11 Different parameters of the React Logo map
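To make the loaded map visible it has to be assigned to a material’s map property; a minimal sketch (the plane size and material choice here are illustrative):
const mappedMaterial = new THREE.MeshBasicMaterial({ map: texture });
const mappedPlane = new THREE.Mesh(new THREE.PlaneGeometry(4, 4), mappedMaterial);
scene.add(mappedPlane);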
Box and cylindrical projections
UV coordinates are the default mapping for ordinary Threejs materials, but Threejs also supports box and cylindrical projections. Taking a cylinder as an example, the projection can be understood as follows:
- Unfold the side of the cylinder and lay the texture over it
- Roll the side back into a cylinder (every point on the side is now colored)
- Place the geometry to be projected inside the cylinder
- Project the colors on the cylinder’s side inward onto the model along the surface normals
Complex mapping
Complex mapping also needs to be done in dedicated modeling software, which can map each face of the model precisely. The UV-unwrap feature in modeling software splits the model into many flat pieces, and the mapping is completed by manually aligning these pieces with regions of the map.
Figure 12 Unwrapping a vase model in 3DSMax and manually aligning the texture
Other uses for maps
Simply mapping colors is the most common use of maps, but some materials support other kinds of maps as well:
- AlphaMap (transparency map)
- NormalMap (normal map)
- …
For example, a normal map does not color the object; instead it is used to fake bumps and dents on a surface. Walls, floors, and tiled roofs can be modeled as flat planes and still show a relief effect through a normal map, which simplifies modeling and improves application performance.
Figure 13 Normal map
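A rough sketch of a flat wall that uses a normal map together with a color map; the texture paths are placeholders and the material choice is an assumption:
const loader = new THREE.TextureLoader();
const brickColor = loader.load("textures/brick_color.png"); // placeholder path
const brickNormal = loader.load("textures/brick_normal.png"); // placeholder path
// A flat plane that still shows bumpy bricks under lighting
const wallMaterial = new THREE.MeshStandardMaterial({
  map: brickColor,
  normalMap: brickNormal,
});
const wall = new THREE.Mesh(new THREE.PlaneGeometry(4, 2), wallMaterial);
scene.add(wall);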
Lights
Lights are used to illuminate the scene. Like other 3D objects, lights can be placed in the scene and have their coordinates set so they shine from the desired position. Some materials respond to light: a physical material is not visible at all without light, and a Phong material shows no highlights. The lighting effect can be changed through the light and its parameters, such as intensity and color. Commonly used lights:
- Directional light (DirectionalLight): simulates sunlight (only the direction setting takes effect)
- Point light (PointLight): simulates an artificial light source such as a light bulb (only the coordinate setting takes effect)
- Spotlight (SpotLight): a stage-style spotlight (both the light's coordinates and its target point can be set to change the direction of illumination)
// Create a red point light source
const redLight = new THREE.PointLight(0xff0000, 1);
scene.add(redLight);
// Place the point light above the center of the scene
redLight.position.set(0, 3, 0);
Below is the rendering of a Phong material and a physical material, each lit by a red point light above and a blue point light below.
Figure 14 Phong material (left) and physical material (right) under lighting
Cameras
A camera is a special object in the scene: its coordinates and rotation can be set, and it can follow another object as that object’s child. The camera does not appear in the render result, but it is essential to rendering, since it is one of the inputs to the render. Like a human eye, the camera’s position in the scene, its angle, and its own parameters all affect the final image. Two kinds of Threejs cameras are commonly used:
- Perspective camera (like the human eye: distant objects appear smaller, near objects larger)
- Orthographic camera (no perspective shrinking with distance; similar to a parallel projection)
Create a perspective camera looking at the world origin from the coordinates (0, 0, 2):
const width = 800;  // width of the rendered image
const height = 400; // height of the rendered image
const pCamera = new THREE.PerspectiveCamera(75, width / height, 0.1);
pCamera.position.set(0, 0, 2);
pCamera.lookAt(0, 0, 0);
Figure 15 Perspective camera viewing the cube wireframe
Create an orthographic camera looking at the world origin from (0, 0, 2):
const width = 800;
const height = 400;
const oCamera = new THREE.OrthographicCamera(
  -4,
  4,
  (height / width) * 4,
  (-height / width) * 4,
  0,
  100
);
oCamera.position.set(0, 0, 2);
oCamera.lookAt(0, 0, 0);
Figure 16 Orthographic camera viewing the cube wireframe
A perspective camera is used most of the time when viewing or rendering a scene, but orthographic cameras are also useful, for example in 3D modeling, where modelers align models in an orthographic view.
Figure 17 3DSMax orthographic view and perspective view
Rendering
The renderer is a complex piece of machinery. Its job is to take the scene, objects, camera, and other data as inputs and ultimately output the color of every pixel on the screen. In Threejs the WebGLRenderer class is used to create a WebGL renderer.
const width = 800;
const height = 400;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(width, height);
// renderer.domElement is the canvas the renderer outputs its result to
document.body.appendChild(renderer.domElement);
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);
// Tell the renderer which scene to render and with which camera
renderer.render(scene, camera);
Summary: the key points of Threejs
At least three elements are needed to complete a basic render:
- A WebGLRenderer
- A Camera
- A Scene (it can even be empty)
Rendering visible objects requires placing them in the scene, and a 3D object consists of two elements:
- A geometry, which determines the shape of the object
- A material, which determines its color, refraction, and so on
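Putting these pieces together, here is a minimal end-to-end sketch built only from the snippets above (an unlit wireframe material is used so that no light is required):
import * as THREE from "three";
// Renderer
const renderer = new THREE.WebGLRenderer();
renderer.setSize(800, 400);
document.body.appendChild(renderer.domElement);
// Scene and camera
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, 800 / 400, 0.1, 1000);
camera.position.set(0, 0, 2);
camera.lookAt(0, 0, 0);
// A visible 3D object: geometry + material
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x44aa88, wireframe: true })
);
scene.add(cube);
// Render once
renderer.render(scene, camera);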