Topic of this issue: as the metaverse explodes, how will WebGL and its derivative technologies take part in building it?
I. Three.js learning path
1. Three.js basics
Since we're simulating the real world, the two must have something in common!
- We exist in the real universe; the models exist in theirs. So first, initialize a universe for them:
import * as THREE from "three";
// The main scene
// In Vue, keep variables like this non-reactive (declared outside data()) for better performance
const scene = new THREE.Scene();
The scene here is our three-dimensional world. It can hold everything, with one special exception: time!
So how do we give the models' world a time dimension? That depends on what role time plays in our universe, so that we can mimic it in the computer.
What happens when time stands still? (Gentlemen, please go out and turn right; there is nothing for you here.)
If consciousness stands still too, then everything is simply eternal. But if the mind stays active and tries to command the body, yet nothing can move until whatever rules time lets it flow again, one Planck time after another, what would that be like? What plays that role here?
The renderer.
The renderer renders the scene, and the scene contains the light sources, the camera, the models, the controls and so on; together these make up the models' world. So initialize one, and let the models move frame by frame.
initRender() {
  // Which DOM node should the renderer's output be mounted to?
  container = this.$refs.container;
  // Create the renderer (kept global)
  // WebGLRenderer is the most commonly used renderer; there are also the CSS3D-based
  // CSS3DRenderer and the canvas-based CanvasRenderer
  renderer = new THREE.WebGLRenderer({
    antialias: true, // anti-aliasing
    precision: "highp", // shader precision
    logarithmicDepthBuffer: true, // reduces flickering where models interleave (z-fighting)
    preserveDrawingBuffer: true // whether the canvas drawing buffer can be extracted
    // shadowMap: true, // enable shadow rendering (heavy to compute)
    // alpha: true // whether the canvas is transparent
  });
  // Background color of the display area
  renderer.setClearColor(0x000000, 0);
  // Size of the display area (full screen)
  renderer.setSize(window.innerWidth, window.innerHeight);
  // Set the pixel ratio; if a mobile device drops frames, try removing this
  renderer.setPixelRatio(window.devicePixelRatio);
  // Styles like these keep the cssRenderer and this renderer from covering each other
  // renderer.domElement.style.position = "absolute";
  // renderer.domElement.style.top = 0;
  // renderer.domElement.style.zIndex = "1";
  // Mount it
  container.appendChild(renderer.domElement);
},
"It's so dark. Where are we, Vi?" asked Jinx, fingering her braids.
"The great ones in the light can't see us in the dark. We have to go find the light." Vi took Jinx by the hand.
"This time... this time I can help you!" Jinx looked in Vi's direction.
"I've always believed in you. Let's break the night together!"
Well, let’s, uh, initialize a light source for them
// Initialize the lights
initLight() {
  // Ambient light is why a backlit corner of your room, which should be dark, still gets some light
  ambientLight = new THREE.AmbientLight(0xf1e2e2, 1.25);
  // Add it to the world; without this it won't take effect
  scene.add(ambientLight);
  // Point light: white-ish light, intensity 0.6
  pointLight = new THREE.PointLight(0xbfbfbf, 0.6);
  // Setting the position moves the light source
  pointLight.position.set(0, 40, 80);
  scene.add(pointLight);
  // There are also spotlights, directional lights, etc.
},
Canyon Zoo. "Vi! Look, look, doesn't that little monkey look like one of my little monkey bombs?" ... "Vi? Why do you keep staring at the sky?" Jinx looked up too, curious about the scattered clouds swaying in the wind. "I have this feeling of being manipulated. Strange, but real. Like... a doll?" Vi mused. "No. You are you. You are Vi." Jinx hugged Vi's arm as she said it, and Vi hugged Jinx back. "Mhm! We'll always be sisters. And Jinx is Jinx."
How do we watch over everything? After all, we're playing third-person god here, and a god needs a cheat, right?
So, add a camera:
// Load the camera and camera controls
initCamera() {
  // A perspective camera; its counterpart is the orthographic camera
  // The parameters are simple; look them up for the details
  camera = new THREE.PerspectiveCamera(
    30, // field of view, mostly 30-90; games may expose it as a setting
    window.innerWidth / window.innerHeight, // aspect ratio
    1, // near plane: how close to the camera do we still render?
    100000 // far plane: how far from the camera do we stop rendering?
  );
  // Camera position
  camera.position.x = 697.1343985659603; // where do all these decimals come from? Scary, right?
  camera.position.y = 1784.0457888299613; // how to get them quickly?
  camera.position.z = 1566.095679557605; // see the debugging section below
  // "Up" is the positive Y direction (1 means up); keep Y positive, e.g. after
  // adding a ground plane, don't dive underground
  camera.up.x = 0;
  camera.up.y = 1;
  camera.up.z = 0;
  // Which point the camera looks at (the origin here)
  camera.lookAt(new THREE.Vector3(0, 0, 0));
  // Mouse controls: left button orbits, right button pans, wheel zooms
  // import { OrbitControls } from "three/examples/jsm/controls/OrbitControls"; // camera controls
  controls = new OrbitControls(camera, renderer.domElement);
  // Initial focus point of the camera
  controls.target = new THREE.Vector3(0, 0, 0);
  // Whether zooming is allowed
  controls.enableZoom = true;
  // Whether to rotate automatically
  controls.autoRotate = false;
  // Maximum vertical rotation angle
  controls.maxPolarAngle = Math.PI / 2;
  // Whether right-button dragging (panning) is enabled
  controls.enablePan = true;
  // Rotation speed
  controls.rotateSpeed = 1;
  // Damping, i.e. whether the controls have inertia; requires controls.update() in the render loop
  controls.enableDamping = true;
  // Minimum distance between camera and target
  controls.minDistance = 1;
  // Maximum distance between camera and target
  controls.maxDistance = 20000;
},
If you're confused at this point, let's straighten it out. We created the Scene to hold the world's elements. What are those? Light sources, the camera, the camera controls, and the models. First we initialized a Renderer so the world can move. Then we added lights: ambient light so we can see the elements at all, and a point light (or other light sources) to make the models look three-dimensional. Then we added the Camera and Controls so the mouse can inspect the models from every angle. At this point the main scene is actually built, but the most important step remains: rendering it!
The key function is requestAnimationFrame(). It makes the browser execute the next animation frame at the optimal moment; to keep the frames our renderer outputs coming, we call it recursively, which outputs the images while also being good for performance.
startAnimate() {
  // Update the camera controls (needed for damping)
  controls.update();
  // Update the Tween animation frames
  if (TWEEN) {
    TWEEN.update();
  }
  // The render call can live in its own function
  this.render();
  // Keep the ID so the loop (it does consume resources) can be cancelled with cancelAnimationFrame
  animationID = requestAnimationFrame(this.startAnimate); // recursive call
},
render() {
  // Render the scene
  renderer.render(scene, camera);
  // If there are two scenes, the CSS renderer needs updating too
  if (rendererCSS) {
    rendererCSS.render(cssScene, camera);
  }
},
At this point our picture should look something like this
Nothing but a scene background color (0x000000)
What to do now? Who do I even ask? I was panicking too.
No hurry. First we add a skybox to calm the nerves (search for what a skybox is if the term is new). All the functions above can be wrapped up in one:
// Initialize the 3D scene
init3dScene() {
  // Create the groups
  this.whole = new THREE.Group();
  this.yuanquGroup = new THREE.Group();
  this.yuanquBuildGroup = new THREE.Group();
  // Initialize the scene renderer
  this.initRender();
  // Initialize the skybox
  this.initScene();
  // Initialize the camera and camera controls
  this.initCamera();
  // Load the lights
  this.initLight();
  // View control, covered later
  this.animateVis();
  // Start rendering
  this.startAnimate();
  // Load the models, covered later
  this.initMainScene();
},
// Load the auxiliary scene elements
initScene() {
  // Updating object matrices costs computation. For a static scene that rarely
  // changes, turn off matrixAutoUpdate and update the matrix manually when needed
  scene.matrixAutoUpdate = false;
  // Axes helper for the coordinate system
  var axes = new THREE.AxesHelper(20); // named AxisHelper in older three.js versions
  scene.add(axes);
  // Set the background
  // First way to add a skybox
  /** skyBox: [
      "skybox/dark-s_px.jpg", "skybox/dark-s_nx.jpg",
      "skybox/dark-s_py.jpg", "skybox/dark-s_ny.jpg",
      "skybox/dark-s_pz.jpg", "skybox/dark-s_nz.jpg"
    ], */
  scene.background = new THREE.CubeTextureLoader().load(this.skyBox);
},
Success!
Let's see the result! (Anyone recommend a good GIF screen-recording tool?)
2. Three.js learning materials
Share some of my favorites
- A zero-to-one Three.js primer
- The Three.js official site
- Experience the ultimate web 3D application (YYDS)
- Three.js image particle explosion effect
- Three.js animated-effect approaches
- A very detailed glTF format explainer
- Well, it's a cool 3D blog
- 3D real-estate display with Three.js
- A good blog post on image/text canvas textures
- Embedding a 3D model in a web page
3. Three.js debugging
Straight to the conclusion: add a Three.js debugging extension to Google Chrome.
The high-precision decimals mentioned above are just camera.position printed to the console. When you need the parameters of a particular view, log them inside render(), or use a small helper like the sketch below.
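For quick checks I use a throwaway listener like this (my own sketch, assuming the camera and controls globals from earlier): press "p" to dump the current view instead of logging every frame.
// Debug helper: press "p" to print the current camera pose
window.addEventListener("keydown", (e) => {
  if (e.key === "p") {
    console.log("camera.position:", camera.position);
    console.log("controls.target:", controls.target);
  }
});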
II. 3D model loading and CSS3DRenderer
To load a 3D model, the first step is adding the corresponding loader file. These JS files live in the examples folder of the official repository.
Here we take a glTF file as the example to demonstrate 3D model loading.
First, let's look at the glTF format.
My understanding: glTF is a JSON file that describes the 3D model and where its texture resources live. The JSON holds the configuration; the binary data can be base64-encoded inline, stored in a separate .bin file, or referenced as external image files (pictured: the .bin file and the images).
Now that we know the glTF format, how do we bring it into the scene?
OK, straight to the finished code (here we use a local glTF resource file in the inline format).
Waiter! Bring the wine!
// loader.js
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader"; // .gltf file loader
const loader = {
  // Suggestions on the code style are welcome; I'm a front-end beginner with one year of experience
  // Load glTF files: models only, no animation
  loadGLTFOnly(objs, callback) {
    let loader = new GLTFLoader();
    objs.forEach((one) => {
      // Point this at wherever the glTF files are hosted (local server / CDN)
      loader.load("http://your-server/CDN/" + one.gltfUrl, function (gltf) {
        let model = gltf.scene;
        model.name = one.name;
        console.log(model.name + " done!");
        // Hand each loaded model back so the caller can add it to a group
        callback(model);
      });
    });
  }
};
export default loader;
Is it really that simple? Well, our predecessors wrote the loader and the tutorials for us; we only have to use them. About the concept of a group mentioned above: say you load a mannequin and split it into groups, with the ears, nose and mouth in the head group and the left and right hands in a hand group. If the boss likes faceless people, you remove the ears, nose and mouth from the head group; if the boss says "off with its head", you remove the whole head group, heh heh. If grouping feels unnecessary, just scene.add(yourModels) directly; otherwise group.add(yourModels), as sketched below.
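Here is a minimal sketch of that idea; mannequin, earMesh, noseMesh and mouthMesh are made-up names for illustration:
const mannequin = new THREE.Group();
const headGroup = new THREE.Group();
headGroup.name = "head";
headGroup.add(earMesh, noseMesh, mouthMesh); // assume these meshes exist
mannequin.add(headGroup);
scene.add(mannequin);
// The boss likes faceless people: strip the facial features
headGroup.remove(earMesh, noseMesh, mouthMesh);
// The boss wants the head gone: remove the whole group, looked up by name
mannequin.remove(mannequin.getObjectByName("head"));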
Now let's add two models on top of what we have and check the effect. Is it still pitch black?
Add an initMainScene() function and call it from init3dScene():
// Load the main scene models
initMainScene() {
  let elements = [
    {
      gltfUrl: "GLTF/oversized-ground-field.gltf",
      name: "cdcd",
      level: "Park"
    },
    {
      gltfUrl: "GLTF/surrounding-buildings.gltf",
      name: "buildings",
      level: "Park"
    }
  ];
  loader.loadGLTFOnly(elements, model => {
    // The park-level models go into this group
    this.yuanquGroup.add(model);
  });
  // Keeping all models under one group makes them easy to manage
  this.whole.add(this.yuanquGroup);
  // Don't miss this step
  scene.add(this.whole);
},
Success! Not black!
We successfully added 3D models and brought light to their world!
CSS3DRenderer isn't used that much; the most stunning demo is the periodic table on the official site.
Three.js CSS3DRenderer: the cool periodic table
Using it in a project alongside the main scene's WebGL renderer has some indescribable side effects: the CSS3D scene and the main scene don't occlude each other, so a CSS3D model always sits in front of the main scene's models. I haven't solved that problem yet.
So I try to stick to one renderer and one scene where possible. How is CSS3D actually used here?
You basically feed a DOM node into the mouth of the cssRenderer, and it takes care of the rest. First, grab the DOM node:
let node = document.getElementById(elementId).cloneNode(true);
Then wrap it with CSS3DSprite or CSS3DObject:
cssObject = new CSS3DSprite(node); // the sprite version always faces the camera
// or
cssObject = new CSS3DObject(node);
Then just add the cssObject to the cssScene. I won't go into every detail, but a fuller sketch follows.
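For reference, a minimal two-renderer setup could look like this; it is my own sketch, reusing the container and camera from earlier, with a made-up "panel" element id. The render() function shown before already calls rendererCSS when it exists.
import { CSS3DRenderer, CSS3DObject } from "three/examples/jsm/renderers/CSS3DRenderer";
const cssScene = new THREE.Scene();
rendererCSS = new CSS3DRenderer();
rendererCSS.setSize(window.innerWidth, window.innerHeight);
// Stack the CSS3D layer on top of the WebGL canvas; pointer-events: none lets
// mouse events fall through so OrbitControls on the canvas keeps working
rendererCSS.domElement.style.position = "absolute";
rendererCSS.domElement.style.top = "0";
rendererCSS.domElement.style.pointerEvents = "none";
container.appendChild(rendererCSS.domElement);
// Wrap a DOM node and place it in 3D space
let node = document.getElementById("panel").cloneNode(true);
let cssObject = new CSS3DObject(node);
cssObject.position.set(0, 100, 0);
cssObject.scale.set(0.25, 0.25, 0.25); // CSS pixels are huge in world units
cssScene.add(cssObject);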
III. The Earth entry animation
Have to show you the effect first, right?
It's actually very simple: mainly camera animation.
animation.js
import { TWEEN } from "three/examples/jsm/libs/tween.module.min.js"; // animation library
import * as THREE from "three";
const animate = {
  // Move the camera to implement roaming and similar animations
  animateCamera(camera, controls, newP, newT, time = 2000, callBack) {
    var tween = new TWEEN.Tween({
      x1: camera.position.x, // camera x
      y1: camera.position.y, // camera y
      z1: camera.position.z, // camera z
      x2: controls.target.x, // control target x
      y2: controls.target.y, // control target y
      z2: controls.target.z // control target z
    });
    tween.to(
      {
        x1: newP.x,
        y1: newP.y,
        z1: newP.z,
        x2: newT.x,
        y2: newT.y,
        z2: newT.z
      },
      time
    );
    tween.onUpdate(function (object) {
      camera.position.x = object.x1;
      camera.position.y = object.y1;
      camera.position.z = object.z1;
      controls.target.x = object.x2;
      controls.target.y = object.y2;
      controls.target.z = object.z2;
      controls.update();
    });
    tween.onComplete(function () {
      controls.enabled = true;
      callBack();
    });
    tween.easing(TWEEN.Easing.Cubic.InOut);
    tween.start();
  }
};
export default animate;
The GIF drops frames badly; the original animation is actually quite smooth.
The slightly tricky part of the implementation is that you have to print out each waypoint and feed the animation in as several segments; the pass-through-the-clouds part uses a package ("image-sprite": "^1.0.0"). Thanks to an open-source project by a big guy, from which I borrowed a lot; a sketch of chaining the segments follows the link below.
The big guy's original project: an imitation of Tencent's QQ X-Plan
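For illustration, chaining the segments could look like this sketch; the waypoint coordinates are made up (print your own from camera.position as described in the debugging section), and animation.js is assumed to be imported as animate.
const waypoints = [
  { p: { x: 0, y: 12000, z: 0 }, t: { x: 0, y: 0, z: 0 }, time: 2500 }, // high above the clouds
  { p: { x: 697, y: 1784, z: 1566 }, t: { x: 0, y: 0, z: 0 }, time: 2000 } // final view
];
function flyThrough(i = 0) {
  if (i >= waypoints.length) return;
  const w = waypoints[i];
  // Each segment kicks off the next one via the completion callback
  animate.animateCamera(camera, controls, w.p, w.t, w.time, () => flyThrough(i + 1));
}
flyThrough();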
IV. A first prototype of park management
In fact this is the real highlight of Web3D: from smart cities to smart parks, plus virtual museums and 3D product displays, it shows Web3D's unique value directly, being real and intuitive.
1. Model loading and scene perspective switching
glTF files should be exportable with position information, so a model has a default position after loading. You can manually adjust position.x/y/z for each campus model, but it's best to fix the relative positions when exporting the models, as sketched below.
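A small sketch of the manual adjustment, reusing loadGLTFOnly() from earlier; the offsets here are placeholders found by trial and error:
loader.loadGLTFOnly(elements, (model) => {
  if (model.name === "buildings") {
    model.position.set(120, 0, -60); // hand-tuned offsets
    model.rotation.y = Math.PI / 2; // rotate if the export axes differ
  }
  this.yuanquGroup.add(model);
});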
Image and other resource loading can be managed with THREE.LoadingManager, a general-purpose loading manager:
// Resource loading control
manager = new THREE.LoadingManager();
manager.onProgress = function (item, loaded, total) {
  console.log(((loaded / total) * 100).toFixed(2) + "%");
};
// ...
// The loaders accept a manager object as a parameter
let loader = new GLTFLoader(manager);
As for scene switching, my current solution is to keep adding and removing members of a group, and the corresponding view refreshes; see the sketch below. There are a variety of controllers to choose from; I found a picture (source). The one used here is the orbit controller (OrbitControls), which implements the basic drag and zoom events; other controllers can be chosen to fit the scene.
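A sketch of the swap, using the groups created in init3dScene(); the camera coordinates are placeholders and animation.js is again assumed to be imported as animate:
switchToBuildingView() {
  // Drop the park-level models and swap in the building-level ones;
  // the view refreshes on the next rendered frame
  this.whole.remove(this.yuanquGroup);
  this.whole.add(this.yuanquBuildGroup);
  // Fly the camera over, reusing animateCamera() from above
  animate.animateCamera(camera, controls,
    { x: 50, y: 300, z: 260 }, // placeholder position
    { x: 0, y: 0, z: 0 }, // placeholder target
    1500, () => {});
},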
2. Sprite models
After loading the models we usually need to display some information inside the 3D scene, and that's where sprite models come in.
In short, a sprite is a model that always faces the camera: we only ever see its front, never its back. Three.js supports this out of the box:
let texture = new THREE.TextureLoader().load(item.url);
let spriteMaterial = new THREE.SpriteMaterial({
  map: texture, // the sprite's background texture
  opacity: 1
});
let sprite = new THREE.Sprite(spriteMaterial); // create the sprite model
// Set the basic properties
sprite.scale.set(item.scale.x, item.scale.y, item.scale.z);
sprite.position.set(item.position.x, item.position.y, item.position.z);
sprite.name = item.name;
scene.add(sprite); // don't forget to add it to the scene
There is also CSS3DSprite, which converts a CSS/DOM node into a 3D sprite model; it's easy to use, as sketched below.
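A minimal sketch (the element id is made up; cssScene is the CSS3D scene from the previous section):
let labelNode = document.getElementById("building-label").cloneNode(true);
let label = new CSS3DSprite(labelNode); // always faces the camera
label.position.set(0, 120, 0); // hover above the model
label.scale.set(0.1, 0.1, 0.1); // CSS pixels are huge in world units
cssScene.add(label);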
3. Canvas to texture
From the most basic canvas of text and images converted into a texture map, to ECharts charts as textures, to <video> tags as video textures, and on to FLV/RTSP live or surveillance streams as video textures.
Text and pictures:
First of all, thanks to the author of bookmark #9 in the list above; straight to the code:
// Get a canvas texture
getTextCanvas(w, h, textArr, imgUrl) {
  // Canvas width and height should be powers of 2 for textures
  let width = w;
  let height = h;
  // Create a canvas element and get its 2D context
  let canvas = document.createElement('canvas');
  let ctx = canvas.getContext('2d');
  canvas.width = width;
  canvas.height = height;
  // Set the style
  ctx.textAlign = 'start';
  ctx.fillStyle = 'rgba(44, 62, 80, 0.65)';
  ctx.fillRect(0, 0, width, height);
  // Add the background image asynchronously, otherwise the texture may be built
  // before the image loads, leaving a blank
  return new Promise((resolve, reject) => {
    let img = new Image();
    img.src = require('@/assets/images/' + imgUrl);
    img.onload = () => {
      // Clear the canvas
      ctx.clearRect(0, 0, width, height);
      // Draw the image
      ctx.drawImage(img, 0, 0, width, height);
      ctx.textBaseline = 'middle';
      // The text needs manual layout, hence the loop; each item is a custom text
      // object holding content, font size, color, position, etc.
      textArr.forEach(item => {
        ctx.font = item.font;
        ctx.fillStyle = item.color;
        ctx.fillText(item.text, item.x, item.y, 1024);
      });
      resolve(canvas);
    };
    // Image failed to load
    img.onerror = (e) => {
      reject(e);
    };
  });
},
The main point is the asynchronous handling of the image load, so the canvas is complete before it is rendered. The function can be consumed directly:
this.getTextCanvas(1024, 512, element.textArr, element.imgUrl).then((canvas) => {
  let textMap = new THREE.CanvasTexture(canvas); // the key step
  let textMaterial = new THREE.MeshLambertMaterial({
    // Lambert is a diffuse, paper-like material; its specular, metal-like counterpart is Phong
    map: textMap, // set the texture map
    transparent: true,
    side: THREE.DoubleSide // render both sides
  });
  let planeGeometry = new THREE.PlaneBufferGeometry(planeWidth, planeHeight); // a flat plane geometry
  let planeMesh = new THREE.Mesh(planeGeometry, textMaterial);
  scene.add(planeMesh);
});
The result is this
ECharts charts:
Even less code, so easy:
// Get an ECharts chart canvas (assumes: import * as echarts from "echarts";)
getEchart(w, h, option) {
  let canvas = document.createElement('canvas');
  canvas.width = w;
  canvas.height = h;
  return new Promise((resolve, reject) => {
    let myChart = echarts.init(canvas, 'dark');
    try {
      // If you have used ECharts, you know what this option is
      myChart.setOption(option, false);
      myChart.on('finished', () => {
        resolve(canvas);
      });
    } catch (e) {
      reject(e);
    }
  });
},
Results:
I know what you’re asking!
Interaction is impossible, because the chart has been baked into a texture map. If some big guy knows how to make it interactive, please kindly advise.
Video:
Explanation 1: only muted video can autoplay. Chrome's autoplay policy restricts it; after searching back and forth, it appears to be a browser security policy that denies pages audio permission by default. The page has to request it (that little box in the corner asking whether the site may play sound), so autoplay with audio simply doesn't work. Explanation 2: my Web3D page is full screen and the other 2D components float above the main scene's canvas, so on the first test I hid the video element for looks, and the result was that the video texture would not render. It turns out the video element must stay inside the window, e.g. with z-index: -1 so it hides underneath the components, for the video texture to render in real time. Why exactly? Unknown.
// Get a video texture
getVideoTexture(id) {
// ID of the video label
let video = document.getElementById(id);
let _texture = new THREE.VideoTexture(video);
_texture.minFilter = THREE.LinearFilter;
_texture.wrapS = _texture.wrapT = THREE.ClampToEdgeWrapping;
_texture.format = THREE.RGBFormat;
return _texture;
},
Copy the code
Result:
Live stream:
As for live streams: RTMP streaming relies on Flash, which I couldn't get working in Chrome. What currently runs in testing is an FLV address, using flv.js.
// See the flv.js tutorial for details
if (flvjs.isSupported()) {
  var videoElement = document.getElementById("videoElement");
  var flvPlayer = flvjs.createPlayer(
    {
      type: "flv",
      isLive: true,
      hasAudio: true,
      hasVideo: true,
      url: this.url
    },
    {
      autoCleanupSourceBuffer: true,
      enableWorker: true,
      enableStashBuffer: false,
      stashInitialSize: 128
    }
  );
  flvPlayer.attachMediaElement(videoElement);
  flvPlayer.load();
  flvPlayer.play();
}
In fact the principle still rests on the <video> tag: flv.js plays the live stream into the video element in real time, and from there it's the same video-texture conversion as above, sketched below.
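Putting the pieces together, a sketch (plane size and position are placeholders): flv.js drives the video element with id "videoElement", and getVideoTexture() maps it onto a plane in the main scene.
let videoTexture = this.getVideoTexture("videoElement");
let videoMaterial = new THREE.MeshBasicMaterial({ map: videoTexture }); // basic material needs no lights
let videoMesh = new THREE.Mesh(new THREE.PlaneBufferGeometry(160, 90), videoMaterial); // 16:9 plane
videoMesh.position.set(0, 80, 0);
scene.add(videoMesh);
// A VideoTexture refreshes itself every rendered frame while the video plays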
V. Optimization and outlook
Recently I looked at a ThingJS project and realized that Web3D still has a lot to offer, and I still have a lot to learn. But I don't think Web3D is capable enough yet to carry a smart city; most of the data presentation leans on 3D models without being well integrated, and a truly convincing real-world landing is still missing. My personal hope: let it collide with the metaverse and strike some sparks. Finally, to everyone who read this far: if the article seems OK, a free like would be appreciated...