Dev.to: dev.to/MrRyanFloyd…
GitHub: github.com/MrRyanFloyd
Twitter: twitter.com/mrryanfloyd
Introduction
A personal website is a programmer’s second resume. If you have a cool personal page, you’ll get a lot of goodwill from your interviewers.
During the pandemic lockdown, I created an interactive 3D personal web page with Three.js and ammo.js.
Online preview: www.ryan-floyd.com/
The 3D world of Three.js
While browsing Google Experiments, I noticed that a lot of the work there was built with three.js.
Three.js is a library that makes 3D web application development easy. Created in 2010 by Ricardo Cabello (Mr.doob), it has more than 1,300 contributors on GitHub and ranks 38th by stars among all repositories.
After seeing the cool 3D effects on Google Experiments, I decided to learn three.js.
How three.js works
(Component structure of a 3D application, image from discoverthreejs.com)
Three.js makes it easy to display 3D images in the browser. Its underlying layer is based on WebGL, which enables the browser to draw 3D images in the Canvas with the help of the system graphics card.
WebGL itself can only draw points, lines and triangles, while three.js encapsulates WebGL, allowing us to create objects, textures, 3D calculations and so on very easily.
Using three.js, we add all objects to the scene, and then pass the data to the renderer. The renderer is responsible for drawing the scene on the canvas.
(Three.js application architecture, image from threejsfundamentals.org)
For a three.js application, the core is the scene object. The scene graph is shown above.
In a 3D engine, the scene graph is a hierarchical tree in which each node represents a part of the space. This structure is a bit like a DOM tree, but the three.js scene is more like a virtual DOM, updating and rendering only the parts of the scene that have changed. The foundation of all this is three.js's WebGLRenderer class, which converts our code into data on the GPU, which the browser then renders.
Objects in a scene are called meshes. In the world of three.js, a Mesh is composed of a Geometry (which determines the shape of the object) and a Material (which determines the appearance of the object).
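To make the parent/child hierarchy concrete, here is a minimal sketch (not taken from the site's code; it assumes a scene has already been created as in the example below): adding a moon mesh as a child of an earth mesh means that moving or rotating the earth carries the moon with it.

// Minimal scene-graph sketch (illustrative only)
const earth = new THREE.Mesh(
  new THREE.SphereGeometry(1),
  new THREE.MeshBasicMaterial({ color: 0x2266ff })
);
const moon = new THREE.Mesh(
  new THREE.SphereGeometry(0.3),
  new THREE.MeshBasicMaterial({ color: 0x888888 })
);
moon.position.x = 3;       // position relative to its parent (the earth)
earth.add(moon);           // the moon becomes a child node of the earth
scene.add(earth);          // the earth (and therefore the moon) joins the scene graph
earth.rotation.y += 0.5;   // rotating the earth swings the moon around with it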
Another important element of the scene is the camera, which determines which parts of the scene are drawn on the canvas, and from what point of view.
Then there is animation. To animate, renderers usually use the requestAnimationFrame() method to draw scene updates to the canvas 60 times per second. Refer to the MDN for the principle and use of the requestAnimationFrame() method.
The following example, from the official three.js document, creates a rotating 3D cube.
<html>
  <head>
    <title>My first three.js app</title>
    <style>
      body { margin: 0; }
      canvas { display: block; }
    </style>
  </head>
  <body>
    <script src="https://unpkg.com/[email protected]/build/three.js"></script>
    <script>
      // Create the scene and camera
      var scene = new THREE.Scene();
      var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);

      // Create the renderer, set its size to the window size, and add the rendered element to the body
      var renderer = new THREE.WebGLRenderer();
      renderer.setSize(window.innerWidth, window.innerHeight);
      document.body.appendChild(renderer.domElement);

      // Create a Mesh (green 3D cube) and add it to the scene
      var geometry = new THREE.BoxGeometry();
      var material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
      var cube = new THREE.Mesh(geometry, material);
      scene.add(cube);

      // Move the camera back so the cube is in view
      camera.position.z = 5;

      // Animation loop: rotate the cube and re-render on every frame
      var animate = function () {
        requestAnimationFrame(animate);
        cube.rotation.x += 0.01;
        cube.rotation.y += 0.01;
        renderer.render(scene, camera);
      };
      animate();
    </script>
  </body>
</html>
The effect is as follows:
The ammo.js physics engine
Ammo.js is a direct port of Bullet Physics (an open-source physics simulation engine) to JavaScript. I don't understand much about the engine's internals, but in short: the physics engine runs a simulation loop based on parameters you pass in, such as gravity, and updates the state of the world on each iteration to simulate natural physical motion and collisions.
The objects in the simulation (usually rigid bodies) have physical properties such as force, mass, inertia, and friction. Each iteration detects collisions and interactions by checking the position, state, and motion of every object. If an interaction occurs, the object's position is updated based on the elapsed time and its physical properties. Below is a snippet from my code that shows how to create the physics world and how to add a three.js sphere to it.
import * as THREE from "three";
import * as Ammo from "./builds/ammo";
import { scene } from "./resources/world";

Ammo().then((Ammo) => {
  // Reference to the Ammo.js physics world, shared by the functions below
  let physicsWorld;

  // Create the physical world
  function createPhysicsWorld() {
    // Default collision detection configuration
    let collisionConfiguration = new Ammo.btDefaultCollisionConfiguration();
    // Dispatches collision calculations for overlapping pairs
    let dispatcher = new Ammo.btCollisionDispatcher(collisionConfiguration);
    // Broadphase: finds all possibly overlapping pairs of objects
    let overlappingPairCache = new Ammo.btDbvtBroadphase();
    // Constraint solver: makes objects interact correctly, taking gravity, forces, collisions, etc. into account
    let constraintSolver = new Ammo.btSequentialImpulseConstraintSolver();
    // Create the physical world from these parameters (see the Bullet physics docs)
    physicsWorld = new Ammo.btDiscreteDynamicsWorld(
      dispatcher,
      overlappingPairCache,
      constraintSolver,
      collisionConfiguration
    );
    // Add gravity
    physicsWorld.setGravity(new Ammo.btVector3(0, -9.8, 0));
  }

  function createBall() {
    let pos = { x: 0, y: 0, z: 0 };
    let radius = 2;
    let quat = { x: 0, y: 0, z: 0, w: 1 };
    let mass = 3;

    // three.js code
    // Create a sphere and add it to the scene
    let ball = new THREE.Mesh(
      new THREE.SphereBufferGeometry(radius),
      new THREE.MeshStandardMaterial({ color: 0xffffff })
    );
    ball.position.set(pos.x, pos.y, pos.z);
    scene.add(ball);

    // Ammo.js code
    // Set position and rotation
    let transform = new Ammo.btTransform();
    transform.setOrigin(new Ammo.btVector3(pos.x, pos.y, pos.z));
    transform.setRotation(
      new Ammo.btQuaternion(quat.x, quat.y, quat.z, quat.w)
    );
    // Set the object's motion state
    let motionState = new Ammo.btDefaultMotionState(transform);
    // Set the collision bounding shape
    let collisionShape = new Ammo.btSphereShape(radius);
    collisionShape.setMargin(0.05);
    // Calculate the object's inertia
    let localInertia = new Ammo.btVector3(0, 0, 0);
    collisionShape.calculateLocalInertia(mass, localInertia);
    // Build the construction info used to create the rigid body
    let rigidBodyStructure = new Ammo.btRigidBodyConstructionInfo(
      mass,
      motionState,
      collisionShape,
      localInertia
    );
    let body = new Ammo.btRigidBody(rigidBodyStructure);
    // Add friction so the object slows down while moving
    body.setFriction(10);
    body.setRollingFriction(10);
    // Add the object to the physical world so the Ammo.js engine updates its state
    physicsWorld.addRigidBody(body);
  }

  createPhysicsWorld();
  createBall();
});
Movement and interaction
In the physical world of ammo.js simulations, interactions are calculated based on attributes and forces.
Each object has a bounding box, which the physics engine uses to track its position.
On each animation loop, the engine checks the bounding boxes of all objects; if the bounding boxes of two objects overlap, it registers a "collision" and updates the objects accordingly. For rigid bodies, this means preventing the two objects from occupying the same space.
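If you want to react to those collisions yourself (for example, to play a sound or trigger an animation), the engine exposes the contact information it gathers each step. The following is only a hedged sketch, not code from the site; it assumes the physicsWorld created earlier and uses Bullet's contact-manifold API as exposed by ammo.js.

// Hedged sketch: inspect contact manifolds after stepping the simulation
function logCollisions() {
  const dispatcher = physicsWorld.getDispatcher();
  const numManifolds = dispatcher.getNumManifolds();
  for (let i = 0; i < numManifolds; i++) {
    const manifold = dispatcher.getManifoldByIndexInternal(i);
    for (let j = 0; j < manifold.getNumContacts(); j++) {
      const point = manifold.getContactPoint(j);
      // A distance of zero or less means the two collision shapes actually touch
      if (point.getDistance() <= 0) {
        console.log("collision between", manifold.getBody0(), "and", manifold.getBody1());
      }
    }
  }
}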
Here is my code snippet showing how the render loop and world physics are updated.
function renderFrame() {
  // Time elapsed since the last rendered frame
  let deltaTime = clock.getDelta();
  // Calculate the force and velocity of the ball based on user input
  moveBall();
  // Step the physics simulation forward
  updatePhysics(deltaTime);
  // Draw the scene
  renderer.render(scene, camera);
  // Loop
  requestAnimationFrame(renderFrame);
}

// Update the state of the physical world
function updatePhysics(deltaTime) {
  // Advance the simulation (up to 10 internal sub-steps)
  physicsWorld.stepSimulation(deltaTime, 10);
  // Sync every three.js mesh with its Ammo.js rigid body
  for (let i = 0; i < rigidBodies.length; i++) {
    let meshObject = rigidBodies[i];
    let ammoObject = meshObject.userData.physicsBody;
    // Obtain the current motion state
    let objectMotion = ammoObject.getMotionState();
    // If the object is moving, read its current position and rotation
    // (`transform` is a reusable Ammo.btTransform created elsewhere)
    if (objectMotion) {
      objectMotion.getWorldTransform(transform);
      let mPosition = transform.getOrigin();
      let mQuaternion = transform.getRotation();
      // Update the position and rotation of the three.js mesh
      meshObject.position.set(mPosition.x(), mPosition.y(), mPosition.z());
      meshObject.quaternion.set(
        mQuaternion.x(),
        mQuaternion.y(),
        mQuaternion.z(),
        mQuaternion.w()
      );
    }
  }
}
User input
We want users to be able to move the spheres around in the app on both desktop and touchscreen mobile devices.
For keyboard input, we listen for the "keydown" and "keyup" events: when an arrow key is pressed, a force in the corresponding direction is applied to the sphere.
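A minimal sketch of that idea (the handlers in the real project are more involved; moveDirection here is the same flag object declared in the controller snippet further below):

// Hedged sketch: map arrow keys to the shared moveDirection flags
function handleKeyDown(event) {
  switch (event.code) {
    case "ArrowUp":    moveDirection.forward = 1; break;
    case "ArrowDown":  moveDirection.back = 1; break;
    case "ArrowLeft":  moveDirection.left = 1; break;
    case "ArrowRight": moveDirection.right = 1; break;
  }
}

function handleKeyUp(event) {
  switch (event.code) {
    case "ArrowUp":    moveDirection.forward = 0; break;
    case "ArrowDown":  moveDirection.back = 0; break;
    case "ArrowLeft":  moveDirection.left = 0; break;
    case "ArrowRight": moveDirection.right = 0; break;
  }
}

window.addEventListener("keydown", handleKeyDown);
window.addEventListener("keyup", handleKeyUp);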
For touch screens, a joystick controller is created on the screen, and "touchstart", "touchmove", and "touchend" event listeners are added to the div element (the controller) used for control.
The controller tracks the start, current, and end coordinates of the user’s finger movement, and updates the ball’s forces accordingly each time it is rendered.
The following is just a snippet of the controller code, showing some general concepts. For the complete code, see the source code address at the bottom of this article.
// Direction flags for the ball's movement
let moveDirection = { left: 0, right: 0, forward: 0, back: 0 };
// Coordinates of the joystick stick relative to its center
let coordinates = { x: 0, y: 0 };
// Starting point of the current drag (null when not dragging)
let dragStart = null;
// The joystick "stick" element
const stick = document.createElement("div");

function handleMove(event) {
  if (dragStart === null) return;
  // For touch events, read the coordinates of the first touch point
  if (event.changedTouches) {
    event.clientX = event.changedTouches[0].clientX;
    event.clientY = event.changedTouches[0].clientY;
  }
  // Offset of the finger from the drag start, clamped to the joystick radius
  // (`maxDiff` is the maximum stick travel, defined elsewhere)
  const xDiff = event.clientX - dragStart.x;
  const yDiff = event.clientY - dragStart.y;
  const angle = Math.atan2(yDiff, xDiff);
  const distance = Math.min(maxDiff, Math.hypot(xDiff, yDiff));
  const xNew = distance * Math.cos(angle);
  const yNew = distance * Math.sin(angle);
  coordinates = { x: xNew, y: yNew };
  // Move the stick element to follow the finger
  stick.style.transform = `translate3d(${xNew}px, ${yNew}px, 0px)`;
  // Update the ball's movement based on the joystick coordinates
  touchEvent(coordinates);
}

// Translate joystick coordinates into movement direction flags
function touchEvent(coordinates) {
  // Move right
  if (coordinates.x > 30) {
    moveDirection.right = 1;
    moveDirection.left = 0;
  // Move left
  } else if (coordinates.x < -30) {
    moveDirection.left = 1;
    moveDirection.right = 0;
  } else {
    moveDirection.right = 0;
    moveDirection.left = 0;
  }
  // Move back
  if (coordinates.y > 30) {
    moveDirection.back = 1;
    moveDirection.forward = 0;
  // Move forward
  } else if (coordinates.y < -30) {
    moveDirection.forward = 1;
    moveDirection.back = 0;
  } else {
    moveDirection.forward = 0;
    moveDirection.back = 0;
  }
}
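Whether the input comes from the keyboard or the joystick, the result is the same set of moveDirection flags, which the moveBall() call in the render loop turns into motion. Here is a hedged sketch of that step (not the site's exact code; it assumes the ball's rigid body was stored on the mesh as ball.userData.physicsBody, the same pattern updatePhysics() relies on):

// Hedged sketch: turn the moveDirection flags into an impulse on the ball's rigid body
function moveBall() {
  const strength = 20; // arbitrary value for this sketch: how hard user input pushes the ball
  const x = (moveDirection.right - moveDirection.left) * strength;
  const z = (moveDirection.back - moveDirection.forward) * strength;
  if (x === 0 && z === 0) return;

  const body = ball.userData.physicsBody;
  body.setActivationState(1);                             // wake the body up if it has gone to sleep
  body.applyCentralImpulse(new Ammo.btVector3(x, 0, z));  // push the ball in the chosen direction
}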
Here’s what happens when you use the joystick:
Conclusion
Now we have all the tools to create an interactive 3D application that simulates the real physical world. Use your imagination and your desire to create something beautiful to build your own 3D application! In the Internet age, everyone is a lifelong learner.
The source code for this project can be found on my Github at github.com/MrRyanFloyd…
If you have any feedback or questions, feel free to leave a comment or contact me on LinkedIn!