For a group-sharing session I chose to learn Three.js, and then used it to explain how a panorama viewer works.

1. Basic review of Three.js

1.1 Three.js overview

1. What is it? Three.js is a WebGL framework that wraps the low-level graphics API, so a few lines of code can render a 3D scene without deep graphics knowledge.

2. Core concepts (a minimal sketch follows this list)

  • Renderer: renders the objects in the scene
  • Scene: the container for all objects, corresponding to the 3D world we create
  • Camera: projects the three-dimensional scene onto the two-dimensional screen
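To make these three pieces concrete, here is a minimal sketch of how they fit together (assuming THREE is loaded globally, e.g. via a script tag; all values are illustrative):

// Scene: the 3D world; Camera: the projection; Renderer: draws the projection onto a canvas
const scene = new THREE.Scene()
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000)
const renderer = new THREE.WebGLRenderer()
renderer.setSize(window.innerWidth, window.innerHeight)
document.body.appendChild(renderer.domElement)

camera.position.z = 5

// Render loop: every frame, project the scene through the camera onto the screen
const animate = () => {
  requestAnimationFrame(animate)
  renderer.render(scene, camera)
}
animate()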

1.2 The camera

1. What is it? The scene created by Three.js is three-dimensional, while the display screen is two-dimensional. The camera is the abstraction that defines the projection from three-dimensional space onto the two-dimensional screen; the "camera" analogy makes this projection easy to picture.

2. Classification (see the sketch after this list)

  • Orthographic projection camera: lines that are parallel in three-dimensional space remain parallel after projection onto the screen
  • Perspective projection camera: produces the "near objects look large, far objects look small" effect
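A minimal sketch of constructing both camera types (the parameter values are illustrative, not taken from the panorama code):

const aspect = window.innerWidth / window.innerHeight

// Orthographic camera: left, right, top, bottom, near, far
const orthoCamera = new THREE.OrthographicCamera(-10 * aspect, 10 * aspect, 10, -10, 0.1, 1000)

// Perspective camera: vertical field of view (degrees), aspect ratio, near, far
const perspCamera = new THREE.PerspectiveCamera(75, aspect, 0.1, 1000)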

1.3 Models

1. There are mainly the following kinds of models

2. What a model consists of (see the sketch after this list)

  • Geometry
  • Material
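A minimal sketch of building a model from a geometry and a material (the box shape and color are illustrative; scene is assumed from the earlier sketch):

const geometry = new THREE.BoxGeometry(1, 1, 1)                    // the shape
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 })  // the appearance
const cube = new THREE.Mesh(geometry, material)                    // geometry + material = model
scene.add(cube)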

1.4 Light and shadow

1. Classification (a combined sketch follows this list)

  • Ambient light

The overall lighting of the scene: a uniform brightness produced by light bouncing repeatedly off the objects in the scene, usually used to give the entire scene a base brightness.

  • Point light

A point light can be treated as light emitted from a single point, ignoring the size of the source; the brightness it casts on a surface decreases linearly as the distance to the source grows.

  • Directional (parallel) light

The rays are parallel, so on any given plane the brightness of a directional light is uniform.

  • Spotlight

A spotlight is a special point light that only casts light in one direction.

  • Shadow

Light and shade are relative: a shadow forms where a region receives less light than its surroundings, so a light source is a prerequisite for shadows.
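A combined sketch of the light types above (all colors and intensities are illustrative; scene and renderer are assumed from the earlier sketches):

const ambient = new THREE.AmbientLight(0x404040)              // base brightness for the whole scene
const point = new THREE.PointLight(0xffffff, 1, 100)          // emitted from a single point
const directional = new THREE.DirectionalLight(0xffffff, 0.8) // parallel rays, like sunlight
const spot = new THREE.SpotLight(0xffffff)                    // a point light restricted to one direction

// Shadows need a light source plus explicit opt-in
renderer.shadowMap.enabled = true
directional.castShadow = true
scene.add(ambient, point, directional, spot)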

2. Implementing the panorama

  • Model a sphere and place the camera inside it
  • Keep moving the camera so that it can look in every direction inside the sphere

2.1 Setting up the basic scene

  • Initialize the renderer
const setupRenderer = () => {
  const renderer = new THREE.WebGLRenderer({ antialias: true })
  renderer.setSize(window.innerWidth, window.innerHeight)
  document.body.appendChild(renderer.domElement)
  return renderer
}
  • Initializing the camera
// Build a perspective projection camera to create the "near large, far small" effect
const setupCamera = () => {
  const aspectRatio = window.innerWidth / window.innerHeight
  const camera = new THREE.PerspectiveCamera(90, aspectRatio, 0.0001, 10000)
  camera.position.set(window.obj.camerax, window.obj.cameray, window.obj.cameraz)

  return camera
}
  • Initialize the scene and helpers
  const scene = new THREE.Scene()
  const axesHelper = new THREE.AxesHelper(1000)
  const cameraHelper = new THREE.CameraHelper(camera)
  const gridHelper = new THREE.GridHelper(1000, 10)

  // Add the helpers to the scene
  scene.add(axesHelper)
  scene.add(cameraHelper)
  scene.add(gridHelper)
  • Modeling a sphere
// Paste an ordinary panorama onto the surface of the sphere by loading a texture map
const sphereGeo = new THREE.SphereGeometry(radius)
const sphereMaterial = new THREE.MeshBasicMaterial({
    map: texture,
    side: THREE.DoubleSide,
})
const sphere = new THREE.Mesh(sphereGeo, sphereMaterial)
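The texture and radius used above are not defined in this snippet; a sketch of how they might be set up (the image path is hypothetical):

const radius = 100 // any radius works as long as the camera stays inside the sphere
const texture = new THREE.TextureLoader().load('./panorama.jpg') // hypothetical path to the panorama image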

With that, the basic scene is in place. At this point we can only see one direction inside the sphere.

What we want is that, as we drag the mouse, we can look around in different directions inside the sphere.

2.2 The core: turning mouse movement into camera movement

1. The steps involved:

  • Listen for mouse press and move events
  • Convert the mouse movement distance into longitude and latitude
  • Convert longitude and latitude into camera coordinates

2. Code walkthrough

  • Listening for mouse events
// The mousedown event records the starting point of the movement
window.addEventListener('mousedown', e => {
  startPosX = e.clientX
  startPosY = e.clientY
  startTs = Date.now()
  isMouseDown = true
})
// Mousemove records the end of the move and calls the move function for real-time calculation
window.addEventListener('mousemove', e => {
  if (!isMouseDown) return
  const posX = e.clientX
  const posY = e.clientY
  const curTs = Date.now()

  // Move the camera according to the starting and ending coordinates and the elapsed time
  let startPos = { x: startPosX, y: startPosY };
  let endPos = { x: posX, y: posY };
  let duration = curTs - startTs;

  move(startPos, endPos, duration)

  // Take the current end point as the starting point for the next move
  startPosX = posX
  startPosY = posY
  startTs = curTs
})

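Note that isMouseDown is set but never cleared in the snippet above; presumably a mouseup listener resets it, roughly like this sketch:

// Stop tracking movement once the mouse button is released (assumed, not shown in the original snippet)
window.addEventListener('mouseup', () => {
  isMouseDown = false
})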
  • The move function: set the camera position in real time from the converted coordinates
    • Mouse movement distance -> longitude and latitude
    • Longitude and latitude -> camera coordinates
const move = (startPos, endPos, duration) => {
    if (duration === 0) return

    const { x: sx, y: sy } = startPos
    const { x: ex, y: ey } = endPos
    const vx = (ex - sx) / duration // velocity in the X direction
    const vy = (ey - sy) / duration // velocity in the Y direction
    // 1. Mouse movement distance -> longitude and latitude
    const { longtitude, latitude } = getMovPos({
        startPos,
        endPos: { x: vx * moveBuffTime + ex, y: vy * moveBuffTime + ey },
        curRotate: { longtitude: movObj.longtitude, latitude: movObj.latitude },
    })

    // Set the movement parameters
    // Recomputing the camera coordinates in onUpdate makes the movement smoother
    const gsapOpts = {
        ease: "power4.out",
        duration: 1,
        onUpdate: () => {
            // 2. Longitude and latitude -> camera coordinates
            const { latitude: newLati, posObj: newPos } = setCameraPos({
                longtitude: movObj.longtitude,
                latitude: movObj.latitude
            });

            movObj.latitude = newLati;
            movObj.longtitude = longtitude;

            // 3. Set the camera coordinates
            posObj.camerax = newPos.x;
            posObj.cameray = newPos.y;
            posObj.cameraz = newPos.z;
        }
    };

    gsap.to(movObj, gsapOpts);
}

Here GSAP is used to animate the movement, with the real-time calculation performed inside the animation's onUpdate callback.

An earlier version used the approach below: the camera coordinates were computed once and then animated to, instead of being recalculated in the onUpdate callback. The visible result was that moving the mouse horizontally (changing the longitude) also made the scene drift vertically (the latitude changed as well).

const { latitude: newLatitude, posObj: newPos } = setCameraPos({ longtitude: movObj.longtitude, latitude: movObj.latitude })
// 3. Move the camera
gsap.to(posObj, {
	ease: 'power4.out',
	duration: 1,
	camerax: newPos.x,
	cameray: newPos.y,
	cameraz: newPos.z,
})
movObj.longtitude = longtitude
movObj.latitude = latitude
  • Mouse movement distance -> latitude and longitude

As I understand it, the ratios 0.138 and 0.12 are tunable; they determine how far the mouse has to travel to sweep through a given amount of longitude and latitude (a quick sanity check follows the code below).

Each time, an increment proportional to the mouse movement distance (e.g. (sx - ex) * 0.138) is added to the current longitude/latitude to obtain the new longitude/latitude.

// getMovPos: convert the mouse movement distance into new longitude/latitude values
// (the function signature is inferred from the call inside move above)
const getMovPos = ({ startPos, endPos, curRotate }) => {
  const { x: sx, y: sy } = startPos
  const { x: ex, y: ey } = endPos
  const { longtitude: curLongtitude, latitude: curLatitude } = curRotate

  const longtitude = (sx - ex) * 0.138 + curLongtitude
  const latitude = (sy - ey) * 0.12 + curLatitude
  return {
    longtitude,
    latitude,
  }
}
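A quick sanity check of what those ratios imply (a sketch; the numbers follow directly from 0.138, 0.12, and the ±85° latitude clamp used below):

const pxForFullLongitude = 360 / 0.138 // ≈ 2609 px of horizontal drag for a full revolution
const pxForFullLatitude = 170 / 0.12   // ≈ 1417 px of vertical drag to cover -85° .. 85°
console.log(pxForFullLongitude, pxForFullLatitude)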
  • Longitude and latitude -> coordinates
    • Angles -> radians: the trigonometric functions expect radians
    • The math involved: given the radius of the sphere, compute the coordinates of a point in space
function setCameraPos ({ latitude, longtitude }) {
	// Clamp latitude to [-85, 85] degrees
	const newLatitude = Math.max(-85, Math.min(85, latitude))
	// Convert latitude and longitude to radians
	const phi = THREE.MathUtils.degToRad(newLatitude)
	const theta = THREE.MathUtils.degToRad(longtitude)
	const posObj = {
		x: 0,
		y: 0,
		z: 0,
	}
	// Key formula
	const r = 100;
	posObj.y = r * Math.sin(phi);
	const ob = r * Math.cos(phi);
	posObj.z = ob * Math.cos(theta);
	posObj.x = ob * Math.sin(theta);

	return {
		latitude: newLatitude,
		posObj,
	}
}
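As a quick spot-check of the function above (a sketch; the expected values follow from r = 100 and the ±85° clamp):

console.log(setCameraPos({ latitude: 0, longtitude: 0 }))
// posObj ≈ { x: 0, y: 0, z: 100 }: on the equator, straight down the z-axis

console.log(setCameraPos({ latitude: 90, longtitude: 0 }))
// latitude is clamped to 85, so posObj ≈ { x: 0, y: 99.6, z: 8.7 }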

For the calculation of the coordinates of a point in space, look at the following diagram. For a point D in space, OD is the radius, angle AOB is the longitude, and angle DOB is the latitude.

So:

  • the distance of D along the y-axis is OE = DB = OD * sin(∠DOB)
  • OB = OD * cos(∠DOB)
  • the distance of D along the x-axis is OC = AB = OB * sin(∠AOB)
  • the distance of D along the z-axis is OA = OB * cos(∠AOB)

So longitude and latitude can be converted into coordinates using these formulas, which is exactly the calculation in the code above.

At this point, the basic panorama effect is complete.

Click here for the full code