Takeaway

This article starts with the basics of drawing and details how to develop a fully functional panorama plug-in using three.js.

Let’s take a look at the plugin:

If you’re already familiar with three.js, or if you want to skip the basics, you can start with the panorama preview.

Github address of this project: github.com/ConardLi/tp…

1. Clarifying the relationships

1.1 OpenGL

OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D graphics.

This interface consists of nearly 350 different function calls, which are used to draw complex 3D scenes from simple graphics primitives.

OpenGL ES is a subset of OpenGL 3D graphics API, designed for embedded devices such as mobile phones, PDAs and game consoles.

OpenGL development is generally done in C or C++, which is not very friendly to front-end developers.

1.2 WebGL

WebGL combines JavaScript with OpenGL ES 2.0 to give front-end developers the ability to write 3D effects in JavaScript.

WebGL provides hardware 3D accelerated rendering for HTML5 Canvas so that Web developers can use the system graphics card to more smoothly display 3D scenes and models in the browser, and create complex navigation and data visualizations.

1.3 Canvas

Canvas is a rectangular area whose size can be determined freely. JavaScript can be used to operate the rectangular area and draw graphics and text freely.

Generally, the Canvas uses its 2D context for 2D drawing, which is the Canvas's own capability.

WebGL, in contrast, draws 3D graphics. To be presented in the browser, WebGL needs a carrier, and that carrier is the Canvas: in addition to the familiar 2D context, a WebGL context can also be obtained from a Canvas.
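The relationship is easy to see in code. A tiny illustrative snippet (note that a given canvas is locked to the first context type it hands out, so two canvases are used here):

```javascript
var canvas2d = document.createElement('canvas');
var ctx = canvas2d.getContext('2d');      // Canvas's own 2D drawing API

var canvas3d = document.createElement('canvas');
var gl = canvas3d.getContext('webgl');    // WebGL context; null if unsupported
```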

1.4 Three.js

Three stands for 3D and JS stands for JavaScript, which is to use JavaScript to develop 3D effects.

Three.js is an easy-to-use 3D library that encapsulates and simplifies the WebGL interface with JavaScript.

Developing directly using WebGL is relatively expensive for developers and requires a lot of computer graphics knowledge.

To some extent, three.js simplifies some of the specification's difficult concepts and streamlines many APIs, which greatly reduces the cost of learning and developing 3D effects.

Here’s a look at what you need to know to use three.js.

2. Basic knowledge of three.js

Drawing a 3d effect with three.js requires at least the following steps:

  • Create a scene to contain the three-dimensional world – Scene

  • Add the elements you want to draw to the scene, and set the elements’ shapes, materials, shadows, and so on

  • Specify the observation position and angle for the scene, which we control with the Camera object

  • Use the Renderer to render the drawn elements into the browser

In a movie analogy, the scene corresponds to the entire set, the camera is the lens that shoots it, and the renderer converts what is shot into film.
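Putting the four steps together in advance, here is a minimal sketch (a browser environment with three.js loaded is assumed; each piece is explained in the sections below):

```javascript
var scene = new THREE.Scene();                                    // 1. scene

var geometry = new THREE.BoxGeometry(100, 100, 100);              // 2. element
var material = new THREE.MeshBasicMaterial({ color: 0x645d50 });
scene.add(new THREE.Mesh(geometry, material));

var camera = new THREE.PerspectiveCamera(                         // 3. camera
  75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 0, 300);
camera.lookAt(new THREE.Vector3(0, 0, 0));

var renderer = new THREE.WebGLRenderer();                         // 4. renderer
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera);
```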

2.1 Scene

Scenes allow you to set which objects are rendered by three.js and where.

We place objects, lights, and cameras in the scene.

Simply create an instance of the Scene.

 _scene = new THREE.Scene();

2.2 Elements

Now that we have the scene, we need to figure out what to show in the scene.

A complex 3D scene is usually constructed by a large number of elements, which may be some custom Geometry or complex models imported from outside.

Three.js provides us with many geometries, such as SphereGeometry, TetrahedronGeometry, TorusGeometry, and so on.

In three.js, the Material determines the form in which the geometry is represented. It includes other attributes besides the shape of a Geometry, such as color, texture, transparency, etc. Material and Geometry are mutually complementary and must be used together.

In the following code we create a cuboid and give it a MeshBasicMaterial

    var geometry = new THREE.BoxGeometry(200, 100, 100);
    var material = new THREE.MeshBasicMaterial({ color: 0x645d50 });
    var mesh = new THREE.Mesh(geometry, material);
    _scene.add(mesh);

This gives us the outline of a geometry, but it feels weird. It doesn’t look like a geometry. We actually need to add light and shadows to it to make the geometry look more realistic.

The MeshBasicMaterial is not affected by light and does not create shadows, so let's change the geometry to a light-affected material, MeshStandardMaterial, and add some light to it:

    var geometry = new THREE.BoxGeometry(200, 100, 100);
    var material = new THREE.MeshStandardMaterial({ color: 0x645d50 });
    var mesh = new THREE.Mesh(geometry, material);
    _scene.add(mesh);
    // Create a directional light - illuminates the geometry
    var directionalLight = new THREE.DirectionalLight(0xffffff, 1);
    directionalLight.position.set(-4, 8, 12);
    _scene.add(directionalLight);
    // Create ambient light
    var ambientLight = new THREE.AmbientLight(0xffffff);
    _scene.add(ambientLight);

With lighting, the geometry looks more three-dimensional. There are many kinds of light sources in three.js; above we used AmbientLight and DirectionalLight.

Ambient light will color all objects in the scene.

You can think of parallel (directional) light as light coming into the scene from far away, like sunlight. It is directional and can also trigger objects' reflective highlights.

In addition to these two lights, three.js provides several other light sources that are suitable for rendering different materials in different situations and can be selected according to the actual situation.
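For reference, here is a small sketch of two of those other sources (the positions and values are illustrative):

```javascript
// PointLight radiates in all directions from a point, like a light bulb.
var pointLight = new THREE.PointLight(0xffffff, 1, 500); // color, intensity, distance
pointLight.position.set(0, 200, 0);
_scene.add(pointLight);

// SpotLight casts a cone of light, like a stage spotlight.
var spotLight = new THREE.SpotLight(0xffffff);
spotLight.position.set(100, 300, 100);
spotLight.angle = Math.PI / 6; // half-angle of the cone
_scene.add(spotLight);
```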

2.3 Coordinate system

Before we talk about cameras, let’s first understand the concept of coordinate systems:

In the three-dimensional world, coordinates define the position of an element in the three-dimensional space, and the origin of the coordinate system is the reference point of the coordinate.

Most commonly, we use three distances from the origin (x, y, and z) to define a position. This is called a rectangular coordinate system.

To determine the axes, we usually hold the thumb, index finger, and middle finger at 90 degrees to each other: the thumb represents the x axis, the index finger the y axis, and the middle finger the z axis.

This gives rise to two kinds of coordinate systems: left-handed and right-handed.

The coordinate system used in three. js is the right-handed coordinate system.

We can add a coordinate system to our scene so that we can see exactly where the elements are:

 var axisHelper = new THREE.AxisHelper(600);
 _scene.add(axisHelper);

Red represents the X axis, green represents the Y axis, and blue represents the Z axis.

2.4 The camera

The geometry seen above would be invisible without creating a Camera, because the default viewing point is at the origin of the axis, which is inside the geometry.

The Camera specifies where and from what angle we view the three-dimensional scene.

2.4.1 Differences between the two cameras

Three.js offers several different types of cameras. The two most commonly used are the PerspectiveCamera and the OrthographicCamera.

The image above clearly illustrates the difference between the two cameras:

On the right is the OrthographicCamera, which has no perspective: the size of an object is not affected by distance. It corresponds to orthogonal projection; most of the geometry we draw in math books uses this projection.

On the left is the PerspectiveCamera, which corresponds to perspective projection.

If you want the scene to look more realistic and three-dimensional, a perspective camera works best, and if there are elements in the scene that you don’t want to zoom in and out from far and near, an orthogonal projection camera works best.

2.4.2 Construction Parameters

Let’s look at the parameters needed to create two cameras:

_camera = new OrthographicCamera(left, right, top, bottom, near, far);

The OrthographicCamera accepts six parameters: left, right, top, and bottom correspond to the left, right, top, and bottom distances, and near and far to the nearest and farthest distances. Elements beyond these distances will not appear in the field of view and will not be drawn by the browser. These six distances in fact form a cuboid, so the OrthographicCamera's visual range always lies within that cuboid.

_camera = new PerspectiveCamera(fov, aspect, near, far);

PerspectiveCamera receives four parameters: near and far, as above, correspond to the nearest and farthest observable distances of the camera. fov represents the vertical field-of-view angle: the larger fov is, the wider the observable range. aspect represents the ratio of the horizontal to the vertical observable distance, so fov and aspect together determine the horizontal observable range.
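These relationships can be made concrete with a little trigonometry. The helper below is not part of three.js; it simply computes the visible width and height at a given distance from a PerspectiveCamera (fov is vertical and in degrees, following the three.js convention):

```javascript
// Visible extent of a PerspectiveCamera at a given distance (illustrative helper).
function visibleExtent(fovDeg, aspect, distance) {
  var fovRad = fovDeg * Math.PI / 180;
  var height = 2 * distance * Math.tan(fovRad / 2); // vertical span
  return { width: height * aspect, height: height }; // horizontal span = height * aspect
}

// With fov = 90 degrees, the visible height roughly equals twice the distance:
var extent = visibleExtent(90, 2, 100); // height ~ 200, width ~ 400
```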

2.4.3 position, lookAt

There are two more things you need to know about the camera: the position property and the lookAt function:

The position property specifies the position of the camera.

The lookAt function specifies the direction in which the camera is looking.

In fact, both the value of position and the argument lookAt receives are objects of type Vector3, which represents a coordinate in three-dimensional space and has three properties: x, y, and z, the distances along the x, y, and z axes.

Now, let's point the camera at the origin and set x, y, and z to 0 in turn to see what happens to the field of view:

_camera = new OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, -window.innerHeight / 2, 0.1, 1000);
 _camera.lookAt(new THREE.Vector3(0, 0, 0))

 _camera.position.set(0, 300, 600); // 1 - x is 0

 _camera.position.set(500, 0, 600); // 2 - y is 0

 _camera.position.set(500, 300, 0); // 3 - z is 0

It is clear that position determines the starting point of our field of vision, but the direction of the camera remains unchanged.

Below, we fix position to change the direction of camera observation:

_camera = new OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, -window.innerHeight / 2, 0.1, 1000);
_camera.position.set(500, 300, 600);

_camera.lookAt(new THREE.Vector3(0, 0, 0)) // 1 - View to the origin

_camera.lookAt(new THREE.Vector3(200, 0, 0)) // 2 - The field of view is tilted toward the x axis

As you can see: the starting point of our field of view is the same, but its direction has changed.

2.4.4 Comparison of the two cameras

Ok, with that in mind, let’s write two more examples to look at the perspective comparison between the two cameras. To make it easier to see, let’s create two geometries with different positions:

var geometry = new THREE.BoxGeometry(200, 100, 100);
var material = new THREE.MeshStandardMaterial({ color: 0x645d50 });
var mesh = new THREE.Mesh(geometry, material);
_scene.add(mesh);

var geometry = new THREE.SphereGeometry(50, 100, 100);
var ball = new THREE.Mesh(geometry, material);
ball.position.set(200, 0, -200);
_scene.add(ball);

Orthogonal projection camera field:

_camera = new OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, -window.innerHeight / 2, 0.1, 1000);
_camera.position.set(0, 300, 600);
_camera.lookAt(new THREE.Vector3(0, 0, 0))

Perspective camera field:

_camera = new PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
_camera.position.set(0, 300, 600);
_camera.lookAt(new THREE.Vector3(0, 0, 0))

So this confirms the theory about the two cameras above.

2.5 The renderer

Above we created the scene, elements, and camera. Now let's have the browser render them.

Three.js also provides us with several different renderers. Here we will focus on the WebGL renderer. As the name suggests: WebGL renderers use WebGL to draw scenes, which can take advantage of GPU hardware acceleration to improve rendering performance.

_renderer = new THREE.WebGLRenderer();

You need to add the elements you drew with three.js to the browser. This process requires a carrier, as described above: the Canvas, which you can retrieve via _renderer.domElement and append to the real DOM.

 _container = document.getElementById('conianer');
 _container.appendChild(_renderer.domElement);

Use the setSize function to set the size of the Canvas you want to render, which actually changes the size of the Canvas:

_renderer.setSize(window.innerWidth, window.innerHeight);
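A related detail the snippets above do not handle (a common three.js pattern, shown here as a sketch): if the window can be resized, the camera's aspect ratio and the canvas size should be updated together, or the scene will look stretched:

```javascript
window.addEventListener('resize', function () {
  _camera.aspect = window.innerWidth / window.innerHeight; // PerspectiveCamera only
  _camera.updateProjectionMatrix();                        // apply the new aspect
  _renderer.setSize(window.innerWidth, window.innerHeight);
});
```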

Now that you have specified a render carrier and its size, you can render the scene and camera specified above using the render function:

_renderer.render(_scene, _camera);

In fact, if you execute the above code in sequence, the screen may still be dark with no elements rendered.

This is because the elements you want to render may not be fully loaded by the time you call render, and render is executed only once. We need a way to render the scene and camera continuously, which calls for the following method:

2.6 requestAnimationFrame

window.requestAnimationFrame() tells the browser that you want to perform an animation, and requests that the browser call a specified callback function to update the animation before the next repaint.

This method takes as an argument a callback function that is executed before the browser’s next redraw.

window.requestAnimationFrame(callback);

If you want to keep updating the animation frame by frame, the callback function must itself call window.requestAnimationFrame() again.

In practice, this means you can keep drawing inside requestAnimationFrame, and the browser knows in real time what it needs to render.

Of course, at some point you don’t need to draw in real time, you can also use cancelAnimationFrame to stop drawing immediately:

window.cancelAnimationFrame(myReq);

Here’s a simple example:

        var i = 0;
        var animateName;
        animate();
        function animate() {
            animateName = requestAnimationFrame(animate);
            console.log(i++);
            if (i > 100) { cancelAnimationFrame(animateName); }
        }

Take a look at the implementation:

We used requestAnimationFrame in conjunction with the renderer of three.js to render 3d animations in real time:

        function animate() {
            requestAnimationFrame(animate);
            _renderer.render(_scene, _camera);
        }

With the above code, we can simply implement some animation effects:

        var y = 100;
        var option = 'down';
        function animateIn() {
            animateName = requestAnimationFrame(animateIn);
            mesh.rotateX(Math.PI / 40);
            if (option == 'up') {
                ball.position.set(200, y += 8, 0);
            } else {
                ball.position.set(200, y -= 8, 0);
            }
            if (y < 1) { option = 'up'; }
            if (y > 100) { option = 'down'; }
        }

2.7 Summary

The knowledge above is the most basic in three.js, and also the most important backbone.

This knowledge will give you some idea of how to look at a complex 3D effect, but of course there is a lot of detail to implement. You can refer to the official documents for details.

The following sections show you how to use three.js in action – implementing a 360-degree panoramic plugin.

This plug-in consists of two parts, the first part is to preview the panorama.

The second part is to configure the markers of the panorama and associate the coordinates of the preview.

Let’s start with the panorama preview section:

3. Panoramic preview

3.1 Basic Logic

  • Wrap a panorama around the inner wall of the sphere

  • Set an observation point at the center of the ball

  • Using the mouse, we can drag the sphere to change our view of the panorama

  • The mouse wheel can zoom in and out, changing the viewing distance of the panorama

  • Mount markers, such as text and icons, at given coordinates on the panorama, and allow events, such as click events, to be attached to them

3.2 Initialization

Let’s put in place the necessary infrastructure:

Scene, camera (we select a perspective camera to make the panorama look more realistic), and renderer:


_scene = new THREE.Scene();
initCamera();
initRenderer();
animate();

// Initialize the camera
function initCamera() {
    _camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
    _camera.position.set(0, 0, 2000);
    _camera.lookAt(new THREE.Vector3(0, 0, 0));
}

// Initialize the renderer
function initRenderer() {
    _renderer = new THREE.WebGLRenderer();
    _renderer.setSize(window.innerWidth, window.innerHeight);
    _container = document.getElementById('panoramaConianer');
    _container.appendChild(_renderer.domElement);
}

// Render in real time
function animate() {
    requestAnimationFrame(animate);
    _renderer.render(_scene, _camera);
}

Here we add a sphere in the scene and wrap the panorama on the sphere as material:

var mesh = new THREE.Mesh(
  new THREE.SphereGeometry(1000, 100, 100),
  new THREE.MeshBasicMaterial({
    map: ImageUtils.loadTexture('img/p3.png')
  })
);
_scene.add(mesh);

Then we should see something like this:

This is not what we wanted: we want to view the panorama from inside the sphere, with the image attached to the inner wall, not laid out on the outside:

We only need to set the mesh's scale.x to a negative value to flip the geometry so the material faces the interior:

 mesh.scale.x = -1;
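An alternative worth knowing (a standard three.js material option, not what this plugin uses) is to render the material on the inside faces with side: THREE.BackSide, so the mesh does not need to be mirrored; note the texture may then appear horizontally flipped compared with the scale trick:

```javascript
var mesh = new THREE.Mesh(
  new THREE.SphereGeometry(1000, 100, 100),
  new THREE.MeshBasicMaterial({
    map: ImageUtils.loadTexture('img/p3.png'),
    side: THREE.BackSide // draw on the inner surface of the sphere
  })
);
_scene.add(mesh);
```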

Then we move the center point of the camera to the center of the ball:

 _camera.position.set(0, 0, 0);

Now we are inside the panorama sphere:

3.3 Event Handling

The panorama is now ready to browse, but you can only see the part in front of you; you can't drag it to see other parts. To precisely control dragging, zooming in, and zooming out, we manually add some events:

Listen for the mouse's mousedown event: set the drag flag _isUserInteracting to true, and record the starting screen coordinates as well as the starting camera lookAt coordinates.

_container.addEventListener('mousedown', (event)=>{
  event.preventDefault();
  _isUserInteracting = true;
  _onPointerDownPointerX = event.clientX;
  _onPointerDownPointerY = event.clientY;
  _onPointerDownLon = _lon;
  _onPointerDownLat = _lat;
});

Listen for the mouse's mousemove event and, when _isUserInteracting is true, calculate the current camera lookAt coordinates in real time.

_container.addEventListener('mousemove', (event) => {
  if (_isUserInteracting) {
    _lon = (_onPointerDownPointerX - event.clientX) * 0.1 + _onPointerDownLon;
    _lat = (event.clientY - _onPointerDownPointerY) * 0.1 + _onPointerDownLat;
  }
});

Listen for the mouse's mouseup event and set _isUserInteracting to false.

_container.addEventListener('mouseup', (event)=>{
 _isUserInteracting = false;
});

Of course, above we just changed the coordinates and did not tell the camera that it had changed. We do this in animate:

function animate() {
  requestAnimationFrame(animate);
  calPosition();
  _renderer.render(_scene, _camera);
  _renderer.render(_sceneOrtho, _cameraOrtho);
}

function calPosition() {
  _lat = Math.max(-85, Math.min(85, _lat));
  var phi = tMath.degToRad(90 - _lat);
  var theta = tMath.degToRad(_lon);
  _camera.target.x = _pRadius * Math.sin(phi) * Math.cos(theta);
  _camera.target.y = _pRadius * Math.cos(phi);
  _camera.target.z = _pRadius * Math.sin(phi) * Math.sin(theta);
  _camera.lookAt(_camera.target);
}


Listen for the mousewheel event to zoom in and out of the panorama. Note that the maximum zoom range maxFocalLength and the minimum zoom range minFocalLength are specified.

_container.addEventListener('mousewheel', (event) => {
  var ev = event || window.event;
  var down = true;
  var m = _camera.getFocalLength();
  down = ev.wheelDelta ? ev.wheelDelta < 0 : ev.detail > 0;
  if (down) {
    if (m > minFocalLength) {
      m -= m * 0.05;
      _camera.setFocalLength(m);
    }
  } else {
    if (m < maxFocalLength) {
      m += m * 0.05;
      _camera.setFocalLength(m);
    }
  }
});
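A note on the event itself: mousewheel is non-standard, and in current browsers the standard wheel event with deltaY serves the same purpose. A sketch, with the zoom step pulled out into a pure function (nextFocalLength is an illustrative name, not part of the plugin):

```javascript
// Clamp-and-step helper for the focal length; deltaY > 0 means zoom out.
function nextFocalLength(current, deltaY, min, max) {
  if (deltaY > 0) {
    return current > min ? current - current * 0.05 : current;
  }
  return current < max ? current + current * 0.05 : current;
}

// Wiring it up (browser code, shown for context):
// _container.addEventListener('wheel', function (e) {
//   var m = nextFocalLength(_camera.getFocalLength(), e.deltaY, minFocalLength, maxFocalLength);
//   _camera.setFocalLength(m);
// });
```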

Take a look at the results:

3.4 Adding tags

When viewing a panorama, we often need to mark certain specific locations, and these markers may have some events attached, such as you need to click on a marker to get to the next panorama.

Let’s take a look at how to add tags to a panorama and how to add events to those tags.

We probably don’t need to make these markers zoom in and out as the field of view changes. Based on this, we use an orthogonal projection camera to display the markers, and just give it a fixed viewing height:

  _cameraOrtho = new THREE.OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2, window.innerHeight / 2, -window.innerHeight / 2, 1, 10);
  _cameraOrtho.position.z = 10;
  _sceneOrtho = new Scene();

Use SpriteMaterial to implement text tags, or image tags:

// Create a text tag
function createLableSprite(name) {
  const canvas = document.createElement('canvas');
  const context = canvas.getContext('2d');
  const metrics = context.measureText(name);
  const width = metrics.width * 1.5;
  context.font = "10px SimSun";
  context.fillStyle = "rgba(0, 0, 0, 0.95)";
  context.fillRect(2, 2, width + 4, 20 + 4);
  context.fillText(name, 4, 20);
  const texture = new Texture(canvas);
  const spriteMaterial = new SpriteMaterial({ map: texture });
  const sprite = new Sprite(spriteMaterial);
  sprite.name = name;
  const lable = {
    name: name,
    canvas: canvas,
    context: context,
    texture: texture,
    sprite: sprite
  };
  _sceneOrtho.add(lable.sprite);
  return lable;
}
// Create the image tag
function createSprite(position, url, name) {
  const textureLoader = new TextureLoader();
  const ballMaterial = new SpriteMaterial({
    map: textureLoader.load(url)
  });
  const sp = {
    pos: position,
    name: name,
    sprite: new Sprite(ballMaterial)
  };
  sp.sprite.scale.set(32, 32, 1.0);
  sp.sprite.name = name;
  _sceneOrtho.add(sp.sprite);
  return sp;
}

With these tags created, we render them into the scene.

We must tell the scene where these markers are located. For intuitive understanding, we assign coordinates to the markers, similar to longitude and latitude, which we call lon and lat.

In this process, there are two coordinate transformations:

First conversion: Convert “latitude and longitude” to three dimensional coordinates, that is, x, y, z coordinates of the form we discussed above.

The geoPosition2World function performs this conversion and returns a Vector3 object. We can pass the current camera _camera to the object's project method to obtain a normalized coordinate, which tells us whether the mark is within the field of view: if each component of the normalized coordinate is between -1 and 1, the mark appears in our field of view, and we render it precisely.

Second conversion: Converts 3d space coordinates to screen coordinates.

If we applied the three-dimensional coordinates above directly to the mark, we would find that no matter how the field of view moves, the position of the mark never changes, because the calculated coordinates stay constant.

Therefore, we need to use the normalized coordinates above to convert the marker's three-dimensional space coordinates into real screen coordinates, which is what the worldPostion2Screen function does.

The implementations of geoPosition2World and worldPostion2Screen are not expanded here; if you are interested, you can find them in the GitHub source code. 😅
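For reference, a rough sketch of the math involved (the repo's actual functions return THREE.Vector3 objects and may differ in detail; the helpers below are illustrative only, using the same spherical convention as calPosition above):

```javascript
// "Latitude/longitude" in degrees to a point on a sphere of radius pRadius.
function geoPosition2World(lon, lat, pRadius) {
  var phi = (90 - lat) * Math.PI / 180;   // polar angle measured from +y
  var theta = lon * Math.PI / 180;
  return {
    x: pRadius * Math.sin(phi) * Math.cos(theta),
    y: pRadius * Math.cos(phi),
    z: pRadius * Math.sin(phi) * Math.sin(theta)
  };
}

// Normalized device coordinates (from Vector3.project(camera), each in [-1, 1])
// to pixel offsets from the screen centre.
function ndc2Screen(ndc, width, height) {
  return { x: ndc.x * width / 2, y: ndc.y * height / 2 };
}
```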

var wp = geoPosition2World(_sprites[i].lon, _sprites[i].lat);
var sp = worldPostion2Screen(wp, _camera);
var test = wp.clone();
test.project(_camera);
if (test.x > -1 && test.x < 1 && test.y > -1 && test.y < 1 && test.z > -1 && test.z < 1) {
    _sprites[i].sprite.scale.set(32, 32, 32);
    _sprites[i].sprite.position.set(sp.x, sp.y, 1);
} else {
    _sprites[i].sprite.scale.set(1.0, 1.0, 1.0);
    _sprites[i].sprite.position.set(0, 0, 0);
}

Now that the tag has been added to the panorama, let’s add a click event to it:

Three.js does not provide a separate way to add events to Sprite, we can do this with Raycaster.

Raycaster provides mouse pickup capabilities:

The setFromCamera function is used to establish the binding relationship between the current click coordinates (normalized) and the camera.

intersectObjects is used to determine which objects in a set are hit by that ray (clicked), returning an array of the hit objects.

So we can get the clicked object and do something based on it:

_container.addEventListener('click', (event)=>{
  _mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  _mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  _raycaster.setFromCamera(_mouse, _cameraOrtho);
  var intersects = _raycaster.intersectObjects(_clickableObjects);
  intersects.forEach(function (element) {
    alert("Click on:" + element.object.name);
  });
});

Click on a marker to enter the next panorama:

4. Panoramic marking

In order for the panorama to know where I want to place a tag, we need a tool that associates positions on the original flat image with positions on the panorama:


4.1 Requirements

  • Establish the mapping relationship between coordinates and panorama, and give a set of virtual coordinates for panorama

  • On a tiled panorama, you can add markers at any position and get the coordinates of the markers

  • Add markers to the preview panorama using coordinates. You will see the same markers as in the tiled panorama

4.2 Coordinates

On the 2D plane, we can listen to mouse events on the screen, but all we can get is the current mouse coordinates. What we need to do is convert those mouse coordinates into three-dimensional space coordinates.

It may seem impossible: how can two-dimensional coordinates be converted into three-dimensional coordinates? But we can do it by means of an intermediate coordinate, which we may call latitude and longitude.

Before we do that, let’s take a look at what longitude and latitude are.

4.3 Latitude and longitude

Using latitude and longitude, you can accurately locate any point on the earth. Its calculation rules are as follows:

The line connecting the South Pole to the North Pole is called a meridian, and its corresponding plane is the meridian plane. The meridian passing through the original site of the Greenwich Observatory in London is the 0° meridian, also called the prime meridian, and its plane is the prime meridian plane.

Longitude: the angle between a point's meridian plane and the prime meridian plane. East is positive, west is negative.

Latitude: the angle between the normal at a point on the sphere (the normal of the tangent plane at that point) and the equatorial plane. North is positive, south is negative.

Thus, every point on the earth can be mapped to a latitude and longitude, and correspondingly to a line of longitude and latitude.

Thus, even if we unroll the sphere into a plane, we can still express the position of a point by latitude and longitude:

4.4 Coordinate Conversion

Based on the above analysis, we can give the flat panorama a virtual "latitude and longitude". We use Canvas to draw a grid of meridians and parallels for it:

Convert mouse coordinates to latitude and longitude:

function calLonLat(e) {
  var h = _setContainer.style.height.split("px")[0];
  var w = _setContainer.style.width.split("px")[0];
  var ix = _setContainer.offsetLeft;
  var iy = _setContainer.offsetTop;
  iy = iy + h;
  var x = e.clientX;
  var y = e.clientY;
  var lonS = (x - ix) / w;
  var lon = 0;
  if (lonS > 0.5) {
    lon = -(1 - lonS) * 360;
  } else {
    lon = 1 * 360 * lonS;
  }
  var latS = (iy - y) / h;
  var lat = 0;
  if (latS > 0.5) {
    lat = (latS - 0.5) * 180;
  } else {
    lat = (0.5 - latS) * 180 * -1;
  }
  lon = lon.toFixed(2);
  lat = lat.toFixed(2);
  return { lon: lon, lat: lat };
}

Then use geoPosition2World and worldPostion2Screen to convert the latitude and longitude into three-dimensional coordinates. In this way, a point on the flat image is associated with three-dimensional coordinates.

5. Plug-in packaging

In the above code, we have implemented the functions of panorama preview and panorama tag. Now, we will package these functions as plug-ins.

A plug-in means that, with your code plus a little configuration, users can do what they want.

5.1 Panoramic preview package

Let’s see which configurations can be extracted:

var options = {
  container: 'panoramaConianer',
  url: 'resources/img/panorama/pano-7.jpg',
  lables: [],
  widthSegments: 60,
  heightSegments: 40,
  pRadius: 1000,
  minFocalLength: 1,
  maxFocalLength: 100,
  sprite: 'label',
  onClick: () => {}
}

  • container: id of the DOM container
  • url: image path
  • lables: array of tags in the panorama, in the format {position:{lon:114,lat:38},logoUrl:'lableLogo.png',text:'name'}
  • widthSegments: number of horizontal segments
  • heightSegments: number of vertical segments (smaller is rougher but faster, larger is finer but slower)
  • pRadius: radius of the panorama sphere; the default value is recommended
  • minFocalLength: minimum zoom focal length
  • maxFocalLength: maximum zoom focal length
  • sprite: the type of tag to display, label or icon
  • onClick: tag click event

The above configuration is user-configurable, so how do users pass in plug-ins?

We can declare some default configuration options in the plug-in; the user passes parameters in through the constructor, and the defaults are overwritten with Object.assign.

After that, you can access these variables through this.def, and simply replace the hard-coded values with the configuration.

options = {
    // Default configuration...
}

function tpanorama(opt) {
  this.render(opt);
}

tpanorama.prototype = {
  constructor: this,
  def: {},
  render: function (opt) {
    this.def = Object.assign(options, opt);
    // Initialize operation...
  }
}
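One subtlety worth noting in a small sketch: Object.assign(options, opt) writes the user's values into the shared options object, so a second instance would inherit the first instance's configuration. Copying into a fresh object avoids this (mergeOptions and the sample defaults below are illustrative, not the plugin's code):

```javascript
var defaults = { container: 'panoramaConianer', pRadius: 1000 };

// Merge user options over the defaults without mutating the defaults object.
function mergeOptions(opt) {
  return Object.assign({}, defaults, opt);
}

var def = mergeOptions({ pRadius: 500 });
// def.pRadius is 500 (the user value wins), def.container falls back to the
// default, and defaults.pRadius is still 1000 for the next instance.
```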

5.2 Panoramic markup encapsulation

The basic logic is similar to the above, but here are some of the extracted parameters.

var setOpt = {
  container: 'myDiv', // Setting container
  imgUrl: 'resources/img/panorama/3.jpg',
  width: '', // Specify the width; the height adapts automatically
  showGrid: true, // Whether to display the grid
  showPosition: true, // Whether to display the latitude/longitude prompt
  lableColor: '#9400D3', // Marker color
  gridColor: '#48D1CC', // Grid color
  lables: [], // Tags, e.g. {lon:114, lat:38, text:'tag one'}
  addLable: true, // Double-click to add a marker (latitude and longitude must be enabled)
  getLable: true, // Right-click to query a marker (latitude and longitude must be enabled)
  deleteLbale: true, // Middle-click to delete a marker (the latitude/longitude prompt must be enabled)
}

6. Publishing

From there, it’s time to think about how to make the plug-in available to users.

We mainly consider two scenarios: direct reference and npm install.

6.1 Direct JS reference

To avoid polluting the global scope, we wrap our code in a self-executing function (function(){}()) and then expose the plug-in on the global window object.

I put it in the originSrc directory.

(function (global, undefined) {

    function tpanorama(opt) {
        // ...
    }

    tpanorama.prototype = {
        // ...
    }

    function tpanoramaSetting(opt) {
        // ...
    }

    tpanoramaSetting.prototype = {
        // ...
    }

    global.tpanorama = tpanorama;
    global.tpanoramaSetting = tpanoramaSetting;
}(window))

6.2 Using npm install

Export the written plug-in directly:

module.exports = { tpanorama, tpanoramaSetting };

I put it in the SRC directory.

In package.json, point the entry "main" to "lib/index.js", and then fill in name, description, version, and so on.

First you need to have an NPM account and login. If you don’t have an account, use the following command to create one.

npm adduser --registry http://registry.npmjs.org

If you already have an account, you can use the following command to log in.

npm login --registry http://registry.npmjs.org

Once logged in successfully, you can publish:

npm publish --registry http://registry.npmjs.org

Note that I manually specified the registry in each of the commands above, because the npm source you are currently using may have been changed (perhaps to the Taobao mirror or a company mirror), in which case not specifying the registry manually would cause publishing to fail.

After publishing successfully, you can see your package directly on the npm website.

You can then install tpanorama directly with npm install and use it:

var { tpanorama, tpanoramaSetting } = require('tpanorama');

6.3 Babel compilation

Finally, don't forget that either way, the code should be compiled with Babel before being exposed to users.

Create a build command in scripts to compile the source files; what is ultimately exposed to users is lib and origin.

"build": "babel src --out-dir lib && babel originSrc --out-dir origin"

You can also specify other commands for users to test with. For example, I put all the examples in examples and defined the example command in scripts:

"example": "npm run webpack && node ./server/www"

In this way, users can clone the code and run npm run example locally to debug it.

7. Summary

Github address of this project: github.com/ConardLi/tp…

If there are any mistakes in this article, please correct them in the comments section. If this article has helped you, please like it and follow it.

Want to read more quality articles, can follow my Github blog, your star✨, like and follow is my continuous creation power!

After following the public account, reply "add group" to be pulled into a high-quality front-end communication group.