1. “Tip of the iceberg”

When using the echarts-GL globe, I noticed that the edges of the sphere are irregular, as in the example Echarts GL Earth. The code is very simple: it only uses textures, without importing any model. How these "tip of the iceberg" bumps were formed aroused my interest, so the journey began.

2. Tracing through echarts-gl

Once the dependencies are installed, the debug path begins.

  1. After commenting out heightTexture, the iceberg-tip bumps disappear, leaving a perfect sphere, as shown in the figure below.

So the effect can be attributed to the heightTexture configuration option.
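For context, a minimal globe option of the kind the example uses might look like this (a sketch only; the texture paths are placeholders, not the example's actual files):

```javascript
// Minimal echarts-gl globe option sketch; texture paths are placeholders.
option = {
  globe: {
    baseTexture: 'earth.jpg',
    // heightTexture drives the displacement; comment it out
    // and the bumps disappear, leaving a perfect sphere.
    heightTexture: 'earth-high.jpg',
    displacementScale: 0.1,
    shading: 'color'
  }
};
```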

  2. Searching for heightTexture in echarts-gl

This turns up the function getDisplacementTexture. It is used in four places; the key one is below.

```js
ecModel.eachComponent('globe', function (globeModel, idx) {
    var globe = globeModel.coordinateSystem;

    // Update displacement data
    var displacementTextureValue = globeModel.getDisplacementTexture(); // Get the displacement texture
    var displacementScale = globeModel.getDisplacemenScale(); // Get the displacement scale

    if (globeModel.isDisplacementChanged()) {
        if (globeModel.hasDisplacement()) {
            var immediateLoaded = true;
            __WEBPACK_IMPORTED_MODULE_5__util_graphicGL__["a" /* default */].loadTexture(displacementTextureValue, api, function (texture) {
                var img = texture.image;
                var displacementData = getDisplacementData(img, displacementScale); // Compute the displacement data
                globeModel.setDisplacementData(displacementData.data, displacementData.width, displacementData.height);
                if (!immediateLoaded) {
                    // Update layouts
                    api.dispatchAction({
                        type: 'globeUpdateDisplacment'
                    });
                }
            });
            immediateLoaded = false;
        }
        else {
            globeModel.setDisplacementData(null, 0, 0);
        }
        globe.setDisplacementData(
            globeModel.displacementData,
            globeModel.displacementWidth,
            globeModel.displacementHeight
        );
    }
});
```

In fact, the core here is to set displacementData, so the focus is on how to generate displacementData and how to use displacementData.

DisplacementData is generated by the getDisplacementData function, which is defined as follows:

```js
function getDisplacementData(img, displacementScale) {
    var canvas = document.createElement('canvas');
    var ctx = canvas.getContext('2d');
    var width = img.width;
    var height = img.height;
    canvas.width = width;
    canvas.height = height;
    ctx.drawImage(img, 0, 0, width, height);
    var rgbaArr = ctx.getImageData(0, 0, width, height).data;

    var displacementArr = new Float32Array(rgbaArr.length / 4);
    for (var i = 0; i < rgbaArr.length / 4; i++) {
        var x = rgbaArr[i * 4]; // Take the red component
        displacementArr[i] = x / 255 * displacementScale;
    }
    return {
        data: displacementArr,
        width: width,
        height: height
    };
}
```

The function draws the image (the one passed as heightTexture) onto a canvas and grabs its ImageData. ImageData is a one-dimensional array containing the image's pixel data, as shown in the figure below. Since each pixel occupies four slots (the four RGBA components), displacementArr has length rgbaArr.length / 4. The code takes the red component (i * 4); you could equally take the green component (i * 4 + 1), the blue component (i * 4 + 2), or the average of the three, with slightly different results depending on the displacement texture image. The resulting data looks like this:
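To illustrate that last point, here is a hypothetical variant of getDisplacementData (not echarts-gl's code) that averages the three channels; it is written against a raw RGBA array rather than a canvas so it can run anywhere:

```javascript
// Hypothetical variant of getDisplacementData that averages R, G and B
// instead of using only the red channel. Works on a raw RGBA array
// (the shape returned by ctx.getImageData(...).data).
function getDisplacementDataAvg(rgbaArr, width, height, displacementScale) {
  var displacementArr = new Float32Array(rgbaArr.length / 4);
  for (var i = 0; i < rgbaArr.length / 4; i++) {
    var r = rgbaArr[i * 4];     // red component
    var g = rgbaArr[i * 4 + 1]; // green component
    var b = rgbaArr[i * 4 + 2]; // blue component
    displacementArr[i] = (r + g + b) / 3 / 255 * displacementScale;
  }
  return { data: displacementArr, width: width, height: height };
}
```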

  3. Using displacementData. Tracing onward leads to _doDisplaceVertices. As the name implies, this function displaces the vertices, i.e. it rewrites the vertex data of the original geometry. (At the heart of WebGL are the vertex shader and the fragment shader.)
```js
    _doDisplaceVertices: function (geometry, globe) {
        // Displace the vertices
        var positionArr = geometry.attributes.position.value; // Vertex position data
        var uvArr = geometry.attributes.texcoord0.value; // UV coordinates

        var originalPositionArr = geometry.__originalPosition; // Original vertex positions
        if (!originalPositionArr || originalPositionArr.length !== positionArr.length) {
            originalPositionArr = new Float32Array(positionArr.length);
            originalPositionArr.set(positionArr);
            geometry.__originalPosition = originalPositionArr;
        }

        var width = globe.displacementWidth;
        var height = globe.displacementHeight;
        var data = globe.displacementData; // Our displacement data
        // Iterate over the vertices and update each position
        for (var i = 0; i < geometry.vertexCount; i++) {
            var i3 = i * 3; // Multiply by 3 because each vertex occupies 3 slots
            var i2 = i * 2; // Multiply by 2 because each UV coordinate occupies 2 slots
            // The original position
            var x = originalPositionArr[i3 + 1];
            var y = originalPositionArr[i3 + 2];
            var z = originalPositionArr[i3 + 3];

            // Look up the scale in displacementData via the UV coordinates
            var u = uvArr[i2++];
            var v = uvArr[i2++];

            var j = Math.round(u * (width - 1));
            var k = Math.round(v * (height - 1));
            var idx = k * width + j;
            var scale = data ? data[idx] : 0; // The offset ratio for this position

            // Update the vertex: push each point outward from its original
            // position by an offset determined by the red value of the
            // corresponding pixel in the heightTexture image.
            positionArr[i3 + 1] = x + x * scale;
            positionArr[i3 + 2] = y + y * scale;
            positionArr[i3 + 3] = z + z * scale;
        }

        geometry.generateVertexNormals(); // Regenerate vertex normals
        // Mark the data dirty and update the bounding box
        geometry.dirty();
        geometry.updateBoundingBox();
    },
```
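The texel lookup at the heart of that loop can be pulled out into a small standalone helper (for illustration only; this is not an echarts-gl API):

```javascript
// Sketch of the UV -> texel lookup used inside _doDisplaceVertices.
// u, v are in [0, 1]; data is the Float32Array built by getDisplacementData.
function sampleDisplacement(u, v, data, width, height) {
  var j = Math.round(u * (width - 1));  // column in the displacement grid
  var k = Math.round(v * (height - 1)); // row in the displacement grid
  var idx = k * width + j;              // flatten (row, column) into the 1D array
  return data ? data[idx] : 0;
}
```

Each vertex is then scaled by 1 + scale along its position vector, which is what pushes the mountains outward.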

UV mapping is covered in chapter 8 of the three.js ebook.

At this point, the basic logical trace ends.

  4. The original UV coordinates are already present in the attributes when the SphereGeometry is generated (computed from widthSegments and heightSegments). The underlying library echarts-gl relies on is claygl; you can look at its constructor if you are interested.
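For intuition, here is a simplified sketch of how such sphere attributes can be generated (this is not claygl's actual constructor): positions come from sweeping two angles over a (widthSegments + 1) × (heightSegments + 1) grid, and the UVs fall out of the same loop indices:

```javascript
// Simplified sphere-attribute generation: positions and UVs from a
// latitude/longitude sweep. A sketch of the idea, not claygl's real code.
function buildSphereAttributes(radius, widthSegments, heightSegments) {
  var positions = [];
  var uvs = [];
  for (var y = 0; y <= heightSegments; y++) {
    var v = y / heightSegments;    // v runs from pole to pole
    var phi = v * Math.PI;         // polar angle
    for (var x = 0; x <= widthSegments; x++) {
      var u = x / widthSegments;   // u wraps around the equator
      var theta = u * Math.PI * 2; // azimuthal angle
      positions.push(
        radius * Math.sin(phi) * Math.cos(theta),
        radius * Math.cos(phi),
        radius * Math.sin(phi) * Math.sin(theta)
      );
      uvs.push(u, v);
    }
  }
  return { positions: positions, uvs: uvs };
}
```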

3. three.js

So I thought about using three.js to achieve a similar effect.

Constructing 3D with three.js is just like shooting a movie: the scene, actors, lighting, camera, and director are all indispensable; they just have to come on in sequence.

```js
// At last, the entry point
window.onload = async function () {
  initScene(); // The scene is ready
  await initMesh(); // The actors are ready
  initLight(); // The lighting team is ready
  initCamera(); // The camera crew is ready
  initRenderer(); // The director cuts and renders the scene to the audience

  animate(); // Start the animation -- Action!
};
```

Texture assets (grab them first if you need them, or capture them from the echarts gallery with the network panel; the copies on Juejin are watermarked).

Complete code using only the plain texture:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Document</title>
    <style>
      html,
      body {
        height: 100%;
        margin: 0;
        padding: 0;
      }
    </style>
    <script src="three.js"></script>
    <script src="OrbitControls.js"></script>
  </head>
  <body>
    <div id="chart" style="height: 100%;"></div>
    <script>
      let container = document.getElementById("chart");
      let width = container.clientWidth;
      let height = container.clientHeight;
      let SCENE, CAMERA, RENDERER;

      const ImageLoader = new THREE.ImageLoader();

      function initScene() {
        SCENE = new THREE.Scene();
      }

      async function initMesh() {
        let axisHelper = new THREE.AxesHelper(250);
        SCENE.add(axisHelper);

        let geometry = new THREE.SphereGeometry(5, 40, 40); // radius, widthSegments, heightSegments

        let img = await ImageLoader.load("./earth.jpg");

        let texture = new THREE.Texture(img);

        texture.needsUpdate = true;

        let material = new THREE.MeshStandardMaterial({
          map: texture,
        });

        let sphere = new THREE.Mesh(geometry, material);
        SCENE.add(sphere);
      }

      function initLight() {
        // Point light
        let point = new THREE.PointLight(0xffffff);
        point.position.set(400, 0, 0); // Point light position
        SCENE.add(point); // Add the light to the scene
        // Ambient light
        let ambient = new THREE.AmbientLight(0xffffff);
        SCENE.add(ambient);
      }

      function initCamera() {
        let k = width / height; // Window aspect ratio
        let s = 10; // Display range coefficient: the larger it is, the larger the visible range

        // Create the CAMERA
        CAMERA = new THREE.OrthographicCamera(-s * k, s * k, s, -s, 1, 1000);
        CAMERA.position.set(200, 100, 100); // Set the camera position
        CAMERA.lookAt(SCENE.position); // Point the camera at the scene
      }

      function initRenderer() {
        // RENDERER
        RENDERER = new THREE.WebGLRenderer({
          antialias: true,
          alpha: true,
        });
        RENDERER.setSize(width, height); // Set the render area size
        RENDERER.setClearColor(0x000000, 1); // Set the background color
        container.appendChild(RENDERER.domElement); // Insert the canvas into the container

        new THREE.OrbitControls(CAMERA, RENDERER.domElement);
      }

      function animate() {
        RENDERER.render(SCENE, CAMERA); // Perform the render
        requestAnimationFrame(animate);
      }

      window.onload = async function () {
        initScene(); // The scene is ready
        await initMesh(); // The actors are ready
        initLight(); // The lighting team is ready
        initCamera(); // The camera crew is ready
        initRenderer(); // The director cuts and renders the scene to the audience

        animate(); // Start the animation -- Action!
      };
    </script>
  </body>
</html>
```

So we get a perfect sphere.

As suggested in my group chat, three.js's MeshStandardMaterial has a displacementMap option, so only initMesh needs to change:

```js
async function initMesh() {
  let axisHelper = new THREE.AxesHelper(250);
  SCENE.add(axisHelper);

  let geometry = new THREE.SphereGeometry(5, 40, 40); // radius, widthSegments, heightSegments

  let img = await ImageLoader.load("./earth.jpg");

  let heightImg = await ImageLoader.load("./earth-high.jpg");

  let texture = new THREE.Texture(img);

  let heightTexture = new THREE.Texture(heightImg);

  texture.needsUpdate = true;

  let material = new THREE.MeshStandardMaterial({
    displacementMap: heightTexture,
    displacementScale: 1.1,
    displacementBias: 4,
    map: texture,
  });

  material.displacementMap.needsUpdate = true; // very important

  let sphere = new THREE.Mesh(geometry, material);
  SCENE.add(sphere);
}
```

Note that you need to add material.displacementMap.needsUpdate = true, or the displacement map will not take effect.

displacementMap

Now we can see not only the tip of the iceberg but also the roof of the world. So I went on to try a normal map and a bump map.

normalMap

bumpMap

4. Displacement map vs. bump map vs. normal map

  • Displacement map: changes the vertex positions of the geometry, producing a large number of triangles and a correspondingly heavy computational load, so machine configuration (graphics card, memory, CPU) matters. The effect is the best of the three because the simulation is real: the bumps and dents genuinely interact with light and shadow.

  • Bump map: uses lighting and texture to create the illusion of a bumpy surface on a 3D model. The texture is a grayscale image applied to the surface together with simple lighting tricks, rather than actual bumps and dents in the geometry. At around 50% gray, the surface shows little detail; where the gray value brightens toward white, surface details appear raised; where it darkens toward black, they appear recessed. Bump maps are great for tiny model details, such as pores and wrinkles on skin.

  • Normal map: stores the surface normal direction at every point of the original object, encoded in the RGB color channels. You can think of it as describing a bumpy surface parallel to the original one, while the actual surface stays smooth. It is more efficient than real bumps for visual effect: with a light source at a specific position, even a low-detail surface can produce highly accurate lighting direction and reflections. (Baidu Encyclopedia)
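The grayscale convention described in the bump-map bullet can be written down directly (a sketch of the convention only, not three.js's shader code):

```javascript
// Bump-map grayscale convention: mid gray (~50%) is neutral,
// brighter values read as raised, darker as recessed.
function bumpHeight(gray) { // gray in [0, 255]
  return gray / 255 - 0.5;  // > 0 convex, < 0 concave, 0 neutral
}
```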

How to decide: normal or bump maps for small details, displacement maps for large outlines.
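To see why displacement mapping is the expensive option, a rough triangle count helps (a back-of-the-envelope sketch; pole rows actually degenerate, so the real count is slightly lower):

```javascript
// Rough triangle count for a displaced sphere grid: each of the
// widthSegments x heightSegments cells is split into two triangles.
function approxTriangleCount(widthSegments, heightSegments) {
  return widthSegments * heightSegments * 2;
}
```

A SphereGeometry(5, 40, 40) already carries roughly 3200 triangles, every vertex of which must be re-displaced whenever the height data changes, whereas a bump or normal map leaves the geometry untouched.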

5. Finally

claygl vs three.js

The "dead game" Dota 2: an example based on claygl

The "dead game" Dota 2: claygl-based code, including hero model data