This article is an entry in the "Good Article Call" activity. Click to see: back-end and big-front-end dual-track submissions, with a 20,000 yuan prize pool waiting for you!

Recently, Hongxing Erke's donation of 50 million yuan of supplies to Henan moved many netizens to tears. A donation of that size from an ordinary company might not stir such feeling, but Hongxing Erke's background makes it genuinely poignant: in 2020 the company's revenue was 2.8 billion yuan with a loss of 200 million yuan, and its official Weibo account had not even paid for a membership. Under those circumstances, it still boldly donated 50 million yuan.

Netizens said that Hongxing Erke is like the older generation who saved every penny and carefully stored it in a tin box: the moment the motherland was in need, it took out the tin box and handed everything over. Meanwhile, its most expensive shoes sell for just 249 yuan.

So I went to the official Hongxing Erke website to look at its shoes.

Good guy. It took 55 seconds for the site to finish loading… (It really does look long neglected. It's sad; as a front-end developer it hurt to see this.)

It happened to be the weekend, so I went to the nearest Hongxing Erke store and bought a pair of shoes for 136 yuan (really cheap, and most importantly, comfortable).

After I got home, I thought of the interactive 360° views that the Dewu (formerly "Poison") app offers for Adidas and Nike shoes, and figured I could build one for Hongxing Erke too, as a small contribution from a technical person.

Action

Once I had the idea, I got to work. I summarized the following steps:

1. Modeling

2. Create the scene using three.js

3. Import the model

4. Add the three.js controller

Since I had already learned some three.js, presenting the finished model was fairly basic. The troublesome part was the modeling, because we need to get a three-dimensional object into the computer. For a 2D object this is simple, as everyone knows: use a camera. A 3D object is different; the extra dimension increases the amount of data enormously. So I began consulting various references on how to build a model of a physical object.

After looking up a lot of material, I found there are basically two ways to build a shoe model:

1. Photogrammetry: taking photographs and converting them into a 3D model purely by algorithm, also known in graphics as monocular reconstruction.

2. LiDAR scanning: scanning the object into a point cloud, which also came up in He Tongxue's latest video.

Here is an outline of what I found; most are foreign sites/tools.

At first, most search results mentioned 123D Catch, and many videos said its models were fast and realistic. On further digging, though, it turned out that the product line was consolidated in 2017, and the ReMake it was merged into is paid software, so considering the cost I did not pursue it. (This was, after all, just a demo attempt.)

Later I found a piece of software called Polycam whose finished results looked very good.

But when I tried to use it, I found it requires a LiDAR scanner, which means an iPhone 12 Pro or later.

In the end I chose RealityCapture, which can synthesize a model from multiple pictures. After watching some bilibili videos I felt its output quality was also good, but it only runs on Windows and needs 8 GB of RAM. So I dug out my seven-year-old Windows PC… and was pleasantly surprised to find it still worked.

Modeling

Now for the main content. The protagonist is the pair of shoes I bought (the pair mentioned at the beginning).

Then I started shooting. First I took a casual set of photos around the shoe, but the resulting model was really not good enough…

Later I tried shooting against a white screen and added a layer of backdrop, but the result was still poor: the application mostly recognized the numbers in the background instead of the shoe.

And finally… with Nan Xi's help, we photoshopped the background white.

The final result was not bad: the basic point cloud model came out. (It felt great, like black technology from the movies.)

Here is what the model looks like. It is the best model out of a whole day of attempts (though still slightly rough).

To make the model as close to perfect as possible, I spent a whole day iterating on it. The shooting angle strongly affects model generation, so I took roughly 1 GB of pictures, about 500 in total (in the early stage I did not know how to tune the model, so I tried a lot of approaches).

Once we have the model, we can display it on the web. Here we use three.js. (Since many readers may not work in this field, I will keep things basic; please bear with me.)

Building the page

It consists of three parts: building the scene, loading the model, and adding the controller.

1. Build the 3D scene

First we load three.js:

<script type="module">
import * as THREE from 'https://cdn.jsdelivr.net/npm/[email protected]/build/three.module.js';
// The loader and controls used below ship as separate modules:
import { GLTFLoader } from 'https://cdn.jsdelivr.net/npm/[email protected]/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'https://cdn.jsdelivr.net/npm/[email protected]/examples/jsm/loaders/DRACOLoader.js';
import { OrbitControls } from 'https://cdn.jsdelivr.net/npm/[email protected]/examples/jsm/controls/OrbitControls.js';
</script>

Then create a WebGL renderer:

const container = document.createElement( 'div' );
document.body.appendChild( container );

const renderer = new THREE.WebGLRenderer( { antialias: true } );
renderer.setSize( window.innerWidth, window.innerHeight );
container.appendChild( renderer.domElement );

Next, add a scene and a camera.

let scene = new THREE.Scene();

The camera constructor signature is PerspectiveCamera( fov, aspect, near, far ).

// Set up a perspective camera
let camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.25, 1000 );
// Set the camera position
camera.position.set( 0, 1.5, -30.0 );

Then have the WebGL renderer draw the scene with the camera.

function render() {
  renderer.render( scene, camera );
}
render();

2. Model loading

Our exported model is in OBJ format and very large, so I compressed it into glTF and then GLB. three.js already ships a GLTF loader, so we can use it directly.

// Load the model
const gltfloader = new GLTFLoader();
const draco = new DRACOLoader();
draco.setDecoderPath('https://www.gstatic.com/draco/v1/decoders/');
gltfloader.setDRACOLoader(draco);
gltfloader.setPath('assets/obj4/');
gltfloader.load('er4-1.glb', function (gltf) {
  gltf.scene.scale.set(0.2, 0.2, 0.2); // Set the scale
  gltf.scene.rotation.set(-Math.PI / 2, 0, 0); // Set the angle
  const Orbit = new THREE.Object3D();
  Orbit.add(gltf.scene);
  Orbit.rotation.set(0, Math.PI / 2, 0);

  scene.add(Orbit);
  render();
});
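As a sketch of the OBJ → glTF → GLB compression step mentioned above, one common route is the npm tools obj2gltf and gltf-pipeline (the file names here are illustrative, not the ones from this project):

```shell
# Install the conversion tools (one-time)
npm install -g obj2gltf gltf-pipeline

# 1. Convert the photogrammetry OBJ (plus its textures) to glTF
obj2gltf -i shoe.obj -o shoe.gltf

# 2. Apply Draco mesh compression and pack everything into a single binary GLB
gltf-pipeline -i shoe.gltf -o shoe.glb -d
```

The `-d` flag enables Draco compression, which is why the loading code above needs a DRACOLoader with a decoder path.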

But if we open the page with the code above, it will be completely dark, because we have not added any light yet. So let's add some light to illuminate the shoes.

// Set the light (ambient light illuminates the whole scene evenly, so it has no position)
const ambientLight = new THREE.AmbientLight( 0xffffff, 4 );
scene.add( ambientLight );

Now we can see the shoes clearly, as if seeing light in the darkness. But we still cannot control the view with the mouse or gestures, so we need three.js's controls to let us rotate the model.

3. Add controllers

const controls = new OrbitControls( camera, renderer.domElement );
controls.addEventListener( 'change', render );
controls.minDistance = 2;  // Limit zoom
controls.maxDistance = 10;
controls.target.set( 0, 0, 0 );  // The center of rotation
controls.update();
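One small thing the snippets above leave out is handling window resizes; without it the canvas gets stretched when the browser window changes size. A minimal sketch, with the aspect math pulled into a plain function (`camera`, `renderer`, and `render` are the objects from the snippets above):

```javascript
// Pure helper: the camera's new aspect ratio after a resize.
function computeAspect(width, height) {
  return width / height;
}

// Re-fit the camera and canvas to the new window size, then redraw.
function onWindowResize() {
  camera.aspect = computeAspect(window.innerWidth, window.innerHeight);
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
  render();
}

// Only wire up the listener in a browser environment.
if (typeof window !== 'undefined') {
  window.addEventListener('resize', onWindowResize);
}
```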

Now we can view the shoes from every angle.

And we’re done!

Online experience address: resume.mdedit.online/erke

Open source address (including tools, steps, and the actual demo): github.com/hua1995116/…

Follow-up plans

Due to limited time (only one weekend day), I did not get a perfect model, so I will keep exploring this. Later I want to see whether the pipeline can be automated, from shooting all the way to displaying the model. And once we have a model, AR shoe try-on is not far off either. If you have better ideas or suggestions, feel free to talk to me.

Finally, many thanks to Nan Xi for setting aside things she had originally planned to help with the shooting and post-processing, and for spending a whole day on the model with me. (Shooting under limited conditions is really hard.)

I also wish Hongxing Erke a long future as a company: may it keep innovating and make more and better sportswear, so that the goodwill the public feels right now can last.

Appendix

A few shooting tips, provided by the official documentation.

1. Do not limit the number of images; RealityCapture can process any number of them.

2. Use high-resolution images.

3. Each point on the scene surface should be clearly visible in at least two high quality images.

4. Move around the object in a circular manner when taking the photo.

5. Do not move more than 30 degrees between shots.

6. Start by photographing the whole subject, then move around it to capture the details, keeping everything at the same scale in frame.

7. Make a complete loop. (Do not stop halfway around.)

Take a look at my previous highly praised articles; you might get even more out of them!

  • 2021 Front-End Learning Path Booklist (Path to Self-Growth): 570+ likes

  • From cracking a design website: a detailed tutorial on front-end watermarking: 790+ likes

  • This article unlocks the secrets of "file download": 140+ likes

  • 10 cross-domain solutions (with the ultimate trick): 940+ likes

Conclusion

❤️ Follow + like + favorite + comment + share ❤️. Creating original content is not easy; your encouragement helps me write better articles.

Follow my WeChat public account, "Qiufeng's Notes", which focuses on front-end interviews, engineering, and open source.