I am participating in the “Spring Festival Creative Submission Contest”.
Online address: moewang0321.gitee.io/ar-blessing…
I originally wanted to let you preview the page directly from Gitee Pages, but for some reason the image cannot be recognized there. Since I still want you to see the effect, the online address above is exposed through an intranet tunnel instead.
Project address: gitee.com/moewang0321…
Other articles in this series
“New Year Ideas”: Write a blessing for the New Year!
“New Year Ideas”: A fireworks show set to festive music 🎊🎊
Implementation effect
Preface
In the first article of this series I implemented a simple version of Alipay’s “write the character Fu” effect. At the time I also wanted to reproduce Alipay’s “scan for Fu” feature, but after a lot of research I found it cannot be done with the front end alone. So I changed direction: use AR to scan a Fu image and summon the mascot of the Year of the Tiger. The result in this article may look modest, but I offer it as a starting point and hope more experienced readers will show me better approaches (the other Spring Festival entries are really impressive; everyone’s imagination and skill far exceed mine).
The technical route: AR.js + A-Frame + ARToolKit + obj2gltf
About AR.js and A-Frame
I’ll start with a brief introduction to AR.js and A-Frame, because A-Frame does a lot of work for us behind what looks like very little code; more on that later.
AR.js
AR.js is a lightweight JavaScript library for augmented reality on the web, supporting both marker-based and location-based AR. It is cross-browser compatible and builds on WebGL and WebRTC, which means it works on Android devices and on iPhones running iOS 11 or later. By wrapping several frameworks, including Three.js, A-Frame, and ARToolKit, AR.js makes it easy and efficient to bring AR into web applications. Its advantages:
- Cross-browser compatibility
- High performance: 60 fps even on older devices
- Web-based: no installation required
- Open source and free
- Available on all mobile devices that support WebGL and WebRTC
- No additional or unusual hardware required
- Can be used with fewer than 10 lines of HTML
A-Frame
A-Frame is a web framework for building virtual reality (VR) applications. Developed by the Mozilla VR team, which pioneered WebVR, it is the mainstream solution for creating WebVR content; it is fully open source and has grown one of the leading VR developer communities.

A-Frame is HTML-based and easy to pick up, but it is more than a 3D scene renderer or a markup language: its core idea is a declarative, extensible, componentized programming structure built on top of three.js.
A-Frame lets us develop rapidly with component tags in pure HTML. Basic example code is attached below, and it will feel very familiar to Vue developers:
<html>
<head>
<script src="https://aframe.io/releases/1.1.0/aframe.min.js"></script>
</head>
<body>
<a-scene>
<a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
<a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
<a-cylinder
position="1 0.75 3"
radius="0.5"
height="1.5"
color="#FFC65D"
></a-cylinder>
<a-plane
position="0 0 -4"
rotation="-90 0 0"
width="4"
height="4"
color="#7BC8A4"
></a-plane>
<a-sky color="#ECECEC"></a-sky>
</a-scene>
</body>
</html>
To summarize A-Frame’s features:
- Simplified VR production: including a single script tag and an `<a-scene>` element is enough to get started; A-Frame handles the 3D and WebVR boilerplate for you.
- Declarative HTML: HTML is easy to read, understand, and copy-paste.
- Entity-component architecture: A-Frame is built on the powerful three.js framework and provides a declarative, componentized, reusable entity-component structure. HTML is just the tip of the iceberg; developers are free to use JavaScript, DOM APIs, three.js, WebVR, and WebGL.
- High performance: A-Frame is optimized for WebVR from the ground up. Although it uses the DOM, its elements do not touch the browser’s layout engine; 3D object updates are all performed in a single requestAnimationFrame with low memory overhead, and it can even run like a native application (90+ fps).
- Tool independence: built on HTML, A-Frame is compatible with most development libraries, frameworks, and tools, such as React, Vue.js, Angular, D3.js, Ember.js, and jQuery.
Implementation
Preparing Resources
First, prepare the A-Frame script and the AR.js build with NFT (natural feature tracking) support (both can be taken from my project), an image to scan (as visually rich as possible, JPG format, roughly equal width and height), and a mascot model to display on recognition (OBJ or glTF format).
Marker image
First we need to process the image. Since we are recognizing a natural image, we use ARToolKit (the installer is included in the project repository) to extract feature points from it. The more feature points, the more accurate the recognition.
After ARToolKit is installed, its default directory is C:\Program Files (x86)\ARToolKit5. Go into the bin directory, create an image folder, put the chosen image in it, then open a terminal in bin and run:
genTexData.exe <image relative path>
Then press Enter twice: these two prompts select the feature-extraction levels. The larger the value, the more features are extracted and the better tracking works when the camera is close to the image. The defaults are used here.
Next, choose the minimum and maximum image resolution (in DPI), which depends on camera resolution and shooting distance; 20 to 120 is usually appropriate. Our image’s maximum resolution is 96, so I entered 20 to 96.
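The resolution values genTexData asks about are in DPI. If you know (or decide) the physical size the image will be displayed or printed at, its maximum usable DPI is just pixel width divided by width in inches. A quick sketch (the helper name is mine, not part of any tool):

```javascript
// Estimate an image's maximum DPI from its pixel width and its
// physical width in millimetres (25.4 mm per inch).
function estimateDpi(widthPx, physicalWidthMm) {
  const widthInches = physicalWidthMm / 25.4;
  return widthPx / widthInches;
}

// e.g. a 960 px wide image displayed 254 mm (10 in) wide:
console.log(Math.round(estimateDpi(960, 254))); // 96
```

With that number in hand, a sensible answer to the genTexData prompt is roughly 20 up to the estimated DPI, matching the 20-to-96 range used above.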
.fset, .fset3, and .iset files will then be generated in the image directory. These files are what we need.
The extracted feature set can be inspected visually with dispFeatureSet.exe (also in the bin directory).
Mascot model
If the model is already in glTF format, great. Other formats need to be converted to the common OBJ format first, then converted to glTF with the obj2gltf tool (provided in the repository):
node bin/obj2gltf.js -i <obj model path> -o <output path>/<model name>.gltf
Code
First, we build a scene with a-scene, enable AR.js with the arjs attribute, add gesture detection with gesture-detector, hide the VR button with vr-mode-ui="enabled: false;", and place a camera component to use the device camera:
<a-scene arjs vr-mode-ui="enabled: false;" gesture-detector>
<a-entity camera></a-entity>
</a-scene>
Next, we put the generated model, image, and feature files into the project directory, use the a-nft component, and configure it accordingly:
<a-nft
type="nft"
url="./assets/2"
>
<a-entity gltf-model="./assets/tiger.gltf" scale="1 1 1" position="0 0 0">
</a-entity>
</a-nft>
Note that the a-nft url is the path to the feature files: the image name with no suffix. Inside the a-nft element we add the model to show once the image is recognized, and set its scale and position.
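If the model jitters while tracking, the a-nft element also accepts smoothing attributes. The values below are the ones shown in the AR.js image-tracking documentation; treat them as a starting point rather than tuned settings:

```html
<a-nft
  type="nft"
  url="./assets/2"
  smooth="true"
  smoothCount="10"
  smoothTolerance=".01"
  smoothThreshold="5"
>
  <a-entity gltf-model="./assets/tiger.gltf" scale="1 1 1" position="0 0 0">
  </a-entity>
</a-nft>
```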
The problem
At this point the code is complete, so we excitedly start live-server in the project and visit the page from a phone.
And then we hit a wall: navigator.mediaDevices is undefined. Browsers only expose the camera API in a few (secure) contexts:
- file
- localhost
- https
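A rough way to check whether a given page URL will get camera access is sketched below. This is a simplification of the browser’s real secure-context rules (it ignores cases like *.localhost subdomains), and the function name is mine:

```javascript
// Rough check: will navigator.mediaDevices be available at this URL?
// Secure contexts (simplified): https, file, and loopback hosts.
function cameraAllowed(pageUrl) {
  const u = new URL(pageUrl);
  if (u.protocol === 'https:' || u.protocol === 'file:') return true;
  return u.hostname === 'localhost' || u.hostname === '127.0.0.1';
}

console.log(cameraAllowed('http://localhost:8080/'));    // true
console.log(cameraAllowed('http://192.168.1.5:8080/'));  // false
console.log(cameraAllowed('https://example.ngrok.io/')); // true
```

This is exactly why visiting the dev server over plain HTTP from a phone’s IP address fails, while the HTTPS tunnel address below works.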
So how do we debug on a phone? With an intranet tunneling tool I have been using: ngrok (its installation package is also stored in the repository).
After unpacking it, run `ngrok http 8080`. On first use you need to register an account and bind your authtoken, then restart ngrok. On success it prints the forwarding addresses: the local one plus HTTP and HTTPS URLs; we use the HTTPS one.
Opening that address on the phone now prompts for camera access as expected.
Because all traffic goes through the tunnel, opening the page, recognizing the image, and loading the model are all on the slow side.
Final thoughts
As for the image recognition itself: as a front-end developer, I suspect that recognizing an arbitrary handwritten “Fu” character would require training a model, which I am not good at, so I switched to recognizing a fixed natural image instead. I look forward to seeing better solutions from more experienced readers.