This article consists of three parts. The first covers WebXR: its concept, standards, advantages, and mainstream development methods. One of those methods, developing with an encapsulated third-party library, leads into the second part: the A-Frame framework, its features, and its application of the ECS architecture. The third part walks through the basics of A-Frame development via a small game demo.

You can start with a small game experience:

Try the game first: webxr-Game.zlxiang.com

Source code address: github.com/zh-lx/webxr…

WebXR

What is WebXR?

WebXR is a set of standards for rendering 3D scenes that present virtual worlds (virtual reality, or VR) or add graphic imagery to the real world (augmented reality, or AR). The core of the feature set is the WebXR Device API, which manages the selection of output devices, renders 3D scenes to the selected device at an appropriate frame rate, and manages motion vectors created with input controllers.

WebXR-compatible devices include immersive 3D motion- and position-tracking headsets, glasses that overlay graphics on the real-world scene through their frames, and handheld phones that augment reality by capturing the world with a camera and enhancing the scene with computer-generated imagery.

To accomplish these things, the WebXR Device API provides the following key capabilities:

  • Find a compatible VR or AR output device
  • Render the 3D scene to the device at the appropriate frame rate
  • (Optional) Mirror the output to a 2D display
  • Create vectors that represent the movements of input controls
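To make the first capability concrete, here is a minimal feature-detection sketch using the WebXR Device API's navigator.xr.isSessionSupported() method. The xr object is passed in as a parameter (in a real page it would be navigator.xr) purely so the logic is self-contained:

```javascript
// Ask a WebXR Device API object whether an immersive VR session is
// supported. In the browser, call supportsImmersiveVR(navigator.xr).
async function supportsImmersiveVR(xr) {
  if (!xr) return false; // the browser has no WebXR Device API at all
  try {
    return await xr.isSessionSupported('immersive-vr');
  } catch (e) {
    return false; // e.g. blocked by a permissions policy
  }
}
```

If this returns true, the page can then call navigator.xr.requestSession('immersive-vr') from a user-gesture handler to start rendering to the device.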

WebXR API is built on the early WebVR API. Now in addition to VR, it also adds support for AR. VR and AR are distinguished from sensory experience:

  • VR is an interactive experience between the user and a purely virtual scene, mediated by input and output peripherals (hardware and software such as headsets, joypads, and motion sensing)
  • AR also lets the user experience additional virtual content with the help of peripherals; the difference is that the virtual content is superimposed on the real world, either by projection or by video overlay

The lifecycle of a WebXR application

A WebXR application must go through the following life cycle:

Advantages of WebXR developing VR applications

WebXR brings XR applications to the Web, supporting scenarios where native XR applications are not well suited, such as short marketing pages, online immersive videos, e-commerce, online mini-games, and art creation.

Compared with native XR applications, WebXR has the following advantages:

  • Immediacy of the Web: We can simply share a piece of content with a link and then click and use that content immediately. From the user’s point of view, this is an advantage — you don’t need to install an App to use the content. From a developer’s point of view, we’re in complete control of what we do, and we don’t have to ask permission or go through a regulatory or approval process to release an app.
  • Web standards stability: Because of Web standards, WebXR apis are almost never removed by browsers once they are published, so applications built with WebXR can remain stable for a long time, unlike native applications that need to be constantly adapted with system upgrades.
  • Web development has a large base of practitioners: currently XR is not yet popular in Web development, but the base of Web developers is very large. Once WebXR development catches on among them, the technology is bound to develop rapidly and prosper.

Development of WebXR applications

There are three mainstream WebXR application development methods:

Use packaged third-party libraries

For users without a WebGL background, learning and development costs are relatively high, so there are several WebGL-based libraries on the market, such as A-Frame, Babylon.js, and three.js, to help us get started with WebXR development quickly.

WebGL + WebXR api

Developing with WebGL plus the WebXR API directly is relatively low-level. Working close to the bottom layer, especially the rendering module, lets us apply optimizations that improve XR performance and experience.

Traditional 3D engine + Emscripten

In traditional 3D application development, we generally use some well-known 3D engines such as Unity, Unreal, etc. With the help of Emscripten, we can compile C and C++ code into WebAssembly, so as to realize XR on the Web side.

Aframe framework

Introduction

A-Frame is a web development framework for building virtual reality (VR) applications. It is based on HTML, making it very easy to get started. But A-Frame is more than just a 3D scene rendering engine or markup language: its core idea is to provide a declarative, extensible, component-based programming structure on top of three.js.

It was built by the Mozilla VR team, the originators of WebVR, and is currently the mainstream technology solution for developing WebVR content; it is now maintained by A-Frame's co-creators at Supermedium. As an independent open source project, A-Frame has grown into one of the largest VR communities.

Features

  • Simple VR creation: just include a <script> tag and an <a-scene>, and A-Frame will automatically generate the boilerplate for 3D rendering, VR-related settings, and default interaction controls. There is nothing to install and no build step.
  • Declarative HTML: A-Frame encapsulates a lot of 3D logic in HTML tags that are easy to read, understand, and copy-paste.
  • ECS architecture: A-Frame is built on the powerful three.js framework and provides a declarative, componentized, reusable entity-component architecture. HTML is just the tip of the iceberg; developers are free to use JavaScript, DOM APIs, three.js, WebVR, and WebGL.
  • High performance: A-Frame optimizes WebVR from the ground up. Although it uses the DOM, its elements do not touch the browser's layout engine; 3D object updates are all performed in memory inside a single low-overhead requestAnimationFrame, so it can even run like a native application (90+ FPS).
  • Cross-platform: A-Frame can build VR applications compatible with major headsets such as HTC Vive, Rift, Daydream, GearVR, Pico, and Oculus, and they even run on ordinary computers and phones.
  • Tool independence: built on top of HTML, A-Frame is compatible with most development libraries, frameworks, and tools such as React, Vue, and Angular.
  • Visual inspection tool: A-Frame provides a convenient built-in 3D visual inspector. Open any A-Frame scene and press the inspector keyboard shortcut (<ctrl> + <alt> + i) to launch it.
  • Rich components: A-Frame ships with a rich set of built-in components, including geometries, materials, lights, animations, models, raycasters, shadows, positional audio, text, and Vive/Touch/Daydream/GearVR/Cardboard controls. There are also many community-contributed components, such as aframe-particle-system-component, aframe-physics-system, networked-aframe, oceans, mountains, speech recognition (aframe-speech-command-component), motion capture (aframe-motion-capture), teleportation (aframe-teleport-controls), aframe-super-hands-component, and augmented reality, to name a few.

ECS architecture

ECS stands for Entity-Component-System, an architectural pattern used mainly in game development. The ECS architecture follows the design principle that composition is better than inheritance hierarchies, and it offers great flexibility.

Composition

Entity

An entity is an object that exists in a game, but is actually a collection of components.

The <a-entity> element is used in A-Frame to represent an entity, which, as defined in the ECS architecture, is a placeholder object whose appearance, behavior, and functionality we provide by attaching components. Position, rotation, and scale are inherent components of every entity.

In code, an entity can be thought of as an HTML tag:

<!-- An empty entity that has no appearance, behavior, or function -->
<a-entity></a-entity>

<!-- We can add geometry and material components to the entity to give it shape and appearance -->
<a-entity geometry="primitive: box" material="color: red"></a-entity>

Component

A component is a reusable and modular block of data that we insert into an entity to add appearance, behavior, or functionality. In AFrame, components modify 3d object entities in the scene, and we combine components together to build complex objects (which actually encapsulate three.js and JS code logic).


In code, a component can be thought of as an attribute of an HTML tag:

<!-- Add a position component to the entity to change its position in 3D coordinates -->
<a-entity position="1 2 3"></a-entity>

You can register a component with the AFRAME.registerComponent API:

<script>
  AFRAME.registerComponent('very-high', {
    init: function () {
      this.el.setAttribute('position', '0 9999 0');
    }
  });
</script>

<a-entity very-high></a-entity>

System

A system provides global scope, services, and management for a class of components. It provides common APIs (methods and properties) for them, can be accessed through the scene element, and helps components interact with the global scene.

A system is registered through AFRAME.registerSystem in the same way as a component. The following code registers a car system that serves a car component. The car component can access its namesake system through this.system, and the system sets the speed of the component's entity according to the car's type.

AFRAME.registerSystem('car', {
  getSpeed: function (type) {
    if (type === 'tractor') {
      return 40;
    } else if (type === 'sports car') {
      return 300;
    } else {
      return 100;
    }
  }
})

AFRAME.registerComponent('car', {
  schema: {
    type: { default: 'tractor' }
  },
  init: function () {
    this.el.setAttribute('speed', this.system.getSpeed(this.data.type));
  }
});

Advantages

The ECS architecture has been tested over a long time in 3D and VR game development; the well-known Unity game engine uses it. So what advantages does ECS have over OOP?

The biggest difference between the ECS architecture and object-oriented architecture is that object orientation builds complex classes through inheritance, while ECS builds complex entities through composition. In the OOP model, a new type that needs features from several unrelated old types does not inherit well. ECS decouples functionality into many small modules and integrates them with minimal coupling, making it far more flexible.

Here’s an example:

Now we have a game with players, enemies, buildings, trees, etc. After a while of development, we need to add a class of buildings that will attack.

In the object-oriented style, after a long inheritance chain, an attacking building can no longer inherit directly from both the Building and Enemy classes (Enemy inherits from the Dynamic class):

In the ECS architecture, since each component is a minimum unit and decoupled from each other, you only need to combine Position, Rotation, Scale, Recover, and Attack components to build a new AttackBuilding entity.
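The AttackBuilding example can be sketched in a few lines of framework-free JavaScript: an entity is just a named bag of components, and a system operates on every entity that carries the relevant component. All names here are illustrative, not A-Frame APIs:

```javascript
// An entity is only an id plus a bag of components.
function createEntity(name) {
  return { name, components: {} };
}

function addComponent(entity, componentName, data) {
  entity.components[componentName] = data;
}

// Compose an AttackBuilding out of small, decoupled components --
// no inheritance chain is involved.
const attackBuilding = createEntity('AttackBuilding');
addComponent(attackBuilding, 'position', { x: 0, y: 0, z: -20 });
addComponent(attackBuilding, 'recover', { hpPerSecond: 5 });
addComponent(attackBuilding, 'attack', { damage: 10, range: 30 });

// A "system" queries for entities that have a given component.
function entitiesWith(entities, componentName) {
  return entities.filter((e) => componentName in e.components);
}
```

An attack system would call entitiesWith(allEntities, 'attack') each frame and never needs to know whether an entity is a building, an enemy, or anything else.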

Key concepts in VR development

VR development is based on 3D. In almost all 3D development, there are two relatively important concepts: camera and 3D coordinate system. Understanding these two concepts is the foundation of 3D development.

Camera

The camera defines the angle from which the user views the scene. You can think of the camera as the observer's eye: only what the camera sees will appear on the screen canvas. Cameras are typically paired with control components that allow input devices to move and rotate them.

The two commonly used cameras are OrthographicCamera and PerspectiveCamera: PerspectiveCamera is typical for 3D scenes, while OrthographicCamera is usually used for 2D rendering.

Orthographic camera

With an OrthographicCamera, objects exist in three dimensions, but the human eye only sees the front, not the hidden back; what you see is a 2D projection. The process of transforming spatial geometry into a two-dimensional graphic is projection, and different projection methods mean different algorithms for computing the projected size.

Perspective camera

The result of a PerspectiveCamera depends not only on the geometry's angles but also on its distance. This is how human eyes view the world: for example, the farther away you look along a railway, the narrower the gap between the two rails appears.

A-Frame is based on three.js. Whether you use orthographic or perspective projection, three.js encapsulates the relevant projection algorithms, so you only need to choose the projection method that fits your application scenario. With an OrthographicCamera object, three.js automatically computes the geometry's projection with the orthographic algorithm; with a PerspectiveCamera object, it uses the perspective projection algorithm.
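The difference between the two projections can be illustrated with a toy calculation (this is not three.js code, just the underlying idea): orthographic projection drops the depth coordinate, while perspective projection divides by it, so distant points shrink toward the center:

```javascript
// Orthographic projection: screen position ignores depth entirely.
function orthographicProject(point) {
  return { x: point.x, y: point.y };
}

// Perspective projection: divide by depth z (with an assumed focal
// distance d), so the same offset looks smaller when farther away.
function perspectiveProject(point, d = 1) {
  return { x: (point.x * d) / point.z, y: (point.y * d) / point.z };
}
```

A rail one meter to the side of the camera axis projects to 0.5 at z = 2 but only 0.1 at z = 10 under perspective projection, which is exactly the converging-tracks effect; under orthographic projection it stays at 1 regardless of distance.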

Three dimensional coordinate system

The 3D coordinate system in A-Frame is a right-handed coordinate system, and the default camera orientation is the following view:

Distance units

Distances in A-Frame are in meters, because the WebXR API returns pose data in meters.

Rotation units

Rotation in A-Frame is specified in degrees, which are converted to radians inside three.js. To determine the positive direction of rotation, use the right-hand rule: point your thumb in the positive direction of the axis, and the direction your fingers curl is the positive direction of rotation.
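The degree-to-radian conversion that three.js performs internally is just the standard formula, shown here as a one-line sketch:

```javascript
// 360 degrees = 2π radians.
const degToRad = (degrees) => (degrees * Math.PI) / 180;
const radToDeg = (radians) => (radians * 180) / Math.PI;
```

So a rotation of 180 degrees about the y axis on an entity corresponds to π radians inside three.js.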

Start aframe development with a small game

Now that we know some of the basic concepts of WebXR and A-Frame, we can try our hand at making a little game. Through the production process of this mini-game, you will learn some common APIs for A-Frame development and be able to get started.

Build the Aframe development environment

Start by including the A-Frame framework. It supports a variety of installation methods (github.com/aframevr/af…); here we include the script tag directly in the HTML:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>WebXR Game</title>
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
  </head>
  <body>
  </body>
</html>

Note that assets, textures, and models in A-Frame usually need to be loaded remotely. If you open the HTML page directly from a local absolute path, you will not be able to access those resources because of cross-origin restrictions, so you need to serve the HTML file from a host or localhost for development:

Here I set up a devServer via Webpack to serve the local HTML files.

Add primitives/entities

What are primitives

As mentioned earlier, A-Frame creates an entity representing an object in the VR world through the <a-entity> tag. A primitive is also an HTML tag, such as <a-box> or <a-sphere>, which is essentially a convenience wrapper around an entity with preset components (see aframe.io/docs/1.3.0/…).

Add a scene

After including the A-Frame framework, we add an <a-scene> primitive to the body. <a-scene> is the scene container that holds all entities; all of our entities and primitives need to be added inside it. <a-scene> takes care of all the setup we need for XR development:

  • Set up the canvas, renderer, and render loop
  • Provide a default camera and lighting
  • Set up webvr-polyfill and VREffect
  • Add an enter-VR interface to launch the WebXR API
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>WebXR Game</title>
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene></a-scene>
  </body>
</html>

After adding <a-scene>, we can see a VR icon in the lower-right corner of the page, indicating that it was added successfully. Clicking the icon enters VR mode and launches the WebXR API:

Bring in community resources

For us Web developers, the hardest part of VR is probably building suitable 3D models, but the good news is that the community has a lot of packaged resources at our disposal. The A-Frame Registry collects and organizes these resources for developers to discover and reuse, and many of them work right out of the box.

Here I use the aframe-environment-component, which helps us quickly create a beautiful scene: include its script and add an environment component to the <a-scene>:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>WebXR Game</title>
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://unpkg.com/aframe-environment-component/dist/aframe-environment-component.min.js"></script>
  </head>
  <body>
    <a-scene environment="preset: forest;"></a-scene>
  </body>
</html>

A scene of the forest appeared on our screen:

Add a wall

Next we add a wall to the scene that will show the start and end states, score, and health of our game. Using the <a-box> primitive, create a cuboid as a wall in the scene:

<a-scene environment="preset: forest;">
  <a-box></a-box>
</a-scene>

We said that a primitive is an entity-component wrapper; the above code is equivalent to:

<a-scene environment="preset: forest;">
  <a-entity geometry="primitive: box;"></a-entity>
</a-scene>

Add 3D coordinate transformation

Add scale="30 20 4" to our wall to make it 30 meters long along the x axis, 20 meters high along the y axis, and 4 meters deep along the z axis; add position="0 0 -20" to place it 20 meters away in the negative z direction:

<a-scene environment="preset: forest;">
  <a-box scale="30 20 4" position="0 0 -20"></a-box>
</a-scene>

Apply image textures

Add a src attribute to <a-box> and specify an image address, and A-Frame will render the image as a texture on the surface of the object. With the following code, we apply a stone texture to the wall:

<a-scene environment="preset: forest;">
  <a-box
    scale="30 20 4"
    position="0 0 -20"
    src="https://image-1300099782.cos.ap-beijing.myqcloud.com/wall.jpeg"
  ></a-box>
</a-scene>

Use the resource management system

One drawback of texturing the wall this way is that rendering does not wait for the image to finish loading: the wall may first render without the image texture, then re-render with it once the image has loaded.

For performance reasons, we recommend using the resource management system (<a-assets>). It makes it easier for the browser to cache resources (such as images, videos, and models), and the A-Frame framework ensures all resources are fetched before rendering.

If we define an <img> in the resource management system, three.js does not need to create one internally. Creating the <img> ourselves also gives us more control and lets us reuse the texture across multiple entities. A-Frame also automatically sets crossOrigin and other attributes when necessary.

To use the resource management system for image textures:

  • Add <a-assets> to the scene.
  • Define the texture as an <img> under <a-assets>.
  • Give the <img> an HTML id (e.g. id="boxTexture").
  • Reference the resource by id in DOM selector format (src="#boxTexture").

Using the resource management system, the wall texture example above becomes:

<a-scene environment="preset: forest;">
  <a-assets timeout="30000">
    <img
      id="wallImg"
      src="https://image-1300099782.cos.ap-beijing.myqcloud.com/wall.jpeg"
    />
  </a-assets>

  <a-box scale="30 20 4" position="0 0 -20" src="#wallImg"></a-box>
</a-scene>

Add text

WebGL offers a variety of ways to render text, each with its own advantages and disadvantages. A-Frame adopts an SDF text implementation using three-bmfont-text, which is simple and performs well. Text is rendered by adding the <a-text> primitive:

<a-scene environment="preset: forest;">
  <!-- ... -->
  <a-text
    id="start-text"
    value="Start"
    color="#BBB"
    position="-3 6 18"
    scale="10 10 10"
    font="mozillavr"
  ></a-text>
</a-scene>

Other text rendering schemes include:

  • text-geometry: 3D text, with a higher rendering cost
  • html-shader: renders HTML as a texture, which is easy to style but performs poorly

Add the cursor

In the VR world we can interact through a VR device's controllers. Since many developers currently lack suitable VR hardware with controllers, we can use the built-in cursor component to interact instead.

The cursor primitive <a-cursor> can be used for both gaze-based and controller-based interaction. Its default appearance is a ring-shaped geometry, and it is usually placed as a child of the camera.

Below we use <a-camera> to add our own camera, replacing the default camera created by <a-scene>, and mount <a-cursor> as a child of the camera so that no matter how the camera rotates or moves, the cursor is always visible.

Note: when an entity is mounted as a child of another entity in A-Frame, the child's 3D coordinate properties are relative to the parent entity, not to the whole 3D world.

<a-scene environment="preset: forest;">
  <a-camera>
    <a-cursor color="#FAFAFA"></a-cursor>
  </a-camera>
</a-scene>
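The parent-relative coordinates described in the note above can be sketched as a simple vector addition. This simplification ignores the parent's rotation and scale (A-Frame and three.js apply the full transform matrix), but it shows why a child placed at a fixed offset stays with its parent:

```javascript
// World position of a child whose position is relative to its parent,
// assuming the parent is not rotated or scaled (a deliberate
// simplification of the real matrix transform).
function toWorldPosition(parentPos, childPos) {
  return {
    x: parentPos.x + childPos.x,
    y: parentPos.y + childPos.y,
    z: parentPos.z + childPos.z,
  };
}
```

For example, a cursor one meter in front of a camera standing at eye height ends up one meter in front of wherever the camera currently is.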

Use the GLTF model

GLTF is an open project by Khronos that provides a common, extensible format for 3D assets that is both efficient and highly interoperable with modern Web technologies. The gltf-model component loads model data from GLTF (.gltf or .glb) files; we will use it for the many 3D models in our application.

Open GLTF resources

Here are a few open GLTF resource sites:

  • Sketchfab: automatically converts all downloadable models, including PBR models, to GLTF format
  • Poimandres Market: 3D resources in GLTF format available for download
  • Poly Haven: offers CC0 HDRIs, PBR textures, and GLTF models

Add a weapon

We introduce a GLTF model to load a weapon. In the resource management system, add a GLTF resource through the <a-asset-item> primitive, then mount a child entity under the camera and set its src to the id of the <a-asset-item>:

<a-scene environment="preset: forest;">
  <a-assets timeout="30000">
    <img
      id="wallImg"
      src="https://image-1300099782.cos.ap-beijing.myqcloud.com/wall.jpeg"
    />
    <a-asset-item
      id="weapon"
      src="https://vazxmixjsiawhamofees.supabase.co/storage/v1/object/public/models/blaster/model.gltf"
    ></a-asset-item>
  </a-assets>

  <a-camera>
    <a-cursor color="#FAFAFA"></a-cursor>
    <a-gltf-model
      id="_weapon"
      src="#weapon"
      position="0.5 0.5 0.8"
      scale="1 1 1"
      rotation="0 180 0"
    ></a-gltf-model>
  </a-camera>
</a-scene>

Here’s our weapon on the page:

Using components

When the cursor focuses on the start text, the text grows larger and changes color; when the cursor leaves, the text is restored.

Components have several lifecycle methods; here we use init(), which is called once at the beginning of the component's lifecycle and is typically used to set initial state and variables, bind methods, and attach event listeners.

Register the component

In the following code we register a start-focus component, which adds mouseenter and mouseleave event listeners to its entity during the init() lifecycle. When mouseenter fires, the start text grows and turns orange; when mouseleave fires, it is restored:

AFRAME.registerComponent('start-focus', {
  init: function () {
    this.el.addEventListener('mouseenter', function () {
      if (window.startLeaveTimer) {
        clearTimeout(window.startLeaveTimer);
        window.startLeaveTimer = null;
      }
      window.CursorFocusEntity = 'start';
      this.setAttribute('scale', '12 12 12');
      this.setAttribute('color', 'orange');
    });

    this.el.addEventListener('mouseleave', function () {
      window.startLeaveTimer = setTimeout(() => {
        window.CursorFocusEntity = null;
        this.setAttribute('scale', '10 10 10');
        this.setAttribute('color', '#bbb');
      }, 500);
    });
  }
});

Mount the component

We mount the newly registered start-focus component to the start text primitive:

<a-scene environment="preset: forest;">
  <!-- ... -->
  <a-text
    id="start-text"
    value="Start"
    color="#BBB"
    position="-3 6 18"
    scale="10 10 10"
    font="mozillavr"
    start-focus
  ></a-text>
</a-scene>

Now we have what we want:

Listen for cursor click events

Add a click event listener to the cursor:

<script>
  AFRAME.registerComponent('cursor-listener', {
    init: function () {
      // Click to attack
      this.el.addEventListener('click', function (evt) {
        console.log('Cursor clicked');
      });
    }
  });
</script>

<a-camera>
  <a-cursor color="#FAFAFA" cursor-listener></a-cursor>
</a-camera>

JavaScript interaction

Because A-Frame is essentially HTML, we can use JavaScript and DOM APIs to control its scenes and entities just like in normal Web development.

Next we will implement an effect where the weapon fires bullets at the cursor position when the cursor is clicked, using JavaScript’s DOM API to do some interaction between entities and the scene.

Get cursor point information

When the cursor click event fires, the callback receives a default parameter evt that contains information about the cursor. Let's print it:

AFRAME.registerComponent('cursor-listener', {
  init: function () {
    // Click to attack
    this.el.addEventListener('click', function (evt) {
      console.log(evt);
    });
  }
});

The printed result shows that evt.detail.intersection.point contains the 3D coordinates of the point the cursor is aimed at:

Now, in the cursor click callback, we call a createAttack method that receives evt.detail.intersection.point as its argument:

AFRAME.registerComponent('cursor-listener', {
  schema: {},
  init: function () {
    // Click to attack
    this.el.addEventListener('click', function (evt) {
      createAttack(evt.detail.intersection.point);
    });
  }
});

Create the entity

To fire a bullet we need to create a bullet entity, using the document.createElement API. This code creates a sphere:

function createAttack(point) {
  const attackEntity = document.createElement('a-sphere');
}

Set up components for entities

Setting a component on an entity from JavaScript works just like setting a DOM attribute, using node.setAttribute.

Add radius, color, position, and animation components to the sphere we just created:

function createAttack(point) {
  const { newX, newY, newZ } = getPosition(point);
  const attackEntity = document.createElement('a-sphere');
  attackEntity.setAttribute('radius', '0.2');
  attackEntity.setAttribute('color', 'red');
  attackEntity.setAttribute('position', `${newX} ${newY} ${newZ}`);
  attackEntity.setAttribute(
    'animation',
    `property: position; dur: 300; to: ${point.x} ${point.y} ${point.z};`
  );
}

Here position is where the sphere spawns. We want the bullet to be emitted from the muzzle of the weapon and then, through the animation component, fly to the position of the cursor.

Calculate the initial position of the bullet

In the code above, getPosition is the function that calculates the bullet's initial position. This part involves some tedious math, so feel free to skip it if you are not interested.

As mentioned earlier, an entity's position in A-Frame is relative to its parent entity, and our bullet will ultimately be mounted under <a-scene>. Since the weapon's position in world coordinates changes when the camera rotates, we need to calculate the position of the weapon's muzzle, which is the initial position of the bullet.

Let's consider one plane of the three-dimensional coordinate system at a time, taking the xz plane as an example:

First, no matter how the camera rotates, the angle in the xz plane between the camera point and the line from the muzzle to the point the camera is aimed at stays the same.

We know the coordinates of the cursor's initial aim point and of the point clicked, so we can compute the rotation angle θ in the xz plane between the initial state and the click, and then use the following math to find where the muzzle is at the moment of the click.

Finding the angle between two lines through a common point (x, y) in the plane:

Slope of line L1: k1 = (y1 - y) / (x1 - x)

Slope of line L2: k2 = (y2 - y) / (x2 - x)

Tangent of the angle θ: tanθ = (k2 - k1) / (1 + k1 * k2)

Angle θ: θ = Math.atan(tanθ)

Finding the coordinates of a point after rotating it by θ around another point (x, y):

x2 = (x1 - x) * cosθ - (y1 - y) * sinθ + x

y2 = (y1 - y) * cosθ + (x1 - x) * sinθ + y

Substituting these formulas into the getPosition function gives the initial position of the bullet.
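The two formulas translate directly into JavaScript. The sketch below shows the shape a getPosition-style helper could take in one plane (the function names, and the assumption that both rays pass through the center point, are illustrative):

```javascript
// Signed angle θ between two rays from `center` through p1 and p2,
// using the slope formulas above. Valid while neither ray is vertical
// and the rotation stays within (-90°, 90°).
function angleBetween(center, p1, p2) {
  const k1 = (p1.y - center.y) / (p1.x - center.x);
  const k2 = (p2.y - center.y) / (p2.x - center.x);
  return Math.atan((k2 - k1) / (1 + k1 * k2));
}

// Rotate point p by θ around `center` -- the second formula above.
function rotateAround(center, p, theta) {
  const cos = Math.cos(theta);
  const sin = Math.sin(theta);
  return {
    x: (p.x - center.x) * cos - (p.y - center.y) * sin + center.x,
    y: (p.y - center.y) * cos + (p.x - center.x) * sin + center.y,
  };
}
```

A getPosition implementation would compute θ from the cursor's initial and clicked aim points, then rotate the muzzle's initial position by θ around the camera point.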

Get an entity

We'll eventually mount the bullet into the scene, so we first need to get the scene, using the document.querySelector API:

const scene = document.querySelector('a-scene');

Mount the entity

Using the appendChild DOM API, we mount the bullet entity we just created into the scene:

function createAttack(point) {
  const { newX, newY, newZ } = getPosition(point);
  const attackEntity = document.createElement('a-sphere');
  attackEntity.setAttribute('radius', '0.2');
  attackEntity.setAttribute('color', 'red');
  attackEntity.setAttribute('position', `${newX} ${newY} ${newZ}`);
  attackEntity.setAttribute(
    'animation',
    `property: position; dur: 300; to: ${point.x} ${point.y} ${point.z};`
  );
  scene.appendChild(attackEntity);
}

Destroy the entity

Once the bullet reaches the cursor position, we can't let it stay in the scene forever; we need to destroy it, i.e. remove it from the scene with removeChild:

function createAttack(point) {
  const { newX, newY, newZ } = getPosition(point);
  const attackEntity = document.createElement('a-sphere');
  attackEntity.setAttribute('radius', '0.2');
  attackEntity.setAttribute('color', 'red');
  attackEntity.setAttribute('position', `${newX} ${newY} ${newZ}`);
  attackEntity.setAttribute(
    'animation',
    `property: position; dur: 300; to: ${point.x} ${point.y} ${point.z};`
  );
  scene.appendChild(attackEntity);
  const timer = setTimeout(() => {
    scene.removeChild(attackEntity);
    clearTimeout(timer);
  }, 300);
}

Now our bullet firing effect is complete:

Add an audio resource

Audio is important for immersion in virtual reality. We add an audio resource to the resource management system and mount a sound entity under the camera:

<a-scene environment="preset: forest;">
  <a-assets timeout="30000">
    <audio
      id="shooting-sound"
      src="https://audio-1300099782.cos.ap-beijing.myqcloud.com/shooting.mp3"
      preload="auto"
    ></audio>
  </a-assets>

  <a-camera>
    <a-cursor color="#FAFAFA" cursor-listener></a-cursor>
    <a-gltf-model
      id="_weapon"
      src="#weapon"
      position="0.5 0.5 0.8"
      scale="1 1 1"
      rotation="0 180 0"
    ></a-gltf-model>
    <a-entity
      sound="src: #shooting-sound"
      id="shooting_sound_player"
      position="0.5 0.5 0.8"
      poolSize="10"
    ></a-entity>
  </a-camera>
</a-scene>

Via the entity.components.sound.playSound() method, we can play the audio mounted on an entity, so we play the shooting sound when createAttack executes:

const shootingSoundPlayer = document.querySelector('#shooting_sound_player');

function createAttack(point) {
  shootingSoundPlayer.components.sound.playSound();
  const { newX, newY, newZ } = getPosition(point);
  const attackEntity = document.createElement('a-sphere');
  attackEntity.setAttribute('radius', '0.2');
  attackEntity.setAttribute('color', 'red');
  attackEntity.setAttribute('position', `${newX} ${newY} ${newZ}`);
  attackEntity.setAttribute(
    'animation',
    `property: position; dur: 300; to: ${point.x} ${point.y} ${point.z};`
  );
  scene.appendChild(attackEntity);
}

The rest of the work

Now we have covered all the A-Frame knowledge used in this game demo. You can use it, combined with JavaScript, to complete the rest of the work:

  • Timed spawning of monsters
  • Monsters fire at us at regular intervals, dealing damage
  • Attack monsters, deal damage and gain points for destroying monsters
  • Update the score and our remaining HP
  • The game starts, ends and restarts

Conclusion

Through this article, you should have gained an understanding of WebXR concepts and standards and of how to develop WebXR applications with the A-Frame framework. WebXR is a blue ocean with few participants from either Web development or VR development, and its prospects are broad. Even our education business could find new ideas by combining with WebXR technology.

I hope this article sparks your interest in WebXR. Below is a summary of WebXR learning materials and development resources to help interested readers get started:

  • immersive-web.github.io/webxr/#xrpo…
  • A-Frame

    • Docs: aframe.io/docs/1.3.0/…

    • Resources: aframe.io/aframe-regi…

    • GLTF resources

      • Sketchfab
      • Poimandres Market
      • Poly Haven
  • WebGL: github.com/KhronosGrou…
  • Three.js: github.com/mrdoob/thre…