The first two installments mainly covered the basic steps of developing a WebVR application, but the scenes we built could only be admired from a distance, not played with; they lacked interaction. This installment takes you into the interactive side of VR and shows how WebVR events are developed.

What are VR interactions?

In an interactive VR world, the user is no longer just an observer but an inhabitant of the virtual world, able to communicate with it. This communication should be two-way: the virtual scene senses your presence (position and orientation), and your input acts on the objects in it. Consider a VR game scenario:

  1. The game's MM (your virtual girlfriend) asks you to pick a dress for her, and a menu pops up. You swipe the touchpad to browse dresses on her, then press the button on the controller to confirm your choice.
  2. She asks you whether it looks good, and you nod.
  3. Delighted, she asks you to take photos of her; your controller now becomes a camera, and pressing the button while aiming at her takes a picture.
  4. But then you crouch down to peek under her skirt without another word. She gets angry, and the game is over!


VR girlfriend — nod and shake

The scenario above involves four interactions, which can be divided into headset interactions and gamepad interactions according to the input device. The former are triggered by head movement (e.g., 2 and 4), the latter by controller actions (e.g., 1 and 3).

These interactions require hardware support (for example, gyroscopes and accelerometers provide orientation tracking), and we need JavaScript APIs to obtain dynamic data from the headset or gamepad.

Because headsets and gamepads differ from one VR manufacturer to another, the level of interaction they support is also uneven. The following summarizes how the mainstream VR platforms support headset and gamepad interaction.

DoF (Degree of Freedom) in the table refers to the degrees of freedom of tracking, which mainly comprise orientation degrees of freedom and position degrees of freedom.

  • Orientation degrees of freedom support orientation tracking, generally provided by sensors such as gyroscopes and accelerometers.
  • Position degrees of freedom support position tracking, generally implemented either with outside-in infrared tracking or with inside-out SLAM-based tracking.

In general, when a VR system is described as 3-DOF it means the hardware supports orientation tracking only, while 6-DOF means it supports both orientation and position tracking.
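
At runtime, the WebVR API lets us tell the two apart. Here is a minimal sketch, assuming a browser that implements WebVR 1.1, using the hasOrientation and hasPosition flags of VRDisplayCapabilities:

// Minimal sketch: distinguish a 3-DOF headset from a 6-DOF one at runtime
// via the WebVR 1.1 VRDisplayCapabilities interface.
navigator.getVRDisplays().then(displays => {
  const display = displays[0];
  if (!display) return;                       // no headset connected
  const { hasOrientation, hasPosition } = display.capabilities;
  if (hasOrientation && hasPosition) {
    console.log('6-DOF headset: orientation + position tracking');
  } else if (hasOrientation) {
    console.log('3-DOF headset: orientation tracking only');
  }
});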


Headset interaction events

Headset interaction can be divided into 3-DOF and 6-DOF according to the degrees of freedom supported. Naturally, every VR headset supports at least 3-DOF orientation tracking.

  • 3-DOF relies on the device's gyroscope and accelerometer to render the scene view according to the head's orientation. It supports gaze, head-shake, and nod interactions, and is mainly found in mobile VR.
  • 6-DOF additionally supports spatial (position) tracking, so it can sense the user's movement and support interactions such as leaning, dodging, and ducking. Examples include Oculus Rift, HTC Vive, Microsoft MR headsets, and Daydream Standalone.
DoF of Headset

To realize headset interaction, the virtual world must first be visible. The VR rendering covered in the previous installment's in-depth analysis of WebVR is the first step toward headset interaction, and it uses the WebVR API to obtain headset data.

A quick review: as the user looks or walks around wearing the headset, the renderer recalculates each object's MVP (model-view-projection) matrix every frame from the view and projection matrices of VRFrameData, and then draws the vertices and pixels. The good news is that three.js already wraps this process inside its camera and renderer, taking care of the first step for us, as sketched below.
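
Here is a minimal sketch of that setup, assuming a WebVR-era three.js build (around r90) that ships a WebVRManager on the renderer; entering VR still requires a user-gesture call to vrDisplay.requestPresent, which is omitted here, and scene and camera are assumed to exist:

// Hand the headset to three.js and let it drive the camera from VRFrameData.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.vr.enabled = true;                 // let three.js read VRFrameData each frame

navigator.getVRDisplays().then(displays => {
  if (displays.length > 0) {
    renderer.vr.setDevice(displays[0]);     // attach the headset to the renderer
  }
});

renderer.setAnimationLoop(() => {
  // the camera's view/projection matrices are updated from the headset pose internally
  renderer.render(scene, camera);
});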

Second, we need to make the 3D scene aware of the user's (head) presence. For example, when a ball is thrown at the player and the player tries to dodge it, the program decides whether the dodge succeeded based on the positions of the ball and the player.

6DoF dodge ball

VRFrameData.pose returns a VRPose object, from which we can read the headset's sensor data such as position, orientation, velocity, and acceleration.

VRPose
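
For reference, here is a sketch of the fields VRPose exposes according to the WebVR 1.1 spec; any of them may be null when the device does not report that quantity (the position-related fields are null on 3-DOF headsets):

// Per-frame fields of the VRPose interface (WebVR 1.1); nullable when unsupported.
const pose = frameData.pose;
pose.orientation;          // Float32Array(4): rotation quaternion [x, y, z, w]
pose.position;             // Float32Array(3): x, y, z in meters (6-DOF only)
pose.angularVelocity;      // Float32Array(3): rad/s
pose.angularAcceleration;  // Float32Array(3): rad/s^2
pose.linearVelocity;       // Float32Array(3): m/s
pose.linearAcceleration;   // Float32Array(3): m/s^2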

With these attributes we can give the camera a physical presence in the scene, so the user becomes an entity in the world rather than a mere observer.

For example, we can pop up a warning when the user of a 6-DOF headset such as the HTC Vive moves out of range:

update() {
  const { vrdisplay, frameData, userModel } = this;
  vrdisplay.getFrameData(frameData);              // populate frameData for the current frame
  const vrpose = frameData.pose;
  userModel.position.fromArray(vrpose.position);  // assign the VRPose position to the user model
  const { x, y, z } = userModel.position;         // destructure the user's x, y, z coordinates
  if ( Math.abs(x) > 20 || Math.abs(y) > 20 || Math.abs(z) > 20 ) {
    // the user has moved outside the 20×20×20 space around the starting point
    showWarningToast();                           // display the warning box
  }
}

Similarly, in VR mode three.js automatically applies the position and orientation of VRPose to the camera's Object3D attributes, so we can simply read camera.position and camera.quaternion/camera.rotation for the user's position and orientation. The code simplifies to:

update() {
  const { camera, userModel } = this;
  userModel.position.copy(camera.position);
  const { x, y, z } = userModel.position;  // destructure the user's position coordinates
  if ( Math.abs(x) > 20 || Math.abs(y) > 20 || Math.abs(z) > 20 ) {
    showWarningToast();                    // warn once the user leaves the 20×20×20 space around the starting point
  }
}
 

That is the basic pattern of a headset interaction event. Building on it, we can implement gaze event listening, head-shake and nod event listening, and so on; a rough nod-detection sketch follows.
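
Below is a rough sketch of nod detection, assuming the three.js camera described above; the NodDetector class, the threshold, and the time window are illustrative choices, not part of any standard API. It samples the camera's pitch each frame and treats a sufficiently large pitch swing within a short window as a nod:

// Hypothetical nod detector: watches the camera's pitch (rotation.x) for a
// quick down-up swing. Thresholds are illustrative and need tuning per device.
class NodDetector {
  constructor(camera, { threshold = 0.25, windowMs = 800 } = {}) {
    this.camera = camera;
    this.threshold = threshold;   // radians of pitch swing that counts as a nod
    this.windowMs = windowMs;     // how far back we look, in milliseconds
    this.samples = [];            // recent { t, pitch } samples
  }
  update(onNod) {
    const now = performance.now();
    this.samples.push({ t: now, pitch: this.camera.rotation.x });
    this.samples = this.samples.filter(s => now - s.t < this.windowMs);
    const pitches = this.samples.map(s => s.pitch);
    const swing = Math.max(...pitches) - Math.min(...pitches);
    if (swing > this.threshold) {
      this.samples = [];          // reset so one nod only fires once
      onNod();
    }
  }
}

// usage inside the render loop (confirmChoice is a hypothetical callback):
// nodDetector.update(() => confirmChoice());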


Gamepad interaction events

In addition to the headset, most VR devices also come with a gamepad (controller). Holding the controller, the user can interact with the virtual scene.

Gamepads likewise come in 3-DOF and 6-DOF versions:

  • 3-DOF controllers, such as the Daydream Controller, only support orientation tracking, so Google recommends laser-pointer interaction (see the sketch after this list).
  • 6-DOF controllers, like Oculus Touch, can track both orientation and position, so they can simulate arm movements well.
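
Here is a rough sketch of laser-pointer picking for a 3-DOF controller, assuming three.js and a gamepad whose pose exposes an orientation quaternion; controllerModel, interactiveObjects, and highlight are hypothetical names for the visible handle mesh, the list of clickable objects, and a hover-feedback helper:

// Hypothetical laser-pointer picking: rotate the handle with the controller's
// orientation, then raycast along its local -Z axis into the scene.
const raycaster = new THREE.Raycaster();
const tempQuaternion = new THREE.Quaternion();
const rayDirection = new THREE.Vector3();

function updateLaserPointer(gamepad, controllerModel, interactiveObjects) {
  if (!gamepad.pose || !gamepad.pose.orientation) return;
  tempQuaternion.fromArray(gamepad.pose.orientation);
  controllerModel.quaternion.copy(tempQuaternion);   // rotate the visible handle
  // a 3-DOF controller has no tracked position, so the handle stays at a fixed
  // (or arm-model estimated) position and only its direction changes
  rayDirection.set(0, 0, -1).applyQuaternion(tempQuaternion);
  raycaster.set(controllerModel.position, rayDirection);
  const hits = raycaster.intersectObjects(interactiveObjects);
  if (hits.length > 0) {
    highlight(hits[0].object);                       // hover feedback on the first hit
  }
}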

Unlike the headset, which interacts purely through its sensors, the gamepad offers a variety of input components such as buttons, a touchpad, or a thumbstick.

Gamepad events can therefore be divided into three categories based on the input hardware:

A. Sensor events: generated by the physical tracking of the controller by its sensors, such as laser-pointer interaction;

B. Button events: generated by pressing the controller's buttons;

C. Control-unit events: generated by thumbstick or touchpad input, such as swiping to turn a page.

So how do we implement gamepad interaction events, or in other words, how do we access the gamepad's hardware data? The answer is the Gamepad API; see the WebVR development tutorial series for details, and the minimal polling sketch below.
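
The sketch below polls connected controllers every frame, assuming a WebVR-era browser that exposes gamepad.pose via the Gamepad Extensions; updateControllerPose, handleButtonPress, and handleAxes are hypothetical handlers standing in for your own logic:

// Minimal Gamepad API polling sketch covering the three event categories above.
function pollGamepads() {
  const gamepads = navigator.getGamepads();
  for (let i = 0; i < gamepads.length; i++) {
    const gamepad = gamepads[i];
    if (!gamepad || !gamepad.pose) continue;          // skip non-VR gamepads
    // A. sensor data: orientation is always tracked, position only on 6-DOF controllers
    updateControllerPose(gamepad.pose.orientation, gamepad.pose.position);
    // B. button state: each GamepadButton exposes pressed / touched / value
    gamepad.buttons.forEach((button, index) => {
      if (button.pressed) handleButtonPress(index);
    });
    // C. control units: touchpad / thumbstick axes, normalized to [-1, 1]
    const [x = 0, y = 0] = gamepad.axes;
    handleAxes(x, y);
  }
  requestAnimationFrame(pollGamepads);
}
pollGamepads();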


WebVR development tutorial portal:

  • WebVR development tutorial — In-depth analysis of the development and debugging scheme and principle mechanism
  • WebVR development tutorial — Standard introduction to using Three.js