A simple introduction

This section focuses on how to customize some of the effects, because the defaults Sceneform provides will most likely not match what the product and UI design actually want šŸ˜­

Custom plane-discovery animation

First, look at BaseArFragment and you will find a planeDiscoveryController which, as the name suggests, is the class that controls the plane-discovery hint

planeDiscoveryController = new PlaneDiscoveryController(instructionsView);

We don't know yet exactly what it does, so let's look at the code

public class PlaneDiscoveryController {
  @Nullable private View planeDiscoveryView;

  public PlaneDiscoveryController(@Nullable View planeDiscoveryView) {
    this.planeDiscoveryView = planeDiscoveryView;
  }

  /** Set the instruction view to present over the Sceneform view */
  public void setInstructionView(View view) {
    planeDiscoveryView = view;
  }

  /** Show the plane discovery view */
  public void show() {
    if (planeDiscoveryView == null) {
      return;
    }

    planeDiscoveryView.setVisibility(View.VISIBLE);
  }

  /** Hide the plane discovery view */
  public void hide() {
    if (planeDiscoveryView == null) {
      return;
    }

    planeDiscoveryView.setVisibility(View.GONE);
  }
}

You can now tell that this is the control class used to show and hide the plane-discovery animation. The view passed into new PlaneDiscoveryController above is created like this

View instructionsView = loadPlaneDiscoveryView(inflater, container);

Then look at that function, šŸ‘Œ, found it: it simply inflates a layout. Replace the inflated layout with your own to customize the plane-discovery animation.

private View loadPlaneDiscoveryView(LayoutInflater inflater, @Nullable ViewGroup container) {
    return inflater.inflate(R.layout.sceneform_plane_discovery_layout, container, false);
  }
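
In practice (once the BaseArFragment code has been copied into your own fragment, as described further below) this boils down to pointing loadPlaneDiscoveryView at your own layout, or handing a different view to setInstructionView. A minimal Kotlin sketch, where R.layout.custom_plane_discovery is a hypothetical layout containing your own hint animation:

// Sketch only: R.layout.custom_plane_discovery is a hypothetical layout
// that contains your own plane-discovery animation; this lives in the copied fragment.
private fun loadPlaneDiscoveryView(inflater: LayoutInflater, container: ViewGroup?): View =
    inflater.inflate(R.layout.custom_plane_discovery, container, false)

// Alternatively, swap the view at runtime through the controller:
// planeDiscoveryController.setInstructionView(customHintView)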

Custom plane marker

The plane marker took much longer to sort out. By default Sceneform visualizes detected planes with a texture of white dots spread across the plane, like this

How does this work? If you look at the source of BaseArFragment you will find an ArSceneView; looking into ArSceneView in turn, its name tells you it is the view used to display the AR scene, and it holds the following fields

private CameraStream cameraStream;
private PlaneRenderer planeRenderer;

So ArSceneView holds a CameraStream and a PlaneRenderer. The PlaneRenderer, as the name suggests, renders the detected planes, found it āœŒ. Step into it and you will find a method called loadPlaneMaterial

private void loadPlaneMaterial() {
    // Sampler is the texture sampler
    Sampler sampler = Sampler.builder().setMinMagFilter(MagFilter.LINEAR).setWrapMode(WrapMode.REPEAT).build();
    CompletableFuture<Texture> textureFuture = Texture.builder().setSource(this.renderer.getContext(), drawable.sceneform_plane).setSampler(sampler).build();
    this.planeMaterialFuture = Material.builder().setSource(this.renderer.getContext(), raw.sceneform_plane_material).build().thenCombine(textureFuture, (material, texture) -> {
        material.setTexture("texture", texture);
        material.setFloat3("color", 1.0F, 1.0F, 1.0F);
        float widthToHeightRatio = 0.5711501F;
        float scaleX = 8.0F;
        float scaleY = scaleX * widthToHeightRatio;
        material.setFloat2("uvScale", scaleX, scaleY);
        Iterator var6 = this.visualizerMap.entrySet().iterator();

        while (var6.hasNext()) {
            Entry<Plane, PlaneVisualizer> entry = (Entry) var6.next();
            if (!this.materialOverrides.containsKey(entry.getKey())) {
                ((PlaneVisualizer) entry.getValue()).setPlaneMaterial(material);
            }
        }
        return material;
    });
}


The Sampler sets how textures are sampled and displayed. WrapMode controls how the texture is arranged (tiled) across the material surface:

  • CLAMP_TO_EDGE: the default; texture coordinates are clamped to the edge, so the edge texels are stretched
  • REPEAT: the texture repeats (tiles)
  • MIRRORED_REPEAT: the texture repeats, mirrored on alternate tiles

MagFilter is the magnification filter, used to sample the texture when the object is magnified and the texture resolution is too low. There are two options:

  • NEAREST selects the texel whose center is closest to the texture coordinates
  • LINEAR interpolates among a set of nearby texels

MinFilter is the minification filter, used when the object is shrunk or displayed very small while the texture resolution is high; it is not covered here.
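
To make these options concrete, here is a minimal Kotlin sketch that builds a texture with a custom sampler through the same Texture/Sampler calls used in the decompiled code above; R.drawable.custom_plane_texture is a hypothetical resource name:

import android.content.Context
import com.google.ar.sceneform.rendering.Texture
import java.util.concurrent.CompletableFuture

// Sketch only: R.drawable.custom_plane_texture is a hypothetical drawable.
fun buildPlaneTexture(context: Context): CompletableFuture<Texture> {
    val sampler = Texture.Sampler.builder()
        .setMinMagFilter(Texture.Sampler.MagFilter.LINEAR) // interpolate nearby texels when magnified
        .setWrapMode(Texture.Sampler.WrapMode.REPEAT)      // tile the texture across the surface
        .build()

    return Texture.builder()
        .setSource(context, R.drawable.custom_plane_texture)
        .setSampler(sampler)
        .build()
}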

From Texture.builder() you can trace the texture's source file, drawable.sceneform_plane, and then comes the headache: all of the related settings are private and there is no way to change them from outside. So, Ctrl+C, Ctrl+V šŸ˜œ: copy the BaseArFragment code directly into your own ArFragment and implement its abstract functions. A Base class with that many responsibilities is doing too much anyway, so while you are at it you can split out the permission request, plane-discovery control, gesture control and so on, which makes the design easier to extend. To modify loadPlaneMaterial() in PlaneRenderer you also need your own ArSceneView and PlaneRenderer.

Even then there is a problem. Plane rendering, as the name implies, renders the whole detected plane, whose size varies and whose shape is irregular. What if the product wants a circle, or some specific graphic, as the marker the user taps to place content on? The graphic will then be rendered incompletely at the edges of the plane. So what do we do? Listen to me šŸ˜‰

The Renderable class has two subclasses: ModelRenderable, which builds renderable 3D models, and ViewRenderable, which builds renderable Android Views, i.e. 2D content. The marker is 2D as well (although 3D would also work), so it can be rendered into the scene as a ViewRenderable and moved with the camera.
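
For comparison, the 3D counterpart is built in the same asynchronous style; a minimal sketch, where "model.sfb" is just a placeholder asset name and modelRenderable is an assumed property:

// Sketch only: "model.sfb" is a placeholder for your own converted 3D asset.
ModelRenderable.builder()
    .setSource(this, Uri.parse("model.sfb"))
    .build()
    .thenAccept { renderable -> modelRenderable = renderable }
    .exceptionally {
        Log.d("modelRender", "Build modelRenderable failed, $it")
        null
    }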

Create the marker object

ViewRenderable.builder()
    .setView(this, R.layout.layout_ar_position) // Can be a custom View or a layout file
    .build()
    .thenAccept { t: ViewRenderable ->
        viewRenderable = t
        Log.d("viewRender", "Build viewRenderable success!")
    }
    .exceptionally {
        Log.d("viewRender", "Build viewRenderable failed, $it")
        return@exceptionally null
    }

The viewRenderable is created through an asynchronous operation.

Place the marker object

arFragment.setOnTapArPlaneListener { hitResult, _, _ ->
    // Create an anchor point
    anchor = hitResult!!.createAnchor()
    val anchorNode = AnchorNode(anchor)
    anchorNode.setParent(arFragment.arSceneView?.scene)
    // Fix the marker node to the anchor point
    positionNode.setParent(anchorNode)
    // Set the render content
    positionNode.renderable = viewRenderable
    // Remove the shadow, depending on your needs
    viewRenderable.isShadowCaster = false
    // Since viewRenderable is vertical by default, we need to rotate it
    // First set its alignment so the view is vertically centered on the node
    viewRenderable.verticalAlignment = ViewRenderable.VerticalAlignment.CENTER
    // Then rotate it 90 degrees so that the center of the view matches its position
    positionNode.localRotation = Quaternion(1f, 0f, 0f, 1f)
}

Follow the camera

To follow the camera, we need to obtain the worldPosition in the scene that corresponds to the center of the screen, and then update the marker's coordinates in real time as the camera moves

// The focus point needs to be recomputed on every frame
@Override
public void onUpdate(FrameTime frameTime) {
    Frame frame = arSceneView.getArFrame();
    if (frame == null) {
        return;
    }
    // Whether a plane is currently being tracked
    boolean isTracking = false;
    for (Plane plane : frame.getUpdatedTrackables(Plane.class)) {
        if (plane.getTrackingState() == TrackingState.TRACKING) {
            planeDiscoveryController.hide();
            isTracking = true;
        }
    }
    // Focus point at the center of the screen
    Vector3 focusPoint = getFocusPoint(frame, arSceneView.getWidth(), arSceneView.getHeight());
    if (onFocusPointChangeListener != null && focusPoint != null) {
        onFocusPointChangeListener.onFocusPointChange(focusPoint, isTracking);
    }
}

// Convert screen coordinates into scene coordinates
@Nullable
public Vector3 getFocusPoint(Frame frame, int width, int height) {
    Vector3 focusPoint;

    // If the screen center hits a plane, return that point as a Vector3
    List<HitResult> hits = frame.hitTest(width / 2.0f, height / 2.0f);
    if (hits != null && !hits.isEmpty()) {
        for (HitResult hit : hits) {
            Trackable trackable = hit.getTrackable();
            Pose hitPose = hit.getHitPose();
            if (trackable instanceof Plane && ((Plane) trackable).isPoseInPolygon(hitPose)) {
                focusPoint = new Vector3(hitPose.tx(), hitPose.ty(), hitPose.tz());
                lastPlaneHitDistance = hit.getDistance();
                return focusPoint;
            }
        }
        if (hits.size() > 0) {
            Pose hitPose = hits.get(0).getHitPose();
            return new Vector3(hitPose.tx(), hitPose.ty(), hitPose.tz());
        }
    }

    // If the center point does not hit a plane, use the camera's pose plus the distance
    // of the last plane hit so the movement stays smooth
    Pose cameraPose = frame.getCamera().getPose();
    Vector3 cameraPosition = new Vector3(cameraPose.tx(), cameraPose.ty(), cameraPose.tz());
    float[] zAxis = cameraPose.getZAxis();
    Vector3 backwards = new Vector3(zAxis[0], zAxis[1], zAxis[2]);

    focusPoint = Vector3.add(cameraPosition, backwards.scaled(-lastPlaneHitDistance));
    return focusPoint;
}

// Listener interface for changes of the screen-center focus point
public interface OnFocusPointChangeListener {
    void onFocusPointChange(@NonNull Vector3 focusPoint, boolean isTracking);
}

The code above goes into the (copied) BaseArFragment. In your Activity you then register the listener and, if needed, simulate a tap to place the marker

arFragment.setOnFocusPointChangeListener { vector, isTracking ->
    if (isTracking) {
        positionNode.worldPosition = Vector3(vector)
        // Simulate clicking
        if (!hasFitted) {
            ArHelper.simulateClick(
                view,
                view.left + view.width / 2.0f,
                view.top + view.height / 2.0f
            )
        }
    }
}

/* Simulates the user clicking */
fun simulateClick(
        view: View,
        x: Float,
        y: Float
) {
    var downTime: Long = SystemClock.uptimeMillis()
    val downEvent =
            MotionEvent.obtain(downTime, downTime, MotionEvent.ACTION_DOWN, x, y, 0)
    downTime += 1000
    val upEvent =
            MotionEvent.obtain(downTime, downTime, MotionEvent.ACTION_UP, x, y, 0)
    view.onTouchEvent(downEvent)
    view.onTouchEvent(upEvent)
    downEvent.recycle()
    upEvent.recycle()
}

In actual use, because of ARCore's limitations, rendering can get lost: moving the phone too fast or blocking the camera can break ARCore's reconstruction and recognition of the scene, and the rendered 2D marker and models disappear šŸ˜­. You then have to wait for ARCore to rebuild the scene before the renderables are placed again. If you still need them displayed while tracking is lost, you can do the following

arFragment.setOnFocusPointChangeListener { vector, isTracking ->
    if (isTracking) {
        positionNode.worldPosition = Vector3(vector)
        if (!positionNode.isEnabled) {
            // When tracking is lost, ARCore sets the node's active and enabled state to false,
            // so set isEnabled back to true to keep the node visible
            positionNode.isEnabled = true
        }
        // Simulate clicking
        if (!hasFitted) {
            ArHelper.simulateClick(
                view,
                view.left + view.width / 2.0f,
                view.top + view.height / 2.0f
            )
        }
    }
}

Conclusion

Overall, Sceneform is an excellent framework, but its extensibility is poor. Once the basic features are in place, extending or modifying the existing implementation requires copying and changing large amounts of code. For complex requirements it is best to treat the built-in implementation as a reference only and write your own.