@[TOC]
ARKit 3.5 Framework Learning (1): Introduction to Core Classes (1)
1. Introduction to ARKit framework
ARKit integrates camera and motion features on iOS devices to create an augmented reality experience in your app or game.
Augmented reality (AR) describes the user experience of adding 2D or 3D elements to a live view of a device’s camera to make them appear as if they were living in the real world. ARKit simplifies the task of building AR experiences by combining device motion tracking, camera scene capture, advanced scene processing, and display capabilities. Using these technologies, you can create multiple AR experiences using either the front or rear camera on your iOS device.
1.1 New Features in ARKit 3.5
ARKit 3.5 adds a new Scene Geometry API that uses the LiDAR scanner to create a 3D map of a space, distinguishing floors, walls, ceilings, windows, doors, and seats. The scanner can quickly measure the length, width, and depth of objects at distances of up to five meters, letting users quickly create digital facsimiles that can be used for object occlusion, making digital objects appear partially blended into the scene behind real objects. Thanks to "instant AR" support, these digital objects can be placed in space automatically, without the user having to wave the device around to scan the space with the camera.
Apple also says that ARKit 3.5 improves Motion Capture and People Occlusion, with better estimation of a person's depth and of height in motion capture. Previously, ARKit did a "good enough" job of estimating these measurements using the 2D camera on an iPhone or iPad, but the LiDAR scanner enables more accurate measurements along all three axes, and previously developed applications benefit automatically without any code changes.
In summary, ARKit 3.5 adds functionality in the following three areas:
- Scene Geometry: With Scene Geometry, you can create a three-dimensional map of a space with labels identifying floors, walls, ceilings, windows, doors, and seats. This insight into the real world unlocks object occlusion and real-world physics for virtual objects, and also gives you more information to enhance your AR scenarios.
- Instant AR: The LiDAR scanner on the iPad Pro enables extremely fast plane detection, allowing AR objects to be placed instantly in the real world without scanning. Instant AR placement is automatically enabled for all apps built with ARKit on iPad Pro, without any code changes.
- Improved Motion Capture and People Occlusion: With ARKit 3.5 on iPad Pro, depth prediction in People Occlusion and height estimation in Motion Capture are more accurate. Both features are improved in all apps built with ARKit on iPad Pro, again without any code changes.
The following picture is an AR rendering:
ARKit uses Visual Inertial Odometry (VIO) to accurately track the real-world scene. Compared with other platforms, VIO in ARKit fuses camera sensor data with CoreMotion data to provide more accurate information. ARKit lets an iOS device sense exactly how it moves around a room without any external calibration equipment. Based on this principle, ARKit can build a high-precision model of the device's position and motion and use it in the scene.
Using ARKit, the iPhone and iPad can analyze the scene from the camera view and find horizontal planes in the room. ARKit can detect flat surfaces such as tables and floors, as well as detect and track objects. Even cooler, ARKit can also use the camera sensor to estimate the intensity of light in a scene and provide appropriate lighting on virtual objects.
In real development, you can use a hit-test method (see the ARHitTestResult class) to find real-world surfaces based on the image captured by the camera. If the developer has planeDetection enabled in the configuration, ARKit detects planes in the captured image and records their position and size.
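As a minimal sketch of such a hit test (the `sceneView` parameter and `placeAnchor` function name are assumptions for illustration; ARHitTestResult-based hit testing has since been superseded by raycasting, but it matches the API mentioned here):

```swift
import ARKit
import SceneKit

// Hit-test a 2D screen point against planes ARKit has detected and read back
// the 3D position of the nearest result.
func placeAnchor(at point: CGPoint, in sceneView: ARSCNView) {
    let results = sceneView.hitTest(point, types: [.existingPlaneUsingExtent, .featurePoint])
    guard let nearest = results.first else { return }

    // Column 3 of the world transform holds the hit position in world space.
    let t = nearest.worldTransform.columns.3
    let position = SCNVector3(t.x, t.y, t.z)
    print("Hit real-world surface at \(position)")
}
```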
1.2 ARKit Workflow Introduction
The figure below illustrates the ARKit workflow, where blue represents the parts ARKit is responsible for and green represents the parts SceneKit is responsible for:
As can be seen from the figure above, the general process of AR work is as follows:
- First, ARKit uses cameras to capture images of real scenes, and then SceneKit builds virtual worlds.
- Once both are set up, ARKit is responsible for blending the real-world and virtual-world information and rendering a single AR world.
- While rendering, ARKit is responsible for three things:
- Maintaining world tracking means that as you move the camera around, you’re trying to get new real-world information.
- Scene analysis refers to the analysis of key information such as feature points and planes in the real world.
- Handling the interaction with the virtual world refers to handling whether or not the virtual object is clicked or whether or not to add/remove objects as the user clicks or drags the screen.
- In general, ARKit captures real world information, mixes real and virtual worlds, and processes and interacts with new information all the time.
2. ARKit framework class structure
AR applications on iOS platforms are usually composed of ARKit and rendering engine:
ARKit’s ARSession is responsible for managing the information for each frame. ARSession does two things: it captures images and sensor data, and after analyzing and processing that data, it outputs results frame by frame, as shown in the diagram below:
To do AR related development, you must first be familiar with two frameworks:
- ARKit: iOS only
- ARCore: the equivalent framework on the Android platform
You can also use the Unity solution for cross-platform development.
ARKit-based augmented reality is usually built on 3D scenes (SceneKit), which is the mainstream approach; ARKit can also drive augmented reality in 2D scenes (SpriteKit). So to learn the ARKit framework, you also need to learn these two frameworks: SceneKit and SpriteKit. Both frameworks will be covered in more detail in subsequent blogs, so stay tuned.
- Relationship between ARKit and SceneKit:
ARKit is not a standalone framework; it must be used together with SceneKit:
- ARKit uses the camera to capture real-world images and reconstructs the three-dimensional world
- SceneKit renders the virtual 3D models into those images
2.1 Introduction to SpriteKit framework
- SpriteKit: Add high-performance 2D content with smooth animation to your application, or create games using a set of advanced tools based on 2D games. SpriteKit is a generic framework for drawing shapes, particles, text, images, and video in two dimensions. It leverages Metal for high-performance rendering while providing a simple programming interface that makes it easy to create games and other graphics-intensive applications. With a rich set of animations and physical behaviors, you can quickly breathe life into your visual elements and transition gracefully from screen to screen. SpriteKit supports iOS, macOS, tvOS, and watchOS, and integrates well with frameworks like GameplayKit and SceneKit.
- The SpriteKit framework structure is as follows:
2.2 Introduction to SceneKit Framework
- SceneKit framework: Used to create 3D games and add 3D content to applications using a high-level scene description. Easily add animations, physics simulation, particle effects, and physically based rendering. SceneKit combines a high-performance rendering engine with a descriptive API for importing, manipulating, and rendering 3D assets. Unlike low-level APIs such as Metal and OpenGL, which require you to precisely implement the details of the rendering algorithms that display a scene, SceneKit only requires you to describe the contents of the scene and the actions or animations you want it to perform.
- SceneKit framework structure is as follows:
The coordinate system for ARKit and SceneKit looks something like this:
The relationship diagram between ARKit and SceneKit is as follows:
2.3 ARKit framework
- ARKit’s development environment requirements:
  - Xcode version: Xcode 9 or later
  - iOS version: iOS 11 or later
  - iOS device: A9 processor or later (iPhone 6s, iPhone 6s Plus, iPhone 7, iPhone 7 Plus, iPhone SE, iPad Pro (9.7, 10.5, 12.9), iPad (2017))
  - macOS version: 10.12.4 or later (required to install Xcode 9)
- ARKit framework class structure summary:
2.3.1 Relationship between the ARKit framework and other frameworks
ARSCNView, which displays 3D virtual augmented reality in ARKit framework, inherits from SCNView in SceneKit framework, and SCNView inherits from UIView in UIKit framework.
UIView is used to display the view in the window of the iOS device, SCNView is used to display a 3D scene, and ARSCNView is used to display a 3D scene, but the 3D scene is made up of real world images captured by the camera.
ARSCNView is just a view container that manages an ARSession
In a complete virtual augmented reality experience, ARKit framework is only responsible for transforming the real world picture into a 3D scene. This transformation process is mainly divided into two steps: ARCamera is responsible for capturing the camera picture, and ARSession is responsible for building the 3D scene.
To add virtual objects to the 3D real-world scene, ARSCNView uses functionality from its parent class SCNView; this capability has existed since iOS 8 (SceneKit was introduced in iOS 8). All of ARSCNView's properties and methods related to the scene and virtual objects come from its parent class SCNView.
2.3.2 Working principle of ARKit framework
ARKit provides two virtual augmented reality (AR) views: ARSCNView for 3D and ARSKView for 2D. Whichever view is used, the camera image is used as the background view, and that camera image is captured by the framework's camera class, ARCamera.
However, there is no direct relationship between ARSCNView and ARCamera; the communication bridge between them is the AR session, represented by the heavyweight ARSession class in the ARKit framework.
To run an ARSession, we have to specify a configuration object, ARSessionConfiguration, which is responsible for tracking the camera's position in the 3D world and capturing feature scenes (such as plane detection).
ARSessionConfiguration is a parent class; to actually see an augmented reality effect, Apple recommends using its subclass ARWorldTrackingSessionConfiguration, which is only supported on devices with an A9 chip or later, that is, iPhone 6s and later.
The following figure shows the role of the ARSession:
The two main participants in the bridge that ARSession builds are ARWorldTrackingSessionConfiguration and ARFrame.
ARWorldTrackingSessionConfiguration (the session tracking configuration) tracks the device's orientation and position and detects real-world surfaces seen by the device's camera. Internally it runs a large set of algorithms and calls the iPhone's sensors to detect movement, rotation, and even tumbling.
When ARWorldTrackingSessionConfiguration computes the camera's position in the 3D world, it does not hold the position data itself; it hands the computed position data to ARSession to manage, and the class that corresponds to the camera's position data is ARFrame.
The ARSession class has a property called currentFrame, which holds the current ARFrame object.
ARCamera is only responsible for capturing images and does not participate in data processing. It is one element of the 3D scene: every 3D scene has a camera, which determines our field of view when looking at objects.
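A minimal sketch of reading the camera data that ARSession maintains (the `session` variable is assumed to be an already running ARSession):

```swift
import ARKit

// Read the latest ARFrame the session holds and inspect the camera pose it carries.
func logCameraPose(of session: ARSession) {
    guard let frame = session.currentFrame else { return }   // ARFrame for the current video frame
    let camera = frame.camera                                 // ARCamera owned by that frame
    let position = camera.transform.columns.3                 // camera position in world space
    print("Tracking state: \(camera.trackingState), position: \(position.x), \(position.y), \(position.z)")
}
```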
The following diagram shows the relationship between ARSession and ARFrame:
2.3.3 ARKit framework workflow
- ARKit framework workflow:
- ARSCNView loads the scene SCNScene
- SCNScene starts the camera (ARCamera) to capture the scene
- Once the scene is captured, ARSCNView starts handing the scene data to the session
- The session runs the ARSessionConfiguration to track the scene and returns ARFrames
- A child node (the 3D object model) is added to the ARSCNView scene (see the sketch below)
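A minimal sketch of this workflow (the class and node names are assumptions for illustration):

```swift
import UIKit
import ARKit
import SceneKit

class ARWorkflowViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.scene = SCNScene()                      // ARSCNView loads an SCNScene

        // Add a child node (a simple box) 0.5 m in front of the initial camera position.
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(0, 0, -0.5)
        sceneView.scene.rootNode.addChildNode(box)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // The session tracks the scene using the configuration and returns ARFrames.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```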
The working flow chart of ARKit framework is as follows:
- The class relationships within the ARKit framework are as follows:
- ARSCNView, which displays 3D virtual augmented reality in ARKit framework, inherits from SCNView in SceneKit framework, which in turn inherits from UIView in UIKit framework.
- In a complete virtual augmented reality experience, ARKit framework is only responsible for transforming the real world picture into a 3D scene. This transformation process is mainly divided into two steps: ARCamera is responsible for capturing the camera picture, and ARSession is responsible for building the 3D scene.
- There is no direct relationship between ARSCNView and ARCamera; their bridge is built through an AR session, the heavyweight ARSession class in the ARKit framework.
- To run an ARSession, you must specify a session tracking configuration object: ARSessionConfiguration. Its main purpose is to track the camera's position in the 3D world and capture feature scenes (such as plane detection). The class itself is relatively simple but powerful.
3. ARKit core classes
- The header is defined as follows:
#import <ARKit/ARError.h>
#import <ARKit/ARSession.h>
#import <ARKit/ARConfiguration.h>
#import <ARKit/ARFrame.h>
#import <ARKit/ARCamera.h>
#import <ARKit/ARHitTestResult.h>
#import <ARKit/ARLightEstimate.h>
#import <ARKit/ARPointCloud.h>
#import <ARKit/ARReferenceImage.h>
#import <ARKit/ARReferenceObject.h>
#import <ARKit/ARVideoFormat.h>
#import <ARKit/ARCollaborationData.h>
#import <ARKit/ARWorldMap.h>
#import <ARKit/ARRaycastQuery.h>
#import <ARKit/ARTrackedRaycast.h>
#import <ARKit/ARRaycastResult.h>
#import <ARKit/ARAnchor.h>
#import <ARKit/AREnvironmentProbeAnchor.h>
#import <ARKit/ARFaceAnchor.h>
#import <ARKit/ARFaceGeometry.h>
#import <ARKit/ARImageAnchor.h>
#import <ARKit/ARObjectAnchor.h>
#import <ARKit/ARParticipantAnchor.h>
#import <ARKit/ARPlaneAnchor.h>
#import <ARKit/ARPlaneGeometry.h>
#import <ARKit/ARSkeleton.h>
#import <ARKit/ARSkeletonDefinition.h>
#import <ARKit/ARBody2D.h>
#import <ARKit/ARBodyAnchor.h>
#import <ARKit/ARCoachingOverlayView.h>
#import <ARKit/ARSCNView.h>
#import <ARKit/ARSKView.h>
#import <ARKit/ARMatteGenerator.h>
#import <ARKit/ARQuickLookPreviewItem.h>
3.1 ARError
- ARError: The type of error thrown by ARKit methods. ARError is a structure:
struct ARError
The error code for ARError is as follows:
- CameraUnauthorized: An error code indicating that the application does not have user permissions to use the camera.
- FileIOFailed: An error code indicating that ARKit cannot read or write the required files.
- InsufficientFeatures: Error code indicates that ARKit does not have the features needed to complete the task.
- InvalidCollaborationData: an error code indicating that ARKit was unable to update the application's session with the given collaboration data object.
- InvalidConfiguration: Error code indicating that the configuration contains ambiguous or incorrect data.
- InvalidReferenceImage: Error code indicating that the reference image is not available.
- InvalidReferenceObject: Error code indicating that a reference object is not available.
- InvalidWorldMap: Error code indicating that the world map is unavailable.
- MicrophoneUnauthorized: An error code indicating that the application does not have user permissions to use the microphone.
- ObjectMergeFailed: An error code indicating that ARKit was unable to merge detected objects.
- SensorFailed: an error code indicating that a sensor failed to provide the required input.
- SensorUnavailable: Error code indicating that the sensor needed to run the session is not available.
- UnsupportedConfiguration: an error code indicating that the configuration you are running is not supported on the current iOS device.
- WorldTrackingFailed: an error code indicating that world tracking has encountered a fatal error.
3.2 ARSession
- ARSession: The primary object used to control the augmented reality experience.
The ARSession object coordinates the main processes that ARKit executes on your behalf to create an augmented reality experience. These processes include reading data from the device’s motion-sensing hardware, controlling the device’s built-in camera, and performing image analysis of captured camera images. The session synthesizes all of these results to establish a correspondence between the real space where the device resides and the virtual space where the AR content is modeled.
class ARSession : NSObject
The ARSession class definition:
ARSession properties and methods:
3.2.1 ARSessionObserver
- ARSessionObserver: Provides methods that can be implemented in response to changes in the AR session state.
protocol ARSessionObserver
This protocol defines optional methods common to the ARSessionDelegate, ARSCNViewDelegate, and ARSKViewDelegate protocols. You can implement the methods of this protocol when you adopt one of these protocols.
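A minimal sketch of responding to these state changes (here the observer methods are adopted through ARSessionDelegate, which inherits ARSessionObserver; the class name is an assumption):

```swift
import ARKit

// React to changes in the AR session's state.
class SessionStateObserver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        print("Tracking state changed: \(camera.trackingState)")
    }

    func session(_ session: ARSession, didFailWithError error: Error) {
        print("Session failed: \(error.localizedDescription)")
    }

    func sessionWasInterrupted(_ session: ARSession) {
        print("Session interrupted (for example, the app moved to the background)")
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        print("Interruption ended; consider resetting tracking")
    }
}
```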
3.2.2 ARSessionDelegate
- ARSessionDelegate: Provides methods to receive captured video frame images and track state from an AR session.
protocol ARSessionDelegate
Implement this protocol if you need to work directly with the ARFrame object captured by the session, or if you need to directly track changes to the tracked ARAnchor object set of the session. Typically, you use this protocol when building custom views to display AR content — if you use SceneKit or SpriteKit to display content, the ARSCNViewDelegate and ARSKViewDelegate protocols provide similar information and integrate with these technologies. This protocol extends the ARSessionObserver protocol, so your session delegate can also implement those methods in response to changes in session state.
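A minimal sketch of such a delegate (the class name is an assumption; keep a strong reference to the delegate object elsewhere, since the session's delegate property is weak):

```swift
import ARKit

// Work directly with the ARFrames and ARAnchors the session produces,
// which is useful when building a custom renderer.
class FrameDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Called once per captured video frame.
        print("Frame at \(frame.timestamp) with \(frame.anchors.count) anchors")
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            print("Anchor added: \(anchor.identifier)")
        }
    }
}
```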
3.3 ARConfiguration
- ARConfiguration: An object that defines the specific ARKit features enabled in the session at a given time.
class ARConfiguration : NSObject
ARConfiguration defines a base class for the different options you can configure in an AR experience. You don't instantiate ARConfiguration yourself; instead, you instantiate one of its subclasses.
To implement the features of the AR experience, you need to create one of the following subclasses and run it in your session with run(with:). A session can only run one configuration at a time, so choose the configuration that best promotes the desired AR experience. ARConfiguration provides the following configurations:
- ARWorldTrackingConfiguration: tracks the device's position and orientation relative to any surfaces, people, or known images and objects that ARKit may find and track using the device's rear camera.
- ARBodyTrackingConfiguration: lets you track a person, planes, and images using the device's rear camera.
- AROrientationTrackingConfiguration: tracks only the device's orientation using the rear camera.
- ARImageTrackingConfiguration: tracks only known images using the device's rear camera.
- ARFaceTrackingConfiguration: tracks only faces in the device's front camera, including their movements and facial expressions.
- ARObjectScanningConfiguration: uses the rear camera to collect high-fidelity data about specific objects that you want your application to recognize at runtime.
- ARPositionalTrackingConfiguration: tracks only the device's position in 3D space.
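A minimal sketch of checking device support and running one of these configurations with run(_:options:) (the `session` variable is an assumption):

```swift
import ARKit

// Pick the configuration subclass that fits the experience, check support,
// and run it on the session. Only one configuration runs at a time.
func startWorldTracking(on session: ARSession) {
    guard ARWorldTrackingConfiguration.isSupported else {
        print("World tracking is not supported on this device")
        return
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```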
The ARConfiguration class structure:
Specific definitions are as follows:
/** An object that describes and configures the augmented reality techniques to be used in an ARSession. */
API_AVAILABLE(ios(11.0))
@interface ARConfiguration : NSObject <NSCopying>

/** Determines whether this device supports the ARConfiguration. */
@property (class, nonatomic, readonly) BOOL isSupported;

/** A list of video formats supported by this configuration and device.
 @discussion The first element in the list is the default format for session output. */
@property (class, nonatomic, readonly) NSArray<ARVideoFormat *> *supportedVideoFormats API_AVAILABLE(ios(11.3));

/** The video format of the session output. */
@property (nonatomic, strong) ARVideoFormat *videoFormat API_AVAILABLE(ios(11.3));

/** Determines how the coordinate system should be aligned with the world.
 @discussion The default is ARWorldAlignmentGravity. */
@property (nonatomic, assign) ARWorldAlignment worldAlignment;

/** Enable or disable light estimation.
 @discussion Enabled by default. */
@property (nonatomic, assign, getter=isLightEstimationEnabled) BOOL lightEstimationEnabled;

/** Determines whether audio data is captured and provided.
 @discussion Disabled by default. */
@property (nonatomic, assign) BOOL providesAudioData;

/** The type of semantic understanding to provide for each frame.
 @discussion Use the supportsFrameSemantics class method on the configuration type you plan to run to check whether the
 set of frame semantics is supported. For example, when running a session with a configuration of type
 ARWorldTrackingConfiguration, call +[ARWorldTrackingConfiguration supportsFrameSemantics:] to perform the check.
 An exception is thrown if the option is not supported. The default is ARFrameSemanticNone.
 @see ARFrameSemantics
 @see +[ARConfiguration supportsFrameSemantics:] */
@property (nonatomic, assign) ARFrameSemantics frameSemantics API_AVAILABLE(ios(13.0));

/** Determines whether the device and ARConfiguration class support the given frame semantics.
 @discussion Not all devices support semantic frame understanding. Use this class method on the configuration type you
 plan to run to check whether the set of frame semantics is supported.
 @see ARFrameSemantics */
+ (BOOL)supportsFrameSemantics:(ARFrameSemantics)frameSemantics API_AVAILABLE(ios(13.0));

/** Unavailable */
- (instancetype)init NS_UNAVAILABLE;
+ (instancetype)new NS_UNAVAILABLE;

@end
3.3.1 ARObjectScanningConfiguration
- ARObjectScanningConfiguration: a configuration used to collect high-fidelity spatial data about real objects in the physical environment.
class ARObjectScanningConfiguration : ARConfiguration
ARObjectScanningConfiguration class definition:
/** Configures the scan object. The @Discussion object scan configuration runs world tracing, capturing additional details to create reference objects. Running an object scan consumes additional power to provide more detailed functionality. The createReferenceObject method can be called in the session to capture a scan of objects in the world. * /
API_AVAILABLE(ios(12.0))
@interface ARObjectScanningConfiguration : ARConfiguration
/** Enable or disable continuous autofocus. @discussion is enabled by default. */
@property (nonatomic, assign, getter=isAutoFocusEnabled) BOOL autoFocusEnabled;
/** The types of planes to detect in the scene. @discussion If set, new planes will continue to be detected and updated over time. Detected planes will be added to the session as ARPlaneAnchor objects. If two planes are merged, the newer plane will be removed. The default is ARPlaneDetectionNone. */
@property (nonatomic, assign) ARPlaneDetection planeDetection;
- (instancetype)init;
+ (instancetype)new NS_SWIFT_UNAVAILABLE("Use init() instead");
@end
To create an app that recognizes objects in the physical environment, first scan them during development using ARObjectScanningConfiguration. After scanning an object, call createReferenceObject(transform:center:extent:completionHandler:) to turn it into an ARReferenceObject that can be detected again at runtime. When the user runs your application, you ask ARKit to find the scanned objects by running a world tracking configuration and assigning the reference objects to its detectionObjects property.
- ARObjectScanningConfiguration is intended only for development scenarios. Because of the high performance and energy cost of the high-fidelity spatial mapping required for object scanning, many ARKit features that are not needed for object scanning are disabled.
- createReferenceObject(transform:center:extent:completionHandler:) description: creates a reference object (for 3D object detection) from a specified region of the session's world space. It is defined as follows:
func createReferenceObject(transform: simd_float4x4,
                           center: simd_float3,
                           extent: simd_float3,
                           completionHandler: @escaping (ARReferenceObject?, Error?) -> Void)
This function returns an ARReferenceObject representing the specified region of the world map. Parameter Description:
- transform: the transform matrix that defines the origin and orientation of the local coordinate system of the region to extract.
- center: a point, relative to the origin specified by transform, that defines the center of the bounding box of the region to extract.
- extent: the width, height, and depth of the region to extract, centered on the center point and oriented to the local coordinate system specified by the transform.
- completionHandler: a handler called asynchronously after ARKit finishes creating the reference object. The handler takes two arguments:
  - referenceObject: the resulting ARReferenceObject, or nil if a reference object could not be created.
  - error: if referenceObject is nil, an ARError describing the failure.
Note: this method is valid only on a session running an ARObjectScanningConfiguration, the configuration that supports the high-fidelity spatial data collection needed for scanning reference objects. Calling this method on a session with a different configuration immediately invokes the completionHandler with an error.
To use the extracted reference object for 3D object detection, assign it to the detectionObjects property of a world tracking configuration. You can bundle reference objects with your application by saving them to a file and adding them to the Xcode assets directory.
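A minimal sketch of this scanning flow (the session is assumed to already be running an ARObjectScanningConfiguration, and the region transform, center, and extent values are placeholders):

```swift
import ARKit
import simd

// Capture a scanned object as an ARReferenceObject, then detect it with world tracking.
func captureScannedObject(from session: ARSession) {
    let regionTransform = matrix_identity_float4x4      // origin/orientation of the region to extract (placeholder)
    let center = simd_float3(0, 0, -0.5)                 // center of the bounding box, relative to the transform
    let extent = simd_float3(0.3, 0.3, 0.3)              // width, height, depth of the region in meters

    session.createReferenceObject(transform: regionTransform,
                                  center: center,
                                  extent: extent) { referenceObject, error in
        guard let referenceObject = referenceObject else {
            print("Scan failed: \(String(describing: error))")
            return
        }
        // The object could also be saved with export(to:previewImage:) and bundled in the asset catalog.
        // At runtime, detect the scanned object with a world tracking configuration.
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionObjects = [referenceObject]
        session.run(configuration)
    }
}
```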
When ARKit detects a reference object, the transform of the resulting ARObjectAnchor is based on the orientation of the reference object's coordinate system, that is, the transform specified when extracting the reference object. For example, if the reference object represents a physical item sitting on a horizontal surface, virtual content should appear to rest on whatever surface the physical object sits on. To adjust the origin of a reference object after extraction, use the applyingTransform(_:) method.
- applyingTransform(_:) method description: this method returns a new reference object created by applying the specified transform to this reference object's geometric data.
func applyingTransform(_ transform: simd_float4x4) -> ARReferenceObject
The transform parameter is a transform matrix in the local coordinate space of the reference object.
A reference object's local coordinate space is defined when it is extracted from an ARWorldMap. If an existing reference object has a local coordinate origin that does not match the intended use of the object, call this method to change the origin of the reference object relative to the physical object it represents.
When ARKit detects a reference object, the resulting transformation of ARObjectAnchor is based on the organization of the reference object’s coordinate system. For example, if the reference object represents a physical item on a horizontal plane, the virtual content should appear on any surface that the physical object does. Therefore, it is often useful to align the coordinate origin of the reference object with the bottom of the physical object.
3.3.2 ARBodyTrackingConfiguration
- ARBodyTrackingConfiguration: a configuration used to track a person's body movement in 3D space.
class ARBodyTrackingConfiguration : ARConfiguration
When ARKit recognizes a person in the rear camera feed, it calls session(_:didAdd:) and passes you an ARBodyAnchor that you can use to track the body's movement. Plane detection and image detection are enabled, so if you use the body anchor to display a virtual character, you can place the character on a chosen surface or image. By default, the ARConfiguration.FrameSemantics option bodyDetection is enabled, which gives you access to the person's joints via the frame's detectedBody property whenever ARKit detects a person in the camera feed.
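A minimal sketch of receiving that body anchor (the class name is an assumption; body tracking additionally requires an A12 chip or later):

```swift
import ARKit

// Run body tracking and receive the ARBodyAnchor that ARKit adds
// when it recognizes a person in the rear camera feed.
class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func run(on session: ARSession) {
        guard ARBodyTrackingConfiguration.isSupported else { return }  // requires A12 or later
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Root transform of the tracked body; individual joints come from the skeleton.
            print("Body detected at \(bodyAnchor.transform.columns.3)")
            print("Tracked joints: \(bodyAnchor.skeleton.definition.jointNames.count)")
        }
    }
}
```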
3.3.2.1 ARBodyTrackingConfiguration class definition
/** Configuration for running body tracking. @Discussion Body Tracking provides 6 degrees of freedom for tracking the body being examined in the field. By default, ARFrameSemanticBodyDetection is enabled. @see ARBodyAnchor @see -[ARFrame detectedBody] */
API_AVAILABLE(ios(13.0))
@interface ARBodyTrackingConfiguration : ARConfiguration
/** Enable or disable continuous autofocus. @discussion is enabled by default. */
@property (nonatomic, assign, getter=isAutoFocusEnabled) BOOL autoFocusEnabled;
/** World trace will locate and track the initial map of physical space. @discussion If set, the session will attempt to localize to the provided map and will invoke a limited trace state specifying a different (or no) initial map before localization succeeds or runs again. Once localized, the map is extended and can be saved again in session using the 'getCurrentWorldMap' method. * /
@property (nonatomic, strong, nullable) ARWorldMap *initialWorldMap;
/** Run the environment texture mode. @ Discussion If set, texture information will be accumulated and updated. Adding an AREnvironmentProbeAnchor to the session will get the current environment texture that can be used to illuminate virtual objects in the scene from the perspective of the probe. The default is AREnvironmentTexturingNone. * /
@property (nonatomic, assign) AREnvironmentTexturing environmentTexturing;
/** Determines whether the environment texture has a high dynamic range. This function is enabled by default. * /
@property (nonatomic, assign) BOOL wantsHDREnvironmentTextures;
/** The types of planes to detect in the scene. @discussion If set, new planes will continue to be detected and updated over time. Detected planes will be added to the session as ARPlaneAnchor objects. If two planes are merged, the newer plane will be removed. The default is ARPlaneDetectionNone. */
@property (nonatomic, assign) ARPlaneDetection planeDetection;
/** Check the image in the scene. @discussion If set, the session will attempt to detect the specified image. When an image is detected, ARImageAnchor is added to the session. * /
@property (nonatomic, copy) NSSet<ARReferenceImage *> *detectionImages;
/** Enables the estimation of the scale factor that can be used to correct the physical size of the image. @Discussion If set to true ARKit, attempts will be made to use the computed camera position to calculate the difference between the scale and the estimate for a given physical size. Information about the estimated scale can be found on the estimatedScaleFactor property on the ARImageAnchor. @note When set to true, the conversion of the returned ARImageAnchor will correct the conversion using an estimated scaling factor. The default value is NO. * /
@property (nonatomic, assign) BOOL automaticImageScaleEstimationEnabled;
/** Allows estimating the scale factor that can be used to correct the physical size of the bone in 3D. @discussion If set to true ARKit, attempts will be made to use the computed camera position to calculate that the ratio for a given physical size is different from the default. Information about the estimated scale can be found on the estimatedScaleFactor property on the ARBodyAnchor. @note When set to true, the conversion of the returned ARBodyAnchor will correct the conversion using an estimated scaling factor. The default value is NO. * /
@property (nonatomic, assign) BOOL automaticSkeletonScaleEstimationEnabled;
/** The maximum number of images tracked simultaneously. @discussion Setting the maximum number of tracked images will limit the number of images that can be tracked in a given frame. If the number of images seen exceeds the maximum, only images that have already been traced will continue to be traced until the trace is lost or another image is deleted. The image will continue to be detected regardless of what image is being tracked. The default value is 0. * /
@property (nonatomic, assign) NSInteger maximumNumberOfTrackedImages;
- (instancetype)init;
+ (instancetype)new NS_SWIFT_UNAVAILABLE("Use init() instead");
@end
3.3.2.2 ARBodyTrackingConfiguration properties and methods
- Creating a Configuration
// Create a new body tracking configuration.
init()
// Attempts to use this session configuration to restore the state of the previous AR session.
var initialWorldMap: ARWorldMap?
- Estimated body proportion
// A flag that determines whether ARKit estimates the height of the object it is tracking.
var automaticSkeletonScaleEstimationEnabled: Bool
- Auto focus is supported
// A Boolean value that determines whether the device camera uses fixed or automatic focal length.
var isAutoFocusEnabled: Bool
- Plane detection
// A value that specifies whether and how the session attempts to automatically detect planes in the image captured by the camera.
var planeDetection: ARWorldTrackingConfiguration.PlaneDetection
// Select whether and how to detect the captured image plane
struct ARWorldTrackingConfiguration.PlaneDetection
- Enabling image tracking
// A flag instructs ARKit to estimate and set the size of the tracking image on your behalf.
var automaticImageScaleEstimationEnabled: Bool
//A set of images that ARKit is trying to detect in the user environment.
var detectionImages: Set<ARReferenceImage>
// The maximum number of detected images for simultaneous tracking motion.
var maximumNumberOfTrackedImages: Int
- Adding realistic reflections
// a flag instructs ARKit to create an HDR environment texture.
var wantsHDREnvironmentTextures: Bool
// Used to generate environment textures.
var environmentTexturing: ARWorldTrackingConfiguration.EnvironmentTexturing
3.3.3 ARFaceTrackingConfiguration
- ARFaceTrackingConfiguration: use this configuration when you only want to track faces with the device's front-facing camera.
class ARFaceTrackingConfiguration : ARConfiguration
The face tracking configuration detects faces that can be seen in the device’s front-facing camera. When ARKit detects a face, it creates an ARFaceAnchor object that provides information about the face’s position and orientation, topology, and features that describe facial expressions.
Face tracking is only available on iOS devices with a front-facing TrueDepth camera (see the iOS Device Compatibility Reference). Before offering users any face tracking feature, use the ARFaceTrackingConfiguration.isSupported property to determine whether the current device supports face tracking.
The ARFaceTrackingConfiguration class provides no methods or properties of its own, but inherits all of the properties of its superclass ARConfiguration. In addition, when you enable the isLightEstimationEnabled setting, the face tracking configuration uses the detected face as a light probe and provides an estimate of directional or ambient lighting (an ARDirectionalLightEstimate object).
Face Tracking provides personal facial information for your application. If you use ARKit face tracking, your app must include a privacy policy that describes to the user how you plan to use face tracking and face data. For details, see the Apple Developer Program License Agreement.
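A minimal sketch of that support check before running face tracking (the `session` variable is an assumption):

```swift
import ARKit

// Check for a TrueDepth front camera before offering any face tracking feature,
// then run the configuration with light estimation enabled.
func startFaceTracking(on session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else {
        print("Face tracking requires a device with a TrueDepth front camera")
        return
    }
    let configuration = ARFaceTrackingConfiguration()
    configuration.isLightEstimationEnabled = true   // detected face is used as a light probe
    session.run(configuration, options: [.resetTracking])
}
```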
3.3.3.1 ARFaceTrackingConfiguration class definition
/** A configuration for running face tracking. @discussion Face tracking uses the front-facing camera to track faces in 3D, providing details about face topology and expressions. A detected face is added to the session as an ARFaceAnchor object that contains information about head pose, face mesh, eye pose, and blend shape coefficients. If light estimation is enabled, the detected face is treated as a light probe used to estimate the direction of incoming light. */
API_AVAILABLE(ios(11.0))
@interface ARFaceTrackingConfiguration : ARConfiguration
/** The maximum number of faces that can be tracked simultaneously. */
@property (class, nonatomic, readonly) NSInteger supportedNumberOfTrackedFaces API_AVAILABLE(ios(13.0));

/** The maximum number of faces to track simultaneously.
 @discussion Setting the maximum number of tracked faces limits the number of faces that can be tracked in a given frame.
 If more than the maximum are visible, only the faces already being tracked will continue to be tracked until tracking is
 lost or another face is removed. The default value is 1. */
@property (nonatomic, assign) NSInteger maximumNumberOfTrackedFaces API_AVAILABLE(ios(13.0));

/** Indicates whether world tracking can be enabled on this device. */
@property (class, nonatomic, readonly) BOOL supportsWorldTracking API_AVAILABLE(ios(13.0));

/** Enable or disable world tracking. Disabled by default.
 @discussion When enabled, the ARSession uses the rear camera to track the device's orientation and position in the
 world. The camera transform and the ARFaceAnchor transforms will be in the world coordinate space. */
@property (nonatomic, assign, getter=isWorldTrackingEnabled) BOOL worldTrackingEnabled API_AVAILABLE(ios(13.0));

- (instancetype)init;
+ (instancetype)new NS_SWIFT_UNAVAILABLE("Use init() instead");
@end
3.3.3.2 ARFaceTrackingConfiguration properties and methods
- Creating a Configuration
// Create a new face tracking configuration.
init()
- Enable World Tracking
// Whether iOS devices support world tracking and face tracking.
class var supportsWorldTracking: Bool
// Enables world tracking together with face tracking.
var isWorldTrackingEnabled: Bool
- Tracking multiple faces
// The maximum number of faces that can be traced simultaneously.
var maximumNumberOfTrackedFaces: Int
// The maximum number of faces that ARKit can track simultaneously.
class var supportedNumberOfTrackedFaces: Int
3.3.4 ARWorldTrackingConfiguration
- ARWorldTrackingConfiguration: tracks the device's position and orientation relative to any surfaces, people, or known images and objects that ARKit may find and track using the device's rear camera.
class ARWorldTrackingConfiguration : ARConfiguration
All AR configurations establish a mapping between the real world in which the device resides and the virtual 3D coordinate space, where you can model content. When your application mixes virtual content with live camera images, users get the illusion that your virtual content is part of the real world.
Creating and maintaining this correspondence between spaces requires tracking the device's movement. The ARWorldTrackingConfiguration class tracks the device's movement with six degrees of freedom (6DOF): the three rotation axes (roll, pitch, and yaw) and the three translation axes (movement along x, y, and z).
This kind of tracking can create immersive AR gaming experiences: a virtual object can stay in the same place relative to the real world, even if the user tilts the device above or below the object, or moves the device around to view the object's sides and back.
ARWorldTrackingConfiguration provides 6DOF (six degrees of freedom) device tracking, covering yaw, pitch, and roll angles as well as offsets along the X, Y, and Z axes of the Cartesian coordinate system, as shown in the figure below:
(Figure: ARKit, VIO (Visual-Inertial Odometry), and the IMU)
3.3.4.1 ARWorldTrackingConfiguration class definition
ARWorldTrackingConfiguration also provides several ways for your application to recognize elements of the real-world scene visible to the camera, or to interact with them:
- Use planeDetection to find horizontal or vertical surfaces of the real world and add them to the session as ARPlaneAnchor objects.
- Use detectionImages to recognize and track the movement of known 2D images, adding them to the scene as ARImageAnchor objects.
- Use detectionObjects to recognize known 3D objects and add them to the scene as ARObjectAnchor objects.
- Use the raycast function on ARFrame, ARView, ARSCNView, or ARSKView to find the 3D position of real-world features related to 2D points in the camera view.
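A minimal sketch combining these options (the `sceneView` variable and the asset catalog group names are assumptions):

```swift
import ARKit

// A world tracking configuration that combines plane detection with image and
// object detection, plus a raycast from a 2D screen point onto detected planes.
func configureWorldTracking(for sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]                 // results arrive as ARPlaneAnchor
    configuration.detectionImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) ?? []                     // results arrive as ARImageAnchor
    configuration.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "AR Objects", bundle: nil) ?? []                       // results arrive as ARObjectAnchor
    sceneView.session.run(configuration)
}

func raycast(at point: CGPoint, in sceneView: ARSCNView) {
    guard let query = sceneView.raycastQuery(from: point,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .any) else { return }
    if let result = sceneView.session.raycast(query).first {
        print("3D position for 2D point: \(result.worldTransform.columns.3)")
    }
}
```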
ARWorldTrackingConfiguration class definition:
/** Configuration to run world tracking. @discussion World tracking provides 6 degrees of freedom tracking devices. By looking for feature points in the scene, world tracking can perform hit tests on frames. Once the session is paused, tracing cannot continue. * /
API_AVAILABLE(ios(11.0))
@interface ARWorldTrackingConfiguration : ARConfiguration
/** Enable or disable continuous autofocus. @discussion is enabled by default. */
@property (nonatomic, assign, getter=isAutoFocusEnabled) BOOL autoFocusEnabled API_AVAILABLE(ios(11.3));
/** Run the environment texture mode. @ Discussion If set, texture information will be accumulated and updated. Adding an AREnvironmentProbeAnchor to the session will get the current environment texture that can be used to illuminate virtual objects in the scene from the perspective of the probe. The default is AREnvironmentTexturingNone. * /
@property (nonatomic, assign) AREnvironmentTexturing environmentTexturing API_AVAILABLE(ios(12.0));
/** Determines whether the environment texture has a high dynamic range. This function is enabled by default. * /
@property (nonatomic, assign) BOOL wantsHDREnvironmentTextures API_AVAILABLE(ios(13.0));
/** The types of planes to detect in the scene. @discussion If set, new planes will continue to be detected and updated over time. Detected planes will be added to the session as ARPlaneAnchor objects. If two planes are merged, the newer plane will be removed. The default is ARPlaneDetectionNone. */
@property (nonatomic, assign) ARPlaneDetection planeDetection;
/** World trace will locate and track the initial map of physical space. @discussion If set, the session will attempt to localize to the provided map and will invoke a limited trace state specifying a different (or no) initial map before localization succeeds or runs again. Once localized, the map is extended and can be saved again in session using the 'getCurrentWorldMap' method. * /
@property (nonatomic, strong, nullable) ARWorldMap *initialWorldMap API_AVAILABLE(ios(12.0));
/** Check the image in the scene. @discussion If set, the session will attempt to detect the specified image. When an image is detected, ARImageAnchor is added to the session. * /
@property (nonatomic, copy, null_resettable) NSSet<ARReferenceImage *> *detectionImages API_AVAILABLE(ios(11.3));
/** Enables the estimation of the scale factor that can be used to correct the physical size of the image. @Discussion If set to true, ARKit will attempt to use the computed camera position to calculate the difference between the scale and the estimate for a given physical size. Information about the estimated scale can be found on the estimatedScaleFactor property on the ARImageAnchor. @note When set to true, the conversion of the returned ARImageAnchor will correct the conversion using an estimated scaling factor. The default value is NO. * /
@property (nonatomic, assign) BOOL automaticImageScaleEstimationEnabled API_AVAILABLE(ios(13.0));
/** The maximum number of images tracked simultaneously. @discussion Setting the maximum number of tracked images will limit the number of images that can be tracked in a given frame. If the number of images seen exceeds the maximum, only images that have already been traced will continue to be traced until the trace is lost or another image is deleted. The image will continue to be detected regardless of what image is being tracked. The default value is 0. * /
@property (nonatomic, assign) NSInteger maximumNumberOfTrackedImages API_AVAILABLE(ios(12.0));
/** The object to be detected in the scene. @discussion If set, the session will attempt to detect the specified object. When an object is detected, ARObjectAnchor is added to the session. * /
@property (nonatomic, copy) NSSet<ARReferenceObject *> *detectionObjects API_AVAILABLE(ios(12.0));
/** Enable/disable a collaborative session. Disabled by default. @discussion When enabled, the ARSession will output collaboration data for other participants via its delegate's didOutputCollaborationData. It is the caller's responsibility to send the data to each participant. When data is received from a participant, it should be passed to the ARSession by calling updateWithCollaborationData. */
@property (nonatomic, assign, getter=isCollaborationEnabled) BOOL collaborationEnabled API_AVAILABLE(ios(13.0));
/** Indicates whether user face tracking using the front-facing camera can be enabled on this device. * /
@property (class, nonatomic, readonly) BOOL supportsUserFaceTracking API_AVAILABLE(ios(13.0));

/** Enables or disables running face tracking using the front-facing camera. Disabled by default.
 When enabled, the ARSession detects faces (if visible in the front-facing camera image) and adds an ARFaceAnchor
 object representing each face to its list of anchors.
 @discussion The transforms of the ARFaceAnchor objects will be in the world coordinate space.
 @see ARFaceAnchor */
@property (nonatomic, assign, getter=userFaceTrackingEnabled) BOOL userFaceTrackingEnabled API_AVAILABLE(ios(13.0));
- (instancetype)init;
+ (instancetype)new NS_SWIFT_UNAVAILABLE("Use init(a)instead");
@end
3.3.4.2 ARWorldTrackingConfiguration properties and methods
- Creating a configuration:
  - init(): Initializes a new world tracking configuration.
  - var initialWorldMap: ARWorldMap?: Attempts to use this session configuration to restore the state of a previous AR session.
- Tracking surfaces:
  - var planeDetection: ARWorldTrackingConfiguration.PlaneDetection: A value that specifies whether and how the session automatically attempts to detect planes in the image captured by the camera.
  - struct ARWorldTrackingConfiguration.PlaneDetection: Options for whether and how to detect planes in the captured image.
  - var sceneReconstruction: ARConfiguration.SceneReconstruction: A flag that enables scene reconstruction.
  - class func supportsSceneReconstruction(ARConfiguration.SceneReconstruction) -> Bool: Checks whether the device supports scene reconstruction.
- Detecting or tracking images:
  - var detectionImages: Set<ARReferenceImage>!: A set of images that ARKit tries to detect in the user's environment.
  - var maximumNumberOfTrackedImages: Int: The maximum number of detected images whose movement is tracked simultaneously.
  - var automaticImageScaleEstimationEnabled: Bool: A flag that instructs ARKit to estimate and set the size of a detected or tracked image on your behalf.
- Detecting 3D objects:
  - var detectionObjects: Set<ARReferenceObject>: A set of 3D objects that ARKit tries to detect in the user's environment.
- Tracking the user's face:
  - var userFaceTrackingEnabled: Bool: A flag that determines whether ARKit tracks the user's face during a world tracking session.
  - class var supportsUserFaceTracking: Bool: A Boolean value that tells you whether the iOS device supports tracking the user's face during a world tracking session.
- Creating realistic reflections:
  - var environmentTexturing: ARWorldTrackingConfiguration.EnvironmentTexturing: The behavior ARKit uses to generate environment textures.
  - enum ARWorldTrackingConfiguration.EnvironmentTexturing: Options for generating environment textures in a world tracking AR session.
  - class AREnvironmentProbeAnchor: An object that provides environmental lighting information for a specific area of space in a world tracking AR session.
  - var wantsHDREnvironmentTextures: Bool: A flag that instructs ARKit to create environment textures in HDR format.
- Managing device camera behavior:
  - var isAutoFocusEnabled: Bool: A Boolean value that determines whether the device camera uses fixed or automatic focus.
- Enabling multiuser collaboration:
  - var isCollaborationEnabled: Bool: A flag that opts you in to a peer-to-peer multiuser augmented reality experience.
3.3.5 AROrientationTrackingConfiguration
- AROrientationTrackingConfiguration: tracks only the device's orientation using the rear camera. Use this configuration when you only want to track the orientation of the device with its rear camera.
class AROrientationTrackingConfiguration : ARConfiguration
All AR configurations establish a correspondence between the real world in which the device resides and the virtual 3D coordinate space, where content can be modeled. When your app displays this content along with live camera images, users can get the illusion that your virtual content is part of the real world.
Creating and maintaining this correspondence between spaces requires tracking the device's movement. The AROrientationTrackingConfiguration class tracks the device's movement with three degrees of freedom (3DOF): specifically, the three rotation axes (roll, pitch, and yaw).
This basic level of motion tracking can create a limited AR experience: a virtual object can appear to be part of the real world, even as the user rotates the device to view above, below, or next to the object. However, this configuration does not track device movement: non-trivial changes in device position break the AR illusion, causing virtual content to drift relative to the real world. For example, users can’t walk around looking at the sides and backs of virtual objects. In addition, 3DOF tracing does not support flat detection or hit testing.
Because 3DOF tracking creates a limited AR experience, you usually should not use the AROrientationTrackingConfiguration class directly. Instead, use ARWorldTrackingConfiguration for six degrees of freedom, plane detection, and hit testing. Use 3DOF tracking only as a fallback in cases where 6DOF tracking is temporarily unavailable.
3.3.5.1 AROrientationTrackingConfiguration class definition
The AROrientationTrackingConfiguration class is defined as follows:
/** A configuration for running orientation tracking. @discussion Orientation tracking provides 3 degrees of freedom for device tracking. */
API_AVAILABLE(ios(11.0))
@interface AROrientationTrackingConfiguration : ARConfiguration
/** Enable or disable continuous autofocus. @discussion Enabled by default. */
@property (nonatomic, assign, getter=isAutoFocusEnabled) BOOL autoFocusEnabled API_AVAILABLE(ios(11.3));
- (instancetype)init;
+ (instancetype)new NS_SWIFT_UNAVAILABLE("Use init() instead");
@end
3.3.5.2 AROrientationTrackingConfiguration properties and methods
- Creating a configuration:
  - init(): Initializes a new orientation tracking configuration.
- Managing device camera behavior:
  - var isAutoFocusEnabled: Bool: A Boolean value that determines whether the device camera uses fixed or automatic focus.
3.3.6 ARImageTrackingConfiguration
- ARImageTrackingConfiguration: tracks only known images using the device's rear camera. Use this configuration when you only want to track known images seen by the device's camera.
class ARImageTrackingConfiguration : ARConfiguration
All AR configurations establish a correspondence between the real world in which the device resides and the virtual 3D coordinate space, where content can be modeled. When your app displays this content along with live camera images, users can get the illusion that your virtual content is part of the real world.
With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the device's movement relative to the world, but only by detecting and tracking the movement of known 2D images in the camera's view. ARWorldTrackingConfiguration can also detect images, but each configuration has its own advantages. ARImageTrackingConfiguration's characteristics are listed below:
- Image-only tracking has a lower performance cost than world tracking, so with ARImageTrackingConfiguration your session can reliably track more images at a time.
- Image-only tracking lets you anchor virtual content to known images only while those images are in the camera's field of view. World tracking with image detection lets you use known images to add virtual content to the 3D world, and it continues to track the position of that content in world space even after the image leaves view.
- World tracking works best in a stable, stationary environment. There are more situations where you can use image tracking to attach virtual content to a known image, for example an advertisement inside a moving subway car.
When the image-tracking configuration detects known images, it tracks their movement with six degrees of freedom (6DOF): specifically, the three rotation axes (roll, pitch, and yaw) and the three translation axes (movement along x, y, and z).
To use ARImageTrackingConfiguration, create ARReferenceImage objects (at runtime or by bundling them in your Xcode asset catalog) and assign them to the configuration's trackingImages property. Then, as with any AR configuration, pass the configuration to the session's run(_:options:) method.
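A minimal sketch of that setup (the `session` variable and the asset catalog group name "AR Resources" are assumptions):

```swift
import ARKit

// Load reference images bundled in the Xcode asset catalog, assign them to
// trackingImages, and run the image-only tracking configuration.
func startImageTracking(on session: ARSession) {
    guard let images = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                        bundle: nil) else { return }
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = images
    configuration.maximumNumberOfTrackedImages = 4   // track up to four known images at once
    session.run(configuration)
}
```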
3.3.6.1 ARImageTrackingConfiguration class definition
ARImageTrackingConfiguration class definition:
/** Run the image tracking configuration. @ Discussion Image tracking provides 6 degrees of freedom for tracking known images. Four images can be tracked simultaneously. * /
API_AVAILABLE(ios(12.0))
@interface ARImageTrackingConfiguration : ARConfiguration
/** Enable or disable continuous autofocus. @discussion Enabled by default. */
@property (nonatomic, assign, getter=isAutoFocusEnabled) BOOL autoFocusEnabled;
/** Trace the image in the scene. * /
@property (nonatomic, copy) NSSet<ARReferenceImage *> *trackingImages;
/** The maximum number of images tracked simultaneously. @discussion Setting the maximum number of tracked images will limit the number of images that can be tracked in a given frame. If the number of images seen exceeds the maximum, only images that have already been traced will continue to be traced until the trace is lost or another image is deleted. The default value is 1. * /
@property (nonatomic, assign) NSInteger maximumNumberOfTrackedImages;
- (instancetype)init;
+ (instancetype)new NS_SWIFT_UNAVAILABLE("Use init() instead");
@end
3.3.6.2 ARImageTrackingConfiguration properties
- Creating a configuration:
  - init(): Initializes a new image tracking configuration.
- Choosing images to track:
  - var trackingImages: Set<ARReferenceImage>: A set of images that ARKit tries to detect and track in the user's environment.
  - var maximumNumberOfTrackedImages: Int: The maximum number of moving images to track simultaneously.
- Managing device camera behavior:
  - var isAutoFocusEnabled: Bool: A Boolean value that determines whether the device camera uses fixed or automatic focus.
3.3.7 ARObjectScanningConfiguration
- ARObjectScanningConfiguration: a configuration for collecting high-fidelity spatial data about real objects in the physical environment.
class ARObjectScanningConfiguration : ARConfiguration
To create an application that recognizes objects in the physical environment:
- First, scan the objects during development using ARObjectScanningConfiguration.
- After scanning an object, call createReferenceObject(transform:center:extent:completionHandler:) to turn it into an ARReferenceObject that can be detected again at runtime.
- When the user runs your application, ask ARKit to find the scanned objects by running a world tracking configuration and assigning the reference objects to its detectionObjects property.
ARObjectScanningConfiguration is intended only for development scenarios. Because of the high performance and energy cost of the high-fidelity spatial mapping required for object scanning, many ARKit features that are not needed for object scanning are disabled.
3.3.7.1 ARObjectScanningConfiguration class definition
The ARObjectScanningConfiguration class definition:
/** Configures the scan object. The @Discussion object scan configuration runs world tracing, capturing additional details to create reference objects. Running an object scan consumes additional power to provide more detailed functionality. The createReferenceObject method can be called in the session to capture a scan of objects in the world. * /
API_AVAILABLE(ios(12.0))
@interface ARObjectScanningConfiguration : ARConfiguration
/** Enable or disable continuous autofocus. @discussion is enabled by default. */
@property (nonatomic, assign, getter=isAutoFocusEnabled) BOOL autoFocusEnabled;
/** The types of planes to detect in the scene. @discussion If set, new planes will continue to be detected and updated over time. Detected planes will be added to the session as ARPlaneAnchor objects. If two planes are merged, the newer plane will be removed. The default is ARPlaneDetectionNone. */
@property (nonatomic, assign) ARPlaneDetection planeDetection;
- (instancetype)init;
+ (instancetype)new NS_SWIFT_UNAVAILABLE("Use init() instead");
@end
3.3.7.2 ARObjectScanningConfiguration Attributes
- Create a configuration:
init()
: Initializes a new object scan configuration.
- Detecting planes:
var planeDetection: ARWorldTrackingConfiguration.PlaneDetection
: A value that specifies whether and how the session attempts to automatically detect planes in the image captured by the camera.
struct ARWorldTrackingConfiguration.PlaneDetection
: Options for whether and how to detect planes in the captured image.
- Managing device camera behavior:
var isAutoFocusEnabled: Bool
: A Boolean value that determines whether the device camera uses fixed or automatic focal length.
3.3.8 ARPositionalTrackingConfiguration
- ARPositionalTrackingConfiguration: The configuration to use when you only want to track the device’s position in space.
class ARPositionalTrackingConfiguration : ARConfiguration
It supports 6 degrees of freedom tracking of an iOS device by running the camera at the lowest possible resolution and frame rate. Use this configuration when you do not need to process the camera feed (for example, in virtual reality scenarios).
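A minimal sketch of running this configuration when only the device pose is needed (session is assumed to be an existing ARSession):
import ARKit

func runPositionalTracking(on session: ARSession) {
    guard ARPositionalTrackingConfiguration.isSupported else { return }
    let configuration = ARPositionalTrackingConfiguration()
    configuration.planeDetection = [.horizontal]   // optional plane detection
    session.run(configuration)
}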
3.3.8.1 ARPositionalTrackingConfiguration Class Definition
/** A configuration for running positional tracking. @discussion Positional tracking provides 6 degrees of freedom of device tracking by running the camera at the lowest possible resolution and frame rate. */
API_AVAILABLE(ios(13.0))
@interface ARPositionalTrackingConfiguration : ARConfiguration
/** The types of planes to detect in the scene. @discussion If set, new planes will continue to be detected and updated over time. Detected planes are added to the session as ARPlaneAnchor objects. If two planes are merged, the newer plane is removed. Defaults to ARPlaneDetectionNone. */
@property (nonatomic, assign) ARPlaneDetection planeDetection;
/** The initial map of physical space that world tracking will localize to and track. @discussion If set, the session will attempt to localize to the provided map, reporting a limited tracking state until localization succeeds or the session is run again with a different (or no) initial map. Once localized, the map is extended and can be saved again using the `getCurrentWorldMap` method. */
@property (nonatomic, strong, nullable) ARWorldMap *initialWorldMap;
- (instancetype)init;
+ (instancetype)new NS_SWIFT_UNAVAILABLE("Use init() instead");
@end
3.3.8.2 ARPositionalTrackingConfiguration Attributes
- Create a configuration:
init()
: Creates a new positional tracking configuration.
var initialWorldMap: ARWorldMap?
: A world map from a previous AR session whose state this session attempts to restore.
- Detect real surfaces:
var planeDetection: ARWorldTrackingConfiguration.PlaneDetection
: a value that specifies whether and how the session automatically attempts to detect planes in the image captured by the camera.
3.3.9 ARKit Coordinate System
ARKit measures the real world using Cartesian coordinates. The device position when the ARSession starts is the origin of the coordinate axes. The worldAlignment attribute of the session configuration (ARConfiguration) determines the orientation of the three axes and has three enumerated values:
- ARWorldAlignmentCamera
- ARWorldAlignmentGravity
- ARWorldAlignmentGravityAndHeading
The corresponding coordinate axes of the three enumerated values are shown in the figure below:
For ARWorldAlignmentGravity, the Y direction is always parallel to the gravity direction, while the X and Z directions are still determined by the posture of the device. This is useful for calculating the coordinates of objects with gravity properties, such as placing a row of hydrogen balloons or performing an animation of a falling basketball.
For ARWorldAlignmentGravityAndHeading, the X, Y, and Z axes point east, up, and south respectively. In this mode, ARKit makes internal adjustments, based on the angle between the device’s yaw and true north (not magnetic north), to ensure that the -Z direction of the ARKit coordinate system matches the true north direction of the real world. With this prerequisite, real-world coordinates can be correctly mapped into the virtual world.
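For example, a world-tracking configuration could be aligned to gravity and heading as in the sketch below (session is assumed to exist):
import ARKit

func runGravityAndHeadingSession(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    // Y axis parallel to gravity; -Z points toward true north.
    configuration.worldAlignment = .gravityAndHeading
    session.run(configuration)
}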
3.4 ARCamera
- ARCamera: Information about the camera position and imaging characteristics for a given frame.
class ARCamera : NSObject
You can get camera information from the camera property of each ARFrame that ARKit delivers.
3.5 ARHitTestResult
- ARHitTestResult: Information about a real-world surface found by examining a point on the screen.
If you use SceneKit or SpriteKit as your renderer, you can search for real-world surfaces at a screen point using the following methods: ARSCNView hitTest(_:types:): Searches the captured camera image for a real object or AR anchor corresponding to a point in the SceneKit view.
ARSKView hitTest(_:types:): Searches the captured camera image for a real object or AR anchor that corresponds to a point in the SpriteKit view.
A hit test searches for real-world objects or surfaces detected through the AR session’s processing of the camera image. In the coordinate system of a view, a 2D point can refer to any point on a 3D line that starts at the device camera and extends along a direction determined by the device orientation and camera projection. This method searches along that line, returning all objects that intersect it, sorted by distance from the camera.
class ARHitTestResult : NSObject
3.6 ARLightEstimate
- ARLightEstimate: Estimated scene lighting information associated with a captured video frame in an AR session.
class ARLightEstimate : NSObject
3.7 ARPointCloud
- ARPointCloud: A collection of points in the world coordinate space of an AR session.
class ARPointCloud : NSObject
Use the ARFrame rawFeaturePoints property to get a point cloud representing the intermediate results of the scene analysis that ARKit uses to perform world tracking.
3.8 ARReferenceImage
- ARReferenceImage: A description of the image that you want ARKit to detect in the physical environment.
class ARReferenceImage : NSObject
In order to accurately detect the position and orientation of a two-dimensional image in the real world, ARKit requires pre-processed image data and knowledge of the real-world dimensions of the image. The ARReferenceImage class encapsulates this information. To enable image detection in an AR session, pass a set of reference images to the detectionImages property of the session configuration.
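A brief sketch of creating a reference image programmatically and enabling detection; the CGImage and the 0.42 m physical width are illustrative assumptions:
import ARKit
import CoreGraphics

func runImageDetection(on session: ARSession, posterCGImage: CGImage) {
    // physicalWidth is the image's real-world width, in meters.
    let referenceImage = ARReferenceImage(posterCGImage,
                                          orientation: .up,
                                          physicalWidth: 0.42)
    referenceImage.name = "moviePoster"

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = [referenceImage]
    session.run(configuration)
}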
3.9 ARReferenceObject
- ARReferenceObject: A description of a real-world object that you want ARKit to look for in the physical environment during an AR session.
class ARReferenceObject : NSObject
Object detection in ARKit allows you to trigger AR content when a session recognizes a known 3D object. For example, your application could detect sculptures in an art museum and provide a virtual curator, or detect tabletop game characters and create visual effects for the game.
To provide detection of a known 3D object, you first scan the real-world object using ARKit:
- Run an AR session with ARObjectScanningConfiguration to enable the collection of high-fidelity spatial mapping data.
- In the process, the device’s camera is pointed at real-world objects from different angles so ARKit can build up an internal map of the object and its surroundings. For an example of guiding user interactions to generate good scan data, see Scanning and Detecting 3D Objects.
3.10 ARVideoFormat
- ARVideoFormat: A video size and frame rate specification for AR sessions.
@interface ARVideoFormat : NSObject
This class is immutable. To set the frame rate and video resolution for an AR session, set its configuration’s videoFormat property to one of the formats in the supportedVideoFormats array.
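For example, a sketch that picks the supported format with the highest frame rate for a world-tracking session (session is assumed):
import ARKit

func runWithFastestVideoFormat(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    // Choose among the formats the current device supports.
    if let fastest = ARWorldTrackingConfiguration.supportedVideoFormats
        .max(by: { $0.framesPerSecond < $1.framesPerSecond }) {
        configuration.videoFormat = fastest
    }
    session.run(configuration)
}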
3.11 ARCollaborationData
- ARCollaborationData: An object that holds information collected by the user about the physical environment.
@interface ARCollaborationData : NSObject
To create a multi-user augmented reality experience, you can enable collaboration on a world-tracking session. ARKit periodically outputs ARCollaborationData to be shared between users, which allows everyone to view the same virtual content from their own perspective. For more information, see isCollaborationEnabled.
3.12 ARWorldMap
- ARWorldMap: Spatial mapping state and a set of anchors from the world tracking AR session.
class ARWorldMap : NSObject
A session’s world map includes a record of the physical space that ARKit is aware of from the user’s device (which ARKit uses to determine the device’s position and orientation), as well as any ARAnchor objects added to the session (which can represent detected real-world features or virtual content placed by your app).
3.12.2 ARWorldMap: Serializing and Deserializing a World Map
When your application exits, you can save the current world map (obtained using getCurrentWorldMap(completionHandler:)). Because ARWorldMap conforms to NSSecureCoding, you serialize it using NSKeyedArchiver.
func writeWorldMap(_ worldMap: ARWorldMap, to url: URL) throws {
let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap, requiringSecureCoding: true)
try data.write(to: url)
}
To restore the world map the next time the application starts, use NSKeyedUnarchiver.
func loadWorldMap(from url: URL) throws -> ARWorldMap {
let mapData = try Data(contentsOf: url)
guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: mapData)
else { throw ARError(.invalidWorldMap) }
return worldMap
}
If the application is launched in the same physical environment, you can use anchors in the recovered world map to place the same virtual content in the same place in the saved session. For more information, see Saving and Loading World Data.
3.12.3 ARWorldMap: Sharing a Saved World Map
With two devices tracking the same world map, you can build a networked experience in which both users can see and interact with the same virtual content. Send the ARWorldMap to another device to create a shared frame of reference for a multi-user AR experience:
The steps are as follows:
- On one device, use NSKeyedArchiver to convert the world map into a data object. (You don’t need to write the data to a file in order to send it over the network.)
- Send the resulting data to the other device using the networking technology of your choice. (For example, in a MultipeerConnectivity session, call send(_:toPeers:with:) to send the data, and implement the MCSessionDelegate methods on the other device to receive it.)
- On the receiving device, use NSKeyedUnarchiver to deserialize an ARWorldMap from the data.
For more information, see Creating a Multi-User Augmented Reality Experience.
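A condensed sketch of the sending side using MultipeerConnectivity (the MCSession and its connected peers are assumed to be set up elsewhere):
import ARKit
import MultipeerConnectivity

func shareWorldMap(from arSession: ARSession, over mcSession: MCSession) {
    arSession.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        // Send the serialized map to all connected peers.
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}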
3.12.4 ARWorldMap: Running a Deserialized World Map
To start a new session from an existing ARWorldMap, set the initialWorldMap property of a world-tracking configuration and use run(_:options:). This starts a new session with the same spatial awareness and anchors loaded from the saved world map.
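A minimal sketch, assuming worldMap was deserialized as shown earlier:
import ARKit

func restoreSession(_ session: ARSession, from worldMap: ARWorldMap) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap   // restore saved spatial awareness and anchors
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}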
3.12.4.1 ARWorldMap Related Functions
- run(_:options:): An instance method of ARSession that starts the session’s AR processing with the specified configuration and options. It is defined as follows:
func run(_ configuration: ARConfiguration,
options: ARSession.RunOptions = [])
Parameter Description:
configuration
: An object that defines motion and scene tracking behavior for the session.
options
: Options that affect how existing session state (if any) transitions to the new configuration. This parameter has no effect if the session is being run for the first time.
The session tracks device motion, captures and processes imagery from the device camera, and coordinates with your delegate object or with an ARSCNView or ARSKView only while it is running.
Calling this method on a session immediately begins the transition to the new session configuration. The options parameter determines how existing session state transitions to the new configuration. By default, the session resumes device position tracking from the last known state and keeps any anchors already contained in the session (both those you add manually with add(anchor:) and those added automatically by ARKit features such as plane detection or face tracking).
3.13 ARRaycastQuery
- ARRaycastQuery: A mathematical ray that you can use to find three-dimensional positions on real-world surfaces.
class ARRaycastQuery : NSObject
You can create a raycast query by providing a 3D vector and a starting position. To create a raycast query from a 2D screen position, with the default ray direction projected from that point into the scene, two convenience functions are available: makeRaycastQuery(from:allowing:alignment:) on ARView, or raycastQuery(from:allowing:alignment:) on ARSCNView. A ray cast can intersect planes (flat surfaces) or meshes (uneven surfaces). To intersect planes, see ARRaycastQuery.Target. To intersect meshes, see ARRaycastQuery.Target.estimatedPlane.
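As an illustrative sketch (assuming sceneView is an ARSCNView), a one-off ray cast from the center of the screen might look like this:
import ARKit
import UIKit

func raycastFromScreenCenter(in sceneView: ARSCNView) {
    let center = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    // Build a query that intersects estimated planes of any alignment.
    guard let query = sceneView.raycastQuery(from: center,
                                             allowing: .estimatedPlane,
                                             alignment: .any) else { return }
    // Execute the query once against the session.
    if let result = sceneView.session.raycast(query).first {
        print("Surface found at \(result.worldTransform.columns.3)")
    }
}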
3.14 ARTrackedRaycast
- ARTrackedRaycast: A raycast query that ARKit repeats continuously to provide more accurate results over time.
class ARTrackedRaycast : NSObject
A tracked raycast improves on hit-testing techniques by repeating the query for a 3D position continuously, letting ARKit update and refine the results as its understanding of the world improves over time. To start a tracked raycast, call trackedRaycast(_:updateHandler:) on your app’s current ARSession.
3.15 ARRaycastResult
- ARRaycastResult: Information about the surface of the real world found by examining a point on the screen.
class ARRaycastResult : NSObject
If you use ARView or ARSCNView as your renderer, you can use its raycast(from:allowing:alignment:) or raycastQuery(from:allowing:alignment:) function, respectively, to search for real-world surfaces at a screen point. If you use a custom renderer, you can find real-world positions from screen points with ARFrame’s raycastQuery(from:allowing:alignment:) function and ARSession’s raycast(_:) function. For tracked raycasts, call trackedRaycast(_:updateHandler:) on your app’s current ARSession.
3.16 ARAnchor
- ARAnchor: The location and direction of something in the physical environment.
class ARAnchor : NSObject
To track the static position and orientation of real or virtual objects relative to the camera, create anchor objects and add them to the AR session using the add(anchor:) method.
Adding an anchor to the session helps ARKit optimize world-tracking accuracy in the area around that anchor, so that virtual objects appear to stay in place relative to the real world. If a virtual object moves, remove the anchor at the old position and add one at the new position.
Some ARKit features automatically add special anchors to a session. If the corresponding features are enabled, a world-tracking session can add ARPlaneAnchor, ARObjectAnchor, and ARImageAnchor objects; a face-tracking session adds ARFaceAnchor objects.
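For instance, a sketch that places an anchor one meter in front of the camera (session is assumed):
import ARKit

func placeAnchorInFrontOfCamera(session: ARSession) {
    guard let frame = session.currentFrame else { return }
    // Translate 1 m along the camera's -Z axis (in front of the camera).
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -1.0
    let transform = simd_mul(frame.camera.transform, translation)
    let anchor = ARAnchor(name: "placedObject", transform: transform)
    session.add(anchor: anchor)
}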
3.16.1 Introduction to Subclasses
In addition to creating your own ARAnchor instance to track the actual location of your virtual content, you can subclass ARAnchor to associate custom data with the anchor you created. Make sure your anchor class behaves correctly when updating frames in ARKit or saving and loading anchors in ARWorldMap:
- Anchor subclasses must meet the requirements of the ARAnchorCopying protocol. ARKit calls init(anchor:) (on a background thread) to copy instances of your anchor class from one ARFrame to the next. Your implementation of this initializer should copy the values of any custom properties your subclass adds.
- Anchor subclasses must also adopt the NSSecureCoding protocol. Override encode(with:) and init(coder:) to save and restore your subclass’s custom properties when ARKit saves and loads them in a world map.
- Anchors are considered equal based on their identifier property.
- When you save a world map, only anchors that are not ARTrackable are included (see the sketch below).
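A rough sketch of such a subclass under the requirements above; the customLabel property is purely illustrative:
import ARKit

class LabeledAnchor: ARAnchor {
    let customLabel: String

    init(label: String, transform: simd_float4x4) {
        self.customLabel = label
        super.init(name: "labeledAnchor", transform: transform)
    }

    // ARAnchorCopying: copy custom properties when ARKit moves the anchor between frames.
    required init(anchor: ARAnchor) {
        self.customLabel = (anchor as? LabeledAnchor)?.customLabel ?? ""
        super.init(anchor: anchor)
    }

    // NSSecureCoding: persist custom properties when saved in an ARWorldMap.
    override class var supportsSecureCoding: Bool { true }

    required init?(coder: NSCoder) {
        self.customLabel = (coder.decodeObject(of: NSString.self, forKey: "customLabel") as String?) ?? ""
        super.init(coder: coder)
    }

    override func encode(with coder: NSCoder) {
        super.encode(with: coder)
        coder.encode(customLabel as NSString, forKey: "customLabel")
    }
}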
3.16.2 ARTrackable
- ARTrackable: A real-world object in a scene for which ARKit tracks changes to position and orientation.
protocol ARTrackable
This protocol is used by ARKit classes, such as the ARFaceAnchor class, which represents moving objects in the scene.
ARKit automatically manages the representation of these objects in an active AR session, ensuring that real-world changes in the object’s position and orientation (the anchor’s Transform property) are reflected in the corresponding ARKit object. The isTracked attribute indicates whether the current transformation is valid for the actual object movement.
Trackable anchor classes affect other ARKit behaviors:
- The getCurrentWorldMap(completionHandler:) method automatically includes only non-trackable anchors in the ARWorldMap it creates. (After creating the world map, you can choose to add additional anchors.)
- ARSCNView and ARSKView automatically hide the nodes of anchors whose isTracked property is false.
- World-tracking sessions use non-trackable anchors to optimize tracking quality in the area around each anchor. Trackable anchors do not affect world tracking.
3.17 AREnvironmentProbeAnchor
- AREnvironmentProbeAnchor: An object that provides environmental lighting information for a specific area of space in a world-tracking AR session.
class AREnvironmentProbeAnchor : ARAnchor
An ambient texture describes a view of a particular point in the scene in all directions. In 3d asset rendering, ambient textures are the basis of image-based lighting algorithms, where surfaces can realistically reflect light from their surroundings. ARKit can use camera images to generate ambient textures during an AR session, allowing SceneKit or a custom rendering engine to provide real image-based lighting for virtual objects in an AR experience. To enable texture mapping generation for an AR session, set the environmentTexturing property:
- ARWorldTrackingConfiguration.EnvironmentTexturing.manual: With manual environment texturing, you identify the points in the scene for which you want light-probe texture maps by creating AREnvironmentProbeAnchor objects and adding them to the session.
- ARWorldTrackingConfiguration.EnvironmentTexturing.automatic: With automatic environment texturing, ARKit automatically creates, positions, and adds AREnvironmentProbeAnchor objects to the session.
In both cases, ARKit automatically generates environment textures as the session collects camera imagery. Use a delegate method such as session(_:didUpdate:) to find out when a texture is available, and access it from the anchor’s environmentTexture property. If you display AR content using ARSCNView with the automaticallyUpdatesLighting option, SceneKit automatically retrieves AREnvironmentProbeAnchor texture maps and uses them to light the scene.
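A minimal sketch of enabling automatic environment texturing (session assumed):
import ARKit

func runWithEnvironmentTexturing(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    // ARKit creates and places AREnvironmentProbeAnchor objects for you.
    configuration.environmentTexturing = .automatic
    session.run(configuration)
}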
3.18 ARFaceAnchor
- ARFaceAnchor: Information about posture, topology and facial expression detected by ARKit in the front-facing camera.
class ARFaceAnchor : ARAnchor
When the session detects a unique face in the front-facing camera feed, it automatically adds an ARFaceAnchor object to its list of anchors. When you track faces using ARFaceTrackingConfiguration, ARKit can track multiple faces at the same time.
- Track face position and orientation
The inherited transform property describes the face’s current position and orientation in world coordinates; that is, in the coordinate space specified by the worldAlignment property of the session configuration. Use this transform matrix to position virtual content you want to “attach” to the face in your AR scene. The transform also establishes a face coordinate system for positioning other elements relative to the face. Face coordinate space is measured in meters, with the origin centered behind the face, as shown in the figure below.
- Using face topology: The geometry property provides an ARFaceGeometry object that represents a detailed topology of the face, which conforms to a general face model to match the size, shape, and current expression of the detected face. You can use this model as a base to overlay content based on the shape of the user’s face — for example, applying virtual makeup or tattoos. You can also use this model to create occlusion geometry — a 3D model that doesn’t render any visible content (allowing camera images to pass through), but it blocks the camera from seeing other virtual content in the scene.
- Tracking facial expressions: The blendShapes property provides a high-level model of the current facial expression, described by a set of named coefficients that represent the movement of particular facial features relative to their neutral configuration. You can use blend shape coefficients to animate 2D or 3D content (such as a character or avatar) so that it follows the user’s facial expressions (see the sketch below).
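For example, a sketch that reads a single blend-shape coefficient from a face anchor:
import ARKit

func jawOpenAmount(for faceAnchor: ARFaceAnchor) -> Float {
    // Coefficients range from 0.0 (neutral) to 1.0 (maximum movement).
    return faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
}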
3.19 ARFaceGeometry
- ARFaceGeometry: A 3D mesh describing the topology of the face, used in face-tracking AR sessions.
class ARFaceGeometry : NSObject
This class provides a generic model for a detailed topology of faces in the form of a 3D grid, suitable for use with a variety of rendering technologies or for exporting 3D assets. (For a quick way to see face geometry using SceneKit, see the ARSCNFaceGeometry class.)
In a face-tracking AR session, when you get face geometry from an ARFaceAnchor object, the model matches the size, shape, and current expression of the detected face. You can also create a face mesh from a dictionary of named blend shape coefficients, which provides a detailed but more efficient description of the face’s current expression.
In an AR session, you can use this model as a basis for overlaying content based on the shape of the user’s face — for example, applying virtual makeup or tattoos. You can also use this model to create occlusion geometries that hide other virtual content behind the 3D shape of a face detected in the camera image.
- The face mesh topology is constant across ARFaceGeometry instances. That is, the values of the vertexCount, textureCoordinateCount, and triangleCount properties never change, the triangleIndices buffer always describes the same arrangement of vertices, and the textureCoordinates buffer always maps the same vertex indices to the same texture coordinates.
- Only the vertices buffer changes between the face meshes provided by an AR session, indicating the change in vertex positions as ARKit adapts the mesh to the shape and expression of the user’s face.
3.20 ARImageAnchor
- ARImageAnchor: Information about the location and direction of the image detected in the world tracking AR session.
class ARImageAnchor : ARAnchor
When you run a World-Tracking AR session and specify the ARReferenceImage object for the detectionImages property configured for the session, ARKit searches for these images in the real environment. When the session recognizes an image, it automatically adds the ARImageAnchor for each detected image to its list of anchor points. To find the range of recognized images in a scene, you can use the inherited Transform property and the physical size of the anchor’s referenceImage.
3.21 ARObjectAnchor
- ARObjectAnchor: The position and direction information of the real 3D object detected in the world tracking AR session.
class ARObjectAnchor : ARAnchor
When you run a world-tracking AR session and specify ARReferenceObject objects for the session configuration’s detectionObjects property, ARKit searches for those objects in the real environment. When the session recognizes an object, it automatically adds an ARObjectAnchor for each detected object to its list of anchors. To place virtual 3D content that matches the position or size of the detected object, use the anchor’s inherited transform property together with the center and extent of the anchor’s referenceObject.
var detectionObjects: Set<ARReferenceObject> { get set }
detectionObjects
- DetectionObjects is a set of 3D objects that ARKit is trying to detect in the user’s environment.
- Use this property to choose known 3D objects for ARKit to find in the user’s environment and surface as ARObjectAnchor objects in an augmented reality experience. To create reference objects for detection, scan them in a world-tracking (object-scanning) session and extract ARReferenceObject instances. You can then save the reference objects to files and package them into your ARKit app using an Xcode asset catalog.
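A short sketch of enabling object detection from a bundled resource group (the group name "AR Objects" is an example):
import ARKit

func runObjectDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if let referenceObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "AR Objects", bundle: nil) {
        configuration.detectionObjects = referenceObjects
    }
    session.run(configuration)
}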
3.22 ARParticipantAnchor
- ARParticipantAnchor: Represents the anchor of another user in a multi-user augmented reality experience.
class ARParticipantAnchor : ARAnchor
When you set isCollaborationEnabled to true, ARKit calls session(_:didAdd:) and provides an ARParticipantAnchor for each user it detects in the physical environment, giving you their world location.
3.23 ARPlaneAnchor
- ARPlaneAnchor: 2D surface detected by ARKit in a physical environment.
class ARPlaneAnchor : ARAnchor
When you enable planeDetection in a world-tracking session, ARKit notifies your app of all the surfaces it observes through the device’s rear camera, calling your delegate’s session(_:didAdd:) with an ARPlaneAnchor for each unique surface. Each plane anchor provides details about the surface, such as its real-world position and shape.
Notes on session(_:didAdd:):
- Depending on the session configuration, ARKit may automatically add anchors to the session.
- If you use SceneKit or SpriteKit to display your AR experience, you can instead implement one of the following methods to track not only when anchors are added to the session but also how SceneKit or SpriteKit content is added to the scene (see the sketch below):
ARSCNView: renderer(_:nodeFor:) or renderer(_:didAdd:for:); ARSKView: view(_:nodeFor:) or view(_:didAdd:for:)
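As an illustration of the SceneKit path, a sketch of an ARSCNViewDelegate that visualizes each detected plane with a translucent node:
import ARKit
import SceneKit
import UIKit

class PlaneVisualizer: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // Build a translucent plane sized to the anchor's current extent.
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.3)
        let planeNode = SCNNode(geometry: plane)
        planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
        planeNode.eulerAngles.x = -.pi / 2   // SCNPlane is vertical by default; lay it flat
        node.addChildNode(planeNode)
    }
}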
3.24 ARPlaneGeometry
- ARPlaneGeometry: A 3D mesh describing the shape of a detected plane in a world-tracking AR session.
class ARPlaneGeometry : NSObject
This class provides an estimate of the rough shape of a detected plane, in the form of a detailed 3D mesh suitable for use with various rendering techniques or for exporting 3D assets. (For a quick way to visualize plane geometry using SceneKit, see the ARSCNPlaneGeometry class.)
Unlike ARPlaneAnchor’s center and extent properties, which estimate only a rectangular region for the detected plane, a plane anchor’s geometry property provides a more detailed estimate of the 2D region the plane covers. For example, if ARKit detects a circular tabletop, the resulting ARPlaneGeometry object roughly matches the general shape of the table. As the session continues, ARKit provides updated plane anchors whose associated geometry refines the estimated shape of the plane.
You can use this model to more precisely place 3D content that should appear only on the detected plane, for example to ensure that virtual objects do not fall off the edge of a table. You can also use it to create occlusion geometry that hides other virtual content behind surfaces detected in the camera image.
Plane geometry is always convex. That is, the boundary polygon of plane geometry is a minimal convex hull that surrounds all points identified or estimated by ARKit that are part of the plane.
3.25 ARSkeleton
- ARSkeleton: Interface for tracking the skeleton of the body.
class ARSkeleton : NSObject
As a collection of joints, this class describes the state of a body that ARKit can track. The ARSkeleton3D subclass provides the positions of a tracked body’s joints in 3D space through its jointLocalTransforms and jointModelTransforms properties. The ARSkeleton2D subclass provides the positions of a tracked body’s joints in 2D through its jointLandmarks property.
3.26 ARSkeletonDefinition
- ARSkeletonDefinition: The hierarchy of joints and their names.
class ARSkeletonDefinition : NSObject
A skeleton definition establishes the relationships between the joints that make up a 3D or 2D body skeleton, where joints connect to other joints in a parent-child hierarchy to form a single skeleton. Use parentIndices to identify the hierarchy that a given skeleton definition describes. ARKit designates specific joints that are critical for body tracking; you can look up a designated joint’s index by calling index(forJointName:) with one of the available joint identifiers.
3.27 ARBody2D
- ARBody2D: The screen-space representation of a person that ARKit identifies in the camera feed.
class ARBody2D : NSObject
When ARKit recognizes a person in the camera feed, it estimates the screen-space positions of the body’s joints and provides them to you through the current frame’s detectedBody property.
3.28 ARBodyAnchor
- ARBodyAnchor: An object that tracks, in 3D space, the movement of a body that ARKit recognizes in the camera feed.
class ARBodyAnchor : ARAnchor
This ARAnchor subclass tracks the movement of a single person. You enable body tracking by running a session with ARBodyTrackingConfiguration. When ARKit recognizes a person in the rear camera feed, it calls your delegate’s session(_:didAdd:) function with an ARBodyAnchor. The translation of the body anchor’s transform defines the world position of the body’s hip joint. You can also check which body ARKit is tracking through the frame’s anchors list.
- Placing the skeleton on a surface
Because the body anchor’s origin maps to the hip joint, you can place the body’s skeleton on a surface by calculating the current offset of the feet from the hip. You can get a foot’s offset from the skeleton’s origin by indexing into jointModelTransforms with the foot joint’s index.
static var hipToFootOffset: Float {
    // Get an index for a foot.
    let footIndex = ARSkeletonDefinition.defaultBody3D.index(forJointName: .leftFoot)
    // Get the foot's offset from the hip in the neutral body pose.
    let footTransform = ARSkeletonDefinition.defaultBody3D.neutralBodySkeleton3D!.jointModelTransforms[footIndex]
    // Return the height by getting just the y-value.
    let distanceFromHipOnY = abs(footTransform.columns.3.y)
    return distanceFromHipOnY
}
3.29 ARCoachingOverlayView
- ARCoachingOverlayView: A view that presents visual guidance to the user during session initialization and recovery.
class ARCoachingOverlayView : UIView
This view provides a standardized onboarding routine for your users. You can configure it to appear automatically during session initialization and in limited tracking situations, while giving the user specific instructions that best help ARKit establish world tracking. These illustrations show the overlay view with horizontal and vertical plane goals, indicating that the user should start moving the device:
These illustrations show an overlay view indicating that the user should continue to move the phone or change the speed of movement:
When you start your app, the coaching overlay asks the user to move the device in ways that help ARKit establish tracking. When you choose a specific goal, such as finding a plane, the view tailors its instructions accordingly. When the coaching overlay determines that the goal has been achieved and coaching is no longer needed, it hides itself from the user’s view. For an example app that uses the coaching overlay, see Placing Objects and Handling 3D Interactions.
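A typical setup sketch, assuming sceneView is an ARSCNView already in your view hierarchy:
import ARKit
import UIKit

func addCoachingOverlay(to sceneView: ARSCNView) {
    let coachingOverlay = ARCoachingOverlayView()
    coachingOverlay.session = sceneView.session          // observe this AR session
    coachingOverlay.goal = .horizontalPlane              // tailor instructions to a goal
    coachingOverlay.activatesAutomatically = true        // default behavior
    coachingOverlay.frame = sceneView.bounds
    coachingOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    sceneView.addSubview(coachingOverlay)
}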
- Supporting automatic coaching:
By default, activatesAutomatically is enabled, so you should implement coachingOverlayViewWillActivate(_:) to determine when coaching is in progress. Coordinate your actions to help the user focus on the coaching instructions, for example by hiding any UI that is not needed while the session reinitializes.
- Relocalizing after an interruption: If you enable relocalization (see sessionShouldAttemptRelocalization(_:)), ARKit attempts to restore your session when an interruption degrades your app’s tracking state. In that case, the coaching overlay appears automatically and gives the user instructions to help ARKit relocalize.
During this time, the coaching Overlay includes a button that lets the user indicate that they want to restart rather than resume the session.
ARKit notifies you when the user presses Start Over by calling your delegate’s coachingOverlayViewDidRequestSessionReset(_:) function. Implement this callback if your app needs to take any custom actions to restart the AR experience.
func coachingOverlayViewDidRequestSessionReset(_ coachingOverlayView: ARCoachingOverlayView) {
// Reset the session.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
session.run(configuration, options: [.resetTracking])
// Custom actions to restart the AR experience.
// ...
}
If you don’t implement coachingOverlayViewDidRequestSessionReset(_:), the coaching overlay responds to the Start Over button by resetting tracking, which also removes any existing anchors. For more information on relocalization, see Managing Session Life Cycle and Tracking Quality.
3.30 ARSCNView
- ARSCNView: A view that enables you to display augmented reality experiences with SceneKit.
class ARSCNView : SCNView
The ARSCNView class provides the easiest way to create an augmented reality experience that blends virtual 3D content with a real-world view of the device’s camera. When you run the ARSession object provided by the view:
- This view automatically renders a live video feed from the device’s camera as the scene background.
- The view’s SceneKit scene uses a world coordinate system that corresponds directly to the AR world coordinate system established by the session configuration.
- The view automatically moves its SceneKit camera to match the real-world movement of the device.
Because ARKit automatically matches SceneKit space to the real world, placing a virtual object so that it appears to hold a real-world position only requires setting the object’s SceneKit position appropriately. (See Providing 3D Virtual Content with SceneKit.)
You don’t necessarily need to use the ARAnchor class to track the location of objects added to the scene, but by implementing the ARSCNViewDelegate method, you can add SceneKit content to any anchor that ARKit automatically detects.
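A minimal sketch of the typical view controller setup described above:
import ARKit
import SceneKit
import UIKit

class ARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.scene = SCNScene()                  // your SceneKit content
        sceneView.automaticallyUpdatesLighting = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}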
3.30.1 ARSCNView Class Definition
3.30.2 ARSCNViewDelegate
- You can coordinate the automatic synchronization of SceneKit content with an AR session. Implement this protocol to provide SceneKit content corresponding to the ARAnchor objects tracked by the view’s AR session, or to manage the view’s automatic updates of that content. This protocol extends the ARSessionObserver protocol, so your session delegate can also implement those methods to respond to changes in session state.
- Handling content updates:
// Asks the delegate to provide a SceneKit node corresponding to a newly added anchor.
func renderer(SCNSceneRenderer, nodeFor: ARAnchor) -> SCNNode?
// Tells the delegate that a SceneKit node corresponding to a new AR anchor has been added to the scene.
func renderer(SCNSceneRenderer, didAdd: SCNNode, for: ARAnchor)
// Tells the delegate that the properties of a SceneKit node will be updated to match the current state of its corresponding anchor.
func renderer(SCNSceneRenderer, willUpdate: SCNNode, for: ARAnchor)
// Tells the delegate that the properties of a SceneKit node have been updated to match the current state of its corresponding anchor.
func renderer(SCNSceneRenderer, didUpdate: SCNNode, for: ARAnchor)
// Tells the delegate that the SceneKit node corresponding to a removed AR anchor has been removed from the scene.
func renderer(SCNSceneRenderer, didRemove: SCNNode, for: ARAnchor)
3.30.3 ARSCNView Properties and Methods
- Providing 3D virtual content with SceneKit: Use SceneKit to add realistic 3D objects to the AR experience.
// An AR session for managing motion tracking of view content and camera image processing.
var session: ARSession
// The SceneKit scene to be displayed in the view.
var scene: SCNScene
- Responding to AR Updates
// A delegate object you provide to mediate synchronization of the view's AR scene information with SceneKit content.
var delegate: ARSCNViewDelegate?
// Methods you can implement to coordinate the automatic synchronization of SceneKit content with an AR session.
protocol ARSCNViewDelegate
- Discovering real-world surfaces
// Searches the captured camera image for real-world objects or AR anchors corresponding to a point in the SceneKit view.
func hitTest(CGPoint, types: ARHitTestResult.ResultType) -> [ARHitTestResult]
// Create a raycast query from a point on the view aligned with the center of the camera's field of view.
func raycastQuery(from: CGPoint, allowing: ARRaycastQuery.Target, alignment: ARRaycastQuery.TargetAlignment) -> ARRaycastQuery?
- Map content to the actual location
// Returns the AR anchor associated with the specified SceneKit node (if any).
func anchor(for: SCNNode) -> ARAnchor?
// Return the SceneKit node associated with the specified AR anchor (if any).
func node(for: ARAnchor) -> SCNNode?
// Returns the projection of a point from the 2D view onto a plane in the 3D world space detected by ARKit.
func unprojectPoint(CGPoint, ontoPlane: simd_float4x4) -> simd_float3?
- Managing lighting
// A Boolean that specifies whether ARKit creates and updates SceneKit lights in the view's scene.
var automaticallyUpdatesLighting: Bool
- Debugging AR displays
// Options for drawing overlay content in a SceneKit view to aid debugging of AR tracking.
typealias ARSCNDebugOptions
- Managing render Effects
// A flag that determines whether the view renders motion blur.
var rendersMotionBlur: Bool
// A flag that determines whether SceneKit applies the image noise characteristics to the virtual content of the application.
var rendersCameraGrain: Bool
3.31 ARSKView
- ARSKView: A view that enables you to display AR experiences with SpriteKit.
class ARSKView : SKView
Use the ARSKView class to create augmented reality experiences that position 2D elements in 3D space within the device’s camera view. When you run the ARSession object provided by the view:
- This view automatically renders a live video feed from the device’s camera as the scene background.
- When you implement the ARSKViewDelegate method to associate SpriteKit content with real-world locations, the view automatically scales and rotates those SpriteKit nodes so that they appear to track the real world as seen by the camera.
3.31.1 ARSKViewDelegate
- ARSKViewDelegate: Provides methods that you can implement to coordinate automatic synchronization of SpriteKit content with AR sessions.
protocol ARSKViewDelegate
- Provides methods you can implement to coordinate the automatic synchronization of SpriteKit content with an AR session.
- Implement this protocol to provide SpriteKit content corresponding to the ARAnchor objects tracked by the view’s AR session, or to manage the view’s automatic updates of that content.
- This protocol extends the ARSessionObserver protocol, so your session delegate can also implement those methods to respond to changes in session state.
- Delegate callbacks that handle content updates:
// Require the delegate to provide a SpriteKit node corresponding to the newly added anchor point.
func view(ARSKView, nodeFor: ARAnchor) -> SKNode?
// Tell the delegate that a SpriteKit node corresponding to a new AR anchor has been added to the scene.
func view(ARSKView, didAdd: SKNode, for: ARAnchor)
// Tells the delegate that the properties of the SpriteKit node will be updated to match the current state of their corresponding anchor points.
func view(ARSKView, willUpdate: SKNode, for: ARAnchor)
// Tell the delegate that the properties of the SpriteKit node have been updated to match the current state of their corresponding anchors.
func view(ARSKView, didUpdate: SKNode, for: ARAnchor)
// Tell the delegate that the SpriteKit node corresponding to the AR anchor has been removed from the scene.
func view(ARSKView, didRemove: SKNode, for: ARAnchor)
3.32 ARMatteGenerator
- ARMatteGenerator: An object that creates a matte texture that you can use to mask the virtual content of your application
class ARMatteGenerator : NSObject
Use this class when you want full control over your app’s virtual content, based on the people ARKit identifies in the camera feed.
Apps that use a standard renderer (ARView or ARSCNView) do not need this class to implement people occlusion. See frameSemantics for more information.
To support people occlusion in a custom renderer, the matte generator processes the alpha and depth information in the frame’s segmentationBuffer and estimatedDepthData, providing you with matte and depth textures. You use these textures to composite your app’s virtual content with the camera image.
3.33 ARQuickLookPreviewItem
- ARQuickLookPreviewItem: An object you use to customize the AR Quick Look experience.
Use this class when you want to control the background, specify what the share sheet shares, or disable scaling in cases where it isn’t appropriate to allow the user to scale a particular model.
3.34 ARFrame
- ARFrame: A video image captured as part of an AR session, together with position-tracking information.
A running session continuously captures video frames from the device camera while ARKit analyzes them to estimate the user’s position in the world. ARKit delivers this information to you in the form of ARFrame objects, at your app’s frame rate.
Your app can receive ARFrames in two ways:
- If your app maintains its own render loop, or if you need frame information inside an ARSCNViewDelegate or ARSKViewDelegate callback, request the currentFrame from the ARSession.
- Make one of your objects the ARSession’s delegate so that ARKit automatically delivers new frames to it as they are captured (see the sketch below).
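A sketch of the second (delegate) approach; the FrameReceiver class name is illustrative:
import ARKit

class FrameReceiver: NSObject, ARSessionDelegate {
    // ARKit calls this once per captured frame while the session runs.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let camera = frame.camera
        print("Tracking state: \(camera.trackingState), anchors: \(frame.anchors.count)")
    }
}
// Usage: assign an instance (retained somewhere) to session.delegate.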
3.34.1 ARFrame Class Definition
3.34.2 ARFrame Properties and Methods
- Access the captured video frame
// The pixel buffer containing the image captured by the camera.
var capturedImage: CVPixelBuffer
// The time when the frame was captured.
var timestamp: TimeInterval
// Depth maps (if any) are captured with video frames.
var capturedDepthData: AVDepthData?
// The time when the depth data of the frame (if any) was captured.
var capturedDepthDataTimestamp: TimeInterval
- Check world map status
// The feasibility of generating or relocalizing to a world map for this frame.
var worldMappingStatus: ARFrame.WorldMappingStatus
// Possible values that describe how thoroughly ARKit maps the visible region in a given frame.
enum ARFrame.WorldMappingStatus
- Checking scene parameters
// Information about the position, orientation, and imaging parameters of the camera used to capture the frame.
var camera: ARCamera
// Estimation of lighting conditions based on camera images.
var lightEstimate: ARLightEstimate?
// Returns an affine transform used to convert between the normalized image coordinates and the coordinate space used to render the camera image on screen.
func displayTransform(for: UIInterfaceOrientation, viewportSize: CGSize) -> CGAffineTransform
- Trace and find objects
// A list of anchors that represent positions tracked or objects detected in the scene.
var anchors: [ARAnchor]
// Search the captured camera image for real-world objects or AR anchors.
func hitTest(CGPoint, types: ARHitTestResult.ResultType) -> [ARHitTestResult]
- Debugging scene detection
// The current intermediate results of the scene analysis that ARKit uses to perform world tracking.
var rawFeaturePoints: ARPointCloud?
//A collection of points in the world coordinate space of the AR session.
class ARPointCloud
- Discover the real surface
// Get a raycast query for a screen point.
func raycastQuery(from: CGPoint, allowing: ARRaycastQuery.Target, alignment: ARRaycastQuery.TargetAlignment) -> ARRaycastQuery
- 2D body tracking
//ARKit identifies the screen position of the body in the camera image.
var detectedBody: ARBody2D?
// The screen-space representation of a person ARKit identifies in the camera feed.
class ARBody2D
- detectedBody: Of type ARBody2D?; the screen-space position information of a body that ARKit detects in the camera image.
var detectedBody: ARBody2D? { get }
To enable 2D body detection, add the bodyDetection frame semantic to your configuration’s frameSemantics property, or run your session with ARBodyTrackingConfiguration, which enables body detection by default.
- Class ARBody2D: The screen-space representation of a person ARKit identifies in the camera feed.
When ARKit recognizes a person in the camera feed, it estimates the screen-space positions of the body’s joints and provides them to you through the current frame’s detectedBody property.
- Isolate virtual content from people
// A buffer containing pixel information that identifies the shape of the object from the camera feed used to mask virtual content.
var segmentationBuffer: CVPixelBuffer?
// A buffer representing the estimated depth value of the camera feed used to mask virtual content.
var estimatedDepthData: CVPixelBuffer?
// A category of pixels that defines the type of content you use to mask your application's virtual content.
enum ARFrame.SegmentationClass
- segmentationBuffer: Of type CVPixelBuffer?; a buffer containing pixel information that identifies the shape of people in the camera feed, used to occlude virtual content.
- estimatedDepthData: Of type CVPixelBuffer?; a depth-data buffer representing the estimated depth values of the camera feed, used to occlude virtual content.
- enum ARFrame.SegmentationClass: An enumeration that defines the categories of content used to occlude your app’s virtual content.
enum SegmentationClass : UInt8
ARKit applies the categories defined in this enumeration based on its interpretation of the camera-feed pixel data. Because only people are identified in the camera feed, the available pixel classifications are ARFrame.SegmentationClass.person and ARFrame.SegmentationClass.none.
- ARFrame.SegmentationClass.person: The pixel in the segmentation buffer is classified as part of a person.
- ARFrame.SegmentationClass.none: The pixel in the segmentation buffer has no recognized classification.
- Applying camera grain
// A value that specifies the amount of grain present in the camera-grain texture.
var cameraGrainIntensity: Float
// A tiled Metal texture created by ARKit to match the visual grain characteristics of the current video stream.
var cameraGrainTexture: MTLTexture?
- cameraGrainIntensity: Specifies the amount of grain present in the camera-grain texture.
var cameraGrainIntensity: Float { get }
This property is in the range [0..1], where 0 indicates no grain and 1 indicates the maximum amount of grain. When you apply this value to the depth component of cameraGrainTexture, you select, from the variations of visual image-noise data stored in the Metal texture, the data that conceptually matches this intensity level.
- cameraGrainTexture: A tiled Metal texture created by ARKit to match the visual grain characteristics of the current video stream.
var cameraGrainTexture: MTLTexture? { get }
Camera grain enhances the visual cohesion between the real and augmented parts of the user experience by giving your app’s virtual content image-noise characteristics similar to those that naturally occur in the camera feed.
The diagram below:
rendersCameraGrain
- rendersCameraGrain: A flag that determines whether SceneKit applies image-noise (camera grain) characteristics to your app’s virtual content. rendersCameraGrain is a property of ARSCNView.
var rendersCameraGrain: Bool { get set }
This flag is enabled by default. When it is set, SceneKit adds a camera-grain effect to your app’s virtual content that matches the image-noise characteristics ARKit observes in the camera feed.