## The Goal

In this tutorial, we will build an ARKit demo app using SceneKit to help you get familiar with the basics of ARKit.

It’s time to immerse yourself in this tutorial, learn how to build an ARKit app step by step, and interact with the AR world through the device in your hand.

As you work through the steps, you will see how ARKit places magical 3D objects in the physical world around you.

Before we begin, please understand that this tutorial covers only the basics.

## What you need to prepare

Before starting this tutorial, it is recommended that you have basic iOS development skills. This is an intermediate-level tutorial, and you will need Xcode 9 or later.

In order to test your ARKit app, you will need an ARKit-compatible Apple device, which means one with an A9 processor or later.
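If you’re not sure whether a device qualifies, ARKit also exposes a runtime check. Here is a minimal sketch you could drop into the app; it is optional and not part of the tutorial’s steps:

```swift
import ARKit

// World tracking requires an A9 processor or later,
// so this check lets the app fail gracefully on older devices.
if ARWorldTrackingConfiguration.isSupported {
    print("ARKit world tracking is supported on this device.")
} else {
    print("ARKit world tracking is not supported on this device.")
}
```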

Now make sure you meet the requirements above and are ready to start. Here is what I will take you through:

  • Create a new ARKit app project
  • Set up the ARKit SceneKit View
  • Connect the ARSCNView to the view controller
  • Connect the IBOutlet
  • Set up the ARSCNView session
  • Allow camera permission
  • Add 3D objects to ARSCNView
  • Add gesture recognition to ARSCNView
  • Remove objects from ARSCNView
  • Add multiple objects to ARSCNView

## Create a new ARKit app project

Open Xcode and, from the menu, choose File > New > Project…, then select the Single View App template and press Next. Xcode also has a built-in ARKit app template, but you can just as well use the Single View App template to develop an AR app.

You can name your project whatever you want; in this case I named it ARKitDemo. Press Next to finish creating the project.

## Set up the ARKit SceneKit View

Now open the storyboard, find the ARKit SceneKit View in the Object Library in the bottom right corner, and drag it onto your view controller.

Then stretch your ARKit SceneKit View so it covers the whole view controller. It should look like this:

Great! This ARKit SceneKit View is where we will display the SceneKit content of our augmented reality.

## Connect the IBOutlet

Still in Main.storyboard, go to the toolbar in the upper right corner and open the Assistant Editor. Now import ARKit at the top of ViewController.swift:

```swift
import ARKit
```

Then hold down control and drag from the ARKit SceneKit View to ViewController.swift. When the connection dialog appears, choose IBOutlet and name it sceneView. Feel free to delete the didReceiveMemoryWarning() method; we won’t use it in this tutorial.
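After connecting the outlet, your ViewController.swift should look roughly like this:

```swift
import UIKit
import ARKit

class ViewController: UIViewController {

    // Connected to the ARKit SceneKit View in Main.storyboard
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
```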

## Set up the ARSCNView session

We want our app to look out into the real world through the camera as soon as it launches, and to start detecting the surroundings. This is actually quite amazing technology! Apple has built a whole set of augmented reality features for developers, so we don’t need to design everything from scratch. Thank you, Apple! Let’s embrace ARKit.

OK! Now it’s time to set up the ARKit SceneKit View. Insert the following code into the ViewController class:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    let configuration = ARWorldTrackingConfiguration()
    sceneView.session.run(configuration)
}
```

In the viewWillAppear(_:) method, we initialize an AR configuration called ARWorldTrackingConfiguration. As the name suggests, this configuration performs world tracking. What is world tracking, you ask? Take a look at Apple’s official documentation:

“World tracking provides 6 degrees of freedom tracking of the device. By finding feature points in the scene, world tracking enables performing hit-tests against the frame. Tracking can no longer be resumed once the session is paused.”

- Apple’s official documentation

World tracking keeps track of the device’s orientation and position. Using the device’s camera, it can also detect real-world surfaces.

The final line of code runs the session. The AR session is what manages motion tracking and camera image processing, and we need to run it with this configuration.
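As an aside, if you’d like to watch the tracking quality change while the app runs, the session reports tracking-state changes through the view’s delegate. Here is a minimal sketch, assuming you also set sceneView.delegate = self somewhere such as viewDidLoad(); it is optional and not needed for the rest of the tutorial:

```swift
extension ViewController: ARSCNViewDelegate {
    // Called whenever the camera's tracking quality changes:
    // .notAvailable, .limited(reason), or .normal.
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        print("Tracking state changed: \(camera.trackingState)")
    }
}
```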

Next, let’s add another method to the ViewController:

```swift
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    sceneView.session.pause()
}
```

In the viewWillDisappear(_:) method, we pause the AR session when the view is dismissed, so it stops tracking motion and processing the camera image.

## Allow camera permission

Before we can run our app, we need to inform the user that we want to use the camera for augmented reality. This usage description has been required since iOS 10, so open Info.plist, right-click a blank area, select Add Row, choose Privacy - Camera Usage Description as the key, and enter For Augmented Reality as the value.
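If you prefer editing the raw XML (right-click Info.plist and choose Open As > Source Code), that row corresponds to the NSCameraUsageDescription key and should look like this:

```xml
<key>NSCameraUsageDescription</key>
<string>For Augmented Reality</string>
```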

Before going on, make sure you have done everything covered so far.

Pick up your device, connect it to your Mac, and build and run the project from Xcode for the first time. The app will ask for permission to use the camera; please press OK. If you select Don’t Allow, the app won’t be able to use the camera.

You should now be able to see your camera feed.

We’ve now set up our sceneView and its session, and world tracking is running, so it’s time for the exciting part: augmented reality!

## Add 3D objects to ARSCNView

Without further ado, let’s add a box. Add the following code to your ViewController class:

```swift
func addBox() {
    let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    let boxNode = SCNNode()
    boxNode.geometry = box
    boxNode.position = SCNVector3(0, 0, -0.2)
    let scene = SCNScene()
    scene.rootNode.addChildNode(boxNode)
    sceneView.scene = scene
}
```

Now let’s explain what we did.

First, we create a box shape. 1 float unit equals 1 meter, so this box is 10 cm on each side.

Next, we create an SCNNode object called boxNode. A node represents the position and coordinates of an object in 3D space; it has no visible content of its own, so we need to give it some.

That is why we created the box shape: we assign it to the node’s geometry to give it visual content. Then we give the node a position relative to the camera: positive x is to the right and negative x is to the left; positive y is up and negative y is down; positive z is backward and negative z is forward. For example, SCNVector3(0, 0, -0.2) places the node 0.2 meters in front of the camera’s starting position.

Next, we create a SceneKit scene (SCNScene) to display in the view, and add our boxNode as a child of the scene’s root node. The root node of a scene is how SceneKit anchors its coordinate system to the real world.

Our scene should now contain a cube, centered in the camera frame and 0.2 meters away from the camera.

Finally, we tell our sceneView to display the scene we just created.

Now call the addBox() method in viewDidLoad():

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    addBox()
}
```

Build and run the app, and you should see a cube floating in the air.

You can also simplify the addBox() method:

```swift
func addBox() {
    let box = SCNBox(width: 0.05, height: 0.05, length: 0.05, chamferRadius: 0)
    let boxNode = SCNNode()
    boxNode.geometry = box
    boxNode.position = SCNVector3(0, 0, -0.2)
    sceneView.scene.rootNode.addChildNode(boxNode)
}
```

This makes it easier to see what you’re doing.

OK! It’s time to add gesture recognition!

## Add gesture recognition to ARSCNView

Under addBox(), add the following code:

```swift
func addTapGestureToSceneView() {
    let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(ViewController.didTap(withGestureRecognizer:)))
    sceneView.addGestureRecognizer(tapGestureRecognizer)
}
```

Here we initialize a tap gesture recognizer, setting the target to self (the view controller) and the action to didTap(withGestureRecognizer:). Then we add the gesture recognizer to the sceneView.

Now it’s time to implement the method that the gesture recognizer calls.

## Remove objects from ARSCNView

Add the following code to ViewController.swift:

```swift
@objc func didTap(withGestureRecognizer recognizer: UIGestureRecognizer) {
    let tapLocation = recognizer.location(in: sceneView)
    let hitTestResults = sceneView.hitTest(tapLocation)
    guard let node = hitTestResults.first?.node else { return }
    node.removeFromParentNode()
}
```

Here we implement the didTap(withGestureRecognizer:) method. We get the user’s tap location in the sceneView and perform a hit test to see which nodes we hit.

Then we safely unwrap the first node from hitTestResults. If the hit test returned no results, the guard statement simply returns; otherwise, we remove the tapped node from its parent node.

To test object removal, update the viewDidLoad() method to also call addTapGestureToSceneView():

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    addBox()
    addTapGestureToSceneView()
}
```

Now build and run your project. You should be able to tap the box node and remove it from the scene view.

But now it feels like we’re back where we started.

It doesn’t matter! So let’s add more stuff.

## Add multiple objects to ARSCNView

Our cube is feeling a little lonely, so let’s make some more cubes. We’ll add objects onto detected feature points.

So what are feature points?

According to Apple’s official explanation, the definition of feature points is as follows:

“A point automatically identified by ARKit as part of a continuous surface, but without a corresponding anchor.”

In other words, ARKit detects these feature points on surfaces in the real world. Now, getting back to adding cubes: before we start, create an extension at the bottom of the ViewController class:

```swift
extension float4x4 {
    var translation: float3 {
        let translation = self.columns.3
        return float3(translation.x, translation.y, translation.z)
    }
}
```

This extension extracts a float3 with x, y, and z values (the translation) from a 4x4 transform matrix.
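Incidentally, if you want to see the feature points ARKit is detecting, SceneKit can render them as a debug overlay. Here is a minimal sketch based on the viewWillAppear(_:) we wrote earlier; the overlay is optional and easy to remove later:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    let configuration = ARWorldTrackingConfiguration()
    // Optional overlays: yellow dots for detected feature points
    // and an axis indicator at the world origin.
    sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,
                              ARSCNDebugOptions.showWorldOrigin]
    sceneView.session.run(configuration)
}
```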

Our next step is to modify addBox():

```swift
func addBox(x: Float = 0, y: Float = 0, z: Float = -0.2) {
    let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    let boxNode = SCNNode()
    boxNode.geometry = box
    boxNode.position = SCNVector3(x, y, z)
    sceneView.scene.rootNode.addChildNode(boxNode)
}
```

Basically, we added parameters to the addBox() method and gave them default values, which means we don’t have to pass specific x, y, and z values when calling addBox() in viewDidLoad().
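For example, both of these calls are now valid:

```swift
addBox()                         // uses the defaults: (0, 0, -0.2)
addBox(x: 0.1, y: 0, z: -0.5)    // explicit coordinates
```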

Ok!

Now we need to modify the didTap(withGestureRecognizer:) method so that it adds an object when a point in the real world is detected.

Going back to our guard let statement: inside the else block, before the return, add the following code:

```swift
let hitTestResultsWithFeaturePoints = sceneView.hitTest(tapLocation, types: .featurePoint)

if let hitTestResultWithFeaturePoints = hitTestResultsWithFeaturePoints.first {
    let translation = hitTestResultWithFeaturePoints.worldTransform.translation
    addBox(x: translation.x, y: translation.y, z: translation.z)
}
```

Let me explain what this code does.

First, we perform a hit test much like our first one, except this time we explicitly pass .featurePoint as the types parameter. The types parameter asks the hit test to search for real-world objects or surfaces detected through the AR session’s processing of the camera image. There are several types to choose from, but this tutorial only uses feature points.

Then we safely unwrap the first result of the feature-point hit test. This matters because feature points are not always available; ARKit doesn’t detect objects and surfaces in the real world instantly.

If the first hit test result is successfully unwrapped, we convert its worldTransform, a matrix_float4x4, into a float3 using the extension we added earlier. This gives us the x, y, and z of the tapped point in real-world coordinates.

Then we pass the feature point’s x, y, and z into addBox() to add a cube at that location.

Your didTap(withGestureRecognizer:) method should look like this:

```swift
@objc func didTap(withGestureRecognizer recognizer: UIGestureRecognizer) {
    let tapLocation = recognizer.location(in: sceneView)
    let hitTestResults = sceneView.hitTest(tapLocation)

    guard let node = hitTestResults.first?.node else {
        let hitTestResultsWithFeaturePoints = sceneView.hitTest(tapLocation, types: .featurePoint)
        if let hitTestResultWithFeaturePoints = hitTestResultsWithFeaturePoints.first {
            let translation = hitTestResultWithFeaturePoints.worldTransform.translation
            addBox(x: translation.x, y: translation.y, z: translation.z)
        }
        return
    }
    node.removeFromParentNode()
}
```

## Test the final app

Now it’s time to test the final version of the app. Press the Run button to build and run the project!

## Conclusion

Congratulations on following along all the way to the end of this tutorial. We have only scratched the surface; ARKit has many more features waiting for us to explore.

I hope you enjoy this introduction to ARKit, and I look forward to seeing you build your own ARKit App.

For the full sample project, you can find it on GitHub.
