The following are the author's notes from reading the ARCore Sample, intended for personal study and reference only. If there is any mistake, please leave a comment and correct it.
tags: ARCore
Entry: HelloArActivity
HelloArActivity is the entry point of the sample application. It briefly demonstrates how to use ARCore and does four things:
- Configure the ARCore SDK
- Configure the drawing environment
- Draw information to the screen: the camera data, the point cloud, the diamond-patterned planes, and the little Android robot
- Handle tap interaction
As you can see, ARCore is relatively easy to use. The SDK encapsulates its APIs as simply as possible; we don't even need to deal with the camera API ourselves.
A minimal guide to using ARCore
Since this is the ARCore sample project, its core is, of course, the use of ARCore itself.
The main interface class exposed by the SDK is the Session class. ARCore's functionality is provided through this class, and it is the class developers interact with.
Using the Session class is simple (a condensed sketch follows the list):
- Construct a Session bound to the current Activity
- Configure the Session
- Forward the onPause and onResume lifecycle events to the Session
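Pieced together from the Sample, a minimal sketch of these three steps might look like this. It is based on the preview SDK this article covers; permission handling and most error checks are omitted:

```java
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;

import com.google.ar.core.Config;
import com.google.ar.core.Session;

public class HelloArActivity extends AppCompatActivity {
    private Session mSession;
    private Config mDefaultConfig;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // 1. Construct a Session bound to the current Activity.
        mSession = new Session(/*context=*/this);

        // 2. Configure the Session; the default config is enough here.
        mDefaultConfig = Config.createDefaultConfig();
        if (!mSession.isSupported(mDefaultConfig)) {
            // This device does not support AR.
            finish();
            return;
        }
    }

    // 3. Forward the lifecycle events to the Session.
    @Override
    protected void onResume() {
        super.onResume();
        mSession.resume(mDefaultConfig);
    }

    @Override
    protected void onPause() {
        super.onPause();
        mSession.pause();
    }
}
```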
Judging from the Sample, these are the core configuration steps for using ARCore. But that alone is not enough: these steps only get ARCore running. If nothing shows up on the screen, how can you be sure ARCore is working properly? Let's set that question aside for now and come back to it as we dig deeper.
Note: Since ARCore works with the camera, you also need to ensure that the application is granted access to the camera.
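The Sample wraps this check in a small helper class (CameraPermissionHelper); the standard Android calls underneath look roughly like the following sketch, assuming it lives inside the Activity and runs on API 23 or later:

```java
import android.Manifest;
import android.content.pm.PackageManager;

// A sketch of the camera-permission check (standard Android APIs, API 23+).
private static final int CAMERA_PERMISSION_CODE = 0;

private void ensureCameraPermission() {
    if (checkSelfPermission(Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        requestPermissions(
                new String[] {Manifest.permission.CAMERA}, CAMERA_PERMISSION_CODE);
    }
}
```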
Drawing in the ARCore Sample
Now let's see how the Sample does its drawing. This is the most important part of developing an AR application.
The drawing objects
The objects associated with drawing are:
```java
BackgroundRenderer mBackgroundRenderer = ...;
ObjectRenderer mVirtualObject = ...;
ObjectRenderer mVirtualObjectShadow = ...;
PlaneRenderer mPlaneRenderer = ...;
PointCloudRenderer mPointCloud = ...;
```
Among them:
- mBackgroundRenderer: draws the data captured by the camera.
- mVirtualObject: draws the little Android robot.
- mVirtualObjectShadow: draws the robot's shadow.
- mPlaneRenderer: draws the planes recognized by the SDK.
- mPointCloud: draws the point cloud recognized by the SDK.
These few objects are responsible for all the drawing. But where exactly does the drawing happen, and how is it done?
Configuring the drawing surface
Those of you who have developed OpenGL applications on Android will know that drawing on Android requires a GLSurfaceView as the drawing target, and the Sample is no exception.
A GLSurfaceView control, mSurfaceView, is declared in the layout file. GLSurfaceView prepares the OpenGL drawing environment for us and calls back into our code at the appropriate times.
The first step is to configure this GLSurfaceView. The relevant code is as follows:
```java
// Set up renderer.
mSurfaceView.setPreserveEGLContextOnPause(true);
mSurfaceView.setEGLContextClientVersion(2);
mSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0); // Alpha used for plane blending.
mSurfaceView.setRenderer(this);
mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
```
This configuration of the GLSurfaceView does the following:
- Preserves the EGL context while in the paused state
- Selects OpenGL ES 2.0
- Chooses a drawing surface with 8 bits per RGBA channel, a 16-bit depth buffer, and no stencil buffer
- Registers the Activity itself as the renderer, meaning the drawing logic is implemented in this class
- Sets the render mode to continuous rendering: as soon as one frame is rendered, the next is rendered immediately
For more information, see the official OpenGL ES tutorials and documentation.
Implementing the drawing logic
With the GLSurfaceView configured, it's time to implement our drawing logic. To render content on a GLSurfaceView, you need to implement the GLSurfaceView.Renderer interface, which is defined as follows:
```java
public interface Renderer {
    void onSurfaceCreated(GL10 gl, EGLConfig config);
    void onSurfaceChanged(GL10 gl, int width, int height);
    void onDrawFrame(GL10 gl);
}
```
- onSurfaceCreated(GL10 gl, EGLConfig config): called when the drawing surface is created or recreated. Initialization can be done in this callback. Note that this method runs on the OpenGL thread with a current OpenGL context, so OpenGL calls can be made here.
- onSurfaceChanged(GL10 gl, int width, int height): called when the drawing surface changes. The outside world may have resized the control, so we need to update our viewport information in this call so that drawing maps accurately onto the screen.
- onDrawFrame(GL10 gl): called whenever a frame is drawn, i.e. once per frame. This is where the main drawing logic lives.
Therefore, to understand how the Sample draws its content, you need to focus on these three methods.
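In the Sample it is HelloArActivity itself that implements this interface, consistent with the setRenderer(this) call we saw above:

```java
public class HelloArActivity extends AppCompatActivity
        implements GLSurfaceView.Renderer {
    // onSurfaceCreated / onSurfaceChanged / onDrawFrame are implemented here,
    // as shown in the following sections.
}
```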
The drawing logic
First, let's look at the initialization:
```java
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1.0f);

    // Initialize the background renderer (i.e. the camera data).
    // A Context is passed in because resources need to be read internally.
    mBackgroundRenderer.createOnGlThread(/*context=*/this);
    // Set the camera texture handle; ARCore will update the camera data
    // into this texture.
    mSession.setCameraTextureName(mBackgroundRenderer.getTextureId());

    // Configure the other rendering objects.
    try {
        // The virtual object: the little green Android robot.
        mVirtualObject.createOnGlThread(/*context=*/this, "andy.obj", "andy.png");
        // Material configuration.
        mVirtualObject.setMaterialProperties(0.0f, 3.5f, 1.0f, 6.0f);

        // Shadow configuration.
        mVirtualObjectShadow.createOnGlThread(/*context=*/this,
                "andy_shadow.obj", "andy_shadow.png");
        // Blend mode.
        mVirtualObjectShadow.setBlendMode(BlendMode.Shadow);
        // Material configuration.
        mVirtualObjectShadow.setMaterialProperties(1.0f, 0.0f, 0.0f, 1.0f);
    } catch (IOException e) {
        Log.e(TAG, "Failed to read obj file");
    }

    try {
        // The plane renderer.
        mPlaneRenderer.createOnGlThread(/*context=*/this, "trigrid.png");
    } catch (IOException e) {
        Log.e(TAG, "Failed to read plane texture");
    }

    // Point cloud configuration.
    mPointCloud.createOnGlThread(/*context=*/this);
}
```
Next comes the surface-size callback, where we update the viewport and notify ARCore of the drawing surface's size:
```java
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // Notify ARCore that the display area changed size, so that it can adjust
    // its perspective matrix and the video background internally.
    mSession.setDisplayGeometry(width, height);
}
```
Finally comes the core drawing method, void onDrawFrame(GL10 gl), called once per frame:
```java
// Clear the screen.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

try {
    // ...

    // Draw the background, i.e. the image data captured by the camera.
    mBackgroundRenderer.draw(frame);

    // If we are not in the motion-tracking state, don't draw anything else.
    if (frame.getTrackingState() == TrackingState.NOT_TRACKING) {
        return;
    }

    // Draw ARCore's point cloud, i.e. the feature points.
    mPointCloud.update(frame.getPointCloud());
    mPointCloud.draw(frame.getPointCloudPose(), viewmtx, projmtx);

    // Draw the planes ARCore has recognized.
    mPlaneRenderer.drawPlanes(mSession.getAllPlanes(), frame.getPose(), projmtx);

    for (PlaneAttachment planeAttachment : mTouches) {
        if (!planeAttachment.isTracking()) {
            continue;
        }
        planeAttachment.getPose().toMatrix(mAnchorMatrix, 0);

        // Draw the virtual object and its shadow.
        mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
        mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
    }
} catch (Throwable t) {
    // Avoid crashing the application due to unhandled exceptions.
    Log.e(TAG, "Exception on the OpenGL thread", t);
}
```
Here, mBackgroundRenderer draws the content captured by the camera, mPointCloud draws the feature-point cloud recognized by ARCore, mPlaneRenderer draws the planes recognized by ARCore, and mVirtualObject and mVirtualObjectShadow draw the virtual object and its shadow.
As you can see, the drawing-related methods are all named draw or drawXXX; it is these calls that make things appear on screen. The specific logic is encapsulated in the corresponding classes, and interested readers can study them further; a sketch of what such a method does internally follows.
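To give a flavor of what such a draw method does, here is a simplified sketch in the spirit of PointCloudRenderer. It is illustrative only, not the Sample's exact code; the member fields are assumed to have been set up in createOnGlThread():

```java
// Assumed to be initialized in createOnGlThread(): the compiled shader
// program, the attribute/uniform locations, the VBO and the point count.
private int mProgramName;
private int mPositionAttribute;
private int mModelViewProjectionUniform;
private int mVbo;
private int mNumPoints;

public void draw(float[] viewMatrix, float[] projectionMatrix) {
    // Combine view and projection into a single matrix for the shader.
    float[] mvp = new float[16];
    Matrix.multiplyMM(mvp, 0, projectionMatrix, 0, viewMatrix, 0);

    GLES20.glUseProgram(mProgramName);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
    GLES20.glEnableVertexAttribArray(mPositionAttribute);
    // ARCore point-cloud points are four floats each (x, y, z, confidence).
    GLES20.glVertexAttribPointer(
            mPositionAttribute, 4, GLES20.GL_FLOAT, false, 16, 0);
    GLES20.glUniformMatrix4fv(mModelViewProjectionUniform, 1, false, mvp, 0);
    GLES20.glDrawArrays(GLES20.GL_POINTS, 0, mNumPoints);

    GLES20.glDisableVertexAttribArray(mPositionAttribute);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
}
```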
We can also see that, before drawing, these renderer objects need certain information from us (how these pieces combine is sketched below):
- the position of the object being drawn
- the view matrix and projection matrix used for drawing
- the pose (position and orientation) of the object
- the lighting information for the object
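To make the role of these matrices concrete, this is how a pose and the two matrices combine in standard model-view-projection math, using android.opengl.Matrix. Here anchorPose, viewmtx and projmtx stand for the values obtained from ARCore below; this is not Sample code:

```java
// Standard MVP math: pose -> model matrix, then MV = V * M, MVP = P * MV.
float[] modelMatrix = new float[16];
float[] modelViewMatrix = new float[16];
float[] modelViewProjectionMatrix = new float[16];

anchorPose.toMatrix(modelMatrix, 0);
Matrix.multiplyMM(modelViewMatrix, 0, viewmtx, 0, modelMatrix, 0);
Matrix.multiplyMM(modelViewProjectionMatrix, 0, projmtx, 0, modelViewMatrix, 0);
// modelViewProjectionMatrix is what the vertex shader ultimately receives.
```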
Where does this information come from? Essentially, it all comes from ARCore. Now let's see how to get this data out of ARCore.
Getting drawing information from ARCore
Remember the Session class mentioned above? Yes, AR-related information is still obtained through the Session. Since this information is mainly used for drawing, the code that fetches it lives in the renderer's onDrawFrame(GL10 gl):
```java
try {
    // Obtain the current frame; this is where all the per-frame data comes from.
    Frame frame = mSession.update();

    // In the Sample's design, only one tap event is handled per frame, to keep
    // the drawing workload down. Taps arrive at a much lower frequency than the
    // frame rate, so spreading them across frames makes little perceptible
    // difference. This is an optimization technique that can be used in practice.
    MotionEvent tap = mQueuedSingleTaps.poll();
    if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
        for (HitResult hit : frame.hitTest(tap)) {
            // hitTest is the hit-testing interface provided by ARCore; it
            // reports which targets the tap operation hit. Check whether a
            // plane was hit, and whether the hit is inside its polygon.
            if (hit instanceof PlaneHitResult && ((PlaneHitResult) hit).isHitInPolygon()) {
                // Another optimization: limit the number of objects to 16.
                // ARCore has to track each of these anchors internally, and
                // the more it tracks, the more computation is required.
                if (mTouches.size() >= 16) {
                    mSession.removeAnchors(Arrays.asList(mTouches.get(0).getAnchor()));
                    mTouches.remove(0);
                }
                // Save the object's information into mTouches.
                // mSession.addAnchor(hit.getHitPose()) is crucial: it tells
                // ARCore to track this position.
                mTouches.add(new PlaneAttachment(
                        ((PlaneHitResult) hit).getPlane(),
                        mSession.addAnchor(hit.getHitPose())));
                break;
            }
        }
    }

    // ...

    // Get the projection matrix of the current camera relative to the world
    // coordinate system.
    float[] projmtx = new float[16];
    mSession.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

    // Get the view matrix. Together with the matrix above, it determines
    // which objects in the virtual world can be seen.
    float[] viewmtx = new float[16];
    frame.getViewMatrix(viewmtx, 0);

    // Estimate the light intensity.
    final float lightIntensity = frame.getLightEstimate().getPixelIntensity();

    // Update and draw the point cloud. The pose carries the orientation of
    // these points; the view and projection matrices determine which points
    // are visible.
    mPointCloud.update(frame.getPointCloud());
    mPointCloud.draw(frame.getPointCloudPose(), viewmtx, projmtx);

    // Check if we detected at least one plane. If so, hide the loading message.
    if (mLoadingMessageSnackbar != null) {
        // getAllPlanes returns the information of all recognized planes.
        for (Plane plane : mSession.getAllPlanes()) {
            if (plane.getType() == com.google.ar.core.Plane.Type.HORIZONTAL_UPWARD_FACING
                    && plane.getTrackingState() == Plane.TrackingState.TRACKING) {
                hideLoadingMessage();
                break;
            }
        }
    }

    // Draw the planes, using the position and pose of each plane together
    // with the projection matrix.
    mPlaneRenderer.drawPlanes(mSession.getAllPlanes(), frame.getPose(), projmtx);

    float scaleFactor = 1.0f;
    for (PlaneAttachment planeAttachment : mTouches) {
        if (!planeAttachment.isTracking()) {
            continue;
        }
        planeAttachment.getPose().toMatrix(mAnchorMatrix, 0);

        // Use this information to draw the little robot and its shadow.
        mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
        mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
        mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
    }
} catch (Throwable t) {
    // Avoid crashing the application due to unhandled exceptions.
    Log.e(TAG, "Exception on the OpenGL thread", t);
}
```
This information embodies the capabilities that ARCore provides to us. With it, we can do many, many things, far beyond the little scene drawn by the sample program.
Knowing how to obtain this information, we could swap out all of the drawing code, for example by handing these values to the corresponding APIs of another 3D graphics framework. If you are interested, try it: replace the drawing parts described above. A hypothetical sketch of the idea follows.
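Purely as an illustration of that idea, the per-frame hand-off might look like this. myEngine, getCamera(), getNode() and the setters are invented placeholders, not a real API:

```java
// Hypothetical sketch: feeding ARCore's per-frame data into another 3D engine.
// "myEngine" and all of its methods are placeholders, not a real API.
Frame frame = mSession.update();

float[] viewmtx = new float[16];
float[] projmtx = new float[16];
frame.getViewMatrix(viewmtx, 0);
mSession.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

// Drive the engine's camera with ARCore's view and projection matrices...
myEngine.getCamera().setViewMatrix(viewmtx);
myEngine.getCamera().setProjectionMatrix(projmtx);

// ...and place scene nodes at the poses of the anchors created by taps.
for (PlaneAttachment attachment : mTouches) {
    attachment.getPose().toMatrix(mAnchorMatrix, 0);
    myEngine.getNode("andy").setWorldTransform(mAnchorMatrix);
}
myEngine.render();
```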
Conclusion
At this point, the walkthrough of the ARCore sample program is complete. The content in the rendering package is purely drawing code; it has nothing to do with ARCore itself and, as mentioned above, could be replaced with a more mature and modern 3D graphics framework.
Overall, ARCore's API design is minimal: it provides its core functionality with as small an API surface as possible, and it is not hard to use. However, to use ARCore well you need some basic knowledge of OpenGL, as well as some basic concepts from game development, such as coordinate systems, the projection and view matrices, and textures.
I will continue to explore how ARCore can be used together with other 3D graphics frameworks to reduce the code needed to interoperate with raw OpenGL (those things are fundamental, but writing bare OpenGL is not much fun). Understanding the basics of OpenGL, however, remains very, very important.
The above is the author's brief analysis of the ARCore sample project code. If there are any mistakes, please point them out in the comments. Thank you!