When scrolling through Moments, I often get swept up in interesting mini-games. They are easy to pick up, suit all ages, and spread fast, taking over Moments within minutes. Want to build a fun little game of your own? Huawei's machine learning service (HMS ML Kit) provides face detection and hand keypoint detection capabilities to help you do exactly that.

Crazy Rockets integrates both face detection and hand keypoint detection, and offers two ways to play: in one, you steer the rocket through the Stonehenge-like obstacles by moving your face up and down; in the other, you steer with up-and-down hand gestures. Both modes feed the detected face or hand keypoint positions back to the game, which then moves the rocket. Great fun!
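
As a rough illustration of how such a control loop might look, here is a minimal sketch in plain Java. Everything in it is hypothetical game code, not an ML Kit API: the detected face's vertical center is normalized against the preview height and mapped onto the rocket's flight corridor.

// Minimal sketch of the face-steering idea; all names here are hypothetical.
public class RocketController {
    /** Hypothetical game object driven by the detection results. */
    public interface Rocket {
        void setVerticalPosition(float normalizedY); // 0 = top of corridor, 1 = bottom
    }

    private static final float PREVIEW_HEIGHT = 1080f; // must match the LensEngine preview size

    private final Rocket rocket;

    public RocketController(Rocket rocket) {
        this.rocket = rocket;
    }

    // Feed in the vertical center of the detected face, in preview coordinates.
    public void onFaceCenterY(float faceCenterY) {
        float normalized = Math.max(0f, Math.min(1f, faceCenterY / PREVIEW_HEIGHT));
        rocket.setVerticalPosition(normalized);
    }
}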

The Crazy Shopping Cart mini-game is built on the hand keypoint detection capability: gestures move the shopping cart left and right to catch the goods falling from above, and the falling speed increases every 15 seconds, giving players a fresh kind of shopping experience.
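
A minimal sketch of that game state, again with purely hypothetical names: the detected hand's horizontal position drives the cart, and a timer steps up the fall speed every 15 seconds.

// Hypothetical game-state sketch: cart movement plus the 15-second speed-up.
public class CartGame {
    private static final long SPEED_UP_INTERVAL_MS = 15_000L;

    private float cartX = 0.5f;   // normalized horizontal cart position, 0..1
    private float fallSpeed = 1f; // falling speed of the goods, in game units per frame
    private long lastSpeedUpMs = System.currentTimeMillis();

    // Called with the detected hand's horizontal center, normalized to [0, 1].
    public void onHandX(float normalizedX) {
        cartX = Math.max(0f, Math.min(1f, normalizedX));
    }

    // Called once per frame by the game loop.
    public void update(long nowMs) {
        if (nowMs - lastSpeedUpMs >= SPEED_UP_INTERVAL_MS) {
            fallSpeed *= 1.2f; // arbitrary step; tune to taste
            lastSpeedUpMs = nowMs;
        }
        // ... move the falling goods by fallSpeed and test whether cartX catches them ...
    }
}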

Crazy Rockets

(1) Face detection

1. Configure the Maven repository

  • Configure the Maven repository address for the HMS Core SDK in allprojects > repositories.
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
  • Configure the Maven repository address for the HMS Core SDK in buildscript > repositories.
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
  • Add the AGCP configuration in buildscript > dependencies.
dependencies {
    ...
    classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}

2. Integrate the SDK

implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'

3. Create a face analyzer

MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
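
If you want to tune the analyzer for a live game, the SDK also accepts a settings object. The following is a sketch based on the MLFaceAnalyzerSetting API as described in the HMS ML Kit documentation; verify the names against your SDK version.

// Optional: create the analyzer with custom settings instead of the defaults.
MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting.Factory()
        .setPerformanceType(MLFaceAnalyzerSetting.TYPE_SPEED) // favor speed for live video
        .setTracingAllowed(true) // track the same face across frames
        .create();
MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);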

4. Create a processing class

public class FaceAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLFace> {
    @Override
    public void transactResult(MLAnalyzer.Result<MLFace> results) {
        SparseArray<MLFace> items = results.getAnalyseList();
        // Process the detection results as needed. Note that only the detection
        // results can be processed here; no other ML Kit detection interfaces
        // may be called from this callback.
    }

    @Override
    public void destroy() {
        // Callback invoked when detection ends; release resources here.
    }
}
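
Before starting the camera, bind this processing class to the analyzer, just as the gesture flows below do; otherwise transactResult is never invoked:

analyzer.setTransactor(new FaceAnalyzerTransactor());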

5. Create a LensEngine to capture the camera video stream and pass it to the analyzer

LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
    .setLensType(LensEngine.BACK_LENS)
    .applyDisplayDimension(1440, 1080)
    .applyFps(30.0f)
    .enableAutomaticFocus(true)
    .create();
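
Since the rocket is steered with the player's face, the front camera is the more natural choice for this game; swapping the lens type is one line (LensEngine also defines FRONT_LENS):

    .setLensType(LensEngine.FRONT_LENS)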

6. Call the run method to start the camera, read the video stream, and run detection

// Implement the other logic of the SurfaceView control yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}

7. Release detection resources

if (analyzer != null) {
    try {
        analyzer.stop();
    } catch (IOException e) {
        // Exception handling.
    }
}
if (lensEngine != null) {
    lensEngine.release();
}
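
In practice, run() can only succeed once the SurfaceView's surface exists, and the engine should be released along with it. A minimal sketch of that wiring, using the standard Android SurfaceHolder.Callback with the lensEngine and mSurfaceView from the steps above:

mSurfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            lensEngine.run(holder); // start the camera once the surface is ready
        } catch (IOException e) {
            lensEngine.release(); // give up the camera on failure
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        lensEngine.release(); // stop the camera when the surface goes away
    }
});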

(2) Gesture recognition

1. Configure the Maven repository

Configure the Maven repository address for the HMS Core SDK in allprojects > repositories.

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

Configure the Maven repository address for the HMS Core SDK in buildscript > repositories.

buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

Add the AGCP configuration in buildscript > dependencies.

dependencies {
    ...
    classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}

2. Integrate the SDK

// Import the base SDK
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
// Import the hand keypoint detection model package
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'

3. Create a default gesture analyzer

MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();
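
If your SDK version exposes MLHandKeypointAnalyzerSetting (as the HMS ML Kit documentation describes), the analyzer can also be created with custom settings; treat the following as an unverified sketch.

// Optional sketch: limit detection to a single hand, which is enough for steering.
MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting.Factory()
        .setMaxHandResults(1)
        .create();
MLHandKeypointAnalyzer analyzer =
        MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting);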

4. Create a processing class

public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
    @Override
    public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
        SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
        // Process the detection results as needed. Note that only the detection
        // results can be processed here; no other ML Kit detection interfaces
        // may be called from this callback.
    }

    @Override
    public void destroy() {
        // Callback invoked when detection ends; release resources here.
    }
}
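
Inside transactResult, the game needs a single control value per frame. Here is a sketch of a helper that could be called with each entry of analyseList, assuming the getHandKeypoints()/getPointY() accessors named in the HMS ML Kit documentation:

// Sketch: collapse a hand's keypoints into one vertical control value.
private static float averageHandY(List<MLHandKeypoints> hands) {
    float sumY = 0f;
    int count = 0;
    for (MLHandKeypoints hand : hands) {
        for (MLHandKeypoint point : hand.getHandKeypoints()) {
            sumY += point.getPointY();
            count++;
        }
    }
    return count == 0 ? -1f : sumY / count; // -1 when no hand is visible
}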

5. Set the processing class

analyzer.setTransactor(new HandKeypointTransactor());

6. Create a LensEngine

LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
    .setLensType(LensEngine.BACK_LENS)
    .applyDisplayDimension(1280, 720)
    .applyFps(20.0f)
    .enableAutomaticFocus(true)
    .create();

7. Call the run method to start the camera, read the video stream, and run detection

// Implement the other logic of the SurfaceView control yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}

8. Release detection resources

if (analyzer != null) {
    analyzer.stop();
}

if (lensEngine != null) {
    lensEngine.release();
}

(3) Crazy Shopping Cart development practice

1. Configure the Maven repository address

buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

2. Full SDK integration

dependencies {
    // Import the base SDK
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
    // Import the hand keypoint detection model package
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
}

After integrating the SDK in either of the two ways above, add the AGConnect plugin to the file header: add apply plugin: 'com.huawei.agconnect' after apply plugin: 'com.android.application', as shown below.
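
The header of the app-level build.gradle then starts with:

apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'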

3. Create a hand keypoint analyzer

MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();

4. Create recognition result processing class “HandKeypointTransactor”

public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
    @Override
    public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
        SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
        // Process the detection results as needed. Note that only the detection
        // results can be processed here; no other ML Kit detection interfaces
        // may be called from this callback.
    }

    @Override
    public void destroy() {
        // Callback invoked when detection ends; release resources here.
    }
}

5. Set the recognition result processor to bind the analyzer to the result processor

analyzer.setTransactor(new HandKeypointTransactor());

6. Create LensEngine

LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
    .setLensType(LensEngine.BACK_LENS)
    .applyDisplayDimension(1280, 720)
    .applyFps(20.0f)
    .enableAutomaticFocus(true)
    .create();

7. Call the run method to start the camera, read the video stream, and run detection

// Implement the other logic of the SurfaceView control yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}

8. After the detection is complete, stop the analyzer and release detection resources

if (analyzer != null) {
    analyzer.stop();
}
if (lensEngine != null) {
    lensEngine.release();
}

Having walked through the main development steps, don't you agree the integration is quick and simple? Beyond these two mini-games, face detection and hand keypoint detection have many other applications in daily life. For example, a short-video app that integrates these capabilities can generate cute or funny effects around the hand keypoints to make videos more entertaining. In a smart-home scenario, you could define custom gestures as remote-control commands for smart appliances, enabling more intelligent human-computer interaction. Give it a try and build some fun apps of your own!

For more details, please refer to:

Visit the official Huawei Developer Alliance website for development guide documents

Join the developer discussion on Reddit

Download the demo and sample code on GitHub

Go to Stack Overflow to resolve integration issues


Original link: developer.huawei.com/consumer/cn…

Author: Pepper