Preface

As the epidemic has changed everyone's shopping habits this year, the market share of e-commerce has continued to rise. Livestreaming e-commerce, a brand-new way of selling, is reshaping where traffic enters from, and more and more consumers reach a product page through a livestream. To get better results from a broadcast, anchors therefore spend a great deal of time preparing product highlights, promotional offers, and demonstration segments, each of which has a direct impact on the final sales. But broadcasting from the same fixed set every time quickly causes aesthetic fatigue: visitors who are not interested in the product currently on screen often simply leave, and unless the anchor keeps cracking jokes, it is impossible to hold every viewer's interest through every product in the lineup, so the audience of the stream tends to shrink rather than grow as the product introductions go on.

Now, with the image segmentation technology provided by Huawei ML Kit (machine learning service), the various static and dynamic sets can be replaced digitally in real time according to the product category and requirements, keeping the broadcast lively and interesting as the background style switches. The technology uses semantic segmentation to separate the anchor's portrait from the background: when introducing household goods, the set can switch instantly to a home-style room, and when introducing outdoor sports equipment, it can switch to an outdoor scene in real time. Through this innovative experience, the audience gets a far more immersive feeling.

Function introduction

The Demo is built on two key technologies from Huawei ML Kit: image segmentation and hand keypoint detection. It implements switching the background with gestures. To avoid accidental switches, the Demo only reacts to a clear wave of the hand; once the custom backgrounds are loaded, it supports switching forward (wave to the right) and backward (wave to the left), operated the same way as swiping on a phone, and dynamic video backgrounds are supported as well. If you want to switch backgrounds with your own custom gestures, or build other gesture-driven effects, you can integrate Huawei ML Kit hand keypoint detection and develop them yourself (a rough sketch of how such a wave could be detected follows below).
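As a concrete illustration of the wave detection, here is a minimal sketch (not the Demo's actual code): track the average x coordinate of the hand key points returned by ML Kit across frames and treat a large jump as a wave to the left or right. The MLHandKeypoint type is assumed to expose getPointX() as in the ML Kit hand keypoint API; the class name, threshold, and return convention are hypothetical placeholders to adapt.

import com.huawei.hms.mlsdk.handkeypoint.MLHandKeypoint;
import java.util.List;

public class SwipeDetector {
    private static final float SWIPE_THRESHOLD = 200f; // pixels between frames; tune for your preview size
    private float lastCenterX = -1f;

    // Returns 1 for a wave to the right (next background), -1 for a wave to the left (previous), 0 otherwise.
    public int update(List<MLHandKeypoint> keypoints) {
        if (keypoints == null || keypoints.isEmpty()) {
            lastCenterX = -1f; // hand left the frame, reset
            return 0;
        }
        float sumX = 0f;
        for (MLHandKeypoint point : keypoints) {
            sumX += point.getPointX();
        }
        float centerX = sumX / keypoints.size();
        int direction = 0;
        if (lastCenterX >= 0 && Math.abs(centerX - lastCenterX) > SWIPE_THRESHOLD) {
            direction = centerX > lastCenterX ? 1 : -1;
        }
        lastCenterX = centerX;
        return direction;
    }
}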

Isn't that an imaginative interactive experience? Let's see how it is done.

Development steps

  1. Add the Huawei AGCP plug-in and the Maven repository to the project-level build.gradle file.
buildscript {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}
  
allprojects {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
}
  2. Integrate the SDKs in full-SDK mode.
dependencies{
     // Introduce the basic SDK for image segmentation
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.0.4.300'
    // Introduce the multi-category segmentation model package
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-multiclass-model:2.0.4.300'
    // Introduce the portrait segmentation model package
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.0.4.300'
    // Introduce the basic SDK for gesture recognition
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
    // Introduce the model package of hand keypoint detection
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
}
  3. Add the plug-in configuration to the file header of the app-level build.gradle.

Add apply plugin: 'com.huawei.agconnect' after apply plugin: 'com.android.application'.
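For reference, the header of the app-level build.gradle would then start with something like this:

apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'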

  4. Enable automatic updating of the machine learning models.

Add the following to the AndroidManifest.xml file:

<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="imgseg,handkeypoint" />
    ...
</manifest>
  5. Create the image segmentation and hand keypoint analyzers, and combine them into a composite analyzer.
MLImageSegmentationAnalyzer imageSegmentationAnalyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer();// Image segmentation analyzer
MLHandKeypointAnalyzer handKeypointAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();// Gesture recognition analyzer
  
MLCompositeAnalyzer analyzer = new MLCompositeAnalyzer.Creator()
                                    .add(imageSegmentationAnalyzer)
                                   .add(handKeypointAnalyzer)
                                   .create();
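The factory call above uses the default configuration. If you want to target portrait segmentation explicitly and only need the cut-out foreground, the analyzer can also be created from a setting object; this is a minimal sketch assuming the MLImageSegmentationSetting factory, BODY_SEG analyzer type, and FOREGROUND_ONLY scene offered by the image segmentation SDK:

MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        .setExact(false) // coarse segmentation is usually enough for a real-time preview
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG) // portrait segmentation
        .setScene(MLImageSegmentationScene.FOREGROUND_ONLY) // only return the cut-out foreground
        .create();
MLImageSegmentationAnalyzer imageSegmentationAnalyzer =
        MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);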
  6. Create the recognition result processing classes.
public class ImageSegmentAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLImageSegmentation> {
    @Override
    public void transactResult(MLAnalyzer.Result<MLImageSegmentation> results) {
        SparseArray<MLImageSegmentation> items = results.getAnalyseList();
        // The developer processes the recognition results as needed. Note that only the detection results are processed here.
        // Other detection-related interfaces provided by ML Kit cannot be called.
    }
    @Override
    public void destroy() {
        // Callback invoked when detection ends; used to release resources, etc.
    }
}

public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
    @Override
    public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
        SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
        // The developer processes the recognition results as needed. Note that only the detection results are processed here.
        // Other detection-related interfaces provided by ML Kit cannot be called.
    }
    @Override
    public void destroy() {
        // Callback invoked when detection ends; used to release resources, etc.
    }
}
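transactResult is where the actual background replacement would happen. Below is a minimal sketch (not the original Demo code) of compositing the segmented portrait onto a chosen backdrop, assuming the segmentation result exposes the cut-out foreground as a bitmap via getForeground(); backgroundBitmap and showResult are placeholders for your own background asset and preview rendering.

// Sketch only: composite the segmented portrait onto a custom background.
private void replaceBackground(MLImageSegmentation segmentation, Bitmap backgroundBitmap) {
    Bitmap foreground = segmentation.getForeground(); // portrait on a transparent background
    if (foreground == null || backgroundBitmap == null) {
        return;
    }
    Bitmap output = Bitmap.createBitmap(backgroundBitmap.getWidth(),
            backgroundBitmap.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(output);
    canvas.drawBitmap(backgroundBitmap, 0f, 0f, null); // draw the chosen set first
    canvas.drawBitmap(foreground, 0f, 0f, null);       // then the segmented anchor on top (scale first if sizes differ)
    showResult(output); // hypothetical: hand the composited frame to your preview view
}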
  7. Set the result processors and bind each analyzer to its processor.
imageSegmentationAnalyzer.setTransactor(new ImageSegmentAnalyzerTransactor());
handKeypointAnalyzer.setTransactor(new HandKeypointTransactor());
  8. Create the LensEngine.
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context,analyzer)
    // Set the camera facing. LensEngine.BACK_LENS is the rear camera and LensEngine.FRONT_LENS is the front camera.
    .setLensType(LensEngine.FRONT_LENS)
    .applyDisplayDimension(1280, 720)
    .applyFps(20.0f)
    .enableAutomaticFocus(true)
    .create();
  9. Start the camera, read the video stream, and run recognition.
// Implement the other logic of the SurfaceView control yourself.
SurfaceView mSurfaceView = new SurfaceView(this);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}
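The comment above leaves the SurfaceView handling to you. One common pattern (a sketch, not the Demo's exact code) is to start the LensEngine only once the preview surface exists and release it when the surface is destroyed:

mSurfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            lensEngine.run(holder); // start the camera stream once the preview surface is ready
        } catch (IOException e) {
            lensEngine.release();
        }
    }
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // Nothing to do in this sketch.
    }
    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        lensEngine.release(); // stop the camera when the preview surface goes away
    }
});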
  10. When the detection is complete, stop the analyzer and release the detection resources.
if (analyzer != null) {
    try {
        analyzer.stop();
    } catch (IOException e) {
        // Exception handling.
    }
}
if (lensEngine != null) {
    lensEngine.release();
}

Conclusion

To sum up, this little trick can be implemented quickly with a few simple steps: import the packages, create the detectors, and process the analysis results. Image segmentation can do much more than this. For example, on video sites it can be combined with front-end rendering techniques to mask the bullet-screen comments so they never cover people in the frame, or it can cut elegant ID photos of various sizes out of the material you already have. A key advantage of semantic segmentation is that it lets you precisely control which object to extract: besides portraits, you can also segment food, pets, buildings, landscapes, and even plants, without resorting to professional photo-editing software on a computer.

GitHub Demo

For a more detailed development guide, see the Huawei Developer Alliance website: developer.huawei.com/consumer/cn…


Original link: developer.huawei.com/consumer/cn…
Original author: Timer