Preface
HMS Core 4.0 is the latest release in Huawei's HMS Core series. One of its highlighted services is Machine Learning Kit (ML Kit).
So what can the machine learning service do? What problems can it solve during application development?
Today, taking face detection as an example, I will walk you through a practical sample so you can feel the powerful functions the machine learning service provides and the convenience it brings to developers.
Machine learning services provide the ability to detect faces
Let me first show you the face detection capability of Huawei's machine learning service:
As the diagram shows, face detection supports:
- recognizing face orientation;
- detecting facial expressions (happy, disgusted, surprised, sad, and angry);
- detecting face attributes (gender, age, and wearing items such as glasses);
- detecting the coordinates of features of the face, nose, eyes, lips, and eyebrows;
- detecting multiple faces at the same time.
Isn't that powerful?
Key tip: this capability is free, and it covers all Android phone models!
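Before diving into the demo, here is a minimal sketch of what calling face detection on a single still image looks like. It assumes the SDK's MLFrame and asyncAnalyseFrame interfaces and a bitmap variable you already hold; consult the official API reference for the authoritative signatures.
// Assumed imports: com.huawei.hms.mlsdk.MLAnalyzerFactory,
// com.huawei.hms.mlsdk.common.MLFrame, com.huawei.hms.mlsdk.face.MLFace
MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
// Wrap an android.graphics.Bitmap into a frame the analyzer understands
MLFrame frame = MLFrame.fromBitmap(bitmap);
analyzer.asyncAnalyseFrame(frame)
        .addOnSuccessListener(faces -> {
            // faces is a List<MLFace>; each face carries emotions, attributes, and key points
            for (MLFace face : faces) {
                float smile = face.getEmotions().getSmilingProbability();
            }
        })
        .addOnFailureListener(e -> {
            // Detection failed, e.g. the image could not be processed
        });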
Today, I will use the machine learning service's multi-face detection and expression detection capabilities to write a small smile-capture demo and put them into practice. The demo source code is available for download on GitHub.
1. Development preparation
The preparation for developing with Huawei HMS kits is much the same across kits: add the Maven dependency and introduce the SDK.
1.1 Add the Huawei Maven repository to the project-level build.gradle
Incrementally add the following Maven address:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK dependencies to the app-level build.gradle
Introduce the face recognition SDK and the base SDK:
dependencies {
    // Introduce the base SDK
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    // Introduce the face detection capability package
    implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:1.0.2.300'
}
1.3 Add automatic model downloading incrementally in AndroidManifest.xml
This is mainly used to update the model: after the algorithm is optimized, the new model can be automatically downloaded to the phone.
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="face" />
    </application>
</manifest>
1.4 Apply for camera and storage permissions in AndroidManifest.xml
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2. Code development
2.1 Create the face analyzer and take a photo once a smile is detected
To do this:
- Configure the analyzer parameters.
- Pass the parameter configuration to the analyzer.
- In analyzer.setTransactor, override transactResult to process the face detection results. Face detection returns a smile confidence (roughly, the probability that a face is smiling); a photo is taken as soon as the confidence exceeds a set threshold.
private MLFaceAnalyzer analyzer;

private void createFaceAnalyzer() {
    MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting.Factory()
            .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
            .setKeyPointType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_KEYPOINTS)
            .setMinFaceProportion(0.1f)
            .setTracingAllowed(true)
            .create();
    this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
    this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
        @Override
        public void destroy() {}

        @Override
        public void transactResult(MLAnalyzer.Result<MLFace> result) {
            SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
            // Count how many detected faces are smiling above the threshold
            int flag = 0;
            for (int i = 0; i < faceSparseArray.size(); i++) {
                MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
                if (emotion.getSmilingProbability() > smilingPossibility) {
                    flag++;
                }
            }
            // Capture only when enough faces are smiling and no capture is in progress
            if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
                safeToTakePicture = false;
                mHandler.sendEmptyMessage(TAKE_PHOTO);
            }
        }
    });
}
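The snippet above references a few fields that the excerpt does not show. The definitions below are illustrative assumptions (the names come from the demo code, but the values here are examples, not the demo's authoritative settings):
// Assumed field definitions; the threshold values are illustrative
private static final int TAKE_PHOTO = 1;        // handler message: capture a photo
private static final int STOP_PREVIEW = 2;      // handler message: stop the camera preview
private final float smilingPossibility = 0.95f; // per-face smile confidence threshold
private final float smilingRate = 0.8f;         // fraction of faces that must be smiling
private boolean safeToTakePicture = true;       // guards against repeated captures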
The photo capture and storage part:
private void takePhoto() {
    this.mLensEngine.photograph(null, new LensEngine.PhotographListener() {
        @Override
        public void takenPhotograph(byte[] bytes) {
            mHandler.sendEmptyMessage(STOP_PREVIEW);
            Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
            saveBitmapToDisk(bitmap);
        }
    });
}
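The listener above hands the captured bytes to saveBitmapToDisk, which the excerpt does not show. A minimal sketch, assuming the photo is simply written into the public Pictures directory (the method name matches the demo; the body here is hypothetical):
// Assumed imports: java.io.File, java.io.FileOutputStream, java.io.IOException,
// android.os.Environment
private void saveBitmapToDisk(Bitmap bitmap) {
    // Hypothetical implementation; relies on the storage permission requested in 1.4/2.3
    File dir = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES), "SmileCapture");
    if (!dir.exists() && !dir.mkdirs()) {
        return;
    }
    File file = new File(dir, "smile_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(file)) {
        // Encode the captured frame as a JPEG file
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
    } catch (IOException e) {
        e.printStackTrace();
    }
}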
2.2 Create a vision engine to capture the camera’s dynamic video stream and transmit it to the analyzer
private void createLensEngine() {
    Context context = this.getApplicationContext();
    // Create LensEngine
    this.mLensEngine = new LensEngine.Creator(context, this.analyzer)
            .setLensType(this.lensType)
            .applyDisplayDimension(640, 480)
            .applyFps(25.0f)
            .enableAutomaticFocus(true)
            .create();
}
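Creating the LensEngine does not start the camera by itself; the engine still has to be run against the preview surface and released when done. A minimal sketch, assuming mPreview is (or wraps) a SurfaceView; the demo's own preview helper may handle this differently:
private void startLensEngine() {
    if (this.mLensEngine != null) {
        try {
            // Bind the engine to the preview surface and start streaming frames
            this.mLensEngine.run(this.mPreview.getHolder());
        } catch (IOException e) {
            // Could not start the camera; free the engine's resources
            this.mLensEngine.release();
            this.mLensEngine = null;
        }
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.mLensEngine != null) {
        this.mLensEngine.release();
    }
}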
2.3 Apply for permissions dynamically, then connect the analyzer and LensEngine creation code
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    this.setContentView(R.layout.activity_live_face_analyse);
    if (savedInstanceState != null) {
        this.lensType = savedInstanceState.getInt("lensType");
    }
    this.mPreview = this.findViewById(R.id.preview);
    this.createFaceAnalyzer();
    this.findViewById(R.id.facingSwitch).setOnClickListener(this);
    // Check the camera permission
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    } else {
        this.requestCameraPermission();
    }
}

private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
        ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
        return;
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        return;
    }
    if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
        return;
    }
}
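One last piece the excerpt leaves out is the mHandler that receives the TAKE_PHOTO and STOP_PREVIEW messages. A hypothetical wiring, just to show how the messages could be dispatched on the main thread (the demo's actual implementation may differ):
// Assumed imports: android.os.Handler, android.os.Looper, android.os.Message
private final Handler mHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case TAKE_PHOTO:
                takePhoto();
                break;
            case STOP_PREVIEW:
                // Stop streaming once the photo has been captured
                mLensEngine.release();
                break;
            default:
                break;
        }
    }
};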
Conclusion
See how simple the development process is? You can develop a new feature in 30 minutes! Now let's experience the effect of this multi-face smile capture.
Single smiling face shot:
Multiple smiling faces captured:
Based on the face detection capability, what else could you build? Open your imagination! Here are a few more hints:
- Add fun decorative effects by recognizing the locations of facial features such as the ears, eyes, nose, mouth, and eyebrows.
- Apply exaggerated deformation and stretching to the recognized face contour to generate amusing portraits, or develop beautification functions for the contour area.
- Use age identification to develop parental control features that address the pain point of children being addicted to electronic devices.
- Develop eye-strain warning features by measuring how long the eyes stare at the screen.
- Implement liveness detection by matching user actions against random instructions (shaking the head, blinking, opening the mouth, etc.).
- Recommend relevant products to users by combining detection results such as age and gender.
For details about the development guide, see the machine learning service development guide on the HUAWEI Developers official website.
Content source: developer.huawei.com/consumer/cn…
Original author: AI_ Talking