This component is currently available only for React Native on iOS.

The development requirements for this component were as follows: detect and mark faces, apply basic beautification, and provide basic camera functions such as taking photos and switching between the front and rear lenses. For background, see the official component packaging tutorial.

The code for this article is in the demo. To run the demo:

$ git clone https://github.com/lianglei777/demos.git
$ cd demos
$ git checkout RNFaceDemo
$ cd RNFaceDemo
$ npm install 
$ cd ios 
$ pod install

If `pod install` fails, refer to the CocoaPods section of this article.

Component functions

  • Face marking, returning the number of detected faces
  • Filter-based beautification (built on GPUImage), with an adjustable beauty level (0–9)
  • Camera functions, including taking photos and switching between the front and rear lenses; other camera functions can be added as extensions

The effect is shown below.

How to use

Code files

  1. Add the Camera folder from the demo's ios directory to the ios directory of your project.
  2. Add the camera and photo-album permission entries to your Info.plist:

     <key>NSCameraUsageDescription</key>
     <string>Use your camera to take pictures</string>
     <key>NSPhotoLibraryAddUsageDescription</key>
     <string>Save images: uses your photo album to save photos</string>
     <key>NSPhotoLibraryUsageDescription</key>
     <string>Upload avatar: uses your photo album to pick photos</string>

  3. On the JS layer, use the component by referring to NativeModule/RNFaceDetectView.js and Pages/ComponentBridgeDemo.js.
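
Since the native side takes the beauty level as a string prop in the 0–9 range, the JS layer may want to normalize the value before passing it down. A minimal sketch, assuming nothing beyond that range convention (the helper name is my own, not from the demo):

```javascript
// Hypothetical helper: clamp a beauty level to the 0-9 range the native
// component expects and format it as the string prop the bridge passes down.
function normalizeBeautyLevel(level) {
  const n = Number(level);
  if (Number.isNaN(n)) return '0';          // fall back to "no beautification"
  const clamped = Math.min(9, Math.max(0, n));
  return String(clamped);
}
```

The JS component could then render the native view with something like `beautyLevel={normalizeBeautyLevel(value)}`.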

Install GPUImage

Add the following to your Podfile and run `pod install`:

pod 'GPUImage', :git => 'https://github.com/lianglei777/GPUImage.git'

Press Cmd+B to build. If you run into either of the following problems:

Error 1

Solution: add the libGPUImage.a file by following the steps shown below.

Error 2

Solution: in Build Settings → Library Search Paths, double-click and add “${PODS_CONFIGURATION_BUILD_DIR}/GPUImage”, selecting non-recursive.

Introduction

The code itself contains plenty of comments; below, organized by file, are the points I think deserve the most attention.

GPUImage

GPUImage is an Objective-C library that uses the GPU to apply filter effects and beautify images. Unfortunately, the author stopped maintaining it after Swift appeared, and some of its code needs modification to run on newer iOS versions. Here is the GPUImage fork I modified, which is also the one the component uses.

FSKGPUImageBeautyFilter

FSKGPUImageBeautyFilter is a beauty filter built on GPUImage; the beauty effect can be adjusted along three dimensions:

/** Beauty (skin-smoothing) intensity */
@property (nonatomic, assign) CGFloat beautyLevel;
/** Brightness intensity */
@property (nonatomic, assign) CGFloat brightLevel;
/** Tonal intensity */
@property (nonatomic, assign) CGFloat toneLevel;
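
The component exposes a single 0–9 beauty level to JS while the filter has these three dimensions, so some mapping between them has to happen. The mapping below is only an illustrative guess sketched in JS (the real mapping lives in the component's Objective-C code):

```javascript
// Illustrative only: map one 0-9 beauty level onto the filter's three
// dimensions, each normalized to 0.0-1.0. The weights here are invented
// for the sketch; the actual mapping lives in the native code.
function beautyLevelToFilterParams(level) {
  const t = Math.min(9, Math.max(0, level)) / 9; // normalize to 0..1
  return {
    beautyLevel: t,        // skin smoothing scales directly
    brightLevel: t * 0.5,  // raise brightness more gently
    toneLevel: t * 0.3,    // shift tone gentler still
  };
}
```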

Note the initWithFragmentShaderFromString: call in FSKGPUImageBeautyFilter.m and the shader source passed into it. If you don't understand the relevant content, please don't modify it, and don't add spaces or newlines just to make the code prettier. This is a pit I stepped into myself; the current code has been verified through actual debugging, so feel free to use it as-is.

RCTFaceDetectView

Here is the main code that encapsulates the component

RCTFaceDetectView.h

/** Face-detection callback for the JS layer */
@property (nonatomic, copy) RCTBubblingEventBlock onFaceCallback;
/** Beauty level, passed in as a prop */
@property (nonatomic, copy) NSString *beautyLevel;

// Singleton
+ (instancetype)sharedInstance;
// Initialize the camera interface
- (UIView *)initBeautifyFaceView;
// Switch between the front and rear cameras
- (void)switchCameraFrontOrBack;
// Take a photo
- (void)takeFaceDetectCamera:(RCTResponseSenderBlock)successBlock;
// Set the beauty level
- (void)setBeautyLevel:(NSString *)level;
// Stop camera capture
- (void)stopCamera;

Notice here

  • onFaceCallback is the callback method used by the JS component; it must be declared as an RCTBubblingEventBlock. beautyLevel is an argument passed in as a prop.

  • switchCameraFrontOrBack, takeFaceDetectCamera, and stopCamera are the component's functional methods. Because of the original requirements they were not wrapped as component props, but you can expose your own call methods to JS. The wrappers that expose these native methods were not added to the demo; they would be written as follows:

#pragma mark - Face detection camera: take a photo; the callback returns the photo as base64
RCT_REMAP_METHOD(takeFaceDetectCameraWithCallback,
                 takeFaceDetectCamera:(RCTResponseSenderBlock)successBlock)
{
  dispatch_async(dispatch_get_main_queue(), ^{
    [[RCTFaceDetectView sharedInstance] takeFaceDetectCamera:successBlock];
  });
}

#pragma mark - Face detection camera: switch between the front and rear cameras
RCT_REMAP_METHOD(switchCameraFrontOrBack, switchCameraFrontOrBack)
{
  dispatch_async(dispatch_get_main_queue(), ^{
    [[RCTFaceDetectView sharedInstance] switchCameraFrontOrBack];
  });
}

#pragma mark - Face detection camera: set the beauty (skin-smoothing) level
RCT_REMAP_METHOD(setFilterLevel, setBeautyLevel:(float)level)
{
  dispatch_async(dispatch_get_main_queue(), ^{
    [[RCTFaceDetectView sharedInstance] setBeautyLevel:[NSString stringWithFormat:@"%f", level]];
  });
}

#pragma mark - Stop the video stream
RCT_EXPORT_METHOD(stopFaceDetectCamera)
{
  [[RCTFaceDetectView sharedInstance] stopCamera];
  [[RCTFaceDetectView sharedInstance] unobserveGlobalNotifications];
}
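
On the JS side, a callback-style native method like takeFaceDetectCameraWithCallback (whose RCTResponseSenderBlock is the trailing argument) is often wrapped in a Promise for easier use. A generic sketch; the wrapper name is my own, and the demo calls the native module directly:

```javascript
// Hypothetical wrapper: turn a native method that takes a trailing
// success callback (RCTResponseSenderBlock) into a Promise-returning one.
function promisifyNativeMethod(nativeMethod) {
  return (...args) =>
    new Promise(resolve => {
      nativeMethod(...args, result => resolve(result));
    });
}
```

With React Native's NativeModules this could be used as, e.g., `promisifyNativeMethod(NativeModules.<YourModule>.takeFaceDetectCameraWithCallback)` (the module name depends on how the bridge module is registered).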

RCTFaceDetectView.m

Key points

How to obtain the face-related parameters

The following iOS delegate method needs to be implemented:

#pragma mark - AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
  ...
}

The metadataObjects parameter contains the number of faces and their coordinate positions in the camera.
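
Inside that delegate method the component can filter metadataObjects down to face objects and report the count through onFaceCallback. The filtering logic, sketched in JS on data shaped roughly like what the event might carry (the field names here are invented for illustration):

```javascript
// Illustrative sketch: given an array of metadata objects (as the native
// delegate receives), keep only face objects and count them. The 'type'
// field mirrors AVFoundation's AVMetadataObjectTypeFace, but this JS
// shape is an assumption, not the component's actual payload.
function countFaces(metadataObjects) {
  return metadataObjects.filter(obj => obj.type === 'face').length;
}
```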

How to draw a face frame

Coordinate transformation is required here: the face's position in the camera's coordinate space must be converted into screen coordinates. This is what AVCaptureVideoPreviewLayer's transformedMetadataObjectForMetadataObject: method is for; see the code for the specifics.
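
The real conversion also accounts for rotation, mirroring, and the preview layer's videoGravity; a greatly simplified version of the underlying idea — scaling a normalized (0–1) rect up to view coordinates, ignoring rotation and mirroring — can be sketched as:

```javascript
// Greatly simplified sketch of the coordinate conversion: AVFoundation
// reports face bounds normalized to 0..1; map them to view coordinates.
// The real transformedMetadataObjectForMetadataObject: additionally
// handles rotation, mirroring, and videoGravity, which are ignored here.
function normalizedRectToView(rect, viewWidth, viewHeight) {
  return {
    x: rect.x * viewWidth,
    y: rect.y * viewHeight,
    width: rect.width * viewWidth,
    height: rect.height * viewHeight,
  };
}
```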

How to beautify

Use the FSKGPUImageBeautyFilter described above with GPUImage in the standard way.

RCTFaceDetectViewManager

This class exposes the native component encapsulated by RCTFaceDetectView to the JS layer.

For more details, run the demo and read the code.

Afterword

When I started this article I planned to write everything in as much detail as possible, but in practice some things would have become very wordy if written out in full, so I used brief bullet points instead. There are many comments in the code; read it carefully and you will be able to understand and use it.

I remember it took nearly two weeks to research and develop this requirement: how to get the face data and draw the face frames, how to implement filter-based beautification, how to package an iOS native component, and how to combine all of these features. At the time I couldn't find a suitable open-source component or demo to reference, so I pieced it together by breaking the work down into feature points and developing them one by one. Finishing it left me with a real sense of achievement.

Forgive me for providing only the iOS component, as I'm not familiar with Android.

If you have any questions, please leave them in the comments section.

  • React Native js and ios value transfer
  • React Native packs face detection and beauty components
  • React Native WebView usage scenarios