1. Introduction
This article describes how to use OpenCV for simple face recognition development. Before starting, you need to set up the OpenCV and NDK environments. The OpenCV version I use is 3.4.6, which can be downloaded here. The NDK version is android-ndk-r16b, which can be downloaded here. Problems caused by using other versions are not covered in this article; other versions come with plenty of pitfalls, so please take note.
2. NDK configuration
In the SDK Manager, install the LLDB and CMake tools, but not the NDK, because we will configure the separately downloaded r16b version in the next step.
Then configure the NDK path under SDK Location in the Project Structure dialog.
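Setting the NDK path there simply records it in local.properties. As a minimal sketch of what that file ends up containing (the paths below are placeholders for your own local install locations):

sdk.dir=/Users/yourname/Library/Android/sdk
ndk.dir=/Users/yourname/Library/Android/sdk/android-ndk-r16b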
3. OpenCV configuration
Find libopencv_java3.so under OpenCV-android-sdk\sdk\native\libs\armeabi-v7a, and the opencv and opencv2 folders under OpenCV-android-sdk\sdk\native\jni\include, and copy them into the project.
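For reference, after copying, the layout implied by the CMakeLists.txt paths used below looks roughly like this (the exact module path is an assumption; adjust it to your own project structure):

app/src/main/jniLibs/
    armeabi-v7a/
        libopencv_java3.so
    include/
        opencv/
        opencv2/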
In CMakeLists.txt, configure OpenCV as follows:
# For more information about using CMake with Android Studio, read the
# documentation: https://d.android.com/studio/projects/add-native-code.html
# Sets the minimum version of CMake required to build the native library.
cmake_minimum_required(VERSION 3.4.1)

#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11")
# 1. Determine the compiler type, if it is a GCC compiler, add c++11 support to the compiler options
if(CMAKE_COMPILER_IS_GNUCXX)
set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS}")
message(STATUS "optional:-std=c++11")
endif(CMAKE_COMPILER_IS_GNUCXX)
# 2. Import our header files from this configured directory
include_directories(../../main/jniLibs/include)

# 3. Add the dependent opencv .so library
set(distribution_DIR ${CMAKE_SOURCE_DIR}/../../../../src/main/jniLibs)
add_library(opencv_java3 SHARED IMPORTED)
set_target_properties(opencv_java3 PROPERTIES IMPORTED_LOCATION
        ${distribution_DIR}/armeabi-v7a/libopencv_java3.so)

# Creates and names a library, sets it as either STATIC
# or SHARED, and provides the relative paths to its source code.
# You can define multiple libraries, and CMake builds them for you.
# Gradle automatically packages shared libraries with your APK.
add_library( # Sets the name of the library.
native-lib
# Sets the library as a shared library.
SHARED
# Provides a relative path to your source file(s).
native-lib.cpp)
# Searches for a specified prebuilt library and stores the path as a
# variable. Because CMake includes system libraries in the search path by
# default, you only need to specify the name of the public NDK library
# you want to add. CMake verifies that the library exists before
# completing its build.
find_library( # Sets the name of the path variable.
log-lib
# Specifies the name of the NDK library that
# you want CMake to locate.
log)
# Specifies libraries CMake should link to your target library. You
# can link multiple libraries, such as libraries you define in this
# build script, prebuilt third-party libraries, or system libraries.
target_link_libraries( # Specifies the target library.
native-lib opencv_java3
# 4. Add the dependency library
jnigraphics
# Links the target library to the log library
# included in the NDK.
${log-lib})
Gradle configuration
android {
    ...
    defaultConfig {
        ...
        externalNativeBuild {
            cmake {
                cppFlags "-std=c++11 -frtti -fexceptions"
                abiFilters 'armeabi-v7a'
                arguments "-DANDROID_STL=gnustl_static"
            }
        }
        ndk {
            abiFilters 'armeabi-v7a'
        }
    }
    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
        }
    }
}
4. Core implementation
4.1 Kotlin layer implementation
Write a FaceDetection class to interact with the native layer. It exposes two external methods: one that detects faces and saves the face information, and one that loads the classifier file used for face recognition.
class FaceDetection {

    /**
     * Detect faces and save the face information
     * @param mFaceBitmap the bitmap to run detection on
     */
    external fun faceDetectionSaveInfo(mFaceBitmap: Bitmap): Int

    /**
     * Load the classifier file used for face recognition
     * @param filePath path of the cascade classifier file
     */
    external fun loadCascade(filePath: String)

    companion object {
        init {
            System.loadLibrary("native-lib")
        }
    }
}
Write functional code in MainActivity
class MainActivity : AppCompatActivity() {

    private var mFaceBitmap: Bitmap? = null
    private var mFaceDetection: FaceDetection? = null
    private var mCascadeFile: File? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        mFaceBitmap = BitmapFactory.decodeResource(resources, R.drawable.timg1)
        face_image.setImageBitmap(mFaceBitmap)
        copyCascadeFile()
        mFaceDetection = FaceDetection()
        if (mCascadeFile != null) {
            mFaceDetection?.loadCascade(mCascadeFile!!.absolutePath)
        }
    }

    private fun copyCascadeFile() {
        try {
            // load cascade file from application resources
            val inputStream = resources.openRawResource(R.raw.lbpcascade_frontalface)
            val cascadeDir = getDir("cascade", Context.MODE_PRIVATE)
            mCascadeFile = File(cascadeDir, "lbpcascade_frontalface.xml")
            if (mCascadeFile!!.exists()) return
            val os = FileOutputStream(mCascadeFile)
            val buffer = ByteArray(4096)
            var bytesRead: Int = inputStream.read(buffer)
            while (bytesRead != -1) {
                os.write(buffer, 0, bytesRead)
                bytesRead = inputStream.read(buffer)
            }
            inputStream.close()
            os.close()
        } catch (e: IOException) {
            e.printStackTrace()
        }
    }

    // Click handler wired to a button in the layout: run detection and show the result
    fun faceDetection(view: View) {
        mFaceBitmap?.let {
            mFaceDetection?.faceDetectionSaveInfo(it)
            face_image.setImageBitmap(it)
        }
    }
}
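Note that face_image is referenced directly by its layout id, which relies on the Kotlin Android Extensions plugin. For completeness, the imports this Activity roughly needs are sketched below; the synthetic import assumes the layout file is activity_main.xml, and whether you use the androidx or the support-library AppCompatActivity depends on your project setup:

import android.content.Context
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.os.Bundle
import android.view.View
import androidx.appcompat.app.AppCompatActivity // or android.support.v7.app.AppCompatActivity
import kotlinx.android.synthetic.main.activity_main.*
import java.io.File
import java.io.FileOutputStream
import java.io.IOException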
4.2 Native layer implementation
After analyzing the requirements and how OpenCV works, we need to convert the incoming Bitmap into a Mat; once the detected faces have been framed, we convert the Mat back into a Bitmap and return it to the Kotlin layer. A Mat has a type: CV_8UC4 corresponds to a Bitmap in ARGB_8888 format, and CV_8UC2 corresponds to RGB_565. The two conversion functions are implemented as follows:
// Convert a Bitmap into a Mat
// Mat has a type: CV_8UC4 corresponds to ARGB_8888 in a Bitmap, CV_8UC2 corresponds to RGB_565
void bitmap2Mat(JNIEnv *env, Mat &mat, jobject bitmap) {
    // 1. Get the bitmap info and lock the pixel buffer
    AndroidBitmapInfo info;
    void *pixels;
    AndroidBitmap_getInfo(env, bitmap, &info);
    AndroidBitmap_lockPixels(env, bitmap, &pixels);
    // Specify the width, height and type of the mat (BGRA)
    mat.create(info.height, info.width, CV_8UC4);
    if (info.format == ANDROID_BITMAP_FORMAT_RGBA_8888) {
        // The corresponding mat should be CV_8UC4
        Mat temp(info.height, info.width, CV_8UC4, pixels);
        // Copy temp into mat
        temp.copyTo(mat);
    } else if (info.format == ANDROID_BITMAP_FORMAT_RGB_565) {
        // The corresponding mat should be CV_8UC2
        Mat temp(info.height, info.width, CV_8UC2, pixels);
        // mat is CV_8UC4, so convert CV_8UC2 -> CV_8UC4
        cvtColor(temp, mat, COLOR_BGR5652BGRA);
    }
    // Unlock the bitmap pixel buffer
    AndroidBitmap_unlockPixels(env, bitmap);
}

// Convert a Mat back into a Bitmap
void mat2Bitmap(JNIEnv *env, Mat mat, jobject bitmap) {
    // 1. Get the bitmap info and lock the pixel buffer
    AndroidBitmapInfo info;
    void *pixels;
    AndroidBitmap_getInfo(env, bitmap, &info);
    AndroidBitmap_lockPixels(env, bitmap, &pixels);
    if (info.format == ANDROID_BITMAP_FORMAT_RGBA_8888) { // C4
        Mat temp(info.height, info.width, CV_8UC4, pixels);
        if (mat.type() == CV_8UC4) {
            mat.copyTo(temp);
        } else if (mat.type() == CV_8UC2) {
            cvtColor(mat, temp, COLOR_BGR5652BGRA);
        } else if (mat.type() == CV_8UC1) { // grayscale mat
            cvtColor(mat, temp, COLOR_GRAY2BGRA);
        }
    } else if (info.format == ANDROID_BITMAP_FORMAT_RGB_565) { // C2
        Mat temp(info.height, info.width, CV_8UC2, pixels);
        if (mat.type() == CV_8UC4) {
            cvtColor(mat, temp, COLOR_BGRA2BGR565);
        } else if (mat.type() == CV_8UC2) {
            mat.copyTo(temp);
        } else if (mat.type() == CV_8UC1) { // grayscale mat
            cvtColor(mat, temp, COLOR_GRAY2BGR565);
        }
    }
    // Unlock the bitmap pixel buffer
    AndroidBitmap_unlockPixels(env, bitmap);
}
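The two functions above, as well as the detection code below, rely on a few includes and file-scope declarations at the top of native-lib.cpp that the article does not show. A minimal sketch, with the LOGE macro and its log tag being assumptions inferred from the calls used in this post:

#include <jni.h>
#include <string>
#include <android/bitmap.h>    // AndroidBitmapInfo, AndroidBitmap_lockPixels, ...
#include <android/log.h>       // __android_log_print
#include <opencv2/opencv.hpp>  // Mat, cvtColor, CascadeClassifier, ...

using namespace cv;

// Assumed log macro; the tag string is a placeholder
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, "FaceDetection", __VA_ARGS__)

// File-scope classifier: loaded in loadCascade() and used in faceDetectionSaveInfo()
CascadeClassifier cascadeClassifier;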
In the core face recognition part, we can use OpenCV to convert the image to grayscale and apply histogram equalization, which improves the detection rate. Once a face is detected, we draw a rectangle around it so that the result is visible.
extern "C"
JNIEXPORT jint JNICALL
Java_com_vegen_facedetection_FaceDetection_faceDetectionSaveInfo(JNIEnv *env, jobject instance,
                                                                 jobject bitmap) {
    // OpenCV is written in C/C++ and its key class is Mat (a matrix),
    // so the Bitmap must first be turned into a C++ object OpenCV can operate on.
    // 1. Convert the Bitmap into a Mat
    Mat mat;
    bitmap2Mat(env, mat, bitmap);
    // 2. Convert the color mat into a grayscale mat, which the classifier works on
    Mat gray_mat;
    cvtColor(mat, gray_mat, COLOR_BGRA2GRAY);
    // 3. Histogram equalization to improve the detection rate
    Mat equalize_mat;
    equalizeHist(gray_mat, equalize_mat);
    // 4. Detect faces
    std::vector<Rect> faces;
    cascadeClassifier.detectMultiScale(equalize_mat, faces, 1.1, 5);
    LOGE("Number of faces: %d", faces.size());
    if (faces.size() != 0) {
        for (Rect faceRect : faces) {
            // Draw a rectangle around the detected face
            rectangle(mat, faceRect, Scalar(255, 155, 155), 8);
            // Write the marked-up mat back into the Bitmap
            mat2Bitmap(env, mat, bitmap);
            // Save the face information: a sub-Mat of the equalized image, e.g. as a jpg
            Mat face_info_mat(equalize_mat, faceRect);
            // save face_info_mat ...
        }
    }
    return 0;
}
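The code above leaves the actual saving of face_info_mat as a comment. One possible way to persist it as a JPEG with cv::imwrite is sketched below; the resized patch size and the output path are assumptions, not part of the article's code (in practice the path would be passed down from the Kotlin side):

// Hypothetical continuation inside the for loop, right after face_info_mat is created
Mat face_save_mat;
resize(face_info_mat, face_save_mat, Size(100, 100));  // normalize the patch size
// Assumed app-private output path; replace with a path supplied by the caller
imwrite("/data/data/com.vegen.facedetection/files/face_info.jpg", face_save_mat);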
Load the classifier file
extern "C"
JNIEXPORT void JNICALL
Java_com_vegen_facedetection_FaceDetection_loadCascade(JNIEnv *env, jobject instance, jstring filePath_) {
const char *filePath = env->GetStringUTFChars(filePath_, 0);
cascadeClassifier.load(filePath);
LOGE("Classifier file loaded successfully");
env->ReleaseStringUTFChars(filePath_, filePath);
}
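As a side note, CascadeClassifier::load() returns false when the file cannot be loaded, so a slightly more defensive version of the body above (a sketch, not the article's code) could check the result before logging success:

const char *filePath = env->GetStringUTFChars(filePath_, 0);
if (cascadeClassifier.load(filePath)) {
    LOGE("Classifier file loaded successfully");
} else {
    LOGE("Failed to load classifier file: %s", filePath);
}
env->ReleaseStringUTFChars(filePath_, filePath);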
5. Recognition results
Running it on a group photo of a women's volleyball team, the recognition rate turns out to be quite high; the result is shown in the figures below.
Original photo (the women's volleyball photo comes from the internet and will be removed upon request)
The image after recognition
6. Afterword
OpenCV is very powerful. More detailed tutorials on its use will be introduced later, and a follow-up will extend this to real-time face recognition, so stay tuned.
The full source code for this article can be found at github.com/Vegen/FaceD… A star is welcome.