1. Introduction
Overview
HUAWEI ML Kit provides the face detection service, which detects a user's facial features, including the face contour and angles as well as the positions of the eyebrows, eyes, nose, mouth, and ears, and then returns the detection results to your app.
What You Will Create
In this codelab, you will create a demo app for face detection.
What You Will Learn
In this codelab, you will learn how to:
- Use the ML SDK.
- Call the face detection service of ML Kit.
2. What You Will Need
Hardware Requirements
- A computer with Android Studio installed for app development
- A mobile phone running Android 4.4 or later for app development and debugging
Software Requirements
- JDK 1.8.211 or later
- Android Studio: 3.X or later
- minSdkVersion: 19 or later (mandatory)
- targetSdkVersion: 30 (recommended)
- compileSdkVersion: 30 (recommended)
- Gradle version: 4.6 or later (recommended)
- Test device: a Huawei phone running EMUI 5.0 or later, or a non-Huawei phone running Android 4.4 or later (Some capabilities are available only to Huawei phones.)
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
3. Integrating the SDK
- Configure the Maven repository address. For details, please refer to Configuring the Maven Repository Address for the HMS Core SDK.
- Integrate the face detection SDK. For details, please refer to Integrating the Face Detection SDK.
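The linked documents walk through the exact configuration. As a rough sketch (the SDK version below is only an example; use the latest version from the official documentation), the resulting Gradle entries look like this:
// Project-level build.gradle: add the HMS Core Maven repository.
repositories {
    google()
    maven { url 'https://developer.huawei.com/repo/' }
}

// App-level build.gradle: add the face detection SDK dependency.
dependencies {
    // Example version; check the documentation for the latest one.
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.5.300'
}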
4. Downloading the Demo Project
Click the following link to download the demo project of this codelab:
Download
Decompress the downloaded package to a local directory, for example, D:\mlkit-demo.
5. Developing and Running the Demo Project
Configuring the Project and Device
- Go to File > Open and select the demo project from the directory where the decompressed package is stored, for example, D:\MLKit-master\initial, to import it.
- If a dialog box is displayed, click OK.
- Synchronize the project with the Gradle files.
- Verify that the phone is correctly connected to your computer.
If the Gradle sync completes without errors, the project is successfully synchronized.
If Unknown Device or No device is displayed, run the following commands in the CMD window to restart the ADB service:
adb kill-server
adb start-server

Adding the Camera Permission
Add the camera permission for the app by adding the following declarations to the AndroidManifest.xml file:
<!--todo step 1: add authorization of camera -->
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
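On Android 6.0 (API level 23) and later, CAMERA is a dangerous permission that must also be requested at runtime before the camera can be opened. A minimal sketch (the request code 1 is arbitrary; the androidx.core library is assumed):
// In the activity, before starting detection:
// import android.Manifest;
// import android.content.pm.PackageManager;
// import androidx.core.app.ActivityCompat;
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, 1);
}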
Creating a Face Analyzer Based on the Device-Side Algorithm
Create a face analyzer for the app. To do this, add the following content to the createFaceAnalyzer method in the LiveImageDetectionActivity.java file:
// todo step 2: add on-device face analyzer
// Create the analyzer setting: detect facial features, prefer speed, and allow face tracking.
MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting.Factory()
        .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
        .setPerformanceType(MLFaceAnalyzerSetting.TYPE_SPEED)
        .allowTracing()
        .create();
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
// Bind a transactor that will receive and process the detection results.
analyzer.setTransactor(new FaceAnalyzerTransactor());
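When detection is no longer needed, stop the analyzer to release detection resources, for example in the activity's onDestroy method. A minimal sketch, assuming analyzer is a field of the activity:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        try {
            // Stop the analyzer to release detection resources.
            analyzer.stop();
        } catch (IOException e) {
            // Exception handling logic.
        }
    }
}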
Creating a LensEngine Object
Add the function of starting the camera for the app. To do this, add the following content to the createLensEngine method in the LiveImageDetectionActivity.java file:
// todo step 3: add on-device lens engine
LensEngine mLensEngine = new LensEngine.Creator(context, analyzer)
        .setLensType(lensType)
        .applyDisplayDimension(1600, 1024)
        .applyFps(25.0f)
        .enableAutomaticFocus(true)
        .create();
// Implement the other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
    // Start the camera and bind the preview to the SurfaceView.
    mLensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}
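The LensEngine holds the device camera, so release it when it is no longer needed, for example next to the analyzer cleanup in onDestroy (assuming mLensEngine is kept as a field):
// Release the camera held by the LensEngine.
if (mLensEngine != null) {
    mLensEngine.release();
}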
Displaying Face Detection Results
Add the function of displaying face detection results for the app. To do this, implement the transactResult method in the FaceAnalyzerTransactor.java file, as shown in the following class:
// todo step 4: add on-device face graphic
public class FaceAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLFace> {
    // Overlay view used to render the detection results; initialized by the host activity.
    private GraphicOverlay mGraphicOverlay;

    @Override
    public void transactResult(MLAnalyzer.Result<MLFace> results) {
        SparseArray<MLFace> items = results.getAnalyseList();
        // Process the detection results as required. Note that only the detection results
        // can be processed here; other detection-related APIs provided by ML Kit cannot be called.
        MLFaceGraphic graphic = new MLFaceGraphic(mGraphicOverlay, items);
        mGraphicOverlay.add(graphic);
    }

    @Override
    public void destroy() {
        // Callback used to release resources when the detection ends.
    }
}
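Besides rendering the overlay, transactResult is the natural place to inspect individual faces. As an illustrative sketch (assuming the MLFaceEmotion API of the face detection SDK), the following loop logs each face with a high smiling probability:
for (int i = 0; i < items.size(); i++) {
    MLFace face = items.valueAt(i);
    // MLFaceEmotion exposes expression probabilities such as smiling.
    if (face.getEmotions().getSmilingProbability() > 0.8f) {
        Log.d("FaceDemo", "Face " + i + " appears to be smiling.");
    }
}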
Running the App
Click Run on the toolbar of Android Studio to run the app.
Testing Your App
Point the camera at a face. The face contour and facial landmarks are then displayed on the screen.
6. Congratulations
Well done. You have successfully completed this codelab and learned how to:
- Use the ML SDK.
- Call the face detection service of ML Kit.
7. Reference
This project is only for demonstration. For details about the actual development process, please refer to the ML Kit Development Guide.