Introduction

HUAWEI ML Kit detects a user's facial features, including the positions of the eyes, ears, nose, and mouth, and sends this information to your app. You can use this information to develop fun apps.

What You Will Do

In this codelab, you will create a demo project for face detection.

What You Will Learn

Hardware Requirements

Software Requirements

Download source code

Unpack the downloaded zip file to a local path (such as D:\mlkit-demo). This unpacks a root folder (FaceDemo) containing all of the resources you will need.

Configure your projects and devices

  1. Select Open from the File menu and import the target demo project from D:\mlkit-demo\FaceDemo.
  2. Click "OK" if the following pop-up window is displayed.
  3. Sync the project with the Gradle files.
  4. If you see the following information, the project has been synchronized successfully.

  5. Make sure your Huawei phone is connected to the computer.
  6. If the "Unknown Device" or "No device" icon is displayed, restart the adb server by running the following commands.

    adb kill-server
    adb start-server

    (Tip 1: The default adb path is C:\Users\USER_NAME\AppData\Local\Android\Sdk\platform-tools\adb.

    Tip 2: If the adb daemon fails to start, you need to kill the process occupying port 5037 by its PID (for example, taskkill /F /PID PID_NUMBER on Windows).

    Tip 3: The command netstat -ano | findstr 5037 can help you find the process occupying port 5037.)

    If the following mobile phone icon is displayed in the toolbar, your configuration has taken effect.

Add camera permission

In this step, we will add the camera permission. Add the following to AndroidManifest.xml:

AndroidManifest.xml

<!--todo step 1: add authorization of camera -->
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA"/>
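
Note that on Android 6.0 (API level 23) and later, CAMERA is a dangerous permission that must also be granted at runtime. The following is a minimal sketch of requesting it from an activity; the request code constant and method name are illustrative and not part of the demo project.

// Requires: android.Manifest, android.content.pm.PackageManager,
// androidx.core.app.ActivityCompat, androidx.core.content.ContextCompat
// Illustrative sketch: request the CAMERA permission at runtime if it has
// not been granted yet (CAMERA_PERMISSION_CODE is a hypothetical constant).
private static final int CAMERA_PERMISSION_CODE = 1;

private void requestCameraPermissionIfNeeded() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA}, CAMERA_PERMISSION_CODE);
    }
}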

Add on-device face detector

In this step, we will add functionality to your app to build the face detector. Add the following to the createFaceAnalyzer method of the LiveImageDetectorActivity class:

LiveImageDetectorActivity.java

// todo step 2: add on-device face analyzer
MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting.Factory()
        .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
        .allowTracing()
        .create();
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
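
The analyzer delivers its results through a transactor. If the demo project does not already bind one elsewhere, a typical next line after creating the analyzer would look like the sketch below; the constructor argument is an assumption about how FaceAnalyzerTransactor is defined in this project.

// Sketch: bind the result processor so detected faces reach the overlay
// (assumes FaceAnalyzerTransactor takes the graphic overlay in its constructor).
analyzer.setTransactor(new FaceAnalyzerTransactor(mGraphicOverlay));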

Add on-device LensEngine

In this step, we will add functionality to your app to start the camera. Add the following to the createLensEngine method of the LiveImageDetectorActivity class:

LiveImageDetectorActivity.java

// todo step 3: add on-device lens engine
mLensEngine = new LensEngine.Creator(context, analyzer)
        .setLensType(lensType)
        .applyDisplayDimension(1600, 1024)
        .applyFps(25.0f)
        .enableAutomaticFocus(true)
        .create();
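
Creating the LensEngine does not start the camera preview by itself. The sketch below shows one way the engine is typically started once the preview surface is ready and released when the activity is destroyed; the helper method name, parameter, and logging tag are illustrative, and the fields are assumed to match the ones created above.

// Requires: java.io.IOException, android.util.Log, android.view.SurfaceHolder
// Sketch: start the preview once the surface is ready.
private void startLensEngine(SurfaceHolder surfaceHolder) {
    if (mLensEngine != null) {
        try {
            mLensEngine.run(surfaceHolder);
        } catch (IOException e) {
            Log.e("LiveImageDetectorActivity", "Failed to start lens engine.", e);
            mLensEngine.release();
            mLensEngine = null;
        }
    }
}

// Sketch: release camera and analyzer resources when the activity is destroyed.
@Override
protected void onDestroy() {
    super.onDestroy();
    if (mLensEngine != null) {
        mLensEngine.release();
    }
    if (analyzer != null) {
        try {
            analyzer.stop();
        } catch (IOException e) {
            Log.e("LiveImageDetectorActivity", "Failed to stop analyzer.", e);
        }
    }
}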

Add on-device face graphic

In this step, we will add functionality to your app to draw the detected face. Add the following to the transactResult method of the FaceAnalyzerTransactor class:

FaceAnalyzerTransactor.java

// todo step 4: add on-device face graphic
MLFaceGraphic graphic = new MLFaceGraphic(mGraphicOverlay, faceSparseArray.valueAt(i));
mGraphicOverlay.add(graphic);
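
For context, the two lines above sit inside the transactor's transactResult callback, which receives every face detected in the current frame. The sketch below shows what the complete method might look like, assuming mGraphicOverlay is a field of FaceAnalyzerTransactor as in the demo project.

// Requires: android.util.SparseArray plus the ML Kit face classes used above.
// Sketch: clear the previous frame's graphics, then add one MLFaceGraphic
// per detected face (assumes mGraphicOverlay is a field of this class).
@Override
public void transactResult(MLAnalyzer.Result<MLFace> result) {
    mGraphicOverlay.clear();
    SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
    for (int i = 0; i < faceSparseArray.size(); i++) {
        MLFaceGraphic graphic = new MLFaceGraphic(mGraphicOverlay, faceSparseArray.valueAt(i));
        mGraphicOverlay.add(graphic);
    }
}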

Run app

Click Run in the Android Studio toolbar.

Take your phone for a dry run

Point the camera at your face. The contour and features of the face will then be detected.

Well done. You have successfully completed this codelab and learned how to integrate HUAWEI ML Kit face detection into an Android app.

For more information about HUAWEI ML Kit, please visit our official website (https://developer.huawei.com/consumer/en/hms/huawei-mlkit).

This project is only a quick demonstration; an actual development process should strictly follow the development guide. For details, please refer to:

https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
