Overview

The AirTouch app is a simple app that lets your users experience virtual reality. To build the app and then boost revenue, you need to integrate several kits from HMS Core. As the app name suggests, users do not need to touch their screens to perform operations while using the app. In this app, we'll use the face detection, ASR, and hand gesture recognition capabilities of ML Kit, together with Site Kit and AR Engine. The app helps your users quickly get answers to their questions and acts as their life assistant.

Features and HMS Core Kits

Sign-in: Face detection (smile) from ML Kit
Voice detection: ASR from ML Kit
App rating: Hand gesture detection from ML Kit
Place detail search: Site Kit
Scroll-up and scroll-down: AR Engine

What You Will Create

In this codelab, you will create a demo project by using the capabilities of ML Kit (face detection, ASR, hand gesture detection), Site Kit, and AR Engine. To build the project, you need to:

What You Will Learn

In this codelab, you will learn how to:

Hardware Requirements

Software Requirements

Click here to learn more about how to prepare for the integration.

Go to Project Settings > Manage APIs in AppGallery Connect, and enable the services below.

Now, you have successfully enabled the required services for your app.
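
If you haven't already done so during preparation, also make sure the project-level build.gradle points to the HUAWEI Maven repository and applies the AppGallery Connect plugin. A typical configuration is shown below; the agcp version is only an example, so use the latest available version.

buildscript {
    repositories {
        google()
        mavenCentral()
        // HUAWEI Maven repository.
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        // AppGallery Connect plugin. The version shown here is an example.
        classpath 'com.huawei.agconnect:agcp:1.6.0.300'
    }
}

allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

Then apply the plugin in the app-level build.gradle with apply plugin: 'com.huawei.agconnect'.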

ML Kit (Face Detection)

ML Kit provides you with the face detection service. With this service integrated, you can build the sign-in function.

Add the following dependencies in the app-level build.gradle file.

implementation 'com.huawei.hms:ml-computer-vision-face:2.0.5.300'
implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.5.300'
implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.5.300'
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.5.300'

Step 1: Add CameraSourcePreview and GraphicOverlay to the layout. The layout will be used in LiveFaceDetectionHMSActivity to allow users to sign in to your app via eye blinking.

<com.huawei.touchmenot.java.hms.camera.CameraSourcePreview
    android:id="@+id/preview"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <com.huawei.touchmenot.java.hms.camera.GraphicOverlay
        android:id="@+id/overlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</com.huawei.touchmenot.java.hms.camera.CameraSourcePreview>

Step 2: Create the createFaceAnalyzer() method in LiveFaceDetectionHMSActivity. It initializes an MLFaceAnalyzerSetting object used to detect face key points, such as facial features and expressions.

Java

private MLFaceAnalyzer createFaceAnalyzer() {
    MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting.Factory()
            .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
            .setPerformanceType(MLFaceAnalyzerSetting.TYPE_SPEED)
            .allowTracing()
            .create();
    analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
    this.analyzer.setTransactor(new FaceAnalyzerTransactor(this.mOverlay, this));
    return this.analyzer;
}

Kotlin

private fun createFaceAnalyzer(): MLFaceAnalyzer? {
    val setting = MLFaceAnalyzerSetting.Factory()
            .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
            .setPerformanceType(MLFaceAnalyzerSetting.TYPE_SPEED)
            .allowTracing()
            .create()
    analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting)
    analyzer?.setTransactor(FaceAnalyzerTransactor(mOverlay, this))
    return analyzer
}
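
The analyzer created above does not open the camera by itself. The sketch below shows one possible way to feed front-camera frames into it using ML Kit's LensEngine together with the CameraSourcePreview and GraphicOverlay from Step 1 (shown in Java only). The CameraSourcePreview.start() signature is an assumption based on Huawei's ML Kit sample code, so adapt it to the helper class in your project.

// Sketch: bind the face analyzer to the front camera (assumed helper signatures).
private void startCameraPreview() {
    analyzer = createFaceAnalyzer();
    mLensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
            .setLensType(LensEngine.FRONT_LENS)   // front camera for sign-in
            .applyDisplayDimension(640, 480)
            .applyFps(30.0f)
            .enableAutomaticFocus(true)
            .create();
    try {
        // Assumed to match the sample helper: start(LensEngine, GraphicOverlay).
        mPreview.start(mLensEngine, mOverlay);
    } catch (IOException e) {
        Log.e(TAG, "Failed to start the LensEngine", e);
        mLensEngine.release();
        mLensEngine = null;
    }
}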

Step 3: Create the FaceAnalyzerTransactor class.

This class implements the MLAnalyzer.MLTransactor<MLFace> API, which requires you to override the transactResult() method. This method provides MLFace objects containing the left and right eye open probabilities, which are used to detect eye blinking.

Java

class FaceAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLFace> {
    public static float EYE_CLOSED_THRESHOLD = 0.4f;
    private GraphicOverlay mGraphicOverlay;
    private LiveFaceDetectionHMSActivity mContext;

    FaceAnalyzerTransactor(GraphicOverlay ocrGraphicOverlay, LiveFaceDetectionHMSActivity context) {
        this.mGraphicOverlay = ocrGraphicOverlay;
        mContext = context;
    }

    @Override
    public void transactResult(MLAnalyzer.Result<MLFace> result) {
        this.mGraphicOverlay.clear();
        SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
        for (int i = 0; i < faceSparseArray.size(); i++) {
            MLFaceFeature feature = faceSparseArray.get(i).getFeatures();
            float leftOpenScore = feature.getLeftEyeOpenProbability();
            float rightOpenScore = feature.getRightEyeOpenProbability();
            // Both eyes below the threshold means the user blinked.
            if (leftOpenScore < EYE_CLOSED_THRESHOLD && rightOpenScore < EYE_CLOSED_THRESHOLD) {
                Log.e("Eye blinked called ---", "" + feature.getLeftEyeOpenProbability()
                        + " : " + feature.getRightEyeOpenProbability());
                mContext.runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        Toasty.success(mContext, "Eye blink detected. Login successful",
                                Toast.LENGTH_SHORT, true).show();
                        new Handler().postDelayed(new Runnable() {
                            @Override
                            public void run() {
                                loadHome();
                            }
                        }, 800);
                    }
                });
            }
        }
    }

    @Override
    public void destroy() {
        // Release the overlay when the analyzer stops.
        this.mGraphicOverlay.clear();
    }

    private void loadHome() {
        mContext.startActivity(new Intent(mContext, HomeActivity.class));
        mContext.finish();
    }
}

Kotlin

class FaceAnalyzerTransactor internal constructor(private val mGraphicOverlay: GraphicOverlay?,
                                                  private val mContext: LiveFaceDetectionHMSActivity) : MLTransactor<MLFace> {
    private val THREAD_DELAY = 800

    override fun transactResult(result: MLAnalyzer.Result<MLFace>) {
        mGraphicOverlay!!.clear()
        val faceSparseArray = result.analyseList
        for (i in Constants.INIT_ZERO until faceSparseArray.size()) {
            val feature = faceSparseArray[i].features
            val leftOpenScore = feature.leftEyeOpenProbability
            val rightOpenScore = feature.rightEyeOpenProbability
            // Both eyes below the threshold means the user blinked.
            if (leftOpenScore < EYE_CLOSED_THRESHOLD && rightOpenScore < EYE_CLOSED_THRESHOLD) {
                Log.d(Constants.STR_EYE_BLINKED_CALLED, feature.leftEyeOpenProbability
                        .toString() + Constants.STR_COLON + feature.rightEyeOpenProbability)
                mContext.runOnUiThread {
                    Toast.makeText(mContext, Constants.STR_EYE_BLINK_LOGIN, Toast.LENGTH_SHORT).show()
                    Handler().postDelayed({ loadHome() }, THREAD_DELAY.toLong())
                }
            }
        }
    }

    override fun destroy() {
        // Release the overlay when the analyzer stops.
        mGraphicOverlay?.clear()
    }

    private fun loadHome() {
        mContext.startActivity(Intent(mContext, HomeActivity::class.java))
        mContext.finish()
    }

    companion object {
        private const val EYE_CLOSED_THRESHOLD = 0.4f
    }
}

Result:

ML Kit (ASR)

ML Kit provides you with the automatic speech recognition service.

Add the following dependencies in the app-level build.gradle file.

implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.2.0.300'
implementation 'com.huawei.hms:ml-computer-voice-asr:2.2.0.300'

Step 1: Create a speech recognizer using the code below.

Java

MLAsrRecognizer mSpeechRecognizer = MLAsrRecognizer.createAsrRecognizer(HomeActivity.this);

Kotlin

var mSpeechRecognizer = MLAsrRecognizer.createAsrRecognizer(this@HomeActivity)

Step 2: Add the API key obtained from the agconnect-services.json file.

Java

MLApplication.getInstance().setApiKey(API_KEY);

Kotlin

MLApplication.getInstance().apiKey = API_KEY
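
API_KEY is not defined in the snippets above. One way to avoid hard-coding it is to read it from agconnect-services.json at runtime via AGConnectServicesConfig, as sketched below (shown in Java only).

// Read the API key from agconnect-services.json instead of hard-coding it.
String apiKey = AGConnectServicesConfig.fromContext(getApplicationContext())
        .getString("client/api_key");
MLApplication.getInstance().setApiKey(apiKey);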

Step 3: Create a speech recognition result listener callback to obtain the recognized speech as text. The listener includes the following callback methods.

  1. onStartListening(): Called when the recorder starts to receive speech.
  2. onStartingOfSpeech(): Called when the speech recognizer detects that the user starts to speak.
  3. onVoiceDataReceived(): Returns the original PCM stream and audio power to the user.
  4. onRecognizingResults(): Receives the recognized text from MLAsrRecognizer.
  5. onResults(): Called when the final text data of ASR is received.
  6. onError(): Called when an error occurs in recognition.
  7. onState(): Notifies the app of status changes.

Java

protected class SpeechRecognitionListener implements MLAsrListener {
    @Override
    public void onStartingOfSpeech() {
        Log.e(TAG, "onStartingOfSpeech");
        start();
        mTextView.setTextColor(getResources().getColor(R.color.color_red_card));
        mTextView.setText("Listening.... Please wait.");
        waveLineView.startAnim();
        // The user starts to speak, that is, the speech recognizer detects that the user starts to speak.
    }

    @Override
    public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
        // Return the original PCM stream and audio power to the user.
    }

    @Override
    public void onState(int i, Bundle bundle) {
        // Notify the app status change.
        Log.e(TAG, "onState");
    }

    @Override
    public void onRecognizingResults(Bundle partialResults) {
        // Receive the recognized text from MLAsrRecognizer.
        Log.e(TAG, "onRecognizingResults");
    }

    @Override
    public void onResults(Bundle results) {
        Log.e(TAG, "onResults");
        endSpeech();
        setImage(0);
        // Text data of ASR.
        String data = results.getString(MLAsrRecognizer.RESULTS_RECOGNIZED);
        mTextView.setText(data);
        displayResult(data);
        Log.e(TAG, data);
        waveLineView.stopAnim();
        //startASR();
    }

    @Override
    public void onError(int error, String errorMessage) {
        Log.e(TAG, "onError");
        mTextView.setText(error + errorMessage);
        Toast.makeText(HomeActivity.this, error + errorMessage, Toast.LENGTH_SHORT).show();
        pauseSpeech();
        waveLineView.stopAnim();
        // waveLineView.onPause();
        mSpeechRecognizer.destroy();
        startASR();
    }

    @Override
    public void onStartListening() {
        Log.e(TAG, "onStartListening");
        // The recorder starts to receive speech.
        setImage(1);
        mTextView.setText("Listening.... Please wait.");
        startSpeech();
        waveLineView.startAnim();
    }
}

Kotlin

protected inner class SpeechRecognitionListener : MLAsrListener {
    override fun onStartingOfSpeech() {
        Log.d(TAG, Constants.STR_STARTING_SPEECH)
        start()
        mTextView!!.setTextColor(resources.getColor(R.color.color_red_card))
        mTextView!!.text = Constants.STR_LISTENING_ALERT_MESSAGE
        startSpeech()
    }

    override fun onVoiceDataReceived(data: ByteArray, energy: Float, bundle: Bundle) {
        // Return the original PCM stream and audio power to the user.
    }

    override fun onState(i: Int, bundle: Bundle) {
        // Notify the app status change.
        Log.d(TAG, Constants.STR_ON_STATE)
    }

    override fun onRecognizingResults(partialResults: Bundle) {
        // Receive the recognized text from MLAsrRecognizer.
        Log.d(TAG, Constants.STR_RECOGNIZING_RESULTS)
    }

    override fun onResults(results: Bundle) {
        Log.d(TAG, Constants.STR_ON_RESULTS)
        endSpeech()
        setImage(Constants.INIT_ZERO)
        // Text data of ASR.
        val data = results.getString(MLAsrRecognizer.RESULTS_RECOGNIZED)
        mTextView!!.text = data
        displayResult(data)
        Log.d(TAG, data)
        endSpeech()
    }

    override fun onError(error: Int, errorMessage: String) {
        Log.d(TAG, Constants.STR_ON_ERROR)
        // Called when an error occurs in recognition.
        mTextView!!.text = error.toString() + errorMessage
        Toast.makeText(this@HomeActivity, error.toString() + errorMessage, Toast.LENGTH_SHORT).show()
        pauseSpeech()
        // waveLineView.onPause();
        endSpeech()
        mSpeechRecognizer!!.destroy()
        startASR()
    }

    override fun onStartListening() {
        Log.d(TAG, Constants.STR_ON_START_LISTENING)
        // The recorder starts to receive speech.
        setImage(Constants.INIT_ONE)
        mTextView!!.text = Constants.STR_LISTENING_ALERT_MESSAGE
        startSpeech()
    }
}

Step 4: Set speech recognition parameters and pass the object of SpeechRecognitionListener to start recognizing the speech. The ASR result can be obtained from MLAsrListener.

Java

mSpeechRecognizer.setAsrListener(new SpeechRecognitionListener());
// Set parameters and start the audio device.
Intent intentSdk = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
        // Set the language that can be recognized to English. If this parameter is not set,
        // English is recognized by default. Example: "zh": Chinese; "en-US": English; "fr-FR": French.
        .putExtra(MLAsrConstants.LANGUAGE, "en-US")
        // Set to return the recognition result along with the speech. If you ignore the setting,
        // this mode is used by default. Options are as follows:
        // MLAsrConstants.FEATURE_WORDFLUX: Recognizes and returns texts through onRecognizingResults.
        // MLAsrConstants.FEATURE_ALLINONE: After the recognition is complete, texts are returned through onResults.
        .putExtra(MLAsrConstants.FEATURE, MLAsrConstants.FEATURE_ALLINONE);
// Start speech recognition.
mSpeechRecognizer.startRecognizing(intentSdk);
mTextView.setText("Ready to speak.");

Kotlin

mSpeechRecognizer?.setAsrListener(SpeechRecognitionListener())
// Set parameters and start the audio device.
val intentSdk = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
        // Set the language that can be recognized to English. If this parameter is not set,
        // English is recognized by default. Example: "zh": Chinese; "en-US": English; "fr-FR": French.
        // Pass the language code as a string, so resolve the resource with getString().
        .putExtra(MLAsrConstants.LANGUAGE, getString(R.string.US_English))
        // Set to return the recognition result along with the speech. If you ignore the setting,
        // this mode is used by default. Options are as follows:
        // MLAsrConstants.FEATURE_WORDFLUX: Recognizes and returns texts through onRecognizingResults.
        // MLAsrConstants.FEATURE_ALLINONE: After the recognition is complete, texts are returned through onResults.
        .putExtra(MLAsrConstants.FEATURE, MLAsrConstants.FEATURE_ALLINONE)
// Start speech recognition.
mSpeechRecognizer?.startRecognizing(intentSdk)
mTextView!!.text = Constants.STR_READY_TO_SPEAK

Step 5: After obtaining the recognition result from ASR, perform the following tasks:

Obtain weather-related information:

Java

if (data.contains(Constants.STR_WEATHER)) { getWeatherReport(Constants.WEATHER_LAT, Constants.WEATHER_LONG); }

Kotlin

if (data.contains(Constants.STR_WEATHER)) { getWeatherReport(Constants.WEATHER_LAT, Constants.WEATHER_LONG) }

Open camera/gallery:

Java

if (data.equalsIgnoreCase(Constants.STR_OPEN_CAMERA)) {
    speechData = new SpeechData();
    speechData.setResponse(userName + Constants.STR_OPENING_CAMERA);
    mAdapter.add(speechData, recyclerView);
    Handler responseHandler = new Handler();
    responseHandler.postDelayed(new Runnable() {
        @Override
        public void run() {
            Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
            startActivity(cameraIntent);
        }
    }, Constants.DELAY_MILLIS);
} else if (data.equalsIgnoreCase(Constants.STR_OPEN_GALLERY)) {
    speechData = new SpeechData();
    speechData.setResponse(userName + Constants.STR_OPENING_CAMERA);
    mAdapter.add(speechData, recyclerView);
    Handler responseHandler = new Handler();
    responseHandler.postDelayed(new Runnable() {
        @Override
        public void run() {
            Intent intent = new Intent();
            intent.setType(Constants.STR_IMG_TYPE);
            intent.setAction(Intent.ACTION_GET_CONTENT);
            startActivity(Intent.createChooser(intent, Constants.STR_SELECT_PICTURE));
        }
    }, Constants.DELAY_MILLIS);
}

Kotlin

if (data.equals(Constants.STR_OPEN_CAMERA, ignoreCase = true)) {
    speechData = SpeechData()
    speechData.response = userName + Constants.STR_OPENING_CAMERA
    mAdapter!!.add(speechData, recyclerView!!)
    val responseHandler = Handler()
    responseHandler.postDelayed({
        val cameraIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
        startActivity(cameraIntent)
    }, Constants.DELAY_MILLIS.toLong())
} else if (data.equals(Constants.STR_OPEN_GALLERY, ignoreCase = true)) {
    speechData = SpeechData()
    speechData.response = userName + Constants.STR_OPENING_CAMERA
    mAdapter!!.add(speechData, recyclerView!!)
    val responseHandler = Handler()
    responseHandler.postDelayed({
        val intent = Intent()
        intent.type = Constants.STR_IMG_TYPE
        intent.action = Intent.ACTION_GET_CONTENT
        startActivity(Intent.createChooser(intent, Constants.STR_SELECT_PICTURE))
    }, Constants.DELAY_MILLIS.toLong())
}

Obtain the place data:

Java

if (data.contains(Constants.STR_SEARCH)) { searchSitePlaces(data.replace(Constants.STR_SEARCH, "").trim()); }

Kotlin

if (data.contains(Constants.STR_SEARCH)) { searchSitePlaces(data.replace(Constants.STR_SEARCH, "").trim { it <= ' ' }) }

Rate the app using hand gesture detection:

Java

if (data.contains(Constants.STR_FEEDBACK)) {
    speechData = new SpeechData();
    speechData.setResponse(Constants.STR_DEAR + userName + Constants.STR_REDIRECT_FEEDBACK);
    mAdapter.add(speechData, recyclerView);
    Handler responseHandler = new Handler();
    responseHandler.postDelayed(new Runnable() {
        @Override
        public void run() {
            startActivityForResult(new Intent(HomeActivity.this,
                    LiveHandKeyPointAnalyseActivity.class), REQUEST_CODE);
        }
    }, Constants.DELAY_MILLIS);
}

Kotlin

if (data.contains(Constants.STR_FEEDBACK)) {
    speechData = SpeechData()
    speechData.response = Constants.STR_DEAR + userName + Constants.STR_REDIRECT_FEEDBACK
    mAdapter!!.add(speechData, recyclerView!!)
    val responseHandler = Handler()
    responseHandler.postDelayed({
        startActivityForResult(Intent(this@HomeActivity,
                LiveHandKeyPointAnalyseActivity::class.java), REQUEST_CODE)
    }, Constants.DELAY_MILLIS.toLong())
}
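
Because LiveHandKeyPointAnalyseActivity is started with startActivityForResult(), HomeActivity can consume the rating when the user returns. The sketch below (Java only) assumes the rating is returned in an Intent extra named "rating"; that key is hypothetical, so use whatever key your rating screen actually writes.

// Sketch: receive the rating chosen on the hand-gesture screen.
// The "rating" extra key is hypothetical.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_CODE && resultCode == RESULT_OK && data != null) {
        int rating = data.getIntExtra("rating", 0);
        SpeechData ratingData = new SpeechData();
        ratingData.setResponse("Thanks! You rated the app " + rating + " star(s).");
        mAdapter.add(ratingData, recyclerView);
    }
}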

ML Kit (Hand Gesture Recognition)

ML Kit provides you with hand gesture recognition.

Add the following dependencies in the app-level build.gradle file.

implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.1.0.300'
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.1.0.300'

Step 1: Add LensEnginePreview and GraphicOverlay to the layout. This layout will be used in LiveHandKeyPointAnalyseActivity for rating the app using hand gesture detection.

Java

<com.huawei.touchmenot.java.hms.camera.LensEnginePreview
    android:id="@+id/hand_preview"
    android:layout_width="200dp"
    android:layout_height="200dp"
    android:layout_centerHorizontal="true"
    android:layout_centerVertical="true">

    <com.huawei.touchmenot.java.hms.camera.GraphicOverlay
        android:id="@+id/hand_overlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</com.huawei.touchmenot.java.hms.camera.LensEnginePreview>

Kotlin

<com.huawei.touchmenot.kotlin.hms.camera.LensEnginePreview
    android:id="@+id/hand_preview"
    android:layout_width="200dp"
    android:layout_height="200dp"
    android:layout_centerHorizontal="true"
    android:layout_centerVertical="true">

    <com.huawei.touchmenot.kotlin.hms.camera.GraphicOverlay
        android:id="@+id/hand_overlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</com.huawei.touchmenot.kotlin.hms.camera.LensEnginePreview>

Step 2: Initialize the MLHandKeypointAnalyzerSetting object with the required configuration for analyzing the hand keypoints.

Java

MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting.Factory()
        .setMaxHandResults(2)
        .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
        .create();
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting);
mAnalyzer.setTransactor(new HandAnalyzerTransactor(this, mOverlay));

Kotlin

val setting = MLHandKeypointAnalyzerSetting.Factory()
        .setMaxHandResults(Constants.INIT_TWO)
        .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
        .create()
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
mAnalyzer?.setTransactor(HandAnalyzerTransactor(this, mOverlay))

Step 3: Create the HandAnalyzerTransactor class.

This class implements the MLAnalyzer.MLTransactor<MLHandKeypoints> API, which requires you to override the transactResult() method. This method provides MLHandKeypoints objects containing the coordinates of each hand keypoint, which are used to count the raised fingers.

Java

private class HandAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLHandKeypoints> {
    private GraphicOverlay mGraphicOverlay;
    WeakReference<LiveHandKeyPointAnalyseActivity> mMainActivityWeakReference;

    HandAnalyzerTransactor(LiveHandKeyPointAnalyseActivity mainActivity, GraphicOverlay ocrGraphicOverlay) {
        mMainActivityWeakReference = new WeakReference<>(mainActivity);
        this.mGraphicOverlay = ocrGraphicOverlay;
    }

    @Override
    public void transactResult(MLAnalyzer.Result<MLHandKeypoints> result) {
        this.mGraphicOverlay.clear();
        SparseArray<MLHandKeypoints> handKeypointsSparseArray = result.getAnalyseList();
        List<MLHandKeypoints> list = new ArrayList<>();
        System.out.println("point list size = " + handKeypointsSparseArray.size());
        for (int i = 0; i < handKeypointsSparseArray.size(); i++) {
            list.add(handKeypointsSparseArray.valueAt(i));
            System.out.println("point list size new = " + handKeypointsSparseArray.valueAt(i).getHandKeypoints());
        }
        HandKeypointGraphic graphic = new HandKeypointGraphic(this.mGraphicOverlay, list, result,
                LiveHandKeyPointAnalyseActivity.this);
        this.mGraphicOverlay.add(graphic);
    }

    @Override
    public void destroy() {
        this.mGraphicOverlay.clear();
    }
}

Kotlin

private inner class HandAnalyzerTransactor internal constructor(mainActivity: LiveHandKeyPointAnalyseActivity,
                                                                ocrGraphicOverlay: GraphicOverlay?) : MLTransactor<MLHandKeypoints> {
    private val mGraphicOverlay: GraphicOverlay?
    var mMainActivityWeakReference: WeakReference<LiveHandKeyPointAnalyseActivity>

    /**
     * Process the results returned by the analyzer.
     *
     * @param result
     */
    override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints>) {
        mGraphicOverlay!!.clear()
        val handKeypointsSparseArray = result.analyseList
        val list: MutableList<MLHandKeypoints> = ArrayList()
        println(getString(R.string.point_list_size) + handKeypointsSparseArray.size())
        for (i in Constants.INIT_ZERO until handKeypointsSparseArray.size()) {
            list.add(handKeypointsSparseArray.valueAt(i))
            println(getString(R.string.point_list_size_new) + handKeypointsSparseArray.valueAt(i).handKeypoints)
        }
        val graphic = HandKeypointGraphic(mGraphicOverlay, list, result, this@LiveHandKeyPointAnalyseActivity)
        mGraphicOverlay.add(graphic)
    }

    override fun destroy() {
        mGraphicOverlay!!.clear()
    }

    init {
        mMainActivityWeakReference = WeakReference(mainActivity)
        mGraphicOverlay = ocrGraphicOverlay
    }
}


Site Kit

With this kit, you can search for details about a place (such as the name, detailed address, and longitude-latitude coordinates of the place) based on the unique ID (siteId) of the place.

Add the following dependency in the app-level build.gradle file.

implementation 'com.huawei.hms:site:5.0.0.300'

Step 1: Create a TextSearchRequest object, which is used as the request body for keyword search, and set the query parameter.

Java

TextSearchRequest textSearchRequest = new TextSearchRequest();
textSearchRequest.setQuery(queryText);

Kotlin

val textSearchRequest = TextSearchRequest()
textSearchRequest.query = queryText
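
TextSearchRequest also accepts optional filters such as a location bias, search radius, and paging. A few of them are listed below (Java only); the coordinate and numeric values are placeholders.

// Optional filters for the keyword search (values are placeholders).
textSearchRequest.setLocation(new Coordinate(28.6139, 77.2090)); // bias results around a point
textSearchRequest.setRadius(5000);    // search radius, in meters
textSearchRequest.setPageSize(5);     // results per page
textSearchRequest.setPageIndex(1);    // page number, starting from 1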

Step 2: Create a SearchResultListener object to listen for the search result. Create the SearchService object to call the textSearch() API and pass the created TextSearchRequest and SearchResultListener objects to the API.

Obtain the TextSearchResponse object using the created SearchResultListener object. You can obtain a Site object from the TextSearchResponse object and then parse it to obtain specific search results.

Java

try {
    SearchService searchService = SearchServiceFactory.create(HomeActivity.this,
            URLEncoder.encode(API_KEY, "utf-8"));
    searchService.textSearch(textSearchRequest, new SearchResultListener<TextSearchResponse>() {
        @Override
        public void onSearchResult(TextSearchResponse textSearchResponse) {
            if (textSearchResponse != null && textSearchResponse.getSites() != null) {
                for (Site site : textSearchResponse.getSites()) {
                    siteId = site.getSiteId();
                    break;
                }
                DetailSearchRequest request = new DetailSearchRequest();
                request.setSiteId(siteId);
                SearchResultListener<DetailSearchResponse> resultListener = new SearchResultListener<DetailSearchResponse>() {
                    @Override
                    public void onSearchResult(DetailSearchResponse result) {
                        Site site;
                        if (result == null || (site = result.getSite()) == null) {
                            return;
                        }
                        site = result.getSite();
                        SpeechData speechData = new SpeechData();
                        speechData.setRequestType(TYPE_SEARCH); // TYPE_SEARCH = 2 indicates that recyclerView is used to inflate the search layout.
                        speechData.setResponse(site.getName());
                        speechData.setSiteResponse(site);
                        mAdapter.add(speechData, recyclerView);
                        Log.i("TAG", String.format("siteId: '%s', name: %s\r\n", site.getSiteId(), site.getName()));
                    }

                    @Override
                    public void onSearchError(SearchStatus status) {
                        Log.i("TAG", "Error : " + status.getErrorCode() + " " + status.getErrorMessage());
                    }
                };
                // Call the place detail search API.
                searchService.detailSearch(request, resultListener);
            } else {
                SpeechData speechData = new SpeechData();
                speechData.setResponse("No result found");
                mAdapter.add(speechData, recyclerView);
            }
        }

        @Override
        public void onSearchError(SearchStatus searchStatus) {
            String val = new Gson().toJson(searchStatus);
            Log.e(TAG, val);
            Toast.makeText(HomeActivity.this, val, Toast.LENGTH_SHORT).show();
        }
    });
} catch (UnsupportedEncodingException e) {
    // Log the encoding failure instead of swallowing it silently.
    Log.e(TAG, "Unsupported encoding for the API key", e);
}

Kotlin

try {
    val searchService = SearchServiceFactory.create(this@HomeActivity,
            URLEncoder.encode(API_KEY, Constants.STR_UTF_8))
    searchService.textSearch(textSearchRequest, object : SearchResultListener<TextSearchResponse?> {
        override fun onSearchResult(textSearchResponse: TextSearchResponse?) {
            if (textSearchResponse != null && textSearchResponse.sites != null) {
                for (site in textSearchResponse.sites) {
                    siteId = site.siteId
                    break
                }
                val request = DetailSearchRequest()
                request.siteId = siteId
                val resultListener: SearchResultListener<DetailSearchResponse?> = object : SearchResultListener<DetailSearchResponse?> {
                    override fun onSearchResult(result: DetailSearchResponse?) {
                        var site: Site
                        if (result == null || result.site.also { site = it } == null) {
                            return
                        }
                        site = result.site
                        val speechData = SpeechData()
                        speechData.requestType = TYPE_SEARCH // TYPE_SEARCH = 2 indicates that recyclerView is used to inflate the search layout.
                        speechData.response = site.name
                        speechData.siteResponse = site
                        mAdapter!!.add(speechData, recyclerView!!)
                        Log.d(TAG, String.format("siteId: '%s', name: %s\r\n", site.siteId, site.name))
                    }

                    override fun onSearchError(status: SearchStatus?) {
                        TEMP_MESSAGE = Constants.STR_ERROR_MSG + status?.errorCode + " " + status?.errorMessage
                        Log.d(TAG, TEMP_MESSAGE)
                    }
                }
                // Call the place detail search API.
                searchService.detailSearch(request, resultListener)
            } else {
                val speechData = SpeechData()
                speechData.response = Constants.STR_NO_RESULT_FOUND
                mAdapter!!.add(speechData, recyclerView!!)
            }
        }

        override fun onSearchError(searchStatus: SearchStatus) {
            val `val` = Gson().toJson(searchStatus)
            Log.d(TAG, `val`)
            Toast.makeText(this@HomeActivity, `val`, Toast.LENGTH_SHORT).show()
        }
    })
} catch (e: UnsupportedEncodingException) {
    ExceptionHandling().PrintExceptionInfo(Constants.EXCEPTION_MSG, e)
}

AR Engine

Add the following dependency in the app-level build.gradle file.

implementation 'com.huawei.hms:arenginesdk:2.15.0.1'

With ASR from ML Kit integrated, the app can recognize speech and convert it to text, which is displayed in the chat list. As the number of voice requests grows, the list becomes longer, and there is no way to scroll up and down without touching the screen. For this scenario, we use HUAWEI AR Engine's hand gesture detection capability to add touch-free scroll-up and scroll-down operations.

Step 1: Check whether the HUAWEI AR Engine app is installed on the current device. If not, redirect the user to HUAWEI AppGallery for installation.

Java

private boolean arEngineAbilityCheck() {
    boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
    if (!isInstallArEngineApk && isRemindInstall) {
        Toast.makeText(this, Constants.STR_AGREE_MSG, Toast.LENGTH_LONG).show();
        finish();
    }
    Log.d(TAG, Constants.STR_IS_INSTALL + isInstallArEngineApk);
    if (!isInstallArEngineApk) {
        startActivity(new Intent(this, ConnectAppMarketActivity.class));
        isRemindInstall = true;
    }
    return AREnginesApk.isAREngineApkReady(this);
}

Kotlin

private fun arEngineAbilityCheck(): Boolean {
    val isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this)
    if (!isInstallArEngineApk && isRemindInstall) {
        Toast.makeText(this, Constants.STR_AGREE_MSG, Toast.LENGTH_LONG).show()
        finish()
    }
    Log.d(TAG, Constants.STR_IS_INSTALL + isInstallArEngineApk)
    if (!isInstallArEngineApk) {
        startActivity(Intent(this, ConnectAppMarketActivity::class.java))
        isRemindInstall = true
    }
    return AREnginesApk.isAREngineApkReady(this)
}
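
AR Engine, like the camera-based ML Kit features above, also needs the runtime CAMERA permission. A minimal check, typically run before creating the ARSession, is sketched below (Java only); the request code is arbitrary.

// Request the runtime CAMERA permission before creating the ARSession.
private static final int REQUEST_CAMERA_PERMISSION = 1001;   // arbitrary request code

private boolean hasCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA_PERMISSION);
        return false;
    }
    return true;
}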

Step 2: Create the ARSession object and configure hand gesture tracking by initializing the ARHandTrackingConfig class. Set the power consumption mode and focus mode using the code below.

Java

mArSession = new ARSession(this);
ARHandTrackingConfig config = new ARHandTrackingConfig(mArSession);
config.setCameraLensFacing(ARConfigBase.CameraLensFacing.FRONT);
config.setPowerMode(ARConfigBase.PowerMode.ULTRA_POWER_SAVING);
mArSession.configure(config);
HandRenderManager mHandRenderManager = new HandRenderManager(this);
mHandRenderManager.setArSession(mArSession);
try {
    mArSession.resume();
} catch (ARCameraNotAvailableException e) {
    Toast.makeText(this, R.string.Camera_Fail, Toast.LENGTH_LONG).show();
    mArSession = null;
    return;
}

Kotlin

mArSession = ARSession(this)
val config = ARHandTrackingConfig(mArSession)
config.cameraLensFacing = ARConfigBase.CameraLensFacing.FRONT
config.powerMode = ARConfigBase.PowerMode.ULTRA_POWER_SAVING
mArSession?.configure(config)
val mHandRenderManager = HandRenderManager(this)
mHandRenderManager.setArSession(mArSession)
try {
    mArSession!!.resume()
} catch (e: ARCameraNotAvailableException) {
    Toast.makeText(this, R.string.Camera_Fail, Toast.LENGTH_LONG).show()
    mArSession = null
    return
}

Step 3: Handle frames to analyze the hand gestures. Use the code snippet below to obtain the collection of ARHand objects from the frames captured by AR Engine.

Java

ARFrame arFrame = mSession.update();
ARCamera arCamera = arFrame.getCamera();
// The size of the projection matrix is 4x4.
float[] projectionMatrix = new float[PROJECTION_MATRIX_MAX];
// Obtain the projection matrix through ARCamera.
arCamera.getProjectionMatrix(projectionMatrix, PROJECTION_MATRIX_OFFSET, PROJECTION_MATRIX_NEAR, PROJECTION_MATRIX_FAR);
mTextureDisplay.onDrawFrame(arFrame);
Collection<ARHand> hands = mSession.getAllTrackables(ARHand.class);

Kotlin

val arFrame = mSession!!.update()
val arCamera = arFrame.camera
// The size of the projection matrix is 4x4.
val projectionMatrix = FloatArray(PROJECTION_MATRIX_MAX)
// Obtain the projection matrix through ARCamera.
arCamera.getProjectionMatrix(projectionMatrix, PROJECTION_MATRIX_OFFSET, PROJECTION_MATRIX_NEAR, PROJECTION_MATRIX_FAR)
mTextureDisplay.onDrawFrame(arFrame)
val hands = mSession!!.getAllTrackables(ARHand::class.java)

Step 4: After a hand is captured, ARSession continuously tracks the hand gestures. To obtain the gesture type, call the ARHand.getGestureType() method and compare the returned value with the gestures required to perform the scrolling operations.

AR Engine supports a set of preset hand gestures. We can use Gesture 0 and Gesture 1 to handle the scenario mentioned above.

Java

for (ARHand hand : hands) {
    if (hand.getGestureType() == Constants.INIT_ONE) {
        HomeActivity.getInstance().scrollUpMethod();
    } else if (hand.getGestureType() == Constants.INIT_ZERO) {
        HomeActivity.getInstance().scrollDownMethod();
    }
}

Kotlin

for (hand in hands) {
    if (hand.gestureType == Constants.INIT_ONE) {
        HomeActivity.getInstance().scrollUpMethod()
    } else if (hand.gestureType == Constants.INIT_ZERO) {
        HomeActivity.getInstance().scrollDownMethod()
    }
}
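
scrollUpMethod() and scrollDownMethod() belong to HomeActivity rather than AR Engine. A minimal sketch of how they might be implemented is shown below (Java only), assuming the chat list is the RecyclerView used earlier; the scroll distance of 300 pixels is an arbitrary value.

// Sketch: scroll the chat list in response to the detected gesture.
private static final int SCROLL_STEP = 300;   // arbitrary scroll distance in pixels

public void scrollUpMethod() {
    runOnUiThread(() -> recyclerView.smoothScrollBy(0, -SCROLL_STEP));
}

public void scrollDownMethod() {
    runOnUiThread(() -> recyclerView.smoothScrollBy(0, SCROLL_STEP));
}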

Well done. You have successfully built the AirTouch app and learned how to:

For more information, please click the following links:

To download the source code, please click the button below.
Download
