An audio playback app provides audio playback, collection, and management features (including fast-forward, rewind, pause, play, and volume control). By walking through this codelab, you'll learn how to develop a HarmonyOS app that implements audio playback, audio collection, and system volume control.

What's the Purpose of This Codelab?

You'll be able to build an app to provide the following features:

  1. Playing local audio resources or audio resources obtained from the Internet
  2. Using the audio collector to collect onsite audio streams, save the streams to the local host, and play the recording by reading the streams
  3. Controlling the notification volume, media volume, and call volume

What You Will Learn

After finishing this codelab, you'll understand how to:

  1. Play local audio resources and audio resources obtained from the Internet
  2. Collect audio streams, save them locally, and play back recordings
  3. Control the notification volume, media volume, and call volume

This codelab illustrates only the core code. You can download the complete code from the link in References. The following figure shows the code structure of the entire project.

Service Logic for Audio Playback

Step 1 - Create a Player object and set the player status callback, in which HmPlayerCallback implements Player.IPlayerCallback.

Player player = new Player(context);
player.setPlayerCallback(new HmPlayerCallback());

Step 2 - Prepare media resources and create the playback source using the encapsulated SourceFactory based on the specified playback path.

private void initSourceType(Context context, String path) throws IOException {
    if (context == null || path == null) {
        return;
    }
    if (path.substring(0, NET_HTTP_MATCH.length()).equalsIgnoreCase(NET_HTTP_MATCH)
            || path.substring(0, NET_RTMP_MATCH.length()).equalsIgnoreCase(NET_RTMP_MATCH)
            || path.substring(0, NET_RTSP_MATCH.length()).equalsIgnoreCase(NET_RTSP_MATCH)) {
        mPlayerSource = new Source(path);
    } else if (path.startsWith(STORAGE_MATCH)) {
        File file = new File(path);
        if (file.exists()) {
            FileInputStream fileInputStream = new FileInputStream(file);
            FileDescriptor fileDescriptor = fileInputStream.getFD();
            mPlayerSource = new Source(fileDescriptor);
        }
    } else {
        RawFileDescriptor fd = context.getResourceManager().getRawFileEntry(path).openRawFileDescriptor();
        mPlayerSource = new Source(fd.getFileDescriptor(), fd.getStartPosition(), fd.getFileSize());
    }
}
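The method above chooses a Source constructor based on where the path points: a network URI scheme, device storage, or a raw resource bundled with the app. That classification logic can be sketched as a standalone helper; the names here (SourceKindDemo, classify, SourceKind) are illustrative, not part of the HarmonyOS API.

```java
public final class SourceKindDemo {
    enum SourceKind { NETWORK, LOCAL_FILE, RAW_ASSET }

    static SourceKind classify(String path) {
        String lower = path.toLowerCase();
        // Network streams are recognized by their URI scheme.
        if (lower.startsWith("http") || lower.startsWith("rtmp://") || lower.startsWith("rtsp://")) {
            return SourceKind.NETWORK;
        }
        // Files under device storage are opened through a FileDescriptor.
        if (path.startsWith("/storage")) {
            return SourceKind.LOCAL_FILE;
        }
        // Everything else is treated as a raw resource bundled with the app.
        return SourceKind.RAW_ASSET;
    }
}
```

The "/storage" prefix stands in for the codelab's STORAGE_MATCH constant, whose exact value is not shown in this excerpt.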

Step 3 - Start the playback.

private void start() {
    if (mPlayer != null) {
        mBuilder.mContext.getGlobalTaskDispatcher(TaskPriority.DEFAULT).asyncDispatch(() -> {
            if (surface != null) {
                mPlayer.setVideoSurface(surface);
            } else {
                LogUtil.error(TAG, "The surface has not been initialized.");
            }
            mPlayer.prepare();
            if (mBuilder.startMillisecond > 0) {
                int microsecond = mBuilder.startMillisecond * MICRO_MILLI_RATE;
                mPlayer.rewindTo(microsecond);
            }
            mPlayer.play();
        });
    }
}
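Note that rewindTo takes a position in microseconds while the builder stores milliseconds, which is why startMillisecond is scaled by MICRO_MILLI_RATE. Assuming MICRO_MILLI_RATE is 1000 (as its name suggests), the conversion is simply:

```java
public final class TimeUnitDemo {
    // 1 millisecond = 1000 microseconds; matches the MICRO_MILLI_RATE name above.
    static final int MICRO_MILLI_RATE = 1000;

    static int millisToMicros(int millis) {
        return millis * MICRO_MILLI_RATE;
    }
}
```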

Step 4 - Implement the player status callback. The progress bar registers the callback set in Step 1 through HmPlayer.addPlayerStateCallback so that the UI is updated as the player state changes.

@Override
public void onMediaTimeIncontinuity(Player.MediaTimeInfo mediaTimeInfo) {
    LogUtil.info(TAG, "onMediaTimeIncontinuity is called");
    for (Player.StreamInfo streamInfo : mPlayer.getStreamInfo()) {
        int streamType = streamInfo.getStreamType();
        if (streamType == Player.StreamInfo.MEDIA_STREAM_TYPE_AUDIO && mState == PlayerState.PREPARED) {
            for (StateChangeListener callback : stateChangeCallbacks) {
                mState = PlayerState.PLAY;
                callback.stateCallback(PlayerState.PLAY);
            }
            if (mBuilder.isPause) {
                pause();
            }
        }
    }
}

private StateChangeListener mStateChangeListener = new StateChangeListener() {
    @Override
    public void stateCallback(PlayerState state) {
        mContext.getUITaskDispatcher().asyncDispatch(() -> {
            switch (state) {
                case PREPARING:
                    mPlayToogle.setClickable(false);
                    mProgressBar.setEnabled(false);
                    mProgressBar.setProgressValue(0);
                    break;
                case PREPARED:
                    mProgressBar.setMaxValue(mPlayer.getDuration());
                    mTotleTime.setText(DateUtils.msToString(mPlayer.getDuration()));
                    break;
                case PLAY:
                    showController(false);
                    mPlayToogle.setPixelMap(ResourceTable.Media_ic_music_stop);
                    mPlayToogle.setClickable(true);
                    mProgressBar.setEnabled(true);
                    break;
                case PAUSE:
                    mPlayToogle.setPixelMap(ResourceTable.Media_ic_music_play);
                    break;
                case STOP:
                case COMPLETE:
                    mPlayToogle.setPixelMap(ResourceTable.Media_ic_update);
                    mProgressBar.setEnabled(false);
                    break;
                default:
                    break;
            }
        });
    }
};
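DateUtils.msToString above is part of the codelab's utility code, which is not shown in this excerpt. A minimal equivalent that formats a millisecond duration as mm:ss might look like this (the class and method names here are hypothetical):

```java
public final class MsFormatDemo {
    // Converts a duration in milliseconds to an mm:ss display string.
    static String msToString(long millis) {
        long totalSeconds = millis / 1000;
        long minutes = totalSeconds / 60;
        long seconds = totalSeconds % 60;
        return String.format("%02d:%02d", minutes, seconds);
    }
}
```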

----End

How to Use

Common service processes are encapsulated in HmPlayer so that you can use the player more quickly and conveniently.

How to Use

Step 1 - Create a SoundPlayer object.

SoundPlayer soundPlayer = new SoundPlayer();

Step 2 - Initialize the system sound. The first parameter indicates the tone type, and the second parameter indicates the tone duration.

soundPlayer.createSound(ToneDescriptor.ToneType.DTMF_0, 500);

Step 3 - Play the system sound.

soundPlayer.play();

Step 4 - Release resources.

if (soundPlayer != null) {
    soundPlayer.release();
    soundPlayer = null;
}
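The DTMF_0 tone requested in Step 2 is, per the DTMF standard, the sum of two sine waves at 941 Hz and 1336 Hz. SoundPlayer synthesizes it internally; a minimal sketch of such a dual-tone generator (purely illustrative, not the HarmonyOS implementation) could be:

```java
public final class DtmfDemo {
    // Generates samples for DTMF digit 0 (941 Hz + 1336 Hz) at the given sample rate.
    static double[] dtmfZero(int sampleRate, int durationMs) {
        int samples = sampleRate * durationMs / 1000;
        double[] out = new double[samples];
        for (int i = 0; i < samples; i++) {
            double t = (double) i / sampleRate;
            // Weight each component tone by 0.5 so the sum stays within [-1, 1].
            out[i] = 0.5 * Math.sin(2 * Math.PI * 941 * t)
                   + 0.5 * Math.sin(2 * Math.PI * 1336 * t);
        }
        return out;
    }
}
```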

----End

The AudioRecorder and AudioRender classes implement recording and playback, respectively. They provide APIs for starting recording, stopping recording, playing recordings, and releasing resources, and they support both real-time playback and playback from a saved recording file. In the app, press and hold the recording button to start recording, release the button to stop recording and save it locally, or tap the play button to play back a recording.

Service Logic for Recording

Step 1 - Create an AudioStreamInfo object.
Step 2 - Create an AudioCapturer object to call the underlying recording feature. Both objects are created in initRecord().

private void initRecord() {
    if (audioStreamInfo == null) {
        audioStreamInfo = new AudioStreamInfo.Builder()
            .encodingFormat(builder.encodingFormat)
            .channelMask(builder.channelMask)
            .sampleRate(builder.inputSampleRate)
            .build();
        AudioCapturerInfo audioCapturerInfo = new AudioCapturerInfo.Builder()
            .audioStreamInfo(audioStreamInfo)
            .audioInputSource(builder.inputSource)
            .build();
        audioCapturer = new AudioCapturer(audioCapturerInfo);
    }
}

Step 3 - Create an AudioRecorder object and set the recording storage path.

savefilePath = getExternalFilesDir(Environment.DIRECTORY_MUSIC) + File.separator + "AudioTest.mp3";
audioRecorder = new AudioRecorder.Builder(this).setSaveFilePath(savefilePath).create();

Step 4 - Use the AudioRecorder object to start recording and write the recording to a file.

private void beginRecord() {
    if (!audioRecorder.isRecording()) {
        recordTag = isRealTimePlay ? 0 : 1;
        audioRecorder.record();
    }
}

----End

Service Logic for Recording Playback

Step 1 - Set the format for playing audio streams and create an AudioRender object.

private void initRender() {
    AudioStreamInfo asi = new AudioStreamInfo.Builder()
        .encodingFormat(builder.encodingFormat)
        .channelMask(builder.channelMask)
        .sampleRate(builder.inputSampleRate)
        .audioStreamFlag(builder.streamFlag)
        .streamUsage(builder.streamUsage)
        .build();
    audioRenderInfo = new AudioRendererInfo.Builder()
        .audioStreamInfo(asi)
        .audioStreamOutputFlag(builder.streamOutputFlag)
        .bufferSizeInBytes(bufferSize)
        // false: audio streams are transmitted to the buffer segment by segment for playback.
        // true: audio streams are transmitted to the HAL in one shot for playback.
        .isOffload(builder.isOneOffLoad)
        .build();
    audioRender = new AudioRenderer(audioRenderInfo, AudioRenderer.PlayMode.MODE_STREAM);
}
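The bufferSize passed to bufferSizeInBytes above is computed elsewhere in the codelab. A common way to size a PCM buffer (a hypothetical helper, not the HarmonyOS API) is duration times sample rate times channel count times bytes per sample:

```java
public final class BufferSizeDemo {
    // Bytes needed to hold `millis` ms of interleaved PCM audio.
    static int pcmBufferSize(int sampleRate, int channels, int bytesPerSample, int millis) {
        return sampleRate * channels * bytesPerSample * millis / 1000;
    }
}
```

For example, 20 ms of 44.1 kHz stereo 16-bit audio needs 44100 x 2 x 2 x 20 / 1000 bytes.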

Step 2 - Set a listener to observe the end of the audio stream playback.

audioRender.setFrameIntervalObserver(() -> {
    if (audioRender.getAudioTime().getFramePosition() != 0) {
        if (audioPlayListener != null) {
            audioPlayListener.onComplete();
        }
        release();
    }
}, MediaConst.READ_RENDER_INTERVAL, audioRenderHandler);

Step 3 - Start recording playback and write recording streams to the AudioRender object.

public void play(byte[] bytes, int length) {
    if (audioRenderInfo == null) {
        initRender();
    }
    start();
    audioRenderHandler.postTask(() -> {
        byte[] datas = new byte[length];
        System.arraycopy(bytes, 0, datas, 0, datas.length);
        audioRender.write(datas, 0, datas.length);
    });
}
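play() above copies the incoming bytes before handing them to the render thread, so the caller can safely reuse its buffer. The same defensive-copy idea, extended to fixed-size chunks for segment-by-segment streaming writes, can be sketched as follows (the helper and its name are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public final class ChunkDemo {
    // Splits `data` into freshly allocated chunks of at most `chunkSize` bytes.
    static List<byte[]> toChunks(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, data.length);
            // copyOfRange allocates a new array, like the System.arraycopy above.
            chunks.add(Arrays.copyOfRange(data, offset, end));
        }
        return chunks;
    }
}
```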

----End

Move the volume slider left or right to adjust the notification volume, media volume, and call volume.

How to Use

Step 1 - Create an AudioManager object.

AudioManager audioManager = new AudioManager();

Step 2 - Set the volume based on the value returned by Slider. STREAM_DTMF indicates the notification volume, STREAM_MUSIC indicates the media volume, and STREAM_VOICE_CALL indicates the call volume.

@Override
public void onTouchEnd(Slider slider) {
    int progress = slider.getProgress();
    switch (slider.getId()) {
        case ResourceTable.Id_sound_volume_bar:
            try {
                if (audioManager.setVolume(AudioManager.AudioVolumeType.STREAM_DTMF, progress)) {
                    dtmfVolume = progress;
                } else {
                    audioManager.setVolume(AudioManager.AudioVolumeType.STREAM_DTMF, dtmfVolume);
                }
            } catch (SecurityException e) {
                LogUtil.error(TAG, e.getMessage());
            }
            break;
        case ResourceTable.Id_sound_volume_bar2:
            try {
                if (audioManager.setVolume(AudioManager.AudioVolumeType.STREAM_MUSIC, progress)) {
                    musicVolume = progress;
                } else {
                    audioManager.setVolume(AudioManager.AudioVolumeType.STREAM_MUSIC, musicVolume);
                }
            } catch (SecurityException e) {
                LogUtil.error(TAG, e.getMessage());
            }
            break;
        case ResourceTable.Id_sound_volume_bar3:
            try {
                if (audioManager.setVolume(AudioManager.AudioVolumeType.STREAM_VOICE_CALL, progress)) {
                    callVolume = progress;
                } else {
                    audioManager.setVolume(AudioManager.AudioVolumeType.STREAM_VOICE_CALL, callVolume);
                }
            } catch (SecurityException e) {
                LogUtil.error(TAG, e.getMessage());
            }
            break;
        default:
            break;
    }
}
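Each branch above keeps the last successfully applied volume and restores it when setVolume returns false. That fallback logic, isolated into a testable sketch (applyVolume and the IntPredicate-based setter are hypothetical stand-ins for the real API call):

```java
import java.util.function.IntPredicate;

public final class VolumeDemo {
    // Returns the volume that ends up applied: the requested one on success,
    // or the previous known-good one when the setter rejects the request.
    static int applyVolume(IntPredicate setVolume, int previous, int requested) {
        if (setVolume.test(requested)) {
            return requested;
        }
        setVolume.test(previous); // roll back to the last known-good value
        return previous;
    }
}
```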

----End

Well done. You have completed this codelab and learned how to:

  1. Play local audio resources and audio resources obtained from the Internet
  2. Collect audio streams, save them locally, and play back recordings
  3. Control the notification volume, media volume, and call volume

You can download the complete code of this project from AudioDemoCodelab.
