AudioTrack Source Code Walkthrough (2)

2022-10-25 16:30:10

Introduction

This installment covers AudioTrack's operations, such as how the playback thread runs, and the play, write, pause and related flows (stop, flush).

Source Code Walkthrough

Running the Playback Thread

Let's begin with how the playback thread is brought up, taking PlaybackThread as the example. Since PlaybackThread lives behind a smart pointer, onFirstRef() is invoked right after it is created, so the whole flow starts from this function:

void AudioFlinger::PlaybackThread::onFirstRef()
{
    if (mOutput == nullptr || mOutput->stream == nullptr) {
        ALOGE("The stream is not open yet"); // This should not happen.
    } else {
        // setEventCallback will need a strong pointer as a parameter. Calling it
        // here instead of constructor of PlaybackThread so that the onFirstRef
        // callback would not be made on an incompletely constructed object.
        if (mOutput->stream->setEventCallback(this) != OK) {
            ALOGE("Failed to add event callback");
        }
    }
    run(mThreadName, ANDROID_PRIORITY_URGENT_AUDIO);
}

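Why does the flow begin at onFirstRef() rather than in the constructor? PlaybackThread is reference-counted, and onFirstRef() fires when the first strong pointer to the object is taken, i.e. strictly after construction has completed. Below is a minimal standalone model of that hook; LiteRefBase, make_sp and FakePlaybackThread are invented stand-ins for illustration, not the AOSP classes:

#include <cstdio>
#include <memory>

// Simplified model of a RefBase-style hook: onFirstRef() runs when the first
// strong reference is taken, so 'this' is already fully constructed there.
struct LiteRefBase {
    virtual ~LiteRefBase() = default;
    virtual void onFirstRef() {}
};

template <typename T>
std::shared_ptr<T> make_sp() {
    auto obj = std::shared_ptr<T>(new T());
    obj->onFirstRef();  // fired exactly once, after construction finished
    return obj;
}

struct FakePlaybackThread : LiteRefBase {
    void onFirstRef() override {
        // AOSP starts the thread here instead of in the constructor so that
        // callbacks never observe a half-constructed object.
        std::printf("onFirstRef: safe to run() now\n");
    }
};

int main() {
    auto t = make_sp<FakePlaybackThread>();  // prints once
}

This is also why setEventCallback(this) is deferred to onFirstRef(): handing out a strong pointer from inside a constructor would be unsafe.
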
At the end it calls run(), which drives threadLoop(). That method is fairly long, so let's walk through the key parts in sections:

bool AudioFlinger::PlaybackThread::threadLoop() {
...
    // Process config events
    processConfigEvents_l();

// Drop the Tracks this PlaybackThread manages that should not be played, e.g. finished or paused ones
    mMixerStatus = prepareTracks_l(&tracksToRemove);

// Take a local copy of the Tracks that need to be played
   // Acquire a local copy of active tracks with lock (release w/o lock).
   //
   // Control methods on the track acquire the ThreadBase lock (e.g. start()
   // stop(), pause(), etc.), but the threadLoop is entitled to call audio
   // data / buffer methods on tracks from activeTracks without the ThreadBase lock.
   activeTracks.insert(activeTracks.end(), mActiveTracks.begin(), mActiveTracks.end());

  // mix
  if (mMixerStatus == MIXER_TRACKS_READY) {
  // threadLoop_mix() sets mCurrentWriteLength
        threadLoop_mix();
  }
// Effects processing
         if (mSleepTimeUs == 0 && mType != OFFLOAD) {
                for (size_t i = 0; i < effectChains.size(); i++) {
                    effectChains[i]->process_l();
                    // TODO: Write haptic data directly to sink buffer when mixing.
                    if (activeHapticSessionId != AUDIO_SESSION_NONE
                            && activeHapticSessionId == effectChains[i]->sessionId()) {
                        // Haptic data is active in this case, copy it directly from
                        // in buffer to out buffer.
                        const size_t audioBufferSize = mNormalFrameCount
                                * audio_bytes_per_frame(mChannelCount, EFFECT_BUFFER_FORMAT);
                        memcpy_by_audio_format(
                                (uint8_t*)effectChains[i]->outBuffer() + audioBufferSize,
                                EFFECT_BUFFER_FORMAT,
                                (const uint8_t*)effectChains[i]->inBuffer() + audioBufferSize,
                                EFFECT_BUFFER_FORMAT, mNormalFrameCount * mHapticChannelCount);
                    }
                }
            }

// Write the audio data
ret = threadLoop_write();

// End-of-cycle cleanup
 threadLoop_removeTracks(tracksToRemove);
        tracksToRemove.clear();

        // FIXME I don't understand the need for this here;
        //       it was in the original code but maybe the
        //       assignment in saveOutputTracks() makes this unnecessary?
        clearOutputTracks();

        // Effect chains will be actually deleted here if they were removed from
        // mEffectChains list during mixing or effects processing
        effectChains.clear();
...
}

A few points follow from this:

  • The PlaybackThread pulls the audio that needs playing from the Tracks it manages and mixes it. It follows that pausing or ending a stream only requires keeping the corresponding Track out of the mix (see the sketch below);
  • Each PlaybackThread owns one AudioStreamOut, which writes the audio data to the HAL; the HAL then hands it to the driver.
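
Condensed, one cycle of the playback thread boils down to the skeleton below. This is a sketch with invented names (State, Track, mixerCycle); the real loop also handles config events, effects, timestamps and sleep logic:

#include <algorithm>
#include <cstdint>
#include <vector>

// One simplified mixer cycle: skip tracks that should not sound, additively
// mix the rest into a single sink buffer, then hand the buffer to the output.
enum class State { Active, Paused, Stopped };

struct Track { State state; std::vector<int16_t> pcm; };

void mixerCycle(std::vector<Track>& tracks, std::vector<int32_t>& sink) {
    std::fill(sink.begin(), sink.end(), 0);
    for (auto& t : tracks) {
        if (t.state != State::Active) continue;  // pause/stop == leave out of the mix
        for (size_t i = 0; i < sink.size() && i < t.pcm.size(); ++i) {
            sink[i] += t.pcm[i];                 // naive additive mix
        }
    }
    // the threadLoop_write() equivalent would push 'sink' to the HAL here
}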

Next, let's see how the audio data is written to the HAL; the implementation is threadLoop_write():

// shared by MIXER and DIRECT, overridden by DUPLICATING
ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
    mInWrite = true;
    ssize_t bytesWritten;
    const size_t offset = mCurrentWriteLength - mBytesRemaining;
    if (mNormalSink != 0) {

        const size_t count = mBytesRemaining / mFrameSize;

        ATRACE_BEGIN("write");
        // update the setpoint when AudioFlinger::mScreenState changes
        uint32_t screenState = AudioFlinger::mScreenState;
        if (screenState != mScreenState) {
            mScreenState = screenState;
            MonoPipe *pipe = (MonoPipe *)mPipeSink.get();
            if (pipe != NULL) {
                pipe->setAvgFrames((mScreenState & 1) ?
                        (pipe->maxFrames() * 7) / 8 : mNormalFrameCount * 2);
            }
        }
        ssize_t framesWritten = mNormalSink->write((char *)mSinkBuffer + offset, count);  // write into the normal sink
        ATRACE_END();
        if (framesWritten > 0) {
            bytesWritten = framesWritten * mFrameSize;
#ifdef TEE_SINK
            mTee.write((char *)mSinkBuffer + offset, framesWritten);  // write into the tee sink
#endif
        } else {
            bytesWritten = framesWritten;
        }
    // otherwise use the HAL / AudioStreamOut directly
    } else {
        // Direct output and offload threads

        if (mUseAsyncWrite) {
            ALOGW_IF(mWriteAckSequence & 1, "threadLoop_write(): out of sequence write request");
            mWriteAckSequence += 2;
            mWriteAckSequence |= 1;
            ALOG_ASSERT(mCallbackThread != 0);
            mCallbackThread->setWriteBlocked(mWriteAckSequence);
        }
        ATRACE_BEGIN("write");
        // FIXME We should have an implementation of timestamps for direct output threads.
        // They are used e.g for multichannel PCM playback over HDMI.
        bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);  // write to the HAL
        ATRACE_END();

        if (mUseAsyncWrite &&
                ((bytesWritten < 0) || (bytesWritten == (ssize_t)mBytesRemaining))) {
            // do not wait for async callback in case of error or full write
            mWriteAckSequence &= ~1;
            ALOG_ASSERT(mCallbackThread != 0);
            mCallbackThread->setWriteBlocked(mWriteAckSequence);
        }
    }

    mNumWrites++;
    mInWrite = false;
    if (mStandby) {
        mThreadMetrics.logBeginInterval();
        mStandby = false;
    }
    return bytesWritten;
}

This calls AudioStreamOut::write(); let's look inside:

ssize_t AudioStreamOut::write(const void *buffer, size_t numBytes)
{
    size_t bytesWritten;
    status_t result = stream->write(buffer, numBytes, &bytesWritten);
    if (result == OK && bytesWritten > 0 && mHalFrameSize > 0) {
        mFramesWritten += bytesWritten / mHalFrameSize;
    }
    return result == OK ? bytesWritten : result;
}

As you would expect, the stream here is the link to the HAL; see where it is constructed:

status_t AudioStreamOut::open(
        audio_io_handle_t handle,
        audio_devices_t deviceType,
        struct audio_config *config,
        const char *address)
{
    sp<StreamOutHalInterface> outStream;

    audio_output_flags_t customFlags = (config->format == AUDIO_FORMAT_IEC61937)
                ? (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_IEC958_NONAUDIO)
                : flags;

    int status = hwDev()->openOutputStream(
            handle,
            deviceType,
            customFlags,
            config,
            address,
            &outStream);
    // Some HALs may not recognize AUDIO_FORMAT_IEC61937. But if we declare
    // it as PCM then it will probably work.
    if (status != NO_ERROR && config->format == AUDIO_FORMAT_IEC61937) {
        struct audio_config customConfig = *config;
        customConfig.format = AUDIO_FORMAT_PCM_16_BIT;

        status = hwDev()->openOutputStream(
                handle,
                deviceType,
                customFlags,
                &customConfig,
                address,
                &outStream);
        ALOGV("AudioStreamOut::open(), treat IEC61937 as PCM, status = %d", status);
    }

    if (status == NO_ERROR) {
        stream = outStream;
        mHalFormatHasProportionalFrames = audio_has_proportional_frames(config->format);
        status = stream->getFrameSize(&mHalFrameSize);
        LOG_ALWAYS_FATAL_IF(status != OK, "Error retrieving frame size from HAL: %d", status);
        LOG_ALWAYS_FATAL_IF(mHalFrameSize <= 0, "Error frame size was %zu but must be greater than"
                " zero", mHalFrameSize);

    }

    return status;
}

At this point we hold the HAL's stream interface.
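
Note how AudioStreamOut::write() converts the HAL's byte count back into frames using mHalFrameSize, which was queried from the HAL at open() time. Frame size is simply channel count times bytes per sample; a small worked example (the values are assumptions for illustration, not read from a real HAL):

#include <cassert>
#include <cstddef>

// For stereo 16-bit PCM: 2 channels * 2 bytes = 4 bytes per frame, so
// mFramesWritten advances by bytesWritten / 4 after each successful write.
int main() {
    const size_t channels = 2, bytesPerSample = 2;
    const size_t halFrameSize = channels * bytesPerSample;  // 4 bytes/frame
    size_t framesWritten = 0;
    const size_t bytesWritten = 4096;                       // one write() result
    framesWritten += bytesWritten / halFrameSize;           // 1024 frames
    assert(framesWritten == 1024);
}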

The play() Implementation

    public void play()
    throws IllegalStateException {
        if (mState != STATE_INITIALIZED) {
            throw new IllegalStateException("play() called on uninitialized AudioTrack.");
        }
        //FIXME use lambda to pass startImpl to superclass
        final int delay = getStartDelayMs();
        if (delay == 0) {
            startImpl();
        } else {
            new Thread() {
                public void run() {
                    try {
                        Thread.sleep(delay);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    baseSetStartDelayMs(0);
                    try {
                        startImpl();
                    } catch (IllegalStateException e) {
                        // fail silently for a state exception when it is happening after
                        // a delayed start, as the player state could have changed between the
                        // call to start() and the execution of startImpl()
                    }
                }
            }.start();
        }
    }

Essentially it just calls startImpl():

    private void startImpl() {
        synchronized(mPlayStateLock) {
            baseStart();
            native_start();
            if (mPlayState == PLAYSTATE_PAUSED_STOPPING) {
                mPlayState = PLAYSTATE_STOPPING;
            } else {
                mPlayState = PLAYSTATE_PLAYING;
                mOffloadEosPending = false;
            }
        }
    }

So a single native call goes down, and we can expect the native implementation to end up at the Track inside AudioFlinger's playback thread. Let's follow the code with that in mind, starting at the JNI layer:

static void
android_media_AudioTrack_start(JNIEnv *env, jobject thiz)
{
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    if (lpTrack == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for start()");
        return;
    }

    lpTrack->start();
}

Then into the native AudioTrack:

status_t AudioTrack::start()
{
...
    if (!(flags & CBLK_INVALID)) {
        status = mAudioTrack->start();  // start the server-side Track
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    if (flags & CBLK_INVALID) {
        status = restoreTrack_l("start");
    }

    // resume or pause the callback thread as needed.
    sp<AudioTrackThread> t = mAudioTrackThread;
    if (status == NO_ERROR) {
        if (t != 0) {
            if (previousState == STATE_STOPPING) {
                mProxy->interrupt();
            } else {
                t->resume(); // resume the callback thread
            }
        } else {
            mPreviousPriority = getpriority(PRIO_PROCESS, 0);
            get_sched_policy(0, &mPreviousSchedulingGroup);
            androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
        }

        // Start our local VolumeHandler for restoration purposes.
        mVolumeHandler->setStarted();
    } else {
        ALOGE("%s(%d): status %d", __func__, mPortId, status);
        mState = previousState;
        if (t != 0) {
            if (previousState != STATE_STOPPING) {
                t->pause();
            }
        } else {
            setpriority(PRIO_PROCESS, 0, mPreviousPriority);
            set_sched_policy(0, mPreviousSchedulingGroup);
        }
    }

    return status;
}

First, the Track's start():

status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    audio_session_t triggerSession __unused)
{
    status_t status = NO_ERROR;
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        if (isOffloaded()) {
            Mutex::Autolock _laf(thread->mAudioFlinger->mLock);
            Mutex::Autolock _lth(thread->mLock);
            sp<EffectChain> ec = thread->getEffectChain_l(mSessionId);
            if (thread->mAudioFlinger->isNonOffloadableGlobalEffectEnabled_l() ||
                    (ec != 0 && ec->isNonOffloadableEnabled())) {
                invalidate();
                return PERMISSION_DENIED;
            }
        }
        Mutex::Autolock _lth(thread->mLock);
        track_state state = mState;

        // clear mPauseHwPending because of pause (and possibly flush) during underrun.
        mPauseHwPending = false;
        if (state == PAUSED || state == PAUSING) {
            if (mResumeToStopping) {
                // happened we need to resume to STOPPING_1
                mState = TrackBase::STOPPING_1;
                ALOGV("%s(%d): PAUSED => STOPPING_1 on thread %d",
                        __func__, mId, (int)mThreadIoHandle);
            } else {
                mState = TrackBase::RESUMING;
                ALOGV("%s(%d): PAUSED => RESUMING on thread %d",
                        __func__,  mId, (int)mThreadIoHandle);
            }
        } else {
            mState = TrackBase::ACTIVE;   // update the state flag
            ALOGV("%s(%d): ? => ACTIVE on thread %d",
                    __func__, mId, (int)mThreadIoHandle);
        }

        // states to reset position info for non-offloaded/direct tracks
        if (!isOffloaded() && !isDirect()
                && (state == IDLE || state == STOPPED || state == FLUSHED)) {
            mFrameMap.reset(); 
        }
        PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
        if (isFastTrack()) {
            // refresh fast track underruns on start because that field is never cleared
            // by the fast mixer; furthermore, the same track can be recycled, i.e. start
            // after stop.
            mObservedUnderruns = playbackThread->getFastTrackUnderruns(mFastIndex);
        }
        status = playbackThread->addTrack_l(this);  // add the Track so threadLoop can pick it up
        if (status == INVALID_OPERATION || status == PERMISSION_DENIED) {
            triggerEvents(AudioSystem::SYNC_EVENT_PRESENTATION_COMPLETE);
            //  restore previous state if start was rejected by policy manager
            if (status == PERMISSION_DENIED) {
                mState = state;
            }
        }

        // Audio timing metrics are computed a few mix cycles after starting.
        {
            mLogStartCountdown = LOG_START_COUNTDOWN;
            mLogStartTimeNs = systemTime();
            mLogStartFrames = mAudioTrackServerProxy->getTimestamp()
                    .mPosition[ExtendedTimestamp::LOCATION_KERNEL];
            mLogLatencyMs = 0.;
        }

        if (status == NO_ERROR || status == ALREADY_EXISTS) {
            // for streaming tracks, remove the buffer read stop limit.
            mAudioTrackServerProxy->start(); // reset the shared-memory read position marker
        }

        // track was already in the active list, not a problem
        if (status == ALREADY_EXISTS) {
            status = NO_ERROR;
        } else {
            // Acknowledge any pending flush(), so that subsequent new data isn't discarded.
            // It is usually unsafe to access the server proxy from a binder thread.
            // But in this case we know the mixer thread (whether normal mixer or fast mixer)
            // isn't looking at this track yet:  we still hold the normal mixer thread lock,
            // and for fast tracks the track is not yet in the fast mixer thread's active set.
            // For static tracks, this is used to acknowledge change in position or loop.
            ServerProxy::Buffer buffer;
            buffer.mFrameCount = 1;
            (void) mAudioTrackServerProxy->obtainBuffer(&buffer, true /*ackFlush*/);
        }
    } else {
        status = BAD_VALUE;
    }
    if (status == NO_ERROR) {
        forEachTeePatchTrack([](auto patchTrack) { patchTrack->start(); });
    }
    return status;
}

Next, starting the callback thread. Once it runs, in configurations that fetch data via callbacks, the native side asks Java for data and writes it into the shared memory:

bool AudioTrack::AudioTrackThread::threadLoop()
{
    {
        AutoMutex _l(mMyLock);
        if (mPaused) {
            // TODO check return value and handle or log
            mMyCond.wait(mMyLock);
            // caller will check for exitPending()
            return true;
        }
        if (mIgnoreNextPausedInt) {
            mIgnoreNextPausedInt = false;
            mPausedInt = false;
        }
        if (mPausedInt) {
            // TODO use futex instead of condition, for event flag "or"
            if (mPausedNs > 0) {
                // TODO check return value and handle or log
                (void) mMyCond.waitRelative(mMyLock, mPausedNs);
            } else {
                // TODO check return value and handle or log
                mMyCond.wait(mMyLock);
            }
            mPausedInt = false;
            return true;
        }
    }
    if (exitPending()) {
        return false;
    }
    nsecs_t ns = mReceiver.processAudioBuffer();  // drive the callbacks
    switch (ns) {
    case 0:
        return true;
    case NS_INACTIVE:
        pauseInternal();
        return true;
    case NS_NEVER:
        return false;
    case NS_WHENEVER:
        // Event driven: call wake() when callback notifications conditions change.
        ns = INT64_MAX;
        FALLTHROUGH_INTENDED;
    default:
        LOG_ALWAYS_FATAL_IF(ns < 0, "%s(%d): processAudioBuffer() returned %lld",
                __func__, mReceiver.mPortId, (long long)ns);
        pauseInternal(ns);
        return true;
    }
}

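The pause/resume plumbing in this loop is a standard condition-variable idiom: the thread blocks while paused, resume() wakes it, and a timed wait models pauseInternal(ns). Below is a self-contained sketch of the same pattern; PausableLoop and its methods are invented names:

#include <chrono>
#include <condition_variable>
#include <mutex>

// Sketch of the AudioTrackThread idiom: block while paused, wake on resume,
// and support a bounded sleep like pauseInternal(ns).
class PausableLoop {
    std::mutex mLock;
    std::condition_variable mCond;
    bool mPaused = false;
public:
    void pause()  { std::lock_guard<std::mutex> l(mLock); mPaused = true; }
    void resume() {
        { std::lock_guard<std::mutex> l(mLock); mPaused = false; }
        mCond.notify_one();
    }
    // One loop iteration: returns immediately unless paused.
    void iteration() {
        std::unique_lock<std::mutex> l(mLock);
        mCond.wait(l, [this] { return !mPaused; });
        // ... the real loop calls processAudioBuffer() here ...
    }
    // Bounded pause: wake up after at most 'ns', or earlier on resume().
    void timedPause(std::chrono::nanoseconds ns) {
        std::unique_lock<std::mutex> l(mLock);
        mCond.wait_for(l, ns, [this] { return !mPaused; });
    }
};
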
As expected, the callback then asks the Java side for data through JNI:

...
nsecs_t AudioTrack::processAudioBuffer()
{

    // perform callbacks while unlocked
    if (newUnderrun) {
        mCbf(EVENT_UNDERRUN, mUserData, NULL);
    }
    while (loopCountNotifications > 0) {
        mCbf(EVENT_LOOP_END, mUserData, NULL);
        --loopCountNotifications;
    }
    if (flags & CBLK_BUFFER_END) {
        mCbf(EVENT_BUFFER_END, mUserData, NULL);
    }
    if (markerReached) {
        mCbf(EVENT_MARKER, mUserData, &markerPosition);
    }
    while (newPosCount > 0) {
        size_t temp = newPosition.value(); // FIXME size_t != uint32_t
        mCbf(EVENT_NEW_POS, mUserData, &temp);
        newPosition += updatePeriod;
        newPosCount--;
    }

    if (mObservedSequence != sequence) {
        mObservedSequence = sequence;
        mCbf(EVENT_NEW_IAUDIOTRACK, mUserData, NULL);
        // for offloaded tracks, just wait for the upper layers to recreate the track
        if (isOffloadedOrDirect()) {
            return NS_INACTIVE;
        }
    }

    // if inactive, then don't run me again until re-started
    if (!active) {
        return NS_INACTIVE;
    }
    // ... 
}

The mCbf here is the JNI layer's audioCallback:

static void audioCallback(int event, void* user, void *info) {
  ...
    if (postEvent) {
        JNIEnv *env = AndroidRuntime::getJNIEnv();
        if (env != NULL) {
            env->CallStaticVoidMethod(
                    callbackInfo->audioTrack_class,
                    javaAudioTrackFields.postNativeEventInJava,
                    callbackInfo->audioTrack_ref, event, arg, 0, NULL);
            if (env->ExceptionCheck()) {
                env->ExceptionDescribe();
                env->ExceptionClear();
            }
        }
    }

    {
        Mutex::Autolock l(sLock);
        callbackInfo->busy = false;
        callbackInfo->cond.broadcast();
    }
}

This posts an event to the Java side, which then calls the data-request path:

 private static void postEventFromNative(Object audiotrack_ref,
            int what, int arg1, int arg2, Object obj) {
        //logd("Event posted from the native side: event=" + what + " args=" + arg1 + " " + arg2);
        final AudioTrack track = (AudioTrack) ((WeakReference) audiotrack_ref).get();
        if (track == null) {
            return;
        }

        if (what == AudioSystem.NATIVE_EVENT_ROUTING_CHANGE) {
            track.broadcastRoutingChange();
            return;
        }

        if (what == NATIVE_EVENT_CODEC_FORMAT_CHANGE) {
            ByteBuffer buffer = (ByteBuffer) obj;
            buffer.order(ByteOrder.nativeOrder());
            buffer.rewind();
            AudioMetadataReadMap audioMetaData = AudioMetadata.fromByteBuffer(buffer);
            if (audioMetaData == null) {
                Log.e(TAG, "Unable to get audio metadata from byte buffer");
                return;
            }
            track.mCodecFormatChangedListeners.notify(0 /* eventCode, unused */, audioMetaData);
            return;
        }

        if (what == NATIVE_EVENT_CAN_WRITE_MORE_DATA
                || what == NATIVE_EVENT_NEW_IAUDIOTRACK
                || what == NATIVE_EVENT_STREAM_END) {
            track.handleStreamEventFromNative(what, arg1);
            return;
        }

        NativePositionEventHandlerDelegate delegate = track.mEventHandlerDelegate;
        if (delegate != null) {
            Handler handler = delegate.getHandler();
            if (handler != null) {
                Message m = handler.obtainMessage(what, arg1, arg2, obj);
                handler.sendMessage(m);
            }
        }
    }

A message is posted through the Looper and handled next:

 public void handleMessage(Message msg) {
            final LinkedList<StreamEventCbInfo> cbInfoList;
            synchronized (mStreamEventCbLock) {
                if (msg.what == NATIVE_EVENT_STREAM_END) {
                    synchronized (mPlayStateLock) {
                        if (mPlayState == PLAYSTATE_STOPPING) {
                            if (mOffloadEosPending) {
                                native_start();
                                mPlayState = PLAYSTATE_PLAYING;
                            } else {
                                mAvSyncHeader = null;
                                mAvSyncBytesRemaining = 0;
                                mPlayState = PLAYSTATE_STOPPED;
                            }
                            mOffloadEosPending = false;
                            mPlayStateLock.notify();
                        }
                    }
                }
                if (mStreamEventCbInfoList.size() == 0) {
                    return;
                }
                cbInfoList = new LinkedList<StreamEventCbInfo>(mStreamEventCbInfoList);
            }

            final long identity = Binder.clearCallingIdentity();
            try {
                for (StreamEventCbInfo cbi : cbInfoList) {
                    switch (msg.what) {
                        case NATIVE_EVENT_CAN_WRITE_MORE_DATA:
                            cbi.mStreamEventExec.execute(() ->
                                    cbi.mStreamEventCb.onDataRequest(AudioTrack.this, msg.arg1));
                            break;
                        case NATIVE_EVENT_NEW_IAUDIOTRACK:
                            // TODO also release track as it's not longer usable
                            cbi.mStreamEventExec.execute(() ->
                                    cbi.mStreamEventCb.onTearDown(AudioTrack.this));
                            break;
                        case NATIVE_EVENT_STREAM_END:
                            cbi.mStreamEventExec.execute(() ->
                                    cbi.mStreamEventCb.onPresentationEnded(AudioTrack.this));
                            break;
                    }
                }
            } finally {
                Binder.restoreCallingIdentity(identity);
            }
        }

This completes the data-request flow, and with it the start flow.

The pause() Implementation

Pause and stop follow similar paths: the call travels from AudioTrack down to AudioFlinger's Track, where a flag is set so that threadLoop can handle the pause on its next cycle. The Java side does nothing of note besides calling into native, so we start directly from the native code:

static void android_media_AudioTrack_pause(JNIEnv *env, jobject thiz)
{
    sp<AudioTrack> lpTrack = getAudioTrack(env, thiz);
    if (lpTrack == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for pause()");
        return;
    }

    lpTrack->pause();
}

The pause() implementation:

void AudioTrack::pause()
{
    const int64_t beginNs = systemTime();
    AutoMutex lock(mLock);
    mediametrics::Defer defer([&]() {
        mediametrics::LogItem(mMetricsId)
            .set(AMEDIAMETRICS_PROP_EVENT, AMEDIAMETRICS_PROP_EVENT_VALUE_PAUSE)
            .set(AMEDIAMETRICS_PROP_EXECUTIONTIMENS, (int64_t)(systemTime() - beginNs))
            .set(AMEDIAMETRICS_PROP_STATE, stateToString(mState))
            .record(); });

    ALOGV("%s(%d): prior state:%s", __func__, mPortId, stateToString(mState));

    if (mState == STATE_ACTIVE) {
        mState = STATE_PAUSED;
    } else if (mState == STATE_STOPPING) {
        mState = STATE_PAUSED_STOPPING;
    } else {
        return;
    }
    mProxy->interrupt();
    mAudioTrack->pause();
}

And the pause implementation in AudioFlinger:

void AudioFlinger::PlaybackThread::Track::pause()
{
    ALOGV("%s(%d): calling pid %d", __func__, mId, IPCThreadState::self()->getCallingPid());
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
        switch (mState) {
        case STOPPING_1:
        case STOPPING_2:
            if (!isOffloaded()) {
                /* nothing to do if track is not offloaded */
                break;
            }

            // Offloaded track was draining, we need to carry on draining when resumed
            mResumeToStopping = true;
            FALLTHROUGH_INTENDED;
        case ACTIVE:
        case RESUMING:
            mState = PAUSING;
            ALOGV("%s(%d): ACTIVE/RESUMING => PAUSING on thread %d",
                    __func__, mId, (int)mThreadIoHandle);
            if (isOffloadedOrDirect()) {
                mPauseHwPending = true;
            }
            playbackThread->broadcast_l();
            break;

        default:
            break;
        }
    }
    // Pausing the TeePatch to avoid a glitch on underrun, at the cost of buffered audio loss.
    forEachTeePatchTrack([](auto patchTrack) { patchTrack->pause(); });
}

As expected, it does nothing else here; it just sets the state flag. The mixer thread reacts to that flag on its next cycle (see the sketch at the end of the Stop section).

The stop() Implementation

We go straight to the native AudioTrack implementation. As you might guess, its main jobs are to call the Track's stop() and to halt the callback thread:

void AudioTrack::stop()
{
    if (mState != STATE_ACTIVE && mState != STATE_PAUSED) {
        return;
    }

    if (isOffloaded_l()) {
        mState = STATE_STOPPING;
    } else {
        mState = STATE_STOPPED;
        ALOGD_IF(mSharedBuffer == nullptr,
                "%s(%d): called with %u frames delivered", __func__, mPortId, mReleased.value());
        mReleased = 0;
    }

    mProxy->stop(); // notify server not to read beyond current client position until start().
    mProxy->interrupt();
    mAudioTrack->stop();   // stop the server-side Track

    // Note: legacy handling - stop does not clear playback marker
    // and periodic update counter, but flush does for streaming tracks.

    if (mSharedBuffer != 0) {
        // clear buffer position and loop count.
        mStaticProxy->setBufferPositionAndLoop(0 /* position */,
                0 /* loopStart */, 0 /* loopEnd */, 0 /* loopCount */);
    }

    sp<AudioTrackThread> t = mAudioTrackThread; // stop the callback thread from requesting more data
    if (t != 0) {
        if (!isOffloaded_l()) {
            t->pause();
        } else if (mTransfer == TRANSFER_SYNC_NOTIF_CALLBACK) {
            // causes wake up of the playback thread, that will callback the client for
            // EVENT_STREAM_END in processAudioBuffer()
            t->wake();
        }
    } else {
        setpriority(PRIO_PROCESS, 0, mPreviousPriority);
        set_sched_policy(0, mPreviousSchedulingGroup);
    }
}

The key part is the Track's stop():

void AudioFlinger::PlaybackThread::Track::stop()
{
    ALOGV("%s(%d): calling pid %d", __func__, mId, IPCThreadState::self()->getCallingPid());
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        track_state state = mState;
        if (state == RESUMING || state == ACTIVE || state == PAUSING || state == PAUSED) {
            // If the track is not active (PAUSED and buffers full), flush buffers
            PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
            if (playbackThread->mActiveTracks.indexOf(this) < 0) {
                reset();
                mState = STOPPED;
            } else if (!isFastTrack() && !isOffloaded() && !isDirect()) {
                mState = STOPPED;
            } else {
                // For fast tracks prepareTracks_l() will set state to STOPPING_2
                // presentation is complete
                // For an offloaded track this starts a drain and state will
                // move to STOPPING_2 when drain completes and then STOPPED
                mState = STOPPING_1;
                if (isOffloaded()) {
                    mRetryCount = PlaybackThread::kMaxTrackStopRetriesOffload;
                }
            }
            playbackThread->broadcast_l();
            ALOGV("%s(%d): not stopping/stopped => stopping/stopped on thread %d",
                    __func__, mId, (int)mThreadIoHandle);
        }
    }
    forEachTeePatchTrack([](auto patchTrack) { patchTrack->stop(); });
}

This too just sets a flag. stop and pause may look interchangeable, but there is a real difference: a stopped Track may be released, while a paused one is not.
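
That difference plays out in the mixer thread: both states take the track out of the mix, but only a stopped track becomes eligible for release. A simplified sketch of the culling decision; SimpleTrack and cullTracks are invented, and the real prepareTracks_l state machine has many more states:

#include <vector>

// Simplified model: Paused and Stopped tracks both drop out of the mix,
// but the owner may later free only the Stopped ones.
enum class TrackState { Active, Paused, Stopped };

struct SimpleTrack { TrackState state; };

void cullTracks(std::vector<SimpleTrack*>& active,
                std::vector<SimpleTrack*>& toRemove) {
    for (auto it = active.begin(); it != active.end(); ) {
        TrackState s = (*it)->state;
        if (s == TrackState::Paused || s == TrackState::Stopped) {
            toRemove.push_back(*it);  // no longer mixed either way
            it = active.erase(it);
        } else {
            ++it;                     // Active tracks keep getting mixed
        }
    }
    // Only tracks that reached Stopped are candidates for release; a Paused
    // track keeps its buffers so playback can resume where it left off.
}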

The write() Implementation

Taking native_write_byte as the example:

template <typename T>
static jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const T *data,
                         jint offsetInSamples, jint sizeInSamples, bool blocking) {
    // give the data to the native AudioTrack object (the data starts at the offset)
    ssize_t written = 0;
    // regular write() or copy the data to the AudioTrack's shared memory?
    size_t sizeInBytes = sizeInSamples * sizeof(T);
    if (track->sharedBuffer() == 0) {
        written = track->write(data + offsetInSamples, sizeInBytes, blocking);
        // for compatibility with earlier behavior of write(), return 0 in this case
        if (written == (ssize_t) WOULD_BLOCK) {
            written = 0;
        }
    } else {
        // writing to shared memory, check for capacity
        if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
            sizeInBytes = track->sharedBuffer()->size();
        }
        memcpy(track->sharedBuffer()->unsecurePointer(), data + offsetInSamples, sizeInBytes);
        written = sizeInBytes;
    }
    if (written >= 0) {
        return written / sizeof(T);
    }
    return interpretWriteSizeError(written);
}

If there is a shared buffer (static mode), the data is copied straight in; otherwise AudioTrack::write() is called:

ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    if (mTransfer != TRANSFER_SYNC && mTransfer != TRANSFER_SYNC_NOTIF_CALLBACK) {
        return INVALID_OPERATION;
    }

    if (isDirect()) {
        AutoMutex lock(mLock);
        int32_t flags = android_atomic_and(
                            ~(CBLK_UNDERRUN | CBLK_LOOP_CYCLE | CBLK_LOOP_FINAL | CBLK_BUFFER_END),
                            &mCblk->mFlags);
        if (flags & CBLK_INVALID) {
            return DEAD_OBJECT;
        }
    }

    if (ssize_t(userSize) < 0 || (buffer == NULL && userSize != 0)) {
        // Validation: user is most-likely passing an error code, and it would
        // make the return value ambiguous (actualSize vs error).
        ALOGE("%s(%d): AudioTrack::write(buffer=%p, size=%zu (%zd)",
                __func__, mPortId, buffer, userSize, userSize);
        return BAD_VALUE;
    }

    size_t written = 0;
    Buffer audioBuffer;

    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;

        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        if (err < 0) {
            if (written > 0) {
                break;
            }
            if (err == TIMED_OUT || err == -EINTR) {
                err = WOULD_BLOCK;
            }
            return ssize_t(err);
        }

        size_t toWrite = audioBuffer.size;
        memcpy(audioBuffer.i8, buffer, toWrite);
        buffer = ((const char *) buffer) + toWrite;
        userSize -= toWrite;
        written += toWrite;

        releaseBuffer(&audioBuffer);
    }

    if (written > 0) {
        mFramesWritten += written / mFrameSize;

        if (mTransfer == TRANSFER_SYNC_NOTIF_CALLBACK) {
            const sp<AudioTrackThread> t = mAudioTrackThread;
            if (t != 0) {
                // causes wake up of the playback thread, that will callback the client for
                // more data (with EVENT_CAN_WRITE_MORE_DATA) in processAudioBuffer()
                t->wake();
            }
        }
    }

    return written;
}

It obtains a buffer, then writes into it; here is obtainBuffer():

status_t AudioTrack::obtainBuffer(Buffer* audioBuffer, const struct timespec *requested,
        struct timespec *elapsed, size_t *nonContig)
{
    // previous and new IAudioTrack sequence numbers are used to detect track re-creation
    uint32_t oldSequence = 0;

    Proxy::Buffer buffer;
    status_t status = NO_ERROR;

    static const int32_t kMaxTries = 5;
    int32_t tryCounter = kMaxTries;

    do {
        // obtainBuffer() is called with mutex unlocked, so keep extra references to these fields to
        // keep them from going away if another thread re-creates the track during obtainBuffer()
        sp<AudioTrackClientProxy> proxy;
        sp<IMemory> iMem;

        {   // start of lock scope
            AutoMutex lock(mLock);

            uint32_t newSequence = mSequence;
            // did previous obtainBuffer() fail due to media server death or voluntary invalidation?
            if (status == DEAD_OBJECT) {
                // re-create track, unless someone else has already done so
                if (newSequence == oldSequence) {
                    status = restoreTrack_l("obtainBuffer");
                    if (status != NO_ERROR) {
                        buffer.mFrameCount = 0;
                        buffer.mRaw = NULL;
                        buffer.mNonContig = 0;
                        break;
                    }
                }
            }
            oldSequence = newSequence;

            if (status == NOT_ENOUGH_DATA) {
                restartIfDisabled();
            }

            // Keep the extra references
            proxy = mProxy;
            iMem = mCblkMemory;

            if (mState == STATE_STOPPING) {
                status = -EINTR;
                buffer.mFrameCount = 0;
                buffer.mRaw = NULL;
                buffer.mNonContig = 0;
                break;
            }

            // Non-blocking if track is stopped or paused
            if (mState != STATE_ACTIVE) {
                requested = &ClientProxy::kNonBlocking;
            }

        }   // end of lock scope

        buffer.mFrameCount = audioBuffer->frameCount;
        // FIXME starts the requested timeout and elapsed over from scratch
        status = proxy->obtainBuffer(&buffer, requested, elapsed);   // claim space from the shared buffer
    } while (((status == DEAD_OBJECT) || (status == NOT_ENOUGH_DATA)) && (tryCounter-- > 0));

    audioBuffer->frameCount = buffer.mFrameCount;
    audioBuffer->size = buffer.mFrameCount * mFrameSize;
    audioBuffer->raw = buffer.mRaw;
    audioBuffer->sequence = oldSequence;
    if (nonContig != NULL) {
        *nonContig = buffer.mNonContig;
    }
    return status;
}

Why can this proxy hand out shared memory? Because when AudioFlinger created the Track, it allocated a block of shared memory and passed that memory's fd to the caller over binder, so the caller can claim usable space directly from the block.
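
The cross-process setup can be modeled with plain POSIX shared memory: a single shared mapping holds a control block with the read/write indices, and each side updates its own index atomically. This is only a conceptual sketch; the real implementation shares an ashmem region over binder rather than relying on fork():

#include <atomic>
#include <cstdint>
#include <new>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

// Conceptual stand-in for audio_track_cblk_t: atomic front/rear indices
// living in memory visible to both processes.
struct Cblk {
    std::atomic<int32_t> front{0};  // consumer (server) position
    std::atomic<int32_t> rear{0};   // producer (client) position
};

int main() {
    // A MAP_SHARED | MAP_ANONYMOUS mapping survives fork(), standing in for
    // the binder-shared region that backs the real control block.
    void* mem = mmap(nullptr, sizeof(Cblk), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    Cblk* cblk = new (mem) Cblk();

    if (fork() == 0) {                        // "client": produce 100 frames
        cblk->rear.store(100, std::memory_order_release);
        _exit(0);
    }
    wait(nullptr);                            // "server": observe the write
    int32_t filled = cblk->rear.load(std::memory_order_acquire)
                   - cblk->front.load(std::memory_order_relaxed);
    return filled == 100 ? 0 : 1;             // exit 0: both sides agree
}

With that model in mind, here is the actual index bookkeeping in ClientProxy::obtainBuffer():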

status_t ClientProxy::obtainBuffer(Buffer* buffer, const struct timespec *requested,
        struct timespec *elapsed)
{
 
    for (;;) {
        int32_t flags = android_atomic_and(~CBLK_INTERRUPT, &cblk->mFlags);
        // check for track invalidation by server, or server death detection
        if (flags & CBLK_INVALID) {
            ALOGV("Track invalidated");
            status = DEAD_OBJECT;
            goto end;
        }
        if (flags & CBLK_DISABLED) {
            ALOGV("Track disabled");
            status = NOT_ENOUGH_DATA;
            goto end;
        }
        // check for obtainBuffer interrupted by client
        if (!ignoreInitialPendingInterrupt && (flags & CBLK_INTERRUPT)) {
            ALOGV("obtainBuffer() interrupted by client");
            status = -EINTR;
            goto end;
        }
        ignoreInitialPendingInterrupt = false;
        // compute number of frames available to write (AudioTrack) or read (AudioRecord)
        int32_t front;
        int32_t rear;
        if (mIsOut) {
            // The barrier following the read of mFront is probably redundant.
            // We're about to perform a conditional branch based on 'filled',
            // which will force the processor to observe the read of mFront
            // prior to allowing data writes starting at mRaw.
            // However, the processor may support speculative execution,
            // and be unable to undo speculative writes into shared memory.
            // The barrier will prevent such speculative execution.
            front = android_atomic_acquire_load(&cblk->u.mStreaming.mFront);
            rear = cblk->u.mStreaming.mRear;
        } else {
            // On the other hand, this barrier is required.
            rear = android_atomic_acquire_load(&cblk->u.mStreaming.mRear);
            front = cblk->u.mStreaming.mFront;
        }
        // write to rear, read from front
        ssize_t filled = audio_utils::safe_sub_overflow(rear, front);
        // pipe should not be overfull
        if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
            if (mIsOut) {
                ALOGE("Shared memory control block is corrupt (filled=%zd, mFrameCount=%zu); "
                        "shutting down", filled, mFrameCount);
                mIsShutdown = true;
                status = NO_INIT;
                goto end;
            }
            // for input, sync up on overrun
            filled = 0;
            cblk->u.mStreaming.mFront = rear;
            (void) android_atomic_or(CBLK_OVERRUN, &cblk->mFlags);
        }
        // Don't allow filling pipe beyond the user settable size.
        // The calculation for avail can go negative if the buffer size
        // is suddenly dropped below the amount already in the buffer.
        // So use a signed calculation to prevent a numeric overflow abort.
        ssize_t adjustableSize = (ssize_t) getBufferSizeInFrames();
        ssize_t avail =  (mIsOut) ? adjustableSize - filled : filled;
        if (avail < 0) {
            avail = 0;
        } else if (avail > 0) {
            // 'avail' may be non-contiguous, so return only the first contiguous chunk
            size_t part1;
            if (mIsOut) {
                rear &= mFrameCountP2 - 1;
                part1 = mFrameCountP2 - rear;
            } else {
                front &= mFrameCountP2 - 1;
                part1 = mFrameCountP2 - front;
            }
            if (part1 > (size_t)avail) {
                part1 = avail;
            }
            if (part1 > buffer->mFrameCount) {
                part1 = buffer->mFrameCount;
            }
            buffer->mFrameCount = part1;
            buffer->mRaw = part1 > 0 ?
                    &((char *) mBuffers)[(mIsOut ? rear : front) * mFrameSize] : NULL;
            buffer->mNonContig = avail - part1;
            mUnreleased = part1;
            status = NO_ERROR;
            break;
        }
        struct timespec remaining;
        const struct timespec *ts;
        switch (timeout) {
        case TIMEOUT_ZERO:
            status = WOULD_BLOCK;
            goto end;
        case TIMEOUT_INFINITE:
            ts = NULL;
            break;
        case TIMEOUT_FINITE:
            timeout = TIMEOUT_CONTINUE;
            if (MAX_SEC == 0) {
                ts = requested;
                break;
            }
            FALLTHROUGH_INTENDED;
        case TIMEOUT_CONTINUE:
            // FIXME we do not retry if requested < 10ms? needs documentation on this state machine
            if (!measure || requested->tv_sec < total.tv_sec ||
                    (requested->tv_sec == total.tv_sec && requested->tv_nsec <= total.tv_nsec)) {
                status = TIMED_OUT;
                goto end;
            }
            remaining.tv_sec = requested->tv_sec - total.tv_sec;
            if ((remaining.tv_nsec = requested->tv_nsec - total.tv_nsec) < 0) {
                remaining.tv_nsec += 1000000000;
                remaining.tv_sec--;
            }
            if (0 < MAX_SEC && MAX_SEC < remaining.tv_sec) {
                remaining.tv_sec = MAX_SEC;
                remaining.tv_nsec = 0;
            }
            ts = &remaining;
            break;
        default:
            LOG_ALWAYS_FATAL("obtainBuffer() timeout=%d", timeout);
            ts = NULL;
            break;
        }
        int32_t old = android_atomic_and(~CBLK_FUTEX_WAKE, &cblk->mFutex);
        if (!(old & CBLK_FUTEX_WAKE)) {
            if (measure && !beforeIsValid) {
                clock_gettime(CLOCK_MONOTONIC, &before);
                beforeIsValid = true;
            }
            errno = 0;
            (void) syscall(__NR_futex, &cblk->mFutex,
                    mClientInServer ? FUTEX_WAIT_PRIVATE : FUTEX_WAIT, old & ~CBLK_FUTEX_WAKE, ts);
            status_t error = errno; // clock_gettime can affect errno
            // update total elapsed time spent waiting
            if (measure) {
                struct timespec after;
                clock_gettime(CLOCK_MONOTONIC, &after);
                total.tv_sec += after.tv_sec - before.tv_sec;
                // Use auto instead of long to avoid the google-runtime-int warning.
                auto deltaNs = after.tv_nsec - before.tv_nsec;
                if (deltaNs < 0) {
                    deltaNs += 1000000000;
                    total.tv_sec--;
                }
                if ((total.tv_nsec += deltaNs) >= 1000000000) {
                    total.tv_nsec -= 1000000000;
                    total.tv_sec++;
                }
                before = after;
                beforeIsValid = true;
            }
            switch (error) {
            case 0:            // normal wakeup by server, or by binderDied()
            case EWOULDBLOCK:  // benign race condition with server
            case EINTR:        // wait was interrupted by signal or other spurious wakeup
            case ETIMEDOUT:    // time-out expired
                // FIXME these error/non-0 status are being dropped
                break;
            default:
                status = error;
                ALOGE("%s unexpected error %s", __func__, strerror(status));
                goto end;
            }
        }
    }
}

With that, the data write path is complete.
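
One detail of obtainBuffer() deserves a closer look: front and rear grow monotonically and are masked with mFrameCountP2 - 1 only when addressing memory, so filled = rear - front stays valid across wrap-around, and the writable region is returned as its first contiguous chunk. A distilled version of that arithmetic (obtainWritable is an invented helper, not AOSP code):

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Distilled obtainBuffer() index math for a ring of frameCount usable frames
// backed by a power-of-two array of frameCountP2 frames.
struct Chunk { size_t offsetFrames; size_t frames; size_t nonContig; };

Chunk obtainWritable(int32_t front, int32_t rear,
                     size_t frameCount, size_t frameCountP2, size_t wanted) {
    std::ptrdiff_t filled = rear - front;  // valid even across wrap-around
    assert(filled >= 0 && (size_t)filled <= frameCount);
    size_t avail = frameCount - (size_t)filled;
    size_t r = rear & (frameCountP2 - 1);  // mask only when addressing memory
    size_t part1 = std::min({frameCountP2 - r, avail, wanted});
    return { r, part1, avail - part1 };    // first contiguous chunk
}

int main() {
    // 48 usable frames in a 64-frame ring, with rear close to the array end:
    Chunk c = obtainWritable(/*front=*/20, /*rear=*/60,
                             /*frameCount=*/48, /*frameCountP2=*/64,
                             /*wanted=*/16);
    assert(c.offsetFrames == 60 && c.frames == 4 && c.nonContig == 4);
}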

The flush() Implementation

flush() empties the buffer so that the caller regains that space afterwards. Straight to the native code:

void AudioTrack::flush()
{
    const int64_t beginNs = systemTime();
    AutoMutex lock(mLock);
    mediametrics::Defer defer([&]() {
        mediametrics::LogItem(mMetricsId)
            .set(AMEDIAMETRICS_PROP_EVENT, AMEDIAMETRICS_PROP_EVENT_VALUE_FLUSH)
            .set(AMEDIAMETRICS_PROP_EXECUTIONTIMENS, (int64_t)(systemTime() - beginNs))
            .set(AMEDIAMETRICS_PROP_STATE, stateToString(mState))
            .record(); });

    ALOGV("%s(%d): prior state:%s", __func__, mPortId, stateToString(mState));

    if (mSharedBuffer != 0) {
        return;
    }
    if (mState == STATE_ACTIVE) {
        return;
    }
    flush_l();
}

void AudioTrack::flush_l()
{
    ALOG_ASSERT(mState != STATE_ACTIVE);

    // clear playback marker and periodic update counter
    mMarkerPosition = 0;
    mMarkerReached = false;
    mUpdatePeriod = 0;
    mRefreshRemaining = true;

    mState = STATE_FLUSHED;
    mReleased = 0;
    if (isOffloaded_l()) {
        mProxy->interrupt();
    }
    mProxy->flush();
    mAudioTrack->flush();
}

The Track's flush():

void AudioFlinger::PlaybackThread::Track::flush()
{
    ALOGV("%s(%d)", __func__, mId);
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        PlaybackThread *playbackThread = (PlaybackThread *)thread.get();

        // Flush the ring buffer now if the track is not active in the PlaybackThread.
        // Otherwise the flush would not be done until the track is resumed.
        // Requires FastTrack removal be BLOCK_UNTIL_ACKED
        if (playbackThread->mActiveTracks.indexOf(this) < 0) {
            (void)mServerProxy->flushBufferIfNeeded();
        }

        if (isOffloaded()) {
            // If offloaded we allow flush during any state except terminated
            // and keep the track active to avoid problems if user is seeking
            // rapidly and underlying hardware has a significant delay handling
            // a pause
            if (isTerminated()) {
                return;
            }

            ALOGV("%s(%d): offload flush", __func__, mId);
            reset();

            if (mState == STOPPING_1 || mState == STOPPING_2) {
                ALOGV("%s(%d): flushed in STOPPING_1 or 2 state, change state to ACTIVE",
                        __func__, mId);
                mState = ACTIVE;
            }

            mFlushHwPending = true;
            mResumeToStopping = false;
        } else {
            if (mState != STOPPING_1 && mState != STOPPING_2 && mState != STOPPED &&
                    mState != PAUSED && mState != PAUSING && mState != IDLE && mState != FLUSHED) {
                return;
            }
            // No point remaining in PAUSED state after a flush => go to
            // FLUSHED state
            mState = FLUSHED;
            // do not reset the track if it is still in the process of being stopped or paused.
            // this will be done by prepareTracks_l() when the track is stopped.
            // prepareTracks_l() will see mState == FLUSHED, then
            // remove from active track list, reset(), and trigger presentation complete
            if (isDirect()) {
                mFlushHwPending = true;
            }
            if (playbackThread->mActiveTracks.indexOf(this) < 0) {
                reset();
            }
        }
        // Prevent flush being lost if the track is flushed and then resumed
        // before mixer thread can run. This is important when offloading
        // because the hardware buffer could hold a large amount of audio
        playbackThread->broadcast_l();
    }
    // Flush the Tee to avoid on resume playing old data and glitching on the transition to new data
    forEachTeePatchTrack([](auto patchTrack) { patchTrack->flush(); });
}

Nothing else happens here; it just sets the mFlushHwPending flag. threadLoop later calls flushAck(), and that method carries out the flush:

// must be called with thread lock held
void AudioFlinger::PlaybackThread::Track::flushAck()
{
    if (!isOffloaded() && !isDirect())
        return;

    // Clear the client ring buffer so that the app can prime the buffer while paused.
    // Otherwise it might not get cleared until playback is resumed and obtainBuffer() is called.
    mServerProxy->flushBufferIfNeeded();

    mFlushHwPending = false;
}

Now flushBufferIfNeeded():

void ServerProxy::flushBufferIfNeeded()
{
    audio_track_cblk_t* cblk = mCblk;
    // The acquire_load is not really required. But since the write is a release_store in the
    // client, using acquire_load here makes it easier for people to maintain the code,
    // and the logic for communicating ipc variables seems somewhat standard,
    // and there really isn't much penalty for 4 or 8 byte atomics.
    int32_t flush = android_atomic_acquire_load(&cblk->u.mStreaming.mFlush);
    if (flush != mFlush) {
        ALOGV("ServerProxy::flushBufferIfNeeded() mStreaming.mFlush = 0x%x, mFlush = 0x%0x",
                flush, mFlush);
        // shouldn't matter, but for range safety use mRear instead of getRear().
        int32_t rear = android_atomic_acquire_load(&cblk->u.mStreaming.mRear);
        int32_t front = cblk->u.mStreaming.mFront;

        // effectively obtain then release whatever is in the buffer
        const size_t overflowBit = mFrameCountP2 << 1;
        const size_t mask = overflowBit - 1;
        int32_t newFront = (front & ~mask) | (flush & mask);
        ssize_t filled = audio_utils::safe_sub_overflow(rear, newFront);
        if (filled >= (ssize_t)overflowBit) {
            // front and rear offsets span the overflow bit of the p2 mask
            // so rebasing newFront on the front offset is off by the overflow bit.
            // adjust newFront to match rear offset.
            ALOGV("flush wrap: filled %zx >= overflowBit %zx", filled, overflowBit);
            newFront += overflowBit;
            filled -= overflowBit;
        }
        // Rather than shutting down on a corrupt flush, just treat it as a full flush
        if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
            ALOGE("mFlush %#x -> %#x, front %#x, rear %#x, mask %#x, newFront %#x, "
                    "filled %zd=%#x",
                    mFlush, flush, front, rear,
                    (unsigned)mask, newFront, filled, (unsigned)filled);
            newFront = rear;
        }
        mFlush = flush;
        android_atomic_release_store(newFront, &cblk->u.mStreaming.mFront);
        // There is no danger from a false positive, so err on the side of caution
        if (true /*front != newFront*/) {
            int32_t old = android_atomic_or(CBLK_FUTEX_WAKE, &cblk->mFutex);
            if (!(old & CBLK_FUTEX_WAKE)) {
                (void) syscall(__NR_futex, &cblk->mFutex,
                        mClientInServer ? FUTEX_WAKE_PRIVATE : FUTEX_WAKE, 1);
            }
        }
        mFlushed += (newFront - front) & mask;
    }
}
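
The subtle part is the index rebasing: the client publishes its rear position modulo 2 * mFrameCountP2 in mFlush, and the server grafts those low bits onto its own full-width front, fixing up the overflow bit when the two spans straddle a wrap. A worked example with concrete numbers chosen for illustration:

#include <cassert>
#include <cstdint>

// Flush rebasing with frameCountP2 = 64: flush indices travel modulo
// 2 * 64 = 128 (mask 0x7f, overflow bit 0x80).
int main() {
    const int32_t overflowBit = 64 << 1;                  // 0x80
    const int32_t mask = overflowBit - 1;                 // 0x7f
    int32_t front = 0x203;  // server's full-width front (epoch 0x200 + 3)
    int32_t flush = 0x45;   // client rear (mod 128) captured at flush time
    int32_t rear  = 0x245;  // the same position in full-width index space

    int32_t newFront = (front & ~mask) | (flush & mask);  // -> 0x245
    int32_t filled = rear - newFront;                     // -> 0: ring now empty
    if (filled >= overflowBit) newFront += overflowBit;   // wrap fix-up (not hit here)
    assert(newFront == rear && filled == 0);
}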

That concludes the walkthrough of AudioTrack's main flows.
