Analyzing an App Crash Caused by AudioTrack


Problem Background

The crash stack looks like this:

signal: 11 (SIGSEGV), code: 1 (SEGV_MAPERR) fault addr: 0x7434a00010
si_errno:0, si_errnoMsg:Success, sending pid:0, sending uid:0
    r0: 0x0000000000000000  r1: 0x00000076fd4782e8  r2: 0x0000000000000050
    r3: 0x0000000000000003  r4: 0x00000075ed8bdcd0  r5: 0x0000000000000030
    r6: 0xfefefefefefefeff  r7: 0x7f7f7f7f7f7f7f7f  r8: 0x0000007434a00000
    r9: 0x7fb8960001d68187  r10: 0x0000000000000001  r11: 0x0000000000000000
    r12: 0x00000075ed8bddf0  r13: 0x0000000000000030  r14: 0x000953cc692918aa
    r15: 0x0000000034155555  r16: 0x00000076fd47b960  r17: 0x00000077002dd1b0
    r18: 0x00000074dae1c000  r19: 0x000000740b999100  r20: 0x000000740b999460
    r21: 0x0000000000000000  r22: 0x00000075ed8c1000  r23: 0x00000076fd46b8f8
    r24: 0x000000740e6e8ba0  r25: 0x0000000000000000  r26: 0xffffffff000028ea
    r27: 0x0000000000000000  r28: 0x0000000000001e8e  r29: 0x00000075ed8c05f0
    r30: 0x00000076fd432ce8  sp: 0x00000075ed8bf590  pc: 0x00000076fd432cf0
    pstate: 0x0000000060001000
        #00    pc 0000000000087cf0    /system/lib64/libaudioclient.so (_ZN7android10AudioTrack9setVolumeEff+512) [arm64-v8a::dc73dae9c187d72cde4afabf20af7ece]
        #01    pc 00000000000ba9bc    /system/lib64/libaudioclient.so (_ZN7android15TrackPlayerBase15playerSetVolumeEv+92) [arm64-v8a::dc73dae9c187d72cde4afabf20af7ece]
        #02    pc 00000000000b9b64    /system/lib64/libaudioclient.so (_ZN7android10PlayerBase9setVolumeEf+148) [arm64-v8a::dc73dae9c187d72cde4afabf20af7ece]
        #03    pc 000000000005183c    /system/lib64/libaudioclient.so (_ZN7android5media8BnPlayer10onTransactEjRKNS_6ParcelEPS2_j+468) [arm64-v8a::dc73dae9c187d72cde4afabf20af7ece]
        #04    pc 000000000004c82c    /system/lib64/libbinder.so (_ZN7android7BBinder8transactEjRKNS_6ParcelEPS1_j+232) [arm64-v8a::57163581d76191da2e160eb5e5da866e]
        #05    pc 000000000005512c    /system/lib64/libbinder.so (_ZN7android14IPCThreadState14executeCommandEi+1084) [arm64-v8a::57163581d76191da2e160eb5e5da866e]
        #06    pc 0000000000054c40    /system/lib64/libbinder.so (_ZN7android14IPCThreadState20getAndExecuteCommandEv+156) [arm64-v8a::57163581d76191da2e160eb5e5da866e]
        #07    pc 00000000000554a0    /system/lib64/libbinder.so (_ZN7android14IPCThreadState14joinThreadPoolEb+80) [arm64-v8a::57163581d76191da2e160eb5e5da866e]
        #08    pc 000000000007b4e8    /system/lib64/libbinder.so [arm64-v8a::57163581d76191da2e160eb5e5da866e]
        #09    pc 00000000000154cc    /system/lib64/libutils.so (_ZN7android6Thread11_threadLoopEPv+260) [arm64-v8a::85aad54dcf3341150498b846bfca7931]
        #10    pc 00000000000a4490    /system/lib64/libandroid_runtime.so (_ZN7android14AndroidRuntime15javaThreadShellEPv+144) [arm64-v8a::f104295cb0d778811ecb8f1b7336dba1]
        #11    pc 0000000000014d90    /system/lib64/libutils.so [arm64-v8a::85aad54dcf3341150498b846bfca7931]
        #12    pc 00000000000da278    /apex/com.android.runtime/lib64/bionic/libc.so (_ZL15__pthread_startPv+64) [arm64-v8a::e81bf516b888e895d4e757da439c8117]
        #13    pc 000000000007a448    /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64) [arm64-v8a::e81bf516b888e895d4e757da439c8117]

The surrounding log context:

02-16 20:35:25.159 7822 10988 D com.tencent.no: PlayerBase::setVolume() from IPlayer
02-16 20:35:25.160 7822 10988 D AudioTrack: setVolume left 0.000 right 0.000 , callingPid 0
02-16 20:35:25.220 7822 7822 I liteav : [I][02-16/20:35:25.220 +8.0][7822,7822][log_jni.cc:26]AudioSystemBroadcastReceiver: receive Action: android.bluetooth.headset.profile.action.AUDIO_STATE_CHANGED
02-16 20:35:25.251 7822 8607 I liteav : [I][02-16/20:35:25.251 +8.0][7822,8607][oboe_player.cc:92]OboePlayer StopPlayout.
02-16 20:35:25.252 7822 8607 I liteav : [I][02-16/20:35:25.251 +8.0][7822,8607][oboe_player.cc:19]OboePlayer destroyed.

One more piece of context: the crashes happen in scenarios where audio focus is lost. Attempts to reproduce it locally failed, and stress testing did not reproduce it either. A Google search turns up a matching issue, but with no fix: https://issuetracker.google.com/issues/234934924

That is all the information available. So, given that nobody else has cracked it, should we analyze it ourselves? Of course. Time for some full-throttle analysis.

Analysis Process

Stage 1

From the stack, the crash happens inside AudioTrack::setVolume. That call sets the volume directly on the native AudioTrack, something our business code never does, and that is the first red flag.

Could it be some other component's AudioTrack? Two pieces of evidence say no. First, the log context of every setVolume always contains a record of our own AudioTrack being stopped; that would be quite a coincidence. Second, in case that is merely circumstantial, comparing the portId settles it: it really is the same AudioTrack (portId is the unique identifier of an AudioTrack). Even the object addresses are identical. So somewhere, something is operating on our AudioTrack.

Looking at the order of operations, every crash first logs the AudioTrack destructor, then the setVolume, and then crashes. The natural suspicion is a call on an already-released AudioTrack, and that is indeed what happens. The next question is who calls setVolume, and over IPC at that. Our business code certainly never does, so perhaps the system does, but that needs evidence. The log does print the binder peer, and its pid is 0, which is not much of a lead.
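Before diving into framework sources, the suspected sequence can be made concrete with a minimal, self-contained C++ sketch (FakeTrack is invented; this is not framework code): one thread destroys the track while another thread, standing in for the binder thread, calls setVolume through a stale pointer.

// Minimal sketch of the suspected use-after-free. FakeTrack is a
// hypothetical stand-in for AudioTrack; the detached thread models the
// binder thread whose setVolume() arrives just after destruction.
#include <chrono>
#include <cstdio>
#include <thread>

struct FakeTrack {
    float mVolL = 1.0f, mVolR = 1.0f;
    void setVolume(float l, float r) { mVolL = l; mVolR = r; }
    ~FakeTrack() { std::puts("~FakeTrack()"); }
};

int main() {
    FakeTrack* track = new FakeTrack();  // created when playback starts
    std::thread binderThread([track] {
        // the IPC request is already in flight while playback is torn down
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        track->setVolume(0.0f, 0.0f);    // use-after-free: undefined behavior
    });
    delete track;                        // app stops playout and frees the track first
    binderThread.join();
    return 0;
}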

Stage 2

With nothing else to go on, the next step is reading source code. First, locate PlayerBase::setVolume:

binder::Status PlayerBase::setVolume(float vol) {
    ALOGD("PlayerBase::setVolume() from IPlayer");
    {
        Mutex::Autolock _l(mSettingsLock);
        mVolumeMultiplierL = vol;
        mVolumeMultiplierR = vol;
    }
    status_t status = playerSetVolume();
    if (status != NO_ERROR) {
        ALOGW("PlayerBase::setVolume() error %d", status);
    }
    return binderStatusFromStatusT(status);
}

The ALOGD line matches the log above, and playerSetVolume is presumably what touches the AudioTrack:

status_t TrackPlayerBase::playerSetVolume() {
    return doSetVolume();
}

status_t TrackPlayerBase::doSetVolume() {
    status_t status = NO_INIT;
    if (mAudioTrack != 0) {
        float tl = mPlayerVolumeL * mPanMultiplierL * mVolumeMultiplierL;
        float tr = mPlayerVolumeR * mPanMultiplierR * mVolumeMultiplierR;
        mAudioTrack->setVolume(tl, tr);
        status = NO_ERROR;
    }
    return status;
}

Indeed it is. TrackPlayerBase here is a subclass of PlayerBase, and it is easy to guess that PlayerBase is a Binder-capable interface class:

class PlayerBase : public ::android::media::BnPlayer
{
public:
    explicit PlayerBase();
    virtual ~PlayerBase() override;
    ...
}

Confirmed again. That raises two questions: who holds the PlayerBase, and how do the AudioTrack and the PlayerBase become associated? The answer to the second should live in PlayerBase or one of its subclasses; just find where the internal AudioTrack member gets assigned:

void TrackPlayerBase::init(const sp<AudioTrack>& pat,
                           const sp<AudioTrack::IAudioTrackCallback>& callback,
                           player_type_t playerType, audio_usage_t usage,
                           audio_session_t sessionId) {
    PlayerBase::init(playerType, usage, sessionId);
    mAudioTrack = pat;
    if (mAudioTrack != 0) {
        mCallbackHandle = callback;
        mSelfAudioDeviceCallback = new SelfAudioDeviceCallback(*this);
        mAudioTrack->addAudioDeviceCallback(mSelfAudioDeviceCallback);
        mAudioTrack->setPlayerIId(mPIId); // set in PlayerBase::init().
    }
}

Sure enough, the already-created AudioTrack is passed in here, and an audio-device callback is registered along the way; neat. The next step is to find where TrackPlayerBase is created. A global search turns it up, and after filtering out the noise there is exactly one creation site: libwilhelm, the OpenSL ES implementation:

void android_audioPlayer_create(CAudioPlayer *pAudioPlayer) {

    // pAudioPlayer->mAndroidObjType has been set in android_audioPlayer_checkSourceSink()
    // and if it was == INVALID_TYPE, then IEngine_CreateAudioPlayer would never call us
    assert(INVALID_TYPE != pAudioPlayer->mAndroidObjType);

    // These initializations are in the same order as the field declarations in classes.h

    // FIXME Consolidate initializations (many of these already in IEngine_CreateAudioPlayer)
    // mAndroidObjType: see above comment
    pAudioPlayer->mAndroidObjState = ANDROID_UNINITIALIZED;
    pAudioPlayer->mSessionId = (audio_session_t) android::AudioSystem::newAudioUniqueId(
            AUDIO_UNIQUE_ID_USE_SESSION);
    pAudioPlayer->mPIId = PLAYER_PIID_INVALID;

    // placeholder: not necessary yet as session ID lifetime doesn't extend beyond player
    // android::AudioSystem::acquireAudioSessionId(pAudioPlayer->mSessionId);

    pAudioPlayer->mStreamType = ANDROID_DEFAULT_OUTPUT_STREAM_TYPE;
    pAudioPlayer->mPerformanceMode = ANDROID_PERFORMANCE_MODE_DEFAULT;

    // mAudioTrack lifecycle is handled through mTrackPlayer
    pAudioPlayer->mTrackPlayer = new android::TrackPlayerBase();
    assert(pAudioPlayer->mTrackPlayer != 0);
    pAudioPlayer->mCallbackProtector = new android::CallbackProtector();
    // mAPLayer
    // mAuxEffect

    pAudioPlayer->mAuxSendLevel = 0;
    pAudioPlayer->mAmplFromDirectLevel = 1.0f; // matches initial mDirectLevel value
    pAudioPlayer->mDeferredStart = false;

    // This section re-initializes interface-specific fields that
    // can be set or used regardless of whether the interface is
    // exposed on the AudioPlayer or not

    switch (pAudioPlayer->mAndroidObjType) {
    case AUDIOPLAYER_FROM_PCM_BUFFERQUEUE:
        pAudioPlayer->mPlaybackRate.mMinRate = AUDIOTRACK_MIN_PLAYBACKRATE_PERMILLE;
        pAudioPlayer->mPlaybackRate.mMaxRate = AUDIOTRACK_MAX_PLAYBACKRATE_PERMILLE;
        break;
    case AUDIOPLAYER_FROM_URIFD:
        pAudioPlayer->mPlaybackRate.mMinRate = MEDIAPLAYER_MIN_PLAYBACKRATE_PERMILLE;
        pAudioPlayer->mPlaybackRate.mMaxRate = MEDIAPLAYER_MAX_PLAYBACKRATE_PERMILLE;
        break;
    default:
        // use the default range
        break;
    }
}

That pretty much confirms the path runs through libwilhelm, which narrows the grep scope. Next, keep searching for uses of mTrackPlayer: somewhere an AudioTrack should be created and then passed into mTrackPlayer's init method, since that is the only arrangement that makes sense:

// FIXME abstract out the diff between CMediaPlayer and CAudioPlayer
SLresult android_audioPlayer_realize(CAudioPlayer *pAudioPlayer, SLboolean async) {

    SLresult result = SL_RESULT_SUCCESS;
    SL_LOGV("Realize pAudioPlayer=%p", pAudioPlayer);
    AudioPlayback_Parameters app;
    app.sessionId = pAudioPlayer->mSessionId;
    app.streamType = pAudioPlayer->mStreamType;

    switch (pAudioPlayer->mAndroidObjType) {

    //-----------------------------------
    // AudioTrack
    case AUDIOPLAYER_FROM_PCM_BUFFERQUEUE: {
        // initialize platform-specific CAudioPlayer fields

        SLDataFormat_PCM *df_pcm = (SLDataFormat_PCM *)
                pAudioPlayer->mDynamicSource.mDataSource->pFormat;

        uint32_t sampleRate = sles_to_android_sampleRate(df_pcm->samplesPerSec);

        audio_channel_mask_t channelMask;
        channelMask = sles_to_audio_output_channel_mask(df_pcm->channelMask);

        // To maintain backward compatibility with previous releases, ignore
        // channel masks that are not indexed.
        if (channelMask == AUDIO_CHANNEL_INVALID
                || audio_channel_mask_get_representation(channelMask)
                        == AUDIO_CHANNEL_REPRESENTATION_POSITION) {
            channelMask = audio_channel_out_mask_from_count(df_pcm->numChannels);
            SL_LOGI("Emulating old channel mask behavior "
                    "(ignoring positional mask %#x, using default mask %#x based on "
                    "channel count of %d)", df_pcm->channelMask, channelMask,
                    df_pcm->numChannels);
        }
        SL_LOGV("AudioPlayer: mapped SLES channel mask %#x to android channel mask %#x",
            df_pcm->channelMask,
            channelMask);

        checkAndSetPerformanceModePre(pAudioPlayer);

        audio_output_flags_t policy;
        switch (pAudioPlayer->mPerformanceMode) {
        case ANDROID_PERFORMANCE_MODE_POWER_SAVING:
            policy = AUDIO_OUTPUT_FLAG_DEEP_BUFFER;
            break;
        case ANDROID_PERFORMANCE_MODE_NONE:
            policy = AUDIO_OUTPUT_FLAG_NONE;
            break;
        case ANDROID_PERFORMANCE_MODE_LATENCY_EFFECTS:
            policy = AUDIO_OUTPUT_FLAG_FAST;
            break;
        case ANDROID_PERFORMANCE_MODE_LATENCY:
        default:
            policy = (audio_output_flags_t)(AUDIO_OUTPUT_FLAG_FAST | AUDIO_OUTPUT_FLAG_RAW);
            break;
        }

        int32_t notificationFrames;
        if ((policy & AUDIO_OUTPUT_FLAG_FAST) != 0) {
            // negative notificationFrames is the number of notifications (sub-buffers) per track
            // buffer for details see the explanation at frameworks/av/include/media/AudioTrack.h
            notificationFrames = -pAudioPlayer->mBufferQueue.mNumBuffers;
        } else {
            notificationFrames = 0;
        }
        const auto callbackHandle = android::sp<android::AudioTrackCallback>::make(pAudioPlayer);
        const auto pat = android::sp<android::AudioTrack>::make(
                pAudioPlayer->mStreamType,                           // streamType
                sampleRate,                                          // sampleRate
                sles_to_android_sampleFormat(df_pcm),                // format
                channelMask,                                         // channel mask
                0,                                                   // frameCount
                policy,                                              // flags
                callbackHandle,                                      // callback
                notificationFrames,                                  // see comment above
                pAudioPlayer->mSessionId);

        // Set it here so it can be logged by the destructor if the open failed.
        pat->setCallerName(ANDROID_OPENSLES_CALLER_NAME);

        android::status_t status = pat->initCheck();
        if (status != android::NO_ERROR) {
            SL_LOGE("AudioTrack::initCheck status %u", status);
            // FIXME should return a more specific result depending on status
            result = SL_RESULT_CONTENT_UNSUPPORTED;
            return result;
        }

        pAudioPlayer->mTrackPlayer->init(
            pat, callbackHandle,
            android::PLAYER_TYPE_SLES_AUDIOPLAYER_BUFFERQUEUE,
            usageForStreamType(pAudioPlayer->mStreamType),
            pAudioPlayer->mSessionId);
...

Speak of the devil. A Ctrl+F in the same file finds it immediately: the code creates an AudioTrack and passes it into mTrackPlayer's init method. The basic pieces now connect, but is that enough? Clearly not. The goal is not to see where the AudioTrack is handed to PlayerBase; it is to find who could hold mTrackPlayer's BpBinder and trigger the setVolume call on it. So keep searching for mTrackPlayer, looking for places that pass it as an argument:

        JNIEnv* j_env = NULL;
        jclass clsAudioTrack = NULL;
        jmethodID midRoutingProxy_connect = NULL;
        if (pAudioPlayer->mAndroidConfiguration.mRoutingProxy != NULL &&
                (j_env = android::AndroidRuntime::getJNIEnv()) != NULL &&
                (clsAudioTrack = j_env->FindClass("android/media/AudioTrack")) != NULL &&
                (midRoutingProxy_connect =
                    j_env->GetMethodID(clsAudioTrack, "deferred_connect", "(J)V")) != NULL) {
            j_env->ExceptionClear();
            j_env->CallVoidMethod(pAudioPlayer->mAndroidConfiguration.mRoutingProxy,
                                  midRoutingProxy_connect,
                                  (jlong)pAudioPlayer->mTrackPlayer->mAudioTrack.get());
            if (j_env->ExceptionCheck()) {
                SL_LOGE("Java exception releasing player routing object.");
                result = SL_RESULT_INTERNAL_ERROR;
                pAudioPlayer->mTrackPlayer->mAudioTrack.clear();
                return result;
            }
        }

One hit comes up immediately, but it can be ruled out quickly. Why? This spot passes the raw AudioTrack pointer, not mTrackPlayer. Our crash arrives over IPC, and a bare AudioTrack pointer would involve no IPC at all.
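The distinction is worth spelling out. Below is a small sketch (assuming an AOSP native build; the function names are illustrative) of the two ways a reference can leave this code, and only the second can produce the BnPlayer::onTransact frames seen in the crash stack.

#include <binder/IBinder.h>
#include <binder/Parcel.h>
#include <utils/StrongPointer.h>

// A raw pointer flattened into an integer (the routing-proxy path above
// does this via jlong) is just a number: it is only dereferenceable in
// the same process, so no IPC can ever come back through it.
void passRawPointer(android::Parcel* data, void* track) {
    data->writeInt64(reinterpret_cast<int64_t>(track));  // no proxy created
}

// A strong binder is translated by the binder driver: the receiving
// process reads back a BpBinder proxy, and calls made on that proxy
// arrive here as binder transactions -- the shape of our crash stack.
void passBinder(android::Parcel* data, const android::sp<android::IBinder>& player) {
    data->writeStrongBinder(player);
}

With that distinction in mind, keep searching: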

        if (pAudioPlayer->mObject.mEngine->mAudioManager == 0) {
            SL_LOGE("AudioPlayer realize: no audio service, player will not be registered");
            pAudioPlayer->mPIId = 0;
        } else {
            pAudioPlayer->mPIId = pAudioPlayer->mObject.mEngine->mAudioManager->trackPlayer(
                    android::PLAYER_TYPE_SLES_AUDIOPLAYER_URI_FD,
                    usageForStreamType(pAudioPlayer->mStreamType), AUDIO_CONTENT_TYPE_UNKNOWN,
                    pAudioPlayer->mTrackPlayer, pAudioPlayer->mSessionId);
        }
        }
        break;

Another hit turns up right away, and this one looks right. Very right; right in both looks and temperament, to the point that I could quietly conclude it is the one (it's him, it's him, it really is him...). The reasons: this spot passes the mTrackPlayer object itself, and over IPC that becomes a BpBinder on the other side; matches. And the function is named trackPlayer, literally "track the player"; matches. Next, confirm it in code. Start with mAudioManager, which is the native binder proxy for the AudioManager service; a quick search finds it:

    virtual audio_unique_id_t trackPlayer(player_type_t playerType, audio_usage_t usage,
            audio_content_type_t content, const sp<IBinder>& player, audio_session_t sessionId) {
        Parcel data, reply;
        data.writeInterfaceToken(IAudioManager::getInterfaceDescriptor());
        data.writeInt32(1); // non-null PlayerIdCard parcelable
        // marshall PlayerIdCard data
        data.writeInt32((int32_t) playerType);
        //   write attributes of PlayerIdCard
        data.writeInt32((int32_t) usage);
        data.writeInt32((int32_t) content);
        data.writeInt32(0 /*source: none here, this is a player*/);
        data.writeInt32(0 /*flags*/);
        //   write attributes' tags
        data.writeInt32(1 /*FLATTEN_TAGS*/);
        data.writeString16(String16("")); // no tags
        //   write attributes' bundle
        data.writeInt32(-1977 /*ATTR_PARCEL_IS_NULL_BUNDLE*/); // no bundle
        //   write IPlayer
        data.writeStrongBinder(player);
        //   write session Id
        data.writeInt32((int32_t)sessionId);
        // get new PIId in reply
        const status_t res = remote()->transact(TRACK_PLAYER, data, &reply, 0);
        if (res != OK || reply.readExceptionCode() != 0) {
            ALOGE("trackPlayer() failed, piid is %d", PLAYER_PIID_INVALID);
            return PLAYER_PIID_INVALID;
        } else {
            const audio_unique_id_t piid = (audio_unique_id_t) reply.readInt32();
            ALOGV("trackPlayer() returned piid %d", piid);
            return piid;
        }
    }

This is also an IPC call. Since the interface is named AudioManager, the IPC server side must be AudioService. How do I know? Professionalism. So search AudioService directly and look for trackPlayer:

    public int trackPlayer(PlayerBase.PlayerIdCard pic) {
        if (pic != null && pic.mAttributes != null) {
            validateAudioAttributesUsage(pic.mAttributes);
        }
        return mPlaybackMonitor.trackPlayer(pic);
    }

Found it. You may wonder how the long native parameter list became a single Java parameter; that is the charm of IPC. Open PlayerIdCard and the secret is out:

  public static class PlayerIdCard implements Parcelable {
        public final int mPlayerType;

        public static final int AUDIO_ATTRIBUTES_NONE = 0;
        public static final int AUDIO_ATTRIBUTES_DEFINED = 1;
        public final AudioAttributes mAttributes;
        public final IPlayer mIPlayer;
        public final int mSessionId;

        PlayerIdCard(int type, @NonNull AudioAttributes attr, @NonNull IPlayer iplayer,
                     int sessionId) {
            mPlayerType = type;
            mAttributes = attr;
            mIPlayer = iplayer;
            mSessionId = sessionId;
        }

        @Override
        public int hashCode() {
            return Objects.hash(mPlayerType, mSessionId);
        }

        @Override
        public int describeContents() {
            return 0;
        }

        @Override
        public void writeToParcel(Parcel dest, int flags) {
            dest.writeInt(mPlayerType);
            mAttributes.writeToParcel(dest, 0);
            dest.writeStrongBinder(mIPlayer == null ? null : mIPlayer.asBinder());
            dest.writeInt(mSessionId);
        }

        public static final @android.annotation.NonNull Parcelable.Creator<PlayerIdCard> CREATOR
        = new Parcelable.Creator<PlayerIdCard>() {
            /**
             * Rebuilds an PlayerIdCard previously stored with writeToParcel().
             * @param p Parcel object to read the PlayerIdCard from
             * @return a new PlayerIdCard created from the data in the parcel
             */
            public PlayerIdCard createFromParcel(Parcel p) {
                return new PlayerIdCard(p);
            }
            public PlayerIdCard[] newArray(int size) {
                return new PlayerIdCard[size];
            }
        };

        private PlayerIdCard(Parcel in) {
            mPlayerType = in.readInt();
            mAttributes = AudioAttributes.CREATOR.createFromParcel(in);
            // IPlayer can be null if unmarshalling a Parcel coming from who knows where
            final IBinder b = in.readStrongBinder();
            mIPlayer = (b == null ? null : IPlayer.Stub.asInterface(b));
            mSessionId = in.readInt();
        }
  }

Binder performs the Parcel marshalling for you, which is how "objects" get passed across IPC. Strictly speaking, the object itself never crosses the boundary: a brand-new object is constructed on the other side with the same content, where "content" means exactly what writeToParcel serialized. This is basic stuff.
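One detail in that marshalling deserves emphasis: plain fields are copied by value, but the IBinder written with writeStrongBinder stays "live" and surfaces in the receiving process as a proxy. Here is a minimal sketch of such a round trip (assuming an AOSP native build; MiniIdCard is an invented stand-in, not the real PlayerIdCard marshalling).

#include <binder/IBinder.h>
#include <binder/Parcel.h>
#include <utils/StrongPointer.h>

using android::IBinder;
using android::Parcel;
using android::sp;

// Invented stand-in for PlayerIdCard, showing what actually crosses IPC.
struct MiniIdCard {
    int32_t playerType;
    sp<IBinder> player;  // the IPlayer binder

    void writeToParcel(Parcel* p) const {
        p->writeInt32(playerType);     // plain data: copied by value
        p->writeStrongBinder(player);  // binder: translated by the driver
    }
    static MiniIdCard readFromParcel(const Parcel& p) {
        MiniIdCard c;
        c.playerType = p.readInt32();     // a brand-new object is filled in
        c.player = p.readStrongBinder();  // in another process: a BpBinder proxy
        return c;
    }
};

Back to the flow: AudioService calls mPlaybackMonitor's trackPlayer, so let's look at that next: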

public int trackPlayer(PlayerBase.PlayerIdCard pic) {
        final int newPiid = AudioSystem.newAudioPlayerId();
        if (DEBUG) { Log.v(TAG, "trackPlayer() new piid=" + newPiid); }
        final AudioPlaybackConfiguration apc =
                new AudioPlaybackConfiguration(pic, newPiid,
                        Binder.getCallingUid(), Binder.getCallingPid());
        apc.init();
        synchronized (mAllowedCapturePolicies) {
            int uid = apc.getClientUid();
            if (mAllowedCapturePolicies.containsKey(uid)) {
                updateAllowedCapturePolicy(apc, mAllowedCapturePolicies.get(uid));
            }
        }
        sEventLogger.log(new NewPlayerEvent(apc));
        synchronized(mPlayerLock) {
            mPlayers.put(newPiid, apc);
            maybeMutePlayerAwaitingConnection(apc);
        }
        return newPiid;
}

Another layer of wrapping here, but the core of it is that the player is added to mPlayers. At this point the whole flow can be strung together: libwilhelm creates the AudioTrack and the PlayerBase, then hands the PlayerBase's binder over to AudioService. When AudioService wants to operate on the AudioTrack, it can simply call the PlayerBase API on what it holds; since that is a BpBinder, the call travels over IPC into the Bn side, which finds the internal AudioTrack and executes the corresponding method. The obvious next step is to see exactly where mPlayers is accessed; presumably somewhere iterates over mPlayers and calls setVolume on each apc:

    @Override
    public void mutePlayersForCall(int[] usagesToMute) {
        if (DEBUG) {
            String log = new String("mutePlayersForCall: usages=");
            for (int usage : usagesToMute) { log += " " + usage; }
            Log.v(TAG, log);
        }
        synchronized (mPlayerLock) {
            final Set<Integer> piidSet = mPlayers.keySet();
            final Iterator<Integer> piidIterator = piidSet.iterator();
            // find which players to mute
            while (piidIterator.hasNext()) {
                final Integer piid = piidIterator.next();
                final AudioPlaybackConfiguration apc = mPlayers.get(piid);
                if (apc == null) {
                    continue;
                }
                final int playerUsage = apc.getAudioAttributes().getUsage();
                boolean mute = false;
                for (int usageToMute : usagesToMute) {
                    if (playerUsage == usageToMute) {
                        mute = true;
                        break;
                    }
                }
                if (mute) {
                    try {
                        sEventLogger.log((new AudioEventLogger.StringEvent("call: muting piid:"
                                + piid + " uid:" + apc.getClientUid())).printLog(TAG));
                        apc.getPlayerProxy().setVolume(0.0f);
                        mMutedPlayers.add(new Integer(piid));
                    } catch (Exception e) {
                        Log.e(TAG, "call: error muting player " + piid, e);
                    }
                }
            }
        }
    }

See it? There it really is: a setVolume call, and with 0 no less, matching the log perfectly!

apc.getPlayerProxy().setVolume(0.0f);

So where does this method get invoked?

  private void runAudioCheckerForRingOrCallAsync(final boolean enteringRingOrCall) {
        new Thread() {
            public void run() {
                if (enteringRingOrCall) {
                    try {
                        Thread.sleep(RING_CALL_MUTING_ENFORCEMENT_DELAY_MS);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                synchronized (mAudioFocusLock) {
                    // since the new thread starting running the state could have changed, so
                    // we need to check again mRingOrCallActive, not enteringRingOrCall
                    if (mRingOrCallActive) {
                        mFocusEnforcer.mutePlayersForCall(USAGES_TO_MUTE_IN_RING_OR_CALL);
                    } else {
                        mFocusEnforcer.unmutePlayersForCall();
                    }
                }
            }
        }.start();
    }

This one is fairly obvious: an incoming call. When the phone rings, Phone requests audio focus and, while it is at it, mutes the current players. This is also why the music falls silent when a call comes in while you are listening.

At this point the problem is completely clear. When a call comes in, the business logic stops playback and the AudioTrack is destroyed. But before AudioService receives the binder death notification, the mute path has already fired its setVolume call. By the time that IPC lands in the app process, nothing there knows the AudioTrack is gone, so AudioTrack::setVolume runs on freed memory and the process crashes. That lines up with the stack and with the log. And that, in the end, is all there is to it.
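For completeness, here is a hedged sketch of the kind of defensive change that would close this window; mTrackLock is an assumed member and this is not the actual AOSP patch for the linked issue. The idea is that the binder thread copies the strong pointer under a lock (with teardown clearing mAudioTrack under the same lock), so setVolume either sees a live track or cleanly sees none.

// Hedged sketch only (requires <mutex>; mTrackLock is hypothetical).
// The binder thread takes its own strong reference under the lock, so a
// concurrent teardown -- which must clear mAudioTrack under the same
// lock -- can no longer free the track in the middle of setVolume().
status_t TrackPlayerBase::doSetVolume() {
    sp<AudioTrack> track;
    {
        std::lock_guard<std::mutex> lg(mTrackLock);  // pairs with the clearing site
        track = mAudioTrack;                         // local strong ref, or nullptr
    }
    if (track == nullptr) {
        return NO_INIT;  // track already torn down: nothing to do
    }
    track->setVolume(mPlayerVolumeL * mPanMultiplierL * mVolumeMultiplierL,
                     mPlayerVolumeR * mPanMultiplierR * mVolumeMultiplierR);
    return NO_ERROR;
}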
