Audio Playback with FFmpeg + OpenSL ES

2020-07-31 09:57:12

Preface

I've been learning FFmpeg lately. Watching the experts online handle it with such ease, I wanted to try a few tricks of my own, so I used FFmpeg together with OpenSL ES to play an audio file. The code I found online was all written by others, and simply running it didn't quite fit my needs. Other people's code is, in the end, other people's; you still have to work through it yourself to really master it.


A quick introduction to the functions used

The FFmpeg functions

These were covered in an earlier article; see: https://cloud.tencent.com/developer/article/1666126

The OpenSL ES functions
  • The slCreateEngine function

Signature:

SL_API SLresult SLAPIENTRY slCreateEngine(
	SLObjectItf             *pEngine,
	SLuint32                numOptions,
	const SLEngineOption    *pEngineOptions,
	SLuint32                numInterfaces,
	const SLInterfaceID     *pInterfaceIds,
	const SLboolean         * pInterfaceRequired
);

Purpose:

Initializes the engine object and hands the caller a handle to it. If the fourth parameter (the number of interfaces to support) is zero, the fifth and sixth parameters are ignored.

Example:

SLresult result;
const SLEngineOption engineOptions[1] = {{(SLuint32) SL_ENGINEOPTION_THREADSAFE, (SLuint32) SL_BOOLEAN_TRUE}};
result = slCreateEngine(&engineObject, 1, engineOptions, 0, NULL, NULL);
  • The Realize function

Signature:

SLresult (*Realize) (
		SLObjectItf self,
		SLboolean async
	);

Purpose:

Transitions an object from the unrealized to the realized state. The second parameter indicates whether the call is asynchronous.

Example:

result = (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
  • The GetInterface function

Signature:

SLresult (*GetInterface) (
		SLObjectItf self,
		const SLInterfaceID iid,
		void * pInterface
	);

Purpose:

Obtains an interface exposed by the object, here the engine interface. The second parameter is the interface ID; the third parameter receives the interface object.

Example:

result = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
  • The CreateOutputMix function

Signature:

	SLresult (*CreateOutputMix) (
		SLEngineItf self,
		SLObjectItf * pMix,
		SLuint32 numInterfaces,
		const SLInterfaceID * pInterfaceIds,
		const SLboolean * pInterfaceRequired
	);

Purpose:

Creates the output mix (created via the engine interface). The third parameter is the number of interfaces to support; as before, if it is zero the fourth and fifth parameters are ignored.

Example:

const SLInterfaceID interfaceIds[1] = {SL_IID_ENVIRONMENTALREVERB}; // request the environmental reverb interface
const SLboolean reqs[1] = {SL_BOOLEAN_TRUE};
result = (*engineEngine)->CreateOutputMix(engineEngine, &outputMixObject, 1, interfaceIds,
                                     reqs);
  • The CreateAudioPlayer function

Signature:

SLresult (*CreateAudioPlayer) (
		SLEngineItf self,
		SLObjectItf * pPlayer,
		SLDataSource *pAudioSrc,
		SLDataSink *pAudioSnk,
		SLuint32 numInterfaces,
		const SLInterfaceID * pInterfaceIds,
		const SLboolean * pInterfaceRequired
	);

Purpose:

Creates the audio player (created via the engine interface). The third parameter sets the audio data source (here, the buffer queue to play from) and the fourth configures the audio sink; if the fifth parameter (the number of interfaces to support) is zero, the sixth and seventh parameters are ignored.

  • The RegisterCallback function

Signature:

SLresult (*RegisterCallback) (
		SLAndroidSimpleBufferQueueItf self,
		slAndroidSimpleBufferQueueCallback callback,
		void* pContext
	);

typedef void (SLAPIENTRY *slAndroidSimpleBufferQueueCallback)(
	SLAndroidSimpleBufferQueueItf caller,
	void *pContext
);

Purpose:

Registers a callback on the buffer queue. The second parameter is the callback itself; the third parameter is passed through as the second argument of the slAndroidSimpleBufferQueueCallback.

Example:

result = (*bqPlayerBufferQueue2)->RegisterCallback(bqPlayerBufferQueue2, bqPlayerCallback2,
                                                      (void *) "1");
// the callback
void bqPlayerCallback2(SLAndroidSimpleBufferQueueItf bq, void *context);
  • The SetPlayState function

Signature:

SLresult (*SetPlayState) (
		SLPlayItf self,
		SLuint32 state
	);

Purpose:

Sets the playback state.

Example:

    result = (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING); // set to playing

  • The Enqueue function

Signature:

SLresult (*Enqueue) (
		SLAndroidSimpleBufferQueueItf self,
		const void *pBuffer,
		SLuint32 size
	);

Purpose:

Enqueues decoded data onto the playback buffer queue. The second parameter is the audio data; the third is its size in bytes.

Example:

result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, out_buffer, dst_buffer_size);

Approach

The overall approach:

  1. Open the audio file with FFmpeg and obtain the matching decoder.
  2. Read the audio encoding format, sample rate, channel layout, and so on.
  3. Write a decode function, getPCM, so that OpenSL ES can pull decoded data through it.
  4. Create the OpenSL ES objects and interfaces, create the audio player, create the buffer queue with its callback, and set the play state to playing.
  5. Trigger the callback manually once; inside it, call getPCM to decode the audio into PCM, then pass each decoded frame and its size to the OpenSL ES buffer queue for playback.
(Figure: flow diagram of the FFmpeg and OpenSL ES pipeline)

On to the code

  • Set up FFmpeg and obtain the decoder and related information
int createFFmpeg(JNIEnv *env, jstring srcPath) {

    const char * originPath = env->GetStringUTFChars(srcPath, NULL);
    // create the AVFormatContext
    avFormatContext = avformat_alloc_context();
    int ret = avformat_open_input(&avFormatContext, originPath, NULL, NULL);
    if(ret != 0) {
        LOGE("failed to open file");
        return -1;
    }

    // dump file information
    av_dump_format(avFormatContext, 0, originPath, 0);

    ret = avformat_find_stream_info(avFormatContext, NULL);
    if(ret < 0) {
        LOGE("failed to read stream info");
        return -1;
    }

    // find the index of the audio stream
    for (int i = 0; i < avFormatContext->nb_streams; ++i) {
        // codec type of this stream
        enum AVMediaType avMediaType = avFormatContext->streams[i]->codecpar->codec_type;
        if(avMediaType == AVMEDIA_TYPE_AUDIO) {
            streamIndex = i;
            break;
        }
    }

    // codec parameters of the stream at that index
    AVCodecParameters *avCodecParameters = avFormatContext->streams[streamIndex]->codecpar;
    // decoder id for this stream type
    AVCodecID avCodecId = avCodecParameters->codec_id;

    // find the decoder
    AVCodec *avCodec = avcodec_find_decoder(avCodecId);

    // allocate the codec context
    avCodecContext = avcodec_alloc_context3(NULL);
    if(avCodecContext == NULL) {
        LOGE("failed to allocate the codec context");
        return -1;
    }

    // copy the AVCodecParameters into the AVCodecContext
    avcodec_parameters_to_context(avCodecContext, avCodecParameters);
    // open the decoder
    ret = avcodec_open2(avCodecContext, avCodec, NULL);
    if(ret < 0) {
        LOGE("failed to open the decoder");
        return -1;
    }

    // packet for the compressed data read from the source file
    avPacket = static_cast<AVPacket *>(av_mallocz(sizeof(AVPacket)));
    // frame for the decoded samples
    avFrame = av_frame_alloc();
    // allocate the SwrContext
    swrContext = swr_alloc();
    // source sample format
    AVSampleFormat srcFormat = avCodecContext->sample_fmt;
    // destination sample format
    AVSampleFormat dstFormat = dst_sample_fmt;
    // source sample rate
    int srcSampleRate = avCodecContext->sample_rate;
    // destination sample rate
    int dstSampleRate = 48000;
    // input channel layout
    uint64_t src_ch_layout = avCodecContext->channel_layout;
    // output channel layout
    uint64_t dst_ch_layout = AV_CH_LAYOUT_STEREO;
    // configure the SwrContext with these shared parameters
    swr_alloc_set_opts(swrContext, dst_ch_layout, dstFormat, dstSampleRate,
                       src_ch_layout, srcFormat, srcSampleRate, 0, NULL
    );
    // initialize the SwrContext
    swr_init(swrContext);

    // number of output channels
    outChannelCount = av_get_channel_layout_nb_channels(dst_ch_layout);
    LOGD("channel count %d ", outChannelCount);
    // output buffer for 16-bit, 48000 Hz, stereo PCM
    out_buffer = (uint8_t *) av_malloc(2 * 48000);
    env->ReleaseStringUTFChars(srcPath, originPath);
    return 0;
}
  • Create the OpenSL ES engine
int createOpenslEngine() {
    SLresult result;
    // thread-safe engine
    const SLEngineOption engineOptions[1] = {{(SLuint32) SL_ENGINEOPTION_THREADSAFE, (SLuint32) SL_BOOLEAN_TRUE}};
    // initializes the engine object and hands back a handle; if the fourth parameter
    // (number of interfaces to support) is zero, the fifth and sixth are ignored
    result = slCreateEngine(&engineObject, 1, engineOptions, 0, NULL, NULL);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to create the OpenSL ES engine");
        return -1;
    }

    // moves the object from unrealized to realized; the second parameter selects async
    result = (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to realize the engine object");
        return -1;
    }

    // obtains an interface exposed by the object, here the engine interface;
    // the second parameter is the interface ID, the third receives the interface object
    result = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to get the engine interface");
        return -1;
    }

    // creates the output mix via the engine interface; the third parameter is the number
    // of interfaces to support, and as before zero means the fourth and fifth are ignored
    const SLInterfaceID interfaceIds[1] = {SL_IID_ENVIRONMENTALREVERB}; // request the environmental reverb interface
    const SLboolean reqs[1] = {SL_BOOLEAN_TRUE};
    result = (*engineEngine)->CreateOutputMix(engineEngine, &outputMixObject, 1, interfaceIds,
                                              reqs);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to create the output mix");
        return -1;
    }

    // likewise, realize the output mix object
    result = (*outputMixObject)->Realize(outputMixObject, SL_BOOLEAN_FALSE);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to realize the output mix");
        return -1;
    }

    // the environmental reverb interface is optional; playback works whether or not it succeeds
    // since we requested EnvironmentalReverb above, we can try to fetch it here
    result = (*outputMixObject)->GetInterface(outputMixObject, SL_IID_ENVIRONMENTALREVERB,
                                              &outputMixEnvironmentalReverb);
    if (result == SL_RESULT_SUCCESS) {
        result = (*outputMixEnvironmentalReverb)->SetEnvironmentalReverbProperties(
                outputMixEnvironmentalReverb, &reverbSettings);
        if (result != SL_RESULT_SUCCESS) {
            LOGD("failed to set the reverb properties");
        }
    } else {
        LOGD("failed to get the environmental reverb interface");
    }

    return 0;
}
  • Create the player and the buffer queue, and register the callback
/**
 * Create the PCM playback format: sample rate, channel count,
 * bits per sample (s16le).
 */
int createBufferQueue(int sampleRate, int channels) {
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};


    int numChannels = 2;
    SLuint32 samplesPerSec = SL_SAMPLINGRATE_48; // note: OpenSL ES rates are in milliHertz
    SLuint32 bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
    SLuint32 containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
    // here channels = 2; the mono native-audio-jni.c sample uses SL_SPEAKER_FRONT_CENTER instead
    SLuint32 channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
    SLuint32 endianness = SL_BYTEORDER_LITTLEENDIAN;

    numChannels = channels;

    if (channels == 1) {
        channelMask = SL_SPEAKER_FRONT_CENTER;
    } else {
        // two or more channels
        channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
    }

    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, (SLuint32) numChannels, samplesPerSec,
                                   bitsPerSample, containerSize, channelMask, endianness};

    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[1] = {SL_IID_BUFFERQUEUE};
    const SLboolean req[1] = {SL_BOOLEAN_TRUE};
    result = (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject, &audioSrc, &audioSnk,
                                                1, ids, req);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to create the audio player");
        return -1;
    }


    result = (*bqPlayerObject)->Realize(bqPlayerObject, SL_BOOLEAN_FALSE);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to realize the audio player");
        return -1;
    }

    LOGD("---createBufferQueueAudioPlayer---");

    // get the play interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_PLAY, &bqPlayerPlay);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to get the play interface");
        return -1;
    }

    // get the buffer queue interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_BUFFERQUEUE,
                                             &bqPlayerBufferQueue);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to get the buffer queue interface");
        return -1;
    }

    // register the callback on the buffer queue
    result = (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, (void *) "1");
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to register the buffer queue callback");
        return -1;
    }

    // set the player's state to playing
    result = (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to set the play state to playing");
        return -1;
    }

    return 0;
}
  • Fetch the decoded data
void getPCM(void **pcm, size_t *size){
    int out_channer_nb = av_get_channel_layout_nb_channels(AV_CH_LAYOUT_STEREO);
    while (av_read_frame(avFormatContext, avPacket) >= 0) {
        if (avPacket->stream_index == streamIndex) {
            int ret = avcodec_send_packet(avCodecContext, avPacket);
            currentIndex++;
            if (ret >= 0) {
                ret = avcodec_receive_frame(avCodecContext, avFrame);
                LOGE("decoding currentIndex = %d", currentIndex);
                // stereo, 2 * 48000
                swr_convert(swrContext, &out_buffer, 48000 * 2, (const uint8_t **) avFrame->data, avFrame->nb_samples);
                bufferSize = av_samples_get_buffer_size(NULL, out_channer_nb, avFrame->nb_samples, AV_SAMPLE_FMT_S16, 1);
                *pcm = out_buffer;
                *size = bufferSize;
            }
            break; // after reading one frame we must break, otherwise the loop would run on forever
        }
    }
}
  • The callback: enqueue the fetched buffer data
// called when the speaker has finished playing a buffer
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{

    char * args = (char *)context;
    if (strcmp(args, "1") == 0){
        LOGE("callback from the buffer queue");
    } else {
        LOGE("triggered manually");
    }


    bufferLen = 0;
    //assert(NULL == context);
    getPCM(&buffer, &bufferLen);
    // for streaming playback, replace this test by logic to find and fill the next buffer
    if (NULL != buffer && 0 != bufferLen) {
        SLresult result;
        // enqueue another buffer
        result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, out_buffer, bufferSize);
        // the most likely other result is SL_RESULT_BUFFER_INSUFFICIENT,
        // which for this code example would indicate a programming error
        if (result != SL_RESULT_SUCCESS) {
            LOGD("enqueue failed");
        } else {
            LOGD("enqueue succeeded");
        }

    }
}
  • Putting it all together
JNIEXPORT void
Java_com_jason_ndk_ffmpeg_openeles_MainActivity_sound(JNIEnv *env, jobject thiz, jstring input) {

    int ret = createFFmpeg(env, input);

    if(ret < 0) {
        LOGE("failed to set up FFmpeg");
        releaseResource();
        return;
    }

    // initialize OpenSL ES
    ret = createOpenslEngine();
    if (ret < 0) {
        LOGE("failed to create the OpenSL ES engine");
        releaseResource();
        return;
    }

    ret = createBufferQueue(avCodecContext->sample_rate, outChannelCount);
    if (ret < 0) {
        LOGE("failed to create the buffer queue player");
        releaseResource();
        return;
    }

    LOGD("start av_read_frame");
    // trigger the callback manually once to start playback
    bqPlayerCallback(bqPlayerBufferQueue, (void *) "0");

}

And with that, the feature works.

A second approach

Here I tried a different idea: move the enqueueing of decoded data into the decode loop itself. The catch is that you then have to compute how long each frame plays and manually sleep for that duration before doing the next round of decoding and enqueueing... repeating this over and over to drive playback.

  • Drop the queue callback. Modify the code above:
int createBufferQueue(int sampleRate, int channels) {
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};


    int numChannels = 2;
    SLuint32 samplesPerSec = SL_SAMPLINGRATE_48; // note: OpenSL ES rates are in milliHertz
    SLuint32 bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
    SLuint32 containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
    // here channels = 2; the mono native-audio-jni.c sample uses SL_SPEAKER_FRONT_CENTER instead
    SLuint32 channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
    SLuint32 endianness = SL_BYTEORDER_LITTLEENDIAN;

    numChannels = channels;

    if (channels == 1) {
        channelMask = SL_SPEAKER_FRONT_CENTER;
    } else {
        // two or more channels
        channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
    }

    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, (SLuint32) numChannels, samplesPerSec,
                                   bitsPerSample, containerSize, channelMask, endianness};

    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[1] = {SL_IID_BUFFERQUEUE};
    const SLboolean req[1] = {SL_BOOLEAN_TRUE};
    result = (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject, &audioSrc, &audioSnk,
                                                1, ids, req);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to create the audio player");
        return JNI_FALSE;
    }


    result = (*bqPlayerObject)->Realize(bqPlayerObject, SL_BOOLEAN_FALSE);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to realize the audio player");
        return JNI_FALSE;
    }

    LOGD("---createBufferQueueAudioPlayer---");

    // get the play interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_PLAY, &bqPlayerPlay);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to get the play interface");
        return JNI_FALSE;
    }

    // get the buffer queue interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_BUFFERQUEUE,
                                             &bqPlayerBufferQueue);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to get the buffer queue interface");
        return JNI_FALSE;
    }

    // do NOT register the buffer queue callback; this is the only difference
    /*result = (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback,
                                                      (void *) "1");
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to register the buffer queue callback");
        return JNI_FALSE;
    }*/

    // set the player's state to playing
    result = (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
    if (result != SL_RESULT_SUCCESS) {
        LOGD("failed to set the play state to playing");
        return JNI_FALSE;
    }

    return JNI_TRUE;
}
  • The queue callback: move the getPCM decode loop into it
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context) {
    char * args = (char *)context;
    if (strcmp(args, "1") == 0){
        LOGE("callback from the buffer queue");
        return;
    }
    LOGE("triggered manually");

    while (av_read_frame(avFormatContext, avPacket) >= 0) {
        if (avPacket->stream_index == streamIndex) {
            int ret = avcodec_send_packet(avCodecContext, avPacket);
            currentIndex++;
            while (ret >= 0) {
                ret = avcodec_receive_frame(avCodecContext, avFrame);
                swr_convert(swrContext, &out_buffer, 2 * 48000,
                            (const uint8_t **) avFrame->data, avFrame->nb_samples);

                // compute the required buffer size from the given parameters
                int dst_buffer_size = av_samples_get_buffer_size(NULL, outChannelCount, avFrame->nb_samples, AV_SAMPLE_FMT_S16, 1);
                if (dst_buffer_size <= 0) {
                    break;
                }

                // an MP3 frame holds 1152 samples, an AAC frame 1024 (or 2048)
                LOGD("WRITE TO AUDIOTRACK %d", dst_buffer_size); // 4608
                SLresult result;
                result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, out_buffer, dst_buffer_size);
                if (result != SL_RESULT_SUCCESS) {
                    LOGD("enqueue failed");
                }
                // AAC: 1024 * 1000 / 48000 = 21.34 ms per frame
                // MP3: 1152 * 1000 / 48000 = 24 ms per frame
                usleep(1000 * 24);
            }
            LOGE("decoding frame %d", currentIndex++);
        }
    }

}

Judging by the playback results, though, I recommend the first approach: let the OpenSL ES buffer callback drive the loading of each frame, with no need to work out each frame's playback duration. Played that way, the audio comes out without glitches.

Closing remarks

That is how I play audio files with FFmpeg + OpenSL ES. If you spot any mistakes, corrections are welcome.
