I need a function to be notified within 150ms–200ms after the last buffer has finished playing…
Through the callback method I know how many buffers are enqueued.
I know the buffer size and how many bytes the last buffer was filled with.
First I initialize a number of buffers, fill them with audio data, and enqueue them. Then, whenever the Audio Queue needs a buffer filled, it invokes the callback and I fill the buffer with data.
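(For context, a minimal sketch of that setup; this is my reconstruction rather than the asker's actual code, and HandleOutputBuffer stands in for whatever output callback the project really uses:)

// Hypothetical setup sketch: create the output queue, allocate the playback
// buffers, prime each one by hand, then start the queue.
AudioQueueRef queue;
AudioQueueNewOutput(&dataFormat, HandleOutputBuffer, (__bridge void *)self,
                    CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &queue);

for (int i = 0; i < kNumberPlaybackBuffers; i++) {
    AudioQueueBufferRef buffer;
    AudioQueueAllocateBuffer(queue, kAQDefaultBufSize, &buffer);
    HandleOutputBuffer((__bridge void *)self, queue, buffer); // fill + enqueue
}
AudioQueueStart(queue, NULL);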
When there is no more audio data available, the Audio Queue sends me one last empty buffer, so I fill it with whatever data I have left:
if (sharedCache.numberOfToTalPackets > 0) {
    if (currentlyReadingBufferIndex == [sharedCache.baseAudioCache count] - 1) {
        inBuffer->mAudioDataByteSize = (UInt32)bytesFilled;
        lastEnqueudBufferSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(inAQ, inBuffer, (UInt32)packetsFilled, packetDescs);
        if (err) {
            [self failWithErrorCode:err customError:AP_AUdio_QUEUE_ENQUEUE_Failed];
        }
        printf("if that was the last free packet description, then enqueue the buffer\n");
        // go to the next item on the keepbuffer array
        isBufferFilled = YES;
        [self incrementBufferUsedCount];
        return;
    }
}
When the Audio Queue asks for more data through the callback and I have none left, I start counting the buffers down. When the count reaches zero, meaning there is only one buffer left in flight, I try to stop the audio queue at the moment it finishes playing.
- (void)decrementBufferUsedCount
{
    if (buffersUsed > 0) {
        buffersUsed--;
        printf("buffer on the queue %i\n", buffersUsed);
        if (buffersUsed == 0) {
            NSLog(@"playback is finished\n");
            // end playback
            isPlayBackDone = YES;
            double sampleRate = dataFormat.mSampleRate;
            double bufferDuration = lastEnqueudBufferSize / sampleRate;
            double estimatedTimeNeded = bufferDuration * 1;
            [self performSelector:@selector(stopPlayer) withObject:nil afterDelay:estimatedTimeNeded];
        }
    }
}

- (void)stopPlayer
{
    @synchronized(self) {
        state = AP_STOPPING;
    }
    err = AudioQueueStop(queue, TRUE);
    if (err) {
        [self failWithErrorCode:err customError:AP_AUdio_QUEUE_STOP_Failed];
    } else {
        @synchronized(self) {
            state = AP_STOPPED;
            NSLog(@"Stopped\n");
        }
    }
}
However, it seems I can't get precise timing here: the code above stops the player early. The audio also gets cut off early if I do the following:
double bufferDuration = XMAQDefaultBufSize / sampleRate;
double estimatedTimeNeded = bufferDuration * 1;
If I increase the 1 to 2 I get some delay instead, because the buffer size is large; 1.5 seems to be the optimal value for now, but I don't understand why lastEnqueudBufferSize / sampleRate isn't working.
Details of the audio file and the buffers:
The audio file has a 22050 Hz sample rate; it is a VBR file format with no bitrate information available.

#define kNumberPlaybackBuffers 4
#define kAQDefaultBufSize 16384
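(A sanity check on those numbers, my addition rather than part of the original question: kAQDefaultBufSize is a byte count, while a duration needs a frame count, and for a VBR compressed format the bytes-to-frames ratio isn't even constant. Assuming, purely for illustration, 16-bit mono LPCM:)

// Illustrative arithmetic only; the 2-bytes-per-frame figure is an assumption
// (16-bit mono LPCM) and does NOT hold for the VBR file in question.
double sampleRate      = 22050.0;
double bytesPerFrame   = 2.0;                          // assumed, not from the post
double framesPerBuffer = 16384.0 / bytesPerFrame;      // 8192 frames
double bufferSeconds   = framesPerBuffer / sampleRate; // ~0.372 s per buffer
// Dividing raw bytes by the sample rate instead (16384 / 22050 ~= 0.743 s)
// overestimates by exactly the bytes-per-frame factor, which may be why a
// fudge multiplier between 1 and 2 seems to help.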
Solution
I found a simpler way to get the same result (+/- 10ms). After setting up your output queue with AudioQueueNewOutput(), you initialize an AudioQueueTimelineRef to be used in your output callback (the ticksToSeconds function is included in the AudioQueueProcessingTap method further down). Don't forget to import <mach/mach_time.h>.
// after AudioQueueNewOutput()
AudioQueueTimelineRef timeLine; // ivar
AudioQueueCreateTimeline(queue, &timeLine);
Then in your output callback you call AudioQueueGetCurrentTime(). Caveat: the queue must actually be playing for the timestamps to be valid, so for very short files you may need to use the AudioQueueProcessingTap method below.
AudioTimeStamp timestamp;
AudioQueueGetCurrentTime(queue, self->timeLine, &timestamp, NULL);
The timestamp ties the currently playing sample to the current machine time. With that information we can get the exact machine time in the future at which the last sample will be played.
Float64 samplesLeft = self->frameCount - timestamp.mSampleTime; // samples in file - current sample
Float64 secondsLeft = samplesLeft / self->sampleRate;           // seconds of audio left to play
UInt64 ticksLeft = secondsLeft / ticksToSeconds();              // seconds converted to machine ticks
UInt64 machTimeFinish = timestamp.mHostTime + ticksLeft;        // host time of current sample + ticks left
Now that we have this future machine time, we can use it to accurately time whatever it is you want to do.
UInt64 currentMachTime = mach_absolute_time();
UInt64 ticksFromNow = machTimeFinish - currentMachTime;
float secondsFromNow = ticksFromNow * ticksToSeconds();
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    // do the thing!!!
    printf("Giggety");
});
If GCD's dispatch_after is not accurate enough, there are ways to set up a precision timer.
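For example (my sketch, not part of the original answer): mach_wait_until() blocks a thread until an absolute host time, so you can park a background thread on the machTimeFinish computed above:

// Hedged sketch: mach_wait_until() sleeps until an absolute mach host time.
#import <mach/mach_time.h>

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
    mach_wait_until(machTimeFinish);             // wake at the exact host time
    dispatch_async(dispatch_get_main_queue(), ^{
        // do the thing!!!
    });
});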
Using an AudioQueueProcessingTap
You can get pretty low response times from an AudioQueueProcessingTap. Your callback essentially sits in-line in the audio stream. The MyObject type is just whatever self is in your code (ARC bridging is used here to get self inside the C function). Checking ioFlags tells you when the stream starts and finishes, and for an output callback ioTimeStamp describes the future time at which the first sample in the callback will hit the speaker, so this is the place to be exact. I've added a couple of convenience functions for converting machine time to seconds.
#import <mach/mach_time.h>

double getTimeConversion(){
    double timecon;
    mach_timebase_info_data_t tinfo;
    kern_return_t kerror;
    kerror = mach_timebase_info(&tinfo);
    timecon = (double)tinfo.numer / (double)tinfo.denom;
    return timecon;
}

double ticksToSeconds(){
    static double ticksToSeconds = 0;
    if (!ticksToSeconds) {
        ticksToSeconds = getTimeConversion() * 0.000000001;
    }
    return ticksToSeconds;
}

void processingTapCallback(void *                     inClientData,
                           AudioQueueProcessingTapRef inAQTap,
                           UInt32                     inNumberFrames,
                           AudioTimeStamp *           ioTimeStamp,
                           UInt32 *                   ioFlags,
                           UInt32 *                   outNumberFrames,
                           AudioBufferList *          ioData)
{
    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_EndOfStream) {
        Float64 sampleTime;
        UInt32 frameCount;
        AudioQueueProcessingTapGetQueueTime(inAQTap, &sampleTime, &frameCount);
        Float64 samplesInThisCallback = self->frameCount - sampleTime; // file sample count - queue's current sample
        // double secondsInCallback = outNumberFrames / (double)self->sampleRate; // outNumberFrames was inaccurate
        double secondsInCallback = samplesInThisCallback / (double)self->sampleRate;
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (uint64_t)(secondsInCallback / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}

- (void)lastSampleDoneAt:(uint64_t)lastSampTime
{
    uint64_t currentTime = mach_absolute_time();
    if (lastSampTime > currentTime) {
        double secondsFromNow = (lastSampTime - currentTime) * ticksToSeconds();
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(secondsFromNow * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            // do the thing!!!
        });
    } else {
        // do the thing!!!
    }
}
You set it up like this after AudioQueueNewOutput and before AudioQueueStart. Note that the bridged self is passed as the inClientData argument: the queue holds self as a void * so it can be used in the callback, where we bridge it back to an Objective-C object.
AudioStreamBasicDescription format;
AudioQueueProcessingTapRef tapRef;
UInt32 maxFrames = 0;
AudioQueueProcessingTapNew(queue, processingTapCallback, (__bridge void *)self,
                           kAudioQueueProcessingTap_PostEffects, &maxFrames, &format, &tapRef);
You could also grab the finish machine time as soon as the file starts playing. That is a little cleaner, too.
void processingTapCallback(void *                     inClientData,
                           AudioQueueProcessingTapRef inAQTap,
                           UInt32                     inNumberFrames,
                           AudioTimeStamp *           ioTimeStamp,
                           UInt32 *                   ioFlags,
                           UInt32 *                   outNumberFrames,
                           AudioBufferList *          ioData)
{
    MyObject *self = (__bridge MyObject *)inClientData;
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp, ioFlags, outNumberFrames, ioData);
    if (*ioFlags == kAudioQueueProcessingTap_StartOfStream) {
        // host time of the first sample + the whole file's duration in ticks
        uint64_t timeOfLastSampleLeavingSpeaker = ioTimeStamp->mHostTime + (uint64_t)(self->audioDurSeconds / ticksToSeconds());
        [self lastSampleDoneAt:timeOfLastSampleLeavingSpeaker];
    }
}