ios – Can AVAudioEngine read from a file, process with audio units, and write to a file, faster than real-time?

I'm working on an iOS app that uses AVAudioEngine for various things, including recording audio to a file, applying effects to that audio using audio units, and playing back the audio with the effect applied. I also use a tap to write the output to a file. When this is done, it writes to the file in real time as the audio plays back.

Is it possible to set up an AVAudioEngine graph that reads from a file, processes the sound with an audio unit, and outputs to a file, but faster than real time (i.e., as fast as the hardware can process it)? The use case is to output a few minutes of audio with effects applied, and I certainly don't want to wait a few minutes for it to be processed.

Edit: here is the code I use to set up the AVAudioEngine graph and play a sound file:

AVAudioEngine* engine = [[AVAudioEngine alloc] init];

AVAudioPlayerNode* player = [[AVAudioPlayerNode alloc] init];
[engine attachNode:player];

self.player = player;
self.engine = engine;

if (!self.distortionEffect) {
    self.distortionEffect = [[AVAudioUnitDistortion alloc] init];
    [self.engine attachNode:self.distortionEffect];
    [self.engine connect:self.player to:self.distortionEffect format:[self.distortionEffect outputFormatForBus:0]];
    AVAudioMixerNode* mixer = [self.engine mainMixerNode];
    [self.engine connect:self.distortionEffect to:mixer format:[mixer outputFormatForBus:0]];
}

[self.distortionEffect loadFactoryPreset:AVAudioUnitDistortionPresetDrumsBitBrush];

NSError* error;
if (![self.engine startAndReturnError:&error]) {
    NSLog(@"error: %@",error);
} else {
    NSURL* fileURL = [[NSBundle mainBundle] URLForResource:@"test2" withExtension:@"mp3"];
    AVAudioFile* file = [[AVAudioFile alloc] initForReading:fileURL error:&error];

    if (error) {
        NSLog(@"error: %@",error);
    } else {
        [self.player scheduleFile:file atTime:nil completionHandler:nil];
        [self.player play];
    }
}

The above code plays the sound from the test2.mp3 file and applies the AVAudioUnitDistortionPresetDrumsBitBrush distortion preset in real time.

I then modified the code above by adding these lines after [self.player play]:

[self.engine stop];
[self renderAudioAndWriteToFile];

I modified the renderAudioAndWriteToFile method that Vladimir provided so that, instead of allocating a new AVAudioEngine on its first line, it simply uses self.engine, which has already been set up.
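
The assumed shape of that modification, for reference (self.engine is the property from my code above; the rest of the body is unchanged from Vladimir's method below):

- (void)renderAudioAndWriteToFile {
    // Reuse the already-configured engine instead of allocating a new AVAudioEngine.
    AVAudioOutputNode *outputNode = self.engine.outputNode;
    // ... remainder identical to the answer's implementation below ...
}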

However, in renderAudioAndWriteToFile it logs "Can not render audio unit", because AudioUnitRender returns a status of kAudioUnitErr_Uninitialized.
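
A likely cause, inferred from the answer below rather than verified: [self.engine stop] tears down the output node's underlying audio unit, whereas the answer's render method only pauses the engine, which keeps that unit initialized. The assumed working sequence would be:

[self.player play];
[self.engine pause];   // pause instead of stop, so the output audio unit stays initialized
[self renderAudioAndWriteToFile];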

Edit 2: I should mention that I'd be perfectly happy to convert the AVAudioEngine code I posted to use the C APIs if that would make things easier. However, I would want the code to produce the same output as the AVAudioEngine code (including the use of the factory preset shown above).

Solution

1. Configure your engine and player node.
2. Call the play method on your player node.
3. Pause your engine.
4. Get the audio unit from the AVAudioOutputNode (audioEngine.outputNode) with this method.
5. Render from the audio unit with AudioUnitRender in a loop, and write the audio buffer list to a file with Extended Audio File Services.

Example:

Audio engine configuration

- (void)configureAudioEngine {
    self.engine = [[AVAudioEngine alloc] init];
    self.playerNode = [[AVAudioPlayerNode alloc] init];
    [self.engine attachNode:self.playerNode];
    AVAudioUnitDistortion *distortionEffect = [[AVAudioUnitDistortion alloc] init];
    [self.engine attachNode:distortionEffect];
    [self.engine connect:self.playerNode to:distortionEffect format:[distortionEffect outputFormatForBus:0]];
    self.mixer = [self.engine mainMixerNode];
    [self.engine connect:distortionEffect to:self.mixer format:[self.mixer outputFormatForBus:0]];
    [distortionEffect loadFactoryPreset:AVAudioUnitDistortionPresetDrumsBitBrush];
    NSError* error;
    if (![self.engine startAndReturnError:&error])
        NSLog(@"Can't start engine: %@",error);
    else
        [self scheduleFileToPlay];
}

- (void)scheduleFileToPlay {
    NSError* error;
    NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"filename" withExtension:@"m4a"];
    self.file = [[AVAudioFile alloc] initForReading:fileURL error:&error];
    if (self.file)
        [self.playerNode scheduleFile:self.file atTime:nil completionHandler:nil];
    else
        NSLog(@"Can't read file: %@",error);
}

Render method

- (void)renderAudioAndWriteToFile {
    [self.playerNode play];
    [self.engine pause];
    AVAudioOutputNode *outputNode = self.engine.outputNode;
    AudioStreamBasicDescription const *audioDescription = [outputNode outputFormatForBus:0].streamDescription;
    NSString *path = [self filePath];
    ExtAudioFileRef audioFile = [self createAndSetupExtAudioFileWithASBD:audioDescription andFilePath:path];
    if (!audioFile)
        return;
    AVURLAsset *asset = [AVURLAsset assetWithURL:self.file.url];
    NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
    NSUInteger lengthInFrames = duration * audioDescription->mSampleRate;
    const NSUInteger kBufferLength = 4096;
    AudioBufferList *bufferList = AEAllocateAndInitAudioBufferList(*audioDescription,kBufferLength);
    AudioTimeStamp timeStamp;
    memset(&timeStamp, 0, sizeof(timeStamp));
    timeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    OSStatus status = noErr;
    for (NSUInteger i = kBufferLength; i < lengthInFrames; i += kBufferLength) {
        status = [self renderToBufferList:bufferList writeToFile:audioFile bufferLength:kBufferLength timeStamp:&timeStamp];
        if (status != noErr)
            break;
    }
    if (status == noErr && timeStamp.mSampleTime < lengthInFrames) {
        NSUInteger restBufferLength = (NSUInteger) (lengthInFrames - timeStamp.mSampleTime);
        AudioBufferList *restBufferList = AEAllocateAndInitAudioBufferList(*audioDescription,restBufferLength);
        status = [self renderToBufferList:restBufferList writeToFile:audioFile bufferLength:restBufferLength timeStamp:&timeStamp];
        AEFreeAudioBufferList(restBufferList);
    }
    AEFreeAudioBufferList(bufferList);
    ExtAudioFileDispose(audioFile);
    if (status != noErr)
        NSLog(@"An error has occurred");
    else
        NSLog(@"Finished writing to file at path: %@",path);
}
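
Putting the two pieces together, the call sequence from the app side is simply (hypothetical driver code, not part of the original answer):

[self configureAudioEngine];      // starts the engine and schedules the file
[self renderAudioAndWriteToFile]; // plays, pauses, and renders offline to disk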

- (NSString *)filePath {
    NSArray *documentsFolders =
            NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask,YES);
    NSString *fileName = [NSString stringWithFormat:@"%@.m4a",[[NSUUID UUID] UUIDString]];
    NSString *path = [documentsFolders[0] stringByAppendingPathComponent:fileName];
    return path;
}

- (ExtAudioFileRef)createAndSetupExtAudioFileWithASBD:(AudioStreamBasicDescription const *)audioDescription
                                          andFilePath:(NSString *)path {
    AudioStreamBasicDescription destinationFormat;
    memset(&destinationFormat, 0, sizeof(destinationFormat));
    destinationFormat.mChannelsPerFrame = audioDescription->mChannelsPerFrame;
    destinationFormat.mSampleRate = audioDescription->mSampleRate;
    destinationFormat.mFormatID = kAudioFormatMPEG4AAC;
    ExtAudioFileRef audioFile;
    OSStatus status = ExtAudioFileCreateWithURL(
            (__bridge CFURLRef) [NSURL fileURLWithPath:path], kAudioFileM4AType,
            &destinationFormat, NULL, kAudioFileFlags_EraseFile, &audioFile
    );
    if (status != noErr) {
        NSLog(@"Can not create ext audio file");
        return NULL;
    }
    UInt32 codecManufacturer = kAppleSoftwareAudioCodecManufacturer;
    status = ExtAudioFileSetProperty(
            audioFile, kExtAudioFileProperty_CodecManufacturer, sizeof(UInt32), &codecManufacturer
    );
    status = ExtAudioFileSetProperty(
            audioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), audioDescription
    );
    // An initial zero-frame async write primes ExtAudioFile's async write machinery.
    status = ExtAudioFileWriteAsync(audioFile, 0, NULL);
    if (status != noErr) {
        NSLog(@"Can not setup ext audio file");
        return NULL;
    }
    return audioFile;
}

- (OSStatus)renderToBufferList:(AudioBufferList *)bufferList
                   writeToFile:(ExtAudioFileRef)audioFile
                  bufferLength:(NSUInteger)bufferLength
                     timeStamp:(AudioTimeStamp *)timeStamp {
    [self clearBufferList:bufferList];
    AudioUnit outputUnit = self.engine.outputNode.audioUnit;
    AudioUnitRenderActionFlags flags = 0;
    OSStatus status = AudioUnitRender(outputUnit, &flags, timeStamp, 0, (UInt32)bufferLength, bufferList);
    if (status != noErr) {
        NSLog(@"Can not render audio unit");
        return status;
    }
    timeStamp->mSampleTime += bufferLength;
    status = ExtAudioFileWrite(audioFile, (UInt32)bufferLength, bufferList);
    if (status != noErr)
        NSLog(@"Can not write audio to file");
    return status;
}

- (void)clearBufferList:(AudioBufferList *)bufferList {
    for (int bufferIndex = 0; bufferIndex < bufferList->mNumberBuffers; bufferIndex++) {
        memset(bufferList->mBuffers[bufferIndex].mData, 0, bufferList->mBuffers[bufferIndex].mDataByteSize);
    }
}

I used a few functions from this cool framework:

AudioBufferList *AEAllocateAndInitAudioBufferList(AudioStreamBasicDescription audioFormat,int frameCount) {
    int numberOfBuffers = audioFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved ? audioFormat.mChannelsPerFrame : 1;
    int channelsPerBuffer = audioFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved ? 1 : audioFormat.mChannelsPerFrame;
    int bytesPerBuffer = audioFormat.mBytesPerFrame * frameCount;
    AudioBufferList *audio = malloc(sizeof(AudioBufferList) + (numberOfBuffers-1)*sizeof(AudioBuffer));
    if ( !audio ) {
        return NULL;
    }
    audio->mNumberBuffers = numberOfBuffers;
    for ( int i=0; i<numberOfBuffers; i++ ) {
        if ( bytesPerBuffer > 0 ) {
            audio->mBuffers[i].mData = calloc(bytesPerBuffer,1);
            if ( !audio->mBuffers[i].mData ) {
                for ( int j=0; j<i; j++ ) free(audio->mBuffers[j].mData);
                free(audio);
                return NULL;
            }
        } else {
            audio->mBuffers[i].mData = NULL;
        }
        audio->mBuffers[i].mDataByteSize = bytesPerBuffer;
        audio->mBuffers[i].mNumberChannels = channelsPerBuffer;
    }
    return audio;
}

void AEFreeAudioBufferList(AudioBufferList *bufferList ) {
    for ( int i=0; i<bufferList->mNumberBuffers; i++ ) {
        if ( bufferList->mBuffers[i].mData ) free(bufferList->mBuffers[i].mData);
    }
    free(bufferList);
}
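
For completeness, a standalone usage sketch of these two helpers (the stereo 44.1 kHz format here is an assumption for illustration; the render method above instead takes the ASBD from the engine's output node):

AVAudioFormat *format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0 channels:2];
AudioBufferList *bufferList = AEAllocateAndInitAudioBufferList(*format.streamDescription, 4096); // 4096 frames
if (bufferList) {
    // ... fill or render into bufferList ...
    AEFreeAudioBufferList(bufferList);
}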