I am building an app that lets users apply audio filters, such as Reverb and Boost, to recorded audio.
I have been unable to find any workable information on how to apply a filter to the file itself, which I need to do because the processed file must later be uploaded to a server.
I am currently using AudioKit for visualization, and I know it is capable of audio processing, but only during playback. Any suggestions for further research would be appreciated.
Solution
AudioKit has an offline render node that does not require iOS 11. Here is an example. The player.schedule(...) and player.play(at:) calls are required: if you start playback with a plain player.play(), the AVAudioPlayerNode underlying AKAudioPlayer will block the calling thread while it waits for the next render.
import UIKit
import AudioKit

class ViewController: UIViewController {

    var player: AKAudioPlayer?
    var reverb = AKReverb()
    var boost = AKBooster()
    var offlineRender = AKOfflineRenderNode()

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let url = Bundle.main.url(forResource: "theFunkiestFunkingFunk", withExtension: "mp3") else { return }

        var audioFile: AKAudioFile?
        do {
            audioFile = try AKAudioFile.init(forReading: url)
            player = try AKAudioPlayer.init(file: audioFile!)
        } catch {
            print(error)
            return
        }
        guard let player = player else { return }

        // Wire the effect chain and route the output through the offline render node
        player >>> reverb >>> boost >>> offlineRender
        AudioKit.output = offlineRender
        AudioKit.start()

        let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let dstURL = docs.appendingPathComponent("rendered.caf")

        offlineRender.internalRenderEnabled = false
        player.schedule(from: 0, to: player.duration, avTime: nil)
        let sampleTimeZero = AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate)
        player.play(at: sampleTimeZero)
        do {
            try offlineRender.renderToURL(dstURL, seconds: player.duration)
        } catch {
            print(error)
            return
        }
        offlineRender.internalRenderEnabled = true
        print("Done! Rendered to " + dstURL.path)
    }
}
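Since the question mentions uploading the processed file to a server, here is a minimal sketch of how the rendered .caf could be sent with a plain URLSession upload task. The endpoint URL and Content-Type header are placeholders, not part of the original answer; adapt them to your backend.

```swift
import Foundation

// Hypothetical endpoint; replace with your server's actual upload URL.
let uploadURL = URL(string: "https://example.com/upload")!

func upload(fileAt fileURL: URL) {
    var request = URLRequest(url: uploadURL)
    request.httpMethod = "POST"
    request.setValue("audio/x-caf", forHTTPHeaderField: "Content-Type")

    // uploadTask(with:fromFile:) streams the file from disk,
    // so even long renders do not need to be loaded into memory.
    let task = URLSession.shared.uploadTask(with: request, fromFile: fileURL) { _, response, error in
        if let error = error {
            print("Upload failed: \(error)")
        } else if let http = response as? HTTPURLResponse {
            print("Server responded with status \(http.statusCode)")
        }
    }
    task.resume()
}
```

Call upload(fileAt: dstURL) after renderToURL returns successfully, since the file only exists on disk once the offline render has completed.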