I'm trying to stream audio from the microphone to another iPhone via Apple's Multipeer Connectivity framework. For audio capture and playback I'm using AVAudioEngine (many thanks to Rhythmic Fistman's answer here).
I receive data from the microphone by installing a tap on the input node, which gives me an AVAudioPCMBuffer that I then convert to a [UInt8] array and stream to the other phone. But when I convert the array back to an AVAudioPCMBuffer, I get an EXC_BAD_ACCESS exception, and the crash points at the method that converts the byte array back to an AVAudioPCMBuffer.
Here is the code for the tap that converts and streams the input:

```swift
input.installTap(onBus: 0, bufferSize: 2048, format: input.inputFormat(forBus: 0), block: {
    (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
    let audioBuffer = self.typetobinary(buffer)
    stream.write(audioBuffer, maxLength: audioBuffer.count)
})
```
My two functions for converting the data (taken from Martin R's answer here):

```swift
func binarytotype<T>(_ value: [UInt8], _: T.Type) -> T {
    return value.withUnsafeBufferPointer {
        UnsafeRawPointer($0.baseAddress!).load(as: T.self)
    }
}

func typetobinary<T>(_ value: T) -> [UInt8] {
    var data = [UInt8](repeating: 0, count: MemoryLayout<T>.size)
    data.withUnsafeMutableBufferPointer {
        UnsafeMutableRawPointer($0.baseAddress!).storeBytes(of: value, as: T.self)
    }
    return data
}
```
And on the receiving end:

```swift
func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {
    if streamName == "voice" {
        stream.schedule(in: RunLoop.current, forMode: .defaultRunLoopMode)
        stream.open()

        var bytes = [UInt8](repeating: 0, count: 8)
        stream.read(&bytes, maxLength: bytes.count)

        let audioBuffer = self.binarytotype(bytes, AVAudioPCMBuffer.self) // Here is where the app crashes

        do {
            try engine.start()
            audioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)
            audioPlayer.play()
        } catch let error {
            print(error.localizedDescription)
        }
    }
}
```
The thing is that I can convert the byte array back and forth and play sound from it before streaming it (on the same phone), but I can't create the AVAudioPCMBuffer on the receiving end. Does anyone know why the conversion doesn't work there? Is this the right approach at all? Any help, thoughts or input would be much appreciated.
Your AVAudioPCMBuffer serialization/deserialization is wrong.
Casting in Swift 3 has changed quite a lot, and it seems to require more copying than Swift 2 did.
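The underlying problem: AVAudioPCMBuffer is a class, and `MemoryLayout<T>.size` for a class type is the size of the reference (8 bytes on a 64-bit platform), not of the object or its sample data, so `typetobinary` serializes a pointer that means nothing on the other device. A minimal sketch of this (the `Example` class is hypothetical, standing in for AVAudioPCMBuffer):

```swift
// MemoryLayout of a class type measures the reference, not the instance.
class Example {                            // hypothetical stand-in for AVAudioPCMBuffer
    var samples = [Float](repeating: 0, count: 1024)
}

print(MemoryLayout<Example>.size)          // 8 on 64-bit: just the pointer
print(MemoryLayout<Float>.size * 1024)     // 4096: the size of the actual sample data
```

So the 8 bytes you read on the receiving side are being reinterpreted as an object reference into memory that doesn't exist there, hence the EXC_BAD_ACCESS.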
Here's how you can convert between [UInt8] and AVAudioPCMBuffer. N.B. this code assumes mono float data at 44.1 kHz; you may want to change that.
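Rather than hard-coding the format on both sides, you could make it part of your stream protocol. A sketch of one possible framing, with hypothetical `packHeader`/`unpackHeader` helpers that prefix each chunk with a little-endian 8-byte header:

```swift
// Hypothetical framing: prefix the audio bytes with sample rate and channel
// count so the receiver can rebuild the AVAudioFormat instead of hard-coding it.
func packHeader(sampleRate: UInt32, channels: UInt32) -> [UInt8] {
    var bytes = [UInt8]()
    for value in [sampleRate, channels] {
        var v = value.littleEndian
        withUnsafeBytes(of: &v) { bytes.append(contentsOf: $0) }
    }
    return bytes
}

func unpackHeader(_ bytes: [UInt8]) -> (sampleRate: UInt32, channels: UInt32) {
    // Reassemble each UInt32 from four little-endian bytes.
    func read(_ offset: Int) -> UInt32 {
        return (0..<4).reduce(UInt32(0)) { $0 | UInt32(bytes[offset + $1]) << UInt32(8 * $1) }
    }
    return (read(0), read(4))
}

let header = packHeader(sampleRate: 44100, channels: 1)
let decoded = unpackHeader(header)   // (44100, 1)
```

The receiver would read the 8-byte header first, build the matching AVAudioFormat, and then treat the rest of the stream as raw sample bytes.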
```swift
func copyAudioBufferBytes(_ audioBuffer: AVAudioPCMBuffer) -> [UInt8] {
    let srcLeft = audioBuffer.floatChannelData![0]
    let bytesPerFrame = audioBuffer.format.streamDescription.pointee.mBytesPerFrame
    let numBytes = Int(bytesPerFrame * audioBuffer.frameLength)

    // initialize bytes to 0 (how to avoid?)
    var audioByteArray = [UInt8](repeating: 0, count: numBytes)

    // copy data from buffer
    srcLeft.withMemoryRebound(to: UInt8.self, capacity: numBytes) { srcByteData in
        audioByteArray.withUnsafeMutableBufferPointer {
            $0.baseAddress!.initialize(from: srcByteData, count: numBytes)
        }
    }

    return audioByteArray
}

func bytesToAudioBuffer(_ buf: [UInt8]) -> AVAudioPCMBuffer {
    // format assumption! make this part of your protocol?
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
    let frameLength = UInt32(buf.count) / fmt.streamDescription.pointee.mBytesPerFrame

    let audioBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: frameLength)
    audioBuffer.frameLength = frameLength

    let dstLeft = audioBuffer.floatChannelData![0]
    // for stereo
    // let dstRight = audioBuffer.floatChannelData![1]

    buf.withUnsafeBufferPointer {
        let src = UnsafeRawPointer($0.baseAddress!).bindMemory(to: Float.self, capacity: Int(frameLength))
        dstLeft.initialize(from: src, count: Int(frameLength))
    }

    return audioBuffer
}
```
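A quick round-trip check of the two helpers above (a sketch; it assumes the two functions are in scope and uses the non-failable AVAudioFormat/AVAudioPCMBuffer initializers of the Swift 3-era SDK — on newer SDKs both return optionals):

```swift
import AVFoundation

// Build a short mono 44.1 kHz buffer, serialize it, and reconstruct it.
let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
let original = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: 4)
original.frameLength = 4
for i in 0..<4 { original.floatChannelData![0][i] = Float(i) * 0.25 }

let bytes = copyAudioBufferBytes(original)   // 4 frames * 4 bytes per frame = 16 bytes
let restored = bytesToAudioBuffer(bytes)

// The restored buffer should match the original frame for frame.
assert(restored.frameLength == original.frameLength)
assert(restored.floatChannelData![0][3] == 0.75)
```

If this passes on one phone but the received audio is still wrong across devices, the remaining suspects are the stream framing (partial `read`s) rather than the conversion itself.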