After waking the app from suspended mode, the sound does not play at all.
When the app is in the foreground, the sound plays immediately after the didReceiveRemoteNotification: method is called.
What is the proper way to play a sound immediately in didReceiveRemoteNotification: when the app is woken from suspended mode?
Here is some code (the speech manager class):
- (void)textToSpeechWithMessage:(NSString *)message andLanguageCode:(NSString *)languageCode {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *error = nil;
    DLog(@"Activating audio session");
    if (![audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
                       withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionMixWithOthers
                             error:&error]) {
        DLog(@"Unable to set audio session category: %@", error);
    }
    BOOL result = [audioSession setActive:YES error:&error];
    if (!result) {
        DLog(@"Error activating audio session: %@", error);
    } else {
        AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:message];
        [utterance setRate:0.5f];
        [utterance setVolume:0.8f];
        utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:languageCode];
        [self.synthesizer speakUtterance:utterance];
    }
}
- (void)textToSpeechWithMessage:(NSString *)message {
    [self textToSpeechWithMessage:message andLanguageCode:[[NSLocale preferredLanguages] objectAtIndex:0]];
}
And later, in the AppDelegate:
[[MCSpeechManager sharedInstance] textToSpeechWithMessage:messageText];
I have enabled the Audio, AirPlay, and Picture in Picture option in the Capabilities → Background Modes section.
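For reference, checking that box simply adds a UIBackgroundModes entry to the app's Info.plist. A sketch of the resulting entry (the "audio" value corresponds to the option above; "remote-notification" is the value added by the separate Remote notifications checkbox):

```xml
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
    <string>remote-notification</string>
</array>
```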
EDIT:
Maybe I should start a background task with an expiration handler, if needed? I guess that could work, but I would also like to hear the common way of solving this situation.
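A minimal sketch of that background-task idea, assuming the question's own MCSpeechManager; the UIKit calls (beginBackgroundTask(expirationHandler:), endBackgroundTask(_:)) are the standard API, but note that speech synthesis is asynchronous, so in practice the task would be ended from a "speech finished" callback rather than immediately:

```swift
func application(_ application: UIApplication,
                 didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                 fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    // Ask for extra execution time so audio can start after waking.
    var taskID = UIBackgroundTaskInvalid
    taskID = application.beginBackgroundTask(expirationHandler: {
        // Called if the system's background-time budget runs out.
        application.endBackgroundTask(taskID)
        taskID = UIBackgroundTaskInvalid
    })

    MCSpeechManager.sharedInstance().textToSpeech(withMessage: messageText)
    completionHandler(.newData)

    // In a real implementation, end the task when speech completes
    // (e.g. from an AVSpeechSynthesizerDelegate callback), not here.
    application.endBackgroundTask(taskID)
    taskID = UIBackgroundTaskInvalid
}
```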
Error activating audio session: Error Domain=NSOSStatusErrorDomain Code=561015905 "(null)"
Code 561015905 corresponds to:
AVAudioSessionErrorCodeCannotStartPlaying = '!pla', /* 0x21706C61, 561015905 */
which is described as:
This error type can occur if the app's Information property list does
not permit audio use, or if the app is in the background and using a
category which does not allow background audio.
However, I get the same error with other categories as well (AVAudioSessionCategoryAmbient and AVAudioSessionCategorySoloAmbient).
Solution
> Are you building/testing/running against the latest SDK? There were significant changes to the notification mechanisms in iOS 10.
> I have to assume that the call to didReceiveRemoteNotification happens in response to a user action on the notification, such as tapping the notification message.
> There is no need to enable any background mode except for "Remote notifications" (the app downloads content in response to push notifications).
If all of the statements above hold, then the answer comes down to what happens when a notification arrives:
1. The device receives the notification
2. The user taps the message
3. The app launches
4. didReceiveRemoteNotification is called
At step 4, textToSpeechWithMessage works as expected:
func application(_ application: UIApplication,
                 didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                 fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    textToSpeechWithMessage(message: "Speak up", "en-US")
}
import OneSignal
...
_ = OneSignal.init(launchOptions: launchOptions, appId: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")

// or

_ = OneSignal.init(launchOptions: launchOptions, appId: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx") { (s: String?, t: [AnyHashable: Any]?, u: Bool) in
    self.textToSpeechWithMessage(message: "OneSignal", "en-US")
}
textToSpeechWithMessage is mostly unchanged; here it is in Swift 3 for completeness:
import AVFoundation
...
let synthesizer = AVSpeechSynthesizer()

func textToSpeechWithMessage(message: String, _ languageCode: String) {
    let audioSession = AVAudioSession.sharedInstance()
    print("Activating audio session")
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord,
                                     with: [.defaultToSpeaker, .mixWithOthers])
        try audioSession.setActive(true)
        let utterance = AVSpeechUtterance(string: message)
        utterance.rate = 0.5
        utterance.volume = 0.8
        utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
        self.synthesizer.speak(utterance)
    } catch {
        print("Unable to set audio session category: \(error)")
    }
}
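Since the code above activates the shared audio session but never deactivates it, a common follow-up is to release it when speech finishes so other audio can resume. A sketch, assuming a hypothetical SpeechManager class that owns `synthesizer` and is set as its delegate; the delegate method and setActive(_:with:) call are the standard AVFoundation API:

```swift
extension SpeechManager: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        do {
            // Hand the audio session back and let other apps resume playback.
            try AVAudioSession.sharedInstance().setActive(false,
                with: .notifyOthersOnDeactivation)
        } catch {
            print("Unable to deactivate audio session: \(error)")
        }
    }
}
```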