I am creating a test application in Swift in which I want to stitch multiple videos together from my app's documents directory, using AVMutableComposition.
I have been somewhat successful in this: all of my videos are stitched together, and everything shows at the correct size in both portrait and landscape.
My issue, however, is that every video is displayed in the orientation of the last video in the compilation.
I know that to fix this I will need to add a layer instruction for each track, but I can't seem to get it right. With the answers I have found, the whole compilation comes out in a portrait orientation with the landscape videos simply scaled to fit the portrait view, so when I turn my phone on its side to watch the landscape videos, they are still small, since they have been scaled down to portrait width.
This is not the outcome I am looking for. I want the expected behaviour: if a video is landscape, it shows scaled down while the phone is held in portrait, but fills the screen when the phone is rotated (just as it would when viewing a landscape video in Photos). The same for portrait: a portrait video is full screen when viewed in portrait, and scales down to fit when the phone is turned on its side (like viewing a portrait video in Photos).
In summary, the outcome I want is that when watching a compilation containing both landscape and portrait videos, with the phone held in landscape the landscape videos are full screen and the portrait ones are scaled to fit, and with the phone held in portrait the portrait videos are full screen and the landscape ones are scaled to fit.
With all the answers I found, this was not the case. They also produced very unexpected behaviour when importing videos from Photos to add to the compilation, and the same random behaviour when adding videos shot with the front-facing camera (to be clear: with my current implementation, videos imported from the library and "selfie" videos appear at the correct size, without these issues).
I am looking for a way to rotate and scale these videos so that they always display in the correct orientation and scale, depending on which way the user is holding their phone.
EDIT: I now know that I cannot have both landscape and portrait orientations in a single video, so the outcome I am after is to render the final video in landscape. I have figured out how to switch all the orientations and scales so that everything comes out the same way up, but my output is a portrait video. If anyone could help me change this so that my output is landscape, it would be appreciated. Here is my transform function:
func videoTransformForTrack(asset: AVAsset) -> CGAffineTransform {
    var return_value: CGAffineTransform?

    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)

    var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait {
        scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        return_value = CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor)
    } else {
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        var concat = CGAffineTransformConcat(
            CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor),
            CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
        if assetInfo.orientation == .Down {
            let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
            let windowBounds = UIScreen.mainScreen().bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
            concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
        }
        return_value = concat
    }
    return return_value!
}
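From what I have read so far, I suspect the fix is to stop scaling everything to the portrait screen width and instead aspect-fit each clip into a fixed landscape render size. An untested sketch of that idea (renderSize would be something like 1280x720, and orientationFromTransform is the same helper used above):

// Untested sketch: aspect-fit one track into a fixed landscape canvas.
// renderSize (e.g. 1280x720) is an assumed parameter, not part of my
// working code; orientationFromTransform is the helper used above.
func fitTransformForTrack(assetTrack: AVAssetTrack, renderSize: CGSize) -> CGAffineTransform {
    let transform = assetTrack.preferredTransform
    let isPortrait = orientationFromTransform(transform).isPortrait

    // naturalSize is pre-rotation; swap width/height for portrait clips
    var displaySize = assetTrack.naturalSize
    if isPortrait {
        displaySize = CGSize(width: assetTrack.naturalSize.height,
                             height: assetTrack.naturalSize.width)
    }

    // Scale so the whole frame fits inside the canvas, then centre it
    let scale = min(renderSize.width / displaySize.width, renderSize.height / displaySize.height)
    let scaled = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(scale, scale))
    let tx = (renderSize.width - displaySize.width * scale) / 2
    let ty = (renderSize.height - displaySize.height * scale) / 2
    return CGAffineTransformConcat(scaled, CGAffineTransformMakeTranslation(tx, ty))
}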
The exporter:
// Create AVMutableComposition to contain all AVMutableComposition tracks
let mix_composition = AVMutableComposition()
var total_time = kCMTimeZero

// Loop over videos and create tracks, keep incrementing total duration
let video_track = mix_composition.addMutableTrackWithMediaType(AVMediaTypeVideo,
    preferredTrackID: CMPersistentTrackID())

var instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: video_track)

for video in videos {
    let shortened_duration = CMTimeSubtract(video.duration, CMTimeMake(1, 10))
    let videoAssetTrack = video.tracksWithMediaType(AVMediaTypeVideo)[0]

    do {
        try video_track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, shortened_duration),
            ofTrack: videoAssetTrack,
            atTime: total_time)
        video_track.preferredTransform = videoAssetTrack.preferredTransform
    } catch _ {
    }

    instruction.setTransform(videoTransformForTrack(video), atTime: total_time)

    // Add video duration to total time
    total_time = CMTimeAdd(total_time, shortened_duration)
}

// Create main instruction for video composition
let main_instruction = AVMutableVideoCompositionInstruction()
main_instruction.timeRange = CMTimeRangeMake(kCMTimeZero, total_time)
main_instruction.layerInstructions = [instruction]

// Video composition that holds the instructions and render settings
let main_composition = AVMutableVideoComposition()
main_composition.instructions = [main_instruction]
main_composition.frameDuration = CMTimeMake(1, 30)
main_composition.renderSize = CGSize(width: UIScreen.mainScreen().bounds.width,
    height: UIScreen.mainScreen().bounds.height)

let exporter = AVAssetExportSession(asset: mix_composition, presetName: AVAssetExportPreset640x480)
exporter!.outputURL = final_url
exporter!.outputFileType = AVFileTypeMPEG4
exporter!.shouldOptimizeForNetworkUse = true
exporter!.videoComposition = main_composition

// 6 - Perform the Export
exporter!.exportAsynchronouslyWithCompletionHandler() {
    // Assign return values based on success of export
    dispatch_async(dispatch_get_main_queue(), { () -> Void in
        self.exportDidFinish(exporter!)
    })
}
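exportDidFinish itself is not shown above; a minimal version that just checks the session status would look something like this (a simplified sketch, not my full handler):

// Sketch of a completion handler: AVAssetExportSession exposes
// status, outputURL, and error directly on the session object.
func exportDidFinish(session: AVAssetExportSession) {
    switch session.status {
    case .Completed:
        print("Export finished to \(session.outputURL)")
    case .Failed:
        print("Export failed: \(session.error)")
    case .Cancelled:
        print("Export cancelled")
    default:
        break
    }
}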
Sorry for the long explanation; I just want to make sure I am very clear about my problem, since the other answers have not worked for me.
Solution
I think you could try modifying your transform code, or try something like this:
extension AVAsset {
    func videoOrientation() -> (orientation: UIInterfaceOrientation, device: AVCaptureDevicePosition) {
        var orientation: UIInterfaceOrientation = .Unknown
        var device: AVCaptureDevicePosition = .Unspecified

        let tracks: [AVAssetTrack] = self.tracksWithMediaType(AVMediaTypeVideo)
        if let videoTrack = tracks.first {
            let t = videoTrack.preferredTransform

            if (t.a == 0 && t.b == 1.0 && t.d == 0) {
                orientation = .Portrait
                if t.c == 1.0 { device = .Front }
                else if t.c == -1.0 { device = .Back }
            } else if (t.a == 0 && t.b == -1.0 && t.d == 0) {
                orientation = .PortraitUpsideDown
                if t.c == -1.0 { device = .Front }
                else if t.c == 1.0 { device = .Back }
            } else if (t.a == 1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeRight
                if t.d == -1.0 { device = .Front }
                else if t.d == 1.0 { device = .Back }
            } else if (t.a == -1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeLeft
                if t.d == 1.0 { device = .Front }
                else if t.d == -1.0 { device = .Back }
            }
        }

        return (orientation, device)
    }
}
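For example, inside your track loop you could branch on the result instead of inspecting the transform coefficients by hand (a sketch only; it reuses your videoTransformForTrack for the portrait case):

// Sketch: classify each clip before building its layer instruction.
let (orientation, _) = video.videoOrientation()

switch orientation {
case .Portrait, .PortraitUpsideDown:
    // Portrait clip: upright it, then scale it down into the landscape canvas.
    instruction.setTransform(videoTransformForTrack(video), atTime: total_time)
default:
    // Landscape clip: its preferredTransform already matches the canvas.
    instruction.setTransform(videoAssetTrack.preferredTransform, atTime: total_time)
}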