iOS – OpenGL ES 2.0 to Video on iPad/iPhone

Despite the good information here on StackOverflow, I'm still stuck.

I'm trying to write an OpenGL renderbuffer to a video on the iPad 2 (using iOS 4.3). This is more exactly what I'm attempting:

A) Set up an AVAssetWriterInputPixelBufferAdaptor

> Create an AVAssetWriter that points to a video file
> Set up an AVAssetWriterInput with appropriate settings
> Set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file

B) Write data to the video file using that AVAssetWriterInputPixelBufferAdaptor

> Render OpenGL code to the screen
> Grab the OpenGL buffer via glReadPixels
> Create a CVPixelBufferRef from the OpenGL data
> Append that pixel buffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method

However, I'm having problems doing this. My current strategy is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the adaptor is valid, I set a flag to signal the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.

Right now my code is crashing as it tries to append the second pixel buffer, giving me the following error:

-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0
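An `unrecognized selector` sent to an unrelated class (`__NSCFDictionary` here) usually means the receiver was deallocated and its memory reused by another object, which points at a memory-management problem under manual reference counting rather than at AVFoundation itself. One way to rule that out is to hold the writer objects in retained properties so they outlive the method that creates them. A minimal sketch (the interface name `MyRecorder` is an assumption; the property names match the code below):

```objc
// Sketch, assuming MRC: retained properties keep the adaptor and writer alive
// between the setup call and the per-frame append calls.
@interface MyRecorder : NSObject
@property (nonatomic, retain) AVAssetWriter *videoWriter;
@property (nonatomic, retain) AVAssetWriterInput *writerInput;
@property (nonatomic, retain) AVAssetWriterInputPixelBufferAdaptor *adaptor;
@end
```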

Here is my AVAsset setup code (much of it is based on Rudy Aramayo's code, which works on normal images, but isn't set up for textures):

- (void) testVideoWriter {

  //initialize global info
  MOVIE_NAME = @"Documents/Movie.mov";
  CGSize size = CGSizeMake(480, 320);
  frameLength = CMTimeMake(1, 5);
  currentTime = kCMTimeZero;
  currentFrame = 0;

  NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSError *error = nil;

  // Remove any previous movie at this path before starting a new recording
  unlink([MOVIE_PATH UTF8String]);

  videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH] fileType:AVFileTypeQuickTimeMovie error:&error];

  NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 AVVideoCodecH264, AVVideoCodecKey,
                                 [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                 [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                 nil];
  writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

  //writerInput.expectsMediaDataInRealTime = NO;

  NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                         [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                         nil];

  adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                             sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
  [adaptor retain];

  [videoWriter addInput:writerInput];

  [videoWriter startWriting];
  [videoWriter startSessionAtSourceTime:kCMTimeZero];

  VIDEO_WRITER_IS_READY = true;
}

OK, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:

- (void) captureScreenVideo {

  if (!writerInput.readyForMoreMediaData) {
    return;
  }

  CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
  NSInteger myDataLength = esize.width * esize.height * 4;
  GLuint *buffer = (GLuint *) malloc(myDataLength);
  glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
  CVPixelBufferRef pixel_buffer = NULL;
  CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA, buffer, 4 * esize.width, NULL, NULL, NULL, &pixel_buffer);

  /* DON'T FREE THIS BEFORE USING pixel_buffer! */
  //free(buffer);

  if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"FAIL");
  } else {
    NSLog(@"Success:%d", currentFrame);
    currentTime = CMTimeAdd(currentTime, frameLength);
  }

  free(buffer);
  CVPixelBufferRelease(pixel_buffer);

  currentFrame++;

  if (currentFrame > MAX_FRAMES) {
    VIDEO_WRITER_IS_READY = false;
    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];

    [self moveVideoToSavedPhotos];
  }
}
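One subtlety in the code above: `CVPixelBufferCreateWithBytes` wraps the malloc'd bytes rather than copying them, so the buffer must stay alive for as long as the pixel buffer is in use. Instead of calling `free()` manually at what seems like a safe point, the creation call accepts a release callback that CoreVideo invokes when the pixel buffer is finally released. A sketch of that variant (the callback name `releaseGLPixels` is an assumption):

```objc
// Sketch: let CoreVideo free the backing bytes when the pixel buffer is done,
// instead of freeing them manually after appendPixelBuffer:.
static void releaseGLPixels(void *releaseRefCon, const void *baseAddress) {
    free((void *)baseAddress);  // baseAddress is the malloc'd glReadPixels buffer
}

// ... when creating the buffer:
CVPixelBufferRef pixel_buffer = NULL;
CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height,
                             kCVPixelFormatType_32BGRA,
                             buffer, 4 * esize.width,
                             releaseGLPixels, NULL, NULL, &pixel_buffer);
// With this, CVPixelBufferRelease(pixel_buffer) is sufficient; no separate free(buffer).
```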

And finally, I move the video to the camera roll:

- (void) moveVideoToSavedPhotos {
  ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
  NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSURL *fileURL = [NSURL fileURLWithPath:localVid];

  [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                              completionBlock:^(NSURL *assetURL, NSError *error) {
                                if (error) {
                                  NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
                                }
                              }];
  [library release];
}

However, as I said, I'm crashing in the call to appendPixelBuffer.

Sorry to send so much code, but I really don't know what I'm doing wrong. It seemed like it would be trivial to update a project that writes images to a video, but I'm unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL -> video, that would be amazing... Thanks!

Solution

I just got something similar to this working in my open source GPUImage framework, based on the above code, so I thought I'd provide my working solution. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of manually creating a pixel buffer for each frame.

I first configure the movie to be recorded:

NSError *error = nil;

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}


NSMutableDictionary *outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
[outputSettings setObject:[NSNumber numberWithInt:videoSize.width] forKey:AVVideoWidthKey];
[outputSettings setObject:[NSNumber numberWithInt:videoSize.height] forKey:AVVideoHeightKey];


assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                       [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

[assetWriter addInput:assetWriterVideoInput];

Then I use this code to grab each rendered frame using glReadPixels():

CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);

if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}
else
{
//        NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

CVPixelBufferRelease(pixel_buffer);

One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the timescale provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after retrieval from the pool failed, it would abort the recording. Thus, the early bailout in the code above.
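The check mentioned in the code comment above could look something like the following sketch, which tracks the last appended timestamp in an assumed `CMTime` ivar named `previousFrameTime` (initialized to `kCMTimeNegativeInfinity`) and skips frames whose time has not advanced:

```objc
// Sketch: skip any frame whose presentation time has not moved past the last
// appended one, since appending duplicate timestamps aborts the recording.
// `previousFrameTime` is an assumed CMTime ivar, initialized to kCMTimeNegativeInfinity.
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);
if (CMTimeCompare(currentTime, previousFrameTime) <= 0)
{
    CVPixelBufferRelease(pixel_buffer);
    return;  // duplicate (or non-increasing) timestamp; drop this frame
}
previousFrameTime = currentTime;
```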

In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
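The color-swizzling idea amounts to sampling the rendered scene texture in a final pass and reordering the channels in the fragment shader, so that the bytes read back already match the writer's BGRA expectation. A minimal sketch of such a fragment shader, embedded as an Objective-C string (the varying/uniform names are assumptions; the actual shader lives in the GPUImage source):

```objc
// Sketch: BGRA swizzle fragment shader for the final render-to-texture pass.
static NSString *const kColorSwizzleFragmentShader = @"\
varying highp vec2 textureCoordinate;\n\
uniform sampler2D inputImageTexture;\n\
void main()\n\
{\n\
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n\
}";
```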

Again, all of the code for this can be found within the GPUImage repository, under the GPUImageMovieWriter class.

