Abstract: This is the first in a planned series of articles on iOS audio and video development, covering iOS video capture; each article will be expanded over time.
Overview of video capture per Apple's documentation
- From Apple's video capture documentation we can see that capture is built around an AVCaptureSession object, which coordinates the flow of data between input sources and output destinations; the captured frames are then displayed through an AVCaptureVideoPreviewLayer. The main flow is shown in image 1:
- Within the session, every time an input or output is added, a connection is created to manage the link between them; a session is not limited to one input and one output, as shown in image 2:
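A minimal sketch of this behavior, assuming input and output objects (placeholder names) have already been created:

AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canAddInput:input]) [session addInput:input];     // input: a placeholder AVCaptureInput
if ([session canAddOutput:output]) [session addOutput:output]; // output: a placeholder AVCaptureOutput
// The session builds AVCaptureConnection objects between them automatically
NSLog(@"connections: %lu", (unsigned long)session.connections.count);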
Commonly used video capture classes
- AVCaptureDevice
Represents a hardware device, such as the microphone or a camera.
- AVCaptureInput
Represents data captured from an AVCaptureDevice. It is an abstract class, so use a concrete subclass such as AVCaptureDeviceInput; other input types exist as well, see Apple's documentation to pick one for your scenario.
- AVCaptureOutput
Likewise, AVCaptureOutput is an abstract class. Common subclasses include AVCaptureMovieFileOutput, AVCaptureVideoDataOutput, AVCaptureAudioDataOutput, and AVCaptureStillImageOutput; choose whichever fits your use case.
- AVCaptureConnection
Maintains and manages the link between an input and an output; used by AVCaptureSession.
- AVCaptureSession
Manages and coordinates the input and output streams; the resolution (session preset) is configured here.
- AVCaptureVideoPreviewLayer
Provides the live preview; simply add the AVCaptureVideoPreviewLayer to the target view's layer (see the sketch after this list).
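A minimal sketch of wiring up the preview, assuming the avCaptureSession created in the steps below and a target view (both placeholder names):

AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:avCaptureSession];
previewLayer.frame = view.bounds;
// Fill the view while preserving the video's aspect ratio
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[view.layer addSublayer:previewLayer];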
The basic video capture workflow is as follows:
- Create the AVCaptureSession
// Create the capture session and set its resolution
- (void)setupSession {
    AVCaptureSession *avCaptureSession = [[AVCaptureSession alloc] init];
    // Set the resolution via the session preset
    avCaptureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    // Note: keep a strong reference (e.g. a property) in real code; a local variable is released when this method returns
}
The available resolutions are listed as session preset constants in Apple's documentation; whether a given preset is supported should be verified at runtime, as sketched below.
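A minimal sketch of that check, using the avCaptureSession from above:

if ([avCaptureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    avCaptureSession.sessionPreset = AVCaptureSessionPreset1280x720;
} else {
    // Fall back to a broadly supported preset
    avCaptureSession.sessionPreset = AVCaptureSessionPresetMedium;
}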
- Add the input
// Find the front camera
AVCaptureDevice *captureDevice = nil;
NSArray *devices = [AVCaptureDevice devices];
for (AVCaptureDevice *device in devices) {
    if (device.position == AVCaptureDevicePositionFront) {
        captureDevice = device;
    }
}
// Wrap the device in an input
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:nil];
// Set the frame rate: a minimum frame duration of 1/16 s caps capture at 16 fps;
// the device must be locked before its configuration can change
if ([captureDevice lockForConfiguration:nil]) {
    captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 16);
    [captureDevice unlockForConfiguration];
}
[avCaptureSession addInput:videoInput];
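Note that [AVCaptureDevice devices] is deprecated on recent iOS versions; a sketch of the same lookup with AVCaptureDeviceDiscoverySession (available since iOS 10):

AVCaptureDeviceDiscoverySession *discovery =
    [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                           mediaType:AVMediaTypeVideo
                                                            position:AVCaptureDevicePositionFront];
// firstObject is nil if no matching camera exists
AVCaptureDevice *frontCamera = discovery.devices.firstObject;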
- Add the output. Raw video frames are typically in YUV or RGB format; YUV is the common choice because it is smaller than RGB.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)};
// Don't drop late frames (NO delivers every frame, even if it arrives late)
videoOutput.alwaysDiscardsLateVideoFrames = NO;
// Create the serial queue on which frames are delivered
dispatch_queue_t queue = dispatch_queue_create("captureSerialQueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:queue];
[avCaptureSession addOutput:videoOutput];
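Not every pixel format is available on every output; a minimal sketch of checking before assigning, using the videoOutput above:

// availableVideoCVPixelFormatTypes lists the formats this output supports
NSNumber *desiredFormat = @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange);
if ([videoOutput.availableVideoCVPixelFormatTypes containsObject:desiredFormat]) {
    videoOutput.videoSettings = @{(NSString *)kCVPixelBufferPixelFormatTypeKey : desiredFormat};
}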
- Get the connection. The mediaType used here is video only; other options include AVMediaTypeAudio (audio) and AVMediaTypeMuxed (muxed audio and video).
AVCaptureConnection *captureConnection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
captureConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
// Mirroring is not supported on every connection, so check first
if (captureConnection.isVideoMirroringSupported) {
    captureConnection.videoMirrored = YES;
}
- Start and stop capture
[avCaptureSession startRunning];
[avCaptureSession stopRunning];
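startRunning is a blocking call that can take some time to return, so Apple advises against calling it on the main thread; a minimal sketch:

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    // Blocks until the session starts (or fails to start)
    [avCaptureSession startRunning];
});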
- Receive the captured data
// Video data callback (AVCaptureVideoDataOutputSampleBufferDelegate)
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Get the raw bytes
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
    // Note: for the biplanar (NV12) format configured above, use the per-plane APIs instead; see the sketch after this block
    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Or convert the buffer directly to a UIImage
    CIImage *ciImage = [CIImage imageWithCVImageBuffer:imageBuffer];
    UIImage *image = [UIImage imageWithCIImage:ciImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        _captureImageView.image = image;
    });
}
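Because kCVPixelFormatType_420YpCbCr8BiPlanarFullRange stores luma and chroma in two separate planes, a minimal sketch of reading each plane (assuming imageBuffer is locked as above):

// Plane 0 holds Y (luma); plane 1 holds interleaved CbCr (chroma)
size_t yBytesPerRow  = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
size_t yHeight       = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
void  *yPlane        = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t uvBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
size_t uvHeight      = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);
void  *uvPlane       = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
// Concatenate the planes into one NV12 buffer
NSMutableData *nv12 = [NSMutableData data];
[nv12 appendBytes:yPlane length:yBytesPerRow * yHeight];
[nv12 appendBytes:uvPlane length:uvBytesPerRow * uvHeight];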
References:
https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2
https://xdev.in/posts/audio-and-video-1/#采集过程
https://www.jianshu.com/p/bcff7965e1d0
https://juejin.cn/post/6844903887313305608#heading-14
https://juejin.cn/post/6844903818103095303#heading-29
https://blog.csdn.net/lincsdnnet/article/details/78255773
https://blog.csdn.net/HatsuneMikuFans/article/details/119855953