OpenGL ES iOS Notes 03: Rendering Video

Previous note

Rendering video essentially means continuously uploading video frames into textures and letting OpenGL draw them. This post captures RGBA and NV12 data on an iOS device and renders it on screen.

Capturing video with AVFoundation

Here we create an AVCaptureSession that captures video from the front camera in 32BGRA, then receive the frames through the AVCaptureVideoDataOutput delegate method.


@interface ViewController () <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;

@property (weak, nonatomic) IBOutlet LYRGLView *renderView;

@end

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPreset640x480;
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *frontCamera;
    for (AVCaptureDevice *device in cameras) {
        if (device.position == AVCaptureDevicePositionFront) {
            frontCamera = device;
        }
    }
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:frontCamera error:&error];
    
    [self.session addInput:videoInput];
    
    AVCaptureVideoDataOutput *avCaptureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Set the capture pixel format to 32BGRA
    NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                              nil];
    
    avCaptureVideoDataOutput.videoSettings = settings;
    avCaptureVideoDataOutput.alwaysDiscardsLateVideoFrames = YES;
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [avCaptureVideoDataOutput setSampleBufferDelegate:self queue:queue];
    [self.session addOutput:avCaptureVideoDataOutput];
    
    [self.session startRunning];
}
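
The demo adds the input and output unconditionally; in practice it is worth guarding these calls, since deviceInputWithDevice:error: can return nil and the session may reject an input or output. A minimal sketch:

// Only mutate the session if the input/output can actually be added.
if (videoInput && [self.session canAddInput:videoInput]) {
    [self.session addInput:videoInput];
}
if ([self.session canAddOutput:avCaptureVideoDataOutput]) {
    [self.session addOutput:avCaptureVideoDataOutput];
}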

Implementing the AVCaptureVideoDataOutput delegate method

-(void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    
    if (CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess)
    {
        // Get the base address of the pixel data
        UInt8 *bufferPtr = (UInt8 *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        // bufferPtr, width, and height will be handed to the GLView later in this post
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}

bufferPtr now points to the raw BGRA pixel data.
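
One caveat: CVPixelBufferGetBytesPerRow may report more than width * 4, because Core Video can pad each row, and OpenGL ES 2.0 has no GL_UNPACK_ROW_LENGTH to compensate. A defensive sketch (the tightBuffer copy is an illustration, not part of the original demo):

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
if (bytesPerRow != width * 4) {
    // Rows are padded: repack them tightly before handing the data to glTexImage2D.
    UInt8 *tightBuffer = malloc(width * height * 4);
    for (size_t row = 0; row < height; row++) {
        memcpy(tightBuffer + row * width * 4, bufferPtr + row * bytesPerRow, width * 4);
    }
    // ...render from tightBuffer instead of bufferPtr, then free(tightBuffer).
}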

Modifying the GLView

OpenGL setup

The OpenGL setup methods are consolidated into one place and called from both initWithFrame: and initWithCoder:, so the view is ready for OpenGL whether it is created in code or from a storyboard.

@interface LYRGLView ()
{
    GLuint _renderBuffer;
    GLuint _framebuffer;
    
    // Texture object
    GLuint _rgbTexture;
    
    // Shader program
    GLuint _glprogram;
    // Width/height of the renderbuffer
    GLint _backingWidth;
    GLint _backingHeight;
    
    dispatch_queue_t _renderQueue;
    
    // Texture sampler uniform location
    GLint _inputTexture;
    // Vertex position attribute location
    GLint _vertexPosition;
    // Texture coordinate attribute location
    GLint _textureCoords;
}
@property (nonatomic, strong) CAEAGLLayer *eaglLayer;
@property (nonatomic, strong) EAGLContext *context;
@end
@implementation LYRGLView
#pragma mark - life cycle
-(instancetype)initWithCoder:(NSCoder *)aDecoder
{
    if (self = [super initWithCoder:aDecoder]) {
        [self commonInit];
    }
    
    return self;
}
-(instancetype)initWithFrame:(CGRect)frame
{
    if (self = [super initWithFrame:frame]) {
        [self commonInit];
    }
    return self;
}
-(void)commonInit{
    
    _renderQueue = dispatch_queue_create("renderQueue", DISPATCH_QUEUE_SERIAL);
    
    [self prepareLayer];
    dispatch_sync(_renderQueue, ^{
        [self prepareContext];
        [self prepareShader];
        [self prepareRenderBuffer];
        [self prepareFrameBuffer];
    });
}
-(void)prepareLayer
{
    self.eaglLayer = (CAEAGLLayer *)self.layer;
    // An opaque layer saves compositing work
    self.eaglLayer.opaque = YES;
    self.eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
}
-(void)prepareContext
{
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:self.context];
}
-(void)prepareRenderBuffer{
    glGenRenderbuffers(1, &_renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);
    // Allocates storage for the renderbuffer from the layer, replacing glRenderbufferStorage
    [self.context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.eaglLayer];
    
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &_backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &_backingHeight);
}

-(void)prepareFrameBuffer
{
    glGenFramebuffers(1, &_framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
    
    // Set the GL viewport size
    glViewport(0, 0, _backingWidth, _backingHeight);
    // Attach the renderbuffer created earlier;
    // GL_COLOR_ATTACHMENT0 is the first color attachment point
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, _renderBuffer);
}

-(void)prepareShader
{
    // Create the vertex shader
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    
    const GLchar* const vertexShaderSource = (GLchar *)[vertexShaderString UTF8String];
    GLint vertexShaderLength = (GLint)[vertexShaderString length];
    // Load the shader source
    glShaderSource(vertexShader, 1, &vertexShaderSource, &vertexShaderLength);
    // Compile the shader
    glCompileShader(vertexShader);
    
    GLint logLength;
    glGetShaderiv(vertexShader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 0)
    {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetShaderInfoLog(vertexShader, logLength, &logLength, log);
        NSLog(@"%s\n", log);
        free(log);
    }
    
    // Create the fragment shader
    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    const GLchar* const fragmentShaderSource = (GLchar *)[fragmentShaderString UTF8String];
    GLint fragmentShaderLength = (GLint)[fragmentShaderString length];
    glShaderSource(fragmentShader, 1, &fragmentShaderSource, &fragmentShaderLength);
    glCompileShader(fragmentShader);
    
    glGetShaderiv(fragmentShader, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 0)
    {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetShaderInfoLog(fragmentShader, logLength, &logLength, log);
        NSLog(@"%s\n", log);
        free(log);
    }
    
    // Create the program
    _glprogram = glCreateProgram();
    
    // Attach the shaders
    glAttachShader(_glprogram, vertexShader);
    glAttachShader(_glprogram, fragmentShader);
    // Link the program
    glLinkProgram(_glprogram);
    
    // Make this the active program, similar to setCurrentContext
    glUseProgram(_glprogram);
    
    // Look up and cache the parameter locations
    _inputTexture = glGetUniformLocation(_glprogram, "inputTexture");
    _vertexPosition = glGetAttribLocation(_glprogram, "vertexPosition");
    _textureCoords = glGetAttribLocation(_glprogram, "textureCoords");
    
    // Enable the attribute arrays
    glEnableVertexAttribArray(_vertexPosition);
    glEnableVertexAttribArray(_textureCoords);
}
...
@end

A serial queue is created here, and all OpenGL calls are executed synchronously on it.
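
One detail the excerpt above does not show: for the cast in prepareLayer to be valid, the view has to be backed by a CAEAGLLayer, which means overriding +layerClass (the demo presumably does this in the elided code):

+ (Class)layerClass {
    // Back this view with a CAEAGLLayer instead of a plain CALayer.
    return [CAEAGLLayer class];
}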

Shader programs

The shaders are essentially the same as in the previous note: the vertex shader passes the vertex position and texture coordinates through to the fragment shader, and the fragment shader samples the texture at those coordinates to compute each pixel's color.
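
The SHADER_STRING macro used below is not defined in this excerpt; it is presumably the GPUImage-style stringification macro:

#define STRINGIZE(x) #x
#define STRINGIZE2(x) STRINGIZE(x)
#define SHADER_STRING(text) @ STRINGIZE2(text)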

NSString *const vertexShaderString = SHADER_STRING
(
 // 'attribute' marks per-vertex inputs to the shader
 attribute vec4 vertexPosition; // incoming vertex position
 attribute vec2 textureCoords; // incoming texture coordinate
 // passed on to the fragment shader
 varying vec2 textureCoordsOut;
 void main(void) {
     gl_Position = vertexPosition; // gl_Position is the vertex shader's built-in output; its value feeds the rest of the pipeline
     textureCoordsOut = textureCoords;
 }
 );
// Fragment shader
NSString *const fragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordsOut;
 
 uniform sampler2D inputTexture;
 void main(void) {
     // gl_FragColor is the fragment shader's built-in output; its value is the pixel written to the framebuffer
     gl_FragColor = texture2D(inputTexture, textureCoordsOut);
 }
 );

Creating a texture from the video data and rendering it

-(void)renderWithRGBData:(char *)RGBData width:(int)width height:(int)height {
    dispatch_sync(_renderQueue, ^{
        // Make sure the right context is current
        if ([EAGLContext currentContext] != self.context)
        {
            [EAGLContext setCurrentContext:self.context];
        }
        
        GLfloat vertices[] = {
            -1, -1,
             1, -1,
            -1,  1,
             1,  1,
        };
        GLfloat textCoord[] = {
            0.0f, 1.0f,
            1.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f,
        };
        
        glBindTexture(GL_TEXTURE_2D, _rgbTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, RGBData);
        
        // Filtering and edge wrapping
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        
        // Bind the sampler to texture unit 0; with only one texture this line could be omitted
        glUniform1i(_inputTexture, 0);
        glVertexAttribPointer(_vertexPosition, 2, GL_FLOAT, GL_FALSE, 0, vertices);
        glVertexAttribPointer(_textureCoords, 2, GL_FLOAT, GL_FALSE, 0, textCoord);
        
        // Clear to white
        glClearColor(1.0, 1.0, 1.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        // Present the finished renderbuffer to the CAEAGLLayer
        [_context presentRenderbuffer:GL_RENDERBUFFER];
    });
}
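
One thing these excerpts never show is the creation of _rgbTexture itself; presumably the elided code generates it once (the YUV view later does this explicitly in prepareShader). If you assemble the view from these snippets, add the one-time call yourself:

glGenTextures(1, &_rgbTexture);  // create the texture object once, e.g. in prepareShader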

The method first makes sure the current EAGLContext is the expected one, since the context may have been created on a different thread. glTexImage2D then uploads the video data into the texture; because the captured frames are 32BGRA, the third-to-last argument is GL_BGRA. Finally, glDrawArrays is called with GL_TRIANGLE_STRIP, which makes OpenGL consume the vertices in order, forming a triangle from each vertex and the two before it. Thanks to this behavior, laying the vertex and texture coordinates out in a Z-shaped order is enough to draw a rectangle.

(Figure: drawing a quad with GL_TRIANGLE_STRIP)
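
As a quick sketch of that Z order (indices refer to the vertices[] array above):

// 2───3      GL_TRIANGLE_STRIP emits triangles (0,1,2) and (1,2,3):
// │ ╲ │      every vertex after the second forms a triangle with the
// 0───1      previous two, so four vertices are enough for the quad.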

Passing the data to the GLView in the AVCaptureVideoDataOutput delegate

-(void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    
    if (CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess)
    {
        // Get the base address of the pixel data
        UInt8 *bufferPtr = (UInt8 *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        [self.renderView renderWithRGBData:(char *)bufferPtr width:(int)width height:(int)height];
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}

Add a GLView to the view controller and run on a real device, and the captured picture appears.
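
Note that since iOS 10 the app must also declare an NSCameraUsageDescription entry in its Info.plist; without it, the app is terminated the first time it tries to access the camera.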

Rendering yuv420p

The other video format iOS can capture is NV12, also known as yuv420sp: the Y component lives in one plane, and the U and V components are interleaved in a second plane. Because NV12 is comparatively uncommon and rendering it differs somewhat from yuv420p, we implement yuv420p rendering first. Since iOS cannot capture yuv420p directly, the third-party library libyuv is used to convert the NV12 data, which is then rendered.
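
As a sketch of the two layouts:

// yuv420p / I420 (three planes):  YYYYYYYY... UU... VV...
// NV12 / yuv420sp (two planes):   YYYYYYYY... UVUVUV...
// In both, there is one U and one V sample per 2x2 block of Y samples.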

Changing the capture format

NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,nil];

Both kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and kCVPixelFormatType_420YpCbCr8BiPlanarFullRange capture NV12; they differ in the value range of the samples (video range confines Y to 16–235 and CbCr to 16–240, while full range uses the full 0–255). The first is used here.
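
If you want to confirm what a given device supports, AVCaptureVideoDataOutput exposes the list of deliverable formats; a quick sketch:

// Log the pixel formats this output can deliver (OSType four-char codes).
for (NSNumber *format in avCaptureVideoDataOutput.availableVideoCVPixelFormatTypes) {
    NSLog(@"supported pixel format: 0x%08x", format.unsignedIntValue);
}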

Converting to yuv420p with libyuv

libyuv

In the videoDataOutput delegate method, take the sampleBuffer, extract the two planes, and call libyuv's NV12ToI420.

-(void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    
    if (CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess)
    {
        // Frame width in pixels
        size_t pixelWidth = CVPixelBufferGetWidth(imageBuffer);
        // Frame height in pixels
        size_t pixelHeight = CVPixelBufferGetHeight(imageBuffer);
        // Y plane of the CVImageBufferRef
        uint8_t *y_frame = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        // Interleaved UV plane of the CVImageBufferRef
        uint8_t *uv_frame = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        // Y stride
        size_t plane1_stride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        // UV stride
        size_t plane2_stride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
        // Y plane size
        size_t plane1_size = plane1_stride * CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
        // UV plane size
        size_t plane2_size = plane2_stride * CVPixelBufferGetHeightOfPlane(imageBuffer, 1);
        // Total frame size
        size_t frame_size = plane1_size + plane2_size;
        
        // These pointers address the converted y/u/v planes inside one buffer
        uint8_t *dst_y = (uint8_t *)malloc(frame_size);
        if (dst_y == NULL) {
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return;
        }
        uint8_t *dst_u = dst_y + plane1_size;
        uint8_t *dst_v = dst_u + plane1_size / 4;
        
        // Let libyuv convert
        int ret = NV12ToI420(y_frame, (int)plane1_stride,
                             uv_frame, (int)plane2_stride,
                             dst_y, (int)plane1_stride,
                             dst_u, (int)plane2_stride / 2,
                             dst_v, (int)plane2_stride / 2,
                             (int)pixelWidth, (int)pixelHeight);
        if (ret < 0) {
            free(dst_y);
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return;
        }
        
        // The buffer must be freed manually after use
        free(dst_y);
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
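
A caveat on the destination layout: the pointer arithmetic above assumes the Y and UV planes report the same bytes-per-row (which is typically the case for the NV12 buffers AVFoundation delivers), and the later texture upload assumes the strides equal the pixel width. A more defensive version would derive the destination strides from pixelWidth instead.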

Modifying the shader

With three separate y/u/v planes there are three textures, so the fragment shader takes three sampler2D uniforms; the vertex shader is unchanged.

NSString *const fragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordsOut;
 
 uniform sampler2D y_texture;
 uniform sampler2D u_texture;
 uniform sampler2D v_texture;
 
 void main(void) {
     
     highp float y = texture2D(y_texture, textureCoordsOut).r;
     highp float u = texture2D(u_texture, textureCoordsOut).r - 0.5;
     highp float v = texture2D(v_texture, textureCoordsOut).r - 0.5;
     // YUV to RGB conversion
     highp float r = y +             1.402 * v;
     highp float g = y - 0.344 * u - 0.714 * v;
     highp float b = y + 1.772 * u;
     
     gl_FragColor = vec4(r, g, b, 1.0);
 }
 );
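
These coefficients are the BT.601 conversion for full-range data. Strictly speaking, the VideoRange format chosen above stores Y in 16–235 and CbCr in 16–240, so a more faithful shader would expand the range first; a sketch of the usual video-range variant:

// BT.601 video-range variant (Y: 16–235, CbCr: 16–240):
highp float y = 1.164 * (texture2D(y_texture, textureCoordsOut).r - 16.0 / 255.0);
highp float u = texture2D(u_texture, textureCoordsOut).r - 0.5;
highp float v = texture2D(v_texture, textureCoordsOut).r - 0.5;
highp float r = y + 1.596 * v;
highp float g = y - 0.391 * u - 0.813 * v;
highp float b = y + 2.018 * u;

In practice the full-range formula above often looks acceptable, just with slightly reduced contrast.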

Modifying the textures

@interface LYRYUVView ()
{
    ...
    // Texture objects
    GLuint _yTexture;
    GLuint _uTexture;
    GLuint _vTexture;
    
    ...
    
    // Sampler uniform locations
    GLint _y_texture;
    GLint _u_texture;
    GLint _v_texture;
    
}
...
-(void)prepareShader
{
    ...
    // Make this the active program, similar to setCurrentContext
    glUseProgram(_glprogram);
    
    // Look up and cache the parameter locations
    _y_texture = glGetUniformLocation(_glprogram, "y_texture");
    _u_texture = glGetUniformLocation(_glprogram, "u_texture");
    _v_texture = glGetUniformLocation(_glprogram, "v_texture");
    // Generate the texture objects
    glGenTextures(1, &_yTexture);
    glGenTextures(1, &_uTexture);
    glGenTextures(1, &_vTexture);
    
    _vertexPosition = glGetAttribLocation(_glprogram, "vertexPosition");
   ...
}

Modifying the render method to upload the y/u/v data into the textures

-(void)renderWithYData:(char *)YData UData:(char *)UData VData:(char *)VData width:(int)width height:(int)height
{
    dispatch_sync(_renderQueue, ^{
        // Make sure the right context is current
        if ([EAGLContext currentContext] != self.context)
        {
            [EAGLContext setCurrentContext:self.context];
        }
        
        GLfloat vertices[] = {
            -1, -1,
             1, -1,
            -1,  1,
             1,  1,
        };
        GLfloat textCoord[] = {
            0.0f, 1.0f,
            1.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f,
        };
        
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, _yTexture);
        glUniform1i(_y_texture, 0);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, YData);
        // Filtering and edge wrapping; each texture needs its own parameters
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        
        glActiveTexture(GL_TEXTURE0 + 1);
        glBindTexture(GL_TEXTURE_2D, _uTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width / 2, height / 2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, UData);
        glUniform1i(_u_texture, 1);
        
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        
        glActiveTexture(GL_TEXTURE0 + 2);
        glBindTexture(GL_TEXTURE_2D, _vTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width / 2, height / 2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, VData);
        glUniform1i(_v_texture, 2);
        
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        
        glVertexAttribPointer(_vertexPosition, 2, GL_FLOAT, GL_FALSE, 0, vertices);
        glVertexAttribPointer(_textureCoords, 2, GL_FLOAT, GL_FALSE, 0, textCoord);
        
        // Clear to white
        glClearColor(1.0, 1.0, 1.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        // Present the finished renderbuffer to the CAEAGLLayer
        [_context presentRenderbuffer:GL_RENDERBUFFER];
    });
}

Then call the method from the capture delegate; run, and the captured video appears.

-(void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
   ...
        // Let libyuv convert
        int ret = NV12ToI420(y_frame, (int)plane1_stride,
                             uv_frame, (int)plane2_stride,
                             dst_y, (int)plane1_stride,
                             dst_u, (int)plane2_stride / 2,
                             dst_v, (int)plane2_stride / 2,
                             (int)pixelWidth, (int)pixelHeight);
        if (ret < 0) {
            free(dst_y);
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return;
        }
        
        [self.renderView renderWithYData:(char *)dst_y UData:(char *)dst_u VData:(char *)dst_v width:(int)pixelWidth height:(int)pixelHeight];
        // The buffer must be freed manually after use
        free(dst_y);
    ...
}

Rendering NV12

Modifying the shader

The difference from yuv420p is that only two textures are passed in; the second holds the interleaved uv plane, with u sampled from the r channel and v from the a channel. In some examples online, v is read from g instead; that corresponds to uploading the uv plane as a two-channel GL_RG texture (common on desktop GL and OpenGL ES 3.0) rather than as GL_LUMINANCE_ALPHA.

NSString *const fragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordsOut;
 
 uniform sampler2D y_texture;
 uniform sampler2D uv_texture;
 
 void main(void) {
     
     highp float y = texture2D(y_texture, textureCoordsOut).r;
     highp float u = texture2D(uv_texture, textureCoordsOut).r - 0.5;
     highp float v = texture2D(uv_texture, textureCoordsOut).a - 0.5;
     
     highp float r = y +             1.402 * v;
     highp float g = y - 0.344 * u - 0.714 * v;
     highp float b = y + 1.772 * u;
     
     gl_FragColor = vec4(r, g, b, 1.0);
 }
 );

Modifying the textures

@interface LYRNV12View ()
{
    ...
    // Texture objects
    GLuint _yTexture;
    GLuint _uvTexture;
    
    ...
    
    // Sampler uniform locations
    GLint _y_texture;
    GLint _uv_texture;
    ...
}

-(void)prepareShader
{
    ...
    
    // Make this the active program, similar to setCurrentContext
    glUseProgram(_glprogram);
    
    // Look up and cache the parameter locations
    _y_texture = glGetUniformLocation(_glprogram, "y_texture");
    _uv_texture = glGetUniformLocation(_glprogram, "uv_texture");
    
    glGenTextures(1, &_yTexture);
    glGenTextures(1, &_uvTexture);
    
    _vertexPosition = glGetAttribLocation(_glprogram, "vertexPosition");
   ...
}

Modifying the render method

-(void)renderWithYData:(char *)YData UVData:(char *)UVData width:(int)width height:(int)height
{
    dispatch_sync(_renderQueue, ^{
        // Make sure the right context is current
        if ([EAGLContext currentContext] != self.context)
        {
            [EAGLContext setCurrentContext:self.context];
        }
        
        GLfloat vertices[] = {
            -1,  1,
             1,  1,
            -1, -1,
             1, -1,
        };
        GLfloat textCoord[] = {
            0, 1,
            1, 1,
            0, 0,
            1, 0,
        };
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, _yTexture);
        
        glUniform1i(_y_texture, 0);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, YData);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        
        glActiveTexture(GL_TEXTURE0 + 1);
        glBindTexture(GL_TEXTURE_2D, _uvTexture);
        // The uv plane is uploaded as GL_LUMINANCE_ALPHA
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, width / 2, height / 2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, UVData);
        glUniform1i(_uv_texture, 1);
        
        // Filtering and edge wrapping
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        
        glVertexAttribPointer(_vertexPosition, 2, GL_FLOAT, GL_FALSE, 0, vertices);
        glVertexAttribPointer(_textureCoords, 2, GL_FLOAT, GL_FALSE, 0, textCoord);
        
        // Clear to white
        glClearColor(1.0, 1.0, 1.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        // Present the finished renderbuffer to the CAEAGLLayer
        [_context presentRenderbuffer:GL_RENDERBUFFER];
    });
}

Note that the uv texture uses GL_LUMINANCE_ALPHA, which stores each texel as a luminance value plus an alpha value, whereas GL_LUMINANCE stores only luminance and fixes alpha at 1. As mentioned when modifying the shader, v is sampled from the a channel, so with GL_LUMINANCE the v component would be lost.
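
For reference, OpenGL ES 3.0 deprecates the luminance formats; the equivalent approach there uploads the planes as single- and two-channel textures, and v then comes from the g channel (a sketch, not part of this demo):

// OpenGL ES 3.0 variant: red / red-green textures replace the luminance formats.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,  width,     height,     0, GL_RED, GL_UNSIGNED_BYTE, YData);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8, width / 2, height / 2, 0, GL_RG,  GL_UNSIGNED_BYTE, UVData);
// In the shader, sample u from .r and v from .g of the uv texture.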

Then call the method from the capture delegate; run, and the captured video appears.

-(void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    
    if (CVPixelBufferLockBaseAddress(imageBuffer, 0) == kCVReturnSuccess)
    {
        UInt8 *yBuffer = (UInt8 *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        UInt8 *uvBuffer = (UInt8 *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        [self.renderView renderWithYData:(char *)yBuffer UVData:(char *)uvBuffer width:(int)width height:(int)height];
    }
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}

Summary

Video rendering is one of the most important uses of OpenGL. While studying it I ran into quite a few pitfalls, mostly small differences in parameters and differences between mobile and desktop; reading the source of kxmovie and GPUImage helped a lot.

Demo source code

References:
kxmovie
GPUImage
Rendering NV12 video with OpenGL