GPUImage Source Code Study 01: The Image Filtering Process

I used to work at a small short-video startup. My lead handed me a few shader files and some images and told me to build those filters with GPUImage. At the time I was completely lost looking at the shaders and at GPUImage, but I pushed through and eventually got them working. In hindsight it was a wasted opportunity not to study OpenGL and GPUImage properly back then. Now that I have some grounding in OpenGL, it's time to read the GPUImage source.

Rendering an Image with a Filter

Here we use the image-filtering example from the GPUImage GitHub README directly; once the filtered image is obtained, it is displayed in a UIImageView.

@interface ViewController ()
@property (weak, nonatomic) IBOutlet UIImageView *filteredImageView;

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    
    UIImage *inputImage = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle]pathForResource:@"container" ofType:@"png"]];
    
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
    GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];
    
    [stillImageSource addTarget:stillImageFilter];
    [stillImageFilter useNextFrameForImageCapture];
    [stillImageSource processImage];
    
    UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentFramebuffer];
    
    self.filteredImageView.image = currentFilteredVideoFrame;
}


@end

Loading the Image with GPUImagePicture

First, a GPUImagePicture object is created from the image.

Getting the Image's Bitmap Data
- (id)initWithImage:(UIImage *)newImageSource;
{
    if (!(self = [self initWithImage:newImageSource smoothlyScaleOutput:NO]))
    {
		return nil;
    }
    
    return self;
}
- (id)initWithImage:(UIImage *)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
{
    return [self initWithCGImage:[newImageSource CGImage] smoothlyScaleOutput:smoothlyScaleOutput];
}

- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication
{
	...
}

Stepping into the init methods, we can see that the CGImageRef is ultimately extracted from the passed-in UIImage, and the following method is called:

- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;

Of the last two parameters, smoothlyScaleOutput enables smooth (mipmapped) scaling of the output, and removePremultiplication literally means undoing the alpha premultiplication. Both default to NO.

- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
{
    if (!(self = [super init]))
    {
		return nil;
    }
    
    hasProcessedImage = NO;
    self.shouldSmoothlyScaleOutput = smoothlyScaleOutput;
    imageUpdateSemaphore = dispatch_semaphore_create(0);
    dispatch_semaphore_signal(imageUpdateSemaphore);


    // TODO: Dispatch this whole thing asynchronously to move image loading off main thread
    CGFloat widthOfImage = CGImageGetWidth(newImageSource);
    CGFloat heightOfImage = CGImageGetHeight(newImageSource);

    // If passed an empty image reference, CGContextDrawImage will fail in future versions of the SDK.
    NSAssert( widthOfImage > 0 && heightOfImage > 0, @"Passed image must not be empty - it should be at least 1px tall and wide");
    
    pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
    CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
    
    BOOL shouldRedrawUsingCoreGraphics = NO;
    
    // For now, deal with images larger than the maximum texture size by resizing to be within that limit
    CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
    ...
}

The initializer first sets up state: it creates a dispatch semaphore and immediately signals it (so its count starts at 1), then reads the image's width and height. sizeThatFitsWithinATextureForSize: checks whether those dimensions exceed the maximum OpenGL texture size; if not, the original size is returned, otherwise the size is scaled down proportionally until it fits within the limit.
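For reference, the implementation of sizeThatFitsWithinATextureForSize: in GPUImageContext is essentially an aspect-fit calculation; a sketch from memory (maximumTextureSizeForThisDevice queries GL_MAX_TEXTURE_SIZE):

+ (CGSize)sizeThatFitsWithinATextureForSize:(CGSize)inputSize;
{
    GLint maxTextureSize = [self maximumTextureSizeForThisDevice];
    // Already within the limit: keep the original size
    if ((inputSize.width < maxTextureSize) && (inputSize.height < maxTextureSize))
    {
        return inputSize;
    }

    // Otherwise shrink the longer side to the limit and scale the other side proportionally
    CGSize adjustedSize;
    if (inputSize.width > inputSize.height)
    {
        adjustedSize.width = (CGFloat)maxTextureSize;
        adjustedSize.height = ((CGFloat)maxTextureSize / inputSize.width) * inputSize.height;
    }
    else
    {
        adjustedSize.height = (CGFloat)maxTextureSize;
        adjustedSize.width = ((CGFloat)maxTextureSize / inputSize.height) * inputSize.width;
    }
    return adjustedSize;
}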

- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication
{
	...
	GLubyte *imageData = NULL;
    CFDataRef dataFromImageDataProvider = NULL;
    GLenum format = GL_BGRA;
    BOOL isLitteEndian = YES;
    BOOL alphaFirst = NO;
    
    
    ...
    
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(newImageSource);
    ...
    
    if (byteOrderInfo == kCGBitmapByteOrder32Little) {
        /* Little endian, for alpha-first we can use this bitmap directly in GL */
        CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
        if (alphaInfo != kCGImageAlphaPremultipliedFirst && alphaInfo != kCGImageAlphaFirst &&
            alphaInfo != kCGImageAlphaNoneSkipFirst) {
            shouldRedrawUsingCoreGraphics = YES;
        }
    } else if (byteOrderInfo == kCGBitmapByteOrderDefault || byteOrderInfo == kCGBitmapByteOrder32Big) {
        isLitteEndian = NO;
        /* Big endian, for alpha-last we can use this bitmap directly in GL */
        CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
        if (alphaInfo != kCGImageAlphaPremultipliedLast && alphaInfo != kCGImageAlphaLast &&
            alphaInfo != kCGImageAlphaNoneSkipLast) {
            shouldRedrawUsingCoreGraphics = YES;
        } else {
            /* Can access directly using GL_RGBA pixel format */
            premultiplied = alphaInfo == kCGImageAlphaPremultipliedLast || alphaInfo == kCGImageAlphaPremultipliedLast;
            alphaFirst = alphaInfo == kCGImageAlphaFirst || alphaInfo == kCGImageAlphaPremultipliedFirst;
            format = GL_RGBA;
        }
    }
    ...
}

Next, the image's bitmapInfo is examined. A series of checks on the byte order and the alpha position decide whether the pixel data can be handed to OpenGL as-is (and whether it is BGRA or RGBA), or whether the image must first be redrawn through Core Graphics.

- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication
{
	...
	dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider(newImageSource));
    imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
 	...       
}

Then the image's pixel data is copied out of its CGDataProvider (some checks are omitted here), which gives us the bitmap data.
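When the checks decide the layout is not GL-friendly, shouldRedrawUsingCoreGraphics is set, and instead of using the data provider's bytes directly, GPUImage redraws the image into a fresh premultiplied 32-bit bitmap context. A condensed sketch of that fallback branch (variable names as in the method above; the exact bitmap flags are from memory):

if (shouldRedrawUsingCoreGraphics)
{
    // Allocate a tightly packed 32-bit buffer and redraw the image into it
    imageData = (GLubyte *)calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData,
                                                      (size_t)pixelSizeToUseForTexture.width,
                                                      (size_t)pixelSizeToUseForTexture.height,
                                                      8,
                                                      (size_t)pixelSizeToUseForTexture.width * 4,
                                                      genericRGBColorspace,
                                                      kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), newImageSource);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);
    // The buffer now has a known-good layout, matching the default format = GL_BGRA
}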

Writing the Image Data into a Texture
runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
        
        outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:pixelSizeToUseForTexture onlyTexture:YES];
        [outputFramebuffer disableReferenceCounting];

        glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
        if (self.shouldSmoothlyScaleOutput)
        {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        }
        // no need to use self.outputTextureOptions here since pictures need this texture formats and type
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, format, GL_UNSIGNED_BYTE, imageData);
        
        if (self.shouldSmoothlyScaleOutput)
        {
            glGenerateMipmap(GL_TEXTURE_2D);
        }
        glBindTexture(GL_TEXTURE_2D, 0);
    });

This uses GPUImage's serial video-processing queue and runs the block on it synchronously. First the context is set up: GPUImageContext is a wrapper around EAGLContext that creates the single EAGLContext used by GPUImage and makes it current for the thread. Then an available framebuffer is fetched from the framebuffer cache. The cache behaves like a dictionary: the requested framebuffer size and texture options are combined into a hash-like string key, and that key is used to look up a reusable framebuffer. GPUImageFramebuffer wraps an OpenGL texture and framebuffer object, and the image data is then uploaded into its texture. The picture object keeps a reference to this framebuffer, and with that the initialization of a GPUImagePicture is complete.
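The cache key that GPUImageFramebufferCache builds from the size and texture options looks roughly like this (a sketch; the exact format string is from memory):

- (NSString *)hashForSize:(CGSize)size textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;
{
    // Texture-only entries get a "-NOFB" suffix so they never collide with full framebuffers
    NSString *suffix = onlyTexture ? @"-NOFB" : @"";
    return [NSString stringWithFormat:@"%.1fx%.1f-%d:%d:%d:%d:%d:%d:%d%@",
            size.width, size.height,
            textureOptions.minFilter, textureOptions.magFilter,
            textureOptions.wrapS, textureOptions.wrapT,
            textureOptions.internalFormat, textureOptions.format, textureOptions.type,
            suffix];
}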

Creating a GPUImageFilter Object

@implementation GPUImageSepiaFilter

- (id)init;
{
    if (!(self = [super init]))
    {
		return nil;
    }
    
    self.intensity = 1.0;
    self.colorMatrix = (GPUMatrix4x4){
        {0.3588, 0.7044, 0.1368, 0.0},
        {0.2990, 0.5870, 0.1140, 0.0},
        {0.2392, 0.4696, 0.0912 ,0.0},
        {0,0,0,1.0},
    };

    return self;
}

@end

The filter used here is GPUImageSepiaFilter, which inherits from GPUImageColorMatrixFilter. Its init simply calls the superclass's init and then assigns two properties.

- (id)init;
{
    if (!(self = [super initWithFragmentShaderFromString:kGPUImageColorMatrixFragmentShaderString]))
    {
        return nil;
    }
    
    colorMatrixUniform = [filterProgram uniformIndex:@"colorMatrix"];
    intensityUniform = [filterProgram uniformIndex:@"intensity"];
    
    self.intensity = 1.f;
    self.colorMatrix = (GPUMatrix4x4){
        {1.f, 0.f, 0.f, 0.f},
        {0.f, 1.f, 0.f, 0.f},
        {0.f, 0.f, 1.f, 0.f},
        {0.f, 0.f, 0.f, 1.f}
    };
    
    return self;
}

Looking at GPUImageColorMatrixFilter's init, it actually calls initWithFragmentShaderFromString: on GPUImageColorMatrixFilter's superclass GPUImageFilter, passing in a fragment shader source string.

- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
    if (!(self = [self initWithVertexShaderFromString:kGPUImageVertexShaderString fragmentShaderFromString:fragmentShaderString]))
    {
		return nil;
    }
    
    return self;
}

GPUImageFilter's initWithFragmentShaderFromString: in turn pairs the default kGPUImageVertexShaderString vertex shader with the supplied fragment shader to create the filter object.

- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
{
    ...

    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        filterProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:vertexShaderString fragmentShaderString:fragmentShaderString];
        
       ...        
        filterPositionAttribute = [filterProgram attributeIndex:@"position"];
        filterTextureCoordinateAttribute = [filterProgram attributeIndex:@"inputTextureCoordinate"];
        filterInputTextureUniform = [filterProgram uniformIndex:@"inputImageTexture"]; // This does assume a name of "inputImageTexture" for the fragment shader
        
        [GPUImageContext setActiveShaderProgram:filterProgram];
        
        glEnableVertexAttribArray(filterPositionAttribute);
        glEnableVertexAttribArray(filterTextureCoordinateAttribute);    
    });
    
    return self;
}

Here we can see that initWithVertexShaderFromString:fragmentShaderFromString: uses the shared GPUImageContext to create a GLProgram. GLProgram is a wrapper around an OpenGL program object: internally it compiles and links the vertex and fragment shaders and runs the program. GPUImageContext also keeps a cache of GLProgram objects, keyed by the concatenation of the vertex and fragment shader strings. Once the GLProgram is obtained, the filter retrieves the locations of the vertex and fragment shaders' input parameters.
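Stripped of status checks and log handling, the work inside GLProgram boils down to the standard OpenGL ES compile-and-link sequence, roughly:

// Compile both shader stages from their source strings
GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
const GLchar *vSource = (GLchar *)[vertexShaderString UTF8String];
glShaderSource(vertexShader, 1, &vSource, NULL);
glCompileShader(vertexShader);

GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
const GLchar *fSource = (GLchar *)[fragmentShaderString UTF8String];
glShaderSource(fragmentShader, 1, &fSource, NULL);
glCompileShader(fragmentShader);

// Link them into a single program object
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glLinkProgram(program);

// attributeIndex: and uniformIndex: are thin wrappers around these lookups
GLint positionAttribute = glGetAttribLocation(program, "position");
GLint inputTextureUniform = glGetUniformLocation(program, "inputImageTexture");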

NSString *const kGPUImageVertexShaderString = SHADER_STRING
(
 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;
 
 varying vec2 textureCoordinate;
 
 void main()
 {
     gl_Position = position;
     textureCoordinate = inputTextureCoordinate.xy;
 }
 );

Looking at the vertex shader program: it has two inputs, the vertex position and the texture coordinate; it writes the position to gl_Position and passes the texture coordinate on to the fragment shader. Once you know some OpenGL you'll notice that vertex shaders for rendering video frames and images are almost always written exactly like this.

NSString *const kGPUImageColorMatrixFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 
 uniform sampler2D inputImageTexture;
 
 uniform lowp mat4 colorMatrix;
 uniform lowp float intensity;
 
 void main()
 {
     lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
     lowp vec4 outputColor = textureColor * colorMatrix;
     
     gl_FragColor = (intensity * outputColor) + ((1.0 - intensity) * textureColor);
 }
);

Now look at GPUImageColorMatrixFilter's fragment shader. Besides the texture coordinate and input texture, it takes two more uniforms: a color matrix, and an intensity expressing the strength of the change. It first computes the color obtained by multiplying the texel's original color by the color matrix, then uses intensity to blend the original color and the new color in proportion. So GPUImageColorMatrixFilter is a filter that processes the texture through a color matrix, and GPUImageSepiaFilter simply swaps in the matrix specific to this effect.
The whole filter initialization, then, is the process of compiling the shaders, creating the program, linking it, and putting it to use.
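As a quick sanity check of that math (my own worked example, not from the source): the matrix is uploaded non-transposed, so each row of the GPUMatrix4x4 literal becomes a column of the GLSL mat4, and textureColor * colorMatrix dots the color with each column in turn. Feeding in a pure white texel (1.0, 1.0, 1.0, 1.0):

red   = 0.3588 + 0.7044 + 0.1368 = 1.2  (clamped to 1.0)
green = 0.2990 + 0.5870 + 0.1140 = 1.0
blue  = 0.2392 + 0.4696 + 0.0912 = 0.8

So white maps to the warm tone (1.0, 1.0, 0.8). More generally, each row is the standard luminance weights (0.299, 0.587, 0.114) scaled by 1.2, 1.0 and 0.8 respectively: every pixel is converted to its luminance and tinted toward orange, which is exactly the sepia look.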

The Image Processing Flow

Adding the Filter to GPUImagePicture's targets
- (void)addTarget:(id<GPUImageInput>)newTarget;
{
    NSInteger nextAvailableTextureIndex = [newTarget nextAvailableTextureIndex];
    [self addTarget:newTarget atTextureLocation:nextAvailableTextureIndex];
    
    if ([newTarget shouldIgnoreUpdatesToThisTarget])
    {
        _targetToIgnoreForUpdates = newTarget;
    }
}

- (void)addTarget:(id<GPUImageInput>)newTarget atTextureLocation:(NSInteger)textureLocation;
{
    if([targets containsObject:newTarget])
    {
        return;
    }
    
    cachedMaximumOutputSize = CGSizeZero;
    runSynchronouslyOnVideoProcessingQueue(^{
        [self setInputFramebufferForTarget:newTarget atIndex:textureLocation];
        [targets addObject:newTarget];
        [targetTextureIndices addObject:[NSNumber numberWithInteger:textureLocation]];
        
        allTargetsWantMonochromeData = allTargetsWantMonochromeData && [newTarget wantsMonochromeInput];
    });
}

GPUImagePicture inherits from GPUImageOutput, which holds a targets array containing, in order, objects that implement the GPUImageInput protocol; this is what forms GPUImage's processing chain. When a target is added, its next available texture index is queried; since GPUImageSepiaFilter is a single-texture filter, the index is 0.

In addTarget:atTextureLocation:, the target is then stored in the targets array and its texture index in targetTextureIndices, and the picture's current outputFramebuffer is set as the target's inputFramebuffer.
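Because every filter inherits from GPUImageOutput and also implements GPUImageInput, chains of any length can be assembled the same way. A hypothetical two-filter chain (GPUImageGaussianBlurFilter is another stock GPUImage filter) would look like:

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];

// picture -> sepia -> blur: each node's outputFramebuffer becomes the next node's inputFramebuffer
[stillImageSource addTarget:sepiaFilter];
[sepiaFilter addTarget:blurFilter];

[blurFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *result = [blurFilter imageFromCurrentFramebuffer];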

Running the Filter
- (void)useNextFrameForImageCapture;
{
    usingNextFrameForImageCapture = YES;

    // Set the semaphore high, if it isn't already
    if (dispatch_semaphore_wait(imageCaptureSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }
}

This first sets a flag on the filter; it also does a non-blocking wait that drains imageCaptureSemaphore to zero if it isn't already, which is what later makes the image-capture call block until rendering has finished and signaled the semaphore.

- (void)processImage;
{
    [self processImageWithCompletionHandler:nil];
}

- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
{
    hasProcessedImage = YES;
    
    //    dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_FOREVER);
    
    if (dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return NO;
    }
    
    runAsynchronouslyOnVideoProcessingQueue(^{        
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
            
            [currentTarget setCurrentlyReceivingMonochromeInput:NO];
            [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
            [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
            [currentTarget newFrameReadyAtTime:kCMTimeIndefinite atIndex:textureIndexOfTarget];
        }
        
        dispatch_semaphore_signal(imageUpdateSemaphore);
        
        if (completion != nil) {
            completion();
        }
    });
    
    return YES;
}

It first does a non-blocking wait on the semaphore and returns immediately if it would otherwise have to wait. It then iterates over all targets, handing each one the input size and the outputFramebuffer and notifying it that a new frame is ready.

- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    static const GLfloat imageVertices[] = {
        -1.0f, -1.0f,
        1.0f, -1.0f,
        -1.0f,  1.0f,
        1.0f,  1.0f,
    };
    
    [self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];

    [self informTargetsAboutNewFrameAtTime:frameTime];
}
- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
    if (self.preventRendering)
    {
        [firstInputFramebuffer unlock];
        return;
    }
    
    [GPUImageContext setActiveShaderProgram:filterProgram];

    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
    [outputFramebuffer activateFramebuffer];
    if (usingNextFrameForImageCapture)
    {
        [outputFramebuffer lock];
    }

    [self setUniformsForProgramAtIndex:0];
    
    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT);

	glActiveTexture(GL_TEXTURE2);
	glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);
	
	glUniform1i(filterInputTextureUniform, 2);	

    glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
	glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
    
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    
    [firstInputFramebuffer unlock];
    
    if (usingNextFrameForImageCapture)
    {
        dispatch_semaphore_signal(imageCaptureSemaphore);
    }
}

Here the filter submits the vertices and texture coordinates, fetches its own outputFramebuffer from the cache, and binds its framebuffer; the texture of the framebuffer received from the picture is then bound and drawn, so the filter's framebuffer now contains the rendered output. Afterwards, the filter also checks whether its own targets array has entries, and if so the same process repeats down the chain.
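The textureCoordinatesForRotation: used above selects one of several static coordinate tables according to the input rotation; with no rotation it is simply the four corners of the texture, matching the four quad vertices (a sketch of the no-rotation table):

// One (s, t) pair per vertex of the full-screen triangle strip
static const GLfloat noRotationTextureCoordinates[] = {
    0.0f, 0.0f, // bottom left
    1.0f, 0.0f, // bottom right
    0.0f, 1.0f, // top left
    1.0f, 1.0f, // top right
};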

Reading the Image Back from the Framebuffer

- (CGImageRef)newCGImageFromCurrentlyProcessedOutput
{
    // Give it three seconds to process, then abort if they forgot to set up the image capture properly
    double timeoutForImageCapture = 3.0;
    dispatch_time_t convertedTimeout = dispatch_time(DISPATCH_TIME_NOW, timeoutForImageCapture * NSEC_PER_SEC);

    if (dispatch_semaphore_wait(imageCaptureSemaphore, convertedTimeout) != 0)
    {
        return NULL;
    }

    GPUImageFramebuffer* framebuffer = [self framebufferForOutput];
    
    usingNextFrameForImageCapture = NO;
    dispatch_semaphore_signal(imageCaptureSemaphore);
    
    CGImageRef image = [framebuffer newCGImageFromFramebufferContents];
    return image;
}

This waits on imageCaptureSemaphore (with a three-second timeout), which the render pass signals once drawing is done, and then uses the filter's outputFramebuffer to generate a CGImage.
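The imageFromCurrentFramebuffer call in the ViewController example is just a thin wrapper over this method; from memory it looks roughly like this (the parameterless variant additionally derives an orientation from the current device orientation):

- (UIImage *)imageFromCurrentFramebufferWithOrientation:(UIImageOrientation)imageOrientation;
{
    CGImageRef cgImageFromBytes = [self newCGImageFromCurrentlyProcessedOutput];
    UIImage *finalImage = [UIImage imageWithCGImage:cgImageFromBytes scale:1.0 orientation:imageOrientation];
    CGImageRelease(cgImageFromBytes); // the new- method returns a +1 reference
    return finalImage;
}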

- (CGImageRef)newCGImageFromFramebufferContents;
{
	...
	__block CGImageRef cgImageFromBytes;
    
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
        
        NSUInteger totalBytesForImage = (int)_size.width * (int)_size.height * 4;
        // It appears that the width of a texture must be padded out to be a multiple of 8 (32 bytes) if reading from it using a texture cache
        
        GLubyte *rawImagePixels;
        
        CGDataProviderRef dataProvider = NULL;
        
   		...
   		[self activateFramebuffer];
       rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
       glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
       dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
       [self unlock]; // Don't need to keep this around anymore
            
        ...
        cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, 4 * (int)_size.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
        
        ...
        // Capture image with current device orientation
        CGDataProviderRelease(dataProvider);
        CGColorSpaceRelease(defaultRGBColorSpace);
        
    });
	...
	
}               

This method binds the framebuffer, reads its contents back with glReadPixels, and then builds a CGImage from the raw bytes to return. With that, the entire filtering pipeline is complete.

Summary

Across the whole pipeline, GPUImagePicture receives the image data, writes it into a texture, and creates a framebuffer; GPUImageFilter compiles and links the shader program and sends the vertices, texture coordinates, and image to the shaders; and finally GPUImageFramebuffer reads the data back out of the framebuffer.
Having read through the source, it lives up to its reputation as a veteran image-processing library: OpenGL is encapsulated very cleanly, and the processing-chain design is clever. GPUImageFilter both inherits from GPUImageOutput and implements the GPUImageInput protocol, so a filter can serve as input and output at the same time.
There are still things I'm not sure about, for example the exact role of the semaphores in the filter; their purpose probably doesn't show when processing a single image. More to dig into later.