GPUImage3 Source Code Study: The Video Filtering Process

GPUImage is a framework that performs filter processing with OpenGL. Its author later developed GPUImage2, which uses Swift with OpenGL, and then GPUImage3, which is built on Metal. Having previously gone through the basics of Metal, this time I'll study the GPUImage3 source code.

Capturing video, filtering it, and rendering it to the screen

import UIKit
import GPUImage
import AVFoundation

class ViewController: UIViewController {

    @IBOutlet weak var renderView: RenderView!
    
    var camera:Camera!
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        
        do {
            camera = try Camera(sessionPreset:.hd1280x720, cameraDevice: nil, location: .frontFacing, captureAsYUV: true)
            let filter = SketchFilter()
            camera --> filter --> renderView
            camera.startCapture()
        } catch {
            fatalError("Could not initialize rendering pipeline: \(error)")
        }
    }


}

This code creates a Camera object, a filter object, and a RenderView added in the Storyboard, then uses the --> operator to connect the three into a processing chain.

infix operator --> : AdditionPrecedence
@discardableResult public func --><T:ImageConsumer>(source:ImageSource, destination:T) -> T {
    source.addTarget(destination)
    return destination
}

--> is actually a custom operator: it adds the object on its right to the targets of the object on its left, then returns the right-hand object, so the operator can be applied repeatedly. (See: Swift custom operators.)
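
Because the operator returns its right-hand operand, several operations can be strung together in one expression. A quick sketch (the second filter here is hypothetical, purely to show the chaining):

// Each --> appends the right-hand object to the left-hand object's targets
// and returns it, so the chain reads left to right.
let sketch = SketchFilter()
let secondFilter = SomeOtherFilter()   // hypothetical operation, for illustration only
camera --> sketch --> secondFilter --> renderView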

Code analysis

The Camera object

public init(sessionPreset:AVCaptureSession.Preset, cameraDevice:AVCaptureDevice? = nil, location:PhysicalCameraLocation = .backFacing, captureAsYUV:Bool = true) throws {
        self.location = location
        
        self.captureSession = AVCaptureSession()
        self.captureSession.beginConfiguration()
        
        self.captureAsYUV = captureAsYUV
        
        if let cameraDevice = cameraDevice {
            self.inputCamera = cameraDevice
        } else {
            if let device = location.device() {
                self.inputCamera = device
            } else {
                self.videoInput = nil
                self.videoOutput = nil
                self.inputCamera = nil
                self.yuvConversionRenderPipelineState = nil
                super.init()
                throw CameraError()
            }
        }
        
        do {
            self.videoInput = try AVCaptureDeviceInput(device:inputCamera)
        } catch {
            self.videoInput = nil
            self.videoOutput = nil
            self.yuvConversionRenderPipelineState = nil
            super.init()
            throw error
        }
        
        if (captureSession.canAddInput(videoInput)) {
            captureSession.addInput(videoInput)
        }
        
        // Add the video frame output
        videoOutput = AVCaptureVideoDataOutput()
        videoOutput.alwaysDiscardsLateVideoFrames = false
		
		...
}

The initializer first creates the objects required for video capture, such as the AVCaptureSession.

public init(sessionPreset:AVCaptureSession.Preset, cameraDevice:AVCaptureDevice? = nil, location:PhysicalCameraLocation = .backFacing, captureAsYUV:Bool = true) throws {
		...
		if captureAsYUV {
            supportsFullYUVRange = false
            let supportedPixelFormats = videoOutput.availableVideoPixelFormatTypes
            for currentPixelFormat in supportedPixelFormats {
                if ((currentPixelFormat as NSNumber).int32Value == Int32(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)) {
                    supportsFullYUVRange = true
                }
            }
            if (supportsFullYUVRange) {
                let (pipelineState, lookupTable) = generateRenderPipelineState(device:sharedMetalRenderingDevice, vertexFunctionName:"twoInputVertex", fragmentFunctionName:"yuvConversionFullRangeFragment", operationName:"YUVToRGB")
                self.yuvConversionRenderPipelineState = pipelineState
                self.yuvLookupTable = lookupTable
                videoOutput.videoSettings = [kCVPixelBufferMetalCompatibilityKey as String: true,
                                             kCVPixelBufferPixelFormatTypeKey as String:NSNumber(value:Int32(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange))]
            } else {
                let (pipelineState, lookupTable) = generateRenderPipelineState(device:sharedMetalRenderingDevice, vertexFunctionName:"twoInputVertex", fragmentFunctionName:"yuvConversionVideoRangeFragment", operationName:"YUVToRGB")
                self.yuvConversionRenderPipelineState = pipelineState
                self.yuvLookupTable = lookupTable
                videoOutput.videoSettings = [kCVPixelBufferMetalCompatibilityKey as String: true,
                                             kCVPixelBufferPixelFormatTypeKey as String:NSNumber(value:Int32(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange))]
            }
        } else {
            self.yuvConversionRenderPipelineState = nil
            videoOutput.videoSettings = [kCVPixelBufferMetalCompatibilityKey as String: true,
                                         kCVPixelBufferPixelFormatTypeKey as String:NSNumber(value:Int32(kCVPixelFormatType_32BGRA))]
        }
		
		...
}

Here the capture format is chosen according to the captureAsYUV flag. When capturing YUV, the code also checks whether full-range YUV (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) is supported, and once the format is configured it creates an MTLRenderPipelineState dedicated to converting YUV to RGB. sharedMetalRenderingDevice is a global rendering device that wraps an MTLDevice and also holds an MTLCommandQueue and an MTLLibrary. When capturing BGRA instead, the video format is simply set and no conversion pipeline is needed.
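
A minimal sketch of what this wrapper might look like, based only on the members referenced in this article (device, commandQueue, shaderLibrary); it is not a verbatim copy of the GPUImage3 source:

import Metal

// Sketch of the global rendering device wrapper (assumed shape, see above).
public class MetalRenderingDevice {
    public let device: MTLDevice
    public let commandQueue: MTLCommandQueue
    public let shaderLibrary: MTLLibrary

    init() {
        guard let device = MTLCreateSystemDefaultDevice() else {
            fatalError("Could not create Metal device")
        }
        self.device = device

        guard let queue = device.makeCommandQueue() else {
            fatalError("Could not create Metal command queue")
        }
        self.commandQueue = queue

        // GPUImage3 loads its shaders from the framework bundle; the default
        // library is used here to keep the sketch self-contained.
        guard let library = device.makeDefaultLibrary() else {
            fatalError("Could not load default Metal library")
        }
        self.shaderLibrary = library
    }
}

// The single shared instance used throughout the framework.
public let sharedMetalRenderingDevice = MetalRenderingDevice()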

public init(sessionPreset:AVCaptureSession.Preset, cameraDevice:AVCaptureDevice? = nil, location:PhysicalCameraLocation = .backFacing, captureAsYUV:Bool = true) throws {
		...
		if (captureSession.canAddOutput(videoOutput)) {
            captureSession.addOutput(videoOutput)
        }
        
        captureSession.sessionPreset = sessionPreset
        captureSession.commitConfiguration()
        
        super.init()
        
        let _ = CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, sharedMetalRenderingDevice.device, nil, &videoTextureCache)

        videoOutput.setSampleBufferDelegate(self, queue:cameraProcessingQueue)
}

Finally, a Metal texture cache (CVMetalTextureCache) is created and the camera registers itself as the video output's sample buffer delegate.

The filter object

The filter used here is SketchFilter, a subclass of TextureSamplingOperation, which in turn inherits from BasicOperation. BasicOperation conforms to the ImageProcessingOperation protocol, and ImageProcessingOperation refines both ImageConsumer and ImageSource, meaning a BasicOperation can act as both a source and a consumer of image data, so it can form one link in the processing chain.
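
A simplified sketch of these protocol relationships (abridged; the real GPUImage3 declarations also manage target and source containers and relay previously rendered images):

// Abridged protocol sketch; Texture is GPUImage3's wrapper around MTLTexture.
public protocol ImageSource {
    // Used by the --> operator to append a consumer to this source's targets.
    func addTarget(_ target: ImageConsumer)
}

public protocol ImageConsumer: AnyObject {
    // How many input textures this consumer needs before it can render.
    var maximumInputs: UInt { get }
    // Called by an upstream source whenever a new texture is ready.
    func newTextureAvailable(_ texture: Texture, fromSourceIndex: UInt)
}

// An operation is both a consumer and a source, so it can sit in the middle of the chain.
public protocol ImageProcessingOperation: ImageConsumer, ImageSource {}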

public init(vertexFunctionName: String? = nil, fragmentFunctionName: String, numberOfInputs: UInt = 1, operationName: String = #file) {
        self.maximumInputs = numberOfInputs
        self.operationName = operationName
        
        let concreteVertexFunctionName = vertexFunctionName ?? defaultVertexFunctionNameForInputs(numberOfInputs)
        let (pipelineState, lookupTable) = generateRenderPipelineState(device:sharedMetalRenderingDevice, vertexFunctionName:concreteVertexFunctionName, fragmentFunctionName:fragmentFunctionName, operationName:operationName)
        self.renderPipelineState = pipelineState
        self.uniformSettings = ShaderUniformSettings(uniformLookupTable:lookupTable)
    }

This is BasicOperation's initializer. It first records the number of inputs (textures) passed in by the caller, defaulting to 1, and stores operationName (which defaults to #file, the source file path) for use in error messages. It then checks whether a vertex shader function name was supplied; if not, a default vertex function is chosen based on the number of inputs. Finally, the vertex and fragment function names are used to create the MTLRenderPipelineState that performs this filter's rendering.
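
For illustration, this is roughly how a concrete single-input operation is declared on top of BasicOperation; the "luminanceFragment" shader name is an assumption here, standing in for whatever fragment function the filter actually uses:

// A minimal filter: all it does is point BasicOperation at a fragment shader.
// BasicOperation takes care of the pipeline state, the uniform lookup table,
// and the chain plumbing.
public class Luminance: BasicOperation {
    public init() {
        super.init(fragmentFunctionName: "luminanceFragment", numberOfInputs: 1)
    }
}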

RenderView

RenderView inherits from MTKView and conforms to the ImageConsumer protocol, so it can act as a consumer of image data and serve as the endpoint of the processing chain.

public override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect, device: sharedMetalRenderingDevice.device)
        
        commonInit()
    }
    
    public required init(coder: NSCoder) {
        super.init(coder: coder)
        
        commonInit()
    }
    
    private func commonInit() {
        framebufferOnly = false
        autoResizeDrawable = true
        
        self.device = sharedMetalRenderingDevice.device
        
        let (pipelineState, _) = generateRenderPipelineState(device:sharedMetalRenderingDevice, vertexFunctionName:"oneInputVertex", fragmentFunctionName:"passthroughFragment", operationName:"RenderView")
        self.renderPipelineState = pipelineState
        
        enableSetNeedsDisplay = false
        isPaused = true
    }

When RenderView is initialized, it also creates an MTLRenderPipelineState used to render incoming textures.

Creating an MTLRenderPipelineState

func generateRenderPipelineState(device:MetalRenderingDevice, vertexFunctionName:String, fragmentFunctionName:String, operationName:String) -> (MTLRenderPipelineState, [String:(Int, MTLDataType)]) {
    guard let vertexFunction = device.shaderLibrary.makeFunction(name: vertexFunctionName) else {
        fatalError("\(operationName): could not compile vertex function \(vertexFunctionName)")
    }
    
    guard let fragmentFunction = device.shaderLibrary.makeFunction(name: fragmentFunctionName) else {
        fatalError("\(operationName): could not compile fragment function \(fragmentFunctionName)")
    }
    
    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.colorAttachments[0].pixelFormat = MTLPixelFormat.bgra8Unorm
    descriptor.rasterSampleCount = 1
    descriptor.vertexFunction = vertexFunction
    descriptor.fragmentFunction = fragmentFunction
    
    do {
        var reflection:MTLAutoreleasedRenderPipelineReflection?
        let pipelineState = try device.device.makeRenderPipelineState(descriptor: descriptor, options: [.bufferTypeInfo, .argumentInfo], reflection: &reflection)

        var uniformLookupTable:[String:(Int, MTLDataType)] = [:]
        if let fragmentArguments = reflection?.fragmentArguments {
            for fragmentArgument in fragmentArguments where fragmentArgument.type == .buffer {
                if (fragmentArgument.bufferDataType == .struct) {
                    for (index, uniform) in fragmentArgument.bufferStructType.members.enumerated() {
                        uniformLookupTable[uniform.name] = (index, uniform.dataType)
                    }
                }
            }
        }
        
        return (pipelineState, uniformLookupTable)
    } catch {
        fatalError("Could not create render pipeline state for vertex:\(vertexFunctionName), fragment:\(fragmentFunctionName), error:\(error)")
    }
}

In the function that creates the MTLRenderPipelineState, the vertex and fragment shaders are first obtained as two MTLFunction objects. A pipeline descriptor (MTLRenderPipelineDescriptor) is then created and given those functions, and the global MTLDevice builds the pipeline state (MTLRenderPipelineState). Finally, MTLRenderPipelineReflection is used to read the fragment shader's arguments: the members of any buffer-typed struct argument are stored in a dictionary keyed by name, and the pipeline state is returned together with that uniform lookup table.
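
A hypothetical call site, just to show how the returned tuple is consumed (the shader names are the ones RenderView uses; generateRenderPipelineState itself lives inside the framework):

// Build a pipeline state for a one-input passthrough and print the uniforms
// discovered by the reflection pass.
let (pipelineState, uniformLookupTable) = generateRenderPipelineState(
    device: sharedMetalRenderingDevice,
    vertexFunctionName: "oneInputVertex",
    fragmentFunctionName: "passthroughFragment",
    operationName: "Example")

for (name, (index, dataType)) in uniformLookupTable {
    print("uniform \(name): member index \(index), type \(dataType)")
}
_ = pipelineState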

Processing the video data captured by the Camera

public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
		...
		let cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let bufferWidth = CVPixelBufferGetWidth(cameraFrame)
        let bufferHeight = CVPixelBufferGetHeight(cameraFrame)
        let _ = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        
        CVPixelBufferLockBaseAddress(cameraFrame, CVPixelBufferLockFlags(rawValue:CVOptionFlags(0)))
		
		cameraFrameProcessingQueue.async {
			...
		}
}

In the video data callback, the pixel buffer is first obtained from the CMSampleBuffer, then its width and height are read, and the buffer's base address is locked before further processing is dispatched to cameraFrameProcessingQueue.

public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        
       ...
        cameraFrameProcessingQueue.async {
            self.delegate?.didCaptureBuffer(sampleBuffer)
            CVPixelBufferUnlockBaseAddress(cameraFrame, CVPixelBufferLockFlags(rawValue:CVOptionFlags(0)))
            
            let texture:Texture?
            if self.captureAsYUV {
                var luminanceTextureRef:CVMetalTexture? = nil
                var chrominanceTextureRef:CVMetalTexture? = nil
                // Luminance plane
                let _ = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, self.videoTextureCache!, cameraFrame, nil, .r8Unorm, bufferWidth, bufferHeight, 0, &luminanceTextureRef)
                // Chrominance plane
                let _ = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, self.videoTextureCache!, cameraFrame, nil, .rg8Unorm, bufferWidth / 2, bufferHeight / 2, 1, &chrominanceTextureRef)
                
                if let concreteLuminanceTextureRef = luminanceTextureRef, let concreteChrominanceTextureRef = chrominanceTextureRef,
                    let luminanceTexture = CVMetalTextureGetTexture(concreteLuminanceTextureRef), let chrominanceTexture = CVMetalTextureGetTexture(concreteChrominanceTextureRef) {
                    
                    let conversionMatrix:Matrix3x3
                    if (self.supportsFullYUVRange) {
                        conversionMatrix = colorConversionMatrix601FullRangeDefault
                    } else {
                        conversionMatrix = colorConversionMatrix601Default
                    }
                    
                    let outputWidth:Int
                    let outputHeight:Int
                    if self.location.imageOrientation().rotationNeeded(for:.portrait).flipsDimensions() {
                        outputWidth = bufferHeight
                        outputHeight = bufferWidth
                    } else {
                        outputWidth = bufferWidth
                        outputHeight = bufferHeight
                    }
                    let outputTexture = Texture(device:sharedMetalRenderingDevice.device, orientation:.portrait, width:outputWidth, height:outputHeight)
                    
                    convertYUVToRGB(pipelineState:self.yuvConversionRenderPipelineState!, lookupTable:self.yuvLookupTable,
                                    luminanceTexture:Texture(orientation: self.location.imageOrientation(), texture:luminanceTexture),
                                    chrominanceTexture:Texture(orientation: self.location.imageOrientation(), texture:chrominanceTexture),
                                    resultTexture:outputTexture, colorConversionMatrix:conversionMatrix)
                    texture = outputTexture
                } else {
                    texture = nil
                }
            } else {
                var textureRef:CVMetalTexture? = nil
                let _ = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, self.videoTextureCache!, cameraFrame, nil, .bgra8Unorm, bufferWidth, bufferHeight, 0, &textureRef)
                if let concreteTexture = textureRef, let cameraTexture = CVMetalTextureGetTexture(concreteTexture) {
                    texture = Texture(orientation: self.location.imageOrientation(), texture: cameraTexture)
                } else {
                    texture = nil
                }
            }
            
            if texture != nil {
                self.updateTargetsWithTexture(texture!)
            }

            ...
        }
    }

Next the code checks whether the data is YUV. If it is, two textures are created from the pixel buffer, one for the Y plane and one for the interleaved CbCr plane, and a color conversion matrix is chosen; convertYUVToRGB is then called to convert the data to RGB and write it into outputTexture. If the data is BGRA, it is wrapped in a texture directly. The resulting texture is then handed to the next link in the processing chain.
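
For reference, a per-pixel sketch of the full-range BT.601 conversion that the YUV-to-RGB pass effectively performs (standard coefficients; the exact constants inside colorConversionMatrix601FullRangeDefault may be arranged differently):

// Full-range BT.601: Y in [0,1], Cb/Cr stored in [0,1] and re-centered around 0.
func yuvFullRangeToRGB(y: Float, cb: Float, cr: Float) -> (r: Float, g: Float, b: Float) {
    let cb = cb - 0.5
    let cr = cr - 0.5
    let r = y + 1.402 * cr
    let g = y - 0.344 * cb - 0.714 * cr
    let b = y + 1.772 * cb
    return (r, g, b)
}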

public func convertYUVToRGB(pipelineState:MTLRenderPipelineState, lookupTable:[String:(Int, MTLDataType)], luminanceTexture:Texture, chrominanceTexture:Texture, secondChrominanceTexture:Texture? = nil, resultTexture:Texture, colorConversionMatrix:Matrix3x3) {
    let uniformSettings = ShaderUniformSettings(uniformLookupTable:lookupTable)
    uniformSettings["colorConversionMatrix"] = colorConversionMatrix
    
    guard let commandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer() else {return}
    
    let inputTextures:[UInt:Texture]
    if let secondChrominanceTexture = secondChrominanceTexture {
        inputTextures = [0:luminanceTexture, 1:chrominanceTexture, 2:secondChrominanceTexture]
    } else {
        inputTextures = [0:luminanceTexture, 1:chrominanceTexture]
    }
    
    commandBuffer.renderQuad(pipelineState:pipelineState, uniformSettings:uniformSettings, inputTextures:inputTextures, useNormalizedTextureCoordinates:true, outputTexture:resultTexture)
    commandBuffer.commit()
}

extension MTLCommandBuffer {
    func renderQuad(pipelineState:MTLRenderPipelineState, uniformSettings:ShaderUniformSettings? = nil, inputTextures:[UInt:Texture], useNormalizedTextureCoordinates:Bool = true, imageVertices:[Float] = standardImageVertices, outputTexture:Texture, outputOrientation:ImageOrientation = .portrait) {
        let vertexBuffer = sharedMetalRenderingDevice.device.makeBuffer(bytes: imageVertices,
                                                                        length: imageVertices.count * MemoryLayout<Float>.size,
                                                                        options: [])!
        vertexBuffer.label = "Vertices"
        
        
        let renderPass = MTLRenderPassDescriptor()
        renderPass.colorAttachments[0].texture = outputTexture.texture
        renderPass.colorAttachments[0].clearColor = MTLClearColorMake(1, 0, 0, 1)
        renderPass.colorAttachments[0].storeAction = .store
        renderPass.colorAttachments[0].loadAction = .clear
        
        guard let renderEncoder = self.makeRenderCommandEncoder(descriptor: renderPass) else {
            fatalError("Could not create render encoder")
        }
        renderEncoder.setFrontFacing(.counterClockwise)
        renderEncoder.setRenderPipelineState(pipelineState)
        renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        
        for textureIndex in 0..<inputTextures.count {
            let currentTexture = inputTextures[UInt(textureIndex)]!
            
            let inputTextureCoordinates = currentTexture.textureCoordinates(for:outputOrientation, normalized:useNormalizedTextureCoordinates)
            let textureBuffer = sharedMetalRenderingDevice.device.makeBuffer(bytes: inputTextureCoordinates,
                                                                             length: inputTextureCoordinates.count * MemoryLayout<Float>.size,
                                                                             options: [])!
            textureBuffer.label = "Texture Coordinates"

            renderEncoder.setVertexBuffer(textureBuffer, offset: 0, index: 1 + textureIndex)
            renderEncoder.setFragmentTexture(currentTexture.texture, index: textureIndex)
        }
        uniformSettings?.restoreShaderSettings(renderEncoder: renderEncoder)
        renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
        renderEncoder.endEncoding()
    }
}

In convertYUVToRGB, a command buffer is created first, the input textures are placed into a dictionary keyed by index, and renderQuad, an extension method on MTLCommandBuffer, is called. The command buffer is then committed, which starts the actual rendering.

In renderQuad, an MTLBuffer is first created from the vertex coordinates. Next, an MTLRenderPassDescriptor is set up with the output texture as colorAttachments[0], meaning the render result will be written into that texture. A render command encoder is then created, the pipeline state is set, and the vertex buffer is bound. The method iterates over the input texture dictionary, binding each texture (here the Y and CbCr planes) and a buffer of its texture coordinates at the corresponding index, then writes the uniform settings into the encoder, and finally issues the draw call.
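
The vertex data involved is just a full-screen quad. A sketch with assumed values (the actual standardImageVertices constant and the per-orientation texture coordinates are defined elsewhere in GPUImage3):

// Four vertices in normalized device coordinates, matching the
// .triangleStrip / vertexCount: 4 draw call above.
let quadVertices: [Float] = [
    -1.0, -1.0,   // bottom left
     1.0, -1.0,   // bottom right
    -1.0,  1.0,   // top left
     1.0,  1.0,   // top right
]

// One plausible no-rotation texture-coordinate set for those vertices; the real
// values come from Texture.textureCoordinates(for:normalized:), which accounts
// for the input orientation.
let noRotationTextureCoordinates: [Float] = [
    0.0, 1.0,
    1.0, 1.0,
    0.0, 0.0,
    1.0, 0.0,
]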

public func updateTargetsWithTexture(_ texture:Texture) {
        for (target, index) in targets {
            target.newTextureAvailable(texture, fromSourceIndex:index)
        }
    }

Once the captured frame has been converted into a texture, Camera calls updateTargetsWithTexture, which iterates over all of its targets and passes the texture to each one by calling its newTextureAvailable method.

The filter's newTextureAvailable

public func newTextureAvailable(_ texture: Texture, fromSourceIndex: UInt) {
        ...
        
        inputTextures[fromSourceIndex] = texture

        ...
        
        if (UInt(inputTextures.count) >= maximumInputs) {
            let outputWidth:Int
            let outputHeight:Int
            
            let firstInputTexture = inputTextures[0]!
            if firstInputTexture.orientation.rotationNeeded(for:.portrait).flipsDimensions() {
                outputWidth = firstInputTexture.texture.height
                outputHeight = firstInputTexture.texture.width
            } else {
                outputWidth = firstInputTexture.texture.width
                outputHeight = firstInputTexture.texture.height
            }

            if uniformSettings.usesAspectRatio {
                let outputRotation = firstInputTexture.orientation.rotationNeeded(for:.portrait)
                uniformSettings["aspectRatio"] = firstInputTexture.aspectRatio(for: outputRotation)
            }
            
            ...
        }
    }

This first stores the incoming texture in the input texture dictionary at its source index. Rendering only proceeds once the number of received inputs reaches maximumInputs; until then the method does nothing further. The output width and height are then computed from the first input texture's dimensions and the rotation required by its orientation.

public func newTextureAvailable(_ texture: Texture, fromSourceIndex: UInt) {
        ...
        
	guard let commandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer() else {return}

            let outputTexture = Texture(device:sharedMetalRenderingDevice.device, orientation: .portrait, width: outputWidth, height: outputHeight)
            
            if let alternateRenderingFunction = metalPerformanceShaderPathway, useMetalPerformanceShaders {
                var rotatedInputTextures: [UInt:Texture]
                if (firstInputTexture.orientation.rotationNeeded(for:.portrait) != .noRotation) {
                    let rotationOutputTexture = Texture(device:sharedMetalRenderingDevice.device, orientation: .portrait, width: outputWidth, height: outputHeight)
                    guard let rotationCommandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer() else {return}
                    rotationCommandBuffer.renderQuad(pipelineState: sharedMetalRenderingDevice.passthroughRenderState, uniformSettings: uniformSettings, inputTextures: inputTextures, useNormalizedTextureCoordinates: useNormalizedTextureCoordinates, outputTexture: rotationOutputTexture)
                    rotationCommandBuffer.commit()
                    rotatedInputTextures = inputTextures
                    rotatedInputTextures[0] = rotationOutputTexture
                } else {
                    rotatedInputTextures = inputTextures
                }
                alternateRenderingFunction(commandBuffer, rotatedInputTextures, outputTexture)
            } else {
                internalRenderFunction(commandBuffer: commandBuffer, outputTexture: outputTexture)
            }
            commandBuffer.commit()
            
            updateTargetsWithTexture(outputTexture)
}	

func internalRenderFunction(commandBuffer: MTLCommandBuffer, outputTexture: Texture) {
        commandBuffer.renderQuad(pipelineState: renderPipelineState, uniformSettings: uniformSettings, inputTextures: inputTextures, useNormalizedTextureCoordinates: useNormalizedTextureCoordinates, outputTexture: outputTexture)
    }

A command buffer and a texture to receive the output are then created. The code checks whether an alternate rendering function (a Metal Performance Shaders pathway) is in use; by default there is none, so the default rendering function runs, which simply calls renderQuad on the command buffer. The command buffer is then committed to execute the render, and finally the output texture is passed to the next target in the processing chain.

RenderView's newTextureAvailable

public func newTextureAvailable(_ texture:Texture, fromSourceIndex:UInt) {
        self.drawableSize = CGSize(width: texture.texture.width, height: texture.texture.height)
        currentTexture = texture
        self.draw()
    }
    
    public override func draw(_ rect:CGRect) {
        if let currentDrawable = self.currentDrawable, let imageTexture = currentTexture {
            let commandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer()
            
            let outputTexture = Texture(orientation: .portrait, texture: currentDrawable.texture)
            commandBuffer?.renderQuad(pipelineState: renderPipelineState, inputTextures: [0:imageTexture], outputTexture: outputTexture)
            
            commandBuffer?.present(currentDrawable)
            commandBuffer?.commit()
        }
    }

Here the view stores the texture and calls draw. Inside draw, a command buffer is again created and renderQuad is invoked on it; the difference is that the output texture wraps currentDrawable's texture, an MTKView property used for on-screen presentation, and the drawable is presented before the command buffer is committed.

Summary

GPUImage3 performs filter-chain processing much like GPUImage: every object on the chain owns an MTLRenderPipelineState that carries out its own filtering. Input and output are abstracted into the ImageSource and ImageConsumer protocols, and each render creates a command buffer and then commits its rendering commands. This chained pattern is well worth learning from, and Metal code feels more concise than OpenGL and a better fit for the habits of iOS developers.