[Swift] [iOS] Making full use of CIFilter for negative/positive conversion of film photos

The film camera boom is still going strong. Unlike digital cameras, digitizing film after development is costly, so shot negatives often go unorganized and pile up. Dedicated scanners exist, but many people would rather manage with just an iPhone.

In fact, several film scanner apps have been released on the App Store; a well-known one is Kodak Mobile Film Scanner.

However, few resources explain how these apps implement negative-to-positive conversion on iOS, since so few people develop them. So this time I would like to implement the same negative/positive conversion myself.

Getting a positive image from a negative image

With negative film, the colors of the subject are inverted when the image is recorded: a white subject appears black on the negative. This is called a negative image. When a negative is printed or digitized, the colors are inverted again to restore the subject's original colors; the result is called a positive image.

Before digitization, prints were made directly from film onto photographic paper. Because photographic paper is also coated with an emulsion that inverts color, just like negative film, a frame of the negative was projected onto the paper with an enlarger or similar device; exposing the negative image onto the paper inverted the colors once more and restored the subject's colors at the time of shooting. In digitization, the same effect is achieved by re-inverting the colors of the negative image with image processing.

Simply put, you can get a positive image from negative film by re-inverting the colors.

Invert the tone curve

To get a positive image from a negative image, you need to re-invert the colors, which you can do by flipping the tone curve upside down.

In Swift, this can be done with the tone curve filter (**CIToneCurve**) provided through **CIFilter**.
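As an aside, Core Image also ships a dedicated CIColorInvert filter that performs a straight inversion; if all you need is a simple flip with no further per-channel control, a one-line sketch would be:

```swift
import CoreImage

// Simple inversion via the built-in CIColorInvert filter
// (equivalent to flipping the tone curve end-to-end).
func invert(_ image: CIImage) -> CIImage {
    return image.applyingFilter("CIColorInvert")
}
```

The tone-curve approach is used here instead because it extends naturally to the per-channel correction covered later in this article.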

What is CIFilter?

CIFilter is the image-processing filter class provided by the Core Image framework. Core Image is a long-standing framework, available since iOS 5, and now ships more than 100 built-in filters. If you can make full use of CIFilter, you can build rich image-editing apps.

You can check the available filter types in this document, so please do try them out: Core Image Filter Reference
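You can also enumerate the available filters at runtime; a small sketch:

```swift
import CoreImage

// Print every built-in Core Image filter name available on the current OS
for name in CIFilter.filterNames(inCategory: kCICategoryBuiltIn) {
    print(name)
}
```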

Use ToneCurveFilter

The sample code is as follows.

import UIKit

class ToneReverse {

    func demo() {
        guard
            // Load assuming there is a negative image named "nega" in the Asset Catalog
            let negaImage = UIImage(named: "nega"),
            // Convert from UIImage to CIImage for processing by CIFilter
            let ciNegaImage = CIImage(image: negaImage),
            // Get the inverted image via the tone-curve reversal method
            let toneReversedImage = reverse(with: ciNegaImage) else {
            return
        }
        // Convert back from CIImage to UIImage
        let posiImage = UIImage(ciImage: toneReversedImage)
        _ = posiImage
    }
    

    /// Tone Reverse
    /// - Parameter image: base image
    /// - Returns: filtered image
    func reverse(with image: CIImage) -> CIImage? {
        return image.applyingFilter(
            "CIToneCurve",
            parameters: [
                "inputPoint0": CIVector(x: 0.0, y: 1.0),
                "inputPoint1": CIVector(x: 0.25, y: 0.75),
                "inputPoint2": CIVector(x: 0.5, y: 0.5),
                "inputPoint3": CIVector(x: 0.75, y: 0.25),
                "inputPoint4": CIVector(x: 1.0, y: 0.0)
            ])
    }
    
}

CIToneCurve lets you set the tone-curve control points via **inputPoint0** through **inputPoint4**. The default, untouched tone curve is shown in the image below. To invert colors with the tone curve, you simply flip it upside down, which you can do by inverting the Y coordinate of each point from Point 0 to Point 4.


Now it is possible to get a positive image from a negative image. However, the bluish tint is too strong to reproduce the colors of the subject at the time of shooting. Reproducing the subject's colors requires further color correction from here.

Perform color correction

We now have a positive image of sorts, but it is hard to look at as-is, so color correction is needed. The CIToneCurve filter provided by Core Image, however, can only manipulate the tone curve of the composite RGB channel. To reproduce the subject's colors, the tone of each Red, Green, and Blue channel must be adjusted individually to restore the balance. In this case the image has a strong bluish tint, so the blue tones need to be reduced to rebalance the image.

However, as mentioned above, the tone curve that CIFilter provides only operates on the composite channel. So what should we do? Create a custom filter using **CIKernel**. These days, CIKernel kernels are written in Metal.

What is Metal

Metal is a graphics API for Apple platforms, introduced in 2014. Kernels are written in the C++-based Metal Shading Language.

Settings when using Metal

To use Metal with Core Image, you need to specify **-fcikernel** in both **Other Metal Compiler Flags** and **Other Metal Linker Flags** in Build Settings. There is a pitfall here: the official Apple document below says to add a user-defined build setting named MTLLINKER_FLAGS set to **-cikernel**, but under Xcode 12.3 that produces a warning asking you to use the settings described above instead. Apple - Type Method kernelWithFunctionName:fromMetalLibraryData:error:
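For reference, the relevant Build Settings entries look like this (as of Xcode 12.3; later Xcode versions may differ):

```
Other Metal Compiler Flags:  -fcikernel
Other Metal Linker Flags:    -fcikernel
```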

Implementation on the Metal side

What we want to do this time is apply a tone curve to each RGB channel separately, then composite the per-channel results into a single image. The tone-curve operation itself is performed per channel with CIFilter; from each resulting CIImage only the relevant element (R, G, or B) is extracted, and the three elements are combined to generate one image. This realizes a tone curve that can be manipulated per channel. I referred to this page while implementing it: HSL color adjustment filter in an iOS 8.0+ app using CoreImage

#include <metal_stdlib>
using namespace metal;

#include <CoreImage/CoreImage.h>
extern "C" {
    namespace coreimage {
        // Take the R component of the red-curve image, the G component of the
        // green, and the B component of the blue, and composite them into one
        // opaque pixel.
        float4 rgbChannelCompositing(sample_t red, sample_t green, sample_t blue) {
            return float4(red.r, green.g, blue.b, 1.0);
        }
    }
}

Write the above implementation in a file with any name and the .metal extension.

Implementation on the Swift side

First, the whole code. The X and Y values in ToneCurvePointModel are values adjusted for this demonstration; by fine-tuning them you can shift the result toward warmer or cooler tones.
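The code below references ToneCurvePointModel, which is not shown in the article. A minimal definition consistent with how it is used (the property names and initializer labels are inferred from the call sites, not confirmed) might be:

```swift
import CoreImage

/// Wraps the five control points passed to a CIToneCurve filter.
struct ToneCurvePointModel {
    let pointZero: CIVector
    let pointOne: CIVector
    let pointTwo: CIVector
    let pointThree: CIVector
    let pointFour: CIVector

    init(zeroX: CGFloat, zeroY: CGFloat,
         oneX: CGFloat, oneY: CGFloat,
         twoX: CGFloat, twoY: CGFloat,
         threeX: CGFloat, threeY: CGFloat,
         fourX: CGFloat, fourY: CGFloat) {
        pointZero  = CIVector(x: zeroX, y: zeroY)
        pointOne   = CIVector(x: oneX, y: oneY)
        pointTwo   = CIVector(x: twoX, y: twoY)
        pointThree = CIVector(x: threeX, y: threeY)
        pointFour  = CIVector(x: fourX, y: fourY)
    }
}
```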

import CoreImage

class RgbCompositing {

    private let redToneCurveModel = ToneCurvePointModel(zeroX: 0, zeroY: 0,
                                                        oneX: 0.38, oneY: 0.07,
                                                        twoX: 0.63, twoY: 0.24,
                                                        threeX: 0.78, threeY: 0.49,
                                                        fourX: 1.0, fourY: 1.0)

    private let greenToneCurveModel = ToneCurvePointModel(zeroX: 0, zeroY: 0,
                                                          oneX: 0.67, oneY: 0.07,
                                                          twoX: 0.85, twoY: 0.19,
                                                          threeX: 0.92, threeY: 0.39,
                                                          fourX: 1.0, fourY: 0.90)

    private let blueToneCurveModel = ToneCurvePointModel(zeroX: 0, zeroY: 0,
                                                         oneX: 0.69, oneY: 0.06,
                                                         twoX: 0.90, twoY: 0.20,
                                                         threeX: 0.96, threeY: 0.44,
                                                         fourX: 1, fourY: 0.92)

    
    func rgbCompositing(with image: CIImage) -> CIImage? {
        let redImage = image.applyingFilter(
            "CIToneCurve",
            parameters: [
                "inputPoint0": redToneCurveModel.pointZero,
                "inputPoint1": redToneCurveModel.pointOne,
                "inputPoint2": redToneCurveModel.pointTwo,
                "inputPoint3": redToneCurveModel.pointThree,
                "inputPoint4": redToneCurveModel.pointFour
            ])
        let greenImage = image.applyingFilter(
            "CIToneCurve",
            parameters: [
                "inputPoint0": greenToneCurveModel.pointZero,
                "inputPoint1": greenToneCurveModel.pointOne,
                "inputPoint2": greenToneCurveModel.pointTwo,
                "inputPoint3": greenToneCurveModel.pointThree,
                "inputPoint4": greenToneCurveModel.pointFour
            ])
        let blueImage = image.applyingFilter(
            "CIToneCurve",
            parameters: [
                "inputPoint0": blueToneCurveModel.pointZero,
                "inputPoint1": blueToneCurveModel.pointOne,
                "inputPoint2": blueToneCurveModel.pointTwo,
                "inputPoint3": blueToneCurveModel.pointThree,
                "inputPoint4": blueToneCurveModel.pointFour
            ])
        
        guard
            let url = Bundle.main.url(forResource: "default", withExtension: "metallib"),
            let data = try? Data(contentsOf: url),
            let rgbChannelCompositingKernel = try? CIColorKernel(functionName: "rgbChannelCompositing", fromMetalLibraryData: data) else {
            return nil
        }
        let extent = redImage.extent.union(greenImage.extent.union(blueImage.extent))
        let arguments = [redImage, greenImage, blueImage]

        guard let ciImage = rgbChannelCompositingKernel.apply(extent: extent, arguments: arguments) else {
            return nil
        }
        return ciImage
    }
    
}

Let's break it down piece by piece.

func rgbCompositing(with image: CIImage) -> CIImage? {
    let redImage = image.applyingFilter(
        "CIToneCurve",
        parameters: [
            "inputPoint0": redToneCurveModel.pointZero,
            "inputPoint1": redToneCurveModel.pointOne,
            "inputPoint2": redToneCurveModel.pointTwo,
            "inputPoint3": redToneCurveModel.pointThree,
            "inputPoint4": redToneCurveModel.pointFour
        ])
    let greenImage = image.applyingFilter(
        "CIToneCurve",
        parameters: [
            "inputPoint0": greenToneCurveModel.pointZero,
            "inputPoint1": greenToneCurveModel.pointOne,
            "inputPoint2": greenToneCurveModel.pointTwo,
            "inputPoint3": greenToneCurveModel.pointThree,
            "inputPoint4": greenToneCurveModel.pointFour
        ])
    let blueImage = image.applyingFilter(
        "CIToneCurve",
        parameters: [
            "inputPoint0": blueToneCurveModel.pointZero,
            "inputPoint1": blueToneCurveModel.pointOne,
            "inputPoint2": blueToneCurveModel.pointTwo,
            "inputPoint3": blueToneCurveModel.pointThree,
            "inputPoint4": blueToneCurveModel.pointFour
        ])

In the part above, the per-channel tone-curve values for Red, Green, and Blue are each applied to the received CIImage. At this point, the CIImage stored in redImage and the others is still an image whose tone curve was adjusted on the composite channel, so the desired per-channel color correction has not happened yet.

        guard
            let url = Bundle.main.url(forResource: "default", withExtension: "metallib"),
            let data = try? Data(contentsOf: url),
            let rgbChannelCompositingKernel = try? CIColorKernel(functionName: "rgbChannelCompositing", fromMetalLibraryData: data) else {
            return nil
        }

In the part above, a CIColorKernel is generated. When the project is compiled, the Metal file is output in metallib format directly under the app bundle. Unless configured otherwise, the resource appears to be written out as default.metallib. Bundle.main.url(forResource:withExtension:) gets the path of default.metallib, and Data(contentsOf:) loads the file as Data. Finally, passing the name of the kernel function defined in the Metal file together with the loaded default.metallib to CIColorKernel(functionName:fromMetalLibraryData:) makes the rgbChannelCompositing function implemented on the Metal side available.
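Note that try? swallows the underlying error. While debugging, it can help to surface it with do/catch instead; a hedged variant of the same loading code:

```swift
import CoreImage

// Same kernel loading as above, but reporting failures instead of
// silently returning nil.
func makeCompositingKernel() -> CIColorKernel? {
    guard let url = Bundle.main.url(forResource: "default", withExtension: "metallib") else {
        print("default.metallib not found in the app bundle")
        return nil
    }
    do {
        let data = try Data(contentsOf: url)
        return try CIColorKernel(functionName: "rgbChannelCompositing", fromMetalLibraryData: data)
    } catch {
        print("Failed to create CIColorKernel: \(error)")
        return nil
    }
}
```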

        let extent = redImage.extent.union(greenImage.extent.union(blueImage.extent))
        let arguments = [redImage, greenImage, blueImage]

        guard let ciImage = rgbChannelCompositingKernel.apply(extent: extent, arguments: arguments) else {
            return nil
        }
        return ciImage
    }

Then the kernel is used to produce the new image. extent is the output region of the image; union is used to get a rect containing all three images (since all three are derived from the same original, the sizes should match anyway). arguments is the array of source images passed to the kernel; the order must match the kernel's parameters: red, green, blue. Calling apply(extent:arguments:) with these returns a CIImage with the tone curve adjusted for each RGB channel.

Deliverables

(Images: the original image, and the result after applying the filter.)

Applying negative/positive conversion to an image like this yields a color-corrected result.
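Putting the two steps together, the full negative-to-positive pipeline can be sketched as follows (the function name is mine; the classes are the ones defined above):

```swift
import UIKit

// Negative UIImage → positive, color-corrected UIImage.
func developNegative(_ negative: UIImage) -> UIImage? {
    guard let ciNega = CIImage(image: negative),
          // Step 1: invert the composite tone curve (negative → rough positive)
          let reversed = ToneReverse().reverse(with: ciNega),
          // Step 2: per-channel tone-curve correction to fix the bluish cast
          let corrected = RgbCompositing().rgbCompositing(with: reversed) else {
        return nil
    }
    // Render through a CIContext so the result is backed by a real bitmap;
    // UIImage(ciImage:) alone defers rendering and does not work everywhere
    // a bitmap-backed image is expected.
    let context = CIContext()
    guard let cgImage = context.createCGImage(corrected, from: corrected.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```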

Making it a framework

I have organized the whole process and packaged it as a framework. Sample code is included, so if you are interested, take a look below. (CocoaPods and Carthage support will come later.) NegaDeveloping - Masami Yamate

Finally

If you look around, someone somewhere has probably done this already, and an OSS framework may well exist, but it is fun to work through the mechanism yourself. I am developing this as an app for personal use, but if I finish it in time over the winter vacation I may publish it on the App Store, so please wait without expecting too much.
