[Swift] [Core ML] Convert CycleGAN to Core ML and run it on iOS

In this article, I will show how to convert CycleGAN to Core ML.

It is basically the same as the following article, but as of November 1, 2020, the model conversion can be done with Core ML Tools instead of tf-coreml, so this article focuses on that difference.

Convert Cycle GAN to Mobile Model (CoreML) https://qiita.com/john-rocky/items/3fbdb0892b639187c234

To cut to the chase, the conversion works without any problems with the following library versions.

tensorflow==2.2.0
keras==2.2.4
coremltools==4.0

Procedure

The steps below are carried out in Google Colaboratory.

** 1. Open the TensorFlow tutorial **

Open the tutorial from this page https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/cyclegan.ipynb

** 2. Match each library version to Core ML Tools 4.0 **

Uninstall the tensorflow and keras that come preinstalled on Colab, then install the appropriate versions along with Core ML Tools 4.0. You may need to restart the Colab runtime after installing.

Notebook


!pip uninstall -y tensorflow
!pip uninstall -y keras
!pip install tensorflow==2.2.0
!pip install keras==2.2.4
!pip install -U coremltools==4.0
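
After the runtime restarts, it is worth quickly confirming that the pinned versions are the ones actually loaded. This check is not part of the tutorial, just a sanity check.

Notebook


import tensorflow as tf
import keras
import coremltools as ct

# Confirm the versions installed above are active after the restart
print(tf.__version__)      # expect 2.2.0
print(keras.__version__)   # expect 2.2.4
print(ct.__version__)      # expect 4.0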

** 3. Train the model according to the tutorial **

You can train CycleGAN simply by running the tutorial cells from top to bottom, so continue as is.
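
For reference, the heart of the tutorial's training is a loop roughly like the sketch below. The names EPOCHS, train_horses, train_zebras, and train_step all come from the tutorial notebook itself, so run the tutorial cells as they are rather than this abbreviated version.

Notebook


# Abbreviated sketch of the tutorial's training loop (run the tutorial cells as-is instead)
EPOCHS = 40

for epoch in range(EPOCHS):
    # train_horses / train_zebras are the tf.data datasets prepared earlier in the tutorial
    for image_x, image_y in tf.data.Dataset.zip((train_horses, train_zebras)):
        # train_step updates generator_g / generator_f and both discriminators
        train_step(image_x, image_y)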

** 4. Save the model and convert it to Core ML **

Save the TensorFlow model.

Notebook


generator_g.save('./savedmodel')
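
Before converting, it does not hurt to confirm that the generator's input shape matches the 256x256 RGB image we will declare to the converter. This is just an optional check on the tutorial's generator_g Keras model, not a required step.

Notebook


# Optional sanity check: the generator should take a (None, 256, 256, 3) tensor
print(generator_g.input_shape)
print(generator_g.output_shape)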

Convert to Core ML. The arguments are much simpler than with tf-coreml.

Notebook


import coremltools as ct

input_name = generator_g.inputs[0].name.split(':')[0]
print(input_name)  # Check the input name
keras_output_node_name = generator_g.outputs[0].name.split(':')[0]
graph_output_node_name = keras_output_node_name.split('/')[-1]

mlmodel = ct.converters.convert('./savedmodel/',
                                inputs=[ct.ImageType(bias=[-1, -1, -1], scale=2/255, shape=(1, 256, 256, 3))],
                                output_names=[graph_output_node_name],
                                )
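
If you want to see what the converted model's inputs and outputs ended up being called, you can print the model description via the standard coremltools spec API. This is optional and just for checking.

Notebook


# Optional: inspect the converted model's input/output descriptions
spec = mlmodel.get_spec()
print(spec.description)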

Save the converted Core ML model.

Notebook


mlmodel.save('./cyclegan.mlmodel')
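
To get the .mlmodel file from Colab onto your Mac, you can either download it from the Files pane on the left or use Colab's standard files helper as shown below.

Notebook


# Download the converted model from the Colab runtime to your local machine
from google.colab import files
files.download('./cyclegan.mlmodel')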

** 5. Load the Core ML model in an Xcode project and display the result **

Create a project in Xcode.

Let's create an app that converts camera images with CycleGAN. Since the app uses the camera, remember to add an NSCameraUsageDescription entry to Info.plist.

Create a simple screen with just one UIImageView and one UIButton on Main.storyboard and connect them to the view controller: the UIImageView as an outlet and the UIButton as an action.

This time, inference is done through the Vision framework, but the result comes back as an MLMultiArray. It needs to be converted to a UIImage, which is easy with CoreMLHelpers, so copy the CoreMLHelpers source code into the project.

Specifically, I copied all the source files under https://github.com/hollance/CoreMLHelpers/tree/master/CoreMLHelpers into my project.

For how to use CoreMLHelpers, this article is a good reference:

Converting a MultiArray to an Image with CoreMLHelpers https://qiita.com/john-rocky/items/bfa631aba3e7aabc2a66

The processing is written in the view controller: it captures a camera image, passes it to the Vision framework, converts the result to a UIImage, and displays it in the UIImageView.

The part that passes the camera image to the Vision framework and extracts the inference result as a UIImage is factored out into a class called CycleGanConverter.

ViewController.swift


import UIKit
import Vision

class ViewController: UIViewController, UIImagePickerControllerDelegate & UINavigationControllerDelegate {
    @IBOutlet weak var imageView: UIImageView!
    //Cycle GAN converter
    let converter = CycleGanConverter()

    override func viewDidLoad() {
        super.viewDidLoad()
    }
    
    @IBAction func tappedCamera(_ sender: Any) {
        //Start the camera when the camera button is pressed
        if UIImagePickerController.isSourceTypeAvailable(UIImagePickerController.SourceType.camera) {

            let picker = UIImagePickerController()
            picker.modalPresentationStyle = UIModalPresentationStyle.fullScreen
            picker.delegate = self
            picker.sourceType = UIImagePickerController.SourceType.camera

            self.present(picker, animated: true, completion: nil)
        }
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        defer {
            picker.dismiss(animated: true, completion: nil)
        }
        guard let image = info[.originalImage] as? UIImage else { return }
        // The image taken by the camera is now in the variable image.
        // Resize it to the size expected by CycleGAN (uses the CoreMLHelpers extension).
        let resized = image.resized(to: CGSize(width: 256, height: 256))
        // Vision needs a CGImage, so convert it
        guard let cgImage = resized.cgImage else { return }
        //Pass to converter
        converter.convert(image: cgImage) {
            images in
            // On success, the converter returns the results in the images array
            guard let image = images.first else { return }
            DispatchQueue.main.async {
                //Paste into UIImageView
                self.imageView.image = image
            }
        }
    }
}

class CycleGanConverter {

    func convert(image: CGImage, completion: @escaping (_ images: [UIImage]) -> Void) {
        // Set up the Vision framework request handler
        let handler = VNImageRequestHandler(cgImage: image, options: [:])

        let model = try! VNCoreMLModel(for: cyclegan().model)
        let request = VNCoreMLRequest(model: model, completionHandler: {
            [weak self] (request: VNRequest, error: Error?) in
            // The inference result is returned in the request variable, so convert it with the parseImage method.
            guard let images = self?.parseImage(request: request) else { return }
            completion(images)
        })

        try? handler.perform([request])
    }
    
    func parseImage(request: VNRequest) -> [UIImage] {
        guard let results = request.results as? [VNCoreMLFeatureValueObservation] else { return [] }
        // The converted image is in each result's featureValue.multiArrayValue, so convert it to a UIImage
        return results.compactMap {
            $0.featureValue.multiArrayValue?
                .image(min: -1, max: 1, channel: nil, axes: (3,2,1))? // CoreMLHelpers method that converts to a UIImage
                .rotated(by: 90) // Rotate for display
                .withHorizontallyFlippedOrientation() // Mirror for display
        }
    }
}

Result

This time, due to time constraints, I trained for far fewer epochs than the tutorial specifies, so the conversion results are only so-so.

If you train according to the tutorial, you should get better conversion results.

Finally

I regularly publish articles about iOS development on note, so please follow me. https://note.com/tokyoyoshida

I also post on Twitter. https://twitter.com/jugemjugemjugem
