Image Classification In iOS Apps Using Turi Create And CoreML

Introduction

These days the hype around machine learning is real. Everyone wants a piece of it in their product, be it a spam filter or just a cookie machine. The demand is undoubtedly high. But it's equally true that not everybody can go in all guns blazing and build intelligent systems; creating, training, and maturing a model requires specialized knowledge. You can follow the tutorials online, but most of them only skim the top of the bun and never warn you that the patty underneath is drier than the Sahara Desert. So the problem stands: how do people who aren't experts at building intelligent systems with AI and machine learning techniques make their applications and products intelligent?

Apple recently acquired Turi and released the Turi Create module for Python. Turi Create is a blessing for people who want to make their products smart without thinking too much about the delicacies of AI. In this article, I'm going to show you how to get started with Turi Create by developing a simple image classifier application for iOS.

What we need

  • Python 2.7+ (Turi Create doesn't support Python 3 yet)
  • Xcode 9.2 (9.1 is also fine)
  • macOS 10.12+
  • You can also use Turi Create on Windows and Linux, but to build the iOS application itself you'll need Xcode, unless you're using Xamarin.

So, let’s get started.

What kind of images to classify?

First, let's plan what kind of image classifier we're going to develop. The popular examples around the web will tell you to build a Cats vs. Dogs classifier. Let's make it a bit more interesting: our classifier will classify flowers.

Project Structure

Let’s create our project in the following directory structure.

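Roughly, the layout looks like this (the script and folder names are my own convention; the only part the later commands rely on is the training/images folder):

    flower_classifier/
    ├── training/
    │   ├── images/            # the flower photos will go here
    │   ├── prepare_data.py    # loads and labels the images into an SFrame
    │   ├── train_model.py     # trains and evaluates the classifier
    │   └── export_coreml.py   # converts the trained model to CoreML
    └── ios/
        └── FlowerClassifier/  # the Xcode project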

Getting the data

They say a machine learning model is only as good as the data it's trained with. So how are we going to get the data? We'll use the flower image dataset from the TensorFlow examples repository, available at

http://download.tensorflow.org/example_images/flower_photos.tgz

If you're on a Linux distro or macOS, you can use curl to download the archive and unpack it inside the training/images folder.

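For example (a sketch that assumes the directory layout above; the archive unpacks into a flower_photos folder containing one subfolder per category):

    # Download the dataset archive into training/images
    cd training/images
    curl -L -O http://download.tensorflow.org/example_images/flower_photos.tgz

    # Unpack it; you'll get flower_photos/ with one folder per flower category
    tar -xzf flower_photos.tgz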
Give it some time to download. The dataset is 218 MB.

The Dataset

The dataset contains the following kinds of images, which will serve as our categories: daisy, dandelion, roses, sunflowers, and tulips.


Now, on to training.

Preparing the training env

Create a virtualenv for Python, then install Turi Create:

    # Create a Python virtual environment
    virtualenv turi_create_env

    # Activate the virtual environment
    source turi_create_env/bin/activate

    # Install Turi Create inside the environment
    pip install -U turicreate

Alternatively, you can create an Anaconda environment and install Turi Create there using pip.
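A minimal sketch of the Anaconda route (the environment name is arbitrary; Python 2.7 is pinned because Turi Create doesn't support Python 3 yet):

    # Create and activate a conda environment with Python 2.7
    conda create -n turi_create_env python=2.7
    source activate turi_create_env

    # Install Turi Create inside the environment
    pip install -U turicreate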

Training Script

First we need to load all the image data into an SFrame, Turi Create's dataframe-like structure, so we can use it with our model later. The following script loads the images, labels each one from its folder name, and saves the result:

    import turicreate as tc

    # Load all images (recursively) and keep their file paths
    image_data = tc.image_analysis.load_images('images', with_path=True)

    labels = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']

    # Derive each image's label from the folder name in its path
    def get_label(path, labels=labels):
        for label in labels:
            if label in path:
                return label

    image_data['label'] = image_data['path'].apply(get_label)

    # Save the labelled data as an SFrame
    image_data.save('flowers.sframe')

    # Explore the data in the Turi Create visualizer
    image_data.explore()

Let's run the Python script. This can take some time depending on the processing power of your computer. When it finishes we have our SFrame saved to disk, and image_data.explore() opens the Turi Create visualizer so we can browse the labelled images.


Next we create our model: the one that will tell us what kind of flower it is when we show our app an image.

    import turicreate as tc

    # Load the data
    data = tc.SFrame('flowers.sframe')

    # Make a train-test split
    train_data, test_data = data.random_split(0.8)

    # Create a model
    model = tc.image_classifier.create(train_data, target='label', max_iterations=1000)

    # Save predictions to an SFrame (class and corresponding class-probabilities)
    predictions = model.classify(test_data)

    # Evaluate the model and save the results into a dictionary
    results = model.evaluate(test_data)
    print "Accuracy         : %s" % results['accuracy']
    print "Confusion Matrix : \n%s" % results['confusion_matrix']

    # Save the model for later usage in Turi Create
    model.save('Flowers.model')

Let's run this script and let our model train. At the end it prints the accuracy of the model's predictions on the held-out test data. I got an accuracy of around 89%, which isn't bad.

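If you want to sanity-check the saved model before moving to iOS, you can load it back and classify a single photo. A quick sketch (the image path is hypothetical; point it at any file from the dataset):

    import turicreate as tc

    # Load the trained model back from disk
    model = tc.load_model('Flowers.model')

    # Wrap a single image in an SFrame with the same 'image' column used for training
    sample = tc.SFrame({'image': [tc.Image('images/flower_photos/roses/example.jpg')]})

    # Prints an SArray containing the predicted class label
    print model.predict(sample)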

Converting the trained model to a CoreML model

We've trained our model, but how do we use it in the mobile application? To use the trained model in an iOS application we need to convert it to a CoreML model with the following Python script.

    import turicreate as tc

    # Load the trained Turi Create model and export it in CoreML format
    model = tc.load_model('Flowers.model')
    model.export_coreml('Flowers_CoreML.mlmodel')

Now we’re ready to add it to our iOS application.

iOS Application

Let's open up Xcode and create a Single View App project. Then drag and drop the CoreML model into the project navigator; Xcode will create the references automatically. To keep things organized, you can create a group named Models and drop the CoreML model there.

The model should now show up in the project navigator under the Models group.

Now let's create an interface with an image view, a bar button item for picking a photo, and a label for the classification result. Alternatively, you can replace both storyboard files with the ones in the zip folder accompanying this article.

Now add a new file to the project named CGImagePropertyOrientation+UIImageOrientation.swift and add the following code inside.

    import UIKit
    import ImageIO

    extension CGImagePropertyOrientation {
        /**
         Converts a `UIImageOrientation` to a corresponding
         `CGImagePropertyOrientation`. The cases for each
         orientation are represented by different raw values.

         - Tag: ConvertOrientation
         */
        init(_ orientation: UIImageOrientation) {
            switch orientation {
            case .up: self = .up
            case .upMirrored: self = .upMirrored
            case .down: self = .down
            case .downMirrored: self = .downMirrored
            case .left: self = .left
            case .leftMirrored: self = .leftMirrored
            case .right: self = .right
            case .rightMirrored: self = .rightMirrored
            }
        }
    }

Now edit ViewController.swift and replace its contents with the following code. Since the class is renamed to ImageClassificationViewController, make sure the view controller in the storyboard points to that class in the Identity inspector.

    import UIKit
    import CoreML
    import Vision
    import ImageIO

    class ImageClassificationViewController: UIViewController {
        // MARK: - IBOutlets

        @IBOutlet weak var imageView: UIImageView!
        @IBOutlet weak var cameraButton: UIBarButtonItem!
        @IBOutlet weak var classificationLabel: UILabel!

        // MARK: - Image Classification

        /// - Tag: MLModelSetup
        lazy var classificationRequest: VNCoreMLRequest = {
            do {
                /*
                 Use the `Flowers_CoreML` Swift class that Core ML generates from the model.
                 To use a different Core ML classifier model, add it to the project
                 and replace `Flowers_CoreML` with that model's generated Swift class.
                 */
                let model = try VNCoreMLModel(for: Flowers_CoreML().model)

                let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                    self?.processClassifications(for: request, error: error)
                })
                request.imageCropAndScaleOption = .centerCrop
                return request
            } catch {
                fatalError("Failed to load Vision ML model: \(error)")
            }
        }()

        /// - Tag: PerformRequests
        func updateClassifications(for image: UIImage) {
            classificationLabel.text = "Classifying..."

            let orientation = CGImagePropertyOrientation(image.imageOrientation)
            guard let ciImage = CIImage(image: image) else { fatalError("Unable to create \(CIImage.self) from \(image).") }

            DispatchQueue.global(qos: .userInitiated).async {
                let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
                do {
                    try handler.perform([self.classificationRequest])
                } catch {
                    /*
                     This handler catches general image processing errors. The `classificationRequest`'s
                     completion handler `processClassifications(_:error:)` catches errors specific
                     to processing that request.
                     */
                    print("Failed to perform classification.\n\(error.localizedDescription)")
                }
            }
        }

        /// Updates the UI with the results of the classification.
        /// - Tag: ProcessClassifications
        func processClassifications(for request: VNRequest, error: Error?) {
            DispatchQueue.main.async {
                guard let results = request.results else {
                    self.classificationLabel.text = "Unable to classify image.\n\(error!.localizedDescription)"
                    return
                }
                // The `results` will always be `VNClassificationObservation`s, as specified by the Core ML model in this project.
                let classifications = results as! [VNClassificationObservation]

                if classifications.isEmpty {
                    self.classificationLabel.text = "Nothing recognized."
                } else {
                    // Display top classifications ranked by confidence in the UI.
                    let topClassifications = classifications.prefix(2)
                    let descriptions = topClassifications.map { classification in
                        // Formats the classification for display; e.g. "(0.97) roses".
                        return String(format: "  (%.2f) %@", classification.confidence, classification.identifier)
                    }
                    self.classificationLabel.text = "Classification:\n" + descriptions.joined(separator: "\n")
                }
            }
        }

        // MARK: - Photo Actions

        @IBAction func takePicture() {
            // Show options for the source picker only if the camera is available.
            guard UIImagePickerController.isSourceTypeAvailable(.camera) else {
                presentPhotoPicker(sourceType: .photoLibrary)
                return
            }

            let photoSourcePicker = UIAlertController()
            let takePhoto = UIAlertAction(title: "Take Photo", style: .default) { [unowned self] _ in
                self.presentPhotoPicker(sourceType: .camera)
            }
            let choosePhoto = UIAlertAction(title: "Choose Photo", style: .default) { [unowned self] _ in
                self.presentPhotoPicker(sourceType: .photoLibrary)
            }

            photoSourcePicker.addAction(takePhoto)
            photoSourcePicker.addAction(choosePhoto)
            photoSourcePicker.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler: nil))

            present(photoSourcePicker, animated: true)
        }

        func presentPhotoPicker(sourceType: UIImagePickerControllerSourceType) {
            let picker = UIImagePickerController()
            picker.delegate = self
            picker.sourceType = sourceType
            present(picker, animated: true)
        }
    }

    extension ImageClassificationViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        // MARK: - Handling Image Picker Selection

        func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String: Any]) {
            picker.dismiss(animated: true)

            // We always expect `imagePickerController(:didFinishPickingMediaWithInfo:)` to supply the original image.
            let image = info[UIImagePickerControllerOriginalImage] as! UIImage
            imageView.image = image
            updateClassifications(for: image)
        }
    }

We're using the Vision and CoreML frameworks from the iOS SDK to work with the trained model. Remember, the model is the key component that does all the work. The app takes a picture, either from the camera or the photo library, and wraps it in a Vision request. The Vision framework handles scaling and cropping the image, passes it to the CoreML model via the CoreML framework, and the model returns the classification result, which we display in the label.

Ready to run

The app is now ready to run. You can either run it in the simulator or use an actual device.

We'll be testing in the simulator. To get images, go to the home screen on the simulator, open Safari, save a rose image and a sunflower image from Google Image Search to the photo library, and put them to the test. The screenshots below show the results from the simulator.

[Simulator screenshots: classification results for the rose and the sunflower images]

Now, if you want, you can load the app onto an actual device and test it on real-life flowers. The flowers should belong to the categories the model was trained on; otherwise you'll get meaningless results. That's how intelligent systems work: they only know what you trained them for.

Application in a nutshell
Image from camera or photo library → Vision request → CoreML model → classification result shown in the UI.
Conclusion

That was just one example of integrating some machine learning intelligence into your iOS apps. Image classification isn't the only smart thing you can add, though; there are many uses of machine learning in mobile applications, and the list grows day by day.

References

  1. https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml
  2. https://github.com/apple/turicreate

