
I have trained a model to differentiate between malignant and benign skin lesions, to potentially detect whether a patient has skin cancer, and have converted my Keras model to Core ML. Now I am trying to use the model in an iOS app written in Swift (through Xcode), which I have no experience with at all (still learning through trial and error).

Currently I am trying to get the model working in a simple app that just takes an image from the phone's camera and outputs a predicted label, but I am quite stuck on getting the camera to actually work for that.

import UIKit
import CoreML
import Vision
import Social

@UIApplicationMain
class ViewControl: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate, UIApplicationDelegate {

    @IBOutlet weak var imageView: UIImageView!
    var classificationResults : [VNClassificationObservation] = []

    let imagePicker = UIImagePickerController()

    override func viewDidLoad() {
        super.viewDidLoad()

        imagePicker.delegate = self

    }

    func detect(image: CIImage) {

        // Load the ML model through its generated class
        guard let model = try? VNCoreMLModel(for: weights_skin_cancer().model) else {
            fatalError("can't load ML model")
        }

        let request = VNCoreMLRequest(model: model) { request, error in
            guard let results = request.results as? [VNClassificationObservation],
                  let topResult = results.first else {
                fatalError("unexpected result type from VNCoreMLRequest")
            }

            if topResult.identifier.contains("malignant") {
                DispatchQueue.main.async {
                    self.navigationItem.title = "mal!"
                    self.navigationController?.navigationBar.barTintColor = UIColor.green
                    self.navigationController?.navigationBar.isTranslucent = false
                }
            } else {
                DispatchQueue.main.async {
                    self.navigationItem.title = "benign!"
                    self.navigationController?.navigationBar.barTintColor = UIColor.red
                    self.navigationController?.navigationBar.isTranslucent = false
                }
            }
        }

        let handler = VNImageRequestHandler(ciImage: image)

        do { try handler.perform([request]) }
        catch { print(error) }



    }


    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {


        if let image = info[UIImagePickerController.InfoKey.originalImage] as? UIImage {

            imageView.image = image

            imagePicker.dismiss(animated: true, completion: nil)


            guard let ciImage = CIImage(image: image) else {
                fatalError("couldn't convert uiimage to CIImage")
            }

            detect(image: ciImage)

        }
    }


    @IBAction func cameraTapped(_ sender: Any) {

        imagePicker.sourceType = .camera
        imagePicker.allowsEditing = false

        present(imagePicker, animated: true, completion: nil)
    }

}

Here's also the code used to convert my model to Core ML, for reference:

import coremltools

output_labels = ['benign', 'malignant']
scale = 1/255.
coreml_model = coremltools.converters.keras.convert('/Users/Grampun/Desktop/ISIC-Archive-Downloader-master/trained_models/lr_0.00006-400_DS-20_epochs/weights.best.from_scratch.6.hdf5',
                                                    input_names='image',
                                                    image_input_names='image',
                                                    output_names='output',
                                                    class_labels=output_labels,
                                                    image_scale=scale)

coreml_model.author = 'Jack Bugeja'
coreml_model.short_description = 'Model used to identify between benign and malignant skin lesions'

coreml_model.input_description['image'] = 'Dermascopic image of skin lesion to evaluate'
coreml_model.output_description['output'] = 'Malignant/Benign'

coreml_model.save(
    '/Users/Grampun/Desktop/ISIC-Archive-Downloader-master/trained_models/model_for_ios/lr_0.00006-400_DS-20_epochs/weights_skin_cancer.mlmodel')
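
For reference, once the .mlmodel has been added to Xcode, the input size the converted model expects (carried over from the Keras input shape) can be checked at runtime. A minimal sketch, assuming the generated class is named weights_skin_cancer as in the Swift code above:

import CoreML

// Print the width/height the model's 'image' input expects
let description = weights_skin_cancer().model.modelDescription
if let constraint = description.inputDescriptionsByName["image"]?.imageConstraint {
    print("Model expects \(constraint.pixelsWide) x \(constraint.pixelsHigh) input images")
}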

Any help in general would be highly appreciated. Thanks!

  • What is the actual question you're asking? Commented Mar 19, 2020 at 19:25
  • @MatthijsHollemans Sorry if I wasn't clear; I was asking how I could get the camera working, i.e. what in my code is preventing the camera from activating once the button is pressed. Commented Mar 20, 2020 at 10:52

1 Answer
  1. Open the camera:

    @IBAction func cameraTapped(_ sender: Any) {
        let controller = UIImagePickerController()
        controller.sourceType = .camera
        controller.mediaTypes = ["public.image"]
        controller.allowsEditing = false
        controller.delegate = self
        present(controller, animated: true)
    }
    
  2. Add the YourModel.mlmodel to your project.

  3. In didFinishPickingMediaWithInfo add this code:

    if let image = info[.originalImage] as? UIImage {
        // Camera captures come back under .originalImage; .imageURL is only set for photo-library picks
        self.getPrediction(image)
    } else if let imageURL = info[.imageURL] as? URL,
              let image = UIImage(contentsOfFile: imageURL.path) {
        // UIImage(contentsOfFile:) expects a file-system path, not a URL string
        self.getPrediction(image)
    }
    picker.dismiss(animated: true)
    
  4. Add this to get a prediction:

    func getPrediction(_ image: UIImage) {
        let model = YourModel()
    
        guard let pixelBuffer = buffer(from: image) else { return }
        guard let prediction = try? model.prediction(image: pixelBuffer) else { return }
    
        print(prediction.classLabel) // Most likely image category as string value
    }
    
  5. Use this helper function to make the CVPixelBuffer from your UIImage that you need in getPrediction():

    func buffer(from image: UIImage) -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer : CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
        guard (status == kCVReturnSuccess) else {
            return nil
        }
    
        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
    
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pixelData, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    
        context?.translateBy(x: 0, y: image.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)
    
        UIGraphicsPushContext(context!)
        image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    
        return pixelBuffer
    }
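
One thing to watch out for with the helper above: the buffer is created at the image's native size, while a converted Keras image model expects the fixed input size it was trained with. A minimal sketch of resizing first (the 224x224 target is only an assumption; use whatever size your model actually reports):

    func resized(_ image: UIImage, to size: CGSize) -> UIImage {
        // Redraw the UIImage at the target size so the pixel buffer built from it
        // matches the model's expected input dimensions
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            image.draw(in: CGRect(origin: .zero, size: size))
        }
    }

    // In getPrediction, assuming a 224x224 input:
    // let input = resized(image, to: CGSize(width: 224, height: 224))
    // guard let pixelBuffer = buffer(from: input) else { return }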
    

6 Comments

Thanks for your reply. Unfortunately, I am unable to open the camera at this point.
@Grampun I edited my answer, see if that solves your problem.
You may also need to add permissions to your app's Info.plist, otherwise it will not be able to access the camera. There should be an error message about this in Xcode's debug output pane.
@MatthijsHollemans you're right. You need to add Privacy - Camera Usage Description to Info.plist.
@MatthijsHollemans I added the permissions to the .plist, however the camera is still not functioning. I will keep working on it to hopefully get it working...
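
Putting the comments above together, a defensive version of the camera action (a sketch, not part of the original answer) checks that the camera source is actually available before presenting, on top of the Info.plist entry already mentioned:

    @IBAction func cameraTapped(_ sender: Any) {
        // The camera is unavailable in the Simulator and when access is denied;
        // Info.plist also needs NSCameraUsageDescription ("Privacy - Camera Usage Description").
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else {
            print("Camera source is not available on this device")
            return
        }
        let controller = UIImagePickerController()
        controller.sourceType = .camera
        controller.allowsEditing = false
        controller.delegate = self
        present(controller, animated: true)
    }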