
I'm building an iPhone app in Swift 4. I have two test filters. Both work fine on the camera's live output, but when I create an array of images with the more complex one, my memory balloons to catastrophic proportions and crashes the app.

I'm calling this one below in a loop which overflows my memory:

func rotateHue2(with ciImage: CIImage,
                rotatedByHue deltaHueRadians: CGFloat,
                orientation: UIImageOrientation?,
                screenWidth: CGFloat,
                screenHeight: CGFloat) -> UIImage {


    let sourceCore = ciImage


    let transBG = UIImage(color: .clear, size: CGSize(width: screenWidth, height: screenHeight))

    let transBGCI = CIImage(cgImage: (transBG?.cgImage)!)



    // Part 1

    let gradientPoint0Pos: [CGFloat] = [0, 0]
    let inputPoint0Vector = CIVector(values: gradientPoint0Pos, count: gradientPoint0Pos.count)

    var gradientPoint1Pos: [CGFloat]
    if(orientation == nil){

        gradientPoint1Pos = [0, screenWidth*2]

    }else{

        gradientPoint1Pos = [screenHeight*2, 0]

    }

    let inputPoint1Vector = CIVector(values: gradientPoint1Pos, count: gradientPoint1Pos.count)


    let gradientFilter = CIFilter(name: "CISmoothLinearGradient")
    gradientFilter?.setDefaults()
    gradientFilter?.setValue(inputPoint0Vector, forKey: "inputPoint0")
    gradientFilter?.setValue(inputPoint1Vector, forKey: "inputPoint1")


    gradientFilter?.setValue(CIColor.clear, forKey:"inputColor0")
    gradientFilter?.setValue(CIColor.black, forKey:"inputColor1")


    let gradient = gradientFilter?.outputImage?
        .cropped(to: sourceCore.extent)



    let hue1 = sourceCore
        .applyingFilter("CIHueAdjust", parameters: [kCIInputImageKey: sourceCore,
                                                    kCIInputAngleKey: deltaHueRadians])
        .cropped(to: sourceCore.extent)


    let alphaMaskBlend1 = CIFilter(name: "CIBlendWithAlphaMask",
                                   withInputParameters: [kCIInputImageKey: hue1,
                                                         kCIInputBackgroundImageKey: transBGCI,
                                                         kCIInputMaskImageKey:gradient!])?.outputImage?
        .cropped(to: sourceCore.extent)


    // Part 2


    let hue2 = sourceCore
        .applyingFilter("CIHueAdjust", parameters: [kCIInputImageKey: sourceCore,
                                                    kCIInputAngleKey: deltaHueRadians+1.5707])
        .cropped(to: sourceCore.extent)


    let blendedMasks = hue2
        .applyingFilter(compositeOperationFilters[compositeOperationFiltersIndex], parameters: [kCIInputImageKey: alphaMaskBlend1!,
                                                                                                kCIInputBackgroundImageKey: hue2])
        .cropped(to: sourceCore.extent)




    // Convert the filter output back into a UIImage.
    let context = CIContext(options: nil)
    let resultRef = context.createCGImage(blendedMasks, from: blendedMasks.extent)

    var result:UIImage? = nil

    if(orientation != nil){

        result = UIImage(cgImage: resultRef!, scale: 1.0, orientation: orientation!)

    }else{

        result = UIImage(cgImage: resultRef!)

    }
    return result!
}

Each image is resized down to 1280 or 720 pixels wide, depending on the phone's orientation. Why does this give me a memory warning when my other image filter works fine?

Just for kicks, here's the other one that doesn't make it crash:

func rotateHue(with ciImage: CIImage,
               rotatedByHue deltaHueRadians: CGFloat,
               orientation: UIImageOrientation?,
               screenWidth: CGFloat,
               screenHeight: CGFloat) -> UIImage {


    // Create a Core Image version of the image.
    let sourceCore = ciImage
    // Apply a CIHueAdjust filter
    let hueAdjust = CIFilter(name: "CIHueAdjust")
    hueAdjust?.setDefaults()
    hueAdjust?.setValue(sourceCore, forKey: "inputImage")
    hueAdjust?.setValue(deltaHueRadians, forKey: "inputAngle")

    let resultCore = CIFilter(name: "CIHueAdjust",
                              withInputParameters: [kCIInputImageKey: sourceCore,
                                                    kCIInputAngleKey: deltaHueRadians])?.outputImage?
        .cropped(to: sourceCore.extent)


    // Convert the filter output back into a UIImage.
    let context = CIContext(options: nil)
    let resultRef = context.createCGImage(resultCore!, from: (resultCore?.extent)!)

    var result:UIImage? = nil

    if(orientation != nil){

        result = UIImage(cgImage: resultRef!, scale: 1.0, orientation: orientation!)

    }else{

        result = UIImage(cgImage: resultRef!)

    }
    return result!
}

1 Answer

The first thing you should do is move your CIContext out of the function and make it as long-lived ("as global") as possible. Creating a CIContext is expensive and a major use of memory.

Less of an issue: why are you cropping five times per image? This probably isn't the cause, but it "feels" wrong to me. A CIImage isn't an image - it's much closer to a "recipe".

Chain things more tightly - let the input of each filter be the output of the previous one. Crop once, when you're finished. And above all, create as few CIContexts as possible.
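A minimal sketch of that advice (the filter names come from the question; `sharedContext`, the simplified parameter list, and the function name are illustrative assumptions, not the asker's actual code):

```swift
import CoreImage
import UIKit

// Created once, outside the per-image function, and reused for every frame.
let sharedContext = CIContext(options: nil)

func rotateHueChained(ciImage: CIImage,
                      deltaHueRadians: CGFloat,
                      background: CIImage,
                      mask: CIImage) -> UIImage? {
    // Each applyingFilter call feeds the next one directly; no intermediate
    // crops and no intermediate CIFilter objects.
    let chained = ciImage
        .applyingFilter("CIHueAdjust",
                        parameters: [kCIInputAngleKey: deltaHueRadians])
        .applyingFilter("CIBlendWithAlphaMask",
                        parameters: [kCIInputBackgroundImageKey: background,
                                     kCIInputMaskImageKey: mask])
        .cropped(to: ciImage.extent)   // crop once, at the end

    // Render with the shared context instead of creating a new one per call.
    guard let cgImage = sharedContext.createCGImage(chained, from: chained.extent)
        else { return nil }
    return UIImage(cgImage: cgImage)
}
```

Nothing is rendered until `createCGImage` runs, so the whole chain stays a "recipe" until the single render at the end.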


6 Comments

That definitely helped, but I still see the memory spike up to 1.5 GB! Why?!
It could be the chaining, but I'm not sure how to chain them any more tightly. Maybe it's because I'm also remaking transBGCI, and that somehow builds up in the loop?
High level, what exactly are you trying to do? Also, at a slightly more technical level, what is rotateHue2(with:) doing? I can't suggest "tighter" code without knowing this. For instance, why does the function create a UIImage, transBG, turn it into a CIImage, and then do everything else? At the very least - after moving the context out of the function - break it down into component functions and take timings. Depending on what you are trying to do, you may find you can pass things into the function that uses the chained CIFilters.
Last note. By tightly chaining I just mean to set each kCIInputImageKey to be the previous filter's outputImage directly. The cropped(to:) calls may well be the issue - there are five in the first function and only one in the second. Also, try making the CIFilter calls more global: if you are going to use four "named" filters repeatedly, create them outside the function so they aren't instantiated every time you call it. (You never did say how large this image array is.)
Good suggestions, and I'll try them Monday. The image array is 31 images. The camera's output is turned into two images with different color alterations; these change every time the camera's output is captured or the user takes a photo, and the alteration happens in the for loop. I'm making the transparent image the background of a mask so that one image can fade to 0 alpha; it's layered on top of another image with a different color, so the result appears to have a gradient.
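For the "make the CIFilter calls more global" suggestion above, a sketch might look like the following (the filter names are from the question; the property and function names are illustrative, and note that CIFilter is not thread-safe, so these should stay confined to the queue that runs the loop):

```swift
import CoreImage

// Instantiated once, not rebuilt on every pass through the loop.
let hueAdjustFilter = CIFilter(name: "CIHueAdjust")!
let gradientFilter = CIFilter(name: "CISmoothLinearGradient")!
let alphaMaskFilter = CIFilter(name: "CIBlendWithAlphaMask")!

// Reuse the pre-built filter; only the per-frame values change.
func hueAdjusted(_ image: CIImage, by angle: CGFloat) -> CIImage? {
    hueAdjustFilter.setValue(image, forKey: kCIInputImageKey)
    hueAdjustFilter.setValue(angle, forKey: kCIInputAngleKey)
    return hueAdjustFilter.outputImage
}
```

The same hoisting applies to the transparent background: since transBGCI never changes between frames, it can be built once outside the loop too.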
