
I'm working on a computer vision system and this is giving me a serious headache: I'm having trouble re-implementing an old gradient operator more efficiently. I'm working with NumPy and OpenCV.

This is what I had:

def gradientX(img):
    rows, cols = img.shape
    out = np.zeros((rows,cols))
    for y in range(rows-1):
        Mr = img[y]
        Or = out[y]
        Or[0] = Mr[1] - Mr[0]
        for x in xrange(1, cols - 2):
            Or[x] = (Mr[x+1] - Mr[x-1])/2.0
        Or[cols-1] = Mr[cols-1] - Mr[cols-2]
    return out

def gradient(img):
    return [gradientX(img), (gradientX(img.T).T)]

I've tried using numpy's gradient operator, but the result is not the same. For this input

array([[  3,   4,   5],
   [255,   0,  12],
   [ 25,  15, 200]])

Using my gradient returns

[array([[   1.,    0.,    1.],
   [-255.,    0.,   12.],
   [   0.,    0.,    0.]]), 
array([[ 252.,   -4.,    0.],
   [   0.,    0.,    0.],
   [-230.,   15.,    0.]])]

While using numpy's np.gradient returns

[array([[ 252. ,   -4. ,    7. ],
   [  11. ,    5.5,   97.5],
   [-230. ,   15. ,  188. ]]), 
array([[   1. ,    1. ,    1. ],
   [-255. , -121.5,   12. ],
   [ -10. ,   87.5,  185. ]])]

There are clearly some similarities between the results, but they're definitely not the same. So either I'm missing something here or the two operators aren't meant to produce the same results. In that case, I'd like to know how to re-implement my gradientX function so it doesn't use that awful-looking double loop to traverse the 2-D array, relying mostly on numpy's vectorized operations.

3 Comments
  • @Peque OP is clearly trying to use numpy, but is saying that the default numpy gradient doesn't give the same result as his pure python function. Therefore, he wants to know how to rewrite his gradient function using numpy. Commented Jul 9, 2015 at 16:06
  • Can you explain what gradientX computes? Your gradient function returns the same thing as numpy.gradient(img)[::-1] with certain rows/columns set to zero. Commented Jul 9, 2015 at 16:33
  • It's a finite difference approximation to the 1st derivative in the X axis. Same as what you can achieve with the gradient operator in Matlab. Commented Jul 9, 2015 at 17:38

1 Answer


I've been working a bit more on this, only to find my mistake: I was skipping the last row and the last column when iterating. As @wflynny noted, the result was identical except for a row and a column of zeros.

Given that, the result could not be the same as np.gradient's; with that fixed, the results are identical, so there's no need to find any other numpy implementation for this.

Answering my own question, a good numpy implementation of my gradient algorithm would be

import numpy as np

def gradient(img):
    # np.gradient returns [d/d(rows), d/d(cols)]; reverse to get the [X, Y] ordering used above
    return np.gradient(img)[::-1]
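
A quick sanity check, using the example array from the question (the layout of this snippet is mine): the first component returned by gradient should match the X-derivative that np.gradient reported as its second array above.

img = np.array([[  3,   4,   5],
                [255,   0,  12],
                [ 25,  15, 200]], dtype=float)

gx, gy = gradient(img)  # gx = derivative along columns (X), gy = along rows (Y)

expected_gx = np.array([[   1. ,    1. ,    1. ],
                        [-255. , -121.5,   12. ],
                        [ -10. ,   87.5,  185. ]])
assert np.allclose(gx, expected_gx)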

I'm also posting the fixed loop-based code, just because it shows how numpy's gradient operator works:

def computeMatXGradient(img):
    rows, cols = img.shape
    out = np.zeros((rows, cols))
    for y in range(rows):  # iterate over every row (no skipping this time)
        Mr = img[y]
        Or = out[y]
        # one-sided difference at the left border
        Or[0] = float(Mr[1]) - float(Mr[0])
        # central differences for the interior columns
        for x in xrange(1, cols - 1):
            Or[x] = (float(Mr[x+1]) - float(Mr[x-1])) / 2.0
        # one-sided difference at the right border
        Or[cols-1] = float(Mr[cols-1]) - float(Mr[cols-2])
    return out
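
And if you want to avoid both the double loop and the call to np.gradient, the same computation vectorizes nicely with slicing. This is just a sketch (the function name is mine), but it should return the same array as computeMatXGradient:

def gradientX_vectorized(img):
    # work in float to avoid integer truncation/overflow
    img = np.asarray(img, dtype=float)
    out = np.empty_like(img)
    # central differences for the interior columns
    out[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    # one-sided differences at the left and right borders
    out[:, 0] = img[:, 1] - img[:, 0]
    out[:, -1] = img[:, -1] - img[:, -2]
    return out

# e.g. np.allclose(gradientX_vectorized(img), computeMatXGradient(img)) should hold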

1 Comment

Did you get the same result from gradientX and computeMatXGradient?
