We could take an approach based on 2D convolution.
The basic steps would be:
- As a pre-processing step, replace NaNs with 0s, since we need to perform windowed summation on the input data.
- Get the windowed summations with SciPy's convolve2d, both for the data values and for the mask of NaNs. Boundary elements are treated as zeros.
- Subtract the windowed count of NaNs from the window size to get the count of valid elements responsible for each summation.
- For the boundary elements, progressively fewer elements contribute to the summations, as illustrated in the sketch right after this list.
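To make the counting idea concrete, here is a minimal sketch (on made-up values, with a hypothetical one-row input) that applies these steps along one axis -
import numpy as np
from scipy.signal import convolve2d as conv2

x = np.array([[1., np.nan, 3., 4., np.nan]])  # one row, so convolve2d applies
W = 3                                         # window size (odd)
hW = (W-1)//2

nan_mask = np.isnan(x)
x0 = np.where(nan_mask, 0, x)                 # NaNs -> 0 before summing

# Windowed sums of the values and of the NaN mask, boundaries padded with zeros
value_sums = conv2(x0, np.ones((1,W)), 'same', boundary='fill')
nan_sums = conv2(nan_mask, np.ones((1,W)), 'same', boundary='fill')

# In-bounds window sizes: smaller at the two boundaries
N = x.shape[-1]
b_sizes = hW+1+np.arange(hW)
count = np.hstack(( b_sizes, W*np.ones(N-2*hW), b_sizes[::-1] ))

print(value_sums/(count - nan_sums))  # NaN-aware moving mean: [1, 2, 3.5, 3.5, 4]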
Now, these windowed summations could also be obtained with SciPy's 1D uniform filter, which is comparatively more efficient. Another benefit is that we can specify the axis along which the summations/averaging are to be performed, as demonstrated below.
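As a small check of that equivalence (assuming an odd window size), the 1D uniform filter with zero padding, scaled by W, reproduces the same windowed sums as the 2D convolution, while also letting us pick the axis -
import numpy as np
from scipy.signal import convolve2d as conv2
from scipy.ndimage import uniform_filter1d as uniff

a = np.random.rand(4,7)
W = 3
sums_conv = conv2(a, np.ones((1,W)), 'same', boundary='fill')
sums_unif = uniff(a, size=W, axis=-1, mode='constant')*W  # cval defaults to 0
print(np.allclose(sums_conv, sums_unif))                  # True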
With a mix of SciPy's 2D convolution and 1D uniform filter, we have a few approaches, as listed next.
Import NumPy and the relevant SciPy functions -
import numpy as np
from scipy.signal import convolve2d as conv2
from scipy.ndimage import uniform_filter1d as uniff  # scipy.ndimage.filters is deprecated in newer SciPy
Approach #1 :
def nanmoving_mean_numpy(data, W): # data: input array, W: window size (odd)
    N = data.shape[-1]
    hW = (W-1)//2
    nan_mask = np.isnan(data)
    data1 = np.where(nan_mask, 0, data)

    # Windowed sums of the values and of the NaN mask (zero-padded boundaries)
    value_sums = conv2(data1.reshape(-1,N), np.ones((1,W)), 'same', boundary='fill')
    nan_sums = conv2(nan_mask.reshape(-1,N), np.ones((1,W)), 'same', boundary='fill')
    value_sums.shape = data.shape
    nan_sums.shape = data.shape

    b_sizes = hW+1+np.arange(hW) # Boundary sizes
    count = np.hstack(( b_sizes, W*np.ones(N-2*hW), b_sizes[::-1] ))
    return value_sums/(count - nan_sums)
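A quick way to verify this is to compare it against a direct np.nanmean loop on a small hypothetical input (kept free of all-NaN windows, since this version does not special-case them) -
np.random.seed(0)
data = np.random.rand(2,3,9)
data[0,1,4] = np.nan           # sprinkle in a NaN
W = 5
hW = (W-1)//2

out = nanmoving_mean_numpy(data, W)

# Reference: nanmean over the in-bounds part of each window along the last axis
ref = np.empty_like(data)
N = data.shape[-1]
for i in range(N):
    ref[...,i] = np.nanmean(data[..., max(0,i-hW):i+hW+1], axis=-1)

print(np.allclose(out, ref))   # True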
Approach #2 :
def nanmoving_mean_numpy_v2(data, W): # data: input array, W: window size (odd)
    N = data.shape[-1]
    hW = (W-1)//2
    nan_mask = np.isnan(data)
    data1 = np.where(nan_mask, 0, data)

    # Windowed value sums via the 1D uniform filter; NaN counts still via convolve2d
    value_sums = uniff(data1, size=W, axis=-1, mode='constant')*W
    nan_sums = conv2(nan_mask.reshape(-1,N), np.ones((1,W)), 'same', boundary='fill')
    nan_sums.shape = data.shape

    b_sizes = hW+1+np.arange(hW) # Boundary sizes
    count = np.hstack(( b_sizes, W*np.ones(N-2*hW,dtype=int), b_sizes[::-1] ))
    out = value_sums/(count - nan_sums)
    out = np.where(np.isclose(count, nan_sums), np.nan, out) # all-NaN windows -> NaN
    return out
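The extra np.isclose step is what handles windows that contain only NaNs; a tiny sketch of that behaviour on made-up data (NumPy may still warn about the intermediate 0/0 before the mask is applied) -
data = np.arange(12, dtype=float).reshape(1,12)
data[0,:5] = np.nan            # first five entries are all NaN
W = 3
out = nanmoving_mean_numpy_v2(data, W)
print(out[0,:6])
# Positions 0..3 see only NaNs in their windows -> NaN in the output;
# position 4 already picks up the valid value at index 5 -> [nan nan nan nan 5. 5.5]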
Approach #3 :
def nanmoving_mean_numpy_v3(data, W): # data: input array, W: window size (odd)
    N = data.shape[-1]
    hW = (W-1)//2
    nan_mask = np.isnan(data)
    data1 = np.where(nan_mask, 0, data)

    # Windowed fraction of NaNs, computed entirely with the 1D uniform filter
    nan_avgs = uniff(nan_mask.astype(float), size=W, axis=-1, mode='constant')

    b_sizes = hW+1+np.arange(hW) # Boundary sizes
    count = np.hstack(( b_sizes, W*np.ones(N-2*hW), b_sizes[::-1] ))
    scale = ((count/float(W)) - nan_avgs) # fraction of valid elements per window
    out = uniff(data1, size=W, axis=-1, mode='constant')/scale
    out = np.where(np.isclose(scale, 0), np.nan, out) # all-NaN windows -> NaN
    return out
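Finally, a small sanity check on made-up data (NaNs kept sparse, so that no window is entirely NaN and Approach #1 stays well-defined) that the three variants agree -
data = np.random.rand(3,4,20)
data[..., ::7] = np.nan        # sparse NaNs: no length-5 window is all NaN
W = 5

r1 = nanmoving_mean_numpy(data, W)
r2 = nanmoving_mean_numpy_v2(data, W)
r3 = nanmoving_mean_numpy_v3(data, W)
print(np.allclose(r1, r2), np.allclose(r2, r3))   # True True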
Runtime test
Dataset #1 :
In [807]: # Create random input array and insert NaNs
...: data = np.random.randint(10,size=(20,30,60)).astype(float)
...:
...: # Add 10% NaNs across the data randomly
...: idx = np.random.choice(data.size,size=int(data.size*0.1),replace=0)
...: data.ravel()[idx] = np.nan
...:
...: W = 5 # Window size
...:
In [808]: %timeit nanmoving_mean(data,window=W,axis=2)
...: %timeit nanmoving_mean_numpy(data, W)
...: %timeit nanmoving_mean_numpy_v2(data, W)
...: %timeit nanmoving_mean_numpy_v3(data, W)
...:
10 loops, best of 3: 22.3 ms per loop
100 loops, best of 3: 3.31 ms per loop
100 loops, best of 3: 2.99 ms per loop
1000 loops, best of 3: 1.76 ms per loop
Dataset #2 [Bigger dataset] :
In [811]: # Create random input array and insert NaNs
...: data = np.random.randint(10,size=(120,130,160)).astype(float)
...:
...: # Add 10% NaNs across the data randomly
...: idx = np.random.choice(data.size,size=int(data.size*0.1),replace=0)
...: data.ravel()[idx] = np.nan
...:
In [812]: %timeit nanmoving_mean(data,window=W,axis=2)
...: %timeit nanmoving_mean_numpy(data, W)
...: %timeit nanmoving_mean_numpy_v2(data, W)
...: %timeit nanmoving_mean_numpy_v3(data, W)
...:
1 loops, best of 3: 796 ms per loop
1 loops, best of 3: 486 ms per loop
1 loops, best of 3: 275 ms per loop
10 loops, best of 3: 161 ms per loop