Is this the right Numpy reshape? - python

I've just started with Python and Numpy.
I have found this piece of code:
def preprocessing(FLAIR_array, T1_array):
    brain_mask = np.ndarray(np.shape(FLAIR_array), dtype=np.float32)
    brain_mask[FLAIR_array >= thresh] = 1
    brain_mask[FLAIR_array < thresh] = 0
    for iii in range(np.shape(FLAIR_array)[0]):
        brain_mask[iii, :, :] = scipy.ndimage.morphology.binary_fill_holes(brain_mask[iii, :, :])  # fill the holes inside the brain
    FLAIR_array -= np.mean(FLAIR_array[brain_mask == 1])  # Gaussian normalization
    FLAIR_array /= np.std(FLAIR_array[brain_mask == 1])
    rows_o = np.shape(FLAIR_array)[1]
    cols_o = np.shape(FLAIR_array)[2]
    FLAIR_array = FLAIR_array[:, int((rows_o-rows_standard)/2):int((rows_o-rows_standard)/2)+rows_standard, int((cols_o-cols_standard)/2):int((cols_o-cols_standard)/2)+cols_standard]
What are they doing in the last line? In this one:
FLAIR_array[:, int((rows_o-rows_standard)/2):int((rows_o-rows_standard)/2)+rows_standard, int((cols_o-cols_standard)/2):int((cols_o-cols_standard)/2)+cols_standard]
FLAIR_array has this shape: [48,240,240].
48 is the number of images.
240 and 240 are its height and width.
Or maybe they are slicing it?

Yes, they're only performing Numpy slicing (not reshaping) on FLAIR_array. The resulting dimensions will be:
All elements in the 0th dimension are retained from the original array (as indicated by :)
Elements int((rows_o-rows_standard)/2) to int((rows_o-rows_standard)/2)+rows_standard - 1 are taken from the 1st dimension of the original array
Elements int((cols_o-cols_standard)/2) to int((cols_o-cols_standard)/2)+cols_standard - 1 are taken from the 2nd dimension of the original array
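For a concrete sense of what this does, here is a minimal sketch (rows_standard and cols_standard are not defined in the snippet above; 200 is a hypothetical value):
import numpy as np
rows_standard, cols_standard = 200, 200  # hypothetical; not defined in the question
FLAIR_array = np.zeros((48, 240, 240), dtype=np.float32)
rows_o, cols_o = FLAIR_array.shape[1], FLAIR_array.shape[2]
r0 = int((rows_o - rows_standard) / 2)  # (240 - 200) / 2 = 20
c0 = int((cols_o - cols_standard) / 2)  # 20
cropped = FLAIR_array[:, r0:r0 + rows_standard, c0:c0 + cols_standard]
print(cropped.shape)  # (48, 200, 200): a 200x200 center crop of every image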

Hard to tell, as rows_standard is not defined inside the function.
But if you rewrite it as (dropping some of the int(..) calls to increase readability)
rows_center = int(rows_o / 2)
cols_center = int(cols_o / 2)
delta_rows = int(rows_standard)
delta_cols = int(cols_standard)
FLAIR_array = FLAIR_array[:, rows_center - delta_rows // 2:rows_center + delta_rows // 2, cols_center - delta_cols // 2:cols_center + delta_cols // 2]
it seems that they are extracting, for each image, a small crop centered at (rows_center, cols_center) with delta_rows rows and delta_cols columns.


Append Value to 3D array numpy

I iterate over a 3D numpy array and want to append a float value to the array in the 3rd dimension (axis=2) at every step.
Something like this (I know the code doesn't work as of now; latIndex, lonIndex, and data are random values for simplicity):
import numpy as np
import random
GridData = np.ones((121, 201, 1000))
data = np.random.rand(4800, 4800)
for row in range(4800):
    for column in range(4800):
        latIndex = random.randrange(0, 121, 1)
        lonIndex = random.randrange(0, 201, 1)
        GridData = np.append(GridData[latIndex, lonIndex, :], data[column, row], axis=2)
The 3rd dimension of GridData is arbitrary in this example of size 1000.
How can I achieve this?
Addition:
It might be possible without np.append but then I don't know how to do this since the 3rd index is different for every combination of latIndex and lonIndex.
You can allocate extra space for your array grid_data, fill it with NaN, and keep track of the next index to be filled in another array while iterating through and filling with values from data. If you completely fill the third dimension for some lat_idx, lon_idx with non-NaN values, then you just allocate more space. Since appending is expensive with numpy, it's best that this extra space is pretty large so you only do it once or twice (below I allocate twice the original space).
Once the array is filled, you can remove the unused space that was added, using numpy.isnan(). This solution does what you want, but is very slow (for the example values you gave it took about two minutes); the slow execution comes from the iteration rather than the numpy operations.
Here's the code:
import random
import numpy as np
grid_data = np.ones(shape=(121, 201, 1000))
data = np.random.rand(4800, 4800)
# keep track of next index to fill for all the arrays in axis 2
next_to_fill = np.full(shape=(grid_data.shape[0], grid_data.shape[1]),
                       fill_value=grid_data.shape[2],
                       dtype=np.int32)
# allocate more space
double_shape = (grid_data.shape[0], grid_data.shape[1], grid_data.shape[2] * 2)
extra_space = np.full(shape=double_shape, fill_value=np.nan)
grid_data = np.append(grid_data, extra_space, axis=2)
for row in range(4800):
    for col in range(4800):
        lat_idx = random.randint(0, 120)
        lon_idx = random.randint(0, 200)
        # allocate more space if needed
        if next_to_fill[lat_idx, lon_idx] >= grid_data.shape[2]:
            grid_data = np.append(grid_data, extra_space, axis=2)
        grid_data[lat_idx, lon_idx, next_to_fill[lat_idx, lon_idx]] = data[row, col]
        next_to_fill[lat_idx, lon_idx] += 1
# remove unnecessary nans that were appended
not_all_nan_idxs = ~np.isnan(grid_data).all(axis=(0, 1))
grid_data = grid_data[:, :, not_all_nan_idxs]
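As an aside, if the number of values per (lat, lon) cell varies a lot, a common alternative (a sketch, not part of the answer above) is to accumulate values in plain Python lists, which are cheap to append to, and convert them to arrays only once at the end:
import random
import numpy as np
# one Python list per (lat, lon) cell; appending to a list is O(1) amortized
cells = [[[] for _ in range(201)] for _ in range(121)]
data = np.random.rand(100, 100)  # smaller than the question's 4800x4800, for brevity
for row in range(100):
    for col in range(100):
        lat_idx = random.randint(0, 120)
        lon_idx = random.randint(0, 200)
        cells[lat_idx][lon_idx].append(data[row, col])
# convert each cell to a numpy array exactly once
cell_arrays = [[np.array(c) for c in row_cells] for row_cells in cells]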

Numpy dynamic array slicing based on min/max values

I have a 3-dimensional array of shape (365, x, y), where 365 corresponds to daily data. In some cases, all of the elements along the time axis (axis=0) are np.nan.
The time series for each point along axis=0 has a single peak with troughs on either side (plot omitted).
I need to find the index at which the maximum value (peak data) occurs and then the two minimum values on each side of the peak.
import numpy as np
a = np.random.random((365, 3, 3)) * 10
a[:, 0, 0] = np.nan
peak_mask = np.ma.masked_array(a, np.isnan(a))
peak_indexes = np.nanargmax(peak_mask, axis=0)
I can find the minimum before the peak using something like this:
early_minimum_indexes = np.full_like(peak_indexes, fill_value=0)
for i in range(peak_indexes.shape[0]):
    for j in range(peak_indexes.shape[1]):
        if peak_indexes[i, j] == 0:
            early_minimum_indexes[i, j] = 0
        else:
            early_mask = np.ma.masked_array(a, np.isnan(a))
            early_loc = np.nanargmin(early_mask[:peak_indexes[i, j], i, j], axis=0)
            early_minimum_indexes[i, j] = early_loc
The resulting peak and trough can then be plotted (plot omitted).
This approach takes an unreasonable amount of time for large arrays (1M+ elements). Is there a better way to do this using numpy?
While using masked arrays may not be the most efficient solution in this case, it will allow you to perform masked operations on specific axes while more-or-less preserving shape, which is a great convenience. Keep in mind that in many cases, the masked functions will still end up copying the masked data.
Your current code has mostly the right idea, but it misses a couple of tricks, like the ability to negate and combine masks, the fact that allocating masks as boolean up front is more efficient, and little nitpicks like np.full(..., 0) -> np.zeros(..., dtype=bool).
Let's work through this backwards. Let's say you had a well-behaved 1-D array with a peak, say a1. You can use masking to easily find the maxima and minima (or indices) like this:
peak_index = np.nanargmax(a1)
mask = np.zeros(a1.size, dtype=bool)
mask[peak_index:] = True
trough_plus = np.nanargmin(np.ma.array(a1, mask=~mask))
trough_minus = np.nanargmin(np.ma.array(a1, mask=mask))
This respects the fact that masked arrays flip the sense of the mask relative to normal numpy boolean indexing. It's also OK that the maximum value appears in the calculation of trough_plus, since it's guaranteed not to be a minimum (unless you have the all-nan situation).
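For example, with a small hypothetical 1-D array:
import numpy as np
a1 = np.array([2.0, 1.0, 5.0, 3.0, 0.5, 4.0])
peak_index = np.nanargmax(a1)  # 2
mask = np.zeros(a1.size, dtype=bool)
mask[peak_index:] = True
trough_plus = np.nanargmin(np.ma.array(a1, mask=~mask))   # 4 (value 0.5)
trough_minus = np.nanargmin(np.ma.array(a1, mask=mask))   # 1 (value 1.0)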
Now if a1 was a masked array already (but still 1D), you could do the same thing, but combine the masks temporarily. For example:
a1 = np.ma.array(a1, mask=np.isnan(a1))
peak_index = a1.argmax()
mask = np.zeros(a1.size, dtype=bool)
mask[peak_index:] = True
trough_plus = np.ma.masked_array(a1, mask=a1.mask | ~mask).argmin()
trough_minus = np.ma.masked_array(a1, mask=a1.mask | mask).argmin()
Again, since masked arrays have reversed masks, it's important to combine the masks using | instead of &, as you would for normal numpy boolean masks. In this case, there is no need for calling the nan version of argmax and argmin, since all the nans are already masked out.
Hopefully, the generalization to multiple dimensions becomes clear from here, given the prevalence of the axis keyword in numpy functions:
a = np.ma.array(a, mask=np.isnan(a))
peak_indices = a.argmax(axis=0).reshape(1, *a.shape[1:])
mask = np.arange(a.shape[0]).reshape(-1, *(1,) * (a.ndim - 1)) >= peak_indices
trough_plus = np.ma.masked_array(a, mask=~mask | a.mask).argmin(axis=0)
trough_minus = np.ma.masked_array(a, mask=mask | a.mask).argmin(axis=0)
The N-dimensional masking technique comes from Fill mask efficiently based on start indices, which was asked just for this purpose.
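A quick way to sanity-check the vectorized version on small synthetic data (a minimal sketch; the sizes are hypothetical, and the all-nan series at (0, 0) gets a meaningless index, which the INVALINT handling in the next answer addresses):
import numpy as np
a = np.random.random((365, 3, 3)) * 10
a[:, 0, 0] = np.nan  # an all-nan series
a = np.ma.array(a, mask=np.isnan(a))
peak_indices = a.argmax(axis=0).reshape(1, *a.shape[1:])
mask = np.arange(a.shape[0]).reshape(-1, *(1,) * (a.ndim - 1)) >= peak_indices
trough_plus = np.ma.masked_array(a, mask=~mask | a.mask).argmin(axis=0)
trough_minus = np.ma.masked_array(a, mask=mask | a.mask).argmin(axis=0)
print(peak_indices.squeeze())  # 3x3 array of peak positions along axis 0
print(trough_minus)            # 3x3 array of left-trough positions
print(trough_plus)             # 3x3 array of right-trough positions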
Here is a method that:
- copies the data
- saves all nan positions and replaces all nans with the global min - 1
- finds the rowwise argmax
- subtracts its value from the entire row (note that each row now has only non-positive values, with the max value now being zero)
- zeros all nan positions
- flips the sign of all values right of the max (this is the main idea; it creates a new row-global max at the position where the right-hand min was before, and at the same time it ensures that the left-hand min is now row-global)
- retrieves the rowwise argmin and argmax; these are the positions of the left and right minima in the original array
- finds all-nan rows and overwrites the max and min indices at these positions with INVALINT
Code:
INVALINT = -9999
t, x, y = a.shape
t, x, y = np.ogrid[:t, :x, :y]
inval = np.isnan(a)
b = np.where(inval, np.nanmin(a) - 1, a)
pk = b.argmax(axis=0)
pkval = b[pk, x, y]
b -= pkval
b[inval] = 0
b[t > pk[None]] *= -1
ltr = b.argmin(axis=0)
rtr = b.argmax(axis=0)
del b
inval = inval.all(axis=0)
pk[inval] = INVALINT
ltr[inval] = INVALINT
rtr[inval] = INVALINT
# result is now in ltr ("left trough"), pk ("peak") and rtr ("right trough")

How can I add a column of ones to a normalized array with numpy?

My code is
import numpy as np
housing_data = np.loadtxt('Housing.csv', delimiter=',')
x1 = housing_data[:,0]
x2 = housing_data[:,1]
y = housing_data[:,2]
avgX1 = np.mean(x1)
stdX1 = np.std(x1)
normX1 = (x1 - avgX1) / stdX1
avgX2 = np.mean(x2)
stdX2 = np.std(x2)
normX2 = (x2 - avgX2) / stdX2
ones = np.ones((normX2.shape[0], 1))
normalizedX = np.array((ones[0], normX1, normX2))
I'm trying to create a new normalized array with the ones in the first column, then the normX1 and normX2. For some reason, my code isn't working. Any idea what I'm doing wrong?
The actual issue is that you made ones 2D where normX1 and normX2 are 1D. Then, when you call np.array((ones[0], normX1, normX2)), you get the first row of ones, which is another array, of length 1. The mismatch in length between the three args to np.array causes it to return a list of the objects instead (a numpy array with dtype=object).
I'd just make ones big enough to fit all your data in the first place and avoid making one extra array. Then just assign the values of normX1 and normX2 to the columns of that array:
normalizedX = np.ones((normX2.shape[0], 3))
normalizedX[:,1] = normX1
normalizedX[:,2] = normX2
print(normalizedX)
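For reference, np.column_stack gives the same result in one line (a sketch reusing the normX1 and normX2 variables from the question):
normalizedX = np.column_stack((np.ones_like(normX1), normX1, normX2))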

numpy insert 2D array into 4D structure

I have a 4D array: array = np.random.rand(3432,1,30,512)
I also have 5 sets of 2D arrays with shape (30,512)
I want to insert these into the 4D structure along axis 1 so that my final shape is (3432,6,30,512) (5 new arrays + the original 1). I need to iteratively insert this set for each of the 3432 elements
What's the most effective way to do this?
I've tried reshaping the 2D to 4D and then inserting along axis 1. I'm expecting axis 1 to never exceed a size of 6, but the 2D arrays just keep getting added, rather than a set for each of the 3432 elements. I think my problem lies in not fully understanding the obj param for the insert method:
all_data = np.reshape(all_data, (-1, 1, 30, 512))
for i in range(all_data.shape[0]):
    num_band = 1
    for band in range(5):
        temp_trial = np.zeros((30, 512))  # just an example; values aren't actually 0
        temp_trial = np.reshape(temp_trial, (1, 1, 30, 512))
        all_data = np.insert(all_data, num_band, temp_trial, 1)
        num_band += 1
Create an array with the final shape first and insert the elements later:
final = np.zeros((3432, 6, 30, 512))
for i in range(3432):  # note, this will take a while
    for j in range(6):
        final[i, j, :, :] = np.ones((30, 512))  # insert your array here
or, if you actually want to broadcast this over the zeroth axis, assuming each of the 3432 should be the same for each "band":
for i in range(6):
    final[:, i, :, :] = np.ones((30, 512))  # insert your array here
As long as you don't run these loops many times, there is no need to vectorize.
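If the five 2D arrays really are the same for every one of the 3432 elements, broadcasting can avoid the Python loops entirely. A sketch with hypothetical band data:
import numpy as np
all_data = np.random.rand(3432, 1, 30, 512)
bands = np.random.rand(5, 30, 512)  # hypothetical: the 5 sets of 2D arrays
# replicate the 5 bands across axis 0 (a read-only view, no copy) and
# append them after the original band
expanded = np.broadcast_to(bands, (3432, 5, 30, 512))
final = np.concatenate([all_data, expanded], axis=1)
print(final.shape)  # (3432, 6, 30, 512)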

resize a 2D numpy array excluding NaN

I'm trying to resize a 2D numpy array by a given factor, obtaining a smaller array in output.
The array is read from an image file and some of the values should be NaN (Not a Number, np.nan from numpy): it is the result of remote sensing measurements from a satellite, and some pixels simply weren't measured.
The most suitable package I found for this is scipy.misc.imresize, but each pixel in the output array containing a NaN is set to NaN, even if there are some valid data among the original pixels interpolated together.
My solution is appended below; what I've done is essentially:
- create a new array based on the original array shape and the desired reduction factor
- create an index array to address all the pixels of the original array to be averaged for each pixel in the new one
- cycle through the new array pixels and average all the non-NaN pixels to obtain the new array pixel value; if there are only NaNs, the output will be NaN
I'm planning to add a keyword to choose between different outputs (average, median, standard deviation of the input pixels, and so on).
It is working as expected, but on a ~1Mpx image it takes around 3 seconds. Due to my lack of experience in python I'm searching for improvements.
Does anyone have suggestions on how to do it better and more efficiently?
Does anyone know of a library that already implements all of this?
Thanks.
Here is an example output for random pixel input, generated with the code below (figure omitted):
import numpy as np
import pylab as plt
from scipy import misc
def resize_2d_nonan(array, factor):
    """
    Resize a 2D array by different factors on the two axes, skipping NaN values.
    If a new pixel contains only NaN, it will be set to NaN

    Parameters
    ----------
    array : 2D np array
    factor : int or tuple. If int, the x and y factors will be the same

    Returns
    -------
    array : 2D np array scaled by factor

    Created on Mon Jan 27 15:21:25 2014
    #author: damo_ma
    """
    xsize, ysize = array.shape
    if isinstance(factor, int):
        factor_x = factor
        factor_y = factor
    elif isinstance(factor, tuple):
        factor_x, factor_y = factor[0], factor[1]
    else:
        raise NameError('Factor must be a tuple (x,y) or an integer')
    if xsize % factor_x != 0 or ysize % factor_y != 0:
        raise NameError('Factors must be integer multiples of array shape')
    new_xsize, new_ysize = xsize // factor_x, ysize // factor_y
    new_array = np.empty([new_xsize, new_ysize])
    new_array[:] = np.nan  # this saves us an assignment in the loop below
    # submatrix indexes: the average box on the original matrix
    subrow, subcol = np.indices((factor_x, factor_y))
    # new matrix indexes
    row, col = np.indices((new_xsize, new_ysize))
    # some output for testing
    # for i, j, ind in zip(row.reshape(-1), col.reshape(-1), range(row.size)):
    #     print('----------------------------------------------')
    #     print('i: %i, j: %i, ind: %i ' % (i, j, ind))
    #     print('subrow+i*factor_x, subcol+j*factor_y :')
    #     print(i, '*', new_xsize, '=', i * factor_x)
    #     print(j, '*', new_ysize, '=', j * factor_y)
    #     print(subrow + i * factor_x, subcol + j * factor_y)
    #     print('---')
    #     print('array[subrow+i*factor_x, subcol+j*factor_y] :')
    #     print(array[subrow + i * factor_x, subcol + j * factor_y])
    for i, j, ind in zip(row.reshape(-1), col.reshape(-1), range(row.size)):
        # define the small sub_matrix as a view of the input matrix subset
        sub_matrix = array[subrow + i * factor_x, subcol + j * factor_y]
        # modified from any(a) and all(a) to a.any() and a.all()
        # see https://stackoverflow.com/a/10063039/1435167
        if not np.isnan(sub_matrix).all():  # if the submatrix is not all NaN
            if np.isnan(sub_matrix).any():  # if it contains some NaN
                msub_matrix = np.ma.masked_array(sub_matrix, np.isnan(sub_matrix))
                new_array.reshape(-1)[ind] = np.mean(msub_matrix)
            else:  # if it contains no NaN at all
                new_array.reshape(-1)[ind] = np.mean(sub_matrix)
        # the all-NaN case is handled by the default values of new_array
    return new_array
row, cols = 6, 4
a = 10 * np.random.random_sample((row, cols))
a[0:3, 0:2] = np.nan
a[0, 2] = np.nan
factor_x = 2
factor_y = 2
a_misc = misc.imresize(a, .5, interp='nearest', mode='F')
a_2d_nonan = resize_2d_nonan(a, (factor_x, factor_y))
print(a)
print()
print(a_misc)
print()
print(a_2d_nonan)
plt.subplot(131)
plt.imshow(a, interpolation='nearest')
plt.title('original')
plt.xticks(np.arange(a.shape[1]))
plt.yticks(np.arange(a.shape[0]))
plt.subplot(132)
plt.imshow(a_misc, interpolation='nearest')
plt.title('scipy.misc')
plt.xticks(np.arange(a_misc.shape[1]))
plt.yticks(np.arange(a_misc.shape[0]))
plt.subplot(133)
plt.imshow(a_2d_nonan, interpolation='nearest')
plt.title('my.func')
plt.xticks(np.arange(a_2d_nonan.shape[1]))
plt.yticks(np.arange(a_2d_nonan.shape[0]))
EDIT
I added some modifications to address ChrisProsser's comment.
If I substitute the NaNs with some other value, let's say the average of the non-NaN pixels, it will affect all the subsequent calculations: the difference between the resampled original array and the resampled array with the NaNs substituted shows that 2 pixels changed their values.
My goal is simply to skip all the NaN pixels.
# substitute NaN with the average value
ind_nonan, ind_nan = np.where(~np.isnan(a)), np.where(np.isnan(a))
a_substitute = np.copy(a)
a_substitute[ind_nan] = np.mean(a_substitute[ind_nonan])  # substitute the NaNs with the average of the non-NaNs
a_substitute_misc = misc.imresize(a_substitute, .5, interp='nearest', mode='F')
a_substitute_2d_nonan = resize_2d_nonan(a_substitute, (factor_x, factor_y))
print(a_2d_nonan - a_substitute_2d_nonan)
[[        nan -0.02296697]
 [ 0.23143208  0.        ]
 [ 0.          0.        ]]
2nd EDIT
To address Hooked's answer I added some code. It is an interesting idea; sadly, it interpolates new values over pixels that should be "empty" (NaN), and for my small example it generates more NaNs than good values.
X, Y = np.indices((row, cols))
X_new, Y_new = np.indices((row // factor_x, cols // factor_y))
from scipy.interpolate import CloughTocher2DInterpolator as intp
C = intp((X[ind_nonan], Y[ind_nonan]), a[ind_nonan])
a_interp = C(X_new, Y_new)
print(a)
print()
print(a_interp)
[[ nan, nan],
 [ nan, nan],
 [ nan, 6.32826577]]
You are operating on small windows of the array. Instead of looping through the array to make the windows, the array can be efficiently restructured by manipulating its strides. The numpy library provides the as_strided() function to help with that. An example is provided in the SciPy CookBook Stride tricks for the Game of Life.
The following will use a generalized sliding window function, which I will include at the end.
Determine the shape of the new array:
rows, cols = a.shape
new_shape = rows // 2, cols // 2
Restructure the array into the windows you need, and create an indexing array identifying NaNs:
# 2x2 windows of the original array
windows = sliding_window(a, (2,2))
# make a windowed boolean array for indexing
notNan = sliding_window(np.logical_not(np.isnan(a)), (2,2))
The new array can be made using a list comprehension or a generator expression.
# using a list comprehension
# make a list of the means of the windows, disregarding the NaNs
means = [window[index].mean() for window, index in zip(windows, notNan)]
new_array = np.array(means).reshape(new_shape)
# generator expression
# produces the means of the windows, disregarding the NaNs
means = (window[index].mean() for window, index in zip(windows, notNan))
new_array = np.fromiter(means, dtype=np.float32).reshape(new_shape)
The generator expression should conserve memory. Using itertools.izip() instead of zip() should also help if memory is a problem (on Python 2). I just used the list comprehension for your solution.
Your function:
def resize_2d_nonan(array, factor):
    """
    Resize a 2D array by different factors on the two axes, skipping NaN values.
    If a new pixel contains only NaN, it will be set to NaN

    Parameters
    ----------
    array : 2D np array
    factor : int or tuple. If int, the x and y factors will be the same

    Returns
    -------
    array : 2D np array scaled by factor

    Created on Mon Jan 27 15:21:25 2014
    #author: damo_ma
    """
    xsize, ysize = array.shape
    if isinstance(factor, int):
        factor_x = factor
        factor_y = factor
        window_size = factor, factor
    elif isinstance(factor, tuple):
        factor_x, factor_y = factor
        window_size = factor
    else:
        raise NameError('Factor must be a tuple (x,y) or an integer')
    if xsize % factor_x or ysize % factor_y:
        raise NameError('Factors must be integer multiples of array shape')
    new_shape = xsize // factor_x, ysize // factor_y
    # non-overlapping windows of the original array
    windows = sliding_window(array, window_size)
    # windowed boolean array for indexing
    notNan = sliding_window(np.logical_not(np.isnan(array)), window_size)
    # list of the means of the windows, disregarding the NaNs
    means = [window[index].mean() for window, index in zip(windows, notNan)]
    # new array
    new_array = np.array(means).reshape(new_shape)
    return new_array
I haven't done any time comparisons with your original function, but it should be faster.
Many solutions I've seen here on SO vectorize the operations to increase speed/efficiency - I don't quite have a handle on that and don't know if it can be applied to your problem. Searching SO for window, array, moving average, vectorize, and numpy should produce similar questions and answers for reference.
sliding_window() see attribution below:
import numpy as np
from numpy.lib.stride_tricks import as_strided as ast
from itertools import product

def norm_shape(shape):
    '''
    Normalize numpy array shapes so they're always expressed as a tuple,
    even for one-dimensional shapes.

    Parameters
        shape - an int, or a tuple of ints

    Returns
        a shape tuple
    '''
    try:
        i = int(shape)
        return (i,)
    except TypeError:
        # shape was not a number
        pass
    try:
        t = tuple(shape)
        return t
    except TypeError:
        # shape was not iterable
        pass
    raise TypeError('shape must be an int, or a tuple of ints')

def sliding_window(a, ws, ss=None, flatten=True):
    '''
    Return a sliding window over a in any number of dimensions

    Parameters:
        a - an n-dimensional numpy array
        ws - an int (a is 1D) or tuple (a is 2D or greater) representing the
             size of each dimension of the window
        ss - an int (a is 1D) or tuple (a is 2D or greater) representing the
             amount to slide the window in each dimension. If not specified,
             it defaults to ws.
        flatten - if True, all slices are flattened; otherwise, there is an
                  extra dimension for each dimension of the input.

    Returns
        an array containing each n-dimensional window from a
    '''
    if ss is None:
        # ss was not provided; the windows will not overlap in any direction.
        ss = ws
    ws = norm_shape(ws)
    ss = norm_shape(ss)
    # convert ws, ss, and a.shape to numpy arrays so that we can do math in
    # every dimension at once.
    ws = np.array(ws)
    ss = np.array(ss)
    shape = np.array(a.shape)
    # ensure that ws, ss, and a.shape all have the same number of dimensions
    ls = [len(shape), len(ws), len(ss)]
    if len(set(ls)) != 1:
        raise ValueError(
            'a.shape, ws and ss must all have the same length. They were %s' % str(ls))
    # ensure that ws is smaller than a in every dimension
    if np.any(ws > shape):
        raise ValueError(
            'ws cannot be larger than a in any dimension. '
            'a.shape was %s and ws was %s' % (str(a.shape), str(ws)))
    # how many slices will there be in each dimension?
    newshape = norm_shape(((shape - ws) // ss) + 1)
    # the shape of the strided array will be the number of slices in each
    # dimension plus the shape of the window (tuple addition)
    newshape += norm_shape(ws)
    # the strides tuple will be the array's strides multiplied by step size,
    # plus the array's strides (tuple addition)
    newstrides = norm_shape(np.array(a.strides) * ss) + a.strides
    strided = ast(a, shape=newshape, strides=newstrides)
    if not flatten:
        return strided
    # Collapse strided so that it has one more dimension than the window,
    # i.e. the new array is a flat list of slices.
    meat = len(ws) if ws.shape else 0
    firstdim = (np.prod(newshape[:-meat]),) if ws.shape else ()
    dim = firstdim + (newshape[-meat:])
    # remove any dimensions with size 1
    dim = [i for i in dim if i != 1]
    return strided.reshape(dim)
sliding_window() attribution
I originally found this on a blog page that is now a broken link:
Efficient Overlapping Windows with Numpy - http://www.johnvinyard.com/blog/?p=268
With a little searching it looks like it now resides in the Zounds github repository. Thanks John Vinyard.
Note this post is pretty old, and there are a lot of SO Q&As regarding sliding windows, rolling windows, and, for images, patch extraction. There are a lot of one-offs using numpy's as_strided, but this function still seems to be the only one that handles n-d windowing. scikit-learn's sklearn.feature_extraction.image module is often cited for extracting or viewing image patches.
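For example, extract_patches_2d from scikit-learn pulls all overlapping patches out of a 2D image (a minimal sketch; note these are overlapping patches, unlike the non-overlapping windows used above):
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
image = np.arange(16.0).reshape(4, 4)
patches = extract_patches_2d(image, (2, 2))
print(patches.shape)  # (9, 2, 2): all overlapping 2x2 patches of a 4x4 image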
Interpolate the points, using scipy.interpolate, on a different grid. Below I've shown a cubic interpolator, which is slower but probably more accurate. You'll notice that the corner pixels are missing with this function; you could then use a linear or nearest-neighbor interpolation to handle those last values.
import numpy as np
import pylab as plt
# Test data
row = np.linspace(-3,3,50)
X,Y = np.meshgrid(row,row)
Z = np.sqrt(X**2+Y**2) + np.cos(Y)
# Make some dead pixels, favor an edge
dead = np.random.random(Z.shape)
dead = (dead * X > .7)
Z[dead] = np.nan
from scipy.interpolate import CloughTocher2DInterpolator as intp
C = intp((X[~dead],Y[~dead]),Z[~dead])
new_row = np.linspace(-3,3,25)
xi,yi = np.meshgrid(new_row,new_row)
zi = C(xi,yi)
plt.subplot(121)
plt.title("Original signal 50x50")
plt.imshow(Z,interpolation='nearest')
plt.subplot(122)
plt.title("Interpolated signal 25x25")
plt.imshow(zi,interpolation='nearest')
plt.show()
