Aggregate Numpy Array By Summing - python

I have a Numpy array of shape (4320,8640). I would like to have an array of shape (2160,4320).
You'll notice that each cell of the new array maps to a 2x2 set of cells in the old array. I would like a cell's value in the new array to be the sum of the values in this block in the old array.
I can achieve this as follows:
import numpy
#Generate an example array
arr = numpy.random.randint(10,size=(4320,8640))
#Perform the transformation
arrtrans = numpy.array([ [ arr[y][x]+arr[y+1][x]+arr[y][x+1]+arr[y+1][x+1] for x in range(0,8640,2)] for y in range(0,4320,2)])
But this is slow and more than a little ugly.
Is there a way to do this using Numpy (or an interoperable package)?

When the window fits exactly into the array, reshaping to more dimensions and collapsing the extra dimensions with np.sum is sort of the canonical way of doing this with numpy:
>>> a = np.random.rand(4320,8640)
>>> a.shape
(4320, 8640)
>>> a_small = a.reshape(2160, 2, 4320, 2).sum(axis=(1, 3))
>>> a_small.shape
(2160, 4320)
>>> np.allclose(a_small[100, 203], a[200:202, 406:408].sum())
True
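For reference, a small sketch (block_sum is my own helper name, not a NumPy function) that generalizes the reshape trick to any block size dividing the array shape exactly:
import numpy as np

def block_sum(a, bx, by):
    # sum non-overlapping bx-by-by blocks; assumes bx and by divide a.shape exactly
    rows, cols = a.shape
    return a.reshape(rows // bx, bx, cols // by, by).sum(axis=(1, 3))

a = np.random.rand(4320, 8640)
print(block_sum(a, 2, 2).shape)  # (2160, 4320)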

I'm not sure the package you want exists, but the following code computes the result much faster.
>>> arrtrans2 = arr[::2, ::2] + arr[::2, 1::2] + arr[1::2, ::2] + arr[1::2, 1::2]
>>> numpy.allclose(arrtrans, arrtrans2)
True
Here ::2 and 1::2 select the indices 0, 2, 4, ... and 1, 3, 5, ... respectively.
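If the reduction factor ever changes, the same slicing idea generalizes with a small loop over the offsets; this is my own sketch, equivalent to the 2x2 case above when f = 2:
import numpy

def block_sum_slices(arr, f):
    # sum f-by-f blocks by adding the f*f shifted slices
    # (assumes f divides both dimensions exactly)
    out = numpy.zeros((arr.shape[0] // f, arr.shape[1] // f), dtype=arr.dtype)
    for dy in range(f):
        for dx in range(f):
            out += arr[dy::f, dx::f]
    return out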

You are operating on sliding windows of the original array. There are numerous questions and answers on SO regarding sliding windows and numpy and python. By manipulating the strides of an array, this process can be sped up considerably. Here is a generic function that will return (x,y) windows of the array, with or without overlap. Using this stride trick appears to be just a hair slower than @mskimm's solution, but it's a nice thing to have in your toolkit. This function is not mine - it was found at Efficient Overlapping Windows with Numpy
import numpy as np
from numpy.lib.stride_tricks import as_strided as ast

def norm_shape(shape):
    '''
    Normalize numpy array shapes so they're always expressed as a tuple,
    even for one-dimensional shapes.
    Parameters
        shape - an int, or a tuple of ints
    Returns
        a shape tuple
    from http://www.johnvinyard.com/blog/?p=268
    '''
    try:
        i = int(shape)
        return (i,)
    except TypeError:
        # shape was not a number
        pass
    try:
        t = tuple(shape)
        return t
    except TypeError:
        # shape was not iterable
        pass
    raise TypeError('shape must be an int, or a tuple of ints')

def sliding_window(a, ws, ss=None, flatten=True):
    '''
    Return a sliding window over a in any number of dimensions
    Parameters:
        a  - an n-dimensional numpy array
        ws - an int (a is 1D) or tuple (a is 2D or greater) representing the size
             of each dimension of the window
        ss - an int (a is 1D) or tuple (a is 2D or greater) representing the
             amount to slide the window in each dimension. If not specified, it
             defaults to ws.
        flatten - if True, all slices are flattened, otherwise, there is an
             extra dimension for each dimension of the input.
    Returns
        an array containing each n-dimensional window from a
    from http://www.johnvinyard.com/blog/?p=268
    '''
    if ss is None:
        # ss was not provided. the windows will not overlap in any direction.
        ss = ws
    ws = norm_shape(ws)
    ss = norm_shape(ss)
    # convert ws, ss, and a.shape to numpy arrays so that we can do math in every
    # dimension at once.
    ws = np.array(ws)
    ss = np.array(ss)
    shape = np.array(a.shape)
    # ensure that ws, ss, and a.shape all have the same number of dimensions
    ls = [len(shape), len(ws), len(ss)]
    if 1 != len(set(ls)):
        error_string = 'a.shape, ws and ss must all have the same length. They were {}'
        raise ValueError(error_string.format(str(ls)))
    # ensure that ws is smaller than a in every dimension
    if np.any(ws > shape):
        error_string = 'ws cannot be larger than a in any dimension. a.shape was {} and ws was {}'
        raise ValueError(error_string.format(str(a.shape), str(ws)))
    # how many slices will there be in each dimension?
    newshape = norm_shape(((shape - ws) // ss) + 1)
    # the shape of the strided array will be the number of slices in each dimension
    # plus the shape of the window (tuple addition)
    newshape += norm_shape(ws)
    # the strides tuple will be the array's strides multiplied by step size, plus
    # the array's strides (tuple addition)
    newstrides = norm_shape(np.array(a.strides) * ss) + a.strides
    strided = ast(a, shape=newshape, strides=newstrides)
    if not flatten:
        return strided
    # Collapse strided so that it has one more dimension than the window. I.e.,
    # the new array is a flat list of slices.
    meat = len(ws) if ws.shape else 0
    firstdim = (np.prod(newshape[:-meat]),) if ws.shape else ()  # np.prod; np.product was removed in NumPy 2.0
    dim = firstdim + (newshape[-meat:])
    # remove any dimensions with size 1 (tuple(...) is needed on Python 3,
    # where filter() returns an iterator)
    dim = tuple(filter(lambda i: i != 1, dim))
    return strided.reshape(dim)
Usage:
# 2x2 windows with NO overlap
b = sliding_window(arr, (2,2), flatten = False)
# the window dimensions are the last two axes of b, so sum over axes 2 and 3
c = b.sum((2, 3))
An approximate 24% performance improvement using numpy.einsum:
c = np.einsum('ijkl -> ij', b)
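A quick sanity check (my addition) that the einsum contraction matches the plain sum over the window axes:
assert np.allclose(np.einsum('ijkl -> ij', b), b.sum((2, 3)))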
One SO Q&A example is How can I efficiently process a numpy array in blocks similar to Matlab's blkproc (blockproc) function; the selected answer there would also work for you.

Related

Reshape tensor to matrix and back

I have 2 methods: one to convert a 4D matrix (tensor) into a matrix, and another to convert the 2D matrix back into 4D.
Reshaping from 4D to 2D works well, but when I try to convert back into a tensor, I don't get the same order of the elements. The methods are:
# Method to convert the tensor into a matrix
def tensor2matrix(tensor):
    # rows, columns, channels and filters
    r, c, ch, f = tensor[0].shape
    new_dim = [r*c*ch, f]  # Infer the new matrix dims
    # Transpose is necessary because the columns are the channel weights
    # flattened into columns
    return np.reshape(np.transpose(tensor[0], [2,0,1,3]), new_dim)

# Method to convert the matrix into a tensor
def matrix2tensor(matrix, fs):
    return np.reshape(matrix, fs, order="F")
I think the problem is in np.transpose, because with a plain matrix I can only permute rows with columns... Is there any way to get the tensor back from the matrix without loops?
Consider the following changes:
- Replace the two tensor[0] by tensor, to avoid
  ValueError: not enough values to unpack (expected 4, got 3)
  when running the example provided below
- Ensure both np.reshape calls use the same order="F"
- Use another np.transpose call inside matrix2tensor to undo the np.transpose from tensor2matrix
The updated code is
import numpy as np

# Method to convert the tensor into a matrix
def tensor2matrix(tensor):
    # rows, columns, channels and filters
    r, c, ch, f = tensor.shape
    new_dim = [r*c*ch, f]  # Infer the new matrix dims
    # Transpose is necessary because the columns are the channel weights
    # flattened into columns
    return np.reshape(np.transpose(tensor, [2,0,1,3]), new_dim, order="F")

# Method to convert the matrix into a tensor
def matrix2tensor(matrix, fs):
    return np.transpose(np.reshape(matrix, fs, order="F"), [1,2,0,3])
and it can be tested like this:
x,y,z,t = 2,3,4,5
shape = (x,y,z,t)
m1 = np.arange(x*y*z*t).reshape((x*y*z, t))
t1 = matrix2tensor(m1, shape)
m2 = tensor2matrix(t1)
assert (m1 == m2).all()
t2 = matrix2tensor(m2, shape)
assert (t1 == t2).all()

Subtract Mean from Multidimensional Numpy-Array

I'm currently learning about broadcasting in Numpy, and in the book I'm reading (Python for Data Analysis by Wes McKinney) the author mentions the following example to "demean" a two-dimensional array:
import numpy as np
arr = np.random.randn(4, 3)
print(arr.mean(0))
demeaned = arr - arr.mean(0)
print(demeaned)
print(demeaned.mean(0))
Which effectively causes the array demeaned to have a mean of 0.
I had the idea to apply this to an image-like, three-dimensional array:
import numpy as np
arr = np.random.randint(0, 256, (400,400,3))
demeaned = arr - arr.mean(2)
Which of course failed, because according to the broadcasting rule, the trailing dimensions have to match, and that's not the case here:
print(arr.shape) # (400, 400, 3)
print(arr.mean(2).shape) # (400, 400)
Now, I have mostly gotten it to work by subtracting the mean from every single index in the third dimension of the array:
means = arr.mean(2)
demeaned = np.ones(arr.shape)
for i in range(3):
    demeaned[..., i] = arr[..., i] - means
print(demeaned.mean(2))
At this point, the returned values are very close to zero, and I think that's a precision error. Am I actually right with this thought, or is there another caveat that I missed?
Also, this doesn't seem to be the cleanest, most 'numpy' way to achieve what I wanted. Is there a function or a principle that I can make use of to improve the code?
As of numpy version 1.7.0, np.mean, and several other functions, accept a tuple in their axis parameter. This means that you can perform the operation on the planes of the image all at once:
m = arr.mean(axis=(0, 1))
This mean will have shape (3,), with one element for each plane of the image.
If you want to subtract the means of each pixel individually, you have to remember that broadcasting aligns shape tuples on the right edge. That means that you need to insert an extra dimension:
n = arr.mean(axis=2)
n = n.reshape(*n.shape, 1)
Or
n = arr.mean(axis=2)[..., None]
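Putting it together, here is a minimal sketch of the per-pixel demeaning; keepdims=True makes np.mean insert that extra dimension for you:
import numpy as np

arr = np.random.randint(0, 256, (400, 400, 3)).astype(float)
# mean over the channel axis, kept as shape (400, 400, 1) so it broadcasts
demeaned = arr - arr.mean(axis=2, keepdims=True)
print(np.allclose(demeaned.mean(axis=2), 0))  # True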
Try np.apply_along_axis().
np.apply_along_axis(lambda x: x - np.mean(x), 2, arr)
Output: you get an array of the same shape, where each cell is demeaned along the dimension you want (the second parameter; here it is 2).

Selecting which dimension to index in a numpy array

I am writing a program that is supposed to be able to import numpy arrays of some higher dimension, e.g. something like an array a:
a = numpy.zeros([3,5,7,2])
Further, each dimension will correspond to some physical dimension, e.g. frequency, distance, ... and I will also import arrays with information about these dimensions, e.g. for a above:
freq = [1,2,3]
time = [0,1,2,3,4,5,6]
distance = [0,0,0,4,1]
angle = [0,180]
Clearly, from this example and the signature, it can be figured out that freq belongs to dimension 0, time to dimension 2, and so on. But since this is not known in advance, I cannot simply take a frequency slice like
a_f1 = a[1,:,:,:]
since I do not know which dimension frequency is indexed by.
So, what I would like is to have some way to choose which dimension to index with an index; in some Python'ish code, something like
a_f1 = a.get_slice([0,], [[1],])
This is supposed to return the slice with index 1 from dimension 0 and the full other dimensions.
Doing
a_p = a[0, 1:, ::2, :-1]
would then correspond to something like
a_p = a.get_slice([0, 1, 2, 3], [[0,], [1,2,3,4], [0,2,4,6], [0,]])
You can fairly easily construct a tuple of indices, using slice objects where needed, and then use this to index into your array. The basic recipe is this:
indices = {
    0: ...,  # put here whatever you want to get on dimension 0
    1: ...,  # put here whatever you want to get on dimension 1
    # leave out whatever dimensions you want to get all of
}
# note: modern numpy requires a tuple, not a list, for mixed indexing
ix = tuple(indices.get(dim, slice(None)) for dim in range(arr.ndim))
arr[ix]
Here I have done it with a dictionary since I think that makes it easier to see which dimension goes with which indexer.
So with your example data:
x = np.zeros([3,5,7,2])
We do this:
indices = {0: 1}
ix = tuple(indices.get(dim, slice(None)) for dim in range(x.ndim))
>>> x[ix].shape
(5, 7, 2)
Because your array is all zeros, I'm just showing the shape of the result to indicate that it is what we want. (Even if it weren't all zeros, it's hard to read a 3D array in text form.)
For your second example:
indices = {
    0: 0,
    1: slice(1, None),
    2: slice(None, None, 2),
    3: slice(None, -1)
}
ix = tuple(indices.get(dim, slice(None)) for dim in range(x.ndim))
>>> x[ix].shape
(4, 4, 1)
You can see that the shape corresponds to the number of values in your a_p example. One thing to note is that the first dimension is gone, since you only specified one value for that index. The last dimension still exists, but with a length of one, because you specified a slice that happens to just get one element. (This is the same reason that some_list[0] gives you a single value, but some_list[:1] gives you a one-element list.)
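The same distinction in array form (a tiny demo of mine):
x = np.zeros([3, 5, 7, 2])
print(x[0].shape)   # (5, 7, 2)  - a scalar index drops the axis
print(x[:1].shape)  # (1, 5, 7, 2) - a length-one slice keeps it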
You can use advanced indexing to achieve this.
The index for each dimension needs to be shaped appropriately so that the indices will broadcast correctly across the array. For example, the index for the first dimension of a 3-d array needs to be shaped (x, 1, 1) so that it will broadcast across the first dimension. The index for the second dimension of a 3-d array needs to be shaped (1, y, 1) so that it will broadcast across the second dimension.
import numpy as np
a = np.zeros([3,5,7,2])
b = a[0, 1:, ::2, :-1]
indices = [[0,], [1,2,3,4], [0,2,4,6], [0,]]

def get_aslice(a, indices):
    n_dim_ = len(indices)
    index_array = [np.array(thing) for thing in indices]
    idx = []
    # reshape the arrays by adding single-dimensional entries
    # based on the position in the index array
    for d, thing in enumerate(index_array):
        shape = [1] * n_dim_
        shape[d] = thing.shape[0]
        #print(d, shape)
        idx.append(thing.reshape(shape))
    # modern numpy requires a tuple here, not a list
    c = a[tuple(idx)]
    # to remove leading single-dimensional entries from the shape
    #while c.shape[0] == 1:
    #    c = np.squeeze(c, 0)
    # To remove all single-dimensional entries from the shape
    #c = np.squeeze(c)
    return c
For a as input, it returns an array with shape (1,4,4,1); your a_p example has a shape of (4,4,1). If the extra dimensions need to be removed, un-comment the np.squeeze lines in the function.
Now I feel silly. While reading the docs more slowly, I noticed numpy has an indexing routine that does what you want - numpy.ix_
>>> a = numpy.zeros([3,5,7,2])
>>> indices = [[0,], [1,2,3,4], [0,2,4,6], [0,]]
>>> index_arrays = np.ix_(*indices)
>>> a_p = a[index_arrays]
>>> a_p.shape
(1, 4, 4, 1)
>>> a_p = np.squeeze(a_p)
>>> a_p.shape
(4, 4)
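Wrapping numpy.ix_ into the get_slice-style helper the question asked for might look like the following sketch; get_slice here is my own name, not a NumPy function. Note that, as with np.ix_ above, singly-indexed dimensions are kept with length one (squeeze them away if needed):
import numpy as np

def get_slice(a, dims, indices):
    # index the given dimensions with the given index lists,
    # taking every other dimension in full
    full = [list(range(a.shape[d])) for d in range(a.ndim)]
    for d, idx in zip(dims, indices):
        full[d] = idx
    return a[np.ix_(*full)]

a = np.zeros([3, 5, 7, 2])
print(get_slice(a, [0], [[1]]).shape)  # (1, 5, 7, 2)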

resize a 2D numpy array excluding NaN

I'm trying to resize a 2D numpy array of a given factor, obtaining a smaller array in output.
The array is read from an image file and some of the values should be NaN (Not a Number, np.nan from numpy): it is the result of remote sensing measurements from satellite and simply some pixels weren't measured.
The suitable package I found for this is scipy.misc.imresize, but each pixel in the output array containing a NaN is set to NaN, even if there is some valid data in the original pixels being interpolated together.
My solution is appended here; what I've done is essentially:
- create a new array based on the original array shape and the desired reduction factor
- create an index array to address all the pixels of the original array to be averaged for each pixel in the new one
- cycle through the new array pixels and average all the not-NaN pixels to obtain the new array pixel value; if there are only NaNs, the output will be NaN.
I'm planning to add a keyword to choose between different outputs (average, median, standard deviation of the input pixels and so on).
It is working as expected, but on a ~1Mpx image it takes around 3 seconds. Due to my lack of experience in python, I'm searching for improvements.
Does anyone have suggestions on how to do it better and more efficiently?
Does anyone know a library that already implements all that stuff?
Thanks.
Here is an example output for random pixel input, generated with the code below:
import numpy as np
import pylab as plt
from scipy import misc

def resize_2d_nonan(array, factor):
    """
    Resize a 2D array by a different factor on the two axes, skipping NaN values.
    If a new pixel contains only NaN, it will be set to NaN
    Parameters
    ----------
    array : 2D np array
    factor : int or tuple. If int, the x and y factor will be the same
    Returns
    -------
    array : 2D np array scaled by factor
    Created on Mon Jan 27 15:21:25 2014
    @author: damo_ma
    """
    xsize, ysize = array.shape
    if isinstance(factor, int):
        factor_x = factor
        factor_y = factor
    elif isinstance(factor, tuple):
        factor_x, factor_y = factor[0], factor[1]
    else:
        raise NameError('Factor must be a tuple (x,y) or an integer')
    if xsize % factor_x != 0 or ysize % factor_y != 0:
        raise NameError('Factors must be integer multiples of array shape')
    new_xsize, new_ysize = xsize // factor_x, ysize // factor_y
    new_array = np.empty([new_xsize, new_ysize])
    new_array[:] = np.nan  # this saves us an assignment in the loop below
    # submatrix indexes : this is the averaging box on the original matrix
    subrow, subcol = np.indices((factor_x, factor_y))
    # new matrix indexes
    row, col = np.indices((new_xsize, new_ysize))
    # some output for testing
    #for i, j, ind in zip(row.reshape(-1), col.reshape(-1), range(row.size)):
    #    print('----------------------------------------------')
    #    print('i: %i, j: %i, ind: %i ' % (i, j, ind))
    #    print('subrow+i*factor_x, subcol+j*factor_y :')
    #    print(i, '*', factor_x, '=', i*factor_x)
    #    print(j, '*', factor_y, '=', j*factor_y)
    #    print(subrow+i*factor_x, subcol+j*factor_y)
    #    print('---')
    #    print('array[subrow+i*factor_x, subcol+j*factor_y] : ')
    #    print(array[subrow+i*factor_x, subcol+j*factor_y])
    for i, j, ind in zip(row.reshape(-1), col.reshape(-1), range(row.size)):
        # extract the small sub_matrix from the input matrix subset
        # (fancy indexing returns a copy, not a view)
        sub_matrix = array[subrow+i*factor_x, subcol+j*factor_y]
        # modified from any(a) and all(a) to a.any() and a.all()
        # see https://stackoverflow.com/a/10063039/1435167
        if not (np.isnan(sub_matrix)).all():  # if we don't have all NaN
            if (np.isnan(sub_matrix)).any():  # if we have some NaN
                msub_matrix = np.ma.masked_array(sub_matrix, np.isnan(sub_matrix))
                (new_array.reshape(-1))[ind] = np.mean(msub_matrix)
            else:  # if we have no NaN at all
                (new_array.reshape(-1))[ind] = np.mean(sub_matrix)
        # the all-NaN case needs no assignment thanks to the
        # NaN initialization of new_array above
    return new_array
row, cols = 6, 4
a = 10 * np.random.random_sample((row, cols))
a[0:3, 0:2] = np.nan
a[0, 2] = np.nan
factor_x = 2
factor_y = 2
# note: scipy.misc.imresize was deprecated in SciPy 1.0 and removed in 1.3
a_misc = misc.imresize(a, .5, interp='nearest', mode='F')
a_2d_nonan = resize_2d_nonan(a, (factor_x, factor_y))
print(a)
print()
print(a_misc)
print()
print(a_2d_nonan)
plt.subplot(131)
plt.imshow(a, interpolation='nearest')
plt.title('original')
plt.xticks(np.arange(a.shape[1]))
plt.yticks(np.arange(a.shape[0]))
plt.subplot(132)
plt.imshow(a_misc, interpolation='nearest')
plt.title('scipy.misc')
plt.xticks(np.arange(a_misc.shape[1]))
plt.yticks(np.arange(a_misc.shape[0]))
plt.subplot(133)
plt.imshow(a_2d_nonan, interpolation='nearest')
plt.title('my.func')
plt.xticks(np.arange(a_2d_nonan.shape[1]))
plt.yticks(np.arange(a_2d_nonan.shape[0]))
EDIT
I added some modifications to address @ChrisProsser's comment.
If I substitute the NaNs with some other value, let's say the average of the not-NaN pixels, it affects all the subsequent calculations: the difference between the resampled original array and the resampled array with the NaNs substituted shows that 2 pixels changed their values.
My goal is simply to skip all the NaN pixels.
# substitute NaN with the average value
ind_nonan, ind_nan = np.where(np.isnan(a) == False), np.where(np.isnan(a) == True)
a_substitute = np.copy(a)
a_substitute[ind_nan] = np.mean(a_substitute[ind_nonan])  # substitute the NaN with the average of the not-NaN
a_substitute_misc = misc.imresize(a_substitute, .5, interp='nearest', mode='F')
a_substitute_2d_nonan = resize_2d_nonan(a_substitute, (factor_x, factor_y))
print(a_2d_nonan - a_substitute_2d_nonan)
[[        nan -0.02296697]
 [ 0.23143208  0.        ]
 [ 0.          0.        ]]
2nd EDIT
To address Hooked's answer I put in some additional code. It is an interesting idea; sadly, it interpolates new values over pixels that should be "empty" (NaN), and for my small example it generates more NaN than good values.
X, Y = np.indices((row, cols))
X_new, Y_new = np.indices((row // factor_x, cols // factor_y))
from scipy.interpolate import CloughTocher2DInterpolator as intp
C = intp((X[ind_nonan], Y[ind_nonan]), a[ind_nonan])
a_interp = C(X_new, Y_new)
print(a)
print()
print(a_interp)
[[       nan        nan]
 [       nan        nan]
 [       nan 6.32826577]]
You are operating on small windows of the array. Instead of looping through the array to make the windows, the array can be efficiently restructured by manipulating its strides. The numpy library provides the as_strided() function to help with that. An example is provided in the SciPy CookBook Stride tricks for the Game of Life.
The following will use a generalized sliding window function, which I will include at the end.
Determine the shape of the new array:
rows, cols = a.shape
new_shape = rows // 2, cols // 2
Restructure the array into the windows you need, and create a boolean indexing array that marks the not-NaN entries:
# 2x2 windows of the original array
windows = sliding_window(a, (2,2))
# make a windowed boolean array for indexing
notNan = sliding_window(np.logical_not(np.isnan(a)), (2,2))
The new array can be made using a list comprehension or a generator expression.
# using a list comprehension
# make a list of the means of the windows, disregarding the Nan's
means = [window[index].mean() for window, index in zip(windows, notNan)]
new_array = np.array(means).reshape(new_shape)
# generator expression
# produces the means of the windows, disregarding the Nan's
means = (window[index].mean() for window, index in zip(windows, notNan))
new_array = np.fromiter(means, dtype = np.float32).reshape(new_shape)
The generator expression should conserve memory. (On Python 2, using itertools.izip() instead of zip() would also help if memory is a problem.) I just used the list comprehension for your solution.
Your function:
def resize_2d_nonan(array, factor):
    """
    Resize a 2D array by a different factor on the two axes, skipping NaN values.
    If a new pixel contains only NaN, it will be set to NaN
    Parameters
    ----------
    array : 2D np array
    factor : int or tuple. If int, the x and y factor will be the same
    Returns
    -------
    array : 2D np array scaled by factor
    Created on Mon Jan 27 15:21:25 2014
    @author: damo_ma
    """
    xsize, ysize = array.shape
    if isinstance(factor, int):
        factor_x = factor
        factor_y = factor
        window_size = factor, factor
    elif isinstance(factor, tuple):
        factor_x, factor_y = factor
        window_size = factor
    else:
        raise NameError('Factor must be a tuple (x,y) or an integer')
    if (xsize % factor_x or ysize % factor_y):
        raise NameError('Factors must be integer multiples of array shape')
    new_shape = xsize // factor_x, ysize // factor_y
    # non-overlapping windows of the original array
    windows = sliding_window(array, window_size)
    # windowed boolean array for indexing
    notNan = sliding_window(np.logical_not(np.isnan(array)), window_size)
    # list of the means of the windows, disregarding the NaN's
    means = [window[index].mean() for window, index in zip(windows, notNan)]
    # new array
    new_array = np.array(means).reshape(new_shape)
    return new_array
I haven't done any time comparisons with your original function, but it should be faster.
Many solutions I've seen here on SO vectorize the operations to increase speed/efficiency - I don't quite have a handle on that and don't know if it can be applied to your problem. Searching SO for window, array, moving average, vectorize, and numpy should produce similar questions and answers for reference.
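One more vectorized alternative (my own sketch, not part of this answer): when the window tiles the array exactly, the reshape trick from the first question combined with np.nanmean (NumPy >= 1.8) does the NaN-skipping block average with no Python-level loop; all-NaN blocks come out as NaN (with a RuntimeWarning):
import numpy as np

def resize_nonan_reshape(a, fx, fy):
    # block-average ignoring NaNs; assumes fx and fy divide a.shape exactly
    r, c = a.shape
    return np.nanmean(a.reshape(r // fx, fx, c // fy, fy), axis=(1, 3))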
sliding_window() see attribution below:
import numpy as np
from numpy.lib.stride_tricks import as_strided as ast

def norm_shape(shape):
    '''
    Normalize numpy array shapes so they're always expressed as a tuple,
    even for one-dimensional shapes.
    Parameters
        shape - an int, or a tuple of ints
    Returns
        a shape tuple
    '''
    try:
        i = int(shape)
        return (i,)
    except TypeError:
        # shape was not a number
        pass
    try:
        t = tuple(shape)
        return t
    except TypeError:
        # shape was not iterable
        pass
    raise TypeError('shape must be an int, or a tuple of ints')

def sliding_window(a, ws, ss=None, flatten=True):
    '''
    Return a sliding window over a in any number of dimensions
    Parameters:
        a  - an n-dimensional numpy array
        ws - an int (a is 1D) or tuple (a is 2D or greater) representing the size
             of each dimension of the window
        ss - an int (a is 1D) or tuple (a is 2D or greater) representing the
             amount to slide the window in each dimension. If not specified, it
             defaults to ws.
        flatten - if True, all slices are flattened, otherwise, there is an
             extra dimension for each dimension of the input.
    Returns
        an array containing each n-dimensional window from a
    '''
    if ss is None:
        # ss was not provided. the windows will not overlap in any direction.
        ss = ws
    ws = norm_shape(ws)
    ss = norm_shape(ss)
    # convert ws, ss, and a.shape to numpy arrays so that we can do math in every
    # dimension at once.
    ws = np.array(ws)
    ss = np.array(ss)
    shape = np.array(a.shape)
    # ensure that ws, ss, and a.shape all have the same number of dimensions
    ls = [len(shape), len(ws), len(ss)]
    if 1 != len(set(ls)):
        raise ValueError(
            'a.shape, ws and ss must all have the same length. They were %s' % str(ls))
    # ensure that ws is smaller than a in every dimension
    if np.any(ws > shape):
        raise ValueError(
            'ws cannot be larger than a in any dimension. '
            'a.shape was %s and ws was %s' % (str(a.shape), str(ws)))
    # how many slices will there be in each dimension?
    newshape = norm_shape(((shape - ws) // ss) + 1)
    # the shape of the strided array will be the number of slices in each dimension
    # plus the shape of the window (tuple addition)
    newshape += norm_shape(ws)
    # the strides tuple will be the array's strides multiplied by step size, plus
    # the array's strides (tuple addition)
    newstrides = norm_shape(np.array(a.strides) * ss) + a.strides
    strided = ast(a, shape=newshape, strides=newstrides)
    if not flatten:
        return strided
    # Collapse strided so that it has one more dimension than the window. I.e.,
    # the new array is a flat list of slices.
    meat = len(ws) if ws.shape else 0
    firstdim = (np.prod(newshape[:-meat]),) if ws.shape else ()  # np.prod; np.product was removed in NumPy 2.0
    dim = firstdim + (newshape[-meat:])
    # remove any dimensions with size 1 (tuple(...) is needed on Python 3)
    dim = tuple(filter(lambda i: i != 1, dim))
    return strided.reshape(dim)
sliding_window() attribution
I originally found this on a blog page that is now a broken link:
Efficient Overlapping Windows with Numpy - http://www.johnvinyard.com/blog/?p=268
With a little searching it looks like it now resides in the Zounds github repository. Thanks John Vinyard.
Note this post is pretty old, and there are a lot of SO Q&A's regarding sliding windows, rolling windows, and, for images, patch extraction. There are a lot of one-offs using numpy's as_strided, but this function still seems to be the only one that handles n-d windowing. scikit-learn's sklearn.feature_extraction.image module seems to be often cited for extracting or viewing image patches.
Interpolate the points using scipy.interpolate on a different grid. Below I've shown a cubic interpolator, which is slower but probably more accurate. You'll notice that the corner pixels are missing with this function; you could then use a linear or nearest-neighbor interpolation to handle those last values.
import numpy as np
import pylab as plt

# Test data
row = np.linspace(-3, 3, 50)
X, Y = np.meshgrid(row, row)
Z = np.sqrt(X**2 + Y**2) + np.cos(Y)

# Make some dead pixels, favor an edge
dead = np.random.random(Z.shape)
dead = (dead * X > .7)
Z[dead] = np.nan

from scipy.interpolate import CloughTocher2DInterpolator as intp
C = intp((X[~dead], Y[~dead]), Z[~dead])
new_row = np.linspace(-3, 3, 25)
xi, yi = np.meshgrid(new_row, new_row)
zi = C(xi, yi)

plt.subplot(121)
plt.title("Original signal 50x50")
plt.imshow(Z, interpolation='nearest')
plt.subplot(122)
plt.title("Interpolated signal 25x25")
plt.imshow(zi, interpolation='nearest')
plt.show()
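Following up on the corner-pixel caveat: a sketch (mine, not Hooked's) that patches the remaining NaNs in zi with nearest-neighbour interpolation via scipy.interpolate.NearestNDInterpolator:
from scipy.interpolate import NearestNDInterpolator

holes = np.isnan(zi)
if holes.any():
    # fit on the valid interpolated pixels, evaluate on the NaN ones
    known = np.column_stack((xi[~holes], yi[~holes]))
    nn = NearestNDInterpolator(known, zi[~holes])
    zi[holes] = nn(np.column_stack((xi[holes], yi[holes])))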

sparse 3d matrix/array in Python?

In scipy, we can construct a sparse matrix using scipy.sparse.lil_matrix() etc. But the matrix is in 2d.
I am wondering if there is an existing data structure for sparse 3d matrix / array (tensor) in Python?
p.s. I have lots of sparse data in 3d and need a tensor to store / perform multiplications. Any suggestions on how to implement such a tensor if there's no existing data structure?
Happy to suggest a (possibly obvious) implementation of this, which could be made in pure Python or C/Cython if you've got time and space for new dependencies, and need it to be faster.
A sparse matrix in N dimensions can assume most elements are empty, so we use a dictionary keyed on tuples:
class NDSparseMatrix:
    def __init__(self):
        self.elements = {}

    def addValue(self, tuple, value):
        self.elements[tuple] = value

    def readValue(self, tuple):
        try:
            value = self.elements[tuple]
        except KeyError:
            # could also be 0.0 if using floats...
            value = 0
        return value
and you would use it like so:
sparse = NDSparseMatrix()
sparse.addValue((1,2,3), 15.7)
should_be_zero = sparse.readValue((1,5,13))
You could make this implementation more robust by verifying that the input is in fact a tuple, and that it contains only integers, but that will just slow things down so I wouldn't worry unless you're releasing your code to the world later.
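For completeness, a minimal sketch of that validation (my addition, not the original author's):
def addValue(self, key, value):
    # hypothetical stricter variant: require a tuple of ints as the key
    if not (isinstance(key, tuple) and all(isinstance(i, int) for i in key)):
        raise TypeError('key must be a tuple of ints')
    self.elements[key] = value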
EDIT - a Cython implementation of the matrix multiplication problem, assuming the other tensor is an N-dimensional NumPy array (numpy.ndarray), might look like this:
#cython: boundscheck=False
#cython: wraparound=False
import numpy as np
cimport numpy as np

def sparse_mult(object sparse, np.ndarray[double, ndim=3] u):
    cdef unsigned int i, j, k
    out = np.ndarray(shape=(u.shape[0], u.shape[1], u.shape[2]), dtype=np.double)
    for i in range(1, u.shape[0]-1):
        for j in range(1, u.shape[1]-1):
            for k in range(1, u.shape[2]-1):
                # note, here you must define your own rank-3 multiplication rule, which
                # is, in general, nontrivial, especially if LxMxN tensor...
                # loop over a dummy variable (or two) and perform some summation:
                out[i,j,k] = u[i,j,k] * sparse.readValue((i,j,k))
    return out
Although you will always need to hand roll this for the problem at hand, because (as mentioned in code comment) you'll need to define which indices you're summing over, and be careful about the array lengths or things won't work!
EDIT 2 - if the other matrix is also sparse, then you don't need to do the three-way looping:
def sparse_mult(sparse, other_sparse):
    out = NDSparseMatrix()
    for key, value in sparse.elements.items():
        i, j, k = key
        # note, here you must define your own rank-3 multiplication rule, which
        # is, in general, nontrivial, especially if LxMxN tensor...
        # loop over a dummy variable (or two) and perform some summation
        # (example indices shown):
        out.addValue(key, out.readValue(key) +
                     other_sparse.readValue((i, j, k+1)) * sparse.readValue((i-3, j, k)))
    return out
My suggestion for a C implementation would be to use a simple struct to hold the indices and the value:
typedef struct {
    int index[3];
    float value;
} entry_t;
you'll then need some functions to allocate and maintain a dynamic array of such structs, and search them as fast as you need; but you should test the Python implementation in place for performance before worrying about that stuff.
An alternative answer as of 2017 is the sparse package. According to the package itself it implements sparse multidimensional arrays on top of NumPy and scipy.sparse by generalizing the scipy.sparse.coo_matrix layout.
Here's an example taken from the docs:
import numpy as np
n = 1000
ndims = 4
nnz = 1000000
coords = np.random.randint(0, n - 1, size=(ndims, nnz))
data = np.random.random(nnz)
import sparse
x = sparse.COO(coords, data, shape=((n,) * ndims))
x
# <COO: shape=(1000, 1000, 1000, 1000), dtype=float64, nnz=1000000>
x.nbytes
# 16000000
y = sparse.tensordot(x, x, axes=((3, 0), (1, 2)))
y
# <COO: shape=(1000, 1000, 1000, 1000), dtype=float64, nnz=1001588>
Have a look at sparray - sparse n-dimensional arrays in Python (by Jan Erik Solem). Also available on github.
Nicer than writing everything new from scratch may be to use scipy's sparse module as far as possible. This may lead to (much) better performance. I had a somewhat similar problem, but I only had to access the data efficiently, not perform any operations on them. Furthermore, my data were only sparse in two out of three dimensions.
I have written a class that solves my problem and could (as far as I can tell) easily be extended to satisfy the OP's needs. It may still hold some potential for improvement, though.
import scipy.sparse as sp
import numpy as np

class Sparse3D():
    """
    Class to store and access 3 dimensional sparse matrices efficiently
    """
    def __init__(self, *sparseMatrices):
        """
        Constructor
        Takes a stack of sparse 2D matrices with the same dimensions
        """
        self.data = sp.vstack(sparseMatrices, "dok")
        self.shape = (len(sparseMatrices), *sparseMatrices[0].shape)
        self._dim1_jump = np.arange(0, self.shape[1]*self.shape[0], self.shape[1])
        self._dim1 = np.arange(self.shape[0])
        self._dim2 = np.arange(self.shape[1])

    def __getitem__(self, pos):
        if not type(pos) == tuple:
            if not hasattr(pos, "__iter__") and not type(pos) == slice:
                return self.data[self._dim1_jump[pos] + self._dim2]
            else:
                return Sparse3D(*(self[self._dim1[i]] for i in self._dim1[pos]))
        elif len(pos) > 3:
            raise IndexError("too many indices for array")
        else:
            if (not hasattr(pos[0], "__iter__") and not type(pos[0]) == slice or
                    not hasattr(pos[1], "__iter__") and not type(pos[1]) == slice):
                if len(pos) == 2:
                    result = self.data[self._dim1_jump[pos[0]] + self._dim2[pos[1]]]
                else:
                    result = self.data[self._dim1_jump[pos[0]] + self._dim2[pos[1]], pos[2]].T
                    if hasattr(pos[2], "__iter__") or type(pos[2]) == slice:
                        result = result.T
                return result
            else:
                if len(pos) == 2:
                    return Sparse3D(*(self[i, self._dim2[pos[1]]] for i in self._dim1[pos[0]]))
                else:
                    if not hasattr(pos[2], "__iter__") and not type(pos[2]) == slice:
                        return sp.vstack([self[self._dim1[pos[0]], i, pos[2]]
                                          for i in self._dim2[pos[1]]]).T
                    else:
                        return Sparse3D(*(self[i, self._dim2[pos[1]], pos[2]]
                                          for i in self._dim1[pos[0]]))

    def toarray(self):
        return np.array([self[i].toarray() for i in range(self.shape[0])])
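A quick smoke test of the class as posted (my own sketch; it exercises the class only through the methods defined above):
mats = [sp.identity(4, format="dok") for _ in range(3)]
s3d = Sparse3D(*mats)
print(s3d.shape)       # (3, 4, 4)
dense = s3d.toarray()  # dense (3, 4, 4) copy via the class's own method
print(np.allclose(dense[1], np.eye(4)))  # True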
I also needed a 3D sparse matrix for solving the 2D heat equation (the 2 spatial dimensions are dense, but the time dimension is diagonal plus and minus one off-diagonal). I found this link to guide me. The trick is to create an array Number that maps the 2D sparse matrix to a 1D linear vector. Then build the 2D matrix by building a list of data and indices. Later the Number matrix is used to arrange the answer back into a 2D array.
[edit] It occurred to me after my initial post that this could be handled better with the .reshape(-1) method. After research, reshape is better than flatten because it returns a new view into the original array, whereas flatten copies the array. The code uses the original Number array. I will try to update later. [end edit]
I test it by creating a 1D random vector and solving for a second vector. Then I multiply it by the sparse 2D matrix and get the same result.
Note: I repeat this many times in a loop with exactly the same matrix M, so you might think it would be more efficient to solve for inverse(M). But the inverse of M is not sparse, so I think (though I have not tested it) that using spsolve is a better solution. Which is "best" probably depends on how large your matrix is.
#!/usr/bin/env python3
# testSparse.py
# profhuster
import numpy as np
import scipy.sparse as sM
import scipy.sparse.linalg as spLA
from array import array
from numpy.random import rand, seed

seed(101520)
nX = 4
nY = 3
r = 0.1

def loadSpNodes(nX, nY, r):
    # Matrix to map 2D array of nodes to 1D array
    Number = np.zeros((nY, nX), dtype=int)
    # Map each element of the 2D array to a 1D array
    iM = 0
    for i in range(nX):
        for j in range(nY):
            Number[j, i] = iM
            iM += 1
    print(f"Number = \n{Number}")
    # Now create a sparse matrix of the "stencil"
    diagVal = 1 + 4 * r
    offVal = -r
    d_list = array('f')
    i_list = array('i')
    j_list = array('i')
    # Loop over the 2D nodes matrix
    for i in range(nX):
        for j in range(nY):
            # Recall the 1D number
            iSparse = Number[j, i]
            # populate the diagonal
            d_list.append(diagVal)
            i_list.append(iSparse)
            j_list.append(iSparse)
            # Now, for each rectangular neighbor, add the
            # off-diagonal entries
            # Use a try-except, so boundary nodes work
            for (jj, ii) in ((j+1, i), (j-1, i), (j, i+1), (j, i-1)):
                try:
                    iNeigh = Number[jj, ii]
                    if jj >= 0 and ii >= 0:
                        d_list.append(offVal)
                        i_list.append(iSparse)
                        j_list.append(iNeigh)
                except IndexError:
                    pass
    spNodes = sM.coo_matrix((d_list, (i_list, j_list)), shape=(nX*nY, nX*nY))
    return spNodes

MySpNodes = loadSpNodes(nX, nY, r)
print(f"Sparse Nodes = \n{MySpNodes.toarray()}")
b = rand(nX*nY)
print(f"b=\n{b}")
x = spLA.spsolve(MySpNodes.tocsr(), b)
print(f"x=\n{x}")
print(f"Multiply back together=\n{x * MySpNodes}")
I needed a 3D lookup table for x, y, z and came up with this solution.
Why not fold two of the dimensions into one? i.e. use x and 'yz' as the matrix dimensions.
e.g. if x has 80 potential members, y has 100 and z has 20,
you make the sparse matrix 80 by 2000 (i.e. yz = 100x20):
- the x dimension is as usual
- in the yz dimension, the first 100 elements represent z=0, y=0 to 99; the second 100 represent z=1, y=0 to 99; etc.
So a given element located at (x,y,z) would be in the sparse matrix at (x, z*100 + y).
If you need to use negative numbers, design an arbitrary offset into your matrix translation. The solution could be expanded to n dimensions if necessary.
from scipy import sparse
m = sparse.lil_matrix((100, 2000), dtype=float)

def add_element(xyz, element):
    # Python 3 note: tuple parameters like def f((x,y,z), v) are Python 2 only,
    # so unpack inside the function instead
    x, y, z = xyz
    m[x, y + z*100] = float(element)

def get_element(x, y, z):
    return m[x, y + z*100]

add_element([3, 2, 4], 2.2)
add_element([20, 15, 7], 1.2)
print(get_element(0, 0, 0))
print(get_element(3, 2, 4))
print(get_element(20, 15, 7))
print("This is m sparse:")
print(m)
====================
OUTPUT:
0.0
2.2
1.2
This is m sparse:
  (3, 402)      2.2
  (20, 715)     1.2
====================
