How to create multidimensional array with numpy.mgrid - python

I wonder how to create a grid (multidimensional array) with numpy's mgrid for an unknown number of dimensions D, where each dimension has a lower bound, an upper bound, and a number of bins:
n_bins = numpy.array([100 for d in numpy.arange(D)])
bounds = numpy.array([(0., 1.) for d in numpy.arange(D)])
grid = numpy.mgrid[[numpy.linspace(bounds[d][0], bounds[d][1], n_bins[d]) for d in numpy.arange(D)]]
I guess the above doesn't work, since mgrid creates an array of indices, not values. But how can it be used to create an array of values?
Thanks
Aso.agile

You might use
np.mgrid[[slice(row[0], row[1], n*1j) for row, n in zip(bounds, n_bins)]]
import numpy as np
D = 3
n_bins = 100*np.ones(D)
bounds = np.repeat([(0,1)], D, axis = 0)
result = np.mgrid[[slice(row[0], row[1], n*1j) for row, n in zip(bounds, n_bins)]]
ans = np.mgrid[0:1:100j,0:1:100j,0:1:100j]
assert np.allclose(result, ans)
Note that np.ogrid can be used in many places where np.mgrid is used, and it requires less memory because the arrays are smaller.
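For example, a minimal sketch of the D = 3 case with np.ogrid; broadcasting the open arrays reproduces the dense grid on demand:
import numpy as np

x, y, z = np.ogrid[0:1:100j, 0:1:100j, 0:1:100j]  # shapes (100,1,1), (1,100,1), (1,1,100)
f = x + y + z  # broadcasts to the full (100, 100, 100) grid
assert f.shape == (100, 100, 100)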

incrementing a multidimensional numpy array (python) with products generated from a set of vectors corresponding to axes of the array

A is a k-dimensional numpy array of floats (k could be pretty big, e.g. up to 10).
I need to implement an update to A by incrementing each of the values (as described below). I'm wondering if there is a numpy-style way that would be fast.
Let L_i be the length of axis i
An update to this array is generated in two steps, as follows:
For each axis of A a corresponding vector G is generated.
For example, corresponding to axis i a vector G_i of length L_i is generated (from data).
Update A at all positions by calculating an increment from the G vectors for each position in A
To do this at any particular position, let p be an array of k indices, corresponding to a position in A. Then A at p is incremented by a value calculated as the product:
Product(G_i[p[i]], for i from 0 to k-1)
A full update to A involves doing this operation for all locations in A (i.e. all possible values of p)
This operation would be very slow if done position by position with loops.
Is there a numpy-style way to do this that would be fast?
edit
## this is for three dimensions: the final matrix at position i,j,k has
## the product c[i]*b[j]*a[k]; for an arbitrary number of dimensions it
## needs a loop in a loop and will be slow
import numpy as np

a = np.array([1,2])
b = np.array([3,4,5])
c = np.array([6,7,8,9])

ab = []
for bval in b:
    ab.append(bval*a)
ab = np.stack(ab)

abc = []
for cval in c:
    abc.append(cval*ab)
abc = np.stack(abc)
As a function:
def loopfunc(arraylist):
    ndim = len(arraylist)
    m = arraylist[0]
    for i in range(1, ndim):
        ml = []
        for val in arraylist[i]:
            ml.append(val * m)
        m = np.stack(ml)
    return m
This is a wacky problem, but I like it.
If I understand what you need from your example, you can accomplish this with some reshaping trickery and NumPy's usual broadcasting rules. The idea is to reshape each array so it has the right number of dimensions, then just directly multiply.
Here's a function that implements this.
from functools import reduce
import operator

import numpy as np
import scipy.linalg

def wacky_outer_product(*arrays):
    assert len(arrays) >= 2
    assert all(arr.ndim == 1 for arr in arrays)
    ndim = len(arrays)
    # each row of this Toeplitz matrix is a shape like (-1, 1, ..., 1),
    # placing the -1 (the array's own length) on a different axis
    shapes = scipy.linalg.toeplitz((-1,) + (1,) * (ndim - 1))
    reshaped = (arr.reshape(new_shape) for arr, new_shape in zip(arrays, shapes))
    return reduce(operator.mul, reshaped).T
Testing this on your example arrays, we have:
>>> foo = wacky_outer_product(a, b, c)
>>> np.allclose(foo, abc)
True
Edit
Ok, the above function is fun, but the below is probably much better. No transposing, clearer, and much smaller:
from functools import reduce
import operator

import numpy as np

def wacky_outer_product(*arrays):
    return reduce(operator.mul, np.ix_(*reversed(arrays)))
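As a quick sanity check, the compact version can be compared against the loop-built abc from the question; an np.einsum one-liner (an equivalent alternative, not from the original answer) gives the same tensor:
a = np.array([1,2])
b = np.array([3,4,5])
c = np.array([6,7,8,9])
abc = loopfunc([a, b, c])  # the reference result from the question
assert np.array_equal(wacky_outer_product(a, b, c), abc)
assert np.array_equal(np.einsum('i,j,k->ijk', c, b, a), abc)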

How to mirror a NxNx3 numpy array diagonally

How can I transpose a 3D array in a similar fashion to a 2D array, except that the entries at the lowest level are arrays of three instead of scalar values?
This is what I mean:
M = [[[0,0,0], [1,1,1], [2,2,2]],
     [[0,0,0], [0,0,0], [3,3,3]],
     [[0,0,0], [0,0,0], [0,0,0]]]
N = some_operation(M)
N = [[[0,0,0], [0,0,0], [0,0,0]],
     [[1,1,1], [0,0,0], [0,0,0]],
     [[2,2,2], [3,3,3], [0,0,0]]]
I have an example in python code that shows what I mean as well:
import numpy as np
M = np.array([[[0,0,0],[1,1,1],[2,2,2]],[[0,0,0],[0,0,0],[3,3,3]],[[0,0,0],[0,0,0],[0,0,0]]])
N = np.array([[[0,0,0],[0,0,0],[0,0,0]],[[1,1,1],[0,0,0],[0,0,0]],[[2,2,2],[3,3,3],[0,0,0]]])
print(M)
print('\n\n')
print(N)
The np.transpose() function doesn't seem to be adaptable for my case.
Thanks in advance.
Simply permute axes with np.transpose -
N = M.transpose(1,0,2)
Or with np.moveaxis -
N = np.moveaxis(M,0,1)
With np.rollaxis -
N = np.rollaxis(M,1,0)
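All three produce the same result; a quick check against the expected N from the question:
assert np.array_equal(M.transpose(1,0,2), N)
assert np.array_equal(np.moveaxis(M,0,1), N)
assert np.array_equal(np.rollaxis(M,1,0), N)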

Adding to Numpy Array by-index, where there are duplicate index values

Imagine we have an array called count, size N, initialized to zero:
import numpy as np
N = 100
count = np.zeros(N) # Shape (N,)
And a set of indexes, which may contain duplicates, and a set of boolean values (or whatever kind of values really):
IDX = np.random.choice(N,N,replace=True) # Shape (N,)
mask = np.random.rand(N)>.5 # Shape (N,)
I want to count, by location, the number of True's.
count[IDX] += mask
But with this, the maximum of count will be equal to 1.
This is because, I assume, duplicate indices can't be accumulated in a buffered operation like this; either the first or the last assignment wins, I suspect.
Is there any vectorized equivalent which can perform this accurately?
Thanks!
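One vectorized possibility is np.add.at, which performs unbuffered in-place addition so repeated indices each contribute; np.bincount with weights is an equivalent alternative (a sketch using the arrays defined above):
np.add.at(count, IDX, mask)  # duplicates accumulate instead of overwriting
# or, equivalently:
count = np.bincount(IDX, weights=mask, minlength=N)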

resize a 2D numpy array excluding NaN

I'm trying to resize a 2D numpy array by a given factor, obtaining a smaller array in output.
The array is read from an image file and some of the values should be NaN (Not a Number, np.nan from numpy): it is the result of remote sensing measurements from satellite, and simply some pixels weren't measured.
The suitable package I found for this is scipy.misc.imresize, but each pixel in the output array whose source window contains a NaN is set to NaN, even if there is some valid data among the original pixels interpolated together.
My solution is appended here; what I've done is essentially:
create a new array based on the original array shape and the desired reduction factor
create an index array to address all the pixels of the original array to be averaged for each pixel in the new one
cycle through the new array's pixels and average all the non-NaN pixels to obtain the new pixel value; if there are only NaNs, the output will be NaN.
I'm planning to add a keyword to choose between different outputs (average, median, standard deviation of the input pixels, and so on).
It is working as expected, but on a ~1Mpx image it takes around 3 seconds. Due to my lack of experience in python, I'm searching for improvements.
Does anyone have suggestions on how to do it better and more efficiently?
Does anyone know a library that already implements all this?
Thanks.
Here is an example output for random pixel input, generated with the code below:
import numpy as np
import pylab as plt
from scipy import misc

def resize_2d_nonan(array, factor):
    """
    Resize a 2D array by different factors on the two axes, skipping NaN values.
    If a new pixel contains only NaN, it will be set to NaN

    Parameters
    ----------
    array : 2D np array
    factor : int or tuple. If int, the x and y factors will be the same

    Returns
    -------
    array : 2D np array scaled by factor

    Created on Mon Jan 27 15:21:25 2014
    @author: damo_ma
    """
    xsize, ysize = array.shape

    if isinstance(factor, int):
        factor_x = factor
        factor_y = factor
    elif isinstance(factor, tuple):
        factor_x, factor_y = factor[0], factor[1]
    else:
        raise NameError('Factor must be a tuple (x,y) or an integer')

    if xsize % factor_x != 0 or ysize % factor_y != 0:
        raise NameError('Factors must be integer multiples of the array shape')

    new_xsize, new_ysize = xsize // factor_x, ysize // factor_y

    new_array = np.empty([new_xsize, new_ysize])
    new_array[:] = np.nan  # this saves us an assignment in the loop below

    # submatrix indices: the averaging box on the original matrix
    subrow, subcol = np.indices((factor_x, factor_y))

    # new matrix indices
    row, col = np.indices((new_xsize, new_ysize))

    # some output for testing
    # for i, j, ind in zip(row.reshape(-1), col.reshape(-1), range(row.size)):
    #     print('----------------------------------------------')
    #     print('i: %i, j: %i, ind: %i' % (i, j, ind))
    #     print('subrow+i*factor_x, subcol+j*factor_y:')
    #     print(subrow + i*factor_x, subcol + j*factor_y)
    #     print('array[subrow+i*factor_x, subcol+j*factor_y]:')
    #     print(array[subrow + i*factor_x, subcol + j*factor_y])

    for i, j, ind in zip(row.reshape(-1), col.reshape(-1), range(row.size)):
        # define the small sub_matrix as a view of the input matrix subset
        sub_matrix = array[subrow + i*factor_x, subcol + j*factor_y]

        # modified from any(a) and all(a) to a.any() and a.all()
        # see https://stackoverflow.com/a/10063039/1435167
        if not np.isnan(sub_matrix).all():  # if not all entries are NaN
            if np.isnan(sub_matrix).any():  # if some entries are NaN
                msub_matrix = np.ma.masked_array(sub_matrix, np.isnan(sub_matrix))
                new_array.reshape(-1)[ind] = np.mean(msub_matrix)
            else:  # if no entries are NaN
                new_array.reshape(-1)[ind] = np.mean(sub_matrix)
        # the all-NaN case is covered by the initial NaN fill of new_array

    return new_array
row, cols = 6, 4
a = 10 * np.random.random_sample((row, cols))
a[0:3, 0:2] = np.nan
a[0, 2] = np.nan

factor_x = 2
factor_y = 2

a_misc = misc.imresize(a, .5, interp='nearest', mode='F')
a_2d_nonan = resize_2d_nonan(a, (factor_x, factor_y))

print(a)
print()
print(a_misc)
print()
print(a_2d_nonan)
plt.subplot(131)
plt.imshow(a, interpolation='nearest')
plt.title('original')
plt.xticks(np.arange(a.shape[1]))
plt.yticks(np.arange(a.shape[0]))

plt.subplot(132)
plt.imshow(a_misc, interpolation='nearest')
plt.title('scipy.misc')
plt.xticks(np.arange(a_misc.shape[1]))
plt.yticks(np.arange(a_misc.shape[0]))

plt.subplot(133)
plt.imshow(a_2d_nonan, interpolation='nearest')
plt.title('my.func')
plt.xticks(np.arange(a_2d_nonan.shape[1]))
plt.yticks(np.arange(a_2d_nonan.shape[0]))
EDIT
I added some modifications to address ChrisProsser's comment.
If I substitute the NaNs with some other value, say the average of the non-NaN pixels, it affects all the subsequent calculations: the difference between the resampled original array and the resampled array with NaNs substituted shows that 2 pixels changed their values.
My goal is simply to skip all the NaN pixels.
# substitute NaN with the average value
ind_nonan, ind_nan = np.where(~np.isnan(a)), np.where(np.isnan(a))
a_substitute = np.copy(a)
a_substitute[ind_nan] = np.mean(a_substitute[ind_nonan])  # substitute the NaN with the average of the non-NaN
a_substitute_misc = misc.imresize(a_substitute, .5, interp='nearest', mode='F')
a_substitute_2d_nonan = resize_2d_nonan(a_substitute, (factor_x, factor_y))
print(a_2d_nonan - a_substitute_2d_nonan)

[[        nan -0.02296697]
 [ 0.23143208  0.        ]
 [ 0.          0.        ]]
2nd EDIT
To address Hooked's answer I add some additional code. It is an interesting idea; sadly, it interpolates new values over pixels that should be "empty" (NaN), and for my small example it generates more NaNs than good values.
X, Y = np.indices((row, cols))
X_new, Y_new = np.indices((row // factor_x, cols // factor_y))

from scipy.interpolate import CloughTocher2DInterpolator as intp
C = intp((X[ind_nonan], Y[ind_nonan]), a[ind_nonan])

a_interp = C(X_new, Y_new)

print(a)
print()
print(a_interp)

[[        nan         nan]
 [        nan         nan]
 [        nan  6.32826577]]
You are operating on small windows of the array. Instead of looping through the array to make the windows, the array can be efficiently restructured by manipulating its strides. The numpy library provides the as_strided() function to help with that. An example is provided in the SciPy CookBook Stride tricks for the Game of Life.
The following will use a generalized sliding window function, which I will include at the end.
Determine the shape of the new array:
rows, cols = a.shape
new_shape = rows // 2, cols // 2
Restructure the array into the windows you need, and create an indexing array identifying NaNs:
# 2x2 windows of the original array
windows = sliding_window(a, (2,2))
# make a windowed boolean array for indexing
notNan = sliding_window(np.logical_not(np.isnan(a)), (2,2))
The new array can be made using a list comprehension or a generator expression.
# using a list comprehension
# make a list of the means of the windows, disregarding the NaNs
means = [window[index].mean() for window, index in zip(windows, notNan)]
new_array = np.array(means).reshape(new_shape)

# generator expression
# produces the means of the windows, disregarding the NaNs
means = (window[index].mean() for window, index in zip(windows, notNan))
new_array = np.fromiter(means, dtype=np.float32).reshape(new_shape)
The generator expression should conserve memory. Using itertools.izip() instead of zip() should also help if memory is a problem (on Python 2). I just used the list comprehension for your solution.
Your function:
def resize_2d_nonan(array, factor):
    """
    Resize a 2D array by different factors on the two axes, skipping NaN values.
    If a new pixel contains only NaN, it will be set to NaN

    Parameters
    ----------
    array : 2D np array
    factor : int or tuple. If int, the x and y factors will be the same

    Returns
    -------
    array : 2D np array scaled by factor

    Created on Mon Jan 27 15:21:25 2014
    @author: damo_ma
    """
    xsize, ysize = array.shape

    if isinstance(factor, int):
        factor_x = factor
        factor_y = factor
        window_size = factor, factor
    elif isinstance(factor, tuple):
        factor_x, factor_y = factor
        window_size = factor
    else:
        raise NameError('Factor must be a tuple (x,y) or an integer')

    if xsize % factor_x or ysize % factor_y:
        raise NameError('Factors must be integer multiples of the array shape')

    new_shape = xsize // factor_x, ysize // factor_y

    # non-overlapping windows of the original array
    windows = sliding_window(array, window_size)

    # windowed boolean array for indexing
    notNan = sliding_window(np.logical_not(np.isnan(array)), window_size)

    # list of the means of the windows, disregarding the NaNs
    means = [window[index].mean() for window, index in zip(windows, notNan)]

    # new array
    new_array = np.array(means).reshape(new_shape)

    return new_array
I haven't done any time comparisons with your original function, but it should be faster.
Many solutions I've seen here on SO vectorize the operations to increase speed/efficiency - I don't quite have a handle on that and don't know if it can be applied to your problem. Searching SO for window, array, moving average, vectorize, and numpy should produce similar questions and answers for reference.
sliding_window() (see attribution below):
import numpy as np
from numpy.lib.stride_tricks import as_strided as ast

def norm_shape(shape):
    '''
    Normalize numpy array shapes so they're always expressed as a tuple,
    even for one-dimensional shapes.

    Parameters
        shape - an int, or a tuple of ints

    Returns
        a shape tuple
    '''
    try:
        i = int(shape)
        return (i,)
    except TypeError:
        # shape was not a number
        pass

    try:
        t = tuple(shape)
        return t
    except TypeError:
        # shape was not iterable
        pass

    raise TypeError('shape must be an int, or a tuple of ints')

def sliding_window(a, ws, ss=None, flatten=True):
    '''
    Return a sliding window over a in any number of dimensions

    Parameters:
        a  - an n-dimensional numpy array
        ws - an int (a is 1D) or tuple (a is 2D or greater) representing the size
             of each dimension of the window
        ss - an int (a is 1D) or tuple (a is 2D or greater) representing the
             amount to slide the window in each dimension. If not specified, it
             defaults to ws.
        flatten - if True, all slices are flattened; otherwise, there is an
             extra dimension for each dimension of the input.

    Returns
        an array containing each n-dimensional window from a
    '''
    if None is ss:
        # ss was not provided: the windows will not overlap in any direction.
        ss = ws
    ws = norm_shape(ws)
    ss = norm_shape(ss)

    # convert ws, ss, and a.shape to numpy arrays so that we can do math in
    # every dimension at once.
    ws = np.array(ws)
    ss = np.array(ss)
    shape = np.array(a.shape)

    # ensure that ws, ss, and a.shape all have the same number of dimensions
    ls = [len(shape), len(ws), len(ss)]
    if 1 != len(set(ls)):
        raise ValueError(
            'a.shape, ws and ss must all have the same length. They were %s' % str(ls))

    # ensure that ws is smaller than a in every dimension
    if np.any(ws > shape):
        raise ValueError(
            'ws cannot be larger than a in any dimension. '
            'a.shape was %s and ws was %s' % (str(a.shape), str(ws)))

    # how many slices will there be in each dimension?
    newshape = norm_shape(((shape - ws) // ss) + 1)

    # the shape of the strided array will be the number of slices in each
    # dimension plus the shape of the window (tuple addition)
    newshape += norm_shape(ws)

    # the strides tuple will be the array's strides multiplied by step size,
    # plus the array's strides (tuple addition)
    newstrides = norm_shape(np.array(a.strides) * ss) + a.strides
    strided = ast(a, shape=newshape, strides=newstrides)
    if not flatten:
        return strided

    # Collapse strided so that it has one more dimension than the window, i.e.
    # the new array is a flat list of slices.
    meat = len(ws) if ws.shape else 0
    firstdim = (np.prod(newshape[:-meat]),) if ws.shape else ()
    dim = firstdim + (newshape[-meat:])
    # remove any dimensions with size 1
    dim = tuple(i for i in dim if i != 1)
    return strided.reshape(dim)
sliding_window() attribution
I originally found this on a blog page that is now a broken link:
Efficient Overlapping Windows with Numpy - http://www.johnvinyard.com/blog/?p=268
With a little searching it looks like it now resides in the Zounds github repository. Thanks, John Vinyard.
Note this post is pretty old, and there are a lot of SO Q&As regarding sliding windows, rolling windows, and, for images, patch extraction. There are a lot of one-offs using numpy's as_strided, but this function still seems to be the only one that handles n-d windowing. scikit-learn's sklearn.feature_extraction.image module seems to be the most often cited for extracting or viewing image patches.
Interpolate the points using scipy.interpolate on a different grid. Below I've used a cubic interpolator, which is slower but probably more accurate. You'll notice that the corner pixels are missing with this function; you could then use a linear or nearest-neighbor interpolation to handle those last values.
import numpy as np
import pylab as plt

# Test data
row = np.linspace(-3, 3, 50)
X, Y = np.meshgrid(row, row)
Z = np.sqrt(X**2 + Y**2) + np.cos(Y)

# Make some dead pixels, favoring an edge
dead = np.random.random(Z.shape)
dead = (dead * X > .7)
Z[dead] = np.nan

from scipy.interpolate import CloughTocher2DInterpolator as intp
C = intp((X[~dead], Y[~dead]), Z[~dead])

new_row = np.linspace(-3, 3, 25)
xi, yi = np.meshgrid(new_row, new_row)
zi = C(xi, yi)

plt.subplot(121)
plt.title("Original signal 50x50")
plt.imshow(Z, interpolation='nearest')

plt.subplot(122)
plt.title("Interpolated signal 25x25")
plt.imshow(zi, interpolation='nearest')

plt.show()
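To fill those remaining corner NaNs, a possible follow-up sketch with scipy's NearestNDInterpolator, reusing the arrays above:
from scipy.interpolate import NearestNDInterpolator

still_dead = np.isnan(zi)
if still_dead.any():
    nearest = NearestNDInterpolator(np.column_stack([X[~dead], Y[~dead]]), Z[~dead])
    zi[still_dead] = nearest(xi[still_dead], yi[still_dead])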

sparse 3d matrix/array in Python?

In scipy, we can construct a sparse matrix using scipy.sparse.lil_matrix() etc. But the matrix is in 2d.
I am wondering if there is an existing data structure for sparse 3d matrix / array (tensor) in Python?
p.s. I have lots of sparse data in 3d and need a tensor to store / perform multiplication. Any suggestions to implement such a tensor if there's no existing data structure?
Happy to suggest a (possibly obvious) implementation of this, which could be made in pure Python or C/Cython if you've got time and space for new dependencies, and need it to be faster.
A sparse matrix in N dimensions can assume most elements are empty, so we use a dictionary keyed on tuples:
class NDSparseMatrix:
    def __init__(self):
        self.elements = {}

    def addValue(self, tuple, value):
        self.elements[tuple] = value

    def readValue(self, tuple):
        try:
            value = self.elements[tuple]
        except KeyError:
            # could also be 0.0 if using floats...
            value = 0
        return value
and you would use it like so:
sparse = NDSparseMatrix()
sparse.addValue((1,2,3), 15.7)
should_be_zero = sparse.readValue((1,5,13))
You could make this implementation more robust by verifying that the input is in fact a tuple, and that it contains only integers, but that will just slow things down so I wouldn't worry unless you're releasing your code to the world later.
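For instance, a minimal sketch of such a check (the parameter is renamed to key here to avoid shadowing the builtin tuple):
def addValue(self, key, value):
    # optional safety check; trades speed for robustness
    if not (isinstance(key, tuple) and all(isinstance(i, int) for i in key)):
        raise TypeError("key must be a tuple of ints")
    self.elements[key] = value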
EDIT - a Cython implementation of the matrix multiplication problem, assuming the other tensor is an N-dimensional NumPy array (numpy.ndarray), might look like this:
#cython: boundscheck=False
#cython: wraparound=False

import numpy as np
cimport numpy as np

def sparse_mult(object sparse, np.ndarray[double, ndim=3] u):
    cdef unsigned int i, j, k

    out = np.ndarray(shape=(u.shape[0], u.shape[1], u.shape[2]), dtype=np.float64)

    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            for k in range(1, u.shape[2] - 1):
                # note: here you must define your own rank-3 multiplication rule, which
                # is, in general, nontrivial, especially for an LxMxN tensor...
                # loop over a dummy variable (or two) and perform some summation:
                out[i, j, k] = u[i, j, k] * sparse.readValue((i, j, k))
    return out
Although you will always need to hand roll this for the problem at hand, because (as mentioned in code comment) you'll need to define which indices you're summing over, and be careful about the array lengths or things won't work!
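For reference, a minimal setup.py sketch for compiling such an extension, assuming the snippet above is saved as sparse_mult.pyx (a hypothetical filename):
from setuptools import setup
from Cython.Build import cythonize
import numpy as np

setup(
    ext_modules=cythonize("sparse_mult.pyx"),
    include_dirs=[np.get_include()],  # needed for cimport numpy
)
Build in place with python setup.py build_ext --inplace.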
EDIT 2 - if the other matrix is also sparse, then you don't need the three-way looping:
def sparse_mult(sparse, other_sparse):
    out = NDSparseMatrix()
    for key, value in sparse.elements.items():
        i, j, k = key
        # note: here you must define your own rank-3 multiplication rule, which
        # is, in general, nontrivial, especially for an LxMxN tensor...
        # loop over a dummy variable (or two) and perform some summation
        # (example indices shown):
        out.addValue(key, out.readValue(key) +
                     other_sparse.readValue((i, j, k+1)) * sparse.readValue((i-3, j, k)))
    return out
My suggestion for a C implementation would be to use a simple struct to hold the indices and the values:
typedef struct {
    int index[3];
    float value;
} entry_t;
you'll then need some functions to allocate and maintain a dynamic array of such structs, and search them as fast as you need; but you should test the Python implementation in place for performance before worrying about that stuff.
An alternative answer as of 2017 is the sparse package. According to the package itself it implements sparse multidimensional arrays on top of NumPy and scipy.sparse by generalizing the scipy.sparse.coo_matrix layout.
Here's an example taken from the docs:
import numpy as np
n = 1000
ndims = 4
nnz = 1000000
coords = np.random.randint(0, n - 1, size=(ndims, nnz))
data = np.random.random(nnz)
import sparse
x = sparse.COO(coords, data, shape=((n,) * ndims))
x
# <COO: shape=(1000, 1000, 1000, 1000), dtype=float64, nnz=1000000>
x.nbytes
# 16000000
y = sparse.tensordot(x, x, axes=((3, 0), (1, 2)))
y
# <COO: shape=(1000, 1000, 1000, 1000), dtype=float64, nnz=1001588>
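The package also round-trips with dense NumPy arrays via COO.from_numpy and todense; a small sketch:
dense = np.random.random((10, 10, 10))
dense[dense < 0.9] = 0.0  # make it mostly zeros
s = sparse.COO.from_numpy(dense)
assert np.allclose(s.todense(), dense)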
Have a look at sparray - sparse n-dimensional arrays in Python (by Jan Erik Solem). Also available on github.
Nicer than writing everything new from scratch may be to use scipy's sparse module as far as possible. This may lead to (much) better performance. I had a somewhat similar problem, but I only had to access the data efficiently, not perform any operations on them. Furthermore, my data were only sparse in two out of three dimensions.
I have written a class that solves my problem and could (I think) easily be extended to satisfy the OP's needs. It may still hold some potential for improvement, though.
import scipy.sparse as sp
import numpy as np

class Sparse3D():
    """
    Class to store and access 3 dimensional sparse matrices efficiently
    """
    def __init__(self, *sparseMatrices):
        """
        Constructor
        Takes a stack of sparse 2D matrices with the same dimensions
        """
        self.data = sp.vstack(sparseMatrices, "dok")
        self.shape = (len(sparseMatrices), *sparseMatrices[0].shape)
        self._dim1_jump = np.arange(0, self.shape[1]*self.shape[0], self.shape[1])
        self._dim1 = np.arange(self.shape[0])
        self._dim2 = np.arange(self.shape[1])

    def __getitem__(self, pos):
        if not type(pos) == tuple:
            if not hasattr(pos, "__iter__") and not type(pos) == slice:
                return self.data[self._dim1_jump[pos] + self._dim2]
            else:
                return Sparse3D(*(self[self._dim1[i]] for i in self._dim1[pos]))
        elif len(pos) > 3:
            raise IndexError("too many indices for array")
        else:
            if (not hasattr(pos[0], "__iter__") and not type(pos[0]) == slice or
                not hasattr(pos[1], "__iter__") and not type(pos[1]) == slice):
                if len(pos) == 2:
                    result = self.data[self._dim1_jump[pos[0]] + self._dim2[pos[1]]]
                else:
                    result = self.data[self._dim1_jump[pos[0]] + self._dim2[pos[1]], pos[2]].T
                    if hasattr(pos[2], "__iter__") or type(pos[2]) == slice:
                        result = result.T
                return result
            else:
                if len(pos) == 2:
                    return Sparse3D(*(self[i, self._dim2[pos[1]]] for i in self._dim1[pos[0]]))
                else:
                    if not hasattr(pos[2], "__iter__") and not type(pos[2]) == slice:
                        return sp.vstack([self[self._dim1[pos[0]], i, pos[2]]
                                          for i in self._dim2[pos[1]]]).T
                    else:
                        return Sparse3D(*(self[i, self._dim2[pos[1]], pos[2]]
                                          for i in self._dim1[pos[0]]))

    def toarray(self):
        return np.array([self[i].toarray() for i in range(self.shape[0])])
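A quick usage sketch, assuming a stack of three random scipy.sparse layers:
layers = [sp.random(4, 5, density=0.2, format="csr") for _ in range(3)]
t = Sparse3D(*layers)
print(t.shape)      # (3, 4, 5)
print(t[1])         # the middle 4x5 sparse layer
print(t[1, 2, 3])   # a single entry
full = t.toarray()  # dense (3, 4, 5) ndarray for checking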
I also need a 3D sparse matrix for solving the 2D heat equations (the 2 spatial dimensions are dense, but the time dimension is diagonal plus and minus one off-diagonal). I found this link to guide me. The trick is to create an array Number that maps the 2D sparse matrix to a 1D linear vector. Then build the 2D matrix by building a list of data and indices. Later the Number matrix is used to arrange the answer back into a 2D array.
[edit] It occurred to me after my initial post that this could be handled better with the .reshape(-1) method. After research, reshape is better than flatten because it returns a new view into the original array, while flatten copies the array. The code uses the original Number array. I will try to update later. [end edit]
I test it by creating a 1D random vector and solving for a second vector. Then I multiply it by the sparse 2D matrix and get the same result.
Note: I repeat this many times in a loop with exactly the same matrix M, so you might think it would be more efficient to solve for inverse(M). But the inverse of M is not sparse, so I think (though I have not tested) that using spsolve is a better solution. Which is "best" probably depends on how large your matrix is.
#!/usr/bin/env python3
# testSparse.py
# profhuster
import numpy as np
import scipy.sparse as sM
import scipy.sparse.linalg as spLA
from array import array
from numpy.random import rand, seed

seed(101520)
nX = 4
nY = 3
r = 0.1

def loadSpNodes(nX, nY, r):
    # Matrix to map the 2D array of nodes to a 1D array
    Number = np.zeros((nY, nX), dtype=int)
    # Map each element of the 2D array to a 1D array
    iM = 0
    for i in range(nX):
        for j in range(nY):
            Number[j, i] = iM
            iM += 1
    print(f"Number = \n{Number}")
    # Now create a sparse matrix of the "stencil"
    diagVal = 1 + 4 * r
    offVal = -r
    d_list = array('f')
    i_list = array('i')
    j_list = array('i')
    # Loop over the 2D nodes matrix
    for i in range(nX):
        for j in range(nY):
            # Recall the 1D number
            iSparse = Number[j, i]
            # populate the diagonal
            d_list.append(diagVal)
            i_list.append(iSparse)
            j_list.append(iSparse)
            # Now, for each rectangular neighbor, add the
            # off-diagonal entries.
            # Use a try-except so boundary nodes work.
            for (jj, ii) in ((j+1, i), (j-1, i), (j, i+1), (j, i-1)):
                try:
                    iNeigh = Number[jj, ii]
                    if jj >= 0 and ii >= 0:
                        d_list.append(offVal)
                        i_list.append(iSparse)
                        j_list.append(iNeigh)
                except IndexError:
                    pass
    spNodes = sM.coo_matrix((d_list, (i_list, j_list)), shape=(nX*nY, nX*nY))
    return spNodes

MySpNodes = loadSpNodes(nX, nY, r)
print(f"Sparse Nodes = \n{MySpNodes.toarray()}")
b = rand(nX*nY)
print(f"b=\n{b}")
x = spLA.spsolve(MySpNodes.tocsr(), b)
print(f"x=\n{x}")
print(f"Multiply back together=\n{x * MySpNodes}")
I needed a 3D look-up table for x, y, z and came up with this solution.
Why not pack two of the dimensions into one? i.e. use x and 'yz' as the matrix dimensions.
e.g. if x has 80 potential members, y has 100, and z has 20,
you make the sparse matrix 80 by 2000 (i.e. yz = 100 x 20).
The x dimension is as usual.
The yz dimension: the first 100 elements represent z=0, y=0 to 99;
the second 100 represent z=1, y=0 to 99, etc.
So a given element located at (x, y, z) would be in the sparse matrix at (x, z*100 + y).
If you need to use negative numbers, design an arbitrary offset into your matrix translation. The solution could be expanded to n dimensions if necessary.
from scipy import sparse

m = sparse.lil_matrix((100, 2000), dtype=float)

def add_element(xyz, element):
    x, y, z = xyz
    m[x, y + z*100] = float(element)

def get_element(x, y, z):
    return m[x, y + z*100]

add_element([3, 2, 4], 2.2)
add_element([20, 15, 7], 1.2)
print(get_element(0, 0, 0))
print(get_element(3, 2, 4))
print(get_element(20, 15, 7))
print("This is m sparse:")
print(m)
====================
OUTPUT:
0.0
2.2
1.2
This is m sparse:
  (3, 402)    2.2
  (20, 715)   1.2
====================
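As a side note, the y + z*100 packing is exactly Fortran-order raveling of (y, z) over shape (100, 20), so np.ravel_multi_index can generalize the scheme to n dimensions:
import numpy as np
col = np.ravel_multi_index((15, 7), (100, 20), order='F')
print(col)  # 715, matching the (20, 715) entry above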
