Combining 2-d arrays to form a 3-d array - python

I'm defining a function which will return a 3-d grid. Within it, I use an already-defined function that returns a 2-d array. I want to join these 2-d arrays to form the 3-d array during an iteration, but I've looked at functions like meshgrid(), dstack(), and concatenate() and can't seem to get any of them to fit into the code.
The program models the spread of waves from a point source on the 2-d array, and the 3-d array shows how the displacement of the medium changes over the course of a wavelength.
import math
import numpy as np

def make_wave_snapshot(size, wavelength, phase):
    waves_array = np.zeros((size, size), float)
    if size % 2 == 0:
        for y in range(size):
            for x in range(size):
                r = math.hypot((size/2 - x - 0.5), (size/2 - y - 0.5))
                d = np.sin((2*math.pi*r/wavelength) - phase)/np.sqrt(r)
                waves_array[y, x] = d
        dp.display_2d_array(waves_array)  # This is in another module altogether
        return waves_array  # Displays array showing values
    else:
        return 'Please use integer of size.'

def make_wave_sequence(size, wavelength, nsteps):
    waves_sequence = np.zeros((nsteps, size, size), float)
    if nsteps % 1 == 0:
        for z in range(nsteps):
            make_wave_snapshot(size, wavelength, (2*math.pi*z/nsteps))
            waves_sequence = ???
        return waves_sequence  # Displays array showing values
    else:
        return 'Please use positive integer for number of steps'
The issue is turning each 'waves_array' into the 'waves_sequence'. Generous commenting would be much appreciated if you write any code. Many thanks!

If I understand correctly you have a three dimensional array, something like:
wave = np.zeros((2, 2, 2), float)
([[[0., 0.],
   [0., 0.]],

  [[0., 0.],
   [0., 0.]]])
And you want to insert a two dimensional array, returned from your function like:
([[ 1., 2.],
  [ 3., 4.]])
Such that your 3D array is now:
([[[1., 2.],
   [3., 4.]],

  [[0., 0.],
   [0., 0.]]])
After the first iteration of your for loop. If that is correct, then it's actually pretty simple and you're most of the way there. You can assign a 2D array to an "element" of your 3D array, as long as you select the correct entry:
for z in range(nsteps):
    waves_sequence[z] = make_wave_snapshot(size, wavelength, (2*math.pi*z/nsteps))
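For completeness, here is how that assignment slots into your function (a minimal sketch; it assumes size is even, so make_wave_snapshot returns an array rather than its error string):

import math
import numpy as np

def make_wave_sequence(size, wavelength, nsteps):
    """Stack nsteps wave snapshots into one (nsteps, size, size) array."""
    waves_sequence = np.zeros((nsteps, size, size), float)
    for z in range(nsteps):
        # Each snapshot is a (size, size) 2D array; store it as "layer" z.
        waves_sequence[z] = make_wave_snapshot(size, wavelength, 2*math.pi*z/nsteps)
    return waves_sequence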

Related

Normalise 2D Numpy Array: Zero Mean Unit Variance

I have a 2D Numpy array in which I want to normalise each column to zero mean and unit variance. Since I'm primarily used to C++, my approach is to loop over the elements in a column and do the necessary operations, then repeat this for all columns. I wanted to know about a pythonic way to do this.
Let class_input_data be my 2D array. I can get the column mean as:
column_mean = numpy.sum(class_input_data, axis = 0)/class_input_data.shape[0]
I then subtract the mean from all columns by:
class_input_data = class_input_data - column_mean
By now, the data should be zero mean. However, the value of:
numpy.sum(class_input_data, axis = 0)
isn't equal to 0, implying that I have done something wrong in my normalisation. By "isn't equal to 0" I don't mean very small numbers that can be attributed to floating-point inaccuracy.
Something like:
import numpy as np
eg_array = 5 + (np.random.randn(10, 10) * 2)
normed = (eg_array - eg_array.mean(axis=0)) / eg_array.std(axis=0)
normed.mean(axis=0)
Out[14]:
array([  1.16573418e-16,  -7.77156117e-17,  -1.77635684e-16,
         9.43689571e-17,  -2.22044605e-17,  -6.09234885e-16,
        -2.22044605e-16,  -4.44089210e-17,  -7.10542736e-16,
         4.21884749e-16])
normed.std(axis=0)
Out[15]: array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
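As an aside, scipy ships the same column-wise standardisation as a one-liner, if you'd rather not spell out the mean/std arithmetic yourself:

from scipy import stats

normed = stats.zscore(eg_array, axis=0)  # zero mean, unit variance per column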

Array of hermite values in numpy

I have a data structure that looks like a list of values, and I am trying to compute the (x, y) 2d hermite functions from them using numpy. I'm trying to use as many numpy arrays as possible, due to the performance boost you get from getting to Fortran as quickly as possible (in practice I expect x to contain many thousands of 3-arrays). Specifically, my code looks like this:
x = np.array([[1., 2., 3.], [4., 5., 6.]])
coefs = np.array([[[1., 0.],[0., 1.]], [[0., 1.], [1., 0.]]])
z = np.array([0., 0.])
z[:] = hermval2d(x[:,0], x[:,1], coefs[:])
This raises an error about shapes on the assignment to z. Running hermval2d on its own, instead of assigning it, gives:
In [XX]: hermval2d(x[:,0], x[:,1], coefs[:])
Out[XX]:
array([[  9.,  81.],
       [  6.,  18.]])
I would expect hermval2d to return a scalar for each (x, y) pair and coefficient matrix, which is what the documentation seems to suggest. So what am I missing here? What's the score?
It's right there in the docs :)
hermval2d(x, y, c)
[...]
The shape of the result will be c.shape[2:] + x.shape
In your case c has shape (2, 2, 2), so the result has shape c.shape[2:] + x.shape = (2,) + (2,) = (2, 2): row i holds the Hermite series with coefficients c[:, :, i] evaluated at every (x, y) point.
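If what you actually want is one value per point, with point i paired to its own coefficient matrix c[:, :, i] (an assumption about your intent), a sketch of one way to get that is to take the diagonal of the result:

import numpy as np
from numpy.polynomial.hermite import hermval2d

x = np.array([[1., 2., 3.], [4., 5., 6.]])
coefs = np.array([[[1., 0.], [0., 1.]], [[0., 1.], [1., 0.]]])

full = hermval2d(x[:, 0], x[:, 1], coefs)  # shape (2, 2): coef set i, point j
z = np.diagonal(full)                      # pairs point i with coefs[:, :, i]
# z -> array([  9.,  18.])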

logically adding in numpy

I want to do the following operation. It's like a histogram operation.
maxIndex = 6
dst = np.zeros(maxIndex)
a = np.array([1, 2, 3, 4, 7, 0, 3, 4, 5, 7])
index = np.array([1, 1, 1, 3, 3, 4, 4, 5, 5, 5])
# a's length == index's length
for i in range(a.size):
    dst[index[i]] = dst[index[i]] + a[i]
How can I do this more pythonically and more efficiently?
If I understand correctly, I think you are looking for numpy.bincount:
dst = numpy.bincount(index, weights=a, minlength=maxIndex)
This gives me array([ 0., 6., 0., 11., 3., 16.]) as the output. If you don't want to calculate maxIndex by hand, you can omit the minlength parameter from the function call and numpy will return an appropriately-sized array for you.
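For what it's worth, the same accumulate-by-index pattern can also be written with np.add.at, which, unlike a plain dst[index] += a, accumulates correctly when indices repeat:

import numpy as np

dst = np.zeros(maxIndex)
np.add.at(dst, index, a)  # unbuffered in-place add; repeated indices accumulate
# dst -> array([  0.,   6.,   0.,  11.,   3.,  16.])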

How to convert a 2d bitmap into points with x-y coordinates in python - and fast

I am working on a motion detection project using opencv/python.
I have reached the point of creating an image bitmap/array where motion that has been detected is represented by 1's in the array, 0's where there is no motion (by subtracting one picture from a video feed from another after a short lag).
I wish to determine my moving targets (where there may be more than 1) by using Numpy's (or opencv's) kmeans algorithm.
However, these algorithms require an input of an array of x,y coordinates to be used.
What's the fastest and most efficient method to convert a 2d image bitmap array to an array of x y coordinates? (i.e. we're talking image processing here, so slow Python code is undesirable).
Numpy's nonzero will do this:
Here's an example from the docs:
>>> x = np.eye(3)
>>> x
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> np.nonzero(x)
(array([0, 1, 2]), array([0, 1, 2]))
That is, the output is the indices of the non-zero elements in the array.
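Since kmeans wants a single (n, 2) array of points rather than a tuple of index arrays, you can stack the two outputs; a sketch, where bitmap stands in for your 0/1 motion array:

import numpy as np

ys, xs = np.nonzero(bitmap)  # row (y) and column (x) indices of the 1s
points = np.column_stack((xs, ys)).astype(np.float32)  # (n, 2) array for kmeans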

expanding (adding a row or column) a scipy.sparse matrix

Suppose I have a NxN matrix M (lil_matrix or csr_matrix) from scipy.sparse, and I want to make it (N+1)xN where M_modified[i,j] = M[i,j] for 0 <= i < N (and all j) and M[N,j] = 0 for all j. Basically, I want to add a row of zeros to the bottom of M and preserve the remainder of the matrix. Is there a way to do this without copying the data?
Scipy doesn't have a way to do this without copying the data but you can do it yourself by changing the attributes that define the sparse matrix.
There are 4 attributes that make up the csr_matrix:
data: An array containing the actual values in the matrix
indices: An array containing the column index corresponding to each value in data
indptr: An array of length nrows + 1; indptr[i] is the position in data where row i starts. If a row is empty, its entry is the same as the previous row's.
shape: A tuple containing the shape of the matrix
If you are simply adding a row of zeros to the bottom all you have to do is change the shape and indptr for your matrix.
x = np.ones((3,5))
x = csr_matrix(x)
x.toarray()
>> array([[ 1.,  1.,  1.,  1.,  1.],
          [ 1.,  1.,  1.,  1.,  1.],
          [ 1.,  1.,  1.,  1.,  1.]])
# reshape is not implemented for csr_matrix but you can cheat and do it yourself.
x._shape = (4,5)
# Update indptr to let it know we added a row with nothing in it. So just append the last
# value in indptr to the end.
# note that you are still copying the indptr array
x.indptr = np.hstack((x.indptr,x.indptr[-1]))
x.toarray()
array([[ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 0.,  0.,  0.,  0.,  0.]])
Here is a function to handle the more general case of vstacking any 2 csr_matrices. You still end up copying the underlying numpy arrays but it is still significantly faster than the scipy vstack method.
def csr_vappend(a, b):
    """ Takes in 2 csr_matrices and appends the second one to the bottom of the first one.
    Much faster than scipy.sparse.vstack but assumes the type to be csr and overwrites
    the first matrix instead of copying it. The data, indices, and indptr still get copied."""
    a.data = np.hstack((a.data, b.data))
    a.indices = np.hstack((a.indices, b.indices))
    a.indptr = np.hstack((a.indptr, (b.indptr + a.nnz)[1:]))
    a._shape = (a.shape[0] + b.shape[0], b.shape[1])
    return a
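A quick usage sketch (toy matrices, just to show the call):

from scipy.sparse import csr_matrix
import numpy as np

a = csr_matrix(np.ones((3, 5)))
b = csr_matrix((1, 5))   # an empty (all-zero) 1x5 sparse row
a = csr_vappend(a, b)    # a now has shape (4, 5)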
Not sure if you're still looking for a solution, but for anyone else landing here: look into hstack and vstack - http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.hstack.html. You can define a csr_matrix for the single additional row and then vstack it with the previous matrix, as sketched below.
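A sketch of that approach (it does copy, but it stays within the public API):

from scipy.sparse import csr_matrix, vstack

zero_row = csr_matrix((1, M.shape[1]))            # 1 x N all-zero sparse row
M_modified = vstack([M, zero_row], format='csr')  # now (N+1) x N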
I don't think that there is any way to really escape from doing the copying. Both of those types of sparse matrices store their data as Numpy arrays (in the data and indices attributes for csr and in the data and rows attributes for lil) internally and Numpy arrays can't be extended.
Update with more information:
LIL does stand for LInked List, but the current implementation doesn't quite live up to the name. The Numpy arrays used for data and rows are both of type object. Each of the objects in these arrays are actually Python lists (an empty list when all values are zero in a row). Python lists aren't exactly linked lists, but they are kind of close and quite frankly a better choice due to O(1) look-up. Personally, I don't immediately see the point of using a Numpy array of objects here rather than just a Python list. You could fairly easily change the current lil implementation to use Python lists instead which would allow you to add a row without copying the whole matrix.
