Index array with the result of .nonzero() - python

I am having difficulties selecting rows using two conditions in NumPy. The following code does not return the intended output:
tot_length=0.3
steps=0.1
start_val=0.0
list_no =np.arange(start_val, tot_length, steps)
x, y, z = np.meshgrid(*[list_no for _ in range(3)], sparse=True)
a = ((x>=y) & (y>=z)).nonzero() # this may be the problem
Output:
(array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2]), array([0, 1, 2, 1, 1, 2, 2, 2, 2, 2]), array([0, 0, 0, 0, 1, 0, 1, 0, 1, 2]))
whereas the intended output is:
[[0. 0. 0. ]
[0.1 0. 0. ]
[0.1 0.1 0. ]
[0.1 0.1 0.1]
[0.2 0. 0. ]
[0.2 0.1 0. ]
[0.2 0.1 0.1]
[0.2 0.2 0. ]
[0.2 0.2 0.1]
[0.2 0.2 0.2]]

ndarray.nonzero, like np.where, returns a tuple of arrays of indices. This makes it easy to unpack those indices into separate arrays, which can then be used to index along a given axis. Stacking them up into a 2D array is trivial though: simply build a new array and transpose it:
ix = np.array(((x>=y) & (y>=z)).nonzero()).T
Then you can easily use the array of indices to index list_no as:
list_no[ix]
array([[0. , 0. , 0. ],
[0. , 0.1, 0. ],
[0. , 0.2, 0. ],
[0.1, 0.1, 0. ],
[0.1, 0.1, 0.1],
[0.1, 0.2, 0. ],
[0.1, 0.2, 0.1],
[0.2, 0.2, 0. ],
[0.2, 0.2, 0.1],
[0.2, 0.2, 0.2]])
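As a side note (not from the original answer), np.argwhere gives the same stacked index array in one call, since it is documented as equivalent to np.transpose(np.nonzero(a)):
ix = np.argwhere((x >= y) & (y >= z))  # same (N, 3) index array as above
list_no[ix]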

Related

How to add 4x4 matrix values into a 6x6 matrix using numpy

Suppose I have multiple 4x4 matrices which I want to add into a final 6x6 zero matrix by adding some of the values at the designated coordinates. How would I do this? I thought of adding slices to an np.zeros 6x6 matrix, but I believe this may be quite tedious.
Matrix 1 would go to the first position and matrix 2 to the second position; these two positions would be added together to form the final matrix.
import numpy as np
from math import sqrt
# Element 1
C_1= 3/5
S_1= 4/5
matrix_1 = np.matrix([[C_1**2, C_1*S_1,-C_1**2,-C_1*S_1],[C_1*S_1,S_1**2,-C_1*S_1,-S_1**2],
[-C_1**2,-C_1*S_1,C_1**2,C_1*S_1],[-C_1*S_1,-S_1**2,C_1*S_1,S_1**2]])
empty_mat1 = np.zeros((6,6))
empty_mat1[0:4 , 0:4] = empty_mat1[0:4 ,0:4] + matrix_1
#print(empty_mat1)
# Element 2
C_2 = 0
S_2 = 1
matrix_2 = 1.25*np.matrix([[C_2**2, C_2*S_2,-C_2**2,-C_2*S_2],[C_2*S_2,S_2**2,-C_2*S_2,-S_2**2],
[-C_2**2,-C_2*S_2,C_2**2,C_2*S_2],[-C_2*S_2,-S_2**2,C_2*S_2,S_2**2]])
empty_mat2 = np.zeros((6,6))
empty_mat2[0:2,0:2] = empty_mat2[0:2,0:2] + matrix_2[0:2,0:2]
empty_mat2[4:6,0:2] = empty_mat2[4:6,0:2] + matrix_2[2:4,0:2]
empty_mat2[0:2,4:6] = empty_mat2[0:2,4:6] + matrix_2[2:4,2:4]
empty_mat2[4:6,4:6] = empty_mat2[4:6,4:6] + matrix_2[0:2,0:2]
print(empty_mat1+empty_mat2)
Adding two arrays of different dimensions is a little bit tricky with numpy.
However, with slicing you can do it with the following "rustic" method.
Supposing M1 and M2 are your two input arrays, M3 (from M1) and M4 (from M2) your temporary arrays, and M5 the final array:
# Initialisation
M1 = np.array([[ 0.36, 0.48, -0.36, -0.48], [ 0.48, 0.64, -0.48, -0.64], [ -0.36, -0.48, 0.36, 0.48], [-0.48, -0.64, 0.48, 0.64]])
M2 = np.array([[ 0, 0, 0, 0], [ 0, 1.25, 0, -1.25], [ 0, 0, 0, 0], [ 0, -1.25, 0, 1.25]])
M3, M4 = np.zeros((6, 6)), np.zeros((6, 6))
#M3 and M4 operations
M3[0:4, 0:4] = M1[0:4, 0:4] + M3[0:4, 0:4]
M4[0:2, 0:2] = M2[0:2, 0:2]
M4[0:2, 4:6] = M2[0:2, 2:4]
M4[4:6, 0:2] = M2[2:4, 0:2]
M4[4:6, 4:6] = M2[2:4, 2:4]
#Final operation
M5 = M3+M4
print(M5)
Output:
[[ 0.36 0.48 -0.36 -0.48 0. 0. ]
[ 0.48 1.89 -0.48 -0.64 0. -1.25]
[-0.36 -0.48 0.36 0.48 0. 0. ]
[-0.48 -0.64 0.48 0.64 0. 0. ]
[ 0. 0. 0. 0. 0. 0. ]
[ 0. -1.25 0. 0. 0. 1.25]]
Have a good day.
You will need to encode, in some way, where your 4x4 matrices end up in the final 6x6 matrix. Suppose you have N (= 2 in your case) such 4x4 matrices. You can then define two new arrays (shape Nx4) that denote the row and col indices of the final 6x6 matrix that you want your 4x4 matrices to end up in. Finally, you use fancy indexing and broadcasting to build up an Nx6x6 array which you can sum over. Your example:
import numpy as np
N = 2
arr = np.array([[
    [0.36, 0.48, -0.36, -0.48],
    [0.48, 0.64, -0.48, -0.64],
    [-0.36, -0.48, 0.36, 0.48],
    [-0.48, -0.64, 0.48, 0.64],
], [
    [0, 0, 0, 0],
    [0, 1.25, 0, -1.25],
    [0, 0, 0, 0],
    [0, -1.25, 0, 1.25],
]])
rows = np.array([
    [0, 1, 2, 3],
    [0, 1, 4, 5]
])
cols = np.array([
    [0, 1, 2, 3],
    [0, 1, 4, 5]
])
i = np.arange(N)
out = np.zeros((N, 6, 6))
out[
    i[:, None, None],
    rows[:, :, None],
    cols[:, None, :]
] = arr
out = out.sum(axis=0)
Gives as output:
array([[ 0.36, 0.48, -0.36, -0.48, 0. , 0. ],
[ 0.48, 1.89, -0.48, -0.64, 0. , -1.25],
[-0.36, -0.48, 0.36, 0.48, 0. , 0. ],
[-0.48, -0.64, 0.48, 0.64, 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , -1.25, 0. , 0. , 0. , 1.25]])
If you want even more control over where each row/col ends up, you can pull off some more trickery as follows:
rows = np.array([
    [1, 2, 3, 4, 0, 0],
    [1, 2, 0, 0, 3, 4]
])
cols = np.array([
    [1, 2, 3, 4, 0, 0],
    [1, 2, 0, 0, 3, 4]
])
i = np.arange(N)
out = np.pad(arr, ((0, 0), (1, 0), (1, 0)))[
    i[:, None, None],
    rows[:, :, None],
    cols[:, None, :]
].sum(axis=0)
which has the same output. This would allow you to shuffle the rows/cols of arr by shuffling the values 1-4 in the rows, cols arrays. I would prefer option 1 though.
I probably should wait for you to correct your question, but I'll go ahead and give you some code (yes, in the most tedious form) based on your images:
res = np.zeros((6,6))
# arr1, arr2 are (4,4) arrays
res[:4, :4] += arr1
idx = np.array([0,1,4,5])
res[idx[:,None], idx] += arr2
The first is a contiguous block, so the two slices are enough.
The second is split up, so I'm using advanced indexing.
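As a generalisation of the snippet above (a sketch, not part of the original answer), the same advanced-indexing idea scales to any number of blocks with np.ix_; the matrices here are random placeholders:
import numpy as np

arr1 = np.random.rand(4, 4)  # stand-ins for the element matrices
arr2 = np.random.rand(4, 4)
res = np.zeros((6, 6))
# pair each block with the global row/col indices it maps to
for block, idx in [(arr1, [0, 1, 2, 3]), (arr2, [0, 1, 4, 5])]:
    res[np.ix_(idx, idx)] += block  # np.ix_ builds the open mesh for advanced indexing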

Add complementary values to numpy array

I have a 1D numpy array, for example the following:
import numpy as np
arr = np.array([0.33, 0.2, 0.8, 0.9])
Now I would like to change the array so that one minus each value is also included. That means the array should look like:
[[0.67, 0.33],
[0.8, 0.2],
[0.2, 0.8],
[0.1, 0.9]]
How can this be done?
>>> np.vstack((1 - arr, arr)).T
array([[0.67, 0.33],
[0.8 , 0.2 ],
[0.2 , 0.8 ],
[0.1 , 0.9 ]])
Alternatively, you can create an empty array and fill in entries:
>>> x = np.empty((*arr.shape, 2))
>>> x[..., 0] = 1 - arr
>>> x[..., 1] = arr
>>> x
array([[0.67, 0.33],
[0.8 , 0.2 ],
[0.2 , 0.8 ],
[0.1 , 0.9 ]])
Try column_stack
np.column_stack([1 - arr, arr])
Out[33]:
array([[0.67, 0.33],
[0.8 , 0.2 ],
[0.2 , 0.8 ],
[0.1 , 0.9 ]])
Use:
arr = np.insert(1 - arr, np.arange(len(arr)), arr).reshape(-1, 2)  # note: columns come out as (arr, 1 - arr)
arr
Output:
array([[0.33, 0.67],
[0.2 , 0.8 ],
[0.8 , 0.2 ],
[0.9 , 0.1 ]])

Summing sparse matrix rows by column groups

I have a scipy sparse matrix in coo format:
from scipy.sparse import coo_matrix
data = np.asarray([[1, 0, 0], [.8, .2, 0], [0, 1, 0], [0.4, 0.3, 0.3]])
data
array([[1. , 0. , 0. ],
[0.8, 0.2, 0. ],
[0. , 1. , 0. ],
[0.4, 0.3, 0.3]])
sparse_matrix = coo_matrix(data)
For each column I have a cluster assignment, and I would like to sum together the columns that share a cluster assignment. During this operation I would like to stay in a sparse format for memory reasons.
Example:
labels = ["a", "b", "b"]
Expected output:
1, 0
.8, .2
0, 1
.4, .6
It can be approached the same way as with dense arrays: for each group, select the desired columns and sum them, then collect the results.
In [1]: import numpy as np; from scipy import sparse
In [2]: data = np.asarray([[1, 0, 0], [.8, .2, 0], [0, 1, 0], [0.4, 0.3, 0.3]])
In [3]: M = sparse.csc_matrix(data)
In [4]: M
Out[4]:
<4x3 sparse matrix of type '<class 'numpy.float64'>'
with 7 stored elements in Compressed Sparse Column format>
In [5]: M.A
Out[5]:
array([[1. , 0. , 0. ],
[0.8, 0.2, 0. ],
[0. , 1. , 0. ],
[0.4, 0.3, 0.3]])
In [6]: M[:,[0]].sum(axis=1)
Out[6]:
matrix([[1. ],
[0.8],
[0. ],
[0.4]])
In [7]: M[:,[1,2]].sum(axis=1)
Out[7]:
matrix([[0. ],
[0.2],
[1. ],
[0.6]])
In [8]: res = np.concatenate((Out[6], Out[7]), axis=1)
In [9]: res
Out[9]:
matrix([[1. , 0. ],
[0.8, 0.2],
[0. , 1. ],
[0.4, 0.6]])
Note that the sum produces a dense np.matrix. sparse does this routinely, I think, because such summations are always denser than the source. A sum will be 0 only if all the elements are 0 (except for the rare case of a bunch of nonzeros canceling each other out).
Since the column indexing and sum are both implemented as matrix products, it might be possible to speed up the process a bit by constructing a matrix that does both actions at once. But that's an implementation detail.
Indexing of sparse matrices is pretty slow (compared to dense ones).
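A sketch of that matrix-product idea (the variable names are mine, not from the answer): build a sparse column-to-group indicator matrix G and multiply, which selects and sums the grouped columns in one step and keeps the result sparse:
import numpy as np
from scipy import sparse

data = np.asarray([[1, 0, 0], [.8, .2, 0], [0, 1, 0], [0.4, 0.3, 0.3]])
M = sparse.csc_matrix(data)
labels = np.array(["a", "b", "b"])  # column-to-cluster assignment as in the question
groups, col_to_group = np.unique(labels, return_inverse=True)
# G[c, g] = 1 if column c belongs to group g
G = sparse.csc_matrix(
    (np.ones(len(labels)), (np.arange(len(labels)), col_to_group)),
    shape=(len(labels), len(groups)),
)
res = M @ G  # stays sparse; res.toarray() matches the concatenated result above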

Python - Break numpy array into positive and negative components

I have numpy arrays of shape (600,600,3), where the values are in [-1.0, 1.0]. I would like to expand the array to (600,600,6), where the original values are split into the amounts above and below 0. Some example (1,1,3) arrays, where the function foo() does the trick:
>>> a = [-0.5, 0.2, 0.9]
>>> foo(a)
[0.0, 0.5, 0.2, 0.0, 0.9, 0.0] # [positive component, negative component, ...]
>>> b = [1.0, 0.0, -0.3] # notice the behavior of 0.0
>>> foo(b)
[1.0, 0.0, 0.0, 0.0, 0.0, 0.3]
Use slicing to assign the min/max to different parts of the output array
In [33]: a = np.around(np.random.random((2,2,3))-0.5, 1)
In [34]: a
Out[34]:
array([[[-0.1, 0.3, 0.3],
[ 0.3, -0.2, -0.1]],
[[-0. , -0.2, 0.3],
[-0.1, -0. , 0.1]]])
In [35]: out = np.zeros((2,2,6))
In [36]: out[:,:,::2] = np.maximum(a, 0)
In [37]: out[:,:,1::2] = np.maximum(-a, 0)
In [38]: out
Out[38]:
array([[[ 0. , 0.1, 0.3, 0. , 0.3, 0. ],
[ 0.3, 0. , 0. , 0.2, 0. , 0.1]],
[[-0. , 0. , 0. , 0.2, 0.3, 0. ],
[ 0. , 0.1, -0. , 0. , 0.1, 0. ]]])
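For the full (600,600,3) arrays from the question, the same slicing idea can be wrapped into a small helper. This is only a sketch reusing the question's placeholder name foo(), not code from the answer:
import numpy as np

def foo(a):
    # interleave positive parts and magnitudes of negative parts along the last axis
    a = np.asarray(a, dtype=float)
    out = np.zeros(a.shape[:-1] + (2 * a.shape[-1],))
    out[..., ::2] = np.maximum(a, 0)
    out[..., 1::2] = np.maximum(-a, 0)
    return out

foo([-0.5, 0.2, 0.9])  # -> [0. , 0.5, 0.2, 0. , 0.9, 0. ]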

Improving performance iterating in 2d numpy array

I have two 2d numpy array (images).
The first one, image, stores the sum of movement at pixel (i,j).
The second one, nbCameras, stores the number of cameras that can see movement at pixel (i,j).
I want to create a third image, imgFinal, which only stores the value of pixel (i,j) and its (3 x 3) neighbourhood if the number of cameras that can see pixel (i,j) is greater than 1.
For now I'm using two for loops, which is not the best way. I'd like to increase the speed of the computation but I haven't found the best way to do it yet.
I'm also a bit stuck by the fact that I want to keep the neighbours of pixel (i, j).
I also tried to use numpy.vectorize, but I can't keep the neighbours of my pixel in that case.
What would be the best way to increase the speed of this function?
Thanks for your help!
maskWidth = 3
dstCenterMask = int((maskWidth - 1) / 2)
imgFinal = np.zeros(image.shape, dtype=np.float32)
for j in range(dstCenterMask, image.shape[0] - dstCenterMask):
    for i in range(dstCenterMask, image.shape[1] - dstCenterMask):
        if nbCameras[j, i] > 1:
            imgFinal[j - dstCenterMask:j + dstCenterMask + 1, i - dstCenterMask:i + dstCenterMask + 1] = \
                image[j - dstCenterMask:j + dstCenterMask + 1, i - dstCenterMask:i + dstCenterMask + 1]
This gets quite elegant using skimage.morphology's binary_dilation function. It takes a binary array and expands any pixel that is True into a 3x3 block of True values (or any other size). It should also handle cases at the edges, which I think your implementation did not.
Using this mask it's quite easy to calculate imgFinal:
from skimage.morphology import binary_dilation, square
mask = binary_dilation(nbCameras > 1, square(maskWidth))
imgFinal = np.where(mask, image, 0)
square(3) is just shorthand for np.ones((3,3))
http://scikit-image.org/docs/dev/api/skimage.morphology.html?highlight=dilation#skimage.morphology.dilation
Example use of dilation for a better explanation of what it does:
In [27]: a
Out[27]:
array([[ 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0.]])
In [28]: binary_dilation(a, square(3))
Out[28]:
array([[1, 1, 0, 0, 0],
[1, 1, 0, 0, 0],
[0, 0, 1, 1, 1],
[0, 0, 1, 1, 1],
[0, 0, 1, 1, 1]], dtype=uint8)
Option 1: Try to rewrite the code in a vectorized way. You could convolve with a 3x3 mask like this:
import numpy as np
from scipy.signal import convolve2d
image = np.random.random((100,100))
nbCameras = np.abs(np.random.normal(size=(100,100)).round())
maskWidth = 3
mask = np.ones((maskWidth, maskWidth))
visibilityMask = (nbCameras > 1).astype(float)
visibilityMask = convolve2d(visibilityMask, mask, mode="same").astype(bool)
imgFinal = image.copy()
imgFinal[~visibilityMask] *= 0
import matplotlib.pyplot as plt
for i, (im, title) in enumerate([(image, "image"),
                                 (nbCameras, "nbCameras"),
                                 (visibilityMask, "visibilityMask"),
                                 (imgFinal, "imgFinal")]):
    plt.subplot(2, 2, i + 1)
    plt.title(title)
    plt.imshow(im, cmap=plt.cm.gray)
plt.show()
This results in a 2x2 plot of image, nbCameras, visibilityMask and imgFinal.
Option 2: Use Numba. This uses an advanced just-in-time optimization technique and is specifically useful for speeding up loops.
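A minimal sketch of the Numba route (the function name compute_final is mine; it simply JIT-compiles the question's double loop under the same variable names):
import numpy as np
from numba import njit

@njit
def compute_final(image, nbCameras, maskWidth=3):
    # hypothetical helper: mirrors the loop from the question, compiled with Numba
    r = (maskWidth - 1) // 2
    imgFinal = np.zeros_like(image)
    for j in range(r, image.shape[0] - r):
        for i in range(r, image.shape[1] - r):
            if nbCameras[j, i] > 1:
                imgFinal[j - r:j + r + 1, i - r:i + r + 1] = image[j - r:j + r + 1, i - r:i + r + 1]
    return imgFinal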
Another option is to work with strided views of the arrays. This doesn't handle cameras on the edge of the array, but neither does your code:
import numpy as np
from numpy.lib.stride_tricks import as_strided
rows, cols, mask_width = 10, 10, 3
mask_radius = mask_width // 2
image = np.random.rand(rows, cols)
nb_cameras = np.random.randint(3, size=(rows, cols))
# Sliding-window views over every valid (non-edge) 3x3 neighbourhood
view_shape = (rows - 2 * mask_radius, cols - 2 * mask_radius, mask_width, mask_width)
image_view = as_strided(image, shape=view_shape, strides=image.strides * 2)
img_final = np.zeros_like(image)
img_final_view = as_strided(img_final, shape=view_shape, strides=img_final.strides * 2)
copy_mask = nb_cameras[mask_radius:-mask_radius,
                       mask_radius:-mask_radius] > 1
img_final_view[copy_mask] = image_view[copy_mask]
After running the above code:
>>> nb_cameras
array([[0, 2, 1, 0, 2, 0, 1, 2, 1, 0],
[0, 1, 1, 1, 1, 2, 1, 1, 2, 1],
[1, 2, 2, 2, 1, 2, 1, 0, 2, 0],
[0, 2, 2, 0, 1, 2, 1, 0, 1, 0],
[1, 2, 0, 1, 2, 0, 1, 0, 0, 2],
[2, 0, 1, 1, 1, 1, 1, 1, 0, 1],
[1, 0, 2, 2, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 0, 1, 0, 1, 0, 2, 2],
[0, 1, 0, 1, 1, 2, 1, 1, 2, 2],
[2, 2, 0, 1, 0, 0, 1, 2, 1, 0]])
>>> np.round(img_final, 1)
array([[ 0. , 0. , 0. , 0. , 0.7, 0.5, 0.6, 0.5, 0.6, 0.9],
[ 0.1, 0.6, 1. , 0.2, 0.3, 0.6, 0. , 0.2, 0.9, 0.9],
[ 0.2, 0.3, 0.3, 0.5, 0.2, 0.3, 0.4, 0.1, 0.7, 0.5],
[ 0.9, 0.1, 0.7, 0.8, 0.2, 0.9, 0.9, 0.1, 0.3, 0.3],
[ 0.8, 0.8, 1. , 0.9, 0.2, 0.5, 1. , 0. , 0. , 0. ],
[ 0.2, 0.3, 0.5, 0.4, 0.6, 0.2, 0. , 0. , 0. , 0. ],
[ 0. , 0.2, 1. , 0.2, 0.8, 0. , 0. , 0.7, 0.9, 0.6],
[ 0. , 0.2, 0.9, 0.9, 0.3, 0.4, 0.6, 0.6, 0.3, 0.6],
[ 0. , 0. , 0. , 0. , 0.8, 0.8, 0.1, 0.7, 0.4, 0.4],
[ 0. , 0. , 0. , 0. , 0. , 0.5, 0.1, 0.4, 0.3, 0.9]])
Another option, to manage the edges, is to use a convolution function from scipy.ndimage:
import scipy.ndimage
mask = scipy.ndimage.convolve(nb_cameras > 1, np.ones((3, 3)),
                              mode='constant') != 0
img_final[mask] = image[mask]
>>> np.round(img_final, 1)
array([[ 0.6, 0.8, 0.7, 0.9, 0.7, 0.5, 0.6, 0.5, 0.6, 0.9],
[ 0.1, 0.6, 1. , 0.2, 0.3, 0.6, 0. , 0.2, 0.9, 0.9],
[ 0.2, 0.3, 0.3, 0.5, 0.2, 0.3, 0.4, 0.1, 0.7, 0.5],
[ 0.9, 0.1, 0.7, 0.8, 0.2, 0.9, 0.9, 0.1, 0.3, 0.3],
[ 0.8, 0.8, 1. , 0.9, 0.2, 0.5, 1. , 0. , 0.3, 0.8],
[ 0.2, 0.3, 0.5, 0.4, 0.6, 0.2, 0. , 0. , 0.7, 0.6],
[ 0.2, 0.2, 1. , 0.2, 0.8, 0. , 0. , 0.7, 0.9, 0.6],
[ 0. , 0.2, 0.9, 0.9, 0.3, 0.4, 0.6, 0.6, 0.3, 0.6],
[ 0.4, 1. , 0.8, 0. , 0.8, 0.8, 0.1, 0.7, 0.4, 0.4],
[ 0.9, 0.5, 0.8, 0. , 0. , 0.5, 0.1, 0.4, 0.3, 0.9]])
