I believe similar questions have been asked, but none deals with the problem I am facing.
I have an array of shape (K, H, W), i.e. K layers of an H x W grid. For each of the K layers I must filter out the values (their row, col locations) that meet a particular criterion (say val > t_r and val < t_c).
For example, take an array of shape (4, 2, 3), i.e. K = 4 layers:
A = [[[1, 2, 3], [3, 4, 5]],
     [[6, 7, 8], [1, 4, 5]],
     [[5, 7, 7], [9, 4, 3]],
     [[1, 2, 4], [4, 6, 7]]]
Suppose the first criterion is val > 2 and the second is val < 6, and we store the results in an N x 3 array, where the first two values of each row are the 'row' and 'col' and the last one is the layer index.
Then the expected output of the operation should be something like -
output = [[0,2,0],[1,0,0],[1,1,0],[1,2,0],...] - these rows correspond to the values filtered from layer 0 (A[0,:,:]).
One approach I have thought of is using three nested for loops (i, j, k) to visit each element, but I am unable to figure out the exact implementation. I would also like to use vectorization wherever possible. I could use some guidance.
You may use np.nonzero and vectorize your comparisons.
import numpy as np

a = np.asarray(A)
# Mask the values that satisfy both criteria; np.nonzero returns one
# index array per axis, and vstack(...).T stacks them into an N x 3 array
res = np.vstack(np.nonzero((a > 2) & (a < 6))).T
array([[0, 0, 2],
       [0, 1, 0],
       [0, 1, 1],
       [0, 1, 2],
       [1, 1, 1],
       [1, 1, 2],
       [2, 0, 0],
       [2, 1, 1],
       [2, 1, 2],
       [3, 0, 2],
       [3, 1, 0]], dtype=int64)
You can always reorder the columns to your liking, e.g. to (row, col, layer):
res[:, [1,2,0]]
array([[0, 2, 0],
       [1, 0, 0],
       [1, 1, 0],
       [1, 2, 0],
       [1, 1, 1],
       [1, 2, 1],
       [0, 0, 2],
       [1, 1, 2],
       [1, 2, 2],
       [0, 2, 3],
       [1, 0, 3]], dtype=int64)
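As a side note (my addition, not part of the original answer): np.argwhere does the stacking for you, so the same result can be written in one call. A minimal sketch:

import numpy as np

A = [[[1, 2, 3], [3, 4, 5]],
     [[6, 7, 8], [1, 4, 5]],
     [[5, 7, 7], [9, 4, 3]],
     [[1, 2, 4], [4, 6, 7]]]
a = np.asarray(A)

# np.argwhere(mask) is equivalent to np.vstack(np.nonzero(mask)).T
res = np.argwhere((a > 2) & (a < 6))
print(res[:, [1, 2, 0]])   # reorder to (row, col, layer)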
I have a numpy array and a list of valid values in that array:
import numpy as np
arr = np.array([[1,2,0], [2,2,0], [4,1,0], [4,1,0], [3,2,0], ... ])
valid = [1,4]
Is there a nice pythonic way to set all array values that are not in the list of valid values to zero, and to do it in-place? After this operation, the array should look like this:
[[1,0,0], [0,0,0], [4,1,0], [4,1,0], [0,0,0], ... ]
The following creates a copy of the array in memory, which is bad for large arrays:
arr = np.vectorize(lambda x: x if x in valid else 0)(arr)
It bugs me that, for now, I loop over each array element and set it to zero if it is not in the valid list.
Edit: I found an answer suggesting there is no in-place function to achieve this. Also, stop changing my whitespace; it's easier to see the changes in arr with it.
You can use np.place for an in-situ update -
np.place(arr, ~np.in1d(arr, valid), 0)   # zero everything not in `valid`
Sample run -
In [66]: arr
Out[66]:
array([[1, 2, 0],
       [2, 2, 0],
       [4, 1, 0],
       [4, 1, 0],
       [3, 2, 0]])

In [67]: np.place(arr, ~np.in1d(arr, valid), 0)

In [68]: arr
Out[68]:
array([[1, 0, 0],
       [0, 0, 0],
       [4, 1, 0],
       [4, 1, 0],
       [0, 0, 0]])
Along the same lines, np.put could also be used -
np.put(arr, np.where(~np.in1d(arr, valid))[0], 0)   # np.put takes flat indices
Sample run -
In [70]: arr
Out[70]:
array([[1, 2, 0],
       [2, 2, 0],
       [4, 1, 0],
       [4, 1, 0],
       [3, 2, 0]])

In [71]: np.put(arr, np.where(~np.in1d(arr, valid))[0], 0)

In [72]: arr
Out[72]:
array([[1, 0, 0],
       [0, 0, 0],
       [4, 1, 0],
       [4, 1, 0],
       [0, 0, 0]])
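A quick note on the difference between the two (my addition): np.place consumes a boolean mask over the flattened array, while np.put consumes flat integer indices, so both can be driven by the same in1d mask:

import numpy as np

arr = np.array([[1, 2, 0], [2, 2, 0], [4, 1, 0], [4, 1, 0], [3, 2, 0]])
valid = [1, 4]

mask = ~np.in1d(arr, valid)              # flat boolean mask of invalid entries
np.place(arr, mask, 0)                   # boolean-mask flavour
# np.put(arr, np.flatnonzero(mask), 0)   # equivalent flat-index flavour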
Indexing with booleans would work too:
>>> arr = np.array([[1, 2, 0], [2, 2, 0], [4, 1, 0], [4, 1, 0], [3, 2, 0]])
>>> arr[~np.in1d(arr, valid).reshape(arr.shape)] = 0
>>> arr
array([[1, 0, 0],
       [0, 0, 0],
       [4, 1, 0],
       [4, 1, 0],
       [0, 0, 0]])
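For what it's worth (my addition): NumPy 1.13+ also provides np.isin, which preserves the input's shape, so the boolean-indexing version needs no reshape:

import numpy as np

arr = np.array([[1, 2, 0], [2, 2, 0], [4, 1, 0], [4, 1, 0], [3, 2, 0]])
valid = [1, 4]

# np.isin returns a mask with arr's shape, unlike np.in1d which flattens
arr[~np.isin(arr, valid)] = 0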
I have an array with coordinates of N points. Another array contains the masses of these N points.
>>> import numpy as np
>>> N = 10
>>> xyz = np.random.randint(0, 2, (N, 3))
>>> mass = np.random.rand(len(xyz))
>>> xyz
array([[1, 0, 1],
       [1, 1, 0],
       [0, 1, 1],
       [0, 0, 0],
       [0, 1, 0],
       [1, 1, 0],
       [1, 0, 1],
       [0, 0, 1],
       [1, 0, 1],
       [0, 0, 1]])
>>> mass
array([ 0.38668401, 0.44385111, 0.47756182, 0.74896529, 0.20424403,
        0.21828435, 0.98937523, 0.08736635, 0.24790248, 0.67759276])
Now I want to obtain an array with unique values of xyz and a corresponding array of summed up masses. That means the following arrays:
>>> xyz_unique
array([[0, 1, 1],
       [1, 1, 0],
       [0, 0, 1],
       [1, 0, 1],
       [0, 0, 0],
       [0, 1, 0]])
>>> mass_unique
array([ 0.47756182, 0.66213546, 0.76495911, 1.62396172, 0.74896529,
        0.20424403])
My attempt was the following code with a double for-loop:
>>> xyz_unique = np.array(list(set(tuple(p) for p in xyz)))
>>> mass_unique = np.zeros(len(xyz_unique))
>>> for j in np.arange(len(xyz_unique)):
...     indices = np.array([], dtype=np.int64)
...     for i in np.arange(len(xyz)):
...         if np.all(xyz[i] == xyz_unique[j]):
...             indices = np.append(indices, i)
...     mass_unique[j] = np.sum(mass[indices])
The problem is that this takes too long, I actually have N=100000.
Is there a faster way or how could I improve my code?
EDIT: My coordinates are actually floats. To keep things simple, I used random integers here so there would be duplicates at low N.
Case 1: Binary numbers in xyz
If the elements in the input array xyz were 0's and 1's, you could convert each row into an equivalent decimal number, then label each row according to which unique decimal value it holds. Then, based on those labels, you can use np.bincount to accumulate the summations, just like accumarray in MATLAB. Here's the implementation to achieve all that -
import numpy as np
# Input arrays xyz and mass
xyz = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [1, 0, 1],
    [0, 0, 1]])
mass = np.array([0.38668401, 0.44385111, 0.47756182, 0.74896529, 0.20424403,
                 0.21828435, 0.98937523, 0.08736635, 0.24790248, 0.67759276])
# Convert each row entry in xyz into equivalent decimal numbers
dec_num = np.dot(xyz,2**np.arange(xyz.shape[1])[:,None])
# Get indices of the first occurrences of the unique values and also label each row
_, unq_idx,row_labels = np.unique(dec_num, return_index=True, return_inverse=True)
# Find unique rows from xyz array
xyz_unique = xyz[unq_idx,:]
# Accumulate the summations from mass based on the row labels
mass_unique = np.bincount(row_labels, weights=mass)
Output -
In [148]: xyz_unique
Out[148]:
array([[0, 0, 0],
       [0, 1, 0],
       [1, 1, 0],
       [0, 0, 1],
       [1, 0, 1],
       [0, 1, 1]])

In [149]: mass_unique
Out[149]:
array([ 0.74896529, 0.20424403, 0.66213546, 0.76495911, 1.62396172,
        0.47756182])
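To make the encoding concrete (my illustration, not part of the original answer): with weights 2**[0, 1, 2] = [1, 2, 4], each row of the sample maps to a single code, and equal codes mark duplicate rows:

# dec_num for the sample xyz above, row by row:
# [1,0,1] -> 5, [1,1,0] -> 3, [0,1,1] -> 6, [0,0,0] -> 0, [0,1,0] -> 2,
# [1,1,0] -> 3, [1,0,1] -> 5, [0,0,1] -> 4, [1,0,1] -> 5, [0,0,1] -> 4
print(dec_num.ravel())   # [5 3 6 0 2 3 5 4 5 4]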
Case 2: Generic
For a general case, you can use this -
import numpy as np
# Perform lex sort and get the sorted indices
sorted_idx = np.lexsort(xyz.T)
sorted_xyz = xyz[sorted_idx,:]
# Differentiation along rows for sorted array
df1 = np.diff(sorted_xyz,axis=0)
df2 = np.append([True],np.any(df1!=0,1),0)
# Get unique sorted labels
sorted_labels = df2.cumsum(0)-1
# Get labels
labels = np.zeros_like(sorted_idx)
labels[sorted_idx] = sorted_labels
# Get unique indices
unq_idx = sorted_idx[df2]
# Get unique xyz's and the mass counts using accumulation with bincount
xyz_unique = xyz[unq_idx,:]
mass_unique = np.bincount(labels, weights=mass)
Sample run -
In [238]: xyz
Out[238]:
array([[1, 2, 1],
       [1, 2, 1],
       [0, 1, 0],
       [1, 0, 1],
       [2, 1, 2],
       [2, 1, 1],
       [0, 1, 0],
       [1, 0, 0],
       [2, 1, 0],
       [2, 0, 1]])

In [239]: mass
Out[239]:
array([ 0.5126308 , 0.69075674, 0.02749734, 0.384824  , 0.65151772,
        0.77718427, 0.18839268, 0.78364902, 0.15962722, 0.09906355])

In [240]: xyz_unique
Out[240]:
array([[1, 0, 0],
       [0, 1, 0],
       [2, 1, 0],
       [1, 0, 1],
       [2, 0, 1],
       [2, 1, 1],
       [1, 2, 1],
       [2, 1, 2]])

In [241]: mass_unique
Out[241]:
array([ 0.78364902, 0.21589002, 0.15962722, 0.384824  , 0.09906355,
        0.77718427, 1.20338754, 0.65151772])
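As a further option (my addition; requires NumPy 1.13+): np.unique grew an axis argument that collapses the generic case into two lines. Its unique rows come back lexicographically sorted, so the row order differs from the answer above, and exact-duplicate floats are handled the same way:

import numpy as np

xyz = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 0, 1]])   # toy stand-in
mass = np.array([0.4, 0.3, 0.2, 0.1])

xyz_unique, labels = np.unique(xyz, axis=0, return_inverse=True)
# ravel() guards against NumPy versions where the inverse is not 1-D
mass_unique = np.bincount(labels.ravel(), weights=mass)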
I have a 2D numpy array and a tuple of indices:
a = array([[0, 0], [0, 1], [1, 0], [1, 1]])
ix = (2, 0, 3, 1)
How can I sort array's rows by the indices? Expected result:
array([[1, 0], [0, 0], [1, 1], [0, 1]])
I tried using numpy.take, but it works as I expect only with 1D arrays.
You can in fact use ndarray.take() for this. The trick is to supply the second argument (axis):
>>> a.take(ix, 0)
array([[1, 0],
       [0, 0],
       [1, 1],
       [0, 1]])
(Without axis, the array is flattened before elements are taken.)
Alternatively:
>>> a[ix, ...]
array([[1, 0],
       [0, 0],
       [1, 1],
       [0, 1]])
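One detail worth spelling out (my note, not part of the answer): because ix is a tuple, a plain a[ix] is parsed as a four-axis index and raises an IndexError on a 2-D array; the trailing ... (or converting ix to a list or array) is what turns it into row selection:

import numpy as np

a = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
ix = (2, 0, 3, 1)

# a[ix] would raise IndexError ("too many indices") on this 2-D array
print(a[list(ix)])   # row selection; same result as a.take(ix, 0)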
Suppose I have the following numpy arrays:
>>> a
array([[0, 0, 2],
       [2, 0, 1],
       [2, 2, 1]])
>>> b
array([[2, 2, 0],
       [2, 0, 2],
       [1, 1, 2]])
that I then stack along a third (depth) axis
c = np.dstack((a, b))
resulting in:
>>> c
array([[[0, 2],
        [0, 2],
        [2, 0]],

       [[2, 2],
        [0, 0],
        [1, 2]],

       [[2, 1],
        [2, 1],
        [1, 2]]])
From this I wish, for each position along the first two dimensions of c, to check which of the combinations below is present in the length-2 subarray along the third dimension, and then number that position accordingly with the (1-based) index of the list match. I've tried the following, but it is not working. The algorithm is simple enough with double for-loops, but because c is very large, that is prohibitively slow.
classes=[(0,0),(2,1),(2,2)]
out=np.select( [h==c for h in classes], range(len(classes)), default=-1)
My desired output would be
out = [[-1, -1, -1],
       [ 3,  1, -1],
       [ 2,  2, -1]]
How about this:
matches = np.array([np.array(h) == c for h in classes]).all(axis=-1)  # one (3, 3) layer per class
# no match -> 0 - 1 = -1; class k scores 2 + k, so max - 1 gives k + 1 (1-based)
out = (matches * (2 + np.arange(len(classes)))[:, None, None]).max(axis=0) - 1
out then contains what you actually need:
array([[-1, -1, -1],
       [ 3,  1, -1],
       [ 2,  2, -1]])
You can test the a and b arrays separately like this:
clsa = (0, 2, 2)
clsb = (0, 1, 2)
np.select([(ca == a) & (cb == b) for ca, cb in zip(clsa, clsb)], range(3), default=-1)
which gets your desired result (except it returns 0, 1, 2 instead of 1, 2, 3).
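If the 1-based numbering matters, a minimal tweak (my addition) is to shift the choice list:

np.select([(ca == a) & (cb == b) for ca, cb in zip(clsa, clsb)], range(1, 4), default=-1)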
Here is another way to get what you want, thought I would post it in case it's useful to anyone.
import numpy as np
a = np.array([[0, 0, 2],
              [2, 0, 1],
              [2, 2, 1]])
b = np.array([[2, 2, 0],
              [2, 0, 2],
              [1, 1, 2]])
classes = [(0, 0), (2, 1), (2, 2)]

# Pack a and b into one structured array so each (a, b) pair compares
# as a single element
c = np.empty(a.shape, dtype=[('a', a.dtype), ('b', b.dtype)])
c['a'] = a
c['b'] = b

classes = np.array(classes, dtype=c.dtype)
classes.sort()          # searchsorted requires sorted input

out = classes.searchsorted(c)
# searchsorted can return len(classes) for values past the last class;
# clip so the indexing below stays in bounds (those entries end up -1)
out = np.minimum(out, len(classes) - 1)
out = np.where(c == classes[out], out + 1, -1)
print(out)
#[[-1 -1 -1]
# [ 3  1 -1]
# [ 2  2 -1]]