So I have two 2D numpy arrays of equal size, both obtained using the pygame.surfarray.array3d method on two different surfaces.
Each value in the array is also an array in the form [a, b, c] (so I basically have a 2D array with 1D elements).
I'd essentially like to compare the two based on the condition:
if any(val1 != val2) and all(val1 != [0, 0, 0]):
    # values can't be equal and val1 can't be [0, 0, 0]
Is there any more efficient way of doing this without simply iterating through either array as shown below?
for y in range(len(array1)):
    for x in range(len(array1[y])):
        val1 = array1[y, x]; val2 = array2[y, x]
        if any(val1 != val2) and all(val1 != [0, 0, 0]):
            # do something
import numpy as np

if np.any(array1 != array2) and not np.any(np.all(array1 == 0, axis=-1)):
    ...  # do something
np.any(array1 != array2) compares each element of the "big" 3D array. This is, however, equivalent to comparing val1 to val2 for every x and y.
The other condition, not np.any(np.all(array1 == 0, axis=-1)), is a little bit more complicated. The innermost np.all(array1 == 0, axis=-1) creates a 2D array of boolean values; each value is set to True or False depending on whether all values in the last dimension are 0. The outer np.any checks whether any of the values in the 2D array are True, which would mean there is an element array1[y, x] equal to [0, 0, 0].
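If the goal is a per-pixel result rather than a single yes/no answer, the same comparisons can produce a boolean mask instead (a minimal sketch, assuming both arrays have the (width, height, 3) shape returned by array3d; the name mask is introduced here for illustration):
import numpy as np

# True where the pixels differ and every channel of val1 is non-zero,
# mirroring the original per-pixel condition
mask = np.any(array1 != array2, axis=-1) & np.all(array1 != 0, axis=-1)
xs, ys = np.nonzero(mask)  # coordinates of the pixels to "do something" with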
I have two arrays: mp of shape (240, 480), and pop_fts_norm, which is an array of length N whose elements are arrays of shape (240, 480). I want to create an array of length N with the values from mp, but only where the corresponding element of x is != 0, for each array x in pop_fts_norm.
Inefficient code:
full_mp = np.array([np.array([mp[i][j] if (x != 0)[i][j] is True else 0 for i in range(240) for j in range(480)]).reshape(240, 480) for x in pop_fts_norm], dtype='float32')
full_mp.shape
This works, but slowly. How can it be written more efficiently, in a Pythonic style?
Thanks,
Petru
full_mp = np.array([np.where(x != 0, mp, 0) for x in pop_fts_norm])
full_mp.shape
Found a solution.
You can transform the matrices in the array into 0s and 1s, and after that you can just multiply mp element-wise with each matrix to get the result:
for x in pop_fts_norm:
    x[x != 0] = 1

full_mp = np.array([np.multiply(mp, x) for x in pop_fts_norm])
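If pop_fts_norm is already a single 3D NumPy array of shape (N, 240, 480) rather than a list of 2D arrays (an assumption based on the description above), the Python loop can be dropped entirely and mp broadcast against it:
# mp broadcasts from (240, 480) against pop_fts_norm's (N, 240, 480)
full_mp = np.where(pop_fts_norm != 0, mp, 0).astype('float32')
full_mp.shape  # (N, 240, 480)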
Suppose that I have a 3D numpy array where at each canvas[row, col] there is another numpy array in the format [R, G, B, A]. I want to check if the numpy array at canvas[row, col] is equal to another numpy array [0, 0, 0, 240~255], where the last element is a range of values that will be accepted as "equal". For example, both [0, 0, 0, 242] and [0, 0, 0, 255] would pass this check. Below, I have it so that it only accepts the latter case.
(canvas[row,col] == np.array([0,0,0,255])).all()
How might I write this condition so it does as I described previously?
You can compare slices:
(
(canvas[row, col, :3] == 0).all() # checking that color is [0, 0, 0]
and
(canvas[row, col, 3] >= 240) # checking that alpha >= 240
)
Also, if you need to check this on a lot of values, you can optimize it with vectorization, producing a 2D array of boolean values:
np.logical_and(
(canvas[..., :3] == 0).all(axis=-1), # checking that color is [0, 0, 0]
(canvas[..., 3] >= 240) # checking that alpha >= 240
)
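Calling that 2D boolean result mask (a name introduced here just for illustration), it can then be used, for example, to locate or count the matching pixels:
matching = np.argwhere(mask)  # (row, col) index pairs of pixels that match
count = mask.sum()            # number of matching pixels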
If I have a 2D array of shape (n,n), a given point (x,y) and a value v, I want to find the length of the longest sub-array of v among all the sub-arrays (row, column and both diagonals) that go through that point, e.g.:
n=3
x=1
y=1
v=2
array = [
[1,0,2],
[2,2,0],
[2,1,2]
]
Then the following arrays would be checked:
[2,2,0] (horizontal) returns 2
[0,2,1] (vertical) returns 1
[1,2,2] (left diag) returns 2
[2,2,2] (right diag) returns 3
If you have an (n*n) matrix, you can use np.diag to get the values on the main diagonal of the matrix. np.fliplr can be combined with np.diag to get the values on the reverse diagonal. The specified column and row can be obtained by indexing into the NumPy array. Satisfaction of the condition can be checked with == v, which gives a Boolean array with True where the value equals v. Summing the Boolean array counts the Trues, so it gives the number of occurrences of v in that flattened sub-array:
array = np.array([[1, 0, 2],
[2, 2, 0],
[2, 1, 2]])
v = 2
x = 1
y = 1
horz = (array[x, :] == v).sum() # --> 2
vert = (array[:, y] == v).sum() # --> 1
diag = (np.diag(array) == v).sum() # --> 2
reversed_diag = (np.diag(np.fliplr(array)) == v).sum() # --> 3
This works on your prepared (n*n) array because x and y are the center of the matrix; in general, the diagonals passing through a given point can be selected by np.diag using the parameter k, once we know which diagonals pass through that point. There is a good example in this regard on SO.
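As a sketch of that relationship (continuing from the variables above, and taking x as the row index and y as the column index): the main-type diagonal through (x, y) has offset k = y - x, and the anti-diagonal through (x, y) is the diagonal with offset (size - 1 - y) - x of the left-right-flipped array:
size = array.shape[0]

# diagonal and anti-diagonal passing through the point (x, y)
diag_vals = np.diag(array, k=y - x)                                  # [1, 2, 2]
reverse_diag_vals = np.diag(np.fliplr(array), k=(size - 1 - y) - x)  # [2, 2, 2]

diag = (diag_vals == v).sum()                   # --> 2
reversed_diag = (reverse_diag_vals == v).sum()  # --> 3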
As another alternative method, it may be possible to extract the arrays for the two diagonals passing through the point, diag and reverse_diag, by the following slicing relations. The correctness of the resulting arrays should be checked further:
a_mod = array.ravel()
size = array.shape[0]

if y >= x:
    diag = a_mod[y-x:(x+size-y)*size:size+1]
else:
    diag = a_mod[(x-y)*size::size+1]

if x-(size-1-y) >= 0:
    reverse_diag = array[:, ::-1].ravel()[(x-(size-1-y))*size::size+1]
else:
    reverse_diag = a_mod[x+y:(x+y)*size+1:size-1]

diag = (diag == v).sum()
reverse_diag = (reverse_diag == v).sum()
I think these solutions will help solve the problem. If other results are needed, the OP should clarify them for further work.
I have a very simple task that I cannot figure out how to do in numpy. I have a 3 channel array and wherever the array value does not equal (1,1,1) I want to convert that array value to (0,0,0).
So the following:
[[0,1,1],
[1,1,1],
[1,0,1]]
Should change to:
[[0,0,0],
[1,1,1],
[0,0,0]]
How can I achieve this in numpy? The following is not achieving the desired results:
# my_arr.dtype = uint8
my_arr[my_arr != (1,1,1)] = 0
my_arr = np.where(my_arr == (1,1,1), my_arr, (0,0,0))
Use (arr == 1).all(1) to filter the rows and assign 0:
import numpy as np
arr = np.array([[0,1,1],
[1,1,1],
[1,0,1]])
arr[~(arr == 1).all(1)] = 0
Output:
array([[0, 0, 0],
[1, 1, 1],
[0, 0, 0]])
Explanation:
arr == 1: returns an array of bools that satisfy the condition (here it's 1)
.all(axis=1): returns an array of bools telling whether each row is all True (i.e. rows that are all 1)
~(arr == 1).all(1): selects the rows that are not all 1
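To see the intermediate steps on the example array above:
arr == 1
# array([[False,  True,  True],
#        [ True,  True,  True],
#        [ True, False,  True]])
(arr == 1).all(axis=1)
# array([False,  True, False])
~(arr == 1).all(axis=1)
# array([ True, False,  True])   # rows that will be set to 0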
This can also be done by just comparing the rows as lists:
x = [[0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]

for i in range(len(x)):
    if x[i] != [1, 1, 1]:
        x[i] = [0, 0, 0]
Considering the 3 arrays below:
np.random.seed(0)
X = np.random.randint(10, size=(4,5))
W = np.random.randint(10, size=(3,4))
y = np.random.randint(3, size=(5,1))
I want to add each column of the matrix X to the row of W given by y as index. So, for example, if the first element in y is 3, I add the first column of X to the fourth row of W (index 3 in Python). I do this over and over until all columns of X have been added to their specific rows of W.
I could do it in different ways:
1 - using a for loop:
for i, j in enumerate(y):
    W[j] += X[:, i]
2 - using the np.add.at function:
np.add.at(W, y.ravel(), X.T)
3 - using einsum, but I can't understand how to do it.
I was given a solution, but I really can't understand it.
N = y.max()+1
W[:N] += np.einsum('ijk,lk->il',(np.arange(N)[:,None,None] == y.ravel()),X)
Could anyone explain this structure to me?
1 - (np.arange(N)[:, None, None] == y.ravel()): I imagine this part refers to summing the columns of X into the specific rows of W, according to y. But where is W? And why do we have to add extra dimensions in this case?
2 - 'ijk,lk->il': I didn't understand this either.
i - refers to the rows,
j - columns,
k - each element,
l - what does 'l' refer to?
If anyone can understand this and explain it to me, I would really appreciate it.
Thanks in advance.
Let's simplify the problem by dropping one dimension and using values that are easy to verify manually:
W = np.zeros(3, dtype=int)
y = np.array([0, 1, 1, 2, 2])
X = np.array([1, 2, 3, 4, 5])
Values in the vector W get added values from X by looking up with y:
for i, j in enumerate(y):
    W[j] += X[i]
W is calculated as [1, 5, 9], (check quickly by hand).
Now, how could this code be vectorized? We can't do a simple W[y] += X, as y has duplicate values in it and the different additions would overwrite each other at indices 1 and 2 instead of accumulating.
What could be done is to broadcast the values into a new dimension of len(y) and then sum up over this newly created dimension.
N = W.shape[0]
select = (np.arange(N) == y[:, None]).astype(int)
This takes the index range of W ([0, 1, 2]), compares it against y broadcast along a new dimension, and yields 1 where they match and 0 otherwise. select contains this array:
array([[1, 0, 0],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[0, 0, 1]])
It has len(y) == len(X) rows and len(W) columns and shows for every y/row, what index of W it contributes to.
Let's multiply X with this array, mult = select * X[:, None]:
array([[1, 0, 0],
[0, 2, 0],
[0, 3, 0],
[0, 0, 4],
[0, 0, 5]])
We have effectively spread out X into a new dimension, and sorted it in a way we can get it into shape W by summing over the newly created dimension. The sum over the rows is the vector we want to add to W:
sum_Xy = np.sum(mult, axis=0) # [1, 5, 9]
W += sum_Xy
The computation of select and mult can be combined with np.einsum:
# `select` has shape (len(y)==len(X), len(W)), or `yw`
# `X` has shape len(X)==len(y), or `y`
# we want something `len(W)`, or `w`, and to reduce the other dimension
sum_Xy = np.einsum("yw,y->w", select, X)
And that's it for the one-dimensional example. For the two-dimensional problem posed in the question it is exactly the same approach: introduce an additional dimension, broadcast the y indices, and then reduce the additional dimension with einsum.
If you internalize how every step works for the one-dimensional example, I'm sure you can work out how the code is doing it in two dimensions, as it is just a matter of getting the indices right (W rows, X columns).
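As a quick sanity check (a sketch reusing the arrays from the question), the einsum expression can be compared against the np.add.at version; both produce the same W:
import numpy as np

np.random.seed(0)
X = np.random.randint(10, size=(4, 5))
W = np.random.randint(10, size=(3, 4))
y = np.random.randint(3, size=(5, 1))

# reference: scatter-add each column of X into the row of W chosen by y
W_ref = W.copy()
np.add.at(W_ref, y.ravel(), X.T)

# einsum version: broadcast y against np.arange(N), then reduce the extra axes
N = y.max() + 1
W_ein = W.copy()
W_ein[:N] += np.einsum('ijk,lk->il', (np.arange(N)[:, None, None] == y.ravel()), X)

print(np.array_equal(W_ref, W_ein))  # True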