I have a two-dimensional (2D) array that contains many one-dimensional (1D) arrays of random boolean values.
import numpy as np

def random_array_of_bools():
    return np.random.choice(a=[False, True], size=5)

boolean_arrays = np.array([
    random_array_of_bools(),
    random_array_of_bools(),
    # ... and so on
])
Assume that I have three arrays:
[True, False, True, True, False]
[False, True, True, True, True]
[True, True, True, False, False]
This is my desired result:
[False, False, True, False, False]
How can I achieve this with NumPy?
Use min with axis=0:
>>> boolean_arrays.min(axis=0)
array([False, False, True, False, False])
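For reference, a self-contained version that builds the three example rows from the question by hand (the name boolean_arrays is taken from the question's snippet):

import numpy as np

boolean_arrays = np.array([
    [True, False, True, True, False],
    [False, True, True, True, True],
    [True, True, True, False, False],
])

# Along axis 0, min treats False < True, so a column is True only if
# every row is True -- an element-wise logical AND down the columns.
print(boolean_arrays.min(axis=0))
# [False False  True False False]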
Use .all:
import numpy as np
arr = np.array([[True, False, True, True, False],
                [False, True, True, True, True],
                [True, True, True, False, False]])
res = arr.all(0)
print(res)
Output
[False False True False False]
Try NumPy's bitwise_and, where in_arr1, in_arr2, and in_arr3 are your three arrays:
out_arr = np.bitwise_and(np.bitwise_and(in_arr1, in_arr2), in_arr3)
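A minimal runnable sketch of the above, assuming the three in_arr variables hold the example rows from the question:

import numpy as np

in_arr1 = np.array([True, False, True, True, False])
in_arr2 = np.array([False, True, True, True, True])
in_arr3 = np.array([True, True, True, False, False])

# For boolean arrays, bitwise_and is an element-wise AND.
out_arr = np.bitwise_and(np.bitwise_and(in_arr1, in_arr2), in_arr3)
print(out_arr)
# [False False  True False False]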
I need all permutations of a bool array. The following code is inefficient, but does what I want:
from itertools import permutations
import numpy as np
n1=2
n2=3
a = np.array([True]*n1+[False]*n2)
perms = set(permutations(a))
However, it is inefficient and fails for long arrays. Is there a more efficient implementation?
What about enumerating the combinations of indices of the True values:
from itertools import combinations
import numpy as np

n1, n2 = 2, 3  # as in the question

a = np.arange(n1 + n2)
out = [np.isin(a, x).tolist() for x in combinations(range(n1 + n2), r=n1)]
Output:
[[True, True, False, False, False],
 [True, False, True, False, False],
 [True, False, False, True, False],
 [True, False, False, False, True],
 [False, True, True, False, False],
 [False, True, False, True, False],
 [False, True, False, False, True],
 [False, False, True, True, False],
 [False, False, True, False, True],
 [False, False, False, True, True]]
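As a rough sanity check (assuming n1 and n2 stay small enough for the original permutation approach to run at all), the two methods can be compared directly:

from itertools import combinations, permutations
import numpy as np

n1, n2 = 2, 3
a = np.arange(n1 + n2)

# Combination-based approach: one row per placement of the True values.
out = [np.isin(a, x).tolist() for x in combinations(range(n1 + n2), r=n1)]

# Original permutation-based approach, only feasible for small arrays.
perms = set(permutations([True] * n1 + [False] * n2))

assert set(map(tuple, out)) == perms
print(len(out))  # 10 == C(n1 + n2, n1); the permutation approach iterates over (n1 + n2)! = 120 tuples before deduplication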
I know that Numpy provides logical_and() which allows us to intersect two boolean arrays for True values only (True and True would yield True while True and False would yield False). For example,
a = np.array([True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
np.logical_and(a, b)
> array([False, False, False, True, False], dtype=bool)
However, I'm wondering how I can apply this to two subarrays in an overall array? For example, consider the array:
[[[ True, True], [ True, False]], [[ True, False], [False, True]]]
The two subarrays I'm looking to intersect are:
[[ True, True], [ True, False]]
and
[[ True, False], [False, True]]
which should yield:
[[ True, False], [False, False]]
Is there a way to specify that I want to apply logical_and() to the outermost subarrays to combine the two?
You can use .reduce() along the first axis:
>>> a = np.array([[[ True, True], [ True, False]], [[ True, False], [False, True]]])
>>> np.logical_and.reduce(a, axis=0)
array([[ True, False],
       [False, False]])
This works even when you have more than two "sub-arrays" in your outer array. I prefer this over the unpacking approach because it allows you to apply your function (np.logical_and) over any axis of your array.
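To illustrate those two points, a small sketch with a made-up three-block array, reducing first along axis 0 and then along axis 1:

import numpy as np

# Three 2x2 boolean blocks stacked along axis 0 (made-up example data).
a = np.array([[[True,  True ], [True,  False]],
              [[True,  False], [False, True ]],
              [[True,  True ], [True,  True ]]])

# AND all three blocks together element-wise.
print(np.logical_and.reduce(a, axis=0))
# [[ True False]
#  [False False]]

# The same idea along axis 1 ANDs the two rows within each block.
print(np.logical_and.reduce(a, axis=1))
# [[ True False]
#  [False False]
#  [ True  True]]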
If I understand your question correctly, you are looking to do:
import numpy as np

output = np.logical_and(a[0], a[1])
This simply indexes the two outermost sub-arrays so that you can use logical_and the way your desired result suggests.
I want to compare adjacent values in a (potentially multi-dimensional) bool numpy array such that if there are adjacent True values in a row, only the leftmost would be kept while the rest would be flipped to False. For example:
Input: [True, False, False, True]
Output: [True, False, False, True]
Input: [True, True, False, True]
Output: [True, False, False, True]
Input: [True, True, True, True]
Output: [True, False, False, False]
Is there an efficient (i.e. vectorized) way of achieving this in NumPy, SciPy, or TensorFlow?
You can compute the logical AND of the array with a right-shifted copy of itself; wherever both are True, flip the value to False:
a[np.concatenate(([False], a[:-1])) & a] = False
Testing:
a = np.array([True, True, True, True])
a[np.concatenate(([False], a[:-1])) & a] = False
a
# array([ True, False, False, False], dtype=bool)
a = np.array([True, True, False, True])
a[np.concatenate(([False], a[:-1])) & a] = False
a
# array([ True, False, False, True], dtype=bool)
a = np.array([True, False, False, True])
a[np.concatenate(([False], a[:-1])) & a] = False
a
# array([ True, False, False, True], dtype=bool)
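Since the question mentions potentially multi-dimensional input, here is a sketch of the same shift-and-clear idea applied row-wise to a 2-D array (assuming adjacency is meant along the last axis):

import numpy as np

a = np.array([[True, True,  False, True],
              [True, True,  True,  True],
              [True, False, False, True]])

# Shift each row one position to the right, padding the first column with False.
shifted = np.zeros_like(a)
shifted[:, 1:] = a[:, :-1]

# Clear every True whose left neighbour was also True.
a[shifted & a] = False
print(a)
# [[ True False False  True]
#  [ True False False False]
#  [ True False False  True]]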
For a 1-d array:
a = np.array([True, True, False, True])
b = np.diff(a)  # for boolean input, True where adjacent values differ
a[1:] = np.logical_and(a[1:], b)
>>> a
array([ True, False, False, True], dtype=bool)
Having the numpy arrays
a = np.array([ True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
how can I make the intersection of the two so that only the True values match? I can do something like:
a == b
array([False, False, False, True, True], dtype=bool)
but the last item is True (understandably because both are False), whereas I would like the result array to be True only in the 4th element, something like:
array([False, False, False, True, False], dtype=bool)
Numpy provides logical_and() for that purpose:
a = np.array([ True, False, False, True, False], dtype=bool)
b = np.array([False, True, True, True, False], dtype=bool)
c = np.logical_and(a, b)
# array([False, False, False, True, False], dtype=bool)
More at Numpy Logical operations.
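As a side note (not part of the original answer), for boolean arrays the & operator gives the same element-wise result:

import numpy as np

a = np.array([ True, False, False, True, False])
b = np.array([False, True,  True,  True, False])

# For boolean dtypes, & is an element-wise AND, equivalent to np.logical_and.
print(a & b)
# [False False False  True False]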
In numpy you can use the allclose(X, Y) function to check element-wise for approximate equality between two arrays. Moreover, with an expression like X==5 you can check element-wise equality between an array and a scalar.
Is there a function that combines both? That is, one that can compare an array and a scalar for approximate element-wise equality?
The term array or array-like in the numpy documentation mostly indicates that the input is converted to an array with np.asarray(in_arg) or np.asanyarray(in_arg). So if you pass in a scalar, it will be converted to a scalar (zero-dimensional) array:
>>> import numpy as np
>>> np.asarray(5) # or np.asanyarray
array(5)
The functions np.allclose and np.isclose simply do an element-wise comparison, whether the second argument is a scalar array, an array of the same shape, or an array that broadcasts against the first argument:
>>> import numpy as np
>>> arr = np.array([1,2,1,0,1.00001,0.9999999])
>>> np.allclose(arr, 1)
False
>>> np.isclose(arr, 1)
array([ True, False, True, False, True, True], dtype=bool)
>>> np.isclose(arr, np.ones((10, 6)))
array([[ True, False, True, False, True, True],
       [ True, False, True, False, True, True],
       [ True, False, True, False, True, True],
       [ True, False, True, False, True, True],
       [ True, False, True, False, True, True],
       [ True, False, True, False, True, True],
       [ True, False, True, False, True, True],
       [ True, False, True, False, True, True],
       [ True, False, True, False, True, True],
       [ True, False, True, False, True, True]], dtype=bool)
So there is no need for a separate function that explicitly handles scalars; np.allclose and np.isclose already work correctly with them.