Getting indices of True values in a boolean list - python

I have a piece of my code where I'm supposed to create a switchboard. I want to return a list of all the switches that are on. Here "on" will equal True and "off" will equal False. So now I just want to return a list of all the True values and their positions. This is all I have, but it only returns the position of the first occurrence of True (this is just a portion of my code):
self.states = [False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]

def which_switch(self):
    x = [self.states.index(i) for i in self.states if i == True]
This only returns 4.

Use enumerate; list.index returns only the index of the first match found.
>>> t = [False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]
>>> [i for i, x in enumerate(t) if x]
[4, 5, 7]
For huge lists, it'd be better to use itertools.compress (this session is Python 2; in Python 3, replace xrange with range):
>>> from itertools import compress
>>> list(compress(xrange(len(t)), t))
[4, 5, 7]
>>> t = t*1000
>>> %timeit [i for i, x in enumerate(t) if x]
100 loops, best of 3: 2.55 ms per loop
>>> %timeit list(compress(xrange(len(t)), t))
1000 loops, best of 3: 696 µs per loop
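For reference, a minimal Python 3 equivalent of the compress approach (xrange was removed in Python 3; range is already lazy there):
from itertools import compress

t = [False, False, False, False, True, True, False, True,
     False, False, False, False, False, False, False, False]

# compress() yields the items of range(len(t)) whose matching
# element of t is truthy, i.e. the indices of the True values
print(list(compress(range(len(t)), t)))  # [4, 5, 7]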

If you have numpy available:
>>> import numpy as np
>>> states = [False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]
>>> np.where(states)[0]
array([4, 5, 7])
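If you want a plain Python list rather than an array, note that np.where called with a single argument returns a tuple of index arrays (one per dimension), hence the [0]; a minimal sketch:
import numpy as np

states = [False, False, False, False, True, True, False, True,
          False, False, False, False, False, False, False, False]

# np.where(condition) with no x/y arguments behaves like np.nonzero
indices = np.where(states)[0].tolist()
print(indices)  # [4, 5, 7]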

TL;DR: use np.where, as it is the fastest option in this benchmark. Your options are np.where, itertools.compress, and a list comprehension.
See the detailed comparison below, where np.where outperforms both itertools.compress and the list comprehension.
>>> from itertools import compress
>>> import numpy as np
>>> t = [False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]
>>> t = 1000*t
Method 1: Using list comprehension
>>> %timeit [i for i, x in enumerate(t) if x]
457 µs ± 1.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Method 2: Using itertools.compress
>>> %timeit list(compress(range(len(t)), t))
210 µs ± 704 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Method 3 (the fastest method): Using numpy.where
>>> %timeit np.where(t)
179 µs ± 593 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
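One caveat when comparing against the other two methods: np.where(t) returns a tuple of index arrays rather than a list, so a fully equivalent pipeline would also index and convert, which adds overhead the timing above does not include:
# np.where(t) -> (array([4, 5, 7, ...]),)
indices = np.where(t)[0].tolist()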

Using element-wise multiplication and a set (note multiply comes from numpy, so it needs an import):
>>> import numpy as np
>>> states = [False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]
>>> set(np.multiply(states, range(1, len(states) + 1)) - 1).difference({-1})
{4, 5, 7}
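How this works, step by step with the same data: multiplying the boolean list by range(1, n + 1) keeps i + 1 wherever states[i] is True and yields 0 wherever it is False; subtracting 1 shifts those back to the real indices and maps every False position to -1, which the final difference({-1}) discards.
import numpy as np

states = [False, False, False, False, True, True, False, True,
          False, False, False, False, False, False, False, False]

step1 = np.multiply(states, range(1, len(states) + 1))  # 0 where False, i + 1 where True
step2 = step1 - 1                                       # -1 where False, i where True
true_indices = set(step2).difference({-1})              # the indices 4, 5 and 7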

You can use filter for it:
filter(lambda x: self.states[x], range(len(self.states)))
The range here enumerates the indices of your list, and since we want only those where self.states is True, we apply a filter based on this condition.
In Python 3, filter returns a lazy iterator, so wrap it in list:
list(filter(lambda x: self.states[x], range(len(self.states))))
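A quick sketch of the same idea outside a class, assuming the states list from the question:
states = [False, False, False, False, True, True, False, True,
          False, False, False, False, False, False, False, False]

# range(len(states)) supplies candidate indices; filter keeps the
# indices x for which states[x] is truthy
print(list(filter(lambda x: states[x], range(len(states)))))  # [4, 5, 7]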

Using a dictionary comprehension:
x = {k: v for k, v in enumerate(states) if v}
Input:
states = [False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]
Output:
{4: True, 5: True, 7: True}
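If you only need the indices, the keys of that dict give them directly; a minimal sketch:
states = [False, False, False, False, True, True, False, True,
          False, False, False, False, False, False, False, False]

on = {k: v for k, v in enumerate(states) if v}
print(list(on))  # [4, 5, 7] -- iterating a dict yields its keys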

Simply do this:
def which_index(self):
    return [i for i in range(len(self.states)) if self.states[i]]

I got different benchmark results compared to @meysham's answer. In this test, compress seems the fastest (Python 3.7).
from itertools import compress
import numpy as np
t = [True, False, False, False, False, True, True, False, True, False, False, False, False, False, False, False, False]
%timeit [i for i, x in enumerate(t) if x]
%timeit list(compress(range(len(t)), t))
%timeit list(filter(lambda x: t[x], range(len(t))))
%timeit np.where(t)[0]
# 2.54 µs ± 400 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# 2.67 µs ± 600 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# 6.22 µs ± 624 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# 6.52 µs ± 768 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
t = 1000*t
%timeit [i for i, x in enumerate(t) if x]
%timeit list(compress(range(len(t)), t))
%timeit list(filter(lambda x: t[x], range(len(t))))
%timeit np.where(t)[0]
# 1.68 ms ± 112 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 947 µs ± 105 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# 3.96 ms ± 97 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 2.14 ms ± 45.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

You can filter using a boolean mask array with square brackets; it's faster than np.where:
>>> states = [True, False, False, True]
>>> np.arange(len(states))[states]
array([0, 3])
>>> size = 1_000_000
>>> states = np.arange(size) % 2 == 0
>>> states
array([ True, False, True, ..., False, True, False])
>>> true_index = np.arange(size)[states]
>>> len(true_index)
500000
>>> true_index
array([ 0, 2, 4, ..., 999994, 999996, 999998])
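For completeness, numpy also ships np.flatnonzero, which returns the indices of the truthy elements of the flattened input in a single call:
import numpy as np

states = np.array([True, False, False, True])

# equivalent to np.nonzero(np.ravel(states))[0]
print(np.flatnonzero(states))  # [0 3]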

Related

Vectorize simple for loop in numpy

I'm pretty new to numpy and I'm trying to vectorize a simple for loop for performance reasons, but I can't seem to come up with a solution. I have a numpy array of unique words, and for each of these words I need the number of times it occurs in another numpy array, called array_to_compare. The counts are passed to a third numpy array, which has the same shape as the unique-words array.
Here is the code which contains the for loop:
import numpy as np
unique_words = np.array(['a', 'b', 'c', 'd'])
array_to_compare = np.array(['a', 'b', 'a', 'd'])
vector_array = np.zeros(len(unique_words))
for word in np.nditer(unique_words):
    counter = np.count_nonzero(array_to_compare == word)
    vector_array[np.where(unique_words == word)] = counter

# vector_array = [2. 1. 0. 1.]  (the desired output)
I tried it with np.where and np.isin, but did not get the desired result. I am thankful for any help!
I'd probably use a Counter and a list comprehension to solve this:
In [1]: import numpy as np
...:
...: unique_words = np.array(['a', 'b', 'c', 'd'])
...: array_to_compare = np.array(['a', 'b', 'a', 'd'])
In [2]: from collections import Counter
In [3]: counter = Counter(array_to_compare)
In [4]: counter
Out[4]: Counter({'a': 2, 'b': 1, 'd': 1})
In [5]: vector_array = np.array([counter[key] for key in unique_words])
In [6]: vector_array
Out[6]: array([2, 1, 0, 1])
Assembling the Counter is done in linear time and iterating through your unique_words is also linear.
A numpy comparison of array values using broadcasting:
In [76]: unique_words[:,None]==array_to_compare
Out[76]:
array([[ True, False,  True, False],
       [False,  True, False, False],
       [False, False, False, False],
       [False, False, False,  True]])
In [77]: (unique_words[:,None]==array_to_compare).sum(1)
Out[77]: array([2, 1, 0, 1])
In [78]: timeit (unique_words[:,None]==array_to_compare).sum(1)
9.5 µs ± 2.79 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
But Counter is also a good choice:
In [72]: %%timeit
    ...: c = Counter(array_to_compare)
    ...: [c[key] for key in unique_words]
12.7 µs ± 30.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Your use of count_nonzero can be improved with
In [73]: %%timeit
    ...: words = unique_words.tolist()
    ...: vector_array = np.zeros(len(words))
    ...: for i, word in enumerate(words):
    ...:     counter = np.count_nonzero(array_to_compare == word)
    ...:     vector_array[i] = counter
    ...:
23.4 µs ± 505 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Iteration over lists is faster than over arrays (nditer doesn't add much), and enumerate lets us skip the np.where lookup.
Similar to @DanielLenz's answer, but using np.unique to create a dict:
import numpy as np
unique_words = np.array(['a', 'b', 'c', 'd'])
array_to_compare = np.array(['a', 'b', 'a', 'd'])
counts = dict(zip(*np.unique(array_to_compare, return_counts=True)))
result = np.array([counts[word] if word in counts else 0 for word in unique_words])
print(result)  # [2 1 0 1]
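A small variation on the same idea: dict.get with a default collapses the conditional (same output, just a stylistic alternative):
result = np.array([counts.get(word, 0) for word in unique_words])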

How to obtain only the first True value from each row of a numpy array?

I have a 4x3 boolean numpy array, and I'm trying to return a same-sized array which is all False, except for the location of the first True value on each row of the original. So if I have a starting array of
all_bools = np.array([[False, True, True],
                      [True, True, True],
                      [False, False, True],
                      [False, False, False]])
all_bools
array([[False,  True,  True],   # first True value = index 1
       [ True,  True,  True],   # first True value = index 0
       [False, False,  True],   # first True value = index 2
       [False, False, False]])  # no True values
then I'd like to return
[[False, True, False],
 [True, False, False],
 [False, False, True],
 [False, False, False]]
so indices 1, 0 and 2 on the first three rows have been set to True and nothing else. Essentially, any True value beyond the first on each row of the original should be set to False.
I've been fiddling around with this with np.where and np.argmax and I haven't yet found a good solution - any help gratefully received. This needs to run many, many times so I'd like to avoid iterating.
You can use cumsum, and find the first bool by comparing the result with 1.
all_bools.cumsum(axis=1).cumsum(axis=1) == 1
array([[False,  True, False],
       [ True, False, False],
       [False, False,  True],
       [False, False, False]])
This also accounts for the issue @a_guest pointed out. The second cumsum call is needed to avoid matching the run of False values between the first and second True value.
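To see why a single cumsum is not enough, take a hypothetical row with a False between two True values: after one cumsum, every cell whose cumulative sum is 1 matches the == 1 test, including the False that follows the first True. The second cumsum breaks those ties.
import numpy as np

row = np.array([[False, True, False, True]])

print(row.cumsum(axis=1))                      # [[0 1 1 2]]
print(row.cumsum(axis=1) == 1)                 # [[False  True  True False]]  <- wrong
print(row.cumsum(axis=1).cumsum(axis=1))       # [[0 1 2 4]]
print(row.cumsum(axis=1).cumsum(axis=1) == 1)  # [[False  True False False]]  <- correct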
If performance is important, use argmax and set values:
y = np.zeros_like(all_bools, dtype=bool)
idx = np.arange(len(all_bools)), all_bools.argmax(axis=1)
y[idx] = all_bools[idx]
y
array([[False,  True, False],
       [ True, False, False],
       [False, False,  True],
       [False, False, False]])
Perfplot Performance Timings
I'll take this opportunity to show off perfplot, with some timings, since it is good to see how our solutions vary with different sized inputs.
import numpy as np
import perfplot
def cs1(x):
    return x.cumsum(axis=1).cumsum(axis=1) == 1

def cs2(x):
    y = np.zeros_like(x, dtype=bool)
    idx = np.arange(len(x)), x.argmax(axis=1)
    y[idx] = x[idx]
    return y

def a_guest(x):
    b = np.zeros_like(x, dtype=bool)
    i = np.argmax(x, axis=1)
    b[np.arange(i.size), i] = np.logical_or.reduce(x, axis=1)
    return b

perfplot.show(
    setup=lambda n: np.random.randint(0, 2, size=(n, n)).astype(bool),
    kernels=[cs1, cs2, a_guest],
    labels=['cs1', 'cs2', 'a_guest'],
    n_range=[2**k for k in range(1, 8)],
    xlabel='N'
)
The trend carries forward to larger N. cumsum is very expensive, while there is a constant time difference between my second solution and @a_guest's.
You can use the following approach using np.argmax and a product with np.logical_or.reduce for dealing with rows that are all False:
b = np.zeros_like(a, dtype=bool)
i = np.argmax(a, axis=1)
b[np.arange(i.size), i] = np.logical_or.reduce(a, axis=1)
Timing results
Different versions in increasing performance, i.e. fastest approach comes last:
In [1]: import numpy as np
In [2]: def f(a):
   ...:     return a.cumsum(axis=1).cumsum(axis=1) == 1
   ...:

In [3]: def g(a):
   ...:     b = np.zeros_like(a, dtype=bool)
   ...:     i = np.argmax(a, axis=1)
   ...:     b[np.arange(i.size), i] = np.logical_or.reduce(a, axis=1)
   ...:     return b
   ...:
In [4]: x = np.random.randint(0, 2, size=(1000, 1000)).astype(bool)
In [5]: %timeit f(x)
10.4 ms ± 155 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [6]: %timeit g(x)
120 µs ± 184 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [7]: def h(a):
   ...:     y = np.zeros_like(a)
   ...:     idx = np.arange(len(a)), a.argmax(axis=1)
   ...:     y[idx] += a[idx]
   ...:     return y
   ...:
In [8]: %timeit h(x)
92.1 µs ± 3.51 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [9]: def h2(a):
   ...:     y = np.zeros_like(a)
   ...:     idx = np.arange(len(a)), a.argmax(axis=1)
   ...:     y[idx] = a[idx]
   ...:     return y
   ...:
In [10]: %timeit h2(x)
78.5 µs ± 353 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

numpy conditional list membership element wise

I have a 2D numpy array:
a = np.array([[0, 1],
              [2, 3]])
I have a list of values to keep:
vals_keep = [1,2]
I want to test for list membership for each element in the array. Something like:
mask = a in vals_keep
The result I want:
array([[False,  True],
       [ True, False]])
You can use isin
isin is an element-wise function version of the Python keyword in.
np.isin(a, vals_keep)
array([[False,  True],
       [ True, False]])
An added benefit of isin is that it's flexible with arrays of different dimensions:
a = np.arange(4).reshape(1, 2, 2, 1)
np.isin(a, vals_keep)
array([[[[False],
         [ True]],
        [[ True],
         [False]]]])
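One more convenience worth knowing, assuming the same a and vals_keep as above: isin accepts an invert keyword, which is cleaner than negating the result yourself.
import numpy as np

a = np.array([[0, 1],
              [2, 3]])
vals_keep = [1, 2]

# invert=True flags the elements NOT in vals_keep, equivalent to
# ~np.isin(a, vals_keep) but computed in one pass
print(np.isin(a, vals_keep, invert=True))
# [[ True False]
#  [False  True]]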
Here is one way using broadcasting:
In [35]: (a[:, :, None] == vals_keep).any(2)
Out[35]:
array([[False,  True],
       [ True, False]])
Which is faster than isin for small arrays (fewer than 100 rows):
In [37]: %timeit np.isin(a, vals_keep)
22 µs ± 728 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [38]: %timeit (a[:, :, None] == vals_keep).any(2)
12.6 µs ± 95.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
For large arrays it's better to use isin because broadcasting in 3D is not very efficient for large arrays/matrices.

Search indexes where values in my array match a value in a different array (python) [duplicate]

I have two numpy arrays, A and B. A contains unique values and B is a sub-array of A.
Now I am looking for a way to get the index of B's values within A.
For example:
A = np.array([1,2,3,4,5,6,7,8,9,10])
B = np.array([1,7,10])
# I need a function fun() that:
fun(A,B)
>> 0,6,9
You can use np.in1d with np.nonzero -
np.nonzero(np.in1d(A,B))[0]
You can also use np.searchsorted, if you care about maintaining the order -
np.searchsorted(A,B)
For a generic case, when A & B are unsorted arrays, you can bring in the sorter option in np.searchsorted, like so -
sort_idx = A.argsort()
out = sort_idx[np.searchsorted(A,B,sorter = sort_idx)]
I would add in my favorite broadcasting too in the mix to solve a generic case -
np.nonzero(B[:,None] == A)[1]
Sample run -
In [125]: A
Out[125]: array([ 7, 5, 1, 6, 10, 9, 8])
In [126]: B
Out[126]: array([ 1, 10, 7])
In [127]: sort_idx = A.argsort()
In [128]: sort_idx[np.searchsorted(A,B,sorter = sort_idx)]
Out[128]: array([2, 4, 0])
In [129]: np.nonzero(B[:,None] == A)[1]
Out[129]: array([2, 4, 0])
Have you tried searchsorted?
A = np.array([1,2,3,4,5,6,7,8,9,10])
B = np.array([1,7,10])
A.searchsorted(B)
# array([0, 6, 9])
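A caveat on searchsorted, worth keeping in mind: it returns insertion points, not matches, so it only gives correct indices when every value of B actually occurs in the (sorted) A. A hypothetical value missing from A silently yields the position where it would be inserted:
import numpy as np

A = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# 11 is not in A, yet searchsorted happily returns 10,
# the position where 11 would have to be inserted
print(A.searchsorted(np.array([1, 7, 11])))  # [ 0  6 10]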
Just for completeness: if the values in A are non-negative and reasonably small, you can build a direct lookup table:
lookup = np.empty((np.max(A) + 1), dtype=int)
lookup[A] = np.arange(len(A))
indices = lookup[B]
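A quick sketch of that lookup table with the arrays from the question:
import numpy as np

A = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
B = np.array([1, 7, 10])

# lookup[v] holds the position of value v in A; cells for values
# absent from A stay uninitialized, so only index with values
# known to be in A
lookup = np.empty(np.max(A) + 1, dtype=int)
lookup[A] = np.arange(len(A))
print(lookup[B])  # [0 6 9]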
I had the same question recently, but timing performance was critical for me, so a timing comparison of the different solutions may be useful for others as well.
As Divakar mentioned, you can use np.in1d(A, B) with np.where or np.nonzero. You can also combine np.in1d(A, B) with np.intersect1d (based on this page), and np.searchsorted is another useful approach for sorted arrays.
I want to add one more simple solution: a list comprehension. It may take longer than the previous ones, but if you take advantage of the Numba package, it becomes much less time-consuming.
In [1]: import numpy as np
In [2]: from numba import njit
In [3]: a = np.array([1,2,3,4,5,6,7,8,9,10])
In [4]: b = np.array([1,7,10])
In [5]: np.where(np.in1d(a, b))[0]
Out[5]: array([0, 6, 9])

In [6]: np.nonzero(np.in1d(a, b))[0]
Out[6]: array([0, 6, 9])

In [7]: np.searchsorted(a, b)
Out[7]: array([0, 6, 9])

In [8]: np.searchsorted(a, np.intersect1d(a, b))
Out[8]: array([0, 6, 9])

In [9]: [i for i, x in enumerate(a) if x in b]
Out[9]: [0, 6, 9]

In [10]: @njit
    ...: def func(a, b):
    ...:     return [i for i, x in enumerate(a) if x in b]

In [11]: func(a, b)
Out[11]: [0, 6, 9]
Now, let's compare the timing performance of these solutions.
In [12]: %timeit np.where(np.in1d(a, b))[0]
4.26 µs ± 6.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [13]: %timeit np.nonzero(np.in1d(a, b))[0]
4.39 µs ± 14.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [14]: %timeit np.searchsorted(a, b)
800 ns ± 6.04 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [15]: %timeit np.searchsorted(a, np.intersect1d(a, b))
8.8 µs ± 73.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [16]: %timeit [i for i, x in enumerate(a) if x in b]
15.4 µs ± 18.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [17]: %timeit func(a, b)
336 ns ± 0.579 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
