I have a list of n arrays with 4 elements each, i.e (n=2):
l = [[1, 2, 3, 4], [5, 6, 7, 8]]
and am trying to find all elements of the list that are 'non-dominated' - that is they are not dominated by any other element in the list. An array dominates another array if each item inside it is less than or equal to the corresponding item in the other array. So
dominates([1, 2, 3, 4], [5, 6, 7, 8]) == True
as 1 <= 5 and 2 <= 6 and 3 <= 7 and 4 <= 8. But
dominates([1, 2, 3, 9], [5, 6, 7, 8]) == False
as 9 > 8. This function is relatively easy to write, for example:
def dominates(a, b):
    return all(i <= j for i, j in zip(a, b))
More succinctly, given l = [a1, a2, a3, .., an] where the a are length 4 arrays, I'm looking to find all a that are not dominated by any other a in l.
I have the following solution:
def get_non_dominated(l):
    to_remove = set()
    for ind, item_1 in enumerate(l):
        if tuple(item_1) in to_remove:
            continue
        for item_2 in l[ind + 1:]:
            if dominates(item_2, item_1):
                to_remove.add(tuple(item_1))
                break
            elif dominates(item_1, item_2):
                to_remove.add(tuple(item_2))
    return [i for i in l if tuple(i) not in to_remove]
So get_non_dominated([[1, 2, 3, 4], [5, 6, 7, 8]]) should return [[1, 2, 3, 4]]. Similarly get_non_dominated([[1, 2, 3, 9], [5, 6, 7, 8]]) should return the list unchanged by the logic above (nothing dominates anything else).
But this check happens a lot and l is potentially quite large. I was wondering if anyone had ideas on a way to speed this up? My first thought was to try and vectorize this code with numpy, but I have relatively little experience with it and am struggling a bit. You can assume l has all unique arrays. Any ideas are greatly appreciated!
Another version of @Nyps' answer:
def dominates(a, b):
    return (np.asarray(a) <= b).all()
This is a vectorized version of your code using NumPy.
This might still be slow if you have to loop through all the rows you have. If you have a list with all the rows and you want to compare them pairwise, you could use scipy to create an N x N array (where N is the number of rows).
import numpy as np
a = np.random.randint(0, 10, size=(1000, 10))
a here is a 1000 x 10 array, simulating 1000 rows of 10 elements each:
from scipy.spatial.distance import cdist
X = cdist(a, a, metric=dominates).astype(bool)
X is now a 1000 x 1000 matrix containing the pairwise comparisons between all the entries. That is, X[i, j] contains True if sample i dominates sample j, and False otherwise.
You can now extract fancy results from X, such as the sample that dominates them all:
>>> a[50] = 0 # set a row to all 0s to fake a dominant row
>>> X = cdist(a, a, metric=dominates).astype(bool)
>>> non_dominated = np.where(X.all(axis=1))[0]
>>> non_dominated
array([50])
Sample at position 50 is the ruler of your population; you should watch it closely.
Now, if you want to preserve only the dominant rows, you can do:
if non_dominated.size > 0:
    return [a[i] for i in non_dominated]
else:  # no one dominates every other
    return a
As a recap:
import numpy as np
from scipy.spatial.distance import cdist
def get_ruler(a):
    X = cdist(a, a, metric=dominates).astype(bool)
    rulers = np.where(X.all(axis=1))[0]
    if rulers.size > 0:
        return [a[i] for i in rulers]
    else:  # no one dominates every other
        return a
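Note that X.all(axis=1) finds rows that dominate every other row, which is subtly different from the original goal of keeping every row that is not dominated by any other. A minimal sketch of the latter, reusing the same X matrix (my adaptation, assuming a is a NumPy array and dominates is defined as in the question):

import numpy as np
from scipy.spatial.distance import cdist

def get_non_dominated_cdist(a):
    # X[i, j] is True when row i dominates row j.
    X = cdist(a, a, metric=dominates).astype(bool)
    np.fill_diagonal(X, False)  # a row should not count against itself
    keep = ~X.any(axis=0)       # keep row j if no other row dominates it
    return a[keep]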
How about:
import numpy as np
np.all((np.asarray(l[1]) - np.asarray(l[0])) >= 0)
You can go a similar way in case you are able to create your list as a numpy array straight away, i.e. type(l) == np.ndarray. Then the syntax would be:
np.all((l[1] - l[0]) >= 0)
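Extending this pairwise idea to the whole list at once, here is a minimal broadcasting sketch (my own addition, not part of the answer above; it assumes all rows are unique, as the question states):

import numpy as np

def get_non_dominated_np(l):
    arr = np.asarray(l)  # shape (n, 4)
    # le[i, j] is True when row i dominates row j (elementwise <=).
    le = (arr[:, None, :] <= arr[None, :, :]).all(axis=2)
    np.fill_diagonal(le, False)  # a row should not count against itself
    dominated = le.any(axis=0)   # row j is dominated by some other row
    return [row for row, d in zip(l, dominated) if not d]

print(get_non_dominated_np([[1, 2, 3, 4], [5, 6, 7, 8]]))  # [[1, 2, 3, 4]]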
I have a few simple questions I'm not able to find the answer to. They are both stated in the following example code. Thank you for any help!
import numpy as np
#here are two arrays to join together
a = np.array([1,2,3,4,5])
b = np.array([6,7,8,9,10])
#here comes the joining step I don't know how to do better
#QUESTION 1: How to form all permutations of two 1D arrays?
temp = np.array([])  # empty array to be filled with values
for aa in a:
    for bb in b:
        temp = np.append(temp, [aa, bb])  # fill the array
#QUESTION 2: Why do I have to reshape? How can I avoid this?
temp = temp.reshape((int(temp.size/2),2))
edit: made code more minimal
To answer your first question, you can use np.meshgrid to form those combinations between elements of the two input arrays and get to the final version of temp in a vectorized manner avoiding those loops, like so -
np.array(np.meshgrid(a,b)).transpose(2,1,0).reshape(-1,2)
As seen, we would still need a reshape if you intend to get a 2-column output array.
There are other ways we could construct the array with the meshed structure and thus avoid a reshape. One of those ways would be with np.column_stack, as shown below -
r,c = np.meshgrid(a,b)
temp = np.column_stack((r.ravel('F'), c.ravel('F')))
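As a quick sanity check with the a and b from the question, both constructions reproduce the order of the original nested loops:

import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([6, 7, 8, 9, 10])

out = np.array(np.meshgrid(a, b)).transpose(2, 1, 0).reshape(-1, 2)
print(out[:6])
# [[ 1  6]
#  [ 1  7]
#  [ 1  8]
#  [ 1  9]
#  [ 1 10]
#  [ 2  6]]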
The proper way to build an array iteratively is with list append. np.append is poorly named, and often misused.
In [274]: a = np.array([1,2,3,4,5])
...: b = np.array([6,7,8,9,10])
...:
In [275]: temp = []
In [276]: for aa in a:
...: for bb in b:
...: temp.append([aa,bb])
...:
In [277]: temp
Out[277]:
[[1, 6],
[1, 7],
[1, 8],
[1, 9],
[1, 10],
[2, 6],
....
[5, 9],
[5, 10]]
In [278]: np.array(temp).shape
Out[278]: (25, 2)
It's better to avoid loops at all, but if you must, use this list append approach.
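For completeness, the standard library's itertools.product builds the same pairs without explicit loops (an alternative sketch, not from the answer above):

from itertools import product
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([6, 7, 8, 9, 10])
temp = np.array(list(product(a, b)))  # shape (25, 2), same order as the nested loops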
I have an array of values that are often the same, and I am trying to find the index of the smallest one. But I also want to know about every entry that shares that smallest value.
So for example I have the array a = [1, 2, 3, 4] and to find the index of the smallest one I use a.index(min(a)) and this returns 0. But if I had an array of a = [1, 1, 1, 1], using the same thing would still return 0.
I want to know that multiple indices match what I am searching for and what those indices are. How would I go about doing this?
list.index(value) returns the index of the first occurrence of value in list.
A better idea is to use a simple list comprehension and enumerate:
indices = [i for i, x in enumerate(iterable) if x == v]
where v is the value you want to search for and iterable is an object that supports iterator protocol e.g. it can be a generator or a sequence (like list).
For your specific use case, that'll look like
def smallest(seq):
    m = min(seq)
    return [i for i, x in enumerate(seq) if x == m]
Some examples:
In [23]: smallest([1, 2, 3, 4])
Out[23]: [0]
In [24]: smallest([1, 1, 1, 1])
Out[24]: [0, 1, 2, 3]
If you're not sure whether the seq is empty or not, you can pass the default=-1 (or some other value) argument to min function (in Python 3.4+):
m = min(seq, default=-1)
Consider using m = min(seq or (-1,)) (again, any value) instead, when using older Python.
A different approach using numpy.where could look like
In [1]: import numpy as np
In [2]: def np_smallest(seq):
...: return np.where(seq==seq.min())[0]
In [3]: np_smallest(np.array([1,1,1,1]))
Out[3]: array([0, 1, 2, 3])
In [4]: np_smallest(np.array([1,2,3,4]))
Out[4]: array([0])
This approach is slightly less efficient than the list comprehension for small lists, but if you face large arrays, numpy may save you some time.
In [5]: seq = np.random.randint(100, size=1000)
In [6]: %timeit np_smallest(seq)
100000 loops, best of 3: 10.1 µs per loop
In [7]: %timeit smallest(seq)
1000 loops, best of 3: 194 µs per loop
Here is my solution:
def all_smallest(seq):
    """Takes sequence, returns list of all smallest elements"""
    min_i = min(seq)
    amount = seq.count(min_i)
    ans = []
    if amount > 1:
        for n, i in enumerate(seq):
            if i == min_i:
                ans.append(n)
                if len(ans) == amount:
                    return ans
    return [seq.index(min_i)]
The code is very straightforward; I think it is all clear without any further explanation.
Hi there on a Saturday Fun Night,
I am getting around in python and I am quite enjoying it.
Assume I have a python array:
x = [1, 0, 0, 1, 3]
What is the fastest way to count all the non-zero elements in the list (answer: 3)? Also, I would like to do it without for loops if possible - in the most succinct and terse manner possible, say something conceptually like
[counter += 1 for y in x if y > 0]
Now - my real problem is that I have a multi dimensional array and what I really want to avoid is doing the following:
for p in range(BINS):
    for q in range(BINS):
        for r in range(BINS):
            if mat3D[p][q][r] > 0:
                some_feature_set_count += 1
From the little python I have seen, my gut feeling is that there is a really clean syntax (and efficient) way how to do this.
Ideas, anyone?
For the single-dimensional case:
sum(1 for i in x if i)
For the multi-dimensional case, you can either nest:
sum(sum(1 for i in row if i) for row in rows)
or do it all within the one construct:
sum(1 for row in rows
for i in row if i)
If you are using numpy, as suggested by the fact that you're using multi-dimensional arrays in Python, the following is similar to @Marcelo's answer, but a tad cleaner:
>>> a = numpy.array([[1,2,3,0],[0,4,2,0]])
>>> sum(1 for i in a.flat if i)
5
>>>
If you go with numpy and your 3D array is a numpy array, this one-liner will do the trick:
numpy.where(your_array_name != 0, 1, 0).sum()
example:
In [23]: import numpy
In [24]: a = numpy.array([ [[0, 1, 2], [0, 0, 7], [9, 2, 0]], [[0, 0, 0], [1, 4, 6], [9, 0, 3]], [[1, 3, 2], [3, 4, 0], [1, 7, 9]] ])
In [25]: numpy.where(a != 0, 1, 0).sum()
Out[25]: 18
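If the input is already a NumPy array, the intermediate np.where is not strictly needed; a boolean sum or np.count_nonzero gives the same result (a simpler variant, offered as an aside):

import numpy as np

a = np.array([[[0, 1, 2], [0, 0, 7], [9, 2, 0]],
              [[0, 0, 0], [1, 4, 6], [9, 0, 3]],
              [[1, 3, 2], [3, 4, 0], [1, 7, 9]]])
print((a != 0).sum())       # 18
print(np.count_nonzero(a))  # 18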
While perhaps not concise, this is my choice of how to solve this which works for any dimension:
def sum(li):
    s = 0
    for l in li:
        if isinstance(l, list):
            s += sum(l)
        elif l:
            s += 1
    return s
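For example (note that this definition shadows the built-in sum within its scope):

print(sum([[1, 0, 2], [0, [3, 0], 5]]))  # 4: counts 1, 2, 3 and 5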
def zeros(n):
    return (len(list(filter(lambda x: type(x) == int and x != 0, n)))
            + sum(map(zeros, filter(lambda x: type(x) == list, n))))
Can't really say if it is the fastest way but it is recursive and works with N dimensional lists.
zeros([1,2,3,4,0,[1,2,3,0,[1,2,3,0,0,0]]]) => 10
I would have slightly changed Marcelo's answer to the following:
len([x for x in my_list if x != 0])
The sum() above tricked me for a second, as I thought it was getting the total value instead of the count, until I saw the 1 hovering at the start. I'd rather be explicit with len().
Using chain to reduce array lookups:
from itertools import chain
BINS = [[[2,2,2],[0,0,0],[1,2,0]],
[[1,0,0],[0,0,2],[1,2,0]],
[[0,0,0],[1,1,1],[1,3,0]]]
sum(1 for c in chain.from_iterable(chain.from_iterable(BINS)) if c > 0)
14
I haven't done any performance checks on this. But it doesn't use any significant memory.
Note that it is using a generator expression, not a list comprehension. Using list-comprehension syntax (square brackets) would create a list to be summed, instead of feeding one number at a time to sum.
I know there is a method for a Python list to return the first index of something:
>>> xs = [1, 2, 3]
>>> xs.index(2)
1
Is there something like that for NumPy arrays?
Yes. Given an array array and a value item to search for, you can use np.where as:
itemindex = numpy.where(array == item)
The result is a tuple with first all the row indices, then all the column indices.
For example, if the array is two-dimensional and contains your item at two locations, then
array[itemindex[0][0]][itemindex[1][0]]
would be equal to your item and so would be:
array[itemindex[0][1]][itemindex[1][1]]
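A small self-contained illustration (the array contents here are chosen just for demonstration):

import numpy as np

array = np.array([[1, 7],
                  [7, 3]])
itemindex = np.where(array == 7)
print(itemindex)                                # (array([0, 1]), array([1, 0]))
print(array[itemindex[0][0]][itemindex[1][0]])  # 7, the first location
print(array[itemindex[0][1]][itemindex[1][1]])  # 7, the second location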
If you need the index of the first occurrence of only one value, you can use nonzero (or where, which amounts to the same thing in this case):
>>> t = array([1, 1, 1, 2, 2, 3, 8, 3, 8, 8])
>>> nonzero(t == 8)
(array([6, 8, 9]),)
>>> nonzero(t == 8)[0][0]
6
If you need the first index of each of many values, you could obviously do the same as above repeatedly, but there is a trick that may be faster. The following finds the indices of the first element of each subsequence:
>>> nonzero(r_[1, diff(t)[:-1]])
(array([0, 3, 5, 6, 7, 8]),)
Notice that it finds the beginnings of both subsequences of 3s and both subsequences of 8s:
[1, 1, 1, 2, 2, 3, 8, 3, 8, 8]
So it's slightly different than finding the first occurrence of each value. In your program, you may be able to work with a sorted version of t to get what you want:
>>> st = sorted(t)
>>> nonzero(r_[1, diff(st)[:-1]])
(array([0, 3, 5, 7]),)
You can also convert a NumPy array to a list on the fly and get its index. For example,
l = [1,2,3,4,5] # Python list
a = numpy.array(l) # NumPy array
i = a.tolist().index(2) # i will return index of 2
print(i)
It will print 1.
Just to add a very performant and handy numba alternative based on np.ndenumerate to find the first index:
from numba import njit
import numpy as np
@njit
def index(array, item):
    for idx, val in np.ndenumerate(array):
        if val == item:
            return idx
    # If no item was found return None; other return types might be a problem due to
    # numba's type inference.
This is pretty fast and deals naturally with multidimensional arrays:
>>> arr1 = np.ones((100, 100, 100))
>>> arr1[2, 2, 2] = 2
>>> index(arr1, 2)
(2, 2, 2)
>>> arr2 = np.ones(20)
>>> arr2[5] = 2
>>> index(arr2, 2)
(5,)
This can be much faster (because it's short-circuiting the operation) than any approach using np.where or np.nonzero.
However, np.argwhere also deals gracefully with multidimensional arrays (you would need to manually cast the result to a tuple, and it's not short-circuited), but it would fail if no match is found:
>>> tuple(np.argwhere(arr1 == 2)[0])
(2, 2, 2)
>>> tuple(np.argwhere(arr2 == 2)[0])
(5,)
l.index(x) returns the smallest i such that i is the index of the first occurrence of x in the list.
One can safely assume that the index() function in Python is implemented so that it stops after finding the first match, and this results in an optimal average performance.
For finding an element stopping after the first match in a NumPy array use an iterator (ndenumerate).
In [67]: l=range(100)
In [68]: l.index(2)
Out[68]: 2
NumPy array:
In [69]: a = np.arange(100)
In [70]: next((idx for idx, val in np.ndenumerate(a) if val==2))
Out[70]: (2L,)
Note that both methods, index() and next, raise an error if the element is not found. With next, one can use a second argument to return a special value in case the element is not found, e.g.
In [77]: next((idx for idx, val in np.ndenumerate(a) if val==400),None)
There are other functions in NumPy (argmax, where, and nonzero) that can be used to find an element in an array, but they all have the drawback of going through the whole array looking for all occurrences, thus not being optimized for finding the first element. Note also that where and nonzero return arrays, so you need to select the first element to get the index.
In [71]: np.argmax(a==2)
Out[71]: 2
In [72]: np.where(a==2)
Out[72]: (array([2], dtype=int64),)
In [73]: np.nonzero(a==2)
Out[73]: (array([2], dtype=int64),)
Time comparison
Just checking that for large arrays the solution using an iterator is faster when the searched item is at the beginning of the array (using %timeit in the IPython shell):
In [285]: a = np.arange(100000)
In [286]: %timeit next((idx for idx, val in np.ndenumerate(a) if val==0))
100000 loops, best of 3: 17.6 µs per loop
In [287]: %timeit np.argmax(a==0)
1000 loops, best of 3: 254 µs per loop
In [288]: %timeit np.where(a==0)[0][0]
1000 loops, best of 3: 314 µs per loop
This is an open NumPy GitHub issue.
See also: Numpy: find first index of value fast
If you're going to use this as an index into something else, you can use boolean indices if the arrays are broadcastable; you don't need explicit indices. The absolute simplest way to do this is to simply index based on a truth value.
other_array[first_array == item]
Any boolean operation works:
a = numpy.arange(100)
other_array[first_array > 50]
The nonzero method takes booleans, too:
index = numpy.nonzero(first_array == item)[0][0]
The two zeros are for the tuple of indices (assuming first_array is 1D) and then the first item in the array of indices.
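For instance (with hypothetical values):

import numpy

first_array = numpy.array([5, 10, 15, 10])
index = numpy.nonzero(first_array == 10)[0][0]
print(index)  # 1: the first occurrence of 10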
For one-dimensional sorted arrays, it is much simpler and more efficient (O(log n)) to use numpy.searchsorted, which returns a NumPy integer (the position). For example,
arr = np.array([1, 1, 1, 2, 3, 3, 4])
i = np.searchsorted(arr, 3)
Just make sure the array is already sorted. Also check whether the returned index i actually contains the searched element, since searchsorted's main objective is to find the index at which an element should be inserted to maintain order.
if i < len(arr) and arr[i] == 3:  # guard against i == len(arr) when 3 exceeds every element
    print("present")
else:
    print("not present")
For 1D arrays, I'd recommend np.flatnonzero(array == value)[0], which is equivalent to both np.nonzero(array == value)[0][0] and np.where(array == value)[0][0] but avoids the ugliness of unboxing a 1-element tuple.
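For example:

import numpy as np

array = np.array([10, 20, 30, 20])
print(np.flatnonzero(array == 20)[0])  # 1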
To index on any criteria, you can do something like the following:
In [1]: from numpy import *
In [2]: x = arange(125).reshape((5,5,5))
In [3]: y = indices(x.shape)
In [4]: locs = y[:,x >= 120] # put whatever you want in place of x >= 120
In [5]: pts = hsplit(locs, len(locs[0]))
In [6]: for pt in pts:
.....: print(', '.join(str(p[0]) for p in pt))
4, 4, 0
4, 4, 1
4, 4, 2
4, 4, 3
4, 4, 4
And here's a quick function to do what list.index() does, except it doesn't raise an exception if the item is not found. Beware -- this is probably very slow on large arrays. You can probably monkey patch this onto arrays if you'd rather use it as a method.
def ndindex(ndarray, item):
    if len(ndarray.shape) == 1:
        try:
            return [ndarray.tolist().index(item)]
        except ValueError:
            pass
    else:
        for i, subarray in enumerate(ndarray):
            try:
                return [i] + ndindex(subarray, item)
            except TypeError:  # recursive call returned None: item not in subarray
                pass
In [1]: ndindex(x, 103)
Out[1]: [4, 0, 3]
An alternative to selecting the first element from np.where() is to use a generator expression together with enumerate, such as:
>>> import numpy as np
>>> x = np.arange(100) # x = array([0, 1, 2, 3, ... 99])
>>> next(i for i, x_i in enumerate(x) if x_i == 2)
2
For a two dimensional array one would do:
>>> x = np.arange(100).reshape(10,10) # x = array([[0, 1, 2,... 9], [10,..19],])
>>> next((i,j) for i, x_i in enumerate(x)
... for j, x_ij in enumerate(x_i) if x_ij == 2)
(0, 2)
The advantage of this approach is that it stops checking the elements of the array after the first match is found, whereas np.where checks all elements for a match. A generator expression would be faster if there's match early in the array.
There are lots of operations in NumPy that could perhaps be put together to accomplish this. This will return the indices of elements equal to item:
numpy.nonzero(array == item)
You could then take the first elements of the resulting arrays to get a single index.
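For example:

import numpy

array = numpy.array([3, 1, 4, 1, 5])
print(numpy.nonzero(array == 1))        # (array([1, 3]),)
print(numpy.nonzero(array == 1)[0][0])  # 1, the first index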
Comparison of 8 methods
TL;DR:
(Note: applicable to 1d arrays under 100M elements.)
For maximum performance use index_of__v5 (numba + np.ndenumerate + for loop; see the code below).
If numba is not available:
Use index_of__v7 (for loop + enumerate) if the target value is expected to be found within the first 100k elements.
Else use index_of__v2/v3/v4 (numpy.argmax or numpy.flatnonzero based).
(Benchmark plot generated with perfplot.)
import numpy as np
from numba import njit


# Based on: numpy.argmax()
# Proposed by: John Haberstroh (https://stackoverflow.com/a/67497472/7204581)
def index_of__v1(arr: np.array, v):
    is_v = (arr == v)
    return is_v.argmax() if is_v.any() else -1


# Based on: numpy.argmax()
def index_of__v2(arr: np.array, v):
    return (arr == v).argmax() if v in arr else -1


# Based on: numpy.flatnonzero()
# Proposed by: 1'' (https://stackoverflow.com/a/42049655/7204581)
def index_of__v3(arr: np.array, v):
    idxs = np.flatnonzero(arr == v)
    return idxs[0] if len(idxs) > 0 else -1


# Based on: numpy.argmax()
def index_of__v4(arr: np.array, v):
    return np.r_[False, (arr == v)].argmax() - 1


# Based on: numba, for loop
# Proposed by: MSeifert (https://stackoverflow.com/a/41578614/7204581)
@njit
def index_of__v5(arr: np.array, v):
    for idx, val in np.ndenumerate(arr):
        if val == v:
            return idx[0]
    return -1


# Based on: numpy.ndenumerate(), for loop
def index_of__v6(arr: np.array, v):
    return next((idx[0] for idx, val in np.ndenumerate(arr) if val == v), -1)


# Based on: enumerate(), for loop
# Proposed by: Noyer282 (https://stackoverflow.com/a/40426159/7204581)
def index_of__v7(arr: np.array, v):
    return next((idx for idx, val in enumerate(arr) if val == v), -1)


# Based on: list.index()
# Proposed by: Hima (https://stackoverflow.com/a/23994923/7204581)
def index_of__v8(arr: np.array, v):
    l = list(arr)
    try:
        return l.index(v)
    except ValueError:
        return -1
The numpy_indexed package (disclaimer, I am its author) contains a vectorized equivalent of list.index for numpy.ndarray; that is:
sequence_of_arrays = [[0, 1], [1, 2], [-5, 0]]
arrays_to_query = [[-5, 0], [1, 0]]
import numpy_indexed as npi
idx = npi.indices(sequence_of_arrays, arrays_to_query, missing=-1)
print(idx) # [2, -1]
This solution has vectorized performance, generalizes to ndarrays, and has various ways of dealing with missing values.
There is a fairly idiomatic and vectorized way to do this built into numpy. It uses a quirk of the np.argmax() function to accomplish this -- if many values match, it returns the index of the first match. The trick is that for booleans, there will only ever be two values: True (1) and False (0). Therefore, the returned index will be that of the first True.
For the simple example provided, you can see it work with the following
>>> np.argmax(np.array([1,2,3]) == 2)
1
A great example is computing buckets, e.g. for categorizing. Let's say you have an array of cut points, and you want the "bucket" that corresponds to each element of your array. The algorithm is to compute the first index of cuts where x < cuts (after padding cuts with np.Infinity). I could use broadcasting to broadcast the comparisons, then apply argmax along the cuts-broadcasted axis.
>>> cuts = np.array([10, 50, 100])
>>> cuts_pad = np.array([*cuts, np.Infinity])
>>> x = np.array([7, 11, 80, 443])
>>> bins = np.argmax( x[:, np.newaxis] < cuts_pad[np.newaxis, :], axis = 1)
>>> print(bins)
[0 1 2 3]
As expected, each value from x falls into one of the sequential bins, with well-defined and easy to specify edge case behavior.
Another option not previously mentioned is the bisect module, which also works on lists, but requires a pre-sorted list/array:
import bisect
import numpy as np
z = np.array([104,113,120,122,126,138])
bisect.bisect_left(z, 122)
yields
3
bisect also returns a result when the number you're looking for doesn't exist in the array, so that the number can be inserted in the correct place.
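For instance, a value that is absent from z still yields an insertion point, so an explicit membership check is needed (a short sketch):

import bisect
import numpy as np

z = np.array([104, 113, 120, 122, 126, 138])
i = bisect.bisect_left(z, 125)
print(i)                           # 4: 125 would be inserted before 126
print(i < len(z) and z[i] == 125)  # False: 125 is not actually in z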
Note: this is for Python 2.7.
You can use a lambda function to deal with the problem, and it works both on NumPy array and list.
your_list = [11, 22, 23, 44, 55]
result = filter(lambda x:your_list[x]>30, range(len(your_list)))
#result: [3, 4]
import numpy as np
your_numpy_array = np.array([11, 22, 23, 44, 55])
result = filter(lambda x: your_numpy_array[x] > 30, range(len(your_numpy_array)))
#result: [3, 4]
And you can use
result[0]
to get the first index of the filtered elements.
For Python 3, use
list(result)
instead of
result
Use ndindex
Sample array
arr = np.array([[1, 4],
                [2, 3]])
print(arr)
... [[1 4]
     [2 3]]
create an empty list to store the index and the element tuples
index_elements = []
for i in np.ndindex(arr.shape):
    index_elements.append((arr[i], i))
convert the list of tuples into a dictionary
index_elements = dict(index_elements)
The keys are the elements and the values are their indices - use the keys to access the index:
index_elements[4]
output
... (0,1)
For my use case, I could not sort the array ahead of time because the order of the elements is important. This is my all-NumPy implementation:
import numpy as np
# The array in question
arr = np.array([1,2,1,2,1,5,5,3,5,9])
# Find all of the present values
vals=np.unique(arr)
# Make all indices up-to and including the desired index positive
cum_sum=np.cumsum(arr==vals.reshape(-1,1),axis=1)
# Add zeros to account for the n-1 shape of diff and the all-positive array of the first index
bl_mask=np.concatenate([np.zeros((cum_sum.shape[0],1)),cum_sum],axis=1)>=1
# The desired indices
idx=np.where(np.diff(bl_mask))[1]
# Show results
print(list(zip(vals,idx)))
>>> [(1, 0), (2, 1), (3, 7), (5, 5), (9, 9)]
I believe it accounts for unsorted arrays with duplicate values.
Found another solution with loops:
new_array_of_indices = []
for i in range(len(some_array)):
    if some_array[i] == some_value:
        new_array_of_indices.append(i)
import pandas as pd

index_lst_from_numpy = pd.DataFrame(df).reset_index()["index"].tolist()