Minimal distance between elements in a vector - python

I need the minimal distance between elements of an array.
I did:
numpy.min(numpy.ediff1d(numpy.sort(x)))
Is there a better / more efficient / more elegant / faster way of doing this?

If you are after sheer speed, here are some timings:
In [13]: a = np.random.rand(1000)
In [14]: %timeit np.sort(a)
10000 loops, best of 3: 31.9 us per loop
In [15]: %timeit np.ediff1d(a)
100000 loops, best of 3: 15.2 us per loop
In [16]: %timeit np.diff(a)
100000 loops, best of 3: 7.76 us per loop
In [17]: %timeit np.min(a)
100000 loops, best of 3: 3.19 us per loop
In [18]: %timeit np.unique(a)
10000 loops, best of 3: 53.8 us per loop
I timed unique in the hope that it would be comparably fast to sort, so you could break out early and skip the calls to diff and min whenever the unique array came back shorter than the input (that would mean your answer was 0). But the overhead of unique is more than any gain to be made.
So it seems the only potential improvement I can offer is replacing ediff1d with diff:
In [19]: %timeit np.min(np.diff(np.sort(a)))
10000 loops, best of 3: 47.7 us per loop
In [20]: %timeit np.min(np.ediff1d(np.sort(a)))
10000 loops, best of 3: 57.1 us per loop
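Putting the fastest pieces together, a small helper (a sketch; min_gap is just an illustrative name) that also guards against arrays with fewer than two elements might look like:
import numpy as np

def min_gap(x):
    # Sorting makes the closest pair adjacent, so diff + min finds
    # the smallest gap between any two elements.
    x = np.asarray(x)
    if x.size < 2:
        raise ValueError('need at least two values')
    return np.min(np.diff(np.sort(x)))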

Your current approach is sound: sorting first guarantees that the two closest values end up adjacent, so ediff1d only has to look at consecutive differences. Here's a suggestion:
Since the differences are non-negative after an ascending-order sort, we can implement ediff1d manually and break as soon as a difference of zero appears; nothing can be smaller. That way, if you have the sorted array x:
[1, 1, 2, 3, 4, 5, 6, 7, ... , n]
Rather than going through all n elements, your ediff1d function breaks early, covering only the first two elements and returning [0]. This also shrinks the difference array, reducing the number of iterations required by your min call.
Here is an example without the use of numpy:
x = [1, 12, 3, 8, 4, 1, 4, 9, 1, 29, 210, 313, 12]
def ediff1d_custom(x):
    darr = []
    for i in xrange(len(x) - 1):
        diff = x[i + 1] - x[i]
        darr.append(diff)
        if diff == 0:
            # stop early: 0 is the smallest possible gap in sorted data
            break
    return darr
print min(ediff1d_custom(sorted(x))) # prints 0

try:
    min(x[i+1]-x[i] for i in xrange(0, len(x)-1))
except ValueError:
    print 'Array contains less than two values.'


Fastest way in numpy to get distance of product of n pairs in array

I have N points, for example:
A = [2, 3]
B = [3, 4]
C = [3, 3]
.
.
.
And they're in an array like so:
arr = np.array([[2, 3], [3, 4], [3, 3]])
I need as output all pairwise distances in BFS (Breadth First Search) order to track which distance is which, like: A->B, A->C, B->C. For the above example data, the result would be [1.41, 1.0, 1.0].
EDIT: I have to accomplish it with numpy or core libraries.
If you can use it, SciPy has a function for this:
In [2]: from scipy.spatial.distance import pdist
In [3]: pdist(arr)
Out[3]: array([1.41421356, 1. , 1. ])
Here's a numpy-only solution (fair warning: it requires a lot of memory, unlike pdist)...
dists = np.triu(np.linalg.norm(arr - arr[:, None], axis=-1)).flatten()
dists = dists[dists != 0]
Demo:
In [4]: arr = np.array([[2, 3], [3, 4], [3, 3], [5, 2], [4, 5]])
In [5]: pdist(arr)
Out[5]:
array([1.41421356, 1. , 3.16227766, 2.82842712, 1. ,
2.82842712, 1.41421356, 2.23606798, 2.23606798, 3.16227766])
In [6]: dists = np.triu(np.linalg.norm(arr - arr[:, None], axis=-1)).flatten()
In [7]: dists = dists[dists != 0]
In [8]: dists
Out[8]:
array([1.41421356, 1. , 3.16227766, 2.82842712, 1. ,
2.82842712, 1.41421356, 2.23606798, 2.23606798, 3.16227766])
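For reference, the triu function timed below is presumably just the two lines above wrapped up; a minimal sketch:
def triu(arr):
    # Full pairwise-distance matrix, keep the upper triangle, drop the zeros
    dists = np.triu(np.linalg.norm(arr - arr[:, None], axis=-1)).flatten()
    return dists[dists != 0]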
Timings (with the solution above wrapped in a function called triu):
In [9]: %timeit pdist(arr)
7.27 µs ± 738 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [10]: %timeit triu(arr)
25.5 µs ± 4.58 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
As an alternative method, similar to ddejohn's answer, we can use np.triu_indices, which returns just the upper-triangular indices of the matrix and may be more memory-efficient:
np.linalg.norm(arr - arr[:, None], axis=-1)[np.triu_indices(arr.shape[0], 1)]
This avoids the extra flattening and boolean-indexing steps. Its performance is similar to the answer above for large data (e.g. with arr = np.random.rand(10000, 2) on Colab, both finish in roughly 4.6 s), and it may beat the np.triu-and-flatten approach on larger data.
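As a quick check (a sketch, reusing the five-point demo array and the pdist import from above), the two should agree:
check = np.linalg.norm(arr - arr[:, None], axis=-1)[np.triu_indices(arr.shape[0], 1)]
print(np.allclose(check, pdist(arr)))  # True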
I profiled the memory usage once with memory-profiler (the results are not reproduced here), but it should be rechecked if memory usage is important; I'm not certain of it.
Update:
I tried limiting the calculations to just the upper triangle, which speeds the code up 2 to 3 times on the tested arrays. As the array size grows, the performance gap between this loop and the previous np.triu_indices or np.triu methods becomes more pronounced:
ind = np.arange(arr.shape[0] - 1)
sub_ind = ind + 1
result = np.zeros(sub_ind.sum())  # room for all n*(n-1)/2 pairwise distances
j = 0
for i in range(ind.shape[0]):
    # distances from point i to every later point, written into one slice
    result[j:j + ind[-1 - i] + 1] = np.linalg.norm(arr[ind[i]] - arr[sub_ind[i]:], axis=-1)
    j += ind[-1 - i] + 1
Also, this way the memory consumption is reduced by at least ~4x, so this method makes it possible to work on larger arrays, and more quickly.
Benchmarks:
# arr = np.random.rand(100, 2)
100 loops, best of 5: 459 µs per loop (ddejohn's --> np.triu & flatten)
100 loops, best of 5: 528 µs per loop (mine --> np.triu_indices)
100 loops, best of 5: 1.42 ms per loop (This method)
--------------------------------------
# arr = np.random.rand(1000, 2)
10 loops, best of 5: 49.9 ms per loop
10 loops, best of 5: 49.7 ms per loop
10 loops, best of 5: 30.4 ms per loop (~x1.7) The fastest
--------------------------------------
# arr = np.random.rand(10000, 2)
2 loops, best of 5: 4.56 s per loop
2 loops, best of 5: 4.6 s per loop
2 loops, best of 5: 1.85 s per loop (~x2.5) The fastest

How could I get numpy array indices by some conditions

I've come across a problem like this:
suppose I have arrays like this:
a = np.array([[1,2,3,4,5,4,3,2,1],])
label = np.array([[1,0,1,0,0,1,1,0,1],])
I need the index of a at which label is 1 and the value of a is the largest among all positions where label is 1.
That may be confusing; in the above example, the indices where label is 1 are 0, 2, 5, 6, 8, and their corresponding values of a are 1, 3, 4, 3, 1. Among these, 4 is the largest, so I need to get the result 5, which is the index of the number 4 in a. How could I do this with numpy?
Get the indices of the 1s, say as idx, then index into a with it, get the argmax, and finally trace it back to the original order by indexing into idx -
idx = np.flatnonzero(label==1)
out = idx[a[idx].argmax()]
Sample run -
# Assuming inputs to be 1D
In [18]: a
Out[18]: array([1, 2, 3, 4, 5, 4, 3, 2, 1])
In [19]: label
Out[19]: array([1, 0, 1, 0, 0, 1, 1, 0, 1])
In [20]: idx = np.flatnonzero(label==1)
In [21]: idx[a[idx].argmax()]
Out[21]: 5
For a as ints and label as an array of 0s and 1s, we could optimize further by scaling label past the full range of values in a, so that every labeled element outranks every unlabeled one and a single argmax does the job, like so -
(label*(a.max()-a.min()+1) + a).argmax()
Furthermore, if a has positive numbers only, it would simplify to -
(label*(a.max()+1) + a).argmax()
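A quick sanity check of both formulas on the 1D sample input (both should print the expected index 5):
print((label*(a.max()-a.min()+1) + a).argmax())  # 5
print((label*(a.max()+1) + a).argmax())          # 5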
Timings for positive ints largish a -
In [115]: np.random.seed(0)
...: a = np.random.randint(0,10,(100000))
...: label = np.random.randint(0,2,(100000))
In [117]: %%timeit
...: idx = np.flatnonzero(label==1)
...: out = idx[a[idx].argmax()]
1000 loops, best of 3: 592 µs per loop
In [116]: %timeit (label*(a.max()-a.min()+1) + a).argmax()
1000 loops, best of 3: 357 µs per loop
# @coldspeed's soln
In [120]: %timeit np.ma.masked_where(~label.astype(bool), a).argmax()
1000 loops, best of 3: 1.63 ms per loop
# won't work with negative numbers in a
In [119]: %timeit (label*(a.max()+1) + a).argmax()
1000 loops, best of 3: 292 µs per loop
# @klim's soln (won't work with negative numbers in a)
In [121]: %timeit np.argmax(a * (label == 1))
1000 loops, best of 3: 229 µs per loop
You can use masked arrays:
>>> np.ma.masked_where(~label.astype(bool), a).argmax()
5
Here is one of the simplest ways.
>>> np.argmax(a * (label == 1))
5
>>> np.argmax(a * (label == 1), axis=1)
array([5])
Coldspeed's method may take more time.
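One caveat noted in the timings above: a * (label == 1) zeroes out the unlabeled positions, so it breaks when every labeled value of a is negative, because a zero from an unlabeled position then wins the argmax. A minimal illustration (hypothetical data):
a2 = np.array([-3, -1])
label2 = np.array([1, 0])
print(np.argmax(a2 * (label2 == 1)))  # 1, but the correct index is 0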

Getting {ValueError} 'a' must be 1-dimensional for list of lists from np.random.choice

I want to create a toy training set from the XOR function:
xor = [[0, 0, 0],
       [0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
input_x = np.random.choice(a=xor, size=200)
However, this is giving me
{ValueError} 'a' must be 1-dimensional
But, if I add e.g. a number to this list:
xor = [[0, 0, 0],
       [0, 1, 1],
       [1, 0, 1],
       [1, 1, 0],
       1337]  # With this it will work
input_x = np.random.choice(a=xor, size=200)
it starts to work. Why is this the case and how can I make this work without having to add another primitive to the xor list?
In case of an array I would do the following:
xor = np.array([[0,0,0],
                [0,1,1],
                [1,0,1],
                [1,1,0]])
rnd_indices = np.random.choice(len(xor), size=200)
xor_data = xor[rnd_indices]
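As a quick sanity check (a sketch), this draws 200 rows with replacement from the four XOR rows:
print(xor_data.shape)  # (200, 3)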
If you want a random row from xor, you should probably be doing this:
xor[np.random.choice(len(xor),1)]
You can use the random package instead:
import random
input_x = [random.choice(xor) for _ in range(200)]
Interesting! It seems numpy implicitly converts the input to an np.array first. So, for your first input
np.array(xor).shape == (4, 3)
while for the second value
np.array(xor).shape == (5, )
so the second input is seen by numpy as 1d!
So, to pick a random row, just pick a random index and then take the corresponding row:
ind = np.random.choice(len(xor))
random_row = xor[ind]  # for an array, equivalently np.asarray(xor)[ind, :]
With focus on performance, we could use the integer equivalents of those four rows (0b000 = 0, 0b011 = 3, 0b101 = 5, 0b110 = 6), feed those to np.random.choice() to generate 200 such numbers chosen at random, and finally recover the binary rows with bit-shift operations.
Thus, an implementation would be -
def bitshift_approach(N):
    # The four XOR rows encoded as integers: 0, 3, 5, 6
    nums = np.random.choice(np.array([0, 3, 5, 6]), size=N)
    # Unpack the three bits of each draw into an (N, 3) array of 0s and 1s
    return ((nums & (1 << np.arange(3))[:, None]) != 0).T.astype(int)
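To see the decoding step in isolation (a sketch over the four fixed seeds; the bits come out in reversed order, so 3 maps to [1, 1, 0] and 6 to [0, 1, 1], which are still rows of the XOR table, so the sampled set is unchanged):
seeds = np.array([0, 3, 5, 6])
print(((seeds & (1 << np.arange(3))[:, None]) != 0).T.astype(int))
# [[0 0 0]
#  [1 1 0]
#  [1 0 1]
#  [0 1 1]]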
Another approach, very similar to what others have suggested, is to use np.random.choice(len(xor)) to generate the row indices and then use row-indexing to select rows off xor. A slight modification to that is to use np.take with axis=0 to select those rows (without axis=0, np.take would pick elements from the flattened array). With such repeated indices, as is the case here, this should be performant.
Thus, an alternative approach would be -
np.take(xor, np.random.choice(len(xor), size=N), axis=0)
Runtime test -
In [42]: N = 200
In [43]: %timeit xor[np.random.choice(np.arange(len(xor)), size=N)]
...: %timeit xor[np.random.choice(len(xor), size=N)]
...: %timeit bitshift_approach(N)
...: %timeit np.take(xor,np.random.choice(len(xor), size=N), axis=0)
...:
10000 loops, best of 3: 43.3 µs per loop
10000 loops, best of 3: 38.3 µs per loop
10000 loops, best of 3: 59.4 µs per loop
10000 loops, best of 3: 35 µs per loop
In [44]: N = 1000
In [45]: %timeit xor[np.random.choice(np.arange(len(xor)), size=N)]
...: %timeit xor[np.random.choice(len(xor), size=N)]
...: %timeit bitshift_approach(N)
...: %timeit np.take(xor,np.random.choice(len(xor), size=N), axis=0)
...:
10000 loops, best of 3: 69.5 µs per loop
10000 loops, best of 3: 64.7 µs per loop
10000 loops, best of 3: 77.7 µs per loop
10000 loops, best of 3: 38.7 µs per loop
In [46]: N = 10000
In [47]: %timeit xor[np.random.choice(np.arange(len(xor)), size=N)]
...: %timeit xor[np.random.choice(len(xor), size=N)]
...: %timeit bitshift_approach(N)
...: %timeit np.take(xor,np.random.choice(len(xor), size=N), axis=0)
...:
1000 loops, best of 3: 363 µs per loop
1000 loops, best of 3: 351 µs per loop
1000 loops, best of 3: 225 µs per loop
10000 loops, best of 3: 134 µs per loop
You can use random.choice() directly and just run it 200 times to get 200 samples, since np.random.choice() requires a to be 1-dimensional, like ["1", "2", "3"], and can't work with a list of lists or a list of tuples, only a sequence of scalar values.

Partial sum over an array given a list of indices

I have a 2D matrix and I need to sum a subset of the matrix elements, given two lists of indices imp_list and bath_list. Here is what I'm doing right now:
s = 0.0
for i in imp_list:
    for j in bath_list:
        s += K[i,j]
which appears to be very slow. What would be a better solution to perform the sum?
If you're working with large arrays, you should get a huge speed boost by using NumPy's own indexing routines over Python's for loops.
In the general case you can use np.ix_ to select a subarray of the matrix to sum:
K[np.ix_(imp_list, bath_list)].sum()
Note that np.ix_ carries some overhead, so if your two lists contain consecutive or evenly-spaced values, it's worth using regular slicing to index the array instead (see method3() below).
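For instance, with non-contiguous index lists (hypothetical values, given a 2D array K as in the question), np.ix_ builds the open mesh that a plain slice cannot express:
rows = [0, 2, 5]
cols = [1, 4]
s = K[np.ix_(rows, cols)].sum()  # sums the six elements at rows 0, 2, 5 crossed with columns 1, 4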
Here's some data to illustrate the improvements:
K = np.arange(1000000).reshape(1000, 1000)
imp_list = range(100)   # [0, 1, 2, ..., 99]
bath_list = range(200)  # [0, 1, 2, ..., 199]

def method1():
    s = 0
    for i in imp_list:
        for j in bath_list:
            s += K[i,j]
    return s

def method2():
    return K[np.ix_(imp_list, bath_list)].sum()

def method3():
    return K[:100, :200].sum()
Then:
In [80]: method1() == method2() == method3()
Out[80]: True
In [91]: %timeit method1()
10 loops, best of 3: 9.93 ms per loop
In [92]: %timeit method2()
1000 loops, best of 3: 884 µs per loop
In [93]: %timeit method3()
10000 loops, best of 3: 34 µs per loop

Select elements row-wise based on single array

Say I have an array d of size (N, T), out of which I need to select elements using an index array of shape (N,), where the first element gives the column index for the first row, and so on. How would I do that?
For example
>>> d
Out[748]:
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
>>> index
Out[752]: array([5, 6, 1], dtype=int64)
Expected Output:
array([[5],
       [6],
       [2]])
That is, an array containing the element at index 5 of the first row, the element at index 6 of the second row, and the element at index 1 of the third row.
Update
Since I will have a sufficiently large N, I was interested in the speed of the different methods for higher N. With N = 30000:
>>> %timeit np.diag(e.take(index2, axis=1)).reshape(N*3, 1)
1 loops, best of 3: 3.9 s per loop
>>> %timeit e.ravel()[np.arange(e.shape[0])*e.shape[1]+index2].reshape(N*3, 1)
1000 loops, best of 3: 287 µs per loop
Finally, you suggest reshape(). As I want to leave it as general as possible (without knowing N), I instead use [:,np.newaxis] - it seems to increase duration from 287µs to 288µs, which I'll take :)
This might be ugly but more efficient:
>>> d.ravel()[np.arange(d.shape[0])*d.shape[1]+index]
array([5, 6, 2])
edit
As pointed out by @deinonychusaur, the statement above can be written as cleanly as:
d[np.arange(index.size), index]
Here np.arange(index.size) supplies the row indices and index the column indices, so element (i, index[i]) is picked from each row.
There might be nicer ways, but a combo of take, diag and reshape would do:
In [137]: np.diag(d.take(index, axis=1)).reshape(3, 1)
Out[137]:
array([[5],
[6],
[2]])
EDIT
Comparisons with @Emanuele Paolinis' alternative, adding reshape to it to match the sought output:
In [142]: %timeit d.reshape(d.size)[np.arange(d.shape[0])*d.shape[1]+index].reshape(3, 1)
100000 loops, best of 3: 9.51 µs per loop
In [143]: %timeit np.diag(d.take(index, axis=1)).reshape(3, 1)
100000 loops, best of 3: 3.81 µs per loop
In [146]: %timeit d.ravel()[np.arange(d.shape[0])*d.shape[1]+index].reshape(3, 1)
100000 loops, best of 3: 8.56 µs per loop
This method is about twice as fast as both proposed alternatives.
EDIT 2: An even better method
Based on @Emanuele Paolinis' version but with a reduced number of operations, this outperforms all of the above on large arrays (10k rows by 100 columns).
In [199]: %timeit d[(np.arange(index.size), index)].reshape(index.size, 1)
1000 loops, best of 3: 364 µs per loop
In [200]: %timeit d.ravel()[np.arange(d.shape[0])*d.shape[1]+index].reshape(index.size, 1)
100 loops, best of 3: 5.22 ms per loop
So if speed is of essence:
d[(np.arange(index.size), index)].reshape(index.size, 1)
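Per the update in the question, the trailing reshape can also be written with [:, np.newaxis] to stay general without hard-coding the length:
d[np.arange(index.size), index][:, np.newaxis]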
