Removing axis argument from numpy argmin, but still vectorized - python

So I have the following line of code
np.argmin(distances, axis=0)
Here distances is a distance matrix between k centroids and n points, so it's a k x n matrix.
With this line of code I'm trying to find the closest centroid for each point, by taking the argmin along axis 0.
My goal is to have similar vectorized code without the axis argument, as it is not implemented in the fork of numpy I'm using.
Any help would be nice :)

Here's a vectorized one -
def partial_argsort(a):
    idar = np.zeros(a.max()+1, dtype=int)
    idar[a] = np.arange(len(a))
    return idar[np.sort(a)]

def argmin_0(a):
    # Define a scaling array to scale each col such that each col is
    # offset against its previous one
    s = (a.max()+1)*np.arange(a.shape[1])

    # Scale each col, flatten with col-major order. Find global partial-argsort.
    # With the offsetting, those argsort indices would be limited to per-col.
    # Subtract each group of ncols elements based on the offsetting.
    m,n = a.shape
    a1D = (a+s).T.ravel()
    return partial_argsort(a1D)[::m] - m*np.arange(n)
Sample run for verification -
In [442]: np.random.seed(0)
...: a = np.random.randint(11,9999,(1000,1000))
...: idx0 = argmin_0(a)
...: idx1 = a.argmin(0)
...: r = np.arange(len(idx0))
...: print((a[idx0,r] == a[idx1,r]).all())
True
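For the centroid-assignment use case in the question, a minimal usage sketch (hypothetical data; note that argmin_0 as written expects non-negative integer entries, as in the verification run above, so float distances would first need to be ranked or scaled to integers):
import numpy as np
np.random.seed(1)
distances = np.random.randint(0, 1000, (4, 10))   # k x n integer distance matrix
closest = argmin_0(distances)                     # index of the nearest centroid per point
assert (distances[closest, np.arange(10)] == distances.min(axis=0)).all()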

Related

Subtracting Two dimensional arrays using numpy broadcasting

I'm new to numpy in general, so this is an easy question; however, I'm clueless as to how to solve it.
I'm trying to implement the K nearest neighbor algorithm for classification of a data set.
There are two arrays named new_points and points that respectively have the shapes (30,4)
and (120,4) (with 4 being the total number of properties of each element),
so I'm trying to calculate the distance between each new point and all old points using numpy broadcasting.
def calc_no_loop(new_points, points):
    return np.sum((new_points - points)**2, axis=1)
# doesn't work; here is the log:
ValueError: operands could not be broadcast together with shapes (30,4) (120,4)
However, as per the rules of broadcasting, two arrays of shapes (30,4) and (120,4) are incompatible,
so I would appreciate any insight on how to solve this (using .reshape perhaps - not sure).
Please note that I have already implemented the same function using one and two loops, but can't manage it without any loop.
def calc_two_loops(new_points, points):
    m, n = len(new_points), len(points)
    d = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            d[i, j] = np.sum((new_points[i] - points[j])**2)
    return d

def calc_one_loop(new_points, points):
    m, n = len(new_points), len(points)
    d = np.zeros((m, n))
    print(d)
    for i in range(m):
        d[i] = np.sum((new_points[i] - points)**2)
    return d
Let's create an example smaller in size:
nNew = 3; nOld = 5 # Number of new / old points
# New points
new_points = np.arange(100, 100 + nNew * 4).reshape(nNew, 4)
# Old points
points = np.arange(10, 10 + nOld * 8, 2).reshape(nOld, 4)
To compute the differences alone, run:
dfr = new_points[:, np.newaxis, :] - points[np.newaxis, :, :]
So far we have differences in each property of each point (every new point with every old point).
The shape of dfr is (3, 5, 4):
first dimension: the index of the new point,
second dimension: the index of the old point,
third dimension: the difference in each property.
Then, to sum squares of differences by points, run:
d = np.power(dfr, 2).sum(axis=2)
and this is your result.
For my sample data, the result is:
array([[31334, 25926, 21030, 16646, 12774],
[34230, 28566, 23414, 18774, 14646],
[37254, 31334, 25926, 21030, 16646]], dtype=int32)
So you have 30 new points and 120 old points; if I understand you correctly, you want a (120, 30)-shaped array of distances as the result.
You could do
import numpy as np
points = np.random.random(120*4).reshape(120,4)
new_points = np.random.random(30*4).reshape(30,4)
def calc_no_loop(new_points, points):
    res = np.zeros([len(points[:,0]), len(new_points[:,0])])
    for idx in range(len(points[:,0])):
        res[idx,:] = np.sum((points[idx,:] - new_points)**2, axis=1)
    return np.sqrt(res)
test = calc_no_loop(new_points,points)
print(np.shape(test))
print(test)
Which gives
(120, 30)
[[0.67166838 0.78096694 0.94983683 ... 1.00960301 0.48076185 0.56419991]
[0.88156338 0.54951826 0.73919191 ... 0.87757896 0.76305462 0.52486626]
[0.85271938 0.56085692 0.73063341 ... 0.97884167 0.90509791 0.7505591 ]
...
[0.53968258 0.64514941 0.89225849 ... 0.99278462 0.31861253 0.44615026]
[0.51647526 0.58611128 0.83298535 ... 0.86669406 0.64931403 0.71517123]
[1.08515826 0.64626221 0.6898687 ... 0.96882542 1.08075076 0.80144746]]
But from your function name above I get the notion that you do not want a loop? Then you could do this instead:
def calc_no_loop(new_points, points):
    new_points1 = np.repeat(new_points[np.newaxis,...], len(points), axis=0)
    points1 = np.repeat(points[:,np.newaxis,:], len(new_points), axis=1)
    return np.sqrt(np.sum((new_points1 - points1)**2, axis=2))
test = calc_no_loop(new_points,points)
print(np.shape(test))
print(test)
which has output:
(120, 30)
[[0.67166838 0.78096694 0.94983683 ... 1.00960301 0.48076185 0.56419991]
[0.88156338 0.54951826 0.73919191 ... 0.87757896 0.76305462 0.52486626]
[0.85271938 0.56085692 0.73063341 ... 0.97884167 0.90509791 0.7505591 ]
...
[0.53968258 0.64514941 0.89225849 ... 0.99278462 0.31861253 0.44615026]
[0.51647526 0.58611128 0.83298535 ... 0.86669406 0.64931403 0.71517123]
[1.08515826 0.64626221 0.6898687 ... 0.96882542 1.08075076 0.80144746]]
i.e. the same result. Note that I added the np.sqrt() into the result which you may have forgotten in your example above.
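As a side note, the same (120, 30) result can be obtained with broadcasting alone, without materialising the repeated copies that np.repeat creates; a minimal sketch reusing the points/new_points arrays from above:
# points: (120, 4), new_points: (30, 4)
# (120, 1, 4) - (1, 30, 4) broadcasts to (120, 30, 4)
diff = points[:, np.newaxis, :] - new_points[np.newaxis, :, :]
dists = np.sqrt((diff**2).sum(axis=2))   # shape (120, 30)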

What is a fast(er) way to get the center points of objects represented in a 2D numpy array?

I have an image mask stored as a 2D numpy array where the values indicate the presence of objects that have been segmented in the image (0 = no object, 1..n = object 1 through n). I want to get a single coordinate for each object representing the center of the object. It doesn't have to be a perfectly accurate centroid or center of gravity. I'm just taking the mean of the x and y indices of all cells in the array that contain each object. I'm wondering if there's a faster way to do this than my current method:
for obj in np.unique(mask):
    if obj == 0:
        continue
    x, y = np.mean(np.where(mask == obj), axis=1)
Here is a reproducible example:
import numpy as np
mask = np.array([
[0,0,0,0,0,2,0,0,0,0],
[0,1,1,0,2,2,2,0,0,0],
[0,0,1,0,2,2,2,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,3,3,3,0,0,4,0,0,0],
[0,0,0,0,0,4,4,4,0,0],
[0,0,0,0,0,0,4,0,0,0],
])
points = []
for obj in np.unique(mask):
    if obj == 0:
        continue
    points.append(np.mean(np.where(mask == obj), axis=1))
print(points)
This outputs:
[array([1.33333333, 1.66666667]),
array([1.28571429, 5. ]),
array([4., 2.]),
array([5., 6.])]
I came up with another way to do it that seems to be about 3x faster:
import numpy as np
mask = np.array([
[0,0,0,0,0,2,0,0,0,0],
[0,1,1,0,2,2,2,0,0,0],
[0,0,1,0,2,2,2,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,3,3,3,0,0,4,0,0,0],
[0,0,0,0,0,4,4,4,0,0],
[0,0,0,0,0,0,4,0,0,0],
])
flat = mask.flatten()
split = np.unique(np.sort(flat), return_index=True)[1]
points = []
for inds in np.split(flat.argsort(), split)[2:]:
    points.append(np.array(np.unravel_index(inds, mask.shape)).mean(axis=1))
print(points)
I wonder if the for loop can be replaced with a numpy operation which would likely be even faster.
You can copy this answer (give them an upvote too if it works for you) and use sparse matrices instead of np arrays. However, this only proves to be quicker for large arrays, with the speed-up increasing the larger your array is:
import numpy as np, time
from scipy.sparse import csr_matrix

def compute_M(data):
    cols = np.arange(data.size)
    return csr_matrix((cols, (np.ravel(data), cols)),
                      shape=(data.max() + 1, data.size))

def get_indices_sparse(data, M):
    #M = compute_M(data)
    return [np.mean(np.unravel_index(row.data, data.shape), 1) for R, row in enumerate(M) if R > 0]

def gen_random_mask(C, n, m):
    mask = np.zeros([n, m], int)
    for i in range(C):
        x = np.random.randint(n)
        y = np.random.randint(m)
        mask[x:x+np.random.randint(n-x), y:y+np.random.randint(m-y)] = i
    return mask

N = 100
C = 4
for S in [10, 100, 1000, 10000]:
    mask = gen_random_mask(C, S, S)
    print('Time for size {:d}x{:d}:'.format(S, S))
    s = time.time()
    for _ in range(N):
        points = []
        for obj in np.unique(mask):
            if obj == 0:
                continue
            points.append(np.mean(np.where(mask == obj), axis=1))
        points_np = np.array(points)
    print('NP: {:f}'.format((time.time() - s)/N))
    mask_s = compute_M(mask)
    s = time.time()
    for _ in range(100):
        points = get_indices_sparse(mask, mask_s)
    print('Sparse: {:f}'.format((time.time() - s)/N))
    np.testing.assert_equal(points, points_np)
Which results in the timings of:
Time for size 10x10:
NP: 0.000066
Sparse: 0.000226
Time for size 100x100:
NP: 0.000207
Sparse: 0.000253
Time for size 1000x1000:
NP: 0.018662
Sparse: 0.004472
Time for size 10000x10000:
NP: 2.545973
Sparse: 0.501061
The problem likely comes from np.where(mask == obj), which iterates over the whole mask array over and over. This is a problem when there are a lot of objects. You can solve this efficiently using a group-by strategy. However, Numpy does not yet provide such an operation. You can implement it using a sort followed by a split, but a sort is generally not efficient. An alternative method is to ask Numpy to return the inverse index in the unique call so that you can then accumulate the values per object (like a reduce-by-key where the reduction operator is an addition and the keys are the object integers). The mean can be obtained with a simple division at the end.
objects, inverts, counts = np.unique(mask, return_counts=True, return_inverse=True)
# Reduction by object
x = np.full(len(objects), 0.0)
y = np.full(len(objects), 0.0)
xPos = np.repeat(np.arange(mask.shape[0]), mask.shape[1])
yPos = np.tile(np.arange(mask.shape[1]), reps=mask.shape[0])
np.add.at(x, inverts, xPos)
np.add.at(y, inverts, yPos)
# Compute the final mean from the sum
x /= counts
y /= counts
# Discard the first item (when obj == 0)
x = x[1:]
y = y[1:]
If you need something faster, you could use Numba and perform the reduction manually (and possibly in parallel).
EDIT: if you really need a list as output, you can use points = list(np.stack([x, y]).T), but using lists instead of Numpy arrays is rather slow (and not memory efficient either).
Because the mask values number the segments they can be directly used as indices into numpy arrays. Combined with Cython this can be used to achieve a strong speed-up.
In Jupyter start with loading Cython:
%load_ext Cython
then use Python magic and a single pass over the whole array to calculate the means:
%%cython -a
import cython
import numpy as np
cimport numpy as np

@cython.boundscheck(False) # turn off bounds-checking for entire function
@cython.wraparound(False)  # turn off negative index wrapping for entire function
def calc_xy_mean4(int[:,:] mask, int number_of_maskvalues):
    cdef int[:] sum_x = np.zeros(number_of_maskvalues, dtype='int')
    cdef int[:] sum_y = np.zeros(number_of_maskvalues, dtype='int')
    n = np.zeros(number_of_maskvalues, dtype='int')
    cdef int[:] n_mv = n
    mean_x = np.zeros(number_of_maskvalues, dtype='float')
    mean_y = np.zeros(number_of_maskvalues, dtype='float')
    cdef double[:] mean_x_mv = mean_x
    cdef double[:] mean_y_mv = mean_y

    cdef int x_max = mask.shape[0]
    cdef int y_max = mask.shape[1]
    cdef int segment_index
    cdef int x
    cdef int y

    for x in range(x_max):
        for y in range(y_max):
            segment_index = mask[x,y]
            n_mv[segment_index] += 1
            sum_x[segment_index] += x
            sum_y[segment_index] += y

    for segment_index in range(number_of_maskvalues):
        mean_x_mv[segment_index] = sum_x[segment_index]/n[segment_index]
        mean_y_mv[segment_index] = sum_y[segment_index]/n[segment_index]
    return mean_x, mean_y, n
and call it with timeit magic
mask = np.array([
[0,0,0,0,0,2,0,0,0,0],
[0,1,1,0,2,2,2,0,0,0],
[0,0,1,0,2,2,2,0,0,0],
[0,0,0,0,0,0,0,0,0,0],
[0,3,3,3,0,0,4,0,0,0],
[0,0,0,0,0,4,4,4,0,0],
[0,0,0,0,0,0,4,0,0,0],
])
%timeit calc_xy_mean4(mask, 5)
This Cython solution is on my machine 9 times faster than the original code.
6.32 µs ± 18.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
and if we run the same instruction without the timeit magic:
calc_xy_mean4(mask, 5)
we obtain as output:
(array([3.07692308, 1.33333333, 1.28571429, 4. , 5. ]),
array([4.59615385, 1.66666667, 5. , 2. , 6. ]),
array([52, 3, 7, 3, 5]))
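For completeness, scipy.ndimage also ships a labelled centroid routine that covers this exact task; a minimal sketch (assuming SciPy is available, using the small mask from above):
from scipy import ndimage
labels = np.unique(mask)[1:]   # skip the background value 0
centers = ndimage.center_of_mass(np.ones_like(mask), labels=mask, index=labels)
print(centers)                 # list of (row, col) centroids, one per object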

Efficient ways of calculating affinity only for local neighbourhood in a large graph

I am relatively new to python and numpy and am trying to cluster a dense matrix of floating point numbers with dimensions 256x256 using spectral clustering. Since the affinity matrix will be of size 65536x65536, a full affinity matrix cannot be computed (due to memory limitations). As such, I am currently calculating the affinity between a given matrix entry and its 5x5 local neighbourhood, and building a sparse graph (in 3-tuple representation).
To do so, I am using for loops (basically, a sliding window approach), which I think is not the most efficient way of doing this.
import numpy as np

def getAffinity(f1, f2):
    return np.exp(-np.linalg.norm(np.absolute(f1 - f2)) / 2.1)

G = np.arange(256*256).reshape((256,256))
dim1 = 256 # Dimension 1 of matrix
dim2 = 256 # Dimension 2 of matrix
values = np.zeros(1623076, dtype=np.float32) # To hold affinities
rows = np.zeros(1623076, dtype=np.int32) # To hold row index
cols = np.zeros(1623076, dtype=np.int32) # To hold column index
index = 0 # Running index into the three arrays above
for i in range(dim1):
    for j in range(dim2):
        current = G[i, j]
        for k in range(np.maximum(0, i-2), np.minimum(dim1, i+3)): # traverse rows
            for l in range(np.maximum(0, j-2), np.minimum(dim2, j+3)): # traverse columns
                rows[index] = i*dim1 + j
                cols[index] = k*dim1 + l
                values[index] = getAffinity(current, G[k, l])
                index += 1
I was wondering whether there are any other efficient ways of achieving the same goal.
Here is a sparse matrix approach. It is >800x faster than the loopy code.
import numpy as np
from scipy import sparse
from time import perf_counter as pc
T = []
T.append(pc())
def getAffinity(f1, f2):
    return np.exp(-np.linalg.norm(np.absolute(f1 - f2)) / 2.1)

G = 2*np.arange(256*256).reshape((256,256))
dim1 = 256 # Dimension 1 of matrix
dim2 = 256 # Dimension 2 of matrix
values = np.zeros(1623076, dtype=np.float32) # To hold affinities
rows = np.zeros(1623076, dtype=np.int32) # To hold row index
cols = np.zeros(1623076, dtype=np.int32) # To hold column index
index = 0 # Running index into the three arrays above
for i in range(dim1):
    for j in range(dim2):
        current = G[i, j]
        for k in range(np.maximum(0, i-2), np.minimum(dim1, i+3)): # traverse rows
            for l in range(np.maximum(0, j-2), np.minimum(dim2, j+3)): # traverse columns
                rows[index] = i*dim1 + j
                cols[index] = k*dim1 + l
                values[index] = getAffinity(current, G[k, l])
                index += 1
T.append(pc())
affs_OP = sparse.coo_matrix((values,(rows,cols))).tocsr()
import scipy.sparse as sp

def getAffinity(f1, f2): # similar to @PaulPanzer, I don't think OP is right
    return np.exp(-np.abs(f1 - f2) / 2.1)

def affinity_block(dim = 256, dist = 2):
    i = np.arange(-dist, dist+1)
    init_block = sp.dia_matrix((np.ones((i.size, dim)), i), (dim, dim))
    out = sp.kron(init_block, init_block).tocoo()
    out.data = getAffinity(Gf[out.row], Gf[out.col])
    return out
T.append(pc())
Gf = G.ravel()
offsets = np.concatenate((np.mgrid[1:3,-2:3].reshape(2,-1).T,np.mgrid[:1,1:3].reshape(2,-1).T), axis=0)
def make_diag(yo, xo):
    o = 256*yo + xo
    diag = np.exp(-np.abs(Gf[o:]-Gf[:-o])/2.1)
    if xo>0:
        diag[:xo-256].reshape(-1,256)[:,-xo:] = 0
    elif xo<0:
        diag[:xo].reshape(-1,256)[:,:-xo] = 0
        diag[xo:] = 0
    return diag
diags = [make_diag(*o) for o in offsets]
offsets = np.sum(offsets*[256,1], axis=1)
affs_pp = sparse.diags([*diags,[np.ones(256*256)],*diags],np.concatenate([offsets,[0],-offsets]))
T.append(pc())
affs_df = affinity_block()
T.append(pc())
print("OP: {:.3f} s convert OP to sparse matrix: {:.3f} s pp {:.3f} s df: {:.3f} s".format(*np.diff(T)))
diff = affs_pp-affs_OP
diff *= diff.sign()
md = diff.max()
print(f"max deviation pp-OP: {md}")
print(f"number of different entries pp-df: {(affs_pp-affs_df).nnz}")
Sample run:
OP: 23.392 s convert OP to sparse matrix: 0.020 s pp 0.025 s df: 0.093 s
max deviation pp-OP: 2.0616356788405454e-08
number of different entries pp-df: 0
A bit of explanation, 1D first to keep it simple. Let's imagine an actually sliding window, so we can use time as an intuitive axis:
space
+-------------->
|
t | xo... x: window center
i | oxo.. o: window off center
m | .oxo. .: non window
e | ..oxo
| ...ox
v
Time here is actually equivalent to space because we move with constant speed. We can now see that all the window points can be described as three diagonals. The offsets are 0, 1 and -1, but note that because the affinities are symmetric and the one for offset 0 is trivial, we only need to calculate them for offset 1.
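To make this 1D picture concrete (my own sketch, not from the original answers), the three diagonals can be assembled directly with scipy.sparse.diags for a small 1D signal g:
import numpy as np
from scipy import sparse
g = np.random.rand(8)                          # small 1D "image"
off1 = np.exp(-np.abs(g[1:] - g[:-1]) / 2.1)   # affinity with the right neighbour
main = np.ones(len(g))                         # affinity of each point with itself
aff_1d = sparse.diags([off1, main, off1], offsets=[-1, 0, 1])
print(aff_1d.toarray().round(3))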
Now let's skip to 2D; the smallest example we can do is a 3x3 window in a 4x4 array. In row-major order this looks like:
xo..oo..........
oxo.ooo.........
.oxo.ooo........
..ox..oo........
oo..xo..oo......
ooo.oxo.ooo.....
.ooo.oxo.ooo....
..oo..ox..oo....
....oo..xo..oo..
....ooo.oxo.ooo.
.....ooo.oxo.ooo
......oo..ox..oo
........oo..xo..
........ooo.oxo.
.........ooo.oxo
..........oo..ox
The relevant offsets are (0,1), (1,-1), (1,0), (1,1), or in row-major order 0x4+1 = 1, 1x4-1 = 3, 1x4+0 = 4, 1x4+1 = 5. Also note that most of these diagonals are not complete; the missing bits are explained by the row-major wrap-around: at z = (y, x) with x = 3 the right neighbor z+1 is not actually a right neighbor (y, x+1); instead, because of the line jump, it is (y+1, 0). The if-else clause in the code above blanks the right bits of each diagonal.
@DanielF's strategy is similar but takes advantage of the block structure evident in the figure.
xo.. oo.. .... ....
oxo. ooo. .... ....
.oxo .ooo .... ....
..ox ..oo .... ....
oo.. xo.. oo.. ....
ooo. oxo. ooo. ....
.ooo .oxo .ooo ....
..oo ..ox ..oo ....
.... oo.. xo.. oo..
.... ooo. oxo. ooo.
.... .ooo .oxo .ooo
.... ..oo ..ox ..oo
.... .... oo.. xo..
.... .... ooo. oxo.
.... .... .ooo .oxo
.... .... ..oo ..ox
This seems to be a more elegant and extensible, albeit a bit (4x) slower, way to do the same thing as @PaulPanzer:
import scipy.sparse as sp
from functools import reduce

def getAffinity(f1, f2): # similar to @PaulPanzer, I don't think OP is right
    return np.exp(-np.abs(f1 - f2) / 2.1)

def affinity_block(G, dist = 2):
    Gf = G.ravel()
    i = np.arange(-dist, dist+1)
    init_blocks = [1]
    for dim in G.shape:
        init_blocks.append(sp.dia_matrix((np.ones((i.size, dim)), i), (dim, dim)))
    out = reduce(sp.kron, init_blocks).tocoo()
    out.data = getAffinity(Gf[out.row], Gf[out.col])
    return out
This allows non-square G matrices, and higher dimensions.
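A brief usage sketch for this version (the small G and the shape check are my own illustration, not part of the original answer):
import numpy as np
G = np.arange(16, dtype=float).reshape(4, 4)
aff = affinity_block(G, dist=1)   # 3x3 neighbourhood this time
print(aff.shape)                  # (16, 16): one row/column per pixel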

numpy how to find index of nearest value according to a threshold value of multi dimensional array? [duplicate]

How do I find the nearest value in a numpy array? Example:
np.find_nearest(array, value)
import numpy as np
def find_nearest(array, value):
    array = np.asarray(array)
    idx = (np.abs(array - value)).argmin()
    return array[idx]
Example usage:
array = np.random.random(10)
print(array)
# [ 0.21069679 0.61290182 0.63425412 0.84635244 0.91599191 0.00213826
# 0.17104965 0.56874386 0.57319379 0.28719469]
print(find_nearest(array, value=0.5))
# 0.568743859261
If your array is sorted and is very large, this is a much faster solution:
import math
def find_nearest(array, value):
    idx = np.searchsorted(array, value, side="left")
    if idx > 0 and (idx == len(array) or math.fabs(value - array[idx-1]) < math.fabs(value - array[idx])):
        return array[idx-1]
    else:
        return array[idx]
This scales to very large arrays. You can easily modify the above to sort in the method if you can't assume that the array is already sorted. It’s overkill for small arrays, but once they get large this is much faster.
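As a sketch of that "sort inside the method" modification (my own addition, assuming array is a NumPy array):
def find_nearest_unsorted(array, value):
    order = np.argsort(array)            # sort once up front
    sorted_array = array[order]
    idx = np.searchsorted(sorted_array, value, side="left")
    if idx > 0 and (idx == len(array) or
                    abs(value - sorted_array[idx-1]) < abs(value - sorted_array[idx])):
        idx -= 1
    return sorted_array[idx]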
With slight modification, the answer above works with arrays of arbitrary dimension (1d, 2d, 3d, ...):
def find_nearest(a, a0):
    "Element in nd array `a` closest to the scalar value `a0`"
    idx = np.abs(a - a0).argmin()
    return a.flat[idx]
Or, written as a single line:
a.flat[np.abs(a - a0).argmin()]
Summary of answer: If one has a sorted array then the bisection code (given below) performs the fastest. ~100-1000 times faster for large arrays, and ~2-100 times faster for small arrays. It does not require numpy either.
If you have an unsorted array then if array is large, one should consider first using an O(n logn) sort and then bisection, and if array is small then method 2 seems the fastest.
First you should clarify what you mean by nearest value. Often one wants the interval in an abscissa, e.g. array=[0,0.7,2.1], value=1.95, answer would be idx=1. This is the case that I suspect you need (otherwise the following can be modified very easily with a followup conditional statement once you find the interval). I will note that the optimal way to perform this is with bisection (which I will provide first - note it does not require numpy at all and is faster than using numpy functions because they perform redundant operations). Then I will provide a timing comparison against the others presented here by other users.
Bisection:
def bisection(array, value):
    '''Given an ``array``, and given a ``value``, returns an index j such that ``value`` is between array[j]
    and array[j+1]. ``array`` must be monotonic increasing. j=-1 or j=len(array) is returned
    to indicate that ``value`` is out of range below and above respectively.'''
    n = len(array)
    if (value < array[0]):
        return -1
    elif (value > array[n-1]):
        return n
    jl = 0   # Initialize lower
    ju = n-1 # and upper limits.
    while (ju-jl > 1):        # If we are not yet done,
        jm = (ju+jl) >> 1     # compute a midpoint with a bitshift
        if (value >= array[jm]):
            jl = jm           # and replace either the lower limit
        else:
            ju = jm           # or the upper limit, as appropriate.
        # Repeat until the test condition is satisfied.
    if (value == array[0]):   # edge cases at bottom
        return 0
    elif (value == array[n-1]): # and top
        return n-1
    else:
        return jl
Now I'll define the code from the other answers, they each return an index:
import math
import numpy as np

def find_nearest1(array, value):
    idx, val = min(enumerate(array), key=lambda x: abs(x[1]-value))
    return idx

def find_nearest2(array, values):
    indices = np.abs(np.subtract.outer(array, values)).argmin(0)
    return indices

def find_nearest3(array, values):
    values = np.atleast_1d(values)
    indices = np.abs(np.int64(np.subtract.outer(array, values))).argmin(0)
    out = array[indices]
    return indices

def find_nearest4(array, value):
    idx = (np.abs(array-value)).argmin()
    return idx

def find_nearest5(array, value):
    idx_sorted = np.argsort(array)
    sorted_array = np.array(array[idx_sorted])
    idx = np.searchsorted(sorted_array, value, side="left")
    if idx >= len(array):
        idx_nearest = idx_sorted[len(array)-1]
    elif idx == 0:
        idx_nearest = idx_sorted[0]
    else:
        if abs(value - sorted_array[idx-1]) < abs(value - sorted_array[idx]):
            idx_nearest = idx_sorted[idx-1]
        else:
            idx_nearest = idx_sorted[idx]
    return idx_nearest

def find_nearest6(array, value):
    xi = np.argmin(np.abs(np.ceil(array[None].T - value)), axis=0)
    return xi
Now I'll time the codes:
Note methods 1,2,4,5 don't correctly give the interval. Methods 1,2,4 round to nearest point in array (e.g. >=1.5 -> 2), and method 5 always rounds up (e.g. 1.45 -> 2). Only methods 3, and 6, and of course bisection give the interval properly.
array = np.arange(100000)
val = array[50000]+0.55
print( bisection(array,val))
%timeit bisection(array,val)
print( find_nearest1(array,val))
%timeit find_nearest1(array,val)
print( find_nearest2(array,val))
%timeit find_nearest2(array,val)
print( find_nearest3(array,val))
%timeit find_nearest3(array,val)
print( find_nearest4(array,val))
%timeit find_nearest4(array,val)
print( find_nearest5(array,val))
%timeit find_nearest5(array,val)
print( find_nearest6(array,val))
%timeit find_nearest6(array,val)
(50000, 50000)
100000 loops, best of 3: 4.4 µs per loop
50001
1 loop, best of 3: 180 ms per loop
50001
1000 loops, best of 3: 267 µs per loop
[50000]
1000 loops, best of 3: 390 µs per loop
50001
1000 loops, best of 3: 259 µs per loop
50001
1000 loops, best of 3: 1.21 ms per loop
[50000]
1000 loops, best of 3: 746 µs per loop
For a large array bisection gives 4us compared to next best 180us and longest 1.21ms (~100 - 1000 times faster). For smaller arrays it's ~2-100 times faster.
Here is a fast vectorized version of @Demitri's solution if you have many values to search for (values can be a multi-dimensional array):
# `array` should be sorted
def get_closest(array, values):
    # make sure array is a numpy array
    array = np.array(array)

    # get insert positions
    idxs = np.searchsorted(array, values, side="left")

    # find indexes where previous index is closer
    prev_idx_is_less = ((idxs == len(array)) | (np.fabs(values - array[np.maximum(idxs-1, 0)]) < np.fabs(values - array[np.minimum(idxs, len(array)-1)])))
    idxs[prev_idx_is_less] -= 1

    return array[idxs]
Benchmarks
> 100 times faster than using a for loop with @Demitri's solution
>>> %timeit ar=get_closest(np.linspace(1, 1000, 100), np.random.randint(0, 1050, (1000, 1000)))
139 ms ± 4.04 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit ar=[find_nearest(np.linspace(1, 1000, 100), value) for value in np.random.randint(0, 1050, 1000*1000)]
took 21.4 seconds
Here's an extension to find the nearest vector in an array of vectors.
import numpy as np
def find_nearest_vector(array, value):
    # Euclidean norm of each difference vector (array - value)
    idx = np.array([np.linalg.norm([x, y]) for (x, y) in array - value]).argmin()
    return array[idx]

A = np.random.random((10,2))*100
""" A = array([[ 34.19762933, 43.14534123],
[ 48.79558706, 47.79243283],
[ 38.42774411, 84.87155478],
[ 63.64371943, 50.7722317 ],
[ 73.56362857, 27.87895698],
[ 96.67790593, 77.76150486],
[ 68.86202147, 21.38735169],
[ 5.21796467, 59.17051276],
[ 82.92389467, 99.90387851],
[ 6.76626539, 30.50661753]])"""
pt = [6, 30]
print(find_nearest_vector(A, pt))
# array([ 6.76626539, 30.50661753])
If you don't want to use numpy this will do it:
def find_nearest(array, value):
    n = [abs(i-value) for i in array]
    idx = n.index(min(n))
    return array[idx]
Here's a version that will handle a non-scalar "values" array:
import numpy as np
def find_nearest(array, values):
    indices = np.abs(np.subtract.outer(array, values)).argmin(0)
    return array[indices]
Or a version that returns a numeric type (e.g. int, float) if the input is scalar:
def find_nearest(array, values):
    values = np.atleast_1d(values)
    indices = np.abs(np.subtract.outer(array, values)).argmin(0)
    out = array[indices]
    return out if len(out) > 1 else out[0]
Here is a version with scipy for @Ari Onasafari's answer "to find the nearest vector in an array of vectors":
In [1]: from scipy import spatial
In [2]: import numpy as np
In [3]: A = np.random.random((10,2))*100
In [4]: A
Out[4]:
array([[ 68.83402637, 38.07632221],
[ 76.84704074, 24.9395109 ],
[ 16.26715795, 98.52763827],
[ 70.99411985, 67.31740151],
[ 71.72452181, 24.13516764],
[ 17.22707611, 20.65425362],
[ 43.85122458, 21.50624882],
[ 76.71987125, 44.95031274],
[ 63.77341073, 78.87417774],
[ 8.45828909, 30.18426696]])
In [5]: pt = [6, 30] # <-- the point to find
In [6]: A[spatial.KDTree(A).query(pt)[1]] # <-- the nearest point
Out[6]: array([ 8.45828909, 30.18426696])
#how it works!
In [7]: distance,index = spatial.KDTree(A).query(pt)
In [8]: distance # <-- The distances to the nearest neighbors
Out[8]: 2.4651855048258393
In [9]: index # <-- The locations of the neighbors
Out[9]: 9
#then
In [10]: A[index]
Out[10]: array([ 8.45828909, 30.18426696])
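As a side note (not in the original answer), when there are many query points you would typically build the tree once and query it with the whole array of points:
In [11]: pts = np.random.random((1000, 2))*100
In [12]: tree = spatial.KDTree(A)              # build the tree once
In [13]: distances, indices = tree.query(pts)  # vectorized query for all points
In [14]: A[indices].shape                      # nearest stored vector for each query
Out[14]: (1000, 2)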
For large arrays, the (excellent) answer given by @Demitri is far faster than the answer currently marked as best. I've adapted his exact algorithm in the following two ways:
The function below works whether or not the input array is sorted.
The function below returns the index of the input array corresponding to the closest value, which is somewhat more general.
Note that the function below also handles a specific edge case that would lead to a bug in the original function written by @Demitri. Otherwise, my algorithm is identical to his.
def find_idx_nearest_val(array, value):
    idx_sorted = np.argsort(array)
    sorted_array = np.array(array[idx_sorted])
    idx = np.searchsorted(sorted_array, value, side="left")
    if idx >= len(array):
        idx_nearest = idx_sorted[len(array)-1]
    elif idx == 0:
        idx_nearest = idx_sorted[0]
    else:
        if abs(value - sorted_array[idx-1]) < abs(value - sorted_array[idx]):
            idx_nearest = idx_sorted[idx-1]
        else:
            idx_nearest = idx_sorted[idx]
    return idx_nearest
All the answers are beneficial for gathering the information needed to write efficient code. However, I have written a small Python script to compare the approaches in various cases. It performs best if the provided array is sorted. If one searches for the index of the point nearest to a single specified value, then the bisect module is the most time-efficient. When one searches for the indices corresponding to an array of values, numpy's searchsorted is the most efficient.
import numpy as np
import bisect
xarr = np.random.rand(int(1e7))
srt_ind = xarr.argsort()
xar = xarr.copy()[srt_ind]
xlist = xar.tolist()
bisect.bisect_left(xlist, 0.3)
In [63]: %time bisect.bisect_left(xlist, 0.3)
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 22.2 µs
np.searchsorted(xar, 0.3, side="left")
In [64]: %time np.searchsorted(xar, 0.3, side="left")
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 98.9 µs
randpts = np.random.rand(1000)
np.searchsorted(xar, randpts, side="left")
%time np.searchsorted(xar, randpts, side="left")
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 1.2 ms
If the per-query cost scaled linearly (the multiplicative rule), 1000 separate searches would take ~100 ms; the single vectorized searchsorted call takes ~1.2 ms, which is roughly 83x faster.
I think the most pythonic way would be:
num = 65 # Input number
array = np.random.random((10))*100 # Given array
nearest_idx = np.where(abs(array-num)==abs(array-num).min())[0] # If you want the index of the element of array (array) nearest to the the given number (num)
nearest_val = array[abs(array-num)==abs(array-num).min()] # If you directly want the element of array (array) nearest to the given number (num)
This is the basic code. You can wrap it in a function if you want, as sketched below.
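A possible function-wrapped form of the same idea (just a sketch of the wrapping suggested above):
def find_nearest(array, num):
    diff = abs(array - num)
    nearest_idx = np.where(diff == diff.min())[0]   # index/indices of the nearest element(s)
    nearest_val = array[diff == diff.min()]         # the nearest value(s) themselves
    return nearest_idx, nearest_val

idx, val = find_nearest(np.random.random(10)*100, 65)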
This is a vectorized version of unutbu's answer:
def find_nearest(array, values):
    array = np.asarray(array)

    # the last dim must be 1 to broadcast in (array - values) below.
    values = np.expand_dims(values, axis=-1)

    indices = np.abs(array - values).argmin(axis=-1)

    return array[indices]

import matplotlib.pyplot as plt
image = plt.imread('example_3_band_image.jpg')
print(image.shape)  # should be (nrows, ncols, 3)

quantiles = np.linspace(0, 255, num=2 ** 2, dtype=np.uint8)
quantiled_image = find_nearest(quantiles, image)
print(quantiled_image.shape)  # should be (nrows, ncols, 3)
Maybe helpful for ndarrays:
def find_nearest(X, value):
    return X[np.unravel_index(np.argmin(np.abs(X - value)), X.shape)]
For 2d array, to determine the i, j position of nearest element:
import numpy as np
def find_nearest(a, a0):
    idx = (np.abs(a - a0)).argmin()
    w = a.shape[1]
    i = idx // w
    j = idx - i * w
    return a[i,j], i, j
Here is a version that works with 2D arrays, using scipy's cdist function if the user has it, and a simpler distance calculation if they don't.
By default, the output is the index that is closest to the value you input, but you can change that with the output keyword to be one of 'index', 'value', or 'both', where 'value' outputs array[index] and 'both' outputs index, array[index].
For very large arrays, you may need to use kind='euclidean', as the default scipy cdist function may run out of memory.
This is maybe not the absolute fastest solution, but it is quite close.
def find_nearest_2d(array, value, kind='cdist', output='index'):
    # 'array' must be a 2D array
    # 'value' must be a 1D array with 2 elements
    # 'kind' defines what method to use to calculate the distances. Can choose one
    #    of 'cdist' (default) or 'euclidean'. Choose 'euclidean' for very large
    #    arrays. Otherwise, cdist is much faster.
    # 'output' defines what the output should be. Can be 'index' (default) to return
    #    the index of the array that is closest to the value, 'value' to return the
    #    value that is closest, or 'both' to return index, array[index]
    import numpy as np

    if kind == 'cdist':
        try: from scipy.spatial.distance import cdist
        except ImportError:
            print("Warning (find_nearest_2d): Could not import cdist. Reverting to simpler distance calculation")
            kind = 'euclidean'

    index = np.where(array == value)[0] # Make sure the value isn't in the array
    if index.size == 0:
        if kind == 'cdist': index = np.argmin(cdist([value], array)[0])
        elif kind == 'euclidean': index = np.argmin(np.sum((np.array(array)-np.array(value))**2., axis=1))
        else: raise ValueError("Keyword 'kind' must be one of 'cdist' or 'euclidean'")

    if output == 'index': return index
    elif output == 'value': return array[index]
    elif output == 'both': return index, array[index]
    else: raise ValueError("Keyword 'output' must be one of 'index', 'value', or 'both'")
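A quick usage sketch for this function (the small pts array is my own illustration, not from the original answer):
import numpy as np
pts = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0]])
print(find_nearest_2d(pts, [2.5, 1.5]))                 # index of the closest row
print(find_nearest_2d(pts, [2.5, 1.5], output='both'))  # (index, value)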
For those searching for multiple nearest, modifying the accepted answer:
import numpy as np
def find_nearest(array, value, k):
    array = np.asarray(array)
    idx = np.argsort(abs(array - value))[:k]
    return array[idx]
See:
https://stackoverflow.com/a/66937734/11671779
import numpy as np
def find_nearest(array, value):
    array = np.array(array)
    z = np.abs(array - value)
    y = np.where(z == z.min())
    m = np.array(y)
    x = m[0, 0]
    y = m[1, 0]
    near_value = array[x, y]
    return near_value
array =np.array([[60,200,30],[3,30,50],[20,1,-50],[20,-500,11]])
print(array)
value = 0
print(find_nearest(array, value))
This one handles any number of queries, using numpy searchsorted; after sorting the input arrays, it is just as fast.
It works on regular grids in 2d, 3d ... too:
#!/usr/bin/env python3
# keywords: nearest-neighbor regular-grid python numpy searchsorted Voronoi
import numpy as np

#...............................................................................
class Near_rgrid( object ):
    """ nearest neighbors on a Manhattan aka regular grid
    1d:
    near = Near_rgrid( x: sorted 1d array )
    nearix = near.query( q: 1d ) -> indices of the points x_i nearest each q_i
        x[nearix[0]] is the nearest to q[0]
        x[nearix[1]] is the nearest to q[1] ...
        nearpoints = x[nearix] is near q
    If A is an array of e.g. colors at x[0] x[1] ...,
        A[nearix] are the values near q[0] q[1] ...
    Query points < x[0] snap to x[0], similarly > x[-1].

    2d: on a Manhattan aka regular grid,
        streets running east-west at y_i, avenues north-south at x_j,
    near = Near_rgrid( y, x: sorted 1d arrays, e.g. latitude longitude )
    I, J = near.query( q: nq × 2 array, columns qy qx )
    -> nq × 2 indices of the gridpoints y_i x_j nearest each query point
        gridpoints = np.column_stack(( y[I], x[J] ))  # e.g. street corners
        diff = gridpoints - querypoints
        distances = norm( diff, axis=1, ord= )
    Values at an array A defined at the gridpoints y_i x_j nearest q: A[I,J]

    3d: Near_rgrid( z, y, x: 1d axis arrays ) .query( q: nq × 3 array )

    See Howitworks below, and the plot Voronoi-random-regular-grid.
    """

    def __init__( self, *axes: "1d arrays" ):
        axarrays = []
        for ax in axes:
            axarray = np.asarray( ax ).squeeze()
            assert axarray.ndim == 1, "each axis should be 1d, not %s " % (
                str( axarray.shape ))
            axarrays += [axarray]
        self.midpoints = [_midpoints( ax ) for ax in axarrays]
        self.axes = axarrays
        self.ndim = len(axes)

    def query( self, queries: "nq × dim points" ) -> "nq × dim indices":
        """ -> the indices of the nearest points in the grid """
        queries = np.asarray( queries ).squeeze()  # or list x y z ?
        if self.ndim == 1:
            assert queries.ndim <= 1, queries.shape
            return np.searchsorted( self.midpoints[0], queries )  # scalar, 0d ?
        queries = np.atleast_2d( queries )
        assert queries.shape[1] == self.ndim, [
            queries.shape, self.ndim]
        return [np.searchsorted( mid, q )  # parallel: k axes, k processors
                for mid, q in zip( self.midpoints, queries.T )]

    def snaptogrid( self, queries: "nq × dim points" ):
        """ -> the nearest points in the grid, 2d [[y_j x_i] ...] """
        ix = self.query( queries )
        if self.ndim == 1:
            return self.axes[0][ix]
        else:
            axix = [ax[j] for ax, j in zip( self.axes, ix )]
            return np.array( axix )


def _midpoints( points: "array-like 1d, *must be sorted*" ) -> "1d":
    points = np.asarray( points ).squeeze()
    assert points.ndim == 1, points.shape
    diffs = np.diff( points )
    assert np.nanmin( diffs ) > 0, "the input array must be sorted, not %s " % (
        points.round( 2 ))
    return (points[:-1] + points[1:]) / 2  # floats

#...............................................................................
Howitworks = \
"""
How Near_rgrid works in 1d:
Consider the midpoints halfway between fenceposts | | |
The interval [left midpoint .. | .. right midpoint] is what's nearest each post --

    |   |   |   |   points
    | . | . | . |   midpoints
      ^^^^^^          . nearest points[1]
            ^^^^^^^^^^^^^^^   nearest points[2]  etc.

2d:
    I, J = Near_rgrid( y, x ).query( q )
    I = nearest in `x`
    J = nearest in `y` independently / in parallel.
    The points nearest [yi xj] in a regular grid (its Voronoi cell)
        form a rectangle [left mid x .. right mid x] × [left mid y .. right mid y]
        (in any norm ?)
See the plot Voronoi-random-regular-grid.

Notes
-----
If a query point is exactly halfway between two data points,
    e.g. on a grid of ints, the lines (x + 1/2) U (y + 1/2),
    which "nearest" you get is implementation-dependent, unpredictable.
"""

Murky = \
""" NaNs in points, in queries ?
"""

__version__ = "2021-10-25 oct denis-bz-py"

Full Frequency Array Reconstruction after numpy.fft.rfftn

I have a real valued grayscale 3D image with resolution rows x cols x deps. I take the dft of the image using freq = numpy.fft.rfftn(myImage)
The returned array, freq, is resolution: rows x cols x deps/2 + 1. I want to reconstruct freq as if it were the output of numpy.fft.fftn(myImage), that is, I want the dimensions of freq to be rows x cols x deps.
I know that the correspondence for a real-valued DFT is X_(k1,k2,k3) = X*_(N1-k1,N2-k2,N3-k3), where * denotes the complex conjugate.
I could reconstruct the full freq array using a loop, but that'll be too slow, but I'm having trouble figuring out the correct way of doing it with array slicing.
Thanks!
FYI, I need the full array because I'll be element wise multiplying it with another array of full size rows x cols x deps, I cannot assume that array has any structure (like symmetry) that would make it unnecessary for me to reconstruct the full freq array.
I got it!
import numpy as np
import time
rows = 181
cols = 217
deps = 181
jac_k = np.random.rand(rows, cols, deps)*5
prev = time.time()
fft1 = np.fft.fftn(jac_k)
print(time.time() - prev)
prev = time.time()
fft2 = np.fft.rfftn(jac_k)
if deps % 2 == 0:
    fft2Star = np.conj(fft2[:, :, -2:0:-1])
else:
    fft2Star = np.conj(fft2[:, :, -1:0:-1])
fft2Star[1::, :, :] = fft2Star[:0:-1, :, :]
fft2Star[:, 1::, :] = fft2Star[:, :0:-1, :]
fft2 = np.concatenate((fft2, fft2Star), axis=2)
print(time.time() - prev)
print(np.linalg.norm(fft1 - fft2))
