Suppose I have an array of points,
import numpy as np
pts = np.random.rand(100,3) # 100 points, with X, Y, Z along the second dimension
The naive approach to calculating the distance between each pair of points involves a double for loop and will be excruciatingly slow for large numbers of points:
def euclidean_distance(p1, p2):
    # Square root of the sum of squared coordinate differences
    d = p2 - p1
    return np.sqrt((d**2).sum())

out = np.empty((pts.shape[0], pts.shape[0]))
for idx, point in enumerate(pts):
    for idx2, point_inner in enumerate(pts):
        out[idx, idx2] = euclidean_distance(point, point_inner)
How do I vectorize this calculation?
Take a look at scipy.spatial.distance.cdist. I'm not sure, but I assume scipy has this quite well optimized. If you use the pts array for both inputs, you should get an M x M array with zeros on the diagonal.
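A minimal sketch of that suggestion (my addition), reusing the pts array from the question:

import numpy as np
from scipy.spatial.distance import cdist

pts = np.random.rand(100, 3)
# Pairwise Euclidean distances; the result is 100 x 100 with zeros on the diagonal
out = cdist(pts, pts)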
Related
I am wondering if scipy offers the option to implement a primitive but memory-friendly approach to epsilon neighborhood search:
Compute pairwise similarity for my data, but set all similarities smaller than a threshold epsilon to zero on the fly and then output result directly as sparse matrix.
For example scipy.spatial.distance.pdist() is really fast, but the memory limit is reached early compared to my time limit, at least if I take squareform().
I know there are O(n*log(n)) solutions in this case, but for now it would be enough if the result could be sparse. Also, I would obviously have to use a similarity as opposed to a distance, but that should not be such a big problem, should it?
As long as you can recast your similarity measure in terms of a distance metric (say, 1 minus the similarity), the most efficient solution is to use sklearn's BallTree.
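For example (a sketch of my own, not from the original answer; the radius r stands in for the distance equivalent of your epsilon):

import numpy as np
from sklearn.neighbors import BallTree

X = np.random.random(size=(1000, 10))
tree = BallTree(X)
# For each point: indices and distances of all neighbours within radius r
ind, dist = tree.query_radius(X, r=0.5, return_distance=True)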
Otherwise, you could build your own scipy.sparse.csr_matrix by comparing each point against the other points and throwing away all values smaller than the threshold.
Without knowing your specific similarity metric, this code should roughly do the trick:
import scipy.sparse as spsparse
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def sparse_similarity(X, epsilon=0.99, Y=None, similarity_metric=cosine_similarity):
    '''
    X : ndarray
        An m by n array of m original observations in an n-dimensional space.
    '''
    Nx, Dx = X.shape
    if Y is None:
        Y = X
    Ny, Dy = Y.shape
    assert Dx == Dy

    # Build the CSR arrays row by row, keeping only similarities >= epsilon
    data = []
    indices = []
    indptr = [0]
    for ix in range(Nx):
        xsim = similarity_metric([X[ix]], Y)
        _, kept_points = np.nonzero(xsim >= epsilon)

        data.extend(xsim[0, kept_points])
        indices.extend(kept_points)
        indptr.append(indptr[-1] + len(kept_points))

    return spsparse.csr_matrix((data, indices, indptr), shape=(Nx, Ny))

X = np.random.random(size=(1000, 10))
sparse_similarity(X, epsilon=0.95)
I need to speed up an interpolation over a large (NxMxT) matrix MTR, where:
N is about 8000;
M is about 10000;
T represents the number of times at which each NxM matrix is calculated (in my case it's 23).
I have to compute the interpolation element-wise, over all the T different times, and return the interpolated values over a different array of times (T_interp, in my case with length 47), so as output I want an NxMxT_interp matrix.
The code snippet below defines the function I built for the interpolation, using scipy.interpolate.Rbf (y is the array MTR[i,j,:], x is the times array with length T, x_interp is the new array of times with length T_interp):
#==============================================================================
# Interpolate without nans
#==============================================================================
def interp(x, y, x_interp, **kwargs):
    import numpy as np
    from scipy.interpolate import Rbf

    # Mask out NaNs and keep only the valid (x, y) pairs
    mask = np.isnan(y)
    y_mask = np.ma.array(y, mask=mask)
    x_new = [x[i] for i in np.where(~mask)[0]]

    # No valid values: return all NaNs
    if len(y_mask.compressed()) == 0:
        return [np.nan for i, n in enumerate(x_interp)]
    # A single valid value: repeat it for every output time
    elif len(y_mask.compressed()) == 1:
        return [y_mask.compressed() for i, n in enumerate(x_interp)]

    interp = Rbf(x_new, y_mask.compressed(), **kwargs)
    y_interp = interp(x_interp)
    return y_interp
I tried to achieve my goal either by looping over the NxM elements of the MTR matrix:
new_MTR = np.empty((N, M, T_interp))
for i in range(N):
    for j in range(M):
        new_MTR[i, j, :] = interp(times, MTR[i, j, :], New_times, function='linear')
or by using the np.apply_along_axis function:
new_MTR = np.apply_along_axis(lambda x: interp(times,x,New_times,function = 'linear'),2,MTR)
In both cases I estimated the time it takes to perform the whole operation, and it appears to be slightly better with np.apply_along_axis, but it will still take about 15 hours!
Is there a way to reduce this time? Maybe by vectorizing the entire operation? I don't know much about vectorizing and how it can be done in a situation like mine so any help would be much appreciated. Thank you!
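For the linear case there may be a shortcut (a sketch of my own, not from the thread, and only valid when the series contain no NaNs): scipy.interpolate.interp1d accepts a multidimensional y and interpolates along a chosen axis, so all N*M series can be handled in a single vectorized call.

import numpy as np
from scipy.interpolate import interp1d

# Hypothetical small sizes standing in for N=8000, M=10000, T=23, T_interp=47
N, M, T, T_interp = 40, 50, 23, 47
times = np.linspace(0.0, 1.0, T)
New_times = np.linspace(0.0, 1.0, T_interp)
MTR = np.random.rand(N, M, T)

# interp1d builds one interpolant over the whole array;
# axis=2 means interpolation runs along the time dimension
f = interp1d(times, MTR, axis=2)
new_MTR = f(New_times)  # shape (N, M, T_interp)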
Here is my problem: I manipulate 432×46×136×136 grids (time × space) stored in numpy arrays. I have one array, alt, which holds the altitudes of the grid points, and another array, temp, which stores their temperatures.
It is problematic for a comparison : if T1 and T2 are two results, T1[t0,z0,x0,y0] and T2[t0,z0,x0,y0] represent the temperature at H1[t0,z0,x0,y0] and H2[t0,z0,x0,y0] meters, respectively. But I want to compare the temperature of points at the same altitude, not at the same grid point.
Hence I want to modify the z-axis of my matrices to represent altitude rather than grid level. I create a function conv(alt[t,z,x,y]) which assigns a number between -20 and 200 to each altitude. Here is my code:
def interpolation_extended(self, temp, alt):
    [t, z, x, y] = temp.shape
    new = np.zeros([t, 220, x, y])
    for l in range(0, t):
        for j in range(0, z):
            for lat in range(0, x):
                for lon in range(0, y):
                    new[l, conv(alt[l, j, lat, lon]), lat, lon] = temp[l, j, lat, lon]
    return new
But this takes far too much time; I can't work with it. I tried to rewrite it using numpy universal functions:
def interpolation_extended(self, temp, alt):
    [t, z, x, y] = temp.shape
    new = np.zeros([t, 220, x, y])
    for j in range(0, z):
        new[:, conv(alt[:, j, :, :]), :, :] = temp[:, j, :, :]
    return new
But that does not work. Do you have any idea how to do this in python/numpy without using 4 nested loops?
Thank you
I can't really try the code since I don't have your matrices, but something like this should do the job.
First, instead of declaring conv as a function, get the whole altitude projection for all your data:
conv = np.round(alt / 500.).astype(int)
np.round, numpy's version of round, rounds all the elements of the matrix with vectorized operations in C, so you get the new array very quickly (at C speed). The following line aligns the altitudes to start at 0 by shifting the whole array by its minimum value (in your case, -20):
conv -= conv.min()
The line above transforms your altitude range from [-20, 200] to [0, 220] (better for indexing).
With that, interpolation can be done easily by getting multidimensional indices:
t, z, y, x = np.indices(temp.shape)
The arrays above contain all the indices needed to index your original matrix. You can then create the new matrix by doing:
new_matrix[t, conv[t, z, y, x], y, x] = temp[t, z, y, x]
without any loop at all.
Let me know if it works. It might give you some errors, since it is hard for me to test without data, but it should do the job.
The following toy example works fine:
A = np.random.randn(3,4,5) # Random 3x4x5 matrix -- your temp matrix
B = np.random.randint(0, 10, 3*4*5).reshape(3,4,5) # your conv matrix with altitudes from 0 to 9
C = np.zeros((3,10,5)) # your new matrix
z, y, x = np.indices(A.shape)
C[z, B[z, y, x], x] = A[z, y, x]
C contains your results by altitude.
edit: this question is not specifically about calculating distances, but rather about the most efficient way to loop through a numpy array, specifying that for index i all comparisons should be made with the rest of the array, as long as the second index is not i.
I have a numpy array with columns (X, Y, ID) and want to compare each element to each other element, but not itself. So, for each X, Y coordinate, I want to calculate the distance to each other X, Y coordinate, but not itself (where distance = 0).
Here is what I have - there must be a more "numpy" way to write this.
import math, arcpy

# Point feature class
fc = "MY_FEATURE_CLASS"

# Load points to numpy array: (X, Y, ID)
npArray = arcpy.da.FeatureClassToNumPyArray(fc, ["SHAPE@X", "SHAPE@Y", "OID@"])

for row in npArray:
    for row2 in npArray:
        if row[2] != row2[2]:
            # Pythagoras's theorem
            distance = math.sqrt(math.pow(row[0]-row2[0], 2) + math.pow(row[1]-row2[1], 2))
Obviously, I'm a numpy newbie. I will not be surprised to find this a duplicate, but I don't have the numpy vocabulary to search out the answer. Any help appreciated!
Using SciPy's pdist, you could write something like
from scipy.spatial.distance import pdist, squareform
distances = squareform(pdist(npArray, lambda a,b: np.sqrt((a[0]-b[0])**2 + (a[1]-b[1])**2)))
pdist will compute the pair-wise distances using the custom metric that ignores the 3rd coordinate (which is your ID in this case). squareform turns this into a more readable matrix such that distances[0,1] gives the distance between the 0th and 1st rows.
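As a side note (my addition, not part of the original answer): if the coordinates are available as a plain (n, 2) float array, you can drop the Python lambda and let pdist run its compiled Euclidean metric, which is much faster:

import numpy as np
from scipy.spatial.distance import pdist, squareform

xy = np.random.rand(100, 2)        # hypothetical plain array of the X, Y columns
distances = squareform(pdist(xy))  # the default metric is Euclidean, computed in C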
Each row of X is a 3-dimensional data instance or point. The output pairwisedist[i, j] is the distance between X[i, :] and X[j, :], computed via the identity ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2*xi.xj:
X = np.array([[6,1,7],[10,9,4],[13,9,3],[10,8,15],[14,4,1]])
a = np.sum(X*X, 1)                         # squared norm of each row
b = np.repeat(a[:, np.newaxis], 5, axis=1) # broadcast the squared norms into a 5x5 grid
pairwisedist = np.sqrt(b + b.T - 2*X.dot(X.T))
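As a quick sanity check (my addition), the result should agree with SciPy's built-in cdist:

from scipy.spatial.distance import cdist
assert np.allclose(pairwisedist, cdist(X, X))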
I wanted to point out that custom-written sqrt-of-sum-of-squares code is prone to overflow and underflow. The built-in math.hypot and np.hypot are much safer, with no compromise on performance:
from scipy.spatial.distance import pdist, squareform
# hypot takes the coordinate differences directly, skipping the ID column
distances = squareform(pdist(npArray, lambda a, b: math.hypot(a[0]-b[0], a[1]-b[1])))
I am new to Numpy and I would like to ask how to calculate the euclidean distance between points stored in a vector.
Let's assume that we have a numpy.array where each row is a vector, plus a single point stored as another numpy.array. I would like to calculate the euclidean distance between all the points and this single point and store the results in one numpy.array.
Here is an interface:
points  # 2d list of row-vectors
singlePoint  # one row-vector
listOfDistances = procedure(points, singlePoint)
Can we have something like this?
Or is it possible to compare the single point against the list of other points with one command, and get back a matrix of distances at the end?
Thanks
To get the distance between two points you can use the norm function from numpy's linalg module:
np.linalg.norm(x - y)
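To get the distances to all points at once (my extension of this answer), broadcasting plus the axis argument does it in a single call:

import numpy as np

single_point = np.array([3.0, 4.0])
points = np.arange(20).reshape((10, 2))

# Subtraction broadcasts single_point against every row; axis=1 takes one norm per row
list_of_distances = np.linalg.norm(points - single_point, axis=1)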
While you can use vectorize, @Karl's approach will be rather slow with numpy arrays.
The easier approach is to just do np.hypot(*(points - single_point).T). (The transpose assumes that points is an Nx2 array, rather than 2xN. If it's 2xN, you don't need the .T.)
However, this is a bit unreadable, so you can write it out more explicitly like this (using some canned example data...):
import numpy as np
single_point = [3, 4]
points = np.arange(20).reshape((10,2))
dist = (points - single_point)**2
dist = np.sum(dist, axis=1)
dist = np.sqrt(dist)
import numpy as np

def distance(v1, v2):
    # Square root of the sum of squared coordinate differences
    return np.sqrt(np.sum((v1 - v2) ** 2))
To apply a function to each element of a numpy array, try numpy.vectorize.
To do the actual calculation, we need the square root of the sum of squares of differences (whew!) between pairs of coordinates in the two vectors.
We can use zip to pair the coordinates, and sum with a comprehension to sum up the results. That looks like:
sum((x - y) ** 2 for (x, y) in zip(singlePoint, pointFromArray)) ** 0.5
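For concreteness (my addition, with made-up values), the expression evaluates like this:

singlePoint = [3, 4]
pointFromArray = [7, 1]

# sqrt((3-7)**2 + (4-1)**2) = sqrt(16 + 9) = 5.0
dist = sum((x - y) ** 2 for (x, y) in zip(singlePoint, pointFromArray)) ** 0.5
print(dist)  # 5.0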
import numpy as np

def euclid_dist(t1, t2):
    # Row-wise Euclidean distance: sum the squared differences along each row
    return np.sqrt(((t1 - t2)**2).sum(axis=1))

single_point = [3, 4]
points = np.arange(20).reshape((10, 2))
distance = euclid_dist(single_point, points)