I just started using scipy/numpy. I have a 100000x3 array, where each row is a coordinate, and a 1x3 center point. I want to calculate the distance from each row in the array to the center and store them in another array. What is the most efficient way to do it?
I would take a look at scipy.spatial.distance.cdist:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html
import numpy as np
import scipy.spatial.distance

a = np.random.normal(size=(10, 3))
b = np.random.normal(size=(1, 3))
dist = scipy.spatial.distance.cdist(a, b)  # pick the appropriate distance metric
dist for the default distance metric (Euclidean) is equivalent to:
np.sqrt(np.sum((a-b)**2,axis=1))
although cdist is much more efficient for large arrays (on my machine, for a problem of your size, cdist is faster by a factor of ~35x).
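For reference, here is a sketch at the question's 100000x3 scale that checks the two formulations agree (timings can then be compared with the timeit module):

import numpy as np
import scipy.spatial.distance

a = np.random.normal(size=(100000, 3))
b = np.random.normal(size=(1, 3))

d_cdist = scipy.spatial.distance.cdist(a, b).ravel()  # cdist returns shape (100000, 1)
d_manual = np.sqrt(np.sum((a - b)**2, axis=1))        # broadcasting gives shape (100000,)
assert np.allclose(d_cdist, d_manual)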
I would use the sklearn implementation of the Euclidean distance. Its advantage is that it uses a more efficient expression based on matrix multiplication:
dist(x, y) = sqrt(np.dot(x, x) - 2 * np.dot(x, y) + np.dot(y, y))
A simple script would look like this:
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

x = np.random.rand(1000, 3)
y = np.random.rand(1000, 3)
dist = euclidean_distances(x, y)  # (1000, 1000) matrix of pairwise distances, computed via the dot-product expansion above
The advantage of this approach has been nicely described in the sklearn documentation:
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.euclidean_distances.html#sklearn.metrics.pairwise.euclidean_distances
I am using this approach to crunch large data matrices (10000 x 10000) with some minor modifications, such as using the np.einsum function.
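For illustration, a sketch of the einsum-based variant of that expansion (the array names and sizes here are just placeholders):

import numpy as np

x = np.random.rand(1000, 3)
y = np.random.rand(1000, 3)

# ||x_i - y_j||^2 = ||x_i||^2 - 2*<x_i, y_j> + ||y_j||^2
xx = np.einsum('ij,ij->i', x, x)              # row-wise squared norms of x
yy = np.einsum('ij,ij->i', y, y)              # row-wise squared norms of y
d2 = xx[:, None] - 2 * x @ y.T + yy[None, :]  # (1000, 1000) squared distances
dist = np.sqrt(np.maximum(d2, 0))             # clip tiny round-off negatives before the sqrt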
You can also expand the squared norm (much like a binomial identity). This is probably the most efficient way to compute the distance matrix for a set of points.
Here is a code snippet that I originally used for a k-Nearest-Neighbors implementation, in Octave, but you can easily adapt it to numpy since it only uses matrix multiplications (the equivalent is numpy.dot()):
% Computing the squared Euclidean distance between each known point (Xapp) and unknown point (Xtest)
% Note: we expand the squared norm just like a binomial identity:
% ||x1 - x2||^2 = ||x1||^2 + ||x2||^2 - 2*<x1,x2>
[napp, d] = size(Xapp);
[ntest, d] = size(Xtest);
A = sum(Xapp.^2, 2);
A = repmat(A, 1, ntest);
B = sum(Xtest.^2, 2);
B = repmat(B', napp, 1);
C = Xapp*Xtest';
dist = A+B-2.*C;
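For convenience, here is a possible NumPy adaptation of the snippet above (a sketch; the Xapp/Xtest sample data below is hypothetical, standing in for your known and unknown points):

import numpy as np

# hypothetical sample data standing in for Xapp (known points) and Xtest (unknown points)
Xapp = np.random.rand(5, 3)
Xtest = np.random.rand(4, 3)

A = np.sum(Xapp**2, axis=1)[:, np.newaxis]   # (napp, 1) column of squared norms
B = np.sum(Xtest**2, axis=1)[np.newaxis, :]  # (1, ntest) row of squared norms
C = np.dot(Xapp, Xtest.T)                    # (napp, ntest) inner products
dist2 = A + B - 2 * C                        # squared Euclidean distances via broadcasting
dist = np.sqrt(np.maximum(dist2, 0))         # guard against small negative round-off values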
This might not answer your question directly, but if you are after the distances between all pairs of particles, I've found the following solution to be faster than the pdist function in some cases.
import numpy as np
L = 100 # simulation box dimension
N = 100 # Number of particles
dim = 2 # Dimensions
# Generate random positions of particles
r = (np.random.random(size=(N,dim))-0.5)*L
# uti is a list of two (1-D) numpy arrays
# containing the indices of the upper triangular matrix
uti = np.triu_indices(N, k=1) # k=1 excludes the diagonal indices
# uti[0] holds the i indices and uti[1] the j indices of each unique pair
dr = r[uti[0]] - r[uti[1]] # computes differences between particle positions
D = np.sqrt(np.sum(dr*dr, axis=1)) # computes distances; D is a 1-D array of length N*(N-1)/2 = 4950
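As a quick sanity check (a sketch), pdist walks the pairs in the same i < j order as triu_indices, so the two results should match:

from scipy.spatial.distance import pdist

assert np.allclose(D, pdist(r))  # same pair ordering, same distances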
See my blog post for a more in-depth look at this approach.
You may need to specify in more detail the distance function you are interested in, but here is a very simple (and efficient) implementation of the squared Euclidean distance based on the inner product (which can obviously be generalized, in a straightforward manner, to other kinds of distance measures):
In []: P, c= randn(5, 3), randn(1, 3)
In []: dot(((P- c)** 2), ones(3))
Out[]: array([ 8.80512, 4.61693, 2.6002, 3.3293, 12.41800])
Where P are your points and c is the center.
# Is this the right way to find the largest distance between the points in the plane?
from math import sqrt

n = int(input("enter the number of points: "))
x = list(map(float, input("type x coordinates: ").split()))
y = list(map(float, input("type y coordinates: ").split()))
maxdis = 0
for i in range(n):
    for j in range(n):
        print(i, j, x[i], x[j], y[i], y[j])
        dist = sqrt((x[j]-x[i])**2 + (y[j]-y[i])**2)
        if maxdis < dist:
            maxdis = dist
print("maximum distance is : {:5g}".format(maxdis))
I want to fit a plane to a set of points (x, y, z) in Python. I found various answers on how to perform the fitting if the error is measured with respect to the z-axis, but I want to consider errors in the orthogonal direction. I found the following question (Best fit plane by minimizing orthogonal distances) which addresses the same problem, but it's not clear to me how to implement this in Python (likely with NumPy/SciPy). Further details regarding the mathematical derivation can also be found here: http://www.ncorr.com/download/publications/eberlyleastsquares.pdf (section 2).
The first link you gave does describe the algorithm for orthogonal distance fitting, but rather tersely. Here, in case it helps, is a more prolix description:
I suppose you have points (in your case 3d, but the dimension makes no difference to the algorithm) P[i], i=1..N.
You want to find a (hyper-)plane that is of minimal orthogonal distance from your points.
A hyper-plane can be described by a unit vector n and a scalar d. The set of points on the plane is
{ P | n.P + d = 0 }
and the (orthogonal) distance of a point P from the plane is
n.P + d
So we want to find n and d to minimise
Q(n,d) = Sum{ i | (n.P[i]+d)*(n.P[i]+d) } /N
(The division by N isn't essential, and makes no difference to the values of n and d that are found, but to my mind makes the algebra neater)
The first thing to notice is that if we knew n, the d that minimises Q will be
d = -n.Pbar where
Pbar = Sum{ i | P[i]}/N, the mean of the P[]
We may as well use this value of d, so that, after a little algebra the problem reduces to minimising Q^:
Q^(n) = Sum{ i | (n.P[i]-n.Pbar)*(n.P[i]-n.Pbar) } /N
= n' * C * n
where
C = Sum{ i | (P[i]-Pbar)*(P[i]-Pbar) } /N
The form of Q^ tells us that the value of n to minimise Q^ will be an eigenvector of C corresponding to a minimal eigenvalue.
So (sorry I can't give code but my python is contemptible):
a/ compute
Pbar = Sum{ i | P[i]}/N, the mean of the points
b/ compute
C = Sum{ i | (P[i]-Pbar)*(P[i]-Pbar) } /N, the covariance matrix of the points
c/ diagonalise C, and pick out a minimal eigenvalue and the corresponding eigenvector n
d/ compute
d = -Pbar.n
Then n, d define the hyperplane you want.
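Since the answer leaves out code, here is a minimal NumPy sketch of steps a/ to d/ (note that np.cov divides by N-1 rather than N, which rescales the eigenvalues but leaves the eigenvectors unchanged):

import numpy as np

def fit_plane_orthogonal(P):
    # a/ mean of the points
    Pbar = P.mean(axis=0)
    # b/ covariance matrix of the points
    C = np.cov(P - Pbar, rowvar=False)
    # c/ eigenvector corresponding to a minimal eigenvalue (eigh: C is symmetric)
    evals, evecs = np.linalg.eigh(C)
    n = evecs[:, np.argmin(evals)]
    # d/ offset
    d = -Pbar.dot(n)
    return n, d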
I've also had to deal with this situation and at first the mathematical notation can be overwhelming, but in the end the solution is fairly simple.
Once you get the intuition that the vector (A,B,C) that defines the best fitting plane Ax+By+Cz+D=0 is the one that explains the minimum variance of your set of coordinates, then the solution is straightforward.
First thing to do is center your coordinates (this way D will be 0 in your plane equation)
coords -= coords.mean(axis=0)
Then you have 2 options to get the vector you are interested in: (1) use the PCA implementation from sklearn or scipy to get the vector that explains minimal variance
from sklearn.decomposition import PCA

pca = PCA(n_components=3)
pca.fit(coords)
# The last component/vector is the one with minimal explained variance, see the PCA documentation
normal_vector = pca.components_[-1]
(2) re-implement the procedure described in the Geometric Tool reference you've linked.
import numpy as np
# from numba import njit  # optional: decorate the function with @njit for extra speed

def get_best_fitting_plane_vector(coords):
    # Calculate the covariance matrix of the coordinates
    covariance_matrix = np.cov(coords, rowvar=False)  # Variables = columns
    # Calculate the eigenvalues & eigenvectors of the covariance matrix
    e_val, e_vect = np.linalg.eig(covariance_matrix)
    # The normal vector to the plane is the eigenvector associated with the minimum eigenvalue
    min_eval = np.argmin(e_val)
    normal_vector = e_vect[:, min_eval]
    return normal_vector
In terms of speed, the re-implemented procedure is faster than using PCA, and can be a lot faster if you use numba (just decorate the function with @njit).
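A hypothetical usage sketch (the plane and noise level below are made up for illustration): generate noisy points near a known plane, get the normal from the function above, and recover the offset D from the centroid:

import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(500, 2))
z = xy[:, 0] + 2 * xy[:, 1] + 3 + rng.normal(scale=0.05, size=500)  # points near the plane x + 2y + 3 = z
coords = np.column_stack([xy, z])

centroid = coords.mean(axis=0)
normal = get_best_fitting_plane_vector(coords - centroid)  # unit normal (A, B, C), up to sign
D = -normal.dot(centroid)                                  # offset for the original, uncentered coordinates
print(normal, D)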
Based on your second reference:
[image from the reference: the three-term matrix equation for the least-squares plane fit]
Say you have n samples (x,y,z)
I'll call the 3 terms M*A=V, and define the column arrays
X=[ x_0, x_1 .. x_n ]'
Y=[ y_0, y_1 .. y_n ]'
Z=[ z_0, z_1 .. z_n ]'
Define the (n by 3) matrix XY1 = [X, Y, 1_n], where 1_n is a column of ones:
XY1 = [[x_0, y_0, 1],
       [x_1, y_1, 1],
        ...
       [x_n, y_n, 1]]
The matrix M can be obtained as
M = XY1' * XY1
Where apostrophe (') is the transposition operator and (*) the matrix product.
And the array V is
V = XY1'*Z
The least squares solution can be obtained through the Moore-Penrose pseudoinverse of M: [(M'*M)^-1 * M']
~A = [(M'*M)^-1 * M'] * V
Sample code:
import numpy as np
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
#Input your values
A=3
B=2
C=1
#Make random data, n (x,y) tuples.
n=30 #samples
#reserve memory
xy1=np.ones([n,3])
xy1[:,:2]=np.random.rand(n,2)
#plane: A*x+B*y+C = z , the z coord is calculated from random x,y
z=xy1.dot (np.array([[A,B,C],]).transpose() )
#addnoise
xy1[:,:2]+=np.random.normal(scale=0.05,size=[n,2])
z+=np.random.normal(scale=0.05,size=[n,1])
#calculate M and V
M=xy1.transpose().dot(xy1)
V=xy1.transpose().dot(z)
#pseudoinverse:
Mp=np.linalg.inv(M.transpose().dot(M)).dot(M.transpose())
#Least-squares Solution
ABC= Mp.dot(V)
Output
In [24]: ABC
Out[24]:
array([[3.11395111],
       [2.02909874],
       [1.01340411]])
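As a side note, the same least-squares solution can be obtained in one call (a sketch reusing the variables from the script above; ABC2 is just a placeholder name):

# np.linalg.lstsq solves XY1 * A = Z directly, without forming M and V explicitly
ABC2, *_ = np.linalg.lstsq(xy1, z, rcond=None)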
I would like to compute the Earth Mover Distance between two 2D arrays (these are not images).
Right now I go through two libraries: scipy (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html) and pyemd (https://pypi.org/project/pyemd/).
import numpy as np
import pyemd

#define a sampling method
def sampeling2D(n, mu1, std1, mu2, std2):
    #sample from N(0, 1) in the 2D hyperspace
    x = np.random.randn(n, 2)
    #scale N(0, 1) -> N(mu, std)
    x[:,0] = (x[:,0]*std1) + mu1
    x[:,1] = (x[:,1]*std2) + mu2
    return x

#generate two sets
Y1 = sampeling2D(1000, 0, 1, 0, 1)
Y2 = sampeling2D(1000, -1, 1, -1, 1)

#compute the distance
distance = pyemd.emd_samples(Y1, Y2)
The scipy version doesn't accept 2D arrays and returns an error, while the pyemd method returns a value. However, the pyemd documentation says it accepts only 1D arrays, so I think that output is wrong. How can I calculate this distance in this case?
So if I understand you correctly, you're trying to transport the sampling distribution, i.e. calculate the distance for a setup where all clusters have weight 1. In general, you can treat the calculation of the EMD as an instance of minimum cost flow, and in your case this boils down to the linear assignment problem: your two arrays are the partitions in a bipartite graph, and the weights between two vertices are your distance of choice. Assuming that you want to use the Euclidean norm as your metric, the weights of the edges, i.e. the ground distances, may be obtained using scipy.spatial.distance.cdist, and in fact SciPy also provides a solver for the linear sum assignment problem in scipy.optimize.linear_sum_assignment (which recently saw huge performance improvements, available in SciPy 1.4; this could be of interest to you should you run into performance problems, as the 1.3 implementation is a bit slow for 1000x1000 inputs).
In other words, what you want to do boils down to
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

n = Y1.shape[0]  # 1000 samples in each set
d = cdist(Y1, Y2)
assignment = linear_sum_assignment(d)
print(d[assignment].sum() / n)
It is also possible to use scipy.sparse.csgraph.min_weight_full_bipartite_matching as a drop-in replacement for linear_sum_assignment; while made for sparse inputs (which yours certainly isn't), it might provide performance improvements in some situations.
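A sketch of that drop-in, assuming SciPy >= 1.6 (note that explicit zeros in the sparse matrix are treated as missing edges, which is not an issue for cdist of distinct random points):

from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import min_weight_full_bipartite_matching

row_ind, col_ind = min_weight_full_bipartite_matching(csr_matrix(d))
print(d[row_ind, col_ind].sum() / n)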
It might be instructive to verify that the result of this calculation matches what you would get from a minimum cost flow solver; one such solver is available in NetworkX, where we can construct the graph by hand:
import networkx as nx
G = nx.DiGraph()
# Represent elements in Y1 by 0, ..., 999, and elements in
# Y2 by 1000, ..., 1999.
for i in range(n):
    G.add_node(i, demand=-1)
    G.add_node(n + i, demand=1)
for i in range(n):
    for j in range(n):
        G.add_edge(i, n + j, capacity=1, weight=d[i, j])
At this point, we can verify that the approach above agrees with the minimum cost flow:
In [16]: d[assignment].sum() == nx.algorithms.min_cost_flow_cost(G)
Out[16]: True
Similarly, it's instructive to see that the result agrees with scipy.stats.wasserstein_distance for 1-dimensional inputs:
from scipy.stats import wasserstein_distance
np.random.seed(0)
n = 100
Y1 = np.random.randn(n)
Y2 = np.random.randn(n) - 2
d = np.abs(Y1 - Y2.reshape((n, 1)))
assignment = linear_sum_assignment(d)
print(d[assignment].sum() / n) # 1.9777950447866477
print(wasserstein_distance(Y1, Y2)) # 1.977795044786648
I need to calculate the spatial three-point correlation function P(r1, r2) for a set of points in 2D space. Right now I just go through all triplets of points, calculate the distances r1 and r2 between pairs of points within a triplet, and then plot a 2D histogram, which gives me the desired three-point correlation. This, however, takes a lot of time even for a moderate number of points.
The question is whether it is possible to speed up this calculation?
import numpy
import pylab
import scipy.spatial.distance

r1 = []
r2 = []
points = [[numpy.random.random(1), numpy.random.random(1)] for i in range(100)]
for x in points:
    for y in points:
        for z in points:
            r1.append(scipy.spatial.distance.euclidean(x, y))
            r2.append(scipy.spatial.distance.euclidean(x, z))
pylab.figure()
n, xedges, yedges = numpy.histogram2d(r1, r2, bins=10)
pylab.imshow(n, interpolation='nearest', origin='lower',
             extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
A few things that will make this faster:
First, with your array of points, instead of a nested list of length-1 ndarrays, you can just make an Nx2 ndarray:
points = np.random.random((N, 2))
Next, you end up computing each pairwise distance many times; you should compute each distance once and then loop over the elements of the array. SciPy can do this calculation for you using scipy.spatial.distance.pdist. To recover a full pairwise distance matrix, use scipy.spatial.distance.squareform. Explicitly looping over the elements of the matrix:
from scipy.spatial.distance import pdist, squareform

r1 = []
r2 = []
d = squareform(pdist(points))
for i in range(N):
    for j in range(N):
        for k in range(N):
            r1.append(d[i,j])
            r2.append(d[i,k])
Finally, you can make it simpler still by noting the relationship between the pairwise distance matrix d and the outputs r1 and r2 (which you can do by inspection for some small N, like 3). Using some array manipulation, you can recover the same arrays using:
d = squareform(pdist(points))
r1 = np.tile(d.reshape((N**2, 1)), N).flatten()
r2 = np.tile(d, N).flatten()
Using only a moderate N=20, on my slow computer, while your code runs in 1.15 s, the method of explicitly looping takes 12.7 ms, and the array manipulation shortcut takes 362 µs, and the latter two should scale better than the original code.
for x in points:
    for y in points:
        r1.append(scipy.spatial.distance.euclidean(x, y))
    for z in points:
        r2.append(scipy.spatial.distance.euclidean(x, z))
will reduce complexity from cubic to quadratic. I can't advise more since I don't know the problem area.
I have 2 sets of 2D points (A and B); each set has about 540 points. I need to find the points in set B that are farther than a defined distance alpha from all the points in A.
I have a solution, but it is not fast enough:
# find the closest point of each of the new points to the target set
def find_closest_point(self, A, B):
    outliers = []
    for i in range(len(B)):
        # find all the euclidean distances
        temp = distance.cdist([B[i]], A)
        minimum = numpy.min(temp)
        # if the point is too far away from the rest, it is considered an outlier
        if minimum > self.alpha:
            outliers.append([i, B[i]])
        else:
            continue
    return outliers
I am using python 2.7 with numpy and scipy. Is there another way to do this that I may gain a considerable increase in speed?
Thanks in advance for the answers
>>> import numpy as np
>>> from scipy.spatial.distance import cdist
>>> A = np.random.randn(540, 2)
>>> B = np.random.randn(540, 2)
>>> alpha = 1.
>>> ind = np.all(cdist(A, B) > alpha, axis=0)
>>> outliers = B[ind]
gives you the points you want.
If you have a very large set of points you could compute the x and y bounds of A, add and subtract alpha, and then eliminate from specific consideration all the points in B that lie outside of that boundary.
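A sketch of that idea, assuming A and B are (n, 2) NumPy arrays and alpha is the threshold:

import numpy as np

xmin, ymin = A.min(axis=0) - alpha
xmax, ymax = A.max(axis=0) + alpha
# points of B outside the expanded bounding box cannot be within alpha of any point in A
outside = (B[:, 0] < xmin) | (B[:, 0] > xmax) | (B[:, 1] < ymin) | (B[:, 1] > ymax)
definite_outliers = B[outside]
# only the remaining candidates need the full distance check
candidates = B[~outside]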
I have a collection of n dimensional points and I want to find which 2 are the closest. The best I could come up with for 2 dimensions is:
from numpy import *
myArr = array( [[1, 2],
                [3, 4],
                [5, 6],
                [7, 8]] )
n = myArr.shape[0]
cross = [[sum( ( myArr[i] - myArr[j] ) ** 2 ), i, j]
         for i in xrange( n )
         for j in xrange( n )
         if i != j
         ]
print min( cross )
which gives
[8, 0, 1]
But this is too slow for large arrays. What kind of optimisation can I apply to it?
RELATED:
Euclidean distance between points in two different Numpy arrays, not within
Try scipy.spatial.distance.pdist(myArr). This will give you a condensed distance matrix. You can use argmin on it and find the index of the smallest value. This can be converted into the pair information.
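A minimal sketch of that recipe, using myArr from the question (pdist enumerates pairs in the same i < j order as np.triu_indices, which gives the index-to-pair conversion):

import numpy as np
from scipy.spatial.distance import pdist

d = pdist(myArr)                          # condensed distances, length n*(n-1)/2
k = d.argmin()                            # position of the smallest pairwise distance
i, j = np.triu_indices(len(myArr), k=1)   # maps condensed index -> (row, col) pair
print(d[k], (i[k], j[k]))                 # smallest distance and the pair of point indices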
There's a whole Wikipedia page on just this problem, see: http://en.wikipedia.org/wiki/Closest_pair_of_points
Executive summary: you can achieve O(n log n) with a recursive divide and conquer algorithm (outlined on the Wiki page, above).
You could take advantage of the latest version of SciPy's (v0.9) Delaunay triangulation tools. You can be sure that the closest two points will be an edge of a simplex in the triangulation, which is a much smaller subset of pairs than doing every combination.
Here's the code (updated for general N-D):
import numpy
from scipy import spatial
def closest_pts(pts):
    # set up the triangulation
    # let Delaunay do the heavy lifting
    mesh = spatial.Delaunay(pts)

    # TODO: eliminate redundant edges (numpy.unique?)
    edges = numpy.vstack((mesh.vertices[:,:dim], mesh.vertices[:,-dim:]))

    # the rest is easy
    x = mesh.points[edges[:,0]]
    y = mesh.points[edges[:,1]]
    dists = numpy.sum((x-y)**2, 1)
    idx = numpy.argmin(dists)
    return edges[idx]
    #print 'distance: ', dists[idx]
    #print 'coords:\n', pts[closest_verts]
dim = 3
N = 1000*dim
pts = numpy.random.random(N).reshape(N/dim, dim)
Empirically, the runtime seems to scale close to O(n).
There is a scipy function pdist that will get you the pairwise distances between points in an array in a fairly efficient manner:
http://docs.scipy.org/doc/scipy/reference/spatial.distance.html
It outputs the N*(N-1)/2 unique pairwise distances (since r_ij == r_ji). You can then search for the minimum value and avoid the whole loop mess in your code.
Perhaps you could proceed along these lines:
In []: from scipy.spatial.distance import pdist as pd, squareform as sf
In []: m= 1234
In []: n= 123
In []: p= randn(m, n)
In []: d= sf(pd(p))
In []: a= arange(m)
In []: d[a, a]= d.max()
In []: where(d< d.min()+ 1e-9)
Out[]: (array([701, 730]), array([730, 701]))
With substantially more points you need to be able to somehow utilize the hierarchical structure of your clustering.
How fast is it compared to just doing a nested loop and keeping track of the shortest pair? I think creating a huge cross array is what might be hurting you. Even O(n^2) is still pretty quick if you're only doing 2 dimensional points.
The accepted answer is OK for small datasets, but its execution time scales as n**2. However, as pointed out by @payne, an optimal solution can achieve n*log(n) computation time scaling.
This optimal solution can be obtained using sklearn.neighbors.BallTree as follows.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.neighbors import BallTree as tree
n = 10
dim = 2
xy = np.random.uniform(size=[n, dim])
# This solution is optimal when xy is very large
res = tree(xy)
dist, ids = res.query(xy, 2)
mindist = dist[:, 1] # second nearest neighbour
minid = np.argmin(mindist)
plt.plot(*xy.T, 'o')
plt.plot(*xy[ids[minid]].T, '-o')
This procedure scales well for very large sets of xy values and even for large dimensions dim (although the example illustrates the case dim=2). The resulting plot shows all the points, with the closest pair connected.
An identical solution can be obtained using scipy.spatial.cKDTree, by replacing the sklearn import with the following SciPy one. Note, however, that cKDTree, unlike BallTree, does not scale well for high dimensions:
from scipy.spatial import cKDTree as tree