I have a set of points in 2-dimensional space and need to calculate the distance from each point to each other point.
I have a relatively small number of points, maybe at most 100. But since I need to do it often and rapidly in order to determine the relationships between these moving points, and since I'm aware that iterating through the points could be as bad as O(n^2) complexity, I'm looking for ways to take advantage of numpy's matrix magic (or scipy).
As it stands in my code, the coordinates of each object are stored in its class. However, I could also update them in a numpy array when I update the class coordinate.
class Cell(object):
    """Represents one object in the field."""
    def __init__(self, id, x=0, y=0):
        self.m_id = id
        self.m_x = x
        self.m_y = y
It occurs to me to create a Euclidean distance matrix to prevent duplication, but perhaps you have a cleverer data structure.
I'm open to pointers to nifty algorithms as well.
Also, I note that there are similar questions dealing with Euclidean distance and numpy but didn't find any that directly address this question of efficiently populating a full distance matrix.
You can take advantage of the complex type:
# build a complex array of your cells
z = np.array([complex(c.m_x, c.m_y) for c in cells])
First solution
# mesh this array so that you will have all combinations
m, n = np.meshgrid(z, z)
# get the distance via the norm
out = abs(m-n)
Second solution
Meshing is the main idea. But numpy is clever, so you don't have to generate m & n. Just compute the difference using a transposed version of z; the mesh is done automatically:
out = abs(z[..., np.newaxis] - z)
Third solution
And if z is directly set as a 2-dimensional array, you can use z.T instead of the weird z[..., np.newaxis]. So finally, your code will look like this:
z = np.array([[complex(c.m_x, c.m_y) for c in cells]]) # notice the [[ ... ]]
out = abs(z.T-z)
Example
>>> z = np.array([[0.+0.j, 2.+1.j, -1.+4.j]])
>>> abs(z.T-z)
array([[ 0.        ,  2.23606798,  4.12310563],
       [ 2.23606798,  0.        ,  4.24264069],
       [ 4.12310563,  4.24264069,  0.        ]])
As a complement, you may want to remove duplicates afterwards by taking the upper triangle:
>>> np.triu(out)
array([[ 0.        ,  2.23606798,  4.12310563],
       [ 0.        ,  0.        ,  4.24264069],
       [ 0.        ,  0.        ,  0.        ]])
Some benchmarks
>>> timeit.timeit('abs(z.T-z)', setup='import numpy as np;z = np.array([[0.+0.j, 2.+1.j, -1.+4.j]])')
4.645645342274779
>>> timeit.timeit('abs(z[..., np.newaxis] - z)', setup='import numpy as np;z = np.array([0.+0.j, 2.+1.j, -1.+4.j])')
5.049334864854522
>>> timeit.timeit('m, n = np.meshgrid(z, z); abs(m-n)', setup='import numpy as np;z = np.array([0.+0.j, 2.+1.j, -1.+4.j])')
22.489568296184686
If you don't need the full distance matrix, you will be better off using a kd-tree. Consider scipy.spatial.cKDTree or sklearn.neighbors.KDTree. A kd-tree can find the k nearest neighbors of every point in O(n log n) time, so you avoid the O(n**2) cost of computing all n by n distances.
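A minimal sketch of the kd-tree route, assuming the same list of Cell objects as in the question (the values of k and max_distance are made-up parameters):

import numpy as np
from scipy.spatial import cKDTree

# (n, 2) array of coordinates built from the cells
pts = np.array([(c.m_x, c.m_y) for c in cells])
tree = cKDTree(pts)

# distances and indices of the 5 nearest neighbours of every point
# (k=6 because each point is its own nearest neighbour at distance 0)
dists, idx = tree.query(pts, k=6)

# or: all pairs closer than some radius, as a sparse distance matrix
sparse_d = tree.sparse_distance_matrix(tree, max_distance=2.0)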
Jake VanderPlas gives this example using broadcasting in the Python Data Science Handbook, which is very similar to what @shx2 proposed.
import numpy as np
rand = np.random.RandomState(42)
X = rand.rand(3, 2)
dist_sq = np.sum((X[:, np.newaxis, :] - X[np.newaxis, :, :]) ** 2, axis=-1)
dist_sq
array([[0.        , 0.18543317, 0.81602495],
       [0.18543317, 0.        , 0.22819282],
       [0.81602495, 0.22819282, 0.        ]])
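Note that dist_sq holds squared distances; if you want the actual Euclidean distances, a trivial follow-up (not part of the book's snippet) is to take the square root:

dist = np.sqrt(dist_sq)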
Here is how you can do it using numpy:
import numpy as np
x = np.array([0,1,2])
y = np.array([2,4,6])
# take advantage of broadcasting, to make a 2dim array of diffs
dx = x[..., np.newaxis] - x[np.newaxis, ...]
dy = y[..., np.newaxis] - y[np.newaxis, ...]
dx
=> array([[ 0, -1, -2],
          [ 1,  0, -1],
          [ 2,  1,  0]])
# stack in one array, to speed up calculations
d = np.array([dx,dy])
d.shape
=> (2, 3, 3)
Now all that is left is computing the L2-norm along the 0-axis (as discussed here):
(d**2).sum(axis=0)**0.5
=> array([[ 0.        ,  2.23606798,  4.47213595],
          [ 2.23606798,  0.        ,  2.23606798],
          [ 4.47213595,  2.23606798,  0.        ]])
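As a side note (my addition, not part of the answer above), the same norm can also be taken with ready-made NumPy helpers:

# elementwise hypotenuse of the two difference arrays
dist = np.hypot(dx, dy)

# or the L2-norm of the stacked array along axis 0
dist = np.linalg.norm(d, axis=0)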
If you are looking for the most efficient way to compute this, use SciPy's cdist() (or pdist(), if you just need the vector of pairwise distances instead of the full distance matrix), as suggested in Tweakimp's comment. As he said, it's a lot faster than the vectorization-and-broadcasting methods proposed by RichPauloo and shx2, because under the hood cdist() and pdist() use compiled C loops for the metric computations, which are even faster than vectorization.
By the way, if you can use SciPy and still prefer the broadcasting approach, you don't have to implement it yourself: the distance_matrix() function is a pure Python implementation that leverages broadcasting and vectorization (source code, docs).
It's worth mentioning that cdist()/pdist() is also more memory-efficient than broadcasting, as it computes the distances one by one and avoids creating arrays of n*n*d elements, where n is the number of points and d is their dimensionality.
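A minimal sketch of the cdist()/pdist() route on the same kind of (n, 2) coordinate array (the array contents here are made up):

import numpy as np
from scipy.spatial.distance import cdist, pdist, squareform

pts = np.array([[0.0, 0.0], [2.0, 1.0], [-1.0, 4.0]])

# full n x n distance matrix
full = cdist(pts, pts)

# condensed vector of the n*(n-1)/2 unique pairwise distances
condensed = pdist(pts)

# expand the condensed form back into a square matrix if needed
full_again = squareform(condensed)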
Experiments
I've conducted some simple experiments to compare the performance of SciPy's cdist(), distance_matrix() and a broadcasting implementation in NumPy. I used perf_counter_ns() from Python's time module to measure time, and all results are averaged over 10 runs on 10000 points in 2D space using the np.float64 datatype (tested on Python 3.8.10, Windows 10 with a Ryzen 2700 and 16 GB RAM):
cdist() - 0.6724s
distance_matrix() - 3.0128s
my NumPy implementation - 3.6931s
Code if someone wants to reproduce experiments:
from scipy.spatial import *
import numpy as np
from time import perf_counter_ns

def dist_mat_custom(a, b):
    return np.sqrt(np.sum(np.square(a[:, np.newaxis, :] - b[np.newaxis, :, :]), axis=-1))

results = []
size = 10000
it_num = 10
for i in range(it_num):
    a = np.random.normal(size=(size, 2))
    b = np.random.normal(size=(size, 2))
    start = perf_counter_ns()
    c = distance_matrix(a, b)
    #c = dist_mat_custom(a, b)
    #c = distance.cdist(a, b)
    results.append(perf_counter_ns() - start)
print(np.mean(results) / 1e9)
I have a numpy array of complex numbers. I seek to clip the real and imaginary parts of each number in the array to some prescribed minimum and maximum (same clipping applied to both the real and imaginary parts). For example, consider:
import numpy as np
clip_min = -4
clip_max = 3
x = np.array([-1.4 + 5j, -4.7 - 3j])
The desired output of the clipping operation would be:
[-1.4 + 3j, -4-3j]
I can achieve this by calling np.clip on the real and imaginary parts of the complex array and then adding them (after multiplying the imaginary clipped data by 1j). Is there a way to do this with one command?
np.clip(x, clip_min, clip_max)
doesn't yield the desired result.
There is a slightly more efficient way than clipping the real and imaginary parts as separate arrays, using in-place operations:
np.clip(x.real, clip_min, clip_max, out=x.real)
np.clip(x.imag, clip_min, clip_max, out=x.imag)
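For example, applied to the arrays from the question (a quick sanity check, not part of the original answer):

import numpy as np

clip_min, clip_max = -4, 3
x = np.array([-1.4 + 5j, -4.7 - 3j])

# x.real and x.imag are writable views into x, so the clipping happens in place
np.clip(x.real, clip_min, clip_max, out=x.real)
np.clip(x.imag, clip_min, clip_max, out=x.imag)

print(x)  # gives the desired [-1.4+3j, -4-3j]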
If these are just cartesian coordinates stored in complex numbers, you could clip them in a single command by keeping them as floats rather than complex.
x = np.array([[-1.4,  5. ],
              [-4.7, -3. ]])
np.clip(x, clip_min, clip_max)
>>> array([[-1.4,  3. ],
           [-4. , -3. ]])
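If the data does start out as a complex array, a minimal sketch (my own addition) of moving to that float layout and back:

import numpy as np

clip_min, clip_max = -4, 3
z = np.array([-1.4 + 5j, -4.7 - 3j])

# (n, 2) float array holding the real and imaginary parts side by side
xy = np.column_stack((z.real, z.imag))
xy = np.clip(xy, clip_min, clip_max)

# recombine into a complex array if needed
z_clipped = xy[:, 0] + 1j * xy[:, 1]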
As the title says, I'm trying to get a Markov Clustering Algorithm to work in Python, namely Python 3.7.
Unfortunately, it's not doing much of anything, and it's driving me up the wall trying to fix it.
EDIT: First, I've made the adjustments to the main code to make each column sum to 1, even if it's not perfectly balanced. I'm going to try to account for that in the final answer.
To be clear, the biggest problem is that the numbers spiral out of control, into such easily-understandable numbers as 5.56268465e-309, and I don't know how to convert that into something understandable.
Here's the code so far:
import numpy as np
import math

## How far you'd like your random-walkers to go (bigger number -> more walking)
EXPANSION_POWER = 2
## How tightly clustered you'd like your final picture to be (bigger number -> more clusters)
INFLATION_POWER = 2

ITERATION_COUNT = 10

def normalize(matrix):
    return matrix/np.sum(matrix, axis=0)

def expand(matrix, power):
    return np.linalg.matrix_power(matrix, power)

def inflate(matrix, power):
    for entry in np.nditer(transition_matrix, op_flags=['readwrite']):
        entry[...] = math.pow(entry, power)
    return matrix

def run(matrix):
    #np.fill_diagonal(matrix, 1)
    #print(matrix)
    matrix = normalize(matrix)
    print(matrix)
    for _ in range(ITERATION_COUNT):
        matrix = normalize(inflate(expand(matrix, EXPANSION_POWER), INFLATION_POWER))
    return matrix
transition_matrix = np.array([[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0.5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0.5,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0.34,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0.33,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0.33,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0.34,0,0,0,0,0,0,0,0,0,0,0,0,0.125,0],
[0,0,0,0.33,0,0,0.5,0,0,0,0,0,0,0,0,0,0.125,1],
[0,0,0,0.33,0,0,0.5,1,1,0,0,0,0,0,0,0,0.125,0],
[0,0,0,0,0.166,0,0,0,0,0,0,0,0,0,0,0,0.125,0],
[0,0,0,0,0.166,0,0,0,0,0.2,0,0,0,0,0,0,0.125,0],
[0,0,0,0,0.167,0,0,0,0,0.2,0.25,0,0,0,0,0,0.125,0],
[0,0,0,0,0.167,0,0,0,0,0.2,0.25,0.5,0,0,0,0,0,0],
[0,0,0,0,0.167,0,0,0,0,0.2,0.25,0.5,0,1,0,0,0.125,0],
[0,0,0,0,0.167,0,0,0,0,0.2,0.25,0,1,0,1,0,0.125,0],
[0,0,0,0,0,0.34,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0.33,0,0,0,0,0,0,0,0,0,0.5,0,0],
[0,0,0,0,0,0.33,0,0,0,0,0,0,0,0,0,0.5,0,0]])
run(transition_matrix)
print(transition_matrix)
This is part of a uni assignment - I need to do this for the array both weighted and unweighted (though the weighted part can just wait until I've got the bloody thing working at all). Any tips or suggestions?
Your transition matrix is not valid.
>>> transition_matrix.sum(axis=0)
matrix([[1.  , 1.  , 0.99, 0.99, 0.96, 0.99, 1.  , 1.  , 0.  , 1.  ,
         1.  , 1.  , 1.  , 0.  , 0.  , 1.  , 0.88, 1.  ]])
Not only do some of your columns not sum to 1, some of them sum to 0.
This means when you try to normalize your matrix, you will end up with nan because you are dividing by 0.
Lastly, is there a reason why you are using a Numpy matrix instead of just a Numpy array, which is the recommended container for such data? Using Numpy arrays will simplify some of the operations, such as raising each entry to a power, and there are some differences between Numpy matrix and Numpy array which can result in subtle bugs.
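As a sketch of one way to avoid the division by zero (building on the np.fill_diagonal call that is commented out in run(); adding self-loops like this is a common MCL preprocessing step, but treat the exact fix as an assumption):

import numpy as np

def normalize_safe(matrix):
    # add self-loops to every node (standard in MCL);
    # this also guarantees that no column sums to 0 before dividing
    np.fill_diagonal(matrix, 1)
    return matrix / matrix.sum(axis=0)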
I have a numpy array of floats which, when plotted, looks like this: the red circles are the original values, the blue crosses are a linear interpolation using numpy.interp.
I would like to find the abscissa of the zero crossing of this numpy array (red circle) using scipy.optimize.bisect (for example). Since this is a numpy array (and not a function) I cannot pass it directly to scipy.optimize.bisect, so I was thinking of passing bisect a function that interpolates the numpy array. Here is the code I am using for the moment:
def Inter_F(x, xp, fp):
    return np.interp(x, xp, fp)

Numpyroot = scp.optimize.bisect(Inter_F, 0, 9, args=(XNumpy, YNumpy))
I find a value that seems correct, Numpyroot = 3.376425289196618.
I am wondering:
- whether this is the correct technical way to use scipy.optimize.bisect on a numpy array, especially since I am going to do this 10^6 times on different sets of numpy values;
- whether enforcing a linear interpolation influences the results that bisect is going to find and, if it does, whether there is a better choice.
Here are the two numpy arrays:
XNumpy = array([ 0. , 1.125, 2.25 , 3.375, 4.5 , 5.625, 6.75 , 7.875, 9. ])
YNumpy = array([ -2.70584242e+04, -2.46925289e+04, -1.53211676e+04,
-2.30000000e+01, 1.81312104e+04, 3.41662461e+04,
4.80466863e+04, 5.75113178e+04, 6.41718009e+04])
I think what you do is correct. However, there is a more concise way.
import numpy as np
from scipy.interpolate import interp1d
XNumpy = np.array([0., 1.125, 2.25, 3.375, 4.5, 5.625, 6.75, 7.875, 9.])
YNumpy = np.array([
-2.70584242e+04, -2.46925289e+04, -1.53211676e+04,
-2.30000000e+01, 1.81312104e+04, 3.41662461e+04,
4.80466863e+04, 5.75113178e+04, 6.41718009e+04
])
invf = interp1d(YNumpy, XNumpy)
print(invf(0))
Result:
array(3.376425289199028)
Here I use scipy.interpolate.interp1d to return a function. Also, I interpolate the inverse function so that the abscissa can be read off directly. Of course you can do the same trick with np.interp; I just like scipy.interpolate.interp1d because it returns a function, so I can calculate the x value for any given y value.
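For completeness, a sketch of the same inverse-interpolation trick with np.interp (it works here only because YNumpy is monotonically increasing, which np.interp requires of its second argument):

import numpy as np

# interpolate x as a function of y and evaluate at y = 0
root = np.interp(0.0, YNumpy, XNumpy)
print(root)  # same linear interpolation as interp1d above, about 3.3764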