numpy mean of rows when speed is a concern - python

I want to take the mean of the rows of a numpy matrix. So for the input:
array([[ 1,  1, -1],
       [ 2,  0,  0],
       [ 3,  1,  1],
       [ 4,  0, -1]])
my output will be:
array([[ 0.33333333],
       [ 0.66666667],
       [ 1.66666667],
       [ 1.        ]])
I came up with the solution result = np.array([[x] for x in np.mean(my_matrix, axis=1)]), but this function will be called many times on matrices of 40 rows x 10-300 columns, so I would like to make it faster, and this implementation seems slow.

You can do something like this:
>>> my_matrix.mean(axis=1)[:, np.newaxis]
array([[ 0.33333333],
       [ 0.66666667],
       [ 1.66666667],
       [ 1.        ]])
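Equivalently, np.mean accepts keepdims=True, which keeps the reduced axis as a length-1 dimension, so no reshaping is needed afterwards. A minimal sketch:
import numpy as np

my_matrix = np.array([[1, 1, -1],
                      [2, 0, 0],
                      [3, 1, 1],
                      [4, 0, -1]])
# keepdims=True preserves axis 1 as length 1, giving shape (4, 1) directly
result = my_matrix.mean(axis=1, keepdims=True)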

If the matrices are fresh and independent there isn't much you can save, because the only way to compute the mean is to actually sum the numbers.
If however the matrices are obtained from partial views of a single fixed dataset (e.g. you're computing a moving average), then you can use a sum table. For example, after:
st = data.cumsum(0)
you can compute the average of the elements between index x0 and x1 with
avg = (st[x1] - st[x0]) / (x1 - x0)
in O(1) (i.e. the computing time doesn't depend on how many elements you are averaging).
You can even use numpy to compute an array with the moving averages directly with:
res = (st[n:] - st[:-n]) / n
This approach can even be extended to higher dimensions, e.g. computing the sum of the values in a rectangle in O(1) with
st = data.cumsum(0).cumsum(1)
rectsum = st[y1][x1] + st[y0][x0] - st[y0][x1] - st[y1][x0]
and the average then follows by dividing by the rectangle's area, (y1 - y0) * (x1 - x0).
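A runnable sketch of the sum-table idea for sliding-window row means (the prepended zero row is an addition of mine to make the window indexing line up; the sizes are arbitrary):
import numpy as np

data = np.random.rand(1000, 50)
n = 40  # window length in rows

# prepend a zero row so st[i] holds the sum of the first i rows of data
st = np.vstack([np.zeros((1, data.shape[1])), data.cumsum(0)])

# mean of every window of n consecutive rows, one subtraction per window
res = (st[n:] - st[:-n]) / n

# spot-check the first window against a direct computation
assert np.allclose(res[0], data[:n].mean(0))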

Related

Linear Dependence of Set of Vectors in numpy

I want to check whether some vectors are dependent on each other or not using numpy. I found some good suggestions for checking the linear dependency of the rows of a matrix in the link below:
How to find linearly independent rows from a matrix
I cannot understand the 'Cauchy-Schwarz inequality' method, which I think is due to my lack of knowledge; however, I tried the eigenvalue method to check linear dependency among columns, and here is my code:
A = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1]
])
lambdas, V = np.linalg.eig(A)
print(lambdas)
print(V)
and I get:
[ 1.          0.          1.61803399 -0.61803399]
[[ 0.          0.70710678  0.2763932  -0.7236068 ]
 [ 0.          0.          0.4472136   0.4472136 ]
 [ 0.          0.          0.7236068  -0.2763932 ]
 [ 1.         -0.70710678  0.4472136   0.4472136 ]]
My question is: what is the relevance of these eigenvectors or eigenvalues to the dependency of the columns of my matrix? How can I tell from these values which columns are dependent on each other and which are independent?
The second column vector corresponds to the eigenvalue of 0. An eigenvector v with eigenvalue 0 satisfies Av = 0, so its entries are the coefficients of a linear combination of A's columns that sums to zero; here v is roughly (0.70710678, 0, 0, -0.70710678), which says the first and fourth columns of A are equal.
Just take a look at the API documentation when you get confused.
v : (…, M, M) array
    The normalized (unit “length”) eigenvectors, such that the column
    v[:,i] is the eigenvector corresponding to the eigenvalue w[i].
You can find the linearly independent columns by QR decomposition as described here.
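A minimal sketch of the QR approach (the 1e-10 tolerance is an arbitrary choice; note that np.linalg.qr has no column pivoting, so it reliably flags a dependent column only when it comes after the columns it depends on; scipy.linalg.qr(A, pivoting=True) is more robust):
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)

# a column is independent of the ones before it iff the matching
# diagonal entry of R is (numerically) nonzero
Q, R = np.linalg.qr(A)
independent = np.abs(np.diag(R)) > 1e-10
print(independent)  # [ True  True  True False] -> the 4th column is dependent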

Quickest way to calculate the euclidean distance matrix of two list of points [duplicate]

I have a set of points in 2-dimensional space and need to calculate the distance from each point to each other point.
I have a relatively small number of points, maybe at most 100. But since I need to do it often and rapidly in order to determine the relationships between these moving points, and since I'm aware that iterating through the points could be as bad as O(n^2) complexity, I'm looking for ways to take advantage of numpy's matrix magic (or scipy).
As it stands in my code, the coordinates of each object are stored in its class. However, I could also update them in a numpy array when I update the class coordinate.
class Cell(object):
    """Represents one object in the field."""
    def __init__(self, id, x=0, y=0):
        self.m_id = id
        self.m_x = x
        self.m_y = y
It occurs to me to create a Euclidean distance matrix to prevent duplication, but perhaps you have a cleverer data structure.
I'm open to pointers to nifty algorithms as well.
Also, I note that there are similar questions dealing with Euclidean distance and numpy but didn't find any that directly address this question of efficiently populating a full distance matrix.
You can take advantage of the complex type:
# build a complex array of your cells
z = np.array([complex(c.m_x, c.m_y) for c in cells])
First solution
# mesh this array so that you will have all combinations
m, n = np.meshgrid(z, z)
# get the distance via the norm
out = abs(m-n)
Second solution
Meshing is the main idea. But numpy is clever, so you don't have to generate m and n. Just compute the difference using a transposed version of z; the mesh is done automatically:
out = abs(z[..., np.newaxis] - z)
Third solution
And if z is directly set as a 2-dimensional array, you can use z.T instead of the weird z[..., np.newaxis]. So finally, your code will look like this:
z = np.array([[complex(c.m_x, c.m_y) for c in cells]]) # notice the [[ ... ]]
out = abs(z.T-z)
Example
>>> z = np.array([[0.+0.j, 2.+1.j, -1.+4.j]])
>>> abs(z.T-z)
array([[ 0.        ,  2.23606798,  4.12310563],
       [ 2.23606798,  0.        ,  4.24264069],
       [ 4.12310563,  4.24264069,  0.        ]])
As a complement, you may want to remove duplicates afterwards by taking the upper triangle:
>>> np.triu(out)
array([[ 0.        ,  2.23606798,  4.12310563],
       [ 0.        ,  0.        ,  4.24264069],
       [ 0.        ,  0.        ,  0.        ]])
Some benchmarks
>>> timeit.timeit('abs(z.T-z)', setup='import numpy as np;z = np.array([[0.+0.j, 2.+1.j, -1.+4.j]])')
4.645645342274779
>>> timeit.timeit('abs(z[..., np.newaxis] - z)', setup='import numpy as np;z = np.array([0.+0.j, 2.+1.j, -1.+4.j])')
5.049334864854522
>>> timeit.timeit('m, n = np.meshgrid(z, z); abs(m-n)', setup='import numpy as np;z = np.array([0.+0.j, 2.+1.j, -1.+4.j])')
22.489568296184686
If you don't need the full distance matrix, you will be better off using a k-d tree. Consider scipy.spatial.cKDTree or sklearn.neighbors.KDTree. This is because a k-d tree can find the k nearest neighbors in O(n log n) time, and therefore you avoid the O(n**2) complexity of computing all n-by-n distances.
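A minimal sketch with cKDTree (the point count and k are arbitrary choices):
import numpy as np
from scipy.spatial import cKDTree

pts = np.random.rand(100, 2)
tree = cKDTree(pts)

# distances and indices of the 5 nearest neighbours of every point;
# k=6 because each point is its own nearest neighbour at distance 0
dist, idx = tree.query(pts, k=6)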
Jake VanderPlas gives this example using broadcasting in the Python Data Science Handbook, which is very similar to what @shx2 proposed:
import numpy as np
rand = np.random.RandomState(42)
X = rand.rand(3, 2)
dist_sq = np.sum((X[:, np.newaxis, :] - X[np.newaxis, :, :]) ** 2, axis=-1)
dist_sq
array([[0.        , 0.18543317, 0.81602495],
       [0.18543317, 0.        , 0.22819282],
       [0.81602495, 0.22819282, 0.        ]])
Here is how you can do it using numpy:
import numpy as np
x = np.array([0,1,2])
y = np.array([2,4,6])
# take advantage of broadcasting, to make a 2dim array of diffs
dx = x[..., np.newaxis] - x[np.newaxis, ...]
dy = y[..., np.newaxis] - y[np.newaxis, ...]
dx
=> array([[ 0, -1, -2],
          [ 1,  0, -1],
          [ 2,  1,  0]])
# stack in one array, to speed up calculations
d = np.array([dx,dy])
d.shape
=> (2, 3, 3)
Now all that is left is computing the L2-norm along the 0-axis (as discussed here):
(d**2).sum(axis=0)**0.5
=> array([[ 0.        ,  2.23606798,  4.47213595],
          [ 2.23606798,  0.        ,  2.23606798],
          [ 4.47213595,  2.23606798,  0.        ]])
If you are looking for the most efficient way of computation, use SciPy's cdist() (or pdist() if you need just the vector of pairwise distances instead of the full distance matrix), as suggested in Tweakimp's comment. As he said, it's a lot faster than the methods based on vectorization and broadcasting proposed by RichPauloo and shx2, because cdist() and pdist() compute the metrics in compiled C loops under the hood, which beats vectorized NumPy here.
By the way, if you can use SciPy and still prefer the broadcasting method, you don't have to implement it yourself: SciPy's distance_matrix() function is a pure Python implementation that leverages broadcasting and vectorization (source code, docs).
It's worth mentioning that cdist()/pdist() is also more efficient than broadcasting memory-wise, as it computes distances pair by pair and avoids creating an intermediate array of n*n*d elements, where n is the number of points and d is their dimensionality.
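A minimal sketch of the two calls (the point count is an arbitrary choice); squareform() converts between pdist()'s condensed vector and the full square matrix:
import numpy as np
from scipy.spatial.distance import cdist, pdist, squareform

pts = np.random.rand(100, 2)

full = cdist(pts, pts)   # full 100 x 100 distance matrix
condensed = pdist(pts)   # only the 100*99/2 unique pairwise distances
assert np.allclose(full, squareform(condensed))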
Experiments
I've conducted some simple experiments to compare the performance of SciPy's cdist(), distance_matrix(), and a broadcasting implementation in NumPy. I used perf_counter_ns() from Python's time module to measure time, and all results are averaged over 10 runs on 10000 points in 2D space using the np.float64 datatype (tested on Python 3.8.10, Windows 10, with a Ryzen 2700 and 16 GB RAM):
cdist() - 0.6724s
distance_matrix() - 3.0128s
my NumPy implementation - 3.6931s
Code if someone wants to reproduce the experiments:
from scipy.spatial import distance, distance_matrix
import numpy as np
from time import perf_counter_ns

def dist_mat_custom(a, b):
    return np.sqrt(np.sum(np.square(a[:, np.newaxis, :] - b[np.newaxis, :, :]), axis=-1))

results = []
size = 10000
it_num = 10
for i in range(it_num):
    a = np.random.normal(size=(size, 2))
    b = np.random.normal(size=(size, 2))
    start = perf_counter_ns()
    c = distance_matrix(a, b)
    #c = dist_mat_custom(a, b)
    #c = distance.cdist(a, b)
    results.append(perf_counter_ns() - start)
print(np.mean(results) / 1e9)

OpenCV easy way to call fillConvexPoly() on 3d area?

I have a grayscale 3D image represented as a numpy array. Dimensions are height x width x depth. Given a square = [p1,p2,p3,p4] I want to call fillConvexPoly(square, 100) on every depth layer of the array. I know I can just loop through the depth and call the function a few hundred times, but I feel like doing so fails to take advantage of the fact that I am working with a numpy array. Is there a faster way to accomplish this?
All you need to do is index the rectangle in the first two dimensions; that will select the rectangle in every channel of the third dimension. Then you can simply fill with whatever values you like. For example, I'll create a stack of 100 random 5x5 images and assign every inside pixel the value 1, leaving only the border of each image with the random values it started with. Although only the first image is printed below, each one looks like the first except for the values around the edge.
>>> import numpy as np
>>> imgs = np.random.rand(5, 5, 100)
>>> imgs[:, :, 0]
array([[ 0.17818592,  0.7427181 ,  0.83685674,  0.27231489,  0.037665  ],
       [ 0.61994589,  0.64282216,  0.20543185,  0.65049771,  0.52236919],
       [ 0.78862153,  0.86612292,  0.48208187,  0.1233576 ,  0.18561781],
       [ 0.09628382,  0.08812067,  0.50085837,  0.92871428,  0.28052041],
       [ 0.87715376,  0.38269949,  0.76995739,  0.83079243,  0.90698188]])
>>> imgs[1:4, 1:4] = 1
>>> imgs[:, :, 0]
array([[ 0.17818592,  0.7427181 ,  0.83685674,  0.27231489,  0.037665  ],
       [ 0.61994589,  1.        ,  1.        ,  1.        ,  0.52236919],
       [ 0.78862153,  1.        ,  1.        ,  1.        ,  0.18561781],
       [ 0.09628382,  1.        ,  1.        ,  1.        ,  0.28052041],
       [ 0.87715376,  0.38269949,  0.76995739,  0.83079243,  0.90698188]])
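If the region is a general convex polygon rather than an axis-aligned rectangle, the same idea still applies: draw the polygon once into a 2D mask with fillConvexPoly, then index with it. A sketch, assuming the square is given as (x, y) vertex pairs (the names are illustrative):
import numpy as np
import cv2

vol = np.random.rand(5, 5, 100)

# rasterize the polygon once into a 2D mask...
mask = np.zeros(vol.shape[:2], dtype=np.uint8)
square = np.array([[1, 1], [3, 1], [3, 3], [1, 3]], dtype=np.int32)
cv2.fillConvexPoly(mask, square, 1)

# ...then one boolean-indexed assignment fills every depth layer at once
vol[mask.astype(bool)] = 100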

affinity propagation in python

I am seeing something strange while using AffinityPropagation from sklearn. I have a 4 x 4 numpy ndarray, which is basically the affinity scores: sim[i, j] holds the affinity score of (i, j). Now, when I feed it into the AffinityPropagation function, I get a total of 4 labels.
Here is a similar example with a smaller matrix:
In [215]: x = np.array([[1, 0.2, 0.4, 0], [0.2, 1, 0.8, 0.3], [0.4, 0.8, 1, 0.7], [0, 0.3, 0.7, 1]])
In [216]: x
Out[216]:
array([[ 1. ,  0.2,  0.4,  0. ],
       [ 0.2,  1. ,  0.8,  0.3],
       [ 0.4,  0.8,  1. ,  0.7],
       [ 0. ,  0.3,  0.7,  1. ]])
In [217]: clusterer = cluster.AffinityPropagation(affinity='precomputed')
In [218]: f = clusterer.fit(x)
In [219]: f.labels_
Out[219]: array([0, 1, 1, 1])
This says (according to Kevin) that the first sample (0th-indexed row) is a cluster (cluster #0) on its own and the rest of the samples are in another cluster (cluster #1). But I still do not understand this output. What is a sample here? What are the members? I want a set of pairs (i, j) assigned to one cluster, another set of pairs assigned to another cluster, and so on.
It looks like a 4-sample x 4-feature matrix, which I do not want. Is this the problem? If so, how do I convert this to a proper 4-sample x 4-sample affinity matrix?
The documentation (http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AffinityPropagation.html) says
fit(X, y=None)
Create affinity matrix from negative euclidean distances, then apply affinity propagation clustering.
Parameters:
X: array-like, shape (n_samples, n_features) or (n_samples, n_samples) :
Data matrix or, if affinity is precomputed, matrix of similarities / affinities.
Thanks!
From your description it sounds like you are working with a "pairwise similarity matrix" x (although your example data does not show that). If this is the case, your matrix should be symmetric, so that sim[i,j] == sim[j,i], with the diagonal values equal to 1. Example similarity data S:
S
array([[ 1.        ,  0.08276253,  0.16227766,  0.47213595,  0.64575131],
       [ 0.08276253,  1.        ,  0.56776436,  0.74456265,  0.09901951],
       [ 0.16227766,  0.56776436,  1.        ,  0.47722558,  0.58257569],
       [ 0.47213595,  0.74456265,  0.47722558,  1.        ,  0.87298335],
       [ 0.64575131,  0.09901951,  0.58257569,  0.87298335,  1.        ]])
Typically, when you already have a distance matrix you should use affinity='precomputed'. But in your case you are using similarity. In this specific example you can transform it to a pseudo-distance using 1 - S. (The reason to do this is that I don't know whether Affinity Propagation will give you the expected results if you give it a similarity matrix as input):
1 - S
array([[ 0.        ,  0.91723747,  0.83772234,  0.52786405,  0.35424869],
       [ 0.91723747,  0.        ,  0.43223564,  0.25543735,  0.90098049],
       [ 0.83772234,  0.43223564,  0.        ,  0.52277442,  0.41742431],
       [ 0.52786405,  0.25543735,  0.52277442,  0.        ,  0.12701665],
       [ 0.35424869,  0.90098049,  0.41742431,  0.12701665,  0.        ]])
With that being said, I think this is where your interpretation was off:
"This says that the first 3 rows are similar, the 4th row is a cluster on its own, and the 5th row is also a cluster on its own. A total of 3 clusters."
The f.labels_ array:
array([0, 1, 1, 1, 0])
is telling you that samples (not rows) 0 and 4 are in cluster 0 AND that samples 1, 2, and 3 are in cluster 1. You don't need 25 different labels for a 5-sample problem; that wouldn't make sense. Hope this helps a little. Try the demo (inspect the variables along the way and compare them with your data), which starts from raw data; it should help you decide whether Affinity Propagation is the right clustering algorithm for you.
According to this page, https://scikit-learn.org/stable/modules/clustering.html, you can use a similarity matrix for AffinityPropagation.
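A minimal sketch using the 4 x 4 matrix from the question (random_state is set only for reproducibility and assumes scikit-learn >= 0.23):
import numpy as np
from sklearn.cluster import AffinityPropagation

S = np.array([[1. , 0.2, 0.4, 0. ],
              [0.2, 1. , 0.8, 0.3],
              [0.4, 0.8, 1. , 0.7],
              [0. , 0.3, 0.7, 1. ]])

# affinity='precomputed' treats S[i, j] as the similarity of samples i and j,
# so S must be square; one label per sample comes back in labels_
ap = AffinityPropagation(affinity='precomputed', random_state=0).fit(S)
print(ap.labels_)  # e.g. array([0, 1, 1, 1])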
