Calculating variance within cells across matrices in python - python

I have several matrices with identical dimensions X and Y. I want to calculate the variance for each cell across the matrices, such that the resulting output matrix would also have the same dimensions X and Y. For example
matrix1 = [[1,1,1], [2,2,2], [3,3,3]]
matrix2 = [[2,2,2], [3,3,3], [4,4,4]]
matrix3 = [[3,3,3], [4,4,4], [5,5,5]]
Using position (0,0) in each matrix as an example, I need to first calculate the mean, which would be (1+2+3)/3 = 2
matrix_sum = matrix1 + matrix2 + matrix3
matrix_mean = matrix_sum / 3
Next I'd calculate the population variance, which would be:
[(1-2)^2 + (2-2)^2 + (3-2)^2] / 3
And I'd like to be able to do this for an indeterminate (but small) number of matrices (say 50), and the matrices themselves would be at most 250 x 250 (they will always be square matrices)
for x in range(matrix_mean.shape[0]):
    for y in range(matrix_mean.shape[1]):
        standard_deviation_matrix.iat[x,y] = pow(matrix_mean.iat[x,y] - matrix1.iat[x,y], 2) + pow(matrix_mean.iat[x,y] - matrix2.iat[x,y], 2) + pow(matrix_mean.iat[x,y] - matrix3.iat[x,y], 2)
standard_deviation_matrix = standard_deviation_matrix / (3-1)
Here, matrix_mean is just (matrix1 + matrix2 + matrix3) / 3 (i.e. the mean within each cell across the matrices)
This seems to work, but it's super slow and super clunky; but it's how I'd do it in C. Is there an easier/better/more pythonic way to do this?
Thanks

You can try:
import numpy as np

all_mat = np.stack([matrix1, matrix2, matrix3])
mat_mean = all_mat.mean(axis=0)      # per-cell mean across the matrices
variance = np.var(all_mat, axis=0)   # per-cell variance across the matrices
Which gives you:
array([[0.66666667, 0.66666667, 0.66666667],
[0.66666667, 0.66666667, 0.66666667],
[0.66666667, 0.66666667, 0.66666667]])
Or for the std:
np.std(all_mat, axis=0)
And you get:
array([[0.81649658, 0.81649658, 0.81649658],
[0.81649658, 0.81649658, 0.81649658],
[0.81649658, 0.81649658, 0.81649658]])
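Note that np.var and np.std default to the population formula (ddof=0). The loop in the question divides by (3-1), so if that n-1 denominator is what you want, pass ddof=1:
np.var(all_mat, axis=0, ddof=1)  # divides by N-1 instead of N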

Convert each matrix into a numpy array, stack the arrays (this will add another dimension), and calculate the variance along that dimension:
m1 = np.array(matrix1)
...
m = np.stack([m1, m2, ...])
m.var(axis=0)
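If the number of matrices is not fixed (say 50 of them), the same pattern works on a list; a minimal sketch, assuming they are collected in a Python list called matrices:
import numpy as np

matrices = [matrix1, matrix2, matrix3]  # ... however many you have
m = np.stack([np.asarray(a) for a in matrices])
variance = m.var(axis=0)                # same X, Y shape, one variance per cell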

Related

Is there a way to vectorize the calculation of correlation coefficients from this Numpy array?

This code computes the Pearson correlation coefficient for all possible pairs of L=45 element vectors taken from a stack of M=102272. The result is a symmetric MxM matrix occupying about 40 GB of memory. The memory requirement isn't a problem for my computer, but I estimate from test runs that the ~5 billion passes through the inner loop will take a good 2-3 days to complete. My question: Is there a straightforward way to vectorize the inner loop to speed things up significantly?
# L = 45
# M = 102272
# data[M,L] (type 'float32')
cmat = np.zeros((M, M))
for i in range(M):
    v1 = data[i, :]
    z1 = (v1 - np.average(v1)) / np.std(v1)
    for j in range(i+1):
        v2 = data[j, :]
        z2 = (v2 - np.average(v2)) / np.std(v2)
        cmat[i, j] = cmat[j, i] = z1.dot(z2) / L
There's a built-in NumPy function, np.corrcoef, that already computes the correlation matrix. Just use it!
>>> import numpy as np
>>> rng = np.random.default_rng(seed=42)
>>> xarr = rng.random((3, 3))
>>> xarr
array([[0.77395605, 0.43887844, 0.85859792],
[0.69736803, 0.09417735, 0.97562235],
[0.7611397 , 0.78606431, 0.12811363]])
>>> R1 = np.corrcoef(xarr)
>>> R1
array([[ 1. , 0.99256089, -0.68080986],
[ 0.99256089, 1. , -0.76492172],
[-0.68080986, -0.76492172, 1. ]])
See the numpy.corrcoef documentation.
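Applied to the data in the question (data of shape (M, L), one row per vector), the whole double loop collapses to a single call; a sketch, keeping in mind that np.corrcoef returns float64, so the M x M result will be roughly twice the size of the float32 matrix built by the loop:
cmat = np.corrcoef(data)  # M x M matrix of Pearson correlations between rows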

Memory efficient mean pairwise distance

I am aware of the scipy.spatial.distance.pdist function and how to compute the mean from the resulting matrix/ndarray.
>>> x = np.random.rand(10000, 2)
>>> y = pdist(x, metric='euclidean')
>>> y.mean()
0.5214255824176626
In the example above y gets quite large (nearly 2,500 times as large as the input array):
>>> y.shape
(49995000,)
>>> from sys import getsizeof
>>> getsizeof(x)
160112
>>> getsizeof(y)
399960096
>>> getsizeof(y) / getsizeof(x)
2498.0019986009793
But since I am only interested in the mean pairwise distance, the distance matrix doesn't have to be kept in memory. Instead the mean of each row (or column) can be computed separately. The final mean value can then be computed from the row mean values.
Is there already a function which exploits this property, or is there an easy way to extend/combine existing functions to do so?
If you use the squared version of the distance, its mean is equivalent to twice the sum of the per-dimension variances computed with n-1:
from scipy.spatial.distance import pdist, squareform
import numpy as np

x = np.random.rand(10000, 2)
print(pdist(x, 'sqeuclidean').mean())
print(np.var(x, 0, ddof=1).sum()*2)

0.331474285845873
0.33147428584587346
You will have to weight each row by the number of observations that make up the mean. For example the pdist of a 3 x 2 matrix is the flattened upper triangle (offset of 1) of the squareform 3 x 3 distance matrix.
arr = np.arange(6).reshape(3,2)
arr
array([[0, 1],
[2, 3],
[4, 5]])
pdist(arr)
array([2.82842712, 5.65685425, 2.82842712])
from sklearn.metrics import pairwise_distances
square = pairwise_distances(arr)
square
array([[0. , 2.82842712, 5.65685425],
[2.82842712, 0. , 2.82842712],
[5.65685425, 2.82842712, 0. ]])
square[np.triu_indices(square.shape[0], 1)]
array([2.82842712, 5.65685425, 2.82842712])
There is the pairwise_distances_chunked function that can be used to iterate over the distance matrix row by row, but you will need to keep track of the row index to make sure you only take the mean of values in the upper/lower triangle of the matrix (the distance matrix is symmetric). This isn't complicated, but I imagine you will introduce a significant slowdown.
from sklearn.metrics import pairwise_distances_chunked

tot = ((arr.shape[0] ** 2) - arr.shape[0]) / 2   # number of unique pairs
gen = pairwise_distances_chunked(arr)            # yields chunks of rows of the distance matrix
weighted_means = 0
r = 0                                            # global row index
for chunk in gen:
    for row in chunk:
        if r < arr.shape[0] - 1:
            sm = row[r + 1:].mean()              # mean of this row's upper-triangle entries
            wgt = (row.shape[0] - r - 1) / tot   # weight by how many entries that mean covers
            weighted_means += sm * wgt
        r += 1
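Alternatively, pairwise_distances_chunked accepts a reduce_func callback (which must return one value per row of the chunk); a rough sketch that accumulates only per-row upper-triangle sums:
import numpy as np
from sklearn.metrics import pairwise_distances_chunked

def row_upper_sums(chunk, start):
    # for each row in the chunk, sum of the entries strictly above the diagonal
    return np.array([row[start + k + 1:].sum() for k, row in enumerate(chunk)])

x = np.random.rand(10000, 2)
n_pairs = x.shape[0] * (x.shape[0] - 1) // 2
mean_dist = sum(s.sum() for s in pairwise_distances_chunked(x, reduce_func=row_upper_sums)) / n_pairs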

The components of numpy.gradient of a symmetric function are different

The gradient of a symmetric function should have the same derivatives in all dimensions.
numpy.gradient is providing different components.
Here is a MWE.
import numpy as np
x = (-1,0,1)
y = (-1,0,1)
X,Y = np.meshgrid(x,y)
f = 1/(X*X + Y*Y +1.0)
print(f)
>> [[0.33333333 0.5 0.33333333]
[0.5 1. 0.5 ]
[0.33333333 0.5 0.33333333]]
This has same values in both dimensions.
But np.gradient(f) gives
[array([[ 0.16666667, 0.5 , 0.16666667],
[ 0. , 0. , 0. ],
[-0.16666667, -0.5 , -0.16666667]]),
array([[ 0.16666667, 0. , -0.16666667],
[ 0.5 , 0. , -0.5 ],
[ 0.16666667, 0. , -0.16666667]])]
Both the components of the gradient are different.
Why so?
What I am missing in interpretation of the output?
Let's walk through this step by step. So first, as correctly mentioned by meowgoesthedog
numpy calculates derivatives in a direction.
Numpy's way of calculating gradients
It's important to note that np.gradient uses central differences, meaning (for simplicity we look at just one direction):
grad_f[i] = (f[i+1] - f[i])/2 + (f[i] - f[i-1])/2 = (f[i+1] - f[i-1])/2
At the boundaries numpy uses one-sided differences:
grad_f[min] = f[min+1] - f[min]
grad_f[max] = f[max] - f[max-1]
In your case the boundary indices are 0 and 2.
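A quick 1-D check of these formulas on the first column of the f from the question:
col = np.array([1/3, 1/2, 1/3])  # f[:, 0]
print(np.gradient(col))
# [ 0.16666667  0.         -0.16666667]  -> one-sided at both ends, central in the middle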
2D case
If you use more than one dimension, we need to take the direction of the derivative into account. np.gradient calculates the derivative along every axis. Let's reproduce your results:
Let's move along the columns, so we calculate with row vectors
f[1,:] - f[0,:]
Output
array([0.16666667, 0.5 , 0.16666667])
which is exactly the first row of the first element of your gradient.
The middle row is calculated with central differences, therefore:
(f[2,:]-f[1,:])/2 + (f[1,:]-f[0,:])/2
Output
array([0., 0., 0.])
The third row:
f[2,:] - f[1,:]
Output
array([-0.16666667, -0.5 , -0.16666667])
For the other direction just exchange the : and the numbers, and keep in mind that you are now calculating column vectors. For a symmetric function like yours, this leads directly to the transposed derivative.
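In code, the second component of np.gradient(f) (the column direction) can be reproduced the same way; a sketch using the f from the MWE:
np.stack((f[:, 1] - f[:, 0],
          (f[:, 2] - f[:, 0]) / 2,
          f[:, 2] - f[:, 1]), axis=1)
# identical to np.gradient(f)[1]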
3D case
x_ = (-1,0,4)
y_ = (-3,0,1)
z_ = (-1,0,12)
x, y, z = np.meshgrid(x_, y_, z_, indexing='ij')
f = 1/(x**2 + y**2 + z**2 + 1)
np.gradient(f)[1]
Output
array([[[ *2.50000000e-01, 4.09090909e-01, 3.97702165e-04*],
[ 8.33333333e-02, 1.21212121e-01, 1.75554093e-04],
[-8.33333333e-02, -1.66666667e-01, -4.65939801e-05]],
[[ **4.09090909e-01, 9.00000000e-01, 4.03045231e-04**],
[ 1.21212121e-01, 2.00000000e-01, 1.77904287e-04],
[-1.66666667e-01, -5.00000000e-01, -4.72366556e-05]],
[[ ***1.85185185e-02, 2.03619910e-02, 3.28827183e-04***],
[ 7.79727096e-03, 8.54700855e-03, 1.45243282e-04],
[-2.92397661e-03, -3.26797386e-03, -3.83406181e-05]]])
The gradient given here is calculated along rows (axis 0 would be along the stacked matrices, axis 1 along rows, axis 2 along columns).
This can be calculated by
(f[:,1,:] - f[:,0,:])
Output
array([[*2.50000000e-01, 4.09090909e-01, 3.97702165e-04*],
[**4.09090909e-01, 9.00000000e-01, 4.03045231e-04**],
[***1.85185185e-02, 2.03619910e-02, 3.28827183e-04***]])
I added the asterisks so that it becomes clear where to find the corresponding row vectors. Since we calculated the gradient along direction 1, we have to look for row vectors.
If one wants to reproduce the whole gradient, this is done by
np.stack(((f[:,1,:] - f[:,0,:]), (f[:,2,:] - f[:,0,:])/2, (f[:,2,:] - f[:,1,:])), axis=1)
n-dim case
We can generalize what we have learned here to calculate the gradient of an arbitrary array along any axis.
def grad_along_axis(f, ax):
    f_grad_ind = []
    for i in range(f.shape[ax]):
        if i == 0:
            f_grad_ind.append(np.take(f, i+1, ax) - np.take(f, i, ax))
        elif i == f.shape[ax] - 1:
            f_grad_ind.append(np.take(f, i, ax) - np.take(f, i-1, ax))
        else:
            f_grad_ind.append((np.take(f, i+1, ax) - np.take(f, i-1, ax)) / 2)
    f_grad = np.stack(f_grad_ind, axis=ax)
    return f_grad
where
np.take(f, i, ax) = f[:,...,i,...,:]
and i is at index ax.
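A quick sanity check of this helper against np.gradient (which also assumes unit spacing by default), using the 3-D f from above:
print(np.allclose(grad_along_axis(f, 1), np.gradient(f)[1]))  # True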
Usually gradients and jacobians are operators on functions
If you need the gradient of f = 1/(X*X + Y*Y + 1.0) then you have to compute it symbolically, or estimate it with numerical methods that use that function.
I do not know what a gradient of a constant 3d array is. numpy.gradient is a one dimensional concept.
Python has the sympy package that can automatically compute jacobians symbolically.
If by second order derivative of a scalar 3d field you mean a laplacian then you can estimate that with a standard 4 point stencil.
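A minimal sympy sketch of the symbolic gradient of the f used in the question:
import sympy as sp

X, Y = sp.symbols('X Y')
f = 1 / (X*X + Y*Y + 1)
grad = [sp.diff(f, v) for v in (X, Y)]
print(grad)  # [-2*X/(X**2 + Y**2 + 1)**2, -2*Y/(X**2 + Y**2 + 1)**2]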

How to plot pairwise distances of two-dimensional vectors?

I have a set of data in python like:
x y angle
I want to calculate the distance between every possible pair of points and plot those distances against the difference between the two angles.
x, y, a = np.loadtxt('w51e2-pa-2pk.log', unpack=True)
n = 0
f=(((x[n])-x[n+1:])**2+((y[n])-y[n+1:])**2)**0.5
d = a[n]-a[n+1:]
plt.scatter(f,d)
There are 255 points in my data.
f is the distance and d is the difference between two angles.
My question is can I set n = [1,2,3,.....255] and do the calculation again to get the f and d of all possible pairs?
You can obtain the pairwise distances through broadcasting by considering it as an outer operation on the array of 2-dimensional vectors as follows:
vecs = np.stack((x, y)).T
np.linalg.norm(vecs[np.newaxis, :] - vecs[:, np.newaxis], axis=2)
For example,
In [1]: import numpy as np
...: x = np.array([1, 2, 3])
...: y = np.array([3, 4, 6])
...: vecs = np.stack((x, y)).T
...: np.linalg.norm(vecs[np.newaxis, :] - vecs[:, np.newaxis], axis=2)
...:
Out[1]:
array([[ 0. , 1.41421356, 3.60555128],
[ 1.41421356, 0. , 2.23606798],
[ 3.60555128, 2.23606798, 0. ]])
Here, the (i, j)'th entry is the distance between the i'th and j'th vectors.
The case of the pairwise differences between angles is similar, but simpler, as you only have one dimension to deal with:
In [2]: a = np.array([10, 12, 15])
...: a[np.newaxis, :] - a[: , np.newaxis]
...:
Out[2]:
array([[ 0, 2, 5],
[-2, 0, 3],
[-5, -3, 0]])
Moreover, plt.scatter does not care that the results are given as matrices, so putting everything together using the notation of the question, you can obtain the plot of angle differences against distances by doing something like
vecs = np.stack((x, y)).T
f = np.linalg.norm(vecs[np.newaxis, :] - vecs[:, np.newaxis], axis=2)
d = angle[np.newaxis, :] - angle[: , np.newaxis]
plt.scatter(f, d)
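If you only need each unordered pair once (rather than the full symmetric matrices), the same thing can be done with index arrays; a sketch, assuming x, y and a are the arrays loaded in the question:
i, j = np.triu_indices(len(x), k=1)     # indices of all unique pairs
f = np.hypot(x[i] - x[j], y[i] - y[j])  # pairwise distances
d = a[i] - a[j]                         # pairwise angle differences
plt.scatter(f, d)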
You have to use a for loop and range() to iterate over n, e.g. like this:
n = len(x)
for i in range(n):
    # do something with the current index
    # e.g. print the points
    print(x[i])
    print(y[i])
But note that if you use i+1 inside the last iteration, this will already be outside of your list.
Also in your calculation there are errors. (x[n])-x[n+1:] does not work because x[n] is a single value in your list while x[n+1:] is a list starting from the (n+1)'th element. You cannot subtract a list from an int or whatever it is.
Maybe you will have to even use two nested loops to do what you want. I guess that you want to calculate the distance between each point so a two dimensional array may be the data structure you want.
If you are interested in all combinations of the points in x and y, I suggest using itertools, which will give you all possible combinations. Then you can do it as follows:
import itertools
f = [((x[i]-x[j])**2 + (y[i]-y[j])**2)**0.5 for i, j in itertools.product(range(255), repeat=2) if i != j]
# and similar for the angles
But maybe there is even an easier way...

Get solution to overdetermined linear homogeneous system numpy

I'm trying to find the solution to an overdetermined linear homogeneous system (Ax = 0) using numpy, in order to get the linear least-squares solution for a linear regression.
This is the code I am using to generate the linear regression:
N = 100
x_data = np.linspace(0, N-1, N)
m = +5
n = -5
y_model = m*x_data + n
y_noise = y_model + np.random.normal(0, +5, N)
I want to recover m and n from y_noise. In other words, I want to solve the homogeneous system (Ax = 0), where x = (m, n, 1) and A = (x_data | 1 | -y_noise). So I convert the non-homogeneous system (Ax = y) into a homogeneous one (Ax = 0) using this code:
A = np.array(np.vstack((x_data, np.ones(N), -y_noise)).T)
I know I could solve the non-homogeneous system using np.linalg.lstsq((x_data | 1), y_noise), but I want to get the solution for the homogeneous system. The problem I am finding with this function is that it only returns the trivial solution (x = 0):
x = np.linalg.lstsq(A, np.zeros(N))[0] => array([ 0., 0., 0.])
I was thinking about using eigenvectors to get the solution but it seems not to work:
A_T_A = np.dot(A.T, A)
eigen_values, eigen_vectors = np.linalg.eig(A_T_A)
# eigenvectors
[[ -2.03500000e-01 4.89890000e+00 5.31170000e+00]
[ -3.10000000e-03 1.02230000e+00 -2.64330000e+01]
[ 1.00000000e+00 1.00000000e+00 1.00000000e+00]]
# eigenvectors normalized
[[ -0.98365497700 -4.744666220 1.0] # (m1, n1, 1)
[ 0.00304878118 0.210130914 1.0] # (m2, n2, 1)
[ 25.7752417000 -5.132910010 1.0]] # (m3, n3, 1)
None of these fits the model parameters (m=+5, n=-5).
How can I find (m, n) correctly? Thanks!
I have already found how to fix it. The problem was how I was interpreting the output of the np.linalg.eig function, but the approach using eigenvectors is right. In spite of that, @Stelios is right when he says that np.linalg.lstsq returns the trivial solution (x = 0) because matrix A has full column rank.
I was assuming the output of np.linalg.eig was:
[[m1 n1 1]
[m2 n2 1]
[m3 n3 1]]
But it is not, the correct format is:
[[m1 m2 m3]
[n1 n2 n3]
[ 1 1 1]]
So if we want to get the solution which best fits the model parameters (m, n), we have to choose the eigenvector with the smallest eigenvalue and normalize it:
A_T_A = np.dot(A_homo.T, A_homo)
eigen_values, eigen_vectors = np.linalg.eig(A_T_A)
# eigenvectors
[[ 1.96409304e-01 9.48763118e-01 -2.47531678e-01]
[ 2.94608003e-04 2.52391765e-01 9.67625088e-01]
[ -9.80521952e-01 1.90123494e-01 -4.92925776e-02]]
# MIN eigenvector
eigen_vector_min = eigen_vectors[:, np.argmin(eigen_values)]
[-0.24753168 0.96762509 -0.04929258]
# MIN eigenvector normalized
[ 5.02168258 -19.63023915 1. ] # [m, n, 1]
Finally we get m = 5.02 and n = -19.6, which is a pretty good approximation.
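An equivalent and usually more numerically robust way to get the same non-trivial solution is via the SVD of A: the right singular vector belonging to the smallest singular value minimizes ||Ax|| subject to ||x|| = 1. A sketch:
_, _, vt = np.linalg.svd(A)
sol = vt[-1]          # right singular vector for the smallest singular value
sol = sol / sol[-1]   # scale so the last entry is 1 -> [m, n, 1]
print(sol[:2])        # estimates of (m, n)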
