I recently encountered a problem with the inaccuracy of matrix multiplication in NumPy. See my example below (also here: https://trinket.io/python3/6a4c22e450).
import numpy as np
para = np.array([[ 3.28522453e+08, -1.36339334e+08, 1.36339334e+08],
[-1.36339334e+08, 5.65818682e+07, -5.65818682e+07],
[ 1.36339334e+08, -5.65818672e+07, 5.65818682e+07]])
in1 = np.array([[ 285.91695469],
[ 262.3 ],
[-426.64380594]])
in2 = np.array([[ 285.91695537],
[ 262.3 ],
[-426.64380443]])
(in1 - in2)/in1
>>> array([[-2.37831286e-09],
[ 0.00000000e+00],
[ 3.53925214e-09]])
The relative difference between in1 and in2 is very small, about 10^-9.
res1 = para @ in1
>>> array([[-356.2361908 ],
[ 443.16068268],
[-180.86068344]])
res2 = para @ in2
>>> array([[ 73.03147125],
[265.01131439],
[ -2.71131516]])
But after the matrix multiplication, why does the relative difference between the outputs res1 and res2 become so large?
(res1 - res2)/res1
>>> array([[1.20500857],
[0.40199723],
[0.98500882]])
This is not a bug; it is to be expected with a matrix such as yours.
Your matrix (which is symmetric) has one large and two small eigenvalues:
In [34]: evals, evecs = np.linalg.eigh(para)
In [35]: evals
Out[35]: array([-1.06130078e-01, 1.00000000e+00, 4.41686189e+08])
Because the matrix is symmetric, it can be diagonalized with an orthonormal basis. That just means that we can define a new coordinate system in which the matrix is diagonal, and the diagonal values are those eigenvalues. The effect of multiplying the matrix by a vector in these coordinates is to simply multiply each coordinate by the corresponding eigenvalue, i.e. the first coordinate is multiplied by -0.106, the second coordinate doesn't change, and the third coordinate is multiplied by the large factor 4.4e8.
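As a quick sanity check (a supplementary sketch, not part of the original answer, reusing the evals and evecs computed above), the orthonormal eigendecomposition reconstructs para up to rounding:
# para should equal V @ diag(evals) @ V.T, where V = evecs has the eigenvectors as columns
np.allclose(para, evecs @ np.diag(evals) @ evecs.T)   # expected: True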
The reason you get such a drastic change when multiplying the original matrix para by in1 and in2 is that, in the new coordinates, the third component of the transformed in1 is positive, and the third component of the transformed in2 is negative. (That is, the points are on opposite sides of the 2-d eigenspace associated with the two smaller eigenvalues.) There are several ways to find these transformed coordinates; one is to compute inv(V) @ x, where V is the matrix of eigenvectors:
In [36]: np.linalg.solve(evecs, in1)
Out[36]:
array([[ 5.64863071e+02],
[-1.16208620e+02],
[ 8.55527517e-07]])
In [37]: np.linalg.solve(evecs, in2)
Out[37]:
array([[ 5.64863070e+02],
[-1.16208619e+02],
[-2.71381169e-07]])
Note the different signs of the third components. The values are small, but when you multiply by the diagonal matrix, they are multiplied by 4.4e8, giving 377.87 and -119.86, respectively. That large change shows up in the results you observed in the original coordinates.
For a rougher estimate: the elements of para are ~10^8, so multiplication by numbers of that magnitude occurs when you compute para @ x. It is not surprising, then, that if the relative differences between in1 and in2 are ~10^-9, the relative differences between res1 and res2 will be ~10^-9 * ~10^8, i.e. ~0.1. (Your calculated relative errors were [1.2, 0.4, 0.99], so the rough estimate is in the right ballpark.)
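As a supplementary check (not part of the original answer), the condition number of para captures this worst-case amplification of relative errors; it is roughly the ratio of the largest to the smallest absolute eigenvalue:
# a large condition number means small relative input changes can cause large relative output changes
np.linalg.cond(para)   # roughly 4e9 for this matrix, so a ~1e-9 input error can become an O(1) output error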
This looks like a bug... NumPy is written in C, so this could be an issue of numbers being cast to a smaller float type, which would cause a large floating-point error in this case.
I have an array with shape (128, 116, 116, 1), where the 1st dimension is the number of subjects and the 2nd and 3rd dimensions are the data.
I was trying to calculate the variance (squared deviation from the mean) at each position (i.e. at (0,0), (0,1), (1,0), etc., up to (116,116)) across all 128 subjects, resulting in an array with shape (116, 116).
Can anyone tell me how to accomplish this?
Thank you!
Let's say we have a multidimensional list a of shape (3,2,2)
import numpy as np
a = [
    [
        [1, 1],
        [1, 1]
    ],
    [
        [2, 2],
        [2, 2]
    ],
    [
        [3, 3],
        [3, 3]
    ],
]
np.var(a, axis = 0) # results in:
> array([[0.66666667, 0.66666667],
> [0.66666667, 0.66666667]])
If you want to efficiently compute the variance across all 128 subjects (which would be axis 0), I don't see a way to do it using the statistics package, since it doesn't take nested lists as input; you would have to write your own logic and loop over the subjects.
But using the numpy.var function, we can easily calculate the variance of each 'datapoint' (tuple of indices) across all 128 subjects.
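For the shapes in the question, a minimal sketch would look like this (the array name data and the random values are just placeholders):
import numpy as np

data = np.random.rand(128, 116, 116, 1)   # stand-in for your real (128, 116, 116, 1) array

# variance over the subject axis (axis 0), then drop the trailing singleton axis
var_map = np.var(data, axis=0)[:, :, 0]
print(var_map.shape)                       # (116, 116)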
Side note: You mentioned statistics.variance. However, that is only to be used when you are taking a sample from a population as is mentioned in the documentation you linked. If you were to go the manual route, you would use statistics.pvariance instead, since we are calculating it on the whole dataset.
The difference can be seen here:
statistics.pvariance([1,2,3])
> 0.6666666666666666 # (correct)
statistics.variance([1,2,3])
> 1 # (incorrect)
np.var([1,2,3])
> 0.6666666666666666 # (np.var also gives the correct output)
from numpy.linalg import inv, qr
X = np.random.randn(5, 5)
mat = X.T.dot(X)
inv(mat)
mat.dot(inv(mat))
The dot product of a matrix and its inverse should be the identity matrix.
But here the output is:
array([[ 1.00000000e+00, 6.70961522e-16, 3.98202719e-16,
-2.04084178e-15, 3.07963387e-16],
[-6.46120445e-15, 1.00000000e+00, 4.44698794e-16,
1.40254635e-15, 2.71601492e-16],
[ 3.00736839e-15, -5.65091222e-16, 1.00000000e+00,
1.63129995e-16, -6.43576692e-17],
[ 1.01120865e-14, -1.23622826e-15, -6.99882344e-16,
1.00000000e+00, -1.13627444e-16],
[-6.31447442e-15, 2.46897480e-15, 9.95010178e-16,
-2.81959392e-15, 1.00000000e+00]])
Please explain.
That must be due to rounding in the algorithm, but I've found that if you diagonalize the matrix and calculate the dot product with the inverse, you do end up with the identity matrix. This might be due to a different algorithm being used to calculate the inverse of a diagonal matrix.
import numpy as np
m = np.random.randn(5, 5)
print(np.linalg.det(m))
e = np.linalg.eig(m)[0]      # eigenvalues of m
mdiag = np.eye(5) * e        # diagonal matrix with those eigenvalues on the diagonal
print(mdiag.dot(np.linalg.inv(mdiag)))
This method seems to always work for 3x3 matrices but sometimes fails for bigger matrices, since an imaginary part on the order of 1e-17 is left over.
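A simple way to confirm that those off-diagonal entries are only rounding noise is to compare the product against the identity within a tolerance, e.g. with np.allclose (a small sketch, not part of the original answer):
import numpy as np
from numpy.linalg import inv

X = np.random.randn(5, 5)
mat = X.T.dot(X)

# equal to the identity up to floating-point rounding
print(np.allclose(mat.dot(inv(mat)), np.eye(5)))   # True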
I have a set of points in 2-dimensional space and need to calculate the distance from each point to each other point.
I have a relatively small number of points, maybe at most 100. But since I need to do it often and rapidly in order to determine the relationships between these moving points, and since I'm aware that iterating through the points could be as bad as O(n^2) complexity, I'm looking for ways to take advantage of numpy's matrix magic (or scipy).
As it stands in my code, the coordinates of each object are stored in its class. However, I could also update them in a numpy array when I update the class coordinate.
class Cell(object):
    """Represents one object in the field."""
    def __init__(self, id, x=0, y=0):
        self.m_id = id
        self.m_x = x
        self.m_y = y
It occurs to me to create a Euclidean distance matrix to prevent duplication, but perhaps you have a cleverer data structure.
I'm open to pointers to nifty algorithms as well.
Also, I note that there are similar questions dealing with Euclidean distance and numpy but didn't find any that directly address this question of efficiently populating a full distance matrix.
You can take advantage of the complex type:
# build a complex array of your cells
z = np.array([complex(c.m_x, c.m_y) for c in cells])
First solution
# mesh this array so that you will have all combinations
m, n = np.meshgrid(z, z)
# get the distance via the norm
out = abs(m-n)
Second solution
Meshing is the main idea, but NumPy is clever, so you don't have to generate m and n explicitly. Just compute the difference using a transposed version of z; the mesh is done automatically:
out = abs(z[..., np.newaxis] - z)
Third solution
And if z is directly set as a 2-dimensional array, you can use z.T instead of the weird z[..., np.newaxis]. So finally, your code will look like this:
z = np.array([[complex(c.m_x, c.m_y) for c in cells]]) # notice the [[ ... ]]
out = abs(z.T-z)
Example
>>> z = np.array([[0.+0.j, 2.+1.j, -1.+4.j]])
>>> abs(z.T-z)
array([[ 0. , 2.23606798, 4.12310563],
[ 2.23606798, 0. , 4.24264069],
[ 4.12310563, 4.24264069, 0. ]])
As a complement, you may want to remove duplicates afterwards, taking the upper triangle :
>>> np.triu(out)
array([[ 0. , 2.23606798, 4.12310563],
[ 0. , 0. , 4.24264069],
[ 0. , 0. , 0. ]])
Some benchmarks
>>> timeit.timeit('abs(z.T-z)', setup='import numpy as np;z = np.array([[0.+0.j, 2.+1.j, -1.+4.j]])')
4.645645342274779
>>> timeit.timeit('abs(z[..., np.newaxis] - z)', setup='import numpy as np;z = np.array([0.+0.j, 2.+1.j, -1.+4.j])')
5.049334864854522
>>> timeit.timeit('m, n = np.meshgrid(z, z); abs(m-n)', setup='import numpy as np;z = np.array([0.+0.j, 2.+1.j, -1.+4.j])')
22.489568296184686
If you don't need the full distance matrix, you will be better off using a k-d tree. Consider scipy.spatial.cKDTree or sklearn.neighbors.KDTree. This is because a k-d tree can find the k nearest neighbours in O(n log n) time, so you avoid the O(n**2) complexity of computing all n by n distances.
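A minimal sketch of the k-d tree approach with scipy.spatial.cKDTree (the random point array and the value k=5 are just placeholders):
import numpy as np
from scipy.spatial import cKDTree

pts = np.random.rand(100, 2)       # 100 points in 2-D

tree = cKDTree(pts)
# distances and indices of the 5 nearest neighbours of every point;
# the nearest neighbour of each point is the point itself, at distance 0
dist, idx = tree.query(pts, k=5)
print(dist.shape, idx.shape)       # (100, 5) (100, 5)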
Jake Vanderplas gives this example using broadcasting in the Python Data Science Handbook, which is very similar to what @shx2 proposed.
import numpy as np
rand = np.random.RandomState(42)
X = rand.rand(3, 2)
dist_sq = np.sum((X[:, np.newaxis, :] - X[np.newaxis, :, :]) ** 2, axis = -1)
dist_sq
array([[0. , 0.18543317, 0.81602495],
[0.18543317, 0. , 0.22819282],
[0.81602495, 0.22819282, 0. ]])
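Note that dist_sq holds squared distances; if you need the actual Euclidean distances, take np.sqrt(dist_sq).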
Here is how you can do it using numpy:
import numpy as np
x = np.array([0,1,2])
y = np.array([2,4,6])
# take advantage of broadcasting, to make a 2dim array of diffs
dx = x[..., np.newaxis] - x[np.newaxis, ...]
dy = y[..., np.newaxis] - y[np.newaxis, ...]
dx
=> array([[ 0, -1, -2],
[ 1, 0, -1],
[ 2, 1, 0]])
# stack in one array, to speed up calculations
d = np.array([dx,dy])
d.shape
=> (2, 3, 3)
Now all that is left is computing the L2 norm along the 0-axis (as discussed here):
(d**2).sum(axis=0)**0.5
=> array([[ 0. , 2.23606798, 4.47213595],
[ 2.23606798, 0. , 2.23606798],
[ 4.47213595, 2.23606798, 0. ]])
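Equivalently, np.linalg.norm(d, axis=0) computes the same L2 norm along the stacked axis in a single call.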
If you are looking for the most efficient way of computation, use SciPy's cdist() (or pdist() if you just need a vector of pairwise distances instead of the full distance matrix), as suggested in Tweakimp's comment. As he said, it is a lot faster than the methods based on vectorization and broadcasting proposed by RichPauloo and shx2, because cdist() and pdist() run their metric computations in compiled C loops under the hood, which are even faster than NumPy vectorization.
By the way, if you can use SciPy and still prefer the broadcasting method, you don't have to implement it yourself, as the distance_matrix() function is a pure-Python implementation that leverages broadcasting and vectorization (source code, docs).
It's worth mentioning that cdist()/pdist() is also more memory-efficient than broadcasting, as it computes the distances one by one and avoids creating arrays of n*n*d elements, where n is the number of points and d is the points' dimensionality.
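For reference, a minimal usage sketch of these functions (the random point array is just an illustration):
import numpy as np
from scipy.spatial import distance_matrix
from scipy.spatial.distance import cdist, pdist, squareform

pts = np.random.rand(100, 2)

full = cdist(pts, pts)                  # full (100, 100) distance matrix
condensed = pdist(pts)                  # condensed vector of the 100*99/2 pairwise distances
full_again = squareform(condensed)      # expand back to a square matrix if needed
full_bcast = distance_matrix(pts, pts)  # pure-Python broadcasting implementation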
Experiments
I've conducted some simple experiments to compare the performance of SciPy's cdist(), distance_matrix() and a broadcasting implementation in NumPy. I used perf_counter_ns() from Python's time module to measure time, and all results are averaged over 10 runs on 10000 points in 2D space using the np.float64 datatype (tested on Python 3.8.10, Windows 10, Ryzen 2700, 16 GB RAM):
cdist() - 0.6724s
distance_matrix() - 3.0128s
my NumPy implementation - 3.6931s
Code if someone wants to reproduce experiments:
from scipy.spatial import *
import numpy as np
from time import perf_counter_ns
def dist_mat_custom(a, b):
    return np.sqrt(np.sum(np.square(a[:, np.newaxis, :] - b[np.newaxis, :, :]), axis=-1))

results = []
size = 10000
it_num = 10
for i in range(it_num):
    a = np.random.normal(size=(size, 2))
    b = np.random.normal(size=(size, 2))
    start = perf_counter_ns()
    c = distance_matrix(a, b)
    #c = dist_mat_custom(a, b)
    #c = distance.cdist(a, b)
    results.append(perf_counter_ns() - start)
print(np.mean(results) / 1e9)
I tried to use the SciPy function linalg.eigsh to calculate a few eigenvalues and eigenvectors of a matrix. However, when I print the calculated eigenvectors, they have the same dimension as the number of eigenvalues I wanted to calculate. Shouldn't it give me the actual eigenvectors, whose dimension is the same as that of the original matrix?
My code for reference:
import numpy as np
import scipy as sp
import scipy.sparse.linalg

id = np.eye(13)
val, vec = sp.sparse.linalg.eigsh(id, k=2)
print(vec[1])
Which gives me:
[-0.26158945 0.63952164]
Intuitively, it should have a dimension of 13, and it should not contain non-integer values either. Is this just my misinterpretation of the function? If so, is there any other function in Python that can calculate just a few eigenvectors (I don't want the full spectrum) with the expected dimensionality?
vec is an array with shape (13, 2).
In [21]: vec
Out[21]:
array([[ 0.36312724, -0.04921923],
[-0.26158945, 0.63952164],
[ 0.41693924, 0.34811192],
[ 0.30068329, -0.11360339],
[-0.05388733, -0.3225355 ],
[ 0.47402124, -0.28180261],
[ 0.50581823, 0.29527393],
[ 0.06687073, 0.19762049],
[ 0.103382 , 0.29724875],
[-0.09819873, 0.00949533],
[ 0.05458907, -0.22466131],
[ 0.15499849, 0.0621803 ],
[ 0.01420219, 0.04509334]])
The eigenvectors are stored in the columns of vec. To see the first eigenvector, use vec[:, 0]. When you printed vec[1] (which is equivalent to vec[1, :]), you printed the second row of vec, which is just the second components of the two eigenvectors.
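Continuing from the snippet above, a quick check (just a sketch) that each column is a full 13-dimensional eigenvector satisfying A v = λ v:
v0 = vec[:, 0]                             # first eigenvector
print(v0.shape)                            # (13,)
print(np.allclose(id @ v0, val[0] * v0))   # True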
Suppose I have two vectors of length 25, and I want to compute their covariance matrix. I try doing this with numpy.cov, but always end up with a 2x2 matrix.
>>> import numpy as np
>>> x=np.random.normal(size=25)
>>> y=np.random.normal(size=25)
>>> np.cov(x,y)
array([[ 0.77568388, 0.15568432],
[ 0.15568432, 0.73839014]])
Using the rowvar flag doesn't help either - I get exactly the same result.
>>> np.cov(x,y,rowvar=0)
array([[ 0.77568388, 0.15568432],
[ 0.15568432, 0.73839014]])
How can I get the 25x25 covariance matrix?
You have two vectors, not 25. The computer I'm on doesn't have Python so I can't test this, but try:
z = list(zip(x, y))
np.cov(z)
Of course.... really what you want is probably more like:
n = 100        # number of points in each vector
num_vects = 25
vals = []
for _ in range(num_vects):
    vals.append(np.random.normal(size=n))
np.cov(vals)
This takes the covariance (I think/hope) of num_vects 1xn vectors
Try this:
import numpy as np
x=np.random.normal(size=25)
y=np.random.normal(size=25)
z = np.vstack((x, y))
c = np.cov(z.T)
Covariance matrix from sample vectors
To clarify the small confusion about what a covariance matrix defined from two N-dimensional vectors means, there are two possibilities.
The question you have to ask yourself is whether you consider:
each vector as N realizations/samples of one single variable (for example two 3-dimensional vectors [X1,X2,X3] and [Y1,Y2,Y3], where you have 3 realizations for the variables X and Y respectively)
each vector as 1 realization for N variables (for example two 3-dimensional vectors [X1,Y1,Z1] and [X2,Y2,Z2], where you have 1 realization for the variables X,Y and Z per vector)
Since a covariance matrix is intuitively defined as a variance based on two different variables:
in the first case, you have 2 variables, N example values for each, so you end up with a 2x2 matrix where the covariances are computed thanks to N samples per variable
in the second case, you have N variables, 2 samples for each, so you end up with a NxN matrix
About the actual question, using numpy
if you consider that you have 25 variables per vector (3 are used here instead of 25 to keep the example code short), i.e. one realization of several variables per vector, use rowvar=0
import numpy
# [X1,Y1,Z1]
X_realization1 = [1,2,3]
# [X2,Y2,Z2]
X_realization2 = [2,1,8]
numpy.cov([X_realization1, X_realization2], rowvar=0)  # rowvar false, each column is a variable
Code returns, considering 3 variables:
array([[ 0.5, -0.5, 2.5],
[-0.5, 0.5, -2.5],
[ 2.5, -2.5, 12.5]])
otherwise, if you consider that one vector is 25 samples for one variable, use rowvar=1 (numpy's default parameter)
# [X1,X2,X3]
X = [1,2,3]
# [Y1,Y2,Y3]
Y = [2,1,8]
numpy.cov([X,Y],rowvar=1) # rowvar true (default), each row is a variable
Code returns, considering 2 variables:
array([[ 1. , 3. ],
[ 3. , 14.33333333]])
Reading the documentation,
>>> np.cov.__doc__
or looking at NumPy Covariance: NumPy treats each row of the array as a separate variable, so you have two variables and hence you get a 2 x 2 covariance matrix.
I think the previous post has the right solution. I have the explanation :-)
I suppose what you're looking for is actually a covariance function, which is a time-lag function. I compute the autocovariance like this:
def autocovariance(Xi, N, k):
    Xs = np.average(Xi)
    aCov = 0.0
    for i in np.arange(0, N-k):
        aCov = (Xi[i+k] - Xs) * (Xi[i] - Xs) + aCov
    return (1. / N) * aCov

autocov[i] = autocovariance(My_wector, N, h)
You should change
np.cov(x, y, rowvar=0)
to
np.cov((x, y), rowvar=0)
What you got (2 by 2) is more useful than 25 by 25. The covariance of X and Y is an off-diagonal entry in the symmetric covariance matrix.
If you insist on 25 by 25, which I think is useless, then why don't you write out the definition?
x=np.random.normal(size=25).reshape(25,1) # to make it 2d array.
y=np.random.normal(size=25).reshape(25,1)
cov = np.matmul(x-np.mean(x), (y-np.mean(y)).T) / len(x)
As pointed out above, you only have two vectors, so you'll only get a 2x2 covariance matrix.
IIRC the 2 main diagonal terms will be sum((x - mean(x))**2) / (n-1), and similarly for y.
The 2 off-diagonal terms will be sum((x - mean(x)) * (y - mean(y))) / (n-1). n = 25 in this case.
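For instance, a small sketch checking these formulas against np.cov (which uses the same n-1 normalization by default):
import numpy as np

x = np.random.normal(size=25)
y = np.random.normal(size=25)
n = x.size

var_x  = np.sum((x - x.mean())**2) / (n - 1)
var_y  = np.sum((y - y.mean())**2) / (n - 1)
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)

print(np.allclose(np.cov(x, y), [[var_x, cov_xy], [cov_xy, var_y]]))   # True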
According to the documentation, you should expect the variable vectors in columns:
If we examine N-dimensional samples, X = [x1, x2, ..., xn]^T
though later it says each row is a variable
Each row of m represents a variable.
so you need to pass in your matrix transposed:
x=np.random.normal(size=25)
y=np.random.normal(size=25)
X = np.array([x,y])
np.cov(X.T)
and according to Wikipedia (https://en.wikipedia.org/wiki/Covariance_matrix), X is a column vector of variables:
X = [X1, X2, ..., Xn]^T
COV = E[X X^T] - μx μx^T, where μx = E[X]
you can implement it yourself:
# each column of X is a variable, each row an observation
X = X - X.mean(axis=0)
h, w = X.shape
COV = X.T @ X / (h - 1)
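As a quick sanity check (assuming X is any 2-D observations-by-variables array), this should match NumPy's built-in result:
print(np.allclose(COV, np.cov(X, rowvar=False)))   # True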
I don't think you understand the definition of a covariance matrix.
If you need a 25 x 25 covariance matrix, you need 25 vectors, each with n data points.