I generate a matrix that I want to get the covariance of:
test=np.array([4,2,.6,4.2,2.1,.59,3.9,2,.58,4.3,2.1,.62,4.1,2.2,.63]).reshape(5,3)
test
array([[ 4.  ,  2.  ,  0.6 ],
       [ 4.2 ,  2.1 ,  0.59],
       [ 3.9 ,  2.  ,  0.58],
       [ 4.3 ,  2.1 ,  0.62],
       [ 4.1 ,  2.2 ,  0.63]])
I calculate the covariance with the numpy function:
np.cov(test)
array([[ 2.92      ,  3.098     ,  2.846     ,  3.164     ,  2.966     ],
       [ 3.098     ,  3.28703333,  3.0199    ,  3.3566    ,  3.1479    ],
       [ 2.846     ,  3.0199    ,  2.7748    ,  3.0832    ,  2.8933    ],
       [ 3.164     ,  3.3566    ,  3.0832    ,  3.4288    ,  3.2122    ],
       [ 2.966     ,  3.1479    ,  2.8933    ,  3.2122    ,  3.0193    ]])
This, however, is different from following the covariance formula:
mean=np.mean(test,0)
np.dot(test-mean,(test-mean).T)/(5-1)
array([[ 0.004104, -0.002886,  0.006624, -0.005416, -0.002426],
       [-0.002886,  0.002649, -0.005316,  0.005044,  0.000509],
       [ 0.006624, -0.005316,  0.011744, -0.010496, -0.002556],
       [-0.005416,  0.005044, -0.010496,  0.010164,  0.000704],
       [-0.002426,  0.000509, -0.002556,  0.000704,  0.003769]])
This does not match the numpy calculation.
In fact, I took a peek at the source code, and the equation is (x - m) * (x - m).T.conj() / (N - 1), which I believe I am implementing.
The difference comes from the fact that np.cov calculates the covariance between row vectors by default, which is why the result is 5x5 instead of 3x3. np.mean(test, 0), however, calculates the average of the column vectors, so test - mean broadcasts the subtraction along columns as well, which differs from what np.cov is doing. The fix takes two steps:
Firstly, make sure the mean is calculated for each row, which can be done by simply transposing the test matrix:
mean = np.mean(test.T, 0)
Then, when calculating x - x_bar, reshape the mean vector so that the subtraction is broadcast along the rows as well. Also, since each variable is now a row vector with 3 observations, the normalization factor is 3 - 1, not 5 - 1. After these fixes, it gives the same result as np.cov:
np.dot(test-mean[:, None],(test-mean[:, None]).T)/(3-1)
# array([[ 2.92      ,  3.098     ,  2.846     ,  3.164     ,  2.966     ],
#        [ 3.098     ,  3.28703333,  3.0199    ,  3.3566    ,  3.1479    ],
#        [ 2.846     ,  3.0199    ,  2.7748    ,  3.0832    ,  2.8933    ],
#        [ 3.164     ,  3.3566    ,  3.0832    ,  3.4288    ,  3.2122    ],
#        [ 2.966     ,  3.1479    ,  2.8933    ,  3.2122    ,  3.0193    ]])
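If the 3x3 covariance between the columns is what you actually want, note that np.cov takes a rowvar argument; a minimal sketch comparing it against the manual formula:

import numpy as np

test = np.array([4, 2, .6, 4.2, 2.1, .59, 3.9, 2, .58,
                 4.3, 2.1, .62, 4.1, 2.2, .63]).reshape(5, 3)

# rowvar=False treats columns as variables and rows as observations
cov_np = np.cov(test, rowvar=False)                          # 3 x 3

mean = np.mean(test, 0)
cov_manual = np.dot((test - mean).T, test - mean) / (5 - 1)  # 3 x 3

print(np.allclose(cov_np, cov_manual))                       # True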
Input:
array([[  1. ,   5. ,   1. ],
       [ 10. ,   7. ,   1.5],
       [  6.9,   5. ,   1. ],
       [ 19. ,   9. , 100. ],
       [ 11. ,  11. ,  11. ]])
Expected Output:
array([[ 19. ,   9. , 100. ],
       [ 11. ,  11. ,  11. ],
       [ 10. ,   7. ,   1.5],
       [  6.9,   5. ,   1. ],
       [  1. ,   5. ,   1. ]])
I tried doing the below:

for i in M:
    ls = i.mean()
    x = np.append(i, ls)
    print(x)  # found the mean

After this I am unable to arrange the rows based on the mean value of each row. All I can do is arrange each row in descending order, but that is not what I wanted.
You can do this:
In [405]: row_idxs = np.argsort(np.mean(a * -1, axis=1))
In [406]: a[row_idxs, :]
Out[406]:
array([[ 19. ,   9. , 100. ],
       [ 11. ,  11. ,  11. ],
       [ 10. ,   7. ,   1.5],
       [  6.9,   5. ,   1. ],
       [  1. ,   5. ,   1. ]])
argsort returns the indices that would sort the row means in ascending order; multiplying by -1 flips this to descending order.
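An equivalent sketch that skips the -1 trick by reversing the ascending order instead (assuming a holds the input array from above):

import numpy as np

# Sort ascending by row mean, then reverse the index order for descending
row_idxs = np.argsort(a.mean(axis=1))[::-1]
result = a[row_idxs]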
I've been stuck on an issue for hours: I don't understand why the matrix D below doesn't equal the identity matrix:

import numpy as np
from numpy.linalg import inv

A = np.random.randint(50, size=(100, 2))
V = A.dot(A.T)
D = V.dot(inv(V))
D
The result I get instead is:
array([[  3.26611328,   7.87890625,  14.1953125 , ...,   2.        ,
         -5.        , -24.        ],
       [ -5.91061401, -26.05834961,   5.30126953, ..., -10.        ,
          8.        , -16.        ],
       [ -2.64431763,   3.55639648,   3.10107422, ...,  -0.5       ,
         -5.        ,  -4.        ],
       ...,
       [ -2.62512207,  -7.78222656,  10.26367188, ...,  -6.        ,
         18.        ,   0.        ],
       [ -3.0625    ,  14.        ,  -4.        , ...,  -0.0625    ,
          0.        ,   8.        ],
       [  2.        ,  -7.        ,  16.        , ...,  -7.5       ,
         -8.        ,  -4.        ]])
Thank you for your help
I've found my issue:
I was trying to take the inv() of a matrix whose det(matrix) = 0, which is why the calculation wasn't correct. Since A is 100 x 2, V = A.dot(A.T) is a 100 x 100 matrix of rank at most 2, so it is singular and has no inverse. Using the 2 x 2 Gram matrix instead works:

D = A.T.dot(A)
inv(D).dot(D)

Then I get the identity matrix.
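As a quick sketch to confirm the diagnosis with numpy.linalg:

import numpy as np

A = np.random.randint(50, size=(100, 2))
V = A.dot(A.T)

# V is 100 x 100 but built from a rank-2 factor, so its rank is at most 2
print(np.linalg.matrix_rank(V))  # 2 (generically)

# The 2 x 2 Gram matrix, by contrast, is generically invertible
D = A.T.dot(A)
print(np.allclose(np.linalg.inv(D).dot(D), np.eye(2)))  # True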
Thank you
Habib
I have a function that creates a 2-dim array, a Vandermonde matrix, and it is called as:
vandermonde(generator, rank)
Where generator is an n-sized array, for example
generator = np.array([-1/2, 1/2, 3/2, 5/2, 7/2, 9/2])
and rank=4
Then I need to create 4 Vandermonde matrices (because rank=4), each skewed by h in my space (h is arbitrary here; let's say h=1).
Therefore I came up with the following hard-coded version:
h = 1
V = np.array([
    vandermonde(generator - 0*h, rank),
    vandermonde(generator - 1*h, rank),
    vandermonde(generator - 2*h, rank),
    vandermonde(generator - 3*h, rank)
])
Then, instead of making multiple manual calls to vandermonde, I used a for-loop:
V = []
for i in range(rank):
    V.append(vandermonde(generator - h*i, rank))
V = np.array(V)
This approach works fine, but seems too "patchy". I tried a np.append approach as below:
M = np.array([])
for i in range(rank):
    M = np.append(M, [vandermonde(generator - h*i, rank)])

But it didn't work as I expected; it seems np.append flattens its inputs and grows the array instead of creating a new element.
My questions are:
How can I avoid standard Python lists and use a direct NumPy approach? np.append does not behave as I expect; it just grows a flat array instead of adding a new array element.
Is there a more direct NumPy approach to this?
My vandermonde function is:
def vandermonde(generator, rank=None):
    """Return a Vandermonde matrix.

    If rank is not passed, return a square Vandermonde matrix.
    """
    if rank is None:
        rank = len(generator)
    return np.tile(generator, (rank, 1)) ** np.arange(rank).reshape((rank, 1))
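For reference, a single call with the generator above produces rows of increasing powers:

print(vandermonde(generator, 3))
# [[ 1.    1.    1.    1.    1.    1.  ]
#  [-0.5   0.5   1.5   2.5   3.5   4.5 ]
#  [ 0.25  0.25  2.25  6.25 12.25 20.25]]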
The expected answer is a 3-dimensional array of shape (rank, rank, len(generator)) = (4, 4, 6), where each element along the first axis is one of the skewed Vandermonde matrices. For the constants above (generator, rank, h) we have:
V = array([[[  1.  ,   1.  ,   1.  ,   1.  ,   1.  ,   1.  ],
            [ -0.5 ,   0.5 ,   1.5 ,   2.5 ,   3.5 ,   4.5 ],
            [  0.25,   0.25,   2.25,   6.25,  12.25,  20.25],
            [ -0.12,   0.12,   3.38,  15.62,  42.88,  91.12]],

           [[  1.  ,   1.  ,   1.  ,   1.  ,   1.  ,   1.  ],
            [ -1.5 ,  -0.5 ,   0.5 ,   1.5 ,   2.5 ,   3.5 ],
            [  2.25,   0.25,   0.25,   2.25,   6.25,  12.25],
            [ -3.38,  -0.12,   0.12,   3.38,  15.62,  42.88]],

           [[  1.  ,   1.  ,   1.  ,   1.  ,   1.  ,   1.  ],
            [ -2.5 ,  -1.5 ,  -0.5 ,   0.5 ,   1.5 ,   2.5 ],
            [  6.25,   2.25,   0.25,   0.25,   2.25,   6.25],
            [-15.62,  -3.38,  -0.12,   0.12,   3.38,  15.62]],

           [[  1.  ,   1.  ,   1.  ,   1.  ,   1.  ,   1.  ],
            [ -3.5 ,  -2.5 ,  -1.5 ,  -0.5 ,   0.5 ,   1.5 ],
            [ 12.25,   6.25,   2.25,   0.25,   0.25,   2.25],
            [-42.88, -15.62,  -3.38,  -0.12,   0.12,   3.38]]])
Some related ideas can be found in this discussion: efficient-way-to-compute-the-vandermonde-matrix
Use broadcasting to get the final 3D array in a vectorized manner -
r = np.arange(rank)
V_out = (generator - h*r[:,None,None]) ** r[:,None]
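As a quick sanity check of the broadcasting solution (assuming generator, rank=4, h=1, and the vandermonde function from the question):

V_loop = np.array([vandermonde(generator - h*i, rank) for i in range(rank)])
print(np.allclose(V_out, V_loop))  # True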
We can also use cumprod to build up the successive powers for another solution -
gr = np.repeat(generator - h*r[:,None,None], rank, axis=1)
gr[:,0] = 1
out = gr.cumprod(1)
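For reference, NumPy also ships np.vander; a sketch that builds the same stacked result with it (using the question's generator, rank, and h):

import numpy as np

generator = np.array([-1/2, 1/2, 3/2, 5/2, 7/2, 9/2])
rank, h = 4, 1

# np.vander(x, N, increasing=True) has shape (len(x), N);
# transpose each matrix to match the question's (rank, len(generator)) layout
V = np.array([np.vander(generator - h*i, rank, increasing=True).T
              for i in range(rank)])
print(V.shape)  # (4, 4, 6)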
Let's say I have a 10 x 20 matrix of values (so 200 data points)
values = np.random.rand(10,20)
with a known regular spacing between coordinates so that the x and y coordinates are defined by
coord_x = np.arange(0,5,0.5)  --> gives [0.0, 0.5, 1.0, 1.5, ..., 4.5]
coord_y = np.arange(0,5,0.25) --> gives [0.0, 0.25, 0.50, 0.75, ..., 4.75]
I'd like to get an array representing each coordinate point, so that
the shape of the array is (200, 2), 200 being the total number of points and the extra dimension simply representing x and y, as in
coord[0][0]=0.0, coord[0][1]=0.0
coord[1][0]=0.0, coord[1][1]=0.25
coord[2][0]=0.0, coord[2][1]=0.50
...
coord[19][0]=0.0, coord[19][1]=4.75
coord[20][0]=0.5, coord[20][1]=0.0
coord[21][0]=0.5, coord[21][1]=0.25
coord[22][0]=0.5, coord[22][1]=0.50
...
coord[199][0]=4.5, coord[199][1]=4.75
That would be a fairly easy thing to do with a double for loop, but I wonder if there is a more elegant solution using built-in NumPy (or other) functions.
I think meshgrid may be what you're looking for.
Here's an example, with a smaller number of data points:
>>> from numpy import fliplr, dstack, meshgrid, linspace
>>> x, y, nx, ny = 4.5, 4.5, 3, 10
>>> Xs = linspace(0, x, nx)
>>> Ys = linspace(0, y, ny)
>>> fliplr(dstack(meshgrid(Xs, Ys)).reshape(nx * ny, 2))
array([[ 0.  ,  0.  ],
       [ 0.  ,  2.25],
       [ 0.  ,  4.5 ],
       [ 0.5 ,  0.  ],
       [ 0.5 ,  2.25],
       [ 0.5 ,  4.5 ],
       [ 1.  ,  0.  ],
       [ 1.  ,  2.25],
       [ 1.  ,  4.5 ],
       [ 1.5 ,  0.  ],
       [ 1.5 ,  2.25],
       [ 1.5 ,  4.5 ],
       [ 2.  ,  0.  ],
       [ 2.  ,  2.25],
       [ 2.  ,  4.5 ],
       [ 2.5 ,  0.  ],
       [ 2.5 ,  2.25],
       [ 2.5 ,  4.5 ],
       [ 3.  ,  0.  ],
       [ 3.  ,  2.25],
       [ 3.  ,  4.5 ],
       [ 3.5 ,  0.  ],
       [ 3.5 ,  2.25],
       [ 3.5 ,  4.5 ],
       [ 4.  ,  0.  ],
       [ 4.  ,  2.25],
       [ 4.  ,  4.5 ],
       [ 4.5 ,  0.  ],
       [ 4.5 ,  2.25],
       [ 4.5 ,  4.5 ]])
I think you meant coord_y = np.arange(0,5,0.25) in your question. You can do
from numpy import meshgrid,column_stack
x,y=meshgrid(coord_x,coord_y)
coord = column_stack((x.T.flatten(),y.T.flatten()))
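A minimal variant that avoids the transposes by asking meshgrid for matrix ('ij') indexing, assuming the same coord_x and coord_y as in the question:

import numpy as np

coord_x = np.arange(0, 5, 0.5)   # 10 values
coord_y = np.arange(0, 5, 0.25)  # 20 values

# indexing='ij' makes x the slow axis, matching the desired ordering
x, y = np.meshgrid(coord_x, coord_y, indexing='ij')
coord = np.column_stack((x.ravel(), y.ravel()))
print(coord.shape)  # (200, 2)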
I have a 2D array of NumPy data read from a .csv file. Each row represents a data point, with one column containing a 'key' that corresponds uniquely to a 'key' in another NumPy array, the 'lookup table' as it were.
What is the best (most Numpythonic) way to match up the lines in the first table with the values in the second?
Some example data:
import numpy as np
lookup = np.array([[  1.     ,   3.14   ,   4.14   ],
                   [  2.     ,   2.71818,   3.7    ],
                   [  3.     ,  42.     ,  43.     ]])

a = np.array([[ 1, 11],
              [ 1, 12],
              [ 2, 21],
              [ 3, 31]])
Build a dictionary from key to row number in the lookup table:
mapping = dict(zip(lookup[:,0], range(len(lookup))))
Then you can use the dictionary to match up lines. For instance, if you just want to join the tables:
>>> np.hstack((a, np.array([lookup[mapping[key], 1:]
...                         for key in a[:,0]])))
array([[  1.     ,  11.     ,   3.14   ,   4.14   ],
       [  1.     ,  12.     ,   3.14   ,   4.14   ],
       [  2.     ,  21.     ,   2.71818,   3.7    ],
       [  3.     ,  31.     ,  42.     ,  43.     ]])
In the special case where the index can be calculated from the keys, the dictionary can be avoided. This is an advantage when the keys of the lookup table can be chosen freely.
For Vebjorn Ljosa's example:
lookup:
>>> lookup[a[:,0]-1, :]
array([[  1.     ,   3.14   ,   4.14   ],
       [  1.     ,   3.14   ,   4.14   ],
       [  2.     ,   2.71818,   3.7    ],
       [  3.     ,  42.     ,  43.     ]])
merge:
>>> np.hstack([a, lookup[a[:,0]-1, :]])
array([[  1.     ,  11.     ,   1.     ,   3.14   ,   4.14   ],
       [  1.     ,  12.     ,   1.     ,   3.14   ,   4.14   ],
       [  2.     ,  21.     ,   2.     ,   2.71818,   3.7    ],
       [  3.     ,  31.     ,   3.     ,  42.     ,  43.     ]])
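When the keys are sorted but cannot simply be chosen as 1..N, np.searchsorted can replace the dictionary; a sketch using the same lookup and a as above:

import numpy as np

# The keys in lookup[:,0] must be sorted for searchsorted to find exact rows
rows = np.searchsorted(lookup[:, 0], a[:, 0])
merged = np.hstack([a, lookup[rows, 1:]])
# Same result as the dictionary-based join above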