Related
I am getting the right values when I compute the Vandermonde coefficients for this matrix, but the output is in reversed order: it should be [6, -39, 55, 27] instead of [27, 55, -39, 6]. My Vandermonde matrix is flipped, and consequently the final solution c is flipped as well.
import numpy as np
from numpy import linalg as LA

x = np.array([[4], [2], [0], [-1]])
f = np.array([[7], [29], [27], [-73]])

def main():
    A_matrix = VandermondeMatrix(x)
    print(A_matrix)
    c = LA.solve(A_matrix, f)  # coefficients of the Vandermonde polynomial
    print(c)

def VandermondeMatrix(x):
    n = len(x)
    A = np.zeros((n, n))
    exponent = np.array(range(0, n))
    for j in range(n):
        A[j, :] = x[j]**exponent
    return A

if __name__ == "__main__":
    main()
Just build the exponent range the other way around from the start; then you don't have to flip the result afterwards, which also avoids the extra runtime:
def VandermondeMatrix(x):
    n = len(x)
    A = np.zeros((n, n))
    exponent = np.array(range(n - 1, -1, -1))
    for j in range(n):
        A[j, :] = x[j]**exponent
    return A
Out:
#A_matrix:
[[64. 16. 4. 1.]
[ 8. 4. 2. 1.]
[ 0. 0. 0. 1.]
[-1. 1. -1. 1.]]
#c:
[[ 6.]
[-39.]
[ 55.]
[ 27.]]
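As a sanity check (my addition, not part of the answer): np.polyval expects coefficients in this same highest-power-first order, so you can evaluate the fitted polynomial back at the sample points:

import numpy as np

c = np.array([6., -39., 55., 27.])  # highest power first
x = np.array([4, 2, 0, -1])
print(np.polyval(c, x))             # [  7.  29.  27. -73.] -- matches f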
np.flip(c)? See the documentation for np.flip.
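For what it's worth, a quick sketch of that suggestion (note that without an axis argument np.flip reverses every axis; for a (4, 1) column vector, flipping axis 0 reverses the rows):

import numpy as np

c = np.array([[27.], [55.], [-39.], [6.]])
print(np.flip(c, axis=0))
# [[  6.]
#  [-39.]
#  [ 55.]
#  [ 27.]]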
You could do
print(c[::-1])
which will reverse the order of c.
From How can I flip the order of a 1d numpy array?
NumPy's built-in np.vander has a parameter that controls exactly this ordering: increasing=True (the default, increasing=False, puts the highest power first).
Example from the documentation:
x = np.array([1, 2, 3, 5])
np.vander(x)
array([[ 1, 1, 1, 1],
[ 8, 4, 2, 1],
[ 27, 9, 3, 1],
[125, 25, 5, 1]])
np.vander(x, increasing=True)
array([[ 1, 1, 1, 1],
[ 1, 2, 4, 8],
[ 1, 3, 9, 27],
[ 1, 5, 25, 125]])
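Applied to the question's data (a sketch; np.vander expects 1-D input, so the column vectors are flattened first), the default decreasing order yields the coefficients in the order the asker wanted:

import numpy as np

x = np.array([4, 2, 0, -1])    # flattened sample points
f = np.array([7, 29, 27, -73])

A = np.vander(x)               # default: highest power first
c = np.linalg.solve(A, f)
print(c)                       # [  6. -39.  55.  27.]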
In [3]: def VandermondeMatrix(x):
   ...:     n = len(x)
   ...:     A = np.zeros((n, n))
   ...:     exponent = np.array(range(0, n))
   ...:     for j in range(n):
   ...:         A[j, :] = x[j]**exponent
   ...:     return A
   ...:
In [4]: x = np.array([[4],[2],[0],[-1]])
In [5]: VandermondeMatrix(x)
Out[5]:
array([[ 1., 4., 16., 64.],
[ 1., 2., 4., 8.],
[ 1., 0., 0., 0.],
[ 1., -1., 1., -1.]])
In [6]: f = np.array([[7],[29],[27],[-73]])
In [7]: np.linalg.solve(_5,f)
Out[7]:
array([[ 27.],
[ 55.],
[-39.],
[ 6.]])
The result is a (4,1) array; reverse rows with:
In [9]: _7[::-1]
Out[9]:
array([[ 6.],
[-39.],
[ 55.],
[ 27.]])
Negative-stride [::-1] indexing is also how you reverse plain Python lists and strings:
In [10]: ['a','b','c'][::-1]
Out[10]: ['c', 'b', 'a']
I ran the following in Python and expected the columns of E[1] to be the eigenvectors of A, but they are not. Only Sympy.Matrix.eigenvects() seems to get it right. Why this discrepancy?
A
Out[194]:
matrix([[-3, 3, 2],
[ 1, -1, -2],
[-1, -3, 0]])
E = np.linalg.eig(A)
E
Out[196]:
(array([ 2., -4., -2.]),
matrix([[ -2.01889132e-16, 9.48683298e-01, 8.94427191e-01],
[ 5.54700196e-01, -3.16227766e-01, -3.71551690e-16],
[ -8.32050294e-01, 2.73252305e-17, 4.47213595e-01]]))
A*E[1] / E[1]
Out[205]:
matrix([[ 6.59900617, -4. , -2. ],
[ 2. , -4. , -3.88449298],
[ 2. , 8.125992 , -2. ]])
The eigenvectors are correct, within an expected margin of error.
What you discovered is that testing eigenvectors with element-wise division is a bad idea.
A better way is to compute the norm of the difference between matrix*vector and eigenvalue*vector.
NumPy performs its computations in double-precision floating point. This means any of its answers may contain numerical errors with a relative size of at least 2**(-52), the machine epsilon, which is about 2.2e-16. So, when you see a number like 2e-16 coming out of a calculation with numbers of size 1-3, the conclusion is: "that number should probably be zero, and the value we have for it is likely just noise". And if you divide by that number, noise is all you get.
SymPy, on the other hand, performs symbolic manipulations, so its answer (when it can get one) is exactly what the theory predicts.
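A minimal sketch of that residual check, using the matrix from the question:

import numpy as np

A = np.array([[-3,  3,  2],
              [ 1, -1, -2],
              [-1, -3,  0]])
w, v = np.linalg.eig(A)

# For each eigenpair the residual ||A v_i - w_i v_i|| should be ~1e-15.
for i in range(len(w)):
    residual = np.linalg.norm(A @ v[:, i] - w[i] * v[:, i])
    print(f"eigenvalue {w[i]: .0f}: residual {residual:.1e}")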
From its docs:
The number w is an eigenvalue of a if there exists a vector v such that dot(a,v) = w * v. Thus, the arrays a, w, and v satisfy the equations dot(a[:,:], v[:,i]) = w[i] * v[:,i] for i in {0, ..., M-1}.
With your matrix:
In [1]: A = np.array([[-3,  3,  2],
   ...:               [ 1, -1, -2],
   ...:               [-1, -3,  0]])
   ...:
In [2]: w,v=np.linalg.eig(A)
In [3]: w
Out[3]: array([ 2., -4., -2.])
In [4]: v
Out[4]:
array([[ -9.39932874e-17, 9.48683298e-01, 8.94427191e-01],
[ 5.54700196e-01, -3.16227766e-01, 1.93473310e-16],
[ -8.32050294e-01, -4.08811066e-17, 4.47213595e-01]])
In [5]: np.dot(A,v)
Out[5]:
array([[ -2.22044605e-16, -3.79473319e+00, -1.78885438e+00],
[ 1.10940039e+00, 1.26491106e+00, -7.77156117e-16],
[ -1.66410059e+00, 4.44089210e-16, -8.94427191e-01]])
In [6]: w*v
Out[6]:
array([[ -1.87986575e-16, -3.79473319e+00, -1.78885438e+00],
[ 1.10940039e+00, 1.26491106e+00, -3.86946619e-16],
[ -1.66410059e+00, 1.63524427e-16, -8.94427191e-01]])
In [7]: np.dot(A,v)-w*v
Out[7]:
array([[ -3.40580301e-17, 8.88178420e-16, 2.22044605e-16],
[ 8.88178420e-16, -6.66133815e-16, -3.90209498e-16],
[ -2.22044605e-16, 2.80564783e-16, -3.33066907e-16]])
In [8]: np.allclose(np.dot(A,v), w*v)
Out[8]: True
So, yes, the documented test is satisfied, within floating point limits.
einsum can be used to highlight the i axis in the dot calculation.
In [10]: np.einsum('...k,ki->...i',A,v)
Out[10]:
array([[ -2.22044605e-16, -3.79473319e+00, -1.78885438e+00],
[ 1.10940039e+00, 1.26491106e+00, -7.77156117e-16],
[ -1.66410059e+00, 3.88578059e-16, -8.94427191e-01]])
When I divide by v (element-wise), the result matches the eigenvalues 2, -4, -2, except where both v and the dot product are virtually zero (1e-16 or smaller):
In [11]: np.einsum('...k,ki->...i',A,v)/v
Out[11]:
array([[ 2.36234534, -4. , -2. ],
[ 2. , -4. , -4.01686475],
[ 2. , -9.50507681, -2. ]])
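A hedged way to make that comparison explicit is to mask out the near-zero entries before dividing; a small sketch:

import numpy as np

A = np.array([[-3, 3, 2], [1, -1, -2], [-1, -3, 0]])
w, v = np.linalg.eig(A)
Av = A @ v

# Divide only where v is clearly nonzero; elsewhere the ratio is noise.
mask = np.abs(v) > 1e-8
ratio = np.where(mask, Av / np.where(mask, v, 1.0), np.nan)
print(ratio)  # each column is constant at its eigenvalue, NaN where v ~ 0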
I'm using matrices in NumPy with Python. I have a matrix A and I calculate its inverse. Now I multiply A with its inverse, but I'm not getting the identity matrix. Can anyone point out what's wrong here?
from numpy import matrix

A = matrix([
    [4, 3],
    [3, 2]
])
print(A.I)         # prints [[-2  3], [ 3 -4]] - correct
print(A.dot(A.T))  # prints [[25 18], [18 13]] - incorrect
print(A * (A.T))   # prints [[25 18], [18 13]] - incorrect
You are using dot on the matrix and its transpose (A.T), not its inverse (A.I). With the inverse you get the identity:
In [16]: np.dot(A.I, A)
Out[16]:
matrix([[ 1., 0.],
[ 0., 1.]])
With the transpose you get the result you showed:
In [17]: np.dot(A.T, A)
Out[17]:
matrix([[25, 18],
[18, 13]])
Here is another method. Note that the .I attribute works only on matrix objects; for a plain ndarray you can use np.linalg.inv(x) to compute the inverse:
In [11]: import numpy as np
In [12]: A = np.array([[4, 3], [3, 2]])
In [13]: B = np.linalg.inv(A)
In [14]: A.dot(B)
Out[14]:
array([[ 1., 0.],
[ 0., 1.]])
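One caveat worth adding (my note, not part of the answer): for larger or ill-conditioned matrices the product equals the identity only up to rounding error, so np.allclose is the usual way to verify it:

import numpy as np

A = np.array([[4, 3], [3, 2]])
B = np.linalg.inv(A)

# Exact equality can fail from rounding; compare within a tolerance instead.
print(np.allclose(A.dot(B), np.eye(2)))  # True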
I am trying to make a numpy array of the form ([1.], [2.], ...) from a list [1, 2, 3] so I can use it as an input for sklearn's linear_model.
This command
np.array(test_list)
produces this kind of array:
array([1, 2, 3, 4])
whereas I want
array([[1], [2], [3], [4]])
You could just reshape:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
print(arr.reshape(arr.size, 1).astype(float))
Which would give you:
[[ 1.]
[ 2.]
[ 3.]
[ 4.]
[ 5.]]
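A common shorthand for the same reshape is to let NumPy infer the row count with -1:

import numpy as np

arr = np.array([1, 2, 3, 4, 5])
col = arr.reshape(-1, 1).astype(float)  # -1 infers the first dimension
print(col.shape)  # (5, 1)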
+1 to mgilson's answer. Here's another way:
arr = np.array([np.array([float(i)]) for i in test_list])
You can insert a new axis and transpose:
>>> arr = np.array([1, 2, 3, 4], dtype=float)
>>> arr[None, ...].T
array([[1.],
[2.],
[3.],
[4.]])
As with most things numpy, there's probably a better way, but this works alright :-).
Or, as pointed out in the comments, you can just insert an axis at the right place:
>>> arr[..., None]
array([[ 1.],
[ 2.],
[ 3.],
[ 4.]])
Note that you could write None as np.newaxis if you find that to be more semantically correct.
You could also use NumPy's atleast_2d and transpose:
In [270]: np.atleast_2d([1, 2, 3, 4, 5]).T.astype(float)
Out[270]:
array([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.]])
The problem I encounter is that using ndarray.view(np.dtype) to get a structured array from a plain ndarray seems to mangle the float-to-int conversion. An example says it better:
In [12]: B
Out[12]:
array([[ 1.00000000e+00, 1.00000000e+00, 0.00000000e+00,
0.00000000e+00, 4.43600000e+01, 0.00000000e+00],
[ 1.00000000e+00, 2.00000000e+00, 7.10000000e+00,
1.10000000e+00, 4.43600000e+01, 1.32110000e+02],
[ 1.00000000e+00, 3.00000000e+00, 9.70000000e+00,
2.10000000e+00, 4.43600000e+01, 2.04660000e+02],
...,
[ 1.28900000e+03, 1.28700000e+03, 0.00000000e+00,
9.99999000e+05, 4.75600000e+01, 3.55374000e+03],
[ 1.28900000e+03, 1.28800000e+03, 1.29000000e+01,
5.40000000e+00, 4.19200000e+01, 2.08400000e+02],
[ 1.28900000e+03, 1.28900000e+03, 0.00000000e+00,
0.00000000e+00, 4.19200000e+01, 0.00000000e+00]])
In [14]: B.view(A.dtype)
Out[14]:
array([(4607182418800017408, 4607182418800017408, 0.0, 0.0, 44.36, 0.0),
(4607182418800017408, 4611686018427387904, 7.1, 1.1, 44.36, 132.11),
(4607182418800017408, 4613937818241073152, 9.7, 2.1, 44.36, 204.66),
...,
(4653383897399164928, 4653375101306142720, 0.0, 999999.0, 47.56, 3553.74),
(4653383897399164928, 4653379499352653824, 12.9, 5.4, 41.92, 208.4),
(4653383897399164928, 4653383897399164928, 0.0, 0.0, 41.92, 0.0)],
dtype=[('i', '<i8'), ('j', '<i8'), ('tnvtc', '<f8'), ('tvtc', '<f8'), ('tf', '<f8'), ('tvps', '<f8')])
The 'i' and 'j' columns should be true integers. Here are two further checks I have done; the problem seems to come from ndarray.view(np.int):
In [21]: B[:,:2]
Out[21]:
array([[ 1.00000000e+00, 1.00000000e+00],
[ 1.00000000e+00, 2.00000000e+00],
[ 1.00000000e+00, 3.00000000e+00],
...,
[ 1.28900000e+03, 1.28700000e+03],
[ 1.28900000e+03, 1.28800000e+03],
[ 1.28900000e+03, 1.28900000e+03]])
In [22]: B[:,:2].view(np.int)
Out[22]:
array([[4607182418800017408, 4607182418800017408],
[4607182418800017408, 4611686018427387904],
[4607182418800017408, 4613937818241073152],
...,
[4653383897399164928, 4653375101306142720],
[4653383897399164928, 4653379499352653824],
[4653383897399164928, 4653383897399164928]])
In [23]: B[:,:2].astype(np.int)
Out[23]:
array([[ 1, 1],
[ 1, 2],
[ 1, 3],
...,
[1289, 1287],
[1289, 1288],
[1289, 1289]])
What am I doing wrong? Is the type change impossible because of how numpy allocates memory? Is there another way to do this? (fromarrays complained about a shape mismatch.)
This is the difference between doing somearray.view(new_dtype) and calling astype.
What you're seeing is exactly the expected behavior, and it's very deliberate, but it's surprising the first time you come across it.
A view with a different dtype interprets the underlying memory buffer of the array as the given dtype. No copies are made. It's very powerful, but you have to understand what you're doing.
A key thing to remember is that calling view never alters the underlying memory buffer, just the way that it's viewed by numpy (e.g. dtype, shape, strides). Therefore, view deliberately avoids altering the data to the new type and instead just interprets the "old bits" as the new dtype.
For example:
In [1]: import numpy as np
In [2]: x = np.arange(10)
In [3]: x
Out[3]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [4]: x.dtype
Out[4]: dtype('int64')
In [5]: x.view(np.int32)
Out[5]: array([0, 0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 0, 8, 0, 9, 0],
dtype=int32)
In [6]: x.view(np.float64)
Out[6]:
array([ 0.00000000e+000, 4.94065646e-324, 9.88131292e-324,
1.48219694e-323, 1.97626258e-323, 2.47032823e-323,
2.96439388e-323, 3.45845952e-323, 3.95252517e-323,
4.44659081e-323])
If you want to make a copy of the array with a new dtype, use astype instead:
In [7]: x
Out[7]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [8]: x.astype(np.int32)
Out[8]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)
In [9]: x.astype(float)
Out[9]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
However, using astype with structured arrays will probably surprise you. Structured arrays treat each element of the input as a C-like struct, so if you call astype, you'll run into several surprises.
Basically, you want the columns to have different dtypes. In that case, don't put them in the same array: numpy arrays are expected to be homogeneous. Structured arrays are handy in certain cases, but they're probably not what you want if you're looking for something to handle separate columns of data. Just use each column as its own array, as sketched below.
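A sketch of that column-per-array approach with the question's B (only the first rows shown; the dtype choices are one reasonable reading of the data):

import numpy as np

B = np.array([[1.0, 1.0, 0.0, 0.0, 44.36, 0.0],
              [1.0, 2.0, 7.1, 1.1, 44.36, 132.11]])

# Pull each column out with the dtype it actually represents.
i = B[:, 0].astype(np.int64)
j = B[:, 1].astype(np.int64)
tnvtc = B[:, 2]  # already float64; this is just a view, no copy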
Better yet, if you're working with tabular data, you'll probably find it easier to use pandas than raw numpy arrays. pandas is oriented towards tabular data (where columns may have different types), while numpy is oriented towards homogeneous arrays.
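For instance, a minimal pandas sketch (the column names are taken from the structured dtype in the question):

import numpy as np
import pandas as pd

B = np.array([[1.0, 1.0, 0.0, 0.0, 44.36, 0.0],
              [1.0, 2.0, 7.1, 1.1, 44.36, 132.11]])

df = pd.DataFrame(B, columns=['i', 'j', 'tnvtc', 'tvtc', 'tf', 'tvps'])
df[['i', 'j']] = df[['i', 'j']].astype(int)  # columns can differ in dtype
print(df.dtypes)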
Actually, fromarrays works, but it doesn't explain this strange behavior.
Here is the solution I've found:
np.core.records.fromarrays(B.T, dtype=A.dtype)
The only solution which worked for me in similar situation:
np.array([tuple(row) for row in B], dtype=A.dtype)