Solving 5 Linear Equations in Python

I've tried using matrices, and it has failed. I've looked at external modules and external programs, but none of it has worked. If someone could share some tips or code that would be helpful, thanks.

import numpy
import scipy.linalg

# Coefficient matrix: rows are [n**4, n**3, n**2, n, 1] for n = 1..5
m = numpy.matrix([
    [1, 1, 1, 1, 1],
    [16, 8, 4, 2, 1],
    [81, 27, 9, 3, 1],
    [256, 64, 16, 4, 1],
    [625, 125, 25, 5, 1]
])
# Right-hand side as a column vector
res = numpy.matrix([[1], [2], [3], [4], [8]])
print(scipy.linalg.solve(m, res))
which prints
[[ 0.125]
 [-1.25 ]
 [ 4.375]
 [-5.25 ]
 [ 3.   ]]
These are the solution coefficients for a, b, c, d, e.

I'm not sure what you mean when you say the matrix methods don't work. That's the standard way of solving these types of problems.
From a linear algebra standpoint, solving a system of 5 linear equations is trivial, and it can be done by any number of methods: Gaussian elimination, inverting the matrix, Cramer's rule, etc.
If you're lazy, you can always resort to libraries. SymPy and NumPy can both solve linear systems with ease.
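For example, here is a quick SymPy sketch for the system above using linsolve, which returns exact rational solutions:
from sympy import Matrix, linsolve, symbols

a, b, c, d, e = symbols("a b c d e")
m = Matrix([[1, 1, 1, 1, 1],
            [16, 8, 4, 2, 1],
            [81, 27, 9, 3, 1],
            [256, 64, 16, 4, 1],
            [625, 125, 25, 5, 1]])
res = Matrix([1, 2, 3, 4, 8])
print(linsolve((m, res), a, b, c, d, e))
# {(1/8, -5/4, 35/8, -21/4, 3)}  -- matches the scipy result above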

Perhaps you're using matrices the wrong way.
Matrices are just lists within lists:
[[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1],[1,1,1,1,1]]
The code above makes a nested list that you access as mylist[y][x]: the row index comes first, then the column, so the axes are swapped relative to the usual (x, y) order.
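For example:
# Two rows, three columns as nested lists
mylist = [[1, 2, 3],
          [4, 5, 6]]
print(mylist[1][2])  # 6 -- row index first, then column index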

Related

Assert Scipy Univariate Spline Strictly Increasing

I'm working with univariate splines from scipy. A simple example of one is as follows:
import scipy.interpolate

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]
f = scipy.interpolate.UnivariateSpline(x, y)
Is there any way I could make the resulting spline strictly increasing or strictly decreasing? I've noticed that, even if I feed it strictly increasing or decreasing data points, the result won't necessarily have this property.
Look for monotone interpolants: PCHIP and/or Akima. These are at least locally monotone.
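A minimal sketch with scipy.interpolate.PchipInterpolator, which is shape-preserving, so monotone input data gives a monotone interpolant:
import numpy as np
from scipy.interpolate import PchipInterpolator

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]

# PCHIP never overshoots the data, so strictly increasing
# input yields an increasing interpolant
f = PchipInterpolator(x, y)

xs = np.linspace(1, 5, 100)
print(np.all(np.diff(f(xs)) > 0))  # True here: strictly increasing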

Create interaction term in scikit-learn

There are certainly many ways of creating interaction terms in Python, whether by using numpy or pandas directly, or some library like patsy. However, I was looking for a way of creating interaction terms scikit-learn style, i.e. in a form that plays nicely with its fit-transform-predict paradigm. How might I do this?
Let's consider the case of making an interaction term between two variables.
You might make use of the FunctionTransformer class, like so:
import numpy as np
from sklearn.preprocessing import FunctionTransformer
# 5 rows, 2 columns
X = np.arange(10).reshape(5, 2)
# Appends interaction of columns at 0 and 1 indices to original matrix
interaction_append_function = lambda x: np.append(x, (x[:, 0] * x[:, 1])[:, None], 1)
interaction_transformer = FunctionTransformer(func=interaction_append_function)
Let's try it out:
>>> interaction_transformer.fit_transform(X)
array([[ 0,  1,  0],
       [ 2,  3,  6],
       [ 4,  5, 20],
       [ 6,  7, 42],
       [ 8,  9, 72]])
You now have a transformer that will play well with other workflows like sklearn.pipeline or sklearn.compose.
Certainly there are more extensible ways of handling this, but hopefully you get the idea.
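For instance, a minimal sketch of dropping the transformer into a Pipeline, reusing X and interaction_transformer from above (the target values y are made up purely for illustration):
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline

# Hypothetical target values, for illustration only
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

model = Pipeline([
    ("interaction", interaction_transformer),  # appends the product column
    ("regression", LinearRegression()),
])
model.fit(X, y)
print(model.predict(X))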

An optimized matrix multiplication library in Python (similar to Matlab) but is NOT numpy

According to the NumPy documentation they may deprecate their np.matrix class. And while arrays do have their multitude of use cases, they cannot do everything. Specifically, they will "break" when doing pretty basic linear algebra operations (you can read more about it here).
Building my own matrix multiplication module in Python is not too difficult, but it would not be optimized at all. I am looking for another library that has full linear algebra support, optimized on top of BLAS (Basic Linear Algebra Subprograms). Or at the least, is there any documentation on how to integrate a BLAS into Python yourself?
Edit: Some are suggesting the @ operator, which is like whack-a-mole: you push the problem down one hole and it pops up immediately in the neighbouring one. In essence, what happens is a debugger's nightmare:
W*x == W*x.T
W@x == W@x.T
You would hope that an error is raised here letting you know that you made a mistake in defining your matrices. But since 1D arrays don't carry 2D shape information, I am not sure the issue can ever be solved via np.array. (These problems don't exist with np.matrix, but for some reason the developers seem insistent on removing it.)
If you insist on the distinction between column and row vectors, you can do that.
>>> x = np.array([1, 2, 3]).reshape(-1, 1)
>>> W = np.arange(15).reshape(5, 3)
>>> x
array([[1],
       [2],
       [3]])
>>> W
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 9, 10, 11],
       [12, 13, 14]])
>>> W @ x
array([[ 8],
       [26],
       [44],
       [62],
       [80]])
>>> W @ x.T
ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0,
with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 1 is different from 3)
You could create helper functions to create column and row vectors:
def rowvec(x):
    return np.array(x).reshape(1, -1)

def colvec(x):
    return np.array(x).reshape(-1, 1)
>>> rowvec([1, 2, 3])
array([[1, 2, 3]])
>>> colvec([1, 2, 3])
array([[1],
       [2],
       [3]])
I would recommend using this kind of construct only when you're porting existing Matlab code. You'll have trouble reading numpy code written by others, and many library functions expect 1D arrays as input, not (1, n)-shaped arrays.
Actually, numpy offers BLAS-powered matrix multiplication through the matmul operator @. This invokes the __matmul__ magic method for a given class.
All you have to do in the above example is W @ x.
Other linear algebra routines can be found in the np.linalg module.
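To illustrate the dispatch, here is a toy sketch (not how numpy implements it): any class that defines __matmul__ gets the @ operator.
class Vec:
    """Toy vector wrapper that defines @ as a dot product."""
    def __init__(self, data):
        self.data = list(data)

    def __matmul__(self, other):
        # a @ b is evaluated as a.__matmul__(b)
        return sum(p * q for p, q in zip(self.data, other.data))

print(Vec([1, 2, 3]) @ Vec([4, 5, 6]))  # 32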
Edit: I guess your problem is more about the language's style than about any technical issue. I found this answer very illuminating:
Transposing a NumPy array
Also, I find it very improbable that you will find something that is NOT numpy since most of the major machine learning/data science frameworks rely on it.

Clarification on computation of eigenvectors using NumPy

I am reviewing some linear algebra and working through some implementations in Python. I am working through a problem related to finding eigenvectors of a matrix A.
A = [[ 1,  2, -2],
     [-2,  5, -2],
     [-6,  6, -3]]
When I solve this problem by hand, I get eigenvalues 3 and -3, with 3 having a multiplicity of 2. My eigenvectors are [[1/3], [1/3], [1]], [[1], [1], [0]], [[-1], [0], [1]].
Trying my implementation in NumPy:
import numpy as np
A = [[ 1,  2, -2],
     [-2,  5, -2],
     [-6,  6, -3]]
np.linalg.eig(A)
which gives the output
(array([ 3., -3.,  3.]),
 array([[ 0.53452248, -0.30151134, -0.05332571],
        [-0.26726124, -0.30151134, -0.73225996],
        [-0.80178373, -0.90453403, -0.67893425]]))
The eigenvalues are what I would expect, but the eigenvectors are confusing to me. From what I've read, I understand they are the columns and they are normalized, i.e., norm(e1) = 1. Also, numerically they seem to be correct in the sense they satisfy the Ax = lambda*x equation.
Furthermore, when I do the implementation in SymPy, I get the expected result.
from sympy.matrices import Matrix, eye, zeros, ones, diag, GramSchmidt
A = Matrix([[ 1,  2, -2],
            [-2,  5, -2],
            [-6,  6, -3]])
A.eigenvects()
Output:
[(-3, 1, [Matrix([
[1/3],
[1/3],
[ 1]])]), (3, 2, [Matrix([
[1],
[1],
[0]]), Matrix([
[-1],
[ 0],
[ 1]])])]
Can anyone shed some light on the differences and what's going on with NumPy? Is it solving numerically and these aren't truly eigenvectors but they seem to be in the sense they satisfy the conditions within a certain level of numerical precision? Thank you.
Here, the eigenvalue 3 has geometric multiplicity 2 (the rank of the matrix (A - 3 I) is 1) and there are infinitely many ways to choose the two basis vectors (eigenvectors) for this eigenspace.
In the case of a normal matrix A, numpy.linalg.eig will return an array whose columns are eigenvectors forming an orthonormal basis of the whole space, and in computing practice those eigenvectors are unique up to permutation and the orientation (sign) of each column. In the non-normal case (as here) there is no unique choice, only a unique partition of the whole space into sub(eigen)spaces associated with each eigenvalue.
You can consider the output eigenvectors for the eigenvalue 3 (namely, the 0th and 2nd columns in the returned eigenvector array) an arbitrary set of bases satisfying the eigenvalue equation.
The implementation should be a wrapper of the underlying ?GEEV function of the LAPACK API. Apart from matching the order of the eigenvalues (which are returned as conjugate pairs), the only constraint on the output eigenvectors seems to be:
Each eigenvector is scaled so that the Euclidean norm is 1 and the largest component is real.
So there's still lots of arbitrariness and I wouldn't count on a particular output.
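If you want to convince yourself, a quick check that the NumPy output does satisfy A v = lambda v, column by column:
import numpy as np

A = np.array([[ 1,  2, -2],
              [-2,  5, -2],
              [-6,  6, -3]])
vals, vecs = np.linalg.eig(A)

# Eigenvectors are the columns of vecs; pair each with its eigenvalue
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))  # True for all three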

Need faster python code for calculating sample entropy

This is the problem I have run into while writing Python code for sample entropy:
map(max, abs(a[i]-a)) is very slow.
Is there any other function that performs better than map()?
Here a is an ndarray that looks like np.array([[1,2,3,4,5],[2,3,4,5,6],[3,4,5,3,2]]).
Use the vectorized max (note that in Python 3, map returns an iterator, so wrap it in list() to see the values):
>>> import numpy as np
>>> a = np.array([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 3, 2]])
>>> list(map(max, abs(a[2] - a)))
[3, 4, 0]
>>> np.abs(a[2] - a).max(axis=1)
array([3, 4, 0])
