Boundary condition for np.linalg.solve?

I have the following problem:
I have a system of 9 coupled linear equations with 9 variables that I want to solve for. I wrote it in matrix form, so I can solve it like
A = 9x9 array (matrix)
b = 9x1 array (vector)
x = np.linalg.solve(A, b)
Now my problem is that I need a boundary condition: three of the elements of the solution should sum to one, x_00 + x_44 + x_88 = 1.
How do I implement that?
** (For people who know physics: basically I am solving for the steady-state solution of a density matrix. And there is a reason why I solve it semi-analytically :) )
** I already get a solution now, but it is different from the one I get in Wolfram Mathematica, where I can implement the boundary condition.
Thanks a lot for your help!
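One common way to implement such a normalization (a sketch, not from the original thread; the random matrix below is only a placeholder for the real system): for a steady-state problem A x = 0 the system is rank-deficient, so one of the redundant equations can be overwritten with the constraint row.
import numpy as np

# Placeholder 9x9 system; in the real problem A x = 0 is the steady-state
# equation and A is singular (rank 8), so one equation is redundant.
A = np.random.rand(9, 9)
b = np.zeros(9)

# Overwrite one (redundant) equation with the trace constraint
# x_00 + x_44 + x_88 = 1, i.e. indices 0, 4, 8 of the flattened vector.
A[-1, :] = 0.0
A[-1, [0, 4, 8]] = 1.0
b[-1] = 1.0

x = np.linalg.solve(A, b)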

Related

How to solve separable differential equation using Sympy?

I cannot figure out how to solve this separable differential equation using sympy. Help would be greatly appreciated.
y' = (y - 4)(y - 2),   y(0) = 5
Here was my attempt, thanks in advance!!!
import sympy as sp
x,y,t = sp.symbols('x,y,t')
y_ = sp.Function('y_')(x)
diff_eq = sp.Eq(sp.Derivative(y_,x), (y-4)*(y-2))
ics = {y_.subs(x,0):5}
sp.dsolve(diff_eq, y_, ics = ics)
The output is y(x) = x*y^2 - 6*x*y + 8*x + 5.
The primary error is the introduction of y_: the right-hand side still uses the plain symbol y, so y becomes a constant parameter of the ODE and you get the wrong solution.
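Concretely, the corrected equation from the attempt would use y_ on both sides (a minimal sketch):
import sympy as sp

x = sp.symbols('x')
y_ = sp.Function('y_')(x)
# The unknown function must also appear on the right-hand side:
diff_eq = sp.Eq(sp.Derivative(y_, x), (y_ - 4)*(y_ - 2))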
If you correct this, you get an error of "too many solutions for the integration constant". This is a bug caused by not simplifying the integration constant after it first occurs: multiplication and addition of constants should just be absorbed, and an additive constant in an exponent should become a multiplicative factor of the exponential. As it is, exp(2*C_1) == 3 has two solutions if C_1 is treated as an angle (a bit of tortured logic from computing roots in the complex plane).
Newer versions can actually solve this fully if you pass the third hint from the classification list ('separable', '1st_exact', '1st_rational_riccati', ...), which does something different from the partial-fraction decomposition used by the first two:
from sympy import *

x = Symbol('x')
y = Function('y')(x)
dsolve(Eq(y.diff(x), (y - 2)*(y - 4)), y,
       ics={y.subs(x, 0): 5},
       hint='1st_rational_riccati')
returning
y(x) = 2*(6 - exp(2*x)) / (3 - exp(2*x))

not able to resolve LinAlgError: Last 2 dimensions of the array must be square [duplicate]

I need to solve a set of simultaneous equations of the form Ax = B for x. I've used the numpy.linalg.solve function, inputting A and B, but I get the error 'LinAlgError: Last 2 dimensions of the array must be square'. How do I fix this?
Here's my code:
A = matrix([[v1x, v2x], [v1y, v2y], [v1z, v2z]])
print A
B = [(p2x-p1x-nmag[0]), (p2y-p1y-nmag[1]), (p2z-p1z-nmag[2])]
print B
x = numpy.linalg.solve(A, B)
The values of the matrix/vector are calculated earlier in the code and this works fine, but the values are:
A =
[[-0.56666301, -0.52472909],
 [ 0.44034147,  0.46768087],
 [ 0.69641397,  0.71129036]]
B =
[-0.38038602567630364, -24.092279373295057, 0.0]
x should have the form (x1,x2,0)
In case you still haven't found an answer, or in case someone in the future has this question.
To solve Ax=b:
numpy.linalg.solve uses LAPACK gesv. As mentioned in the documentation of LAPACK, gesv requires A to be square:
LA_GESV computes the solution to a real or complex linear system of equations AX = B, where A is a square matrix and X and B are rectangular matrices or vectors. Gaussian elimination with row interchanges is used to factor A as A = P*L*U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. The factored form of A is then used to solve the above system.
If the matrix A is not square, you either have more variables than equations or the other way around. In these situations you can have either no solution or an infinite number of solutions. What determines the solution space is the rank of the matrix compared to the number of columns. Therefore, you first have to check the rank of the matrix.
That being said, you can use another method to solve your system of linear equations. I suggest having a look at factorization methods like LU, QR, or even SVD. In LAPACK you can use getrs; in Python you can do different things (see the sketch after this list):
first do a factorization like QR and then feed the resulting matrices to a method like scipy.linalg.solve_triangular
solve the least-squares problem using numpy.linalg.lstsq
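For the 3-by-2 system in the question, both routes might look like this (a sketch using the values printed above):
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[-0.56666301, -0.52472909],
              [ 0.44034147,  0.46768087],
              [ 0.69641397,  0.71129036]])
B = np.array([-0.38038602567630364, -24.092279373295057, 0.0])

# Check the rank first: 2 equals the number of columns, i.e. full column rank.
print(np.linalg.matrix_rank(A))

# Route 1: reduced QR factorization, then a triangular solve.
Q, R = np.linalg.qr(A)               # Q is 3x2, R is 2x2 upper triangular
x_qr = solve_triangular(R, Q.T @ B)

# Route 2: least squares directly.
x_ls, residuals, rank, sv = np.linalg.lstsq(A, B, rcond=None)
Both routes return the least-squares solution of the over-determined system.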
Also have a look here where a simple example is formulated and solved.
A square matrix is a matrix with the same number of rows and columns. The matrix you are passing is 3 by 2. Add a column of zeroes to fix this problem.

Is there a way to generate random solutions to non-square linear equations, preferably in python?

First of all, I know that these threads exist! So bear with me, my question is not fully answered by them.
As an example, assume we are in a 4-dimensional vector space, i.e. R^4. We are looking at the two linear equations:
3*x1 - 2*x2 + 7*x3 - 2*x4 = 6
1*x1 + 3*x2 - 2*x3 + 5*x4 = -2
The actual question is: Is there a way to generate a number N of points that solve both of these equations, making use of the linear solvers from NumPy etc.?
The main problem with all Python libraries I have tried so far is: they need n equations for an n-dimensional space.
Solving the problem is very easy for one equation, since you can simply use n-1 randomly generated values and adapt the last one such that the vector solves the equation.
My expected result would be a list of N "randomly" generated points that solve k linear equations in an n-dimensional space, where k<n.
A system of linear equations with more variables than equations is known as an underdetermined system.
An underdetermined linear system has either no solution or infinitely many solutions.
...
There are algorithms to decide whether an underdetermined system has solutions, and if it has any, to express all solutions as linear functions of k of the variables (same k as above). The simplest one is Gaussian elimination.
As you say, many functions available in libraries (e.g. np.linalg.solve) require a square matrix (i.e. n equations for n unknowns); what you are looking for is an implementation of Gaussian elimination for non-square linear systems.
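If you want all solutions in parametric form rather than a single one, sympy can do the elimination for you (a sketch, not part of the original answer):
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
# Expressions are interpreted as "== 0", so move the right-hand sides over.
sol = sp.linsolve([3*x1 - 2*x2 + 7*x3 - 2*x4 - 6,
                   1*x1 + 3*x2 - 2*x3 + 5*x4 + 2],
                  [x1, x2, x3, x4])
print(sol)  # one tuple, with x1 and x2 expressed in the free variables x3, x4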
This isn't 'random', but np.linalg.lstsq (least squares) will solve non-square matrices:
Return the least-squares solution to a linear matrix equation.
Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||^2. The equation may be under-, well-, or over- determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation.
For more info, see:
solving Ax =b for a non-square matrix A using python
Since you have an underdetermined system of equations (too few constraints for your solutions, or fewer equations than variables) you can just pick some arbitrary values for x3 and x4 and solve the system in x1, x2 (this has 2 variables/2 equations).
You will just need to check that the resulting system is not inconsistent (i.e. it admits no solution) and that there are no duplicate solutions.
You could, for instance, fix x3 = 0 and, choosing random values of x4, generate solutions for your equations in x1, x2.
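The consistency check mentioned above can be done by comparing ranks (Rouché-Capelli); this sketch is not part of the original answer:
import numpy as np

A = np.array([[3, -2, 7, -2],
              [1,  3, -2, 5]], dtype=float)
b = np.array([6, -2], dtype=float)

# A x = b is solvable iff rank(A) == rank of the augmented matrix [A | b].
consistent = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))
print(consistent)  # True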
Here's an example generating 10 "random" solutions
import numpy as np

n = 10
x3 = 0
# Coefficient matrix of x1, x2 once x3 and x4 are fixed
a = np.array([[3, -2],
              [1,  3]])
X = []
for x4 in np.random.choice(1000, n):
    b = np.array([[6 - 7*x3 + 2*x4],
                  [-2 + 2*x3 - 5*x4]])
    x = np.linalg.solve(a, b)
    X.append(np.append(x, [x3, x4]))

# check solution nr. 3
[x1, x2, x3, x4] = X[3]
3*x1 - 2*x2 + 7*x3 - 2*x4
# output: 6.0
1*x1 + 3*x2 - 2*x3 + 5*x4
# output: -2.0
Thanks for the answers, which both helped me and pointed me in the right direction.
I now have an easy step-by-step solution to my problem for arbitrary k<n.
1. Find one solution to all equations given. This can be done by using
solution_vec = numpy.linalg.lstsq(A, b, rcond=None)[0]
This gives a particular solution, as seen in ukemi's answer (note that lstsq returns a 4-tuple; the solution is its first element). In my example above, the matrix A holds the coefficients of the equations on the left-hand side, and b represents the vector on the right-hand side.
2. Determine the null space of your matrix A.
These are all vectors v such that the scalar product v*A_i = 0 for every(!) row A_i of A. The following function, found in this thread, can be used to get representatives of the null space of A:
import numpy as np
import scipy.linalg

def nullSpaceOfMatrix(A, eps=1e-15):
    u, s, vh = scipy.linalg.svd(A)
    # Pad the mask: for a wide matrix, rows of vh beyond len(s) also
    # belong to the null space (their singular values are 0).
    null_mask = np.concatenate([s <= eps, np.ones(vh.shape[0] - len(s), dtype=bool)])
    null_space = np.compress(null_mask, vh, axis=0)
    return np.transpose(null_space)
3. Generate as many (N) "random" linear combinations (meaning with random coefficients) of solution_vec and the null-space vectors of the matrix as you want! This works because the scalar product is additive and null-space vectors have a scalar product of 0 with the rows of the equations. Those linear combinations must always contain solution_vec, as in:
linear_combination = solution_vec + a*nullspace_vec_1 + b*nullspace_vec_2 + ...
where a and b can be randomly chosen.
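Putting the three steps together for the two equations above could look like this (a sketch reusing the nullSpaceOfMatrix function from step 2):
import numpy as np

A = np.array([[3, -2, 7, -2],
              [1,  3, -2, 5]], dtype=float)
b = np.array([6, -2], dtype=float)

solution_vec = np.linalg.lstsq(A, b, rcond=None)[0]
nullspace = nullSpaceOfMatrix(A)      # shape 4x2: two basis vectors (k=2, n=4)

N = 10
points = [solution_vec + nullspace @ np.random.randn(nullspace.shape[1])
          for _ in range(N)]
# Every point solves both equations up to floating-point error:
assert all(np.allclose(A @ p, b) for p in points)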

numpy linalg.solve, not a square matrix

So currently I'm working with code looking like:
Q,R = np.linalg.qr(matrix)
Qb = np.dot(Q.T, new_mu[b][n])
x_qr = np.linalg.solve(R, Qb)
mu.append(x_qr)
The code works fine as long as my matrix is square, but as soon as it's not, the system is not solvable and I get errors. If I've understood it right, I can't use linalg.solve on non-full-rank matrices, but is there a way for me to get around this obstacle without using a least-squares solution?
No, this is not possible, as specified in the np.linalg.solve docs.
The issue is that given Ax = b, if A is not square, then your system is either over-determined or under-determined, assuming that all rows in A are linearly independent. This means that there is either no x at all, or no unique x, that solves the equation.
Intuitively, the idea is that if you have n (length of x) variables that you are trying to solve for, then you need exactly n equations to find a unique solution for x, assuming that these equations are not "redundant". In this case, "redundant" means linearly dependent: one equation is equal to the linear combination of one or more of the other equations.
In this scenario, one possibly useful thing to do is to find the x that minimizes norm(b - Ax)^2 (i.e. linear least squares solution):
x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)

Multiple linear regression in python without fitting the origin?

I found this chunk of code on http://rosettacode.org/wiki/Multiple_regression#Python, which does a multiple linear regression in Python. Printing b in the following code gives you the coefficients of x1, ..., xN. However, this code fits the line through the origin (i.e. the resulting model does not include a constant).
All I'd like to do is the exact same thing except I do not want to fit the line through the origin, I need the constant in my resulting model.
Any idea if this is a small modification? I've searched and found numerous documents on multiple regression in Python, but they are lengthy and overly complicated for what I need. This code works perfectly, except that I need a model that includes an intercept rather than being forced through the origin.
import numpy as np
from numpy.random import random
n=100
k=10
y = np.mat(random((1,n)))
X = np.mat(random((k,n)))
b = y * X.T * np.linalg.inv(X*X.T)
print(b)
Any help would be appreciated. Thanks.
You only need to add a row to X that is all 1s.
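With the snippet above, that might look like the following (a sketch using plain arrays and @ instead of np.mat):
import numpy as np
from numpy.random import random

n = 100
k = 10
y = random((1, n))
X = random((k, n))

# Append a row of ones; its coefficient becomes the intercept.
X1 = np.vstack([X, np.ones((1, n))])

# Normal equations, as in the original snippet:
b = y @ X1.T @ np.linalg.inv(X1 @ X1.T)
print(b)  # b[0, :k] are the slopes, b[0, k] is the intercept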
Maybe a more stable approach would be to use a least squares algorithm anyway. This can also be done in numpy in a few lines. Read the documentation about numpy.linalg.lstsq.
Here you can find an example implementation:
http://glowingpython.blogspot.de/2012/03/linear-regression-with-numpy.html
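A sketch of the least-squares route (lstsq expects samples as rows, so the design matrix is X.T with a column of ones appended):
import numpy as np
from numpy.random import random

n, k = 100, 10
y = random(n)
X = random((k, n))

A = np.column_stack([X.T, np.ones(n)])  # n x (k+1) design matrix
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
slopes, intercept = coef[:k], coef[k]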
What you have written out, b = y * X.T * np.linalg.inv(X * X.T), is the solution to the normal equations, which gives the least-squares fit with a multi-linear model. swang's response is correct (and EMS's elaboration): you need to add a row of 1's to X. If you want some idea of why it works theoretically, keep in mind that you are finding b_i such that
y_j = sum_i b_i x_{ij}.
By adding a row of 1's, you are setting x_{(k+1)j} = 1 for all j, which means that you are finding b_i such that:
y_j = (sum_i b_i x_{ij}) + b_{k+1}
because the (k+1)-st term x_{(k+1)j} is always equal to one. Thus, b_{k+1} is your intercept term.
