Solving a linear system whose entries are 6x6 matrices using SymPy - Python

I am trying to solve a linear system whose coefficient matrix has 6x6 matrices as its entries (elements).
I tried multiplying the inverse of gen by the solution matrix, but I don't trust the correctness of the answer I am getting.
from sympy import Eq, solve_linear_system, Matrix, count_ops, Mul, horner
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f')

ad = Matrix([[43.4, 26.5, 115, -40.5, 52.4, 0.921],
             [3.78, 62.9, 127, -67.6, 110, 4.80],
             [41.25, 75.0, 213, -88.9, 131, 5.88],
             [-10.6, -68.4, -120, 64.6, -132, -8.49],
             [6.5, 74.3, 121, -72.8, 179, 29.7],
             [1.2, 30.7, 49.7, -28.7, 91, 29.9]])
fb = Matrix([[1, 0, 0, 0, 0, 0],
             [0, 1, 0, 0, 0, 0],
             [0, 0, 1, 0, 0, 0],
             [0, 0, 0, 1, 0, 0],
             [0, 0, 0, 0, 1, 0],
             [0, 0, 0, 0, 0, 1]])
ab = Matrix([[-0.0057],
             [0.0006],
             [-0.0037],
             [0.0009],
             [0.0025],
             [0.0042]])

az = sp.symbols('az')
bz = sp.symbols('bz')
fz = sp.symbols('fz')

gen = Matrix([[az, fz, 0, 0, 0, 0, bz],
              [fz, az, fz, 0, 0, 0, bz],
              [0, fz, az, fz, 0, 0, bz],
              [0, 0, fz, az, fz, 0, bz],
              [0, 0, 0, fz, az, fz, bz],
              [0, 0, 0, 0, fz, az, bz]])

answer = solve_linear_system(gen, a, b, c, d, e, f)
first_solution = answer[a]
df = count_ops(first_solution)
print(df, first_solution)

disolved = zip(first_solution.simplify().as_numer_denom(), (1, -1))
dft = Mul(*[horner(b)**e for b, e in disolved])
dff = count_ops(dft)
print(dff, dft)

_1st_solution = dft.subs({az: ad, fz: fb, bz: ab}, simultaneous=True).doit()
print(_1st_solution)
When I ran my code it raised sympy.matrices.common.ShapeError.

You have to be careful when using horner with expressions containing commutative symbols that are actually noncommutative (in your case because they represent matrices). Your dft expression is
(az**2*bz - bz*fz**2)/(az*(az*(az + fz) - 2*fz**2) - fz**3)
but should maybe be
(az**2 - fz**2)*(az*(az*(az + fz) - 2*fz**2) - fz**3)**(-1)*bz
You would have received a correct expression if you had created the symbols as noncommutative (as shown below).
But you can't use horner with noncommutative symbols, so I just rearranged the expression by hand; you will have to check that the ordering is right. As an alternative to doing the factoring by hand you might also try using factor_nc to help you -- but it won't handle horner-like expression factoring:
>>> az, bz, fz = symbols('az bz fz', commutative=False)
>>> (az**2*bz - fz**2*bz)
az**2*bz - fz**2*bz
>>> factor_nc(_)
(az**2 - fz**2)*bz
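If the goal is simply a trustworthy numerical answer, one independent cross-check (just a sketch, not part of the approach above; it reuses ad, fb and ab from the question, while Z, A, rhs and a_vec are illustrative names) is to assemble the full 36x36 block-tridiagonal system explicitly and solve it directly:
from sympy import BlockMatrix, Matrix, zeros

# assemble the block system: ad on the diagonal, fb on the off-diagonals,
# and the right-hand side ab stacked six times
Z = zeros(6, 6)
blocks = [[ad, fb, Z,  Z,  Z,  Z],
          [fb, ad, fb, Z,  Z,  Z],
          [Z,  fb, ad, fb, Z,  Z],
          [Z,  Z,  fb, ad, fb, Z],
          [Z,  Z,  Z,  fb, ad, fb],
          [Z,  Z,  Z,  Z,  fb, ad]]
A = Matrix(BlockMatrix(blocks).as_explicit())        # 36x36 coefficient matrix
rhs = Matrix(BlockMatrix([[ab]] * 6).as_explicit())  # 36x1 right-hand side

sol = A.LUsolve(rhs)    # 36x1 solution vector
a_vec = sol[0:6, :]     # the block corresponding to the unknown a
print(a_vec)
The first six entries of sol can then be compared against _1st_solution from the noncommutative substitution.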

Related

Convert sympy to numpy

I have two NumPy arrays (two variables) which contain complex numbers. Since they have been defined in NumPy, the complex unit is denoted by "j".
for i in range(0, 8):
    cg1 = np.array(eigVal1[i])
    det_eval = eval(det)
    AA = det_eval[0:2, 0:2]
    bb = det_eval[0:2, 2] * (-1)
    roots = np.append(roots, solve(AA, bb))
    a = np.append(a, roots[i])
    b = np.append(b, roots[i+1])
Output:
a = array([-9.03839731e-04+0.00091541j, 3.02435614e-07-0.00043776j,
-9.03839731e-04-0.00091541j, 3.02435614e-07+0.00043776j,
9.03812649e-04+0.00092323j, 4.17553402e-07+0.00043764j,
9.03812649e-04-0.00092323j, 4.17553402e-07-0.00043764j])
b = array([ 3.02435614e-07-0.00043776j, -9.03839731e-04-0.00091541j,
3.02435614e-07+0.00043776j, 9.03812649e-04+0.00092323j,
4.17553402e-07+0.00043764j, 9.03812649e-04-0.00092323j,
4.17553402e-07-0.00043764j, -5.53769989e-05-0.00243369j])
I also have a long equation in which some variables have been defined symbolically (y).
u_n = A0*y**(1322.5696672125 + 1317.38942049453*I) + A1*y**(1322.5696672125 - 1317.38942049453*I) + A2*y**(-1322.5696672125 + 1317.38942049453*I) + A3*y**(-1322.5696672125 - 1317.38942049453*I) + ..
My problem is that when I want to substitute the two variables (a and b) into the equation, all complex numbers change to "I", and that makes the equation more complex, because I am not able to simplify it further.
Is there any way to convert "I" to "j" in SymPy?
for i in range(0, 8):
    u_n = u_n.subs(A[i], (a[i] * C[i]))
The result is:
u_n = C0*y**(1322.5696672125 + 1317.38942049453*I)*(-0.000903839731101097 + 0.000915407724097998*I) + C1*y**(1322.5696672125 - 1317.38942049453*I)*(3.02435613673241e-7 - 0.000437760318205723*I) +..
As you see I cannot simplify it further, even if I use simplify(u_n). In plain Python/NumPy, for example, (2+3j)*(5+6j) is reduced to (-8+27j), but when symbolic notation enters my equation it won't be simplified further: y**(2+3j)*(5+6j) --> y**(2 + 3*I)*(5 + 6*I).
I would like to have y**(-8+27j), where y is symbolic.
I would appreciate it if someone help me with that.
From your last paragraph:
The python expression:
In [38]: (2+3j)*(5+6j)
Out[38]: (-8+27j)
sympy: (y is a sympy symbol):
In [39]: y**(2+3j)*(5+6j)
Out[39]:
With corrective () to group the multiply before the power:
In [40]: y**((2+3j)*(5+6j))
Out[40]:
Even in plain python the operator order matters:
In [44]: 1**(2+3j)*(5+6j)
Out[44]: (5+6j)
In [45]: 1**((2+3j)*(5+6j))
Out[45]: (1+0j)
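Related to the question title ("Convert sympy to numpy"): once the exponent is grouped so it collapses to a single complex number, you can hand the expression back to NumPy with lambdify and keep working with j-style complex arithmetic. A minimal sketch (the names expr and f below are illustrative, not from the question):
import numpy as np
import sympy as sp

y = sp.symbols('y')
expr = y**((2 + 3j) * (5 + 6j))            # Python evaluates the product first, so the exponent is -8.0 + 27.0*I
f = sp.lambdify(y, expr, modules='numpy')  # turn the SymPy expression into a NumPy-callable function
print(f(np.array([1.5 + 0.5j, 2.0 + 0.0j])))  # evaluated with NumPy complex ("j") numbers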

Sympy - got two solutions from trigonometric equation, I was expecting only one

I'm trying to solve a trigonometric equation with SymPy. I'm having trouble understanding what SymPy is doing: I was expecting only one solution; instead I got two. Here is the code:
import sympy as sp
sp.var("a, b, c, d, z")
myeq = sp.Eq(c * sp.sin(a * (b / 2 - z)) + d * sp.cos(a * (b / 2 - z)), 0)
sol = sp.solve(myeq, z)
print(sol)
Output: [(a*b - 4*atan((c - sqrt(c**2 + d**2))/d))/(2*a), (a*b - 4*atan((c + sqrt(c**2 + d**2))/d))/(2*a)]
The solution I was expecting is: [b / 2 + atan(c / d) / a]
What am I missing? For this specific case, is it possible to obtain a single solution?
If you rearrange your equation to combine the sin and cos into tan you will get what you are looking for:
>>> solve(c/d*tan(a*(b/2-z))-1,z)
[b/2 - atan(d/c)/a]
If you don't, SymPy will rewrite and solve in terms of exp... and in that case, as you can verify, it will be quadratic in exp(I*a*z).
An attempt at rewriting a two-arg sum as a ratio could be done like this:
>>> def ratio(eq):
...     if isinstance(eq, Eq):
...         eq = eq.rewrite(Add)
...     A, B = eq.as_two_terms()
...     if not A.is_Add and not B.is_Add:
...         return Eq(A/B, 1)
...
>>> trigsimp(ratio(myeq))
Eq(c*tan(a*b/2 - a*z)/d, 1)
(The function returns None if there aren't two terms to work with.) As you can see, in this case you get a new equation which will solve as you desired.
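To see that both expressions returned by solve really are solutions (they correspond to tan arguments differing by a multiple of pi), a quick numerical spot check with arbitrary test values is enough. This is just a sketch; the values below are not special:
import sympy as sp

a, b, c, d, z = sp.symbols('a b c d z')
myeq = sp.Eq(c*sp.sin(a*(b/2 - z)) + d*sp.cos(a*(b/2 - z)), 0)
vals = {a: 1.3, b: 0.7, c: 2.0, d: 3.0}   # arbitrary test values

for s in sp.solve(myeq, z):
    # residual of the left-hand side at each root; both should be ~0
    print(sp.N(myeq.lhs.subs(z, s).subs(vals)))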

Find zeros of the characteristic polynomial of a matrix with Python

Given an N x N symmetric matrix C and an N x N diagonal matrix I, find the solutions of the equation det(λI-C)=0. In other words, the (generalized) eigenvalues of C are to be found.
I know a few ways to solve this in MATLAB using built-in functions:
1st way:
function lambdas = eigenValues(C, I)
    syms x;
    lambdas = sort(roots(double(fliplr(coeffs(det(C - I*x))))));
2nd way:
[V,D]=eig(C,I);
However, I need to use Python. There are similar functions in NumPy and SymPy but, according to the docs (numpy, sympy), they take only one matrix, C, as the input; and the result is different from the result produced by MATLAB. Also, the symbolic solutions produced by SymPy aren't helpful. Maybe I am doing something wrong? How do I find the solution?
Example
MATLAB:
%INPUT
I =
2 0 0
0 6 0
0 0 5
C =
4 7 0
7 8 -4
0 -4 1
[v,d]=eig(C,I)
%RESULT
v =
-0.3558 -0.3109 -0.5261
0.2778 0.1344 -0.2673
0.2383 -0.3737 0.0598
d =
-0.7327 0 0
0 0.4876 0
0 0 3.7784
Python 3.5:
# INPUT
I = np.matrix([[2, 0, 0],
               [0, 6, 0],
               [0, 0, 5]])
C = np.matrix([[4, 7, 0], [7, 8, -4], [0, -4, 1]])
np.linalg.eigh(C)
# RESULT
(array([-3., 1.91723747, 14.08276253]),
matrix(
[[-0.57735027, 0.60061066, -0.55311256],
[ 0.57735027, -0.1787042 , -0.79670037],
[ 0.57735027, 0.77931486, 0.24358781]]))
At least if I has positive diagonal entries you can simply solve a transformed system:
# example problem
>>> A = np.random.random((3, 3))
>>> A = A.T @ A
>>> I = np.identity(3) * np.random.random((3,))
# transform
>>> J = np.sqrt(np.einsum('ii->i', I))
>>> B = A / np.outer(J, J)
# solve
>>> eval_, evec = np.linalg.eigh(B)
# back transform result
>>> evec /= J[:, None]
# check
>>> A @ evec
array([[ -1.43653725e-02, 4.14643550e-01, -2.42340866e+00],
[ -1.75615960e-03, -4.17347693e-01, -8.19546081e-01],
[ 1.90178603e-02, 1.34837899e-01, -1.69999003e+00]])
>>> eval_ * (I @ evec)
array([[ -1.43653725e-02, 4.14643550e-01, -2.42340866e+00],
[ -1.75615960e-03, -4.17347693e-01, -8.19546081e-01],
[ 1.90178603e-02, 1.34837899e-01, -1.69999003e+00]])
OP's example. IMPORTANT: must use np.arrays for I and C, np.matrix will not work.
>>> I=np.array([[2,0,0],[0,6,0],[0,0,5]])
>>> C=np.array([[4,7,0],[7,8,-4],[0,-4,1]])
>>>
>>> J = np.sqrt(np.einsum('ii->i', I))
>>> B = C / np.outer(J, J)
>>> eval_, evec = np.linalg.eigh(B)
>>> evec /= J[:, None]
>>>
>>> evec
array([[-0.35578356, -0.31094779, -0.52605088],
[ 0.27778714, 0.1343625 , -0.267297 ],
[ 0.23826117, -0.37371199, 0.05975754]])
>>> eval_
array([-0.73271478, 0.48762792, 3.7784202 ])
If I has positive and negative entries, use eig instead of eigh, and cast to complex dtype before taking the square root.
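For that indefinite case, the same whitening trick looks like this (a sketch only; it reuses the I and C arrays from the example above, which happen to have a positive diagonal, so it merely demonstrates the mechanics):
import numpy as np

# cast the diagonal to complex before the square root so negative entries are allowed
J = np.sqrt(np.einsum('ii->i', I).astype(complex))
B = C / np.outer(J, J)
eval_, evec = np.linalg.eig(B)   # eig, not eigh: B is generally no longer Hermitian
evec = evec / J[:, None]

# check the generalized eigenpairs: C @ evec should match (I @ evec) * eval_
print(np.allclose(C @ evec, (I @ evec) * eval_))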
Differing from other answers, I assume that by the symbol I you mean the identity matrix, Ix=x.
What you want to solve, Cx=λIx, is the so-called standard eigenvalue problem,
and most eigenvalue solvers tackle the problem described in that format, hence the
Numpy function has the signature eig(C).
If your C matrix is a symmetric matrix and your problem is indeed a standard eigenvalue problem I'd recommend the use of numpy.linalg.eigh, that is optimized for this type of problems.
On the contrary if your problem is really a generalized eigenvalue problem, as, e.g., the frequency equation Kx=ω²Mx you could use scipy.linalg.eigh, that supports that type of problem statement for symmetric matrices.
eigvals, eigvecs = scipy.linalg.eigh(C, I)
With respect to the discrepancies in eigenvalues, the NumPy implementation gives no guarantees with respect to their ordering, so it could be just a different ordering; but if your problem is indeed a generalized problem (I not being the identity matrix...) the solution is of course different and you have to use the SciPy implementation of eigh.
If the discrepancies are in the eigenvectors, please remember that eigenvectors are only determined up to an arbitrary scale factor and, again, the ordering could be undefined (but, of course, their order is the same order in which you have the eigenvalues). The situation is a little different for scipy.linalg.eigh, because in that case the eigenvalues are sorted and the eigenvectors are normalized with respect to the second matrix argument (I in your example).
PS: the behaviour of scipy.linalg.eigh (i.e., sorted eigenvalues and normalized eigenvectors) is so convenient for my use cases that I tend to use it also to solve standard eigenvalue problems.
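A complete sketch of the generalized route with the matrices from the question (assuming SciPy is available); the eigenvalues should reproduce MATLAB's d up to ordering, and the eigenvectors up to scale and sign:
import numpy as np
from scipy.linalg import eigh

I = np.array([[2, 0, 0], [0, 6, 0], [0, 0, 5]], dtype=float)
C = np.array([[4, 7, 0], [7, 8, -4], [0, -4, 1]], dtype=float)

# generalized symmetric problem C x = lambda * I x
eigvals, eigvecs = eigh(C, I)
print(eigvals)    # approximately [-0.7327, 0.4876, 3.7784]
print(eigvecs)    # columns are I-normalized; may differ from MATLAB by scale/sign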
Using SymPy:
>>> from sympy import *
>>> t = Symbol('t')
>>> D = diag(2,6,5)
>>> S = Matrix([[ 4, 7, 0],
[ 7, 8,-4],
[ 0,-4, 1]])
>>> (t*D - S).det()
60*t**3 - 212*t**2 - 77*t + 81
Computing the exact roots:
>>> roots = solve(60*t**3 - 212*t**2 - 77*t + 81,t)
>>> roots
[53/45 + (-1/2 - sqrt(3)*I/2)*(312469/182250 + sqrt(797521629)*I/16200)**(1/3) + 14701/(8100*(-1/2 - sqrt(3)*I/2)*(312469/182250 + sqrt(797521629)*I/16200)**(1/3)), 53/45 + 14701/(8100*(-1/2 + sqrt(3)*I/2)*(312469/182250 + sqrt(797521629)*I/16200)**(1/3)) + (-1/2 + sqrt(3)*I/2)*(312469/182250 + sqrt(797521629)*I/16200)**(1/3), 53/45 + 14701/(8100*(312469/182250 + sqrt(797521629)*I/16200)**(1/3)) + (312469/182250 + sqrt(797521629)*I/16200)**(1/3)]
Computing floating-point approximations of the roots:
>>> for r in roots:
... r.evalf()
...
0.487627918145732 + 0.e-22*I
-0.73271478047926 - 0.e-22*I
3.77842019566686 - 0.e-21*I
Note that the roots are real.
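If only numerical values are needed, the cubic-formula radicals can be bypassed entirely by asking for numerical roots of the polynomial; a sketch in the same session (nroots is a standard Poly method):
>>> Poly(60*t**3 - 212*t**2 - 77*t + 81, t).nroots()
This returns the same three real roots as floating-point numbers (possibly with a negligible imaginary residue, as in the evalf output above), without building the exact radical expressions first.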

Isolate all variables to LHS in sympy?

I am using sympy to process some equations. I want to write the equations in a canonical form such that variables of interest are all on LHS. For eg. if I have,
lhs = sympify("e*x + f")
rhs = sympify("g*y + t*x + h")
eq = Eq(lhs, rhs)
e*x + f == g*y + h + t*x
I need a function which can isolate a list of given variables (my so-called canonical form), like
IsolateVariablesToLHS(eq,[x,y]) # desired function
(e-t)*x - g*y == h-f # now x and y are on LHS and remaining are on RHS
I have the assurance that I will only get linear equations, so this is always possible.
>>> import sympy as sm
>>> lhs = sm.sympify('e*x + f')
>>> rhs = sm.sympify('g*y + t*x + h')
>>> eq = sm.Eq(lhs, rhs)
Here's a simple construct
def isolateVariablesToLHS(eq, syms):
    l = sm.S.Zero
    eq = eq.args[0] - eq.args[1]
    for e in syms:
        ind = eq.as_independent(e)[1]
        l += ind
        eq -= ind
    return sm.Eq(l, eq)
>>> x, y = sm.symbols('x y')
>>> isolateVariablesToLHS(eq, [x, y])
Eq(e*x - g*y - t*x, f - h)
With the equation as provided in the question combine all the terms and construct a filter for discovering the required variables.
>>> from itertools import filterfalse
>>> terms = eq.lhs - eq.rhs
>>> vars = ['x', 'y']
>>> filt = lambda t: any(t.has(v) for v in vars)
>>> result = Eq(sum(filter(filt, terms.args)), -sum(filterfalse(filt, terms.args)))
>>> result
e*x - g*y - t*x == -f + h
I'm not familiar with sympy but I think this will work for equations consisting of proper atoms such as Symbols. Make sure you replace the vars list with the actual instantiated Symbols x, y instead of the 'ascii' representations. This is probably required to combine terms that have variables in common.
filter and filterfalse might have different names in python 2.x, but this functionality is probably still in the itertools package.
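Following that advice, here is a small self-contained sketch of the same idea using actual Symbol objects instead of strings (the symbol names are taken from the question; keep, new_lhs and new_rhs are just illustrative names):
import sympy as sp

e, f, g, h, t, x, y = sp.symbols('e f g h t x y')
eq = sp.Eq(e*x + f, g*y + t*x + h)

terms = (eq.lhs - eq.rhs).expand()
wanted = (x, y)                        # the variables to isolate on the LHS
keep = lambda term: term.has(*wanted)

new_lhs = sp.Add(*[a for a in terms.args if keep(a)])
new_rhs = -sp.Add(*[a for a in terms.args if not keep(a)])
print(sp.Eq(new_lhs, new_rhs))         # e*x - g*y - t*x on the left, h - f on the right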

Differentiation in python using Sympy

How do I implement this kind of equation in Python: dC/dt = r + kI - dC, where r, k, I and d are constants and C is the variable?
I am relatively new to Python and as such can't really do much.
from sympy.solvers import ode
r=float(input("enter r:"))
k=float(input("enter k:"))
I=float(input("enter I:"))
d=float(input("enter d:"))
C=float(input("enter C:"))
dC/dt=x
x=r + kI-dC
print(x)
What it does is just equate the values of x, without computing any differential; I would like help getting this to work.
If possible I would like an answer that uses SymPy, but all answers are truly appreciated.
You assigned values to all the variables that are on the RHS of x, so when you show x you see the value that it took on with the variables that you defined. Rather than inputting values, why not try to solve the ODE symbolically if possible?
>>> from sympy import *
>>> var('r k I d C t')
(r, k, I, d, C, t)
>>> eq = Eq(C(t).diff(t), r + k*I + d*C(t)) # note d*C(t) not d*C
>>> ans = dsolve(eq); ans
C(t) == (-I*k - r + exp(d*(C1 + t)))/d
Now you can substitute in values for the variables to see the result:
>>> ans.subs({k: 0})
C(t) == (-r + exp(d*(C1 + t)))/d
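For reference, a minimal sketch in which the unknown is declared as an undefined Function (the usual way in recent SymPy versions), with the ODE exactly as the question writes it (dC/dt = r + kI - dC):
from sympy import symbols, Function, Eq, dsolve

t, r, k, I, d = symbols('t r k I d')   # here I is just a named constant, not the imaginary unit
C = Function('C')

ode = Eq(C(t).diff(t), r + k*I - d*C(t))
sol = dsolve(ode, C(t))
print(sol)   # a general solution of the form C(t) = (I*k + r)/d + C1*exp(-d*t)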
