Sympy inverse_laplace_transform isn't working? - python

I'm having a big issue with the inverse Laplace transform in SymPy when I try to invert (s + 0.2)/(s*(s + 0.2) + 1).
The code I'm using:
from sympy import *
s = symbols('s')
t = symbols('t')
f = (s + 0.2)/(s*(s + 0.2) + 1)
inverse_laplace_transform(f,s,t)
That code crashes, and I don't understand why, because the method doesn't crash with other functions.

Apparently there is a problem with mpmath, the arbitrary-precision arithmetic module that SymPy relies on, or with how SymPy interacts with it.
You can work around the bug by replacing 0.2 with a rational. Something like this worked for me:
from sympy import *
s = symbols('s')
t = symbols('t')
a = Rational(1, 5)
f = (s + a)/(s*(s + a) + 1)
inverse_laplace_transform(f,s,t)
That returns:
sqrt(11)*(sin(3*sqrt(11)*t/10) + 3*sqrt(11)*cos(3*sqrt(11)*t/10))*exp(-t/10)*Heaviside(t)/33
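If you'd rather not rewrite the expression by hand, a variant of the same workaround (a sketch, not from the original answer) is to let nsimplify convert the floats in the whole expression to exact rationals before transforming:
from sympy import symbols, nsimplify, inverse_laplace_transform
s = symbols('s')
t = symbols('t')
f = (s + 0.2)/(s*(s + 0.2) + 1)
# Replace 0.2 (and any other Floats) with exact Rationals such as 1/5
f_exact = nsimplify(f, rational=True)
print(inverse_laplace_transform(f_exact, s, t))
This produces the same expression as the Rational(1, 5) version above, so it returns the same Heaviside result.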

Related

Are there tricks and tweaks for unevaluated integrals in SymPy?

I am trying to verify a formula from a mechanics paper by retracing its derivation, which involves integrals like the one built in the code below.
The original source suggests that a closed-form solution is possible, but it seems quite tricky. I tried my best with SymPy (and also with Maxima and Mathematica), fiddling around with different assumptions and simplifications, without success:
import sympy as sp
from IPython.display import display, Math
x = sp.symbols('x', nonnegative=True, real=True, finite=True)
a = sp.symbols('a', positive=True, real=True, finite=True)
b = sp.symbols('b', positive=True, real=True, finite=True)
# print("assumptions for x:", x.assumptions0)
# print("assumptions for a, b:", r.assumptions0)
expr = x/(b + 2*(a-sp.sqrt(x)*sp.sqrt(2*a-x)))**3
expr = sp.expand(expr)
display(Math("e = "+sp.latex(expr)))
integ = sp.integrate(expr, (x, 0, a))
display(Math("integ = "+sp.latex(integ)))
SymPy returns my integrals unevaluated. Are there any other tricks and tweaks I could try to get a solution? The parameters a and b are physical dimensions (finite positive real numbers).
The answer found using Rubi (in Mathematica syntax) is
(a^2*(2*a + b)^4*(Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]*Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]*
(Sqrt[b]*(-2*a + b)*Sqrt[-(4*a + b)^2] + 6*a*Sqrt[-4*a - b]*(2*a + b)*ArcTan[(2*a)/(Sqrt[b]*Sqrt[4*a + b])]) -
6*a*(2*a + b)^2*Sqrt[4*a + b]*ArcTanh[Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]/Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]] +
6*a*(2*a + b)^2*Sqrt[4*a + b]*ArcTanh[Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]/Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]]))/
(2*(2*a - Sqrt[-4*a - b]*Sqrt[b])^(5/2)*(2*a + Sqrt[-4*a - b]*Sqrt[b])^(5/2)*Sqrt[-4*a - b]*b^(5/2)*(4*a + b)^(5
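When no closed form comes out of SymPy, a practical fallback (a sketch, not part of the original answers; the sample values of a and b are made up purely for checking) is to evaluate the definite integral numerically for concrete parameters, so a candidate result such as the Rubi expression above can at least be cross-checked:
import sympy as sp

x = sp.symbols('x', nonnegative=True)
a, b = sp.symbols('a b', positive=True)

expr = x/(b + 2*(a - sp.sqrt(x)*sp.sqrt(2*a - x)))**3

# Hypothetical sample values for the physical parameters
vals = {a: sp.Rational(3, 2), b: sp.Rational(1, 4)}

# Numerical quadrature of the definite integral from 0 to a
print(sp.Integral(expr.subs(vals), (x, 0, vals[a])).evalf())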

How to prove with SymPy that a given Cartesian equation can be written as a given polar equation

I have an assignment on SymPy and am struggling with the following question:
"Prove with the help of SymPy that 4*(x^2 + y^2 - a*x)^3 = 27*a^2*(x^2 + y^2)^2 can be written as r = 4*a*cos(theta/3)^3".
I have tried to substitute x = r*cos(theta) and y = r*sin(theta).
Then I tried sp.solveset(eq, r), but I only got a very long set expression, nothing like the given polar equation.
Does anyone know how to do this (I can use sympy and numpy)?
The following code builds the expression lhs - rhs from the left-hand and right-hand sides of the equation. Then the change of variables to polar coordinates is performed by substitution.
The resulting trigonometric expression simplifies to zero, so every pair (x, y) = (r*cos(theta), r*sin(theta)) with r = 4*a*cos(theta/3)**3 satisfies the Cartesian equation.
from sympy import *
a, x, y, theta = symbols(r'a x y \Theta', real=True)
init_printing(use_latex=True)
lhs = 4 * (x**2 + y**2 - a*x) ** 3
rhs = 27 * a**2 * (x**2 + y**2)**2
f = lhs - rhs
r = 4 * a * cos(theta/3)**3
display(f,"----")
f = f.subs(x,r*cos(theta))
f = f.subs(y,r*sin(theta))
display(f,"----")
f1 = f
display(simplify(f))
# format for wolframalpha
t = symbols('t')
f1 = f1.subs(theta,t)
import re
f1 = re.sub(r"\*\*", "^", str(f1))
print("----")
print("wolframalpha expression: solve ", str(f1)," over the reals")
To double-check, the code also generates a WolframAlpha query at the end, which confirms the result.
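As an extra sanity check entirely in Python (a sketch, not from the original answer; it assumes a quick numerical spot check is acceptable for the assignment), you can evaluate the substituted expression at random values of a and theta and confirm it is numerically zero:
from sympy import symbols, cos, sin, lambdify
import numpy as np

a, theta = symbols('a theta', real=True)
r = 4*a*cos(theta/3)**3
x = r*cos(theta)
y = r*sin(theta)

# Cartesian equation (lhs - rhs) after the polar substitution
f = 4*(x**2 + y**2 - a*x)**3 - 27*a**2*(x**2 + y**2)**2

g = lambdify((a, theta), f, 'numpy')
A = np.random.rand(100)*5            # random positive values of a
T = np.random.rand(100)*2*np.pi      # random angles
# Tiny compared with the individual terms: only floating-point cancellation remains
print(np.max(np.abs(g(A, T))))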

SymPy "solves" a differential equation it shouldn't solve

Here's what I did:
from sympy import *
x = symbols("x")
y = Function("y")
dsolve(diff(y(x),x) - y(x)**x)
The answer I get (SymPy 1.0) is:
Eq(y(x), (C1 - x*(x - 1))**(1/(-x + 1)))
But that's wrong. Neither Mathematica nor Maple can solve this ODE. What's happening here?
A bug. SymPy thinks it's a Bernoulli equation
y' = P(x) * y + Q(x) * y**n
without checking that the exponent n is constant, so the solution is wrong.
I raised an issue on the SymPy tracker. It should soon be fixed in the development version of SymPy and subsequently in version 1.2. (As an aside, 1.0 is a bit old; many things have improved in 1.1.1, although not this one.)
With the correction, SymPy recognizes that there is no explicit solution and resorts to the power series method, producing the first few terms of the series:
Eq(y(x), x + x**2*log(C1)/2 + x**3*(log(C1)**2 + 2/C1)/6 + x**4*(log(C1)**3 + 9*log(C1)/C1 - 3/C1**2)/24 + x**5*(log(C1)**4 + 2*(log(C1) - 1/C1)*log(C1)/C1 + 2*(2*log(C1) - 1/C1)*log(C1)/C1 + 22*log(C1)**2/C1 - 20*log(C1)/C1**2 + 20/C1**2 + 8/C1**3)/120 + C1 + O(x**6))
You don't have to wait for the patch to get this power series; it can be obtained by giving SymPy a hint:
dsolve(diff(y(x), x) - y(x)**x, hint='1st_power_series')
It works better with an initial condition:
dsolve(diff(y(x), x) - y(x)**x, ics={y(0): 1}, hint='1st_power_series')
returns
Eq(y(x), 1 + x + x**3/3 - x**4/8 + 7*x**5/30 + O(x**6))
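Incidentally, you can catch this kind of spurious closed form yourself by substituting it back into the ODE; the residual should vanish for a genuine solution. A minimal sketch (not part of the original answer) using the expression returned by SymPy 1.0:
from sympy import symbols, Function, diff

x, C1 = symbols('x C1')
y = Function('y')

ode = diff(y(x), x) - y(x)**x

# The closed form reported by SymPy 1.0
claimed = (C1 - x*(x - 1))**(1/(-x + 1))

# Plug it back into the ODE and evaluate the residual at a sample point;
# a correct solution would give (numerically) zero here
residual = ode.subs(y(x), claimed).doit()
print(residual.subs({x: 0.5, C1: 2}).evalf())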

Rewrite equation as polynomial

from sympy import *
K, T, s = symbols('K T s')
G = K/(1+s*T)
Eq1 = Eq(G + 1, 0)
I want to rewrite equation Eq1 with SymPy as a polynomial: 1 + K + T*s == 0
How would I do this?
I spent some hours searching and trying simplification methods but could not find an elegant, short solution.
The actual problem in SymPy:
from IPython.display import display
import sympy as sp
sp.init_printing(use_unicode=True,use_latex=True,euler=True)
Kf,Td0s,Ke,Te,Tv,Kv,s= sp.symbols("K_f,T_d0^',K_e,T_e,T_v,K_v,s")
Ga= Kf/(1+s*Tv)
Gb= Ke/(1+s*Te)
Gc= Kf/(1+s*Td0s)
G0=Ga*Gb*Gc
G1=sp.Eq(G0+1,0)
display(G1)
How do I tell SymPy to rewrite equation G1 as a polynomial of the form s^3*(...) + s^2*(...) + s*(...) + (...) = ... ?
The actual problem from the textbook: http://i.imgur.com/J1MYo9H.png
How it should look: http://i.imgur.com/RqEDo7H.png
The two equations are equivalent.
Here's what you can do.
import sympy as sp
Kf,Td0s,Ke,Te,Tv,Kv,s= sp.symbols("K_f,T_d0^',K_e,T_e,T_v,K_v,s")
Ga= Kf/(1+s*Tv)
Gb= Ke/(1+s*Te)
Gc= Kf/(1+s*Td0s)
G0=Ga*Gb*Gc
Throw away the denominator:
eq = (G0 + 1).as_numer_denom()[0]
Expand the equation and collect terms with powers of s.
eq = eq.expand().collect(s)
Final equation:
Eq(eq, 0)
Eq(K_e*K_f**2 + T_d0^'*T_e*T_v*s**3 + s**2*(T_d0^'*T_e + T_d0^'*T_v + T_e*T_v) + s*(T_d0^' + T_e + T_v) + 1, 0)
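An equivalent route (a sketch of an alternative, not from the original answer, using standard SymPy calls) is to put 1 + G0 over a common denominator, keep the numerator, and let Poly collect the coefficients of each power of s:
import sympy as sp

Kf, Td0s, Ke, Te, Tv, s = sp.symbols("K_f,T_d0^',K_e,T_e,T_v,s")

Ga = Kf/(1 + s*Tv)
Gb = Ke/(1 + s*Te)
Gc = Kf/(1 + s*Td0s)
G0 = Ga*Gb*Gc

# Combine 1 + G0 over a common denominator and keep only the numerator
num, den = sp.fraction(sp.together(G0 + 1))

# Treat the numerator as a polynomial in s
poly = sp.Poly(num, s)
print(poly.all_coeffs())        # coefficients of s**3, s**2, s, 1
print(sp.Eq(poly.as_expr(), 0))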

Equation solving in Python

I am trying to solve equations such as the following for x:
log(alpha_1 + x) + log(alpha_2 + x) + ... + log(alpha_N + x) = K
Here the alphas and K are given, and N will be upwards of 1,000. Is there a way to specify the LHS with SymPy given a NumPy array for the alphas? My hope was to define:
eqn = Eq(LHS - K, 0)
solve(eqn,x)
by telling SymPy that LHS = sum(log(alpha_i + x)).
Any tips on solvers which would do this the fastest would also be appreciated. Thanks!
I was hoping for something like:
from sympy import Symbol, symbols, solve, summation, log
import numpy as np
N=10
K=1
alpha=np.random.randn(N, 1)
x = Symbol('x')
i = Symbol('i')
eqn = summation(log(x+alpha[i]), (i, 1, N))
solve(eqn-K,x)
You can't index a NumPy array with a SymPy symbol. Since your sum is finite, just use the Python sum function:
>>> alpha=np.random.randn(1, N)
>>> sum([log(x + i) for i in alpha[0]])
log(x - 1.85289943713841) + log(x - 1.40121781484552) + log(x - 1.21850393539695) + log(x - 0.605693136420962) + log(x - 0.575839713282035) + log(x - 0.105389419698408) + log(x + 0.415055726774043) + log(x + 0.71601559149345) + log(x + 0.866995633213984) + log(x + 1.12521825562504)
But even so, I don't see why you don't just rewrite this as (x - alpha[0])*(x - alpha[1])*...*(x - alpha[N - 1]) - exp(K), as suggested by Warren Weckesser. You can then solve it with a numerical solver like SymPy's nsolve, or with something from another library:
>>> nsolve(Mul(*[(x - i) for i in alpha[0]]) - exp(K), 1)
mpf('1.2696755961730152')
You could also solve the log expression numerically, but unless your logs can have negative arguments, these should be the same.
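For N in the thousands, building a symbolic sum or product gets slow; a purely numerical route (a sketch, not from the original answer; it assumes SciPy is available and that the root lies in the domain x > -min(alpha), where every log argument is positive) is to bracket the root and hand it to a standard scalar solver:
import numpy as np
from scipy.optimize import brentq

N, K = 10, 1.0                 # same sizes as in the question's example
alpha = np.random.randn(N)

# sum(log(x + alpha_i)) - K is strictly increasing on x > -alpha.min(),
# rising from -inf to +inf, so it has exactly one root in that interval
def f(x):
    return np.sum(np.log(x + alpha)) - K

lo = -alpha.min() + 1e-12      # just inside the domain
hi = lo + 1.0
while f(hi) < 0:               # widen the bracket until the sign flips
    hi *= 2

print(brentq(f, lo, hi))
The same function works for N around 1,000; just note that for large N the root sits very close to the singularity at -min(alpha) unless K grows with N.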
