Solving ODEs with SymPy - python

I've just started trying out SymPy. Unfortunately, I am already stumped. Behold this:
from sympy import *
t, G, M = symbols('t G M', real = True)
x = Function('x')
y = Function('y')
print(Eq(Derivative(x, t, 2), G * M * x(t) / (x(t) * x(t) + y(t) * y(t))**1.5))
...it simply prints False. The documentation says that this means the relationship can be proven to be false.
I know that I can prevent evaluation by using evaluate=False, but eventually I want to solve my system of differential equations, and then this assumption will come into play again.
So, can anyone see what I did wrong here?
Addendum:
What I am trying to do is play around with the two-body-problem and orbital mechanics. With the gravitational constant G and the mass of the primary M at the origin, this and the symmetric equation for y(t) describe the gravitational acceleration on the secondary.
The solution, Kepler tells us, should be an ellipse for reasonable starting conditions.

I found it now; the solution is rather simple: SymPy needs to be told that x is a function of t, so
print(Eq(Derivative(x(t), t, 2), G * M * x(t) / (x(t) * x(t) + y(t) * y(t))**1.5))
does the trick.
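For reference, here is a minimal sketch of the corrected setup for the full symmetric system (my own recap of the fix above; I use Rational(3, 2) instead of the float exponent 1.5 to keep everything exact, and whether dsolve can actually integrate this system is a separate question):
from sympy import symbols, Function, Eq, Derivative, Rational
t, G, M = symbols('t G M', real=True)
x = Function('x')
y = Function('y')
r3 = (x(t)**2 + y(t)**2)**Rational(3, 2)  # |r|**3
eq_x = Eq(Derivative(x(t), t, 2), G * M * x(t) / r3)
eq_y = Eq(Derivative(y(t), t, 2), G * M * y(t) / r3)
print(eq_x)
print(eq_y)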

Related

Using sympy to rearrange equation?

I'm trying to rearrange a supply curve equation in order to calculate the price elasticity.
The equation is log P = -2 + 1.7 * log Q.
I'm trying to rearrange the equation so that log Q is expressed in terms of log P. Is there a way sympy can handle these rearrangements?
p, q = sp.symbols('p, q', real=True, positive=True)
eq = sp.Eq(-2+1.7*sp.log(q))
sp.solve(eq,sp.log(p))
Using sympy's solve will take care of the rearrangement. The equation just has to be set up properly, and the function arguments must be correct.
import sympy as sp
p, q = sp.symbols('p, q', real=True, positive=True)
eq = sp.Eq(-2 + 1.7*sp.log(q), p)  # set the expression equal to p (p playing the role of log P)
ans = sp.solve(eq, q)              # solve the equation for q in terms of p
This solution simply saves your equation to eq and solves eq for q so the answer is in terms of only p. In the end ans holds the value:
[3.24290842192454*exp(0.588235294117647*p)]
I highly recommend reading this website before coming to SO.
Although you showed log P in the written text, you did not include it in the code. If you do so (writing the coefficient 1.7 as the exact fraction 17/10), you can then request that solve isolate log(q) for you:
>>> eq = Eq(log(p), -2 + 17*log(q)/10)
>>> solve(eq, log(q))
[10*log(p)/17 + 20/17]
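A variation that can be convenient here (my own sketch, with hypothetical symbol names lp and lq standing for log P and log Q) is to keep the whole relationship in log-log form with plain symbols, which avoids having to solve for a log(...) expression at all:
import sympy as sp
lp, lq = sp.symbols('lp lq', real=True)      # lp = log P, lq = log Q
eq = sp.Eq(lp, -2 + sp.Rational(17, 10)*lq)  # log P = -2 + 1.7 * log Q
print(sp.solve(eq, lq))                      # [10*lp/17 + 20/17]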

How to solve this equation with sympy?

Recently I got a long equation to solve, shown in the code below.
I've tried to solve it using sympy.solveset(), but it returned a ConditionSet, which means it couldn't handle the equation. How can I solve this equation using the sympy library, or failing that, at least in Python? The code that I used:
import sympy as sp
t = sp.symbols('t')
a = 1.46
b = 1.2042 * 10**-4 * ((1.2275 * 10**-5 + t) * sp.ln(1.2275 * 10**-5 + t) - t)
result = sp.solveset(sp.Eq(a, b), t)
print(result)
This is a transcendental equation. It possibly has an analytic solution in terms of the Lambert W function but I'm not sure. I'll assume that you just want a numerical solution which you can get using nsolve:
In [42]: nsolve(a - b, t, 1)
Out[42]: 1857.54700584719
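Spelled out as a complete script (the starting guess of 1 is the one used above; nsolve needs a reasonable initial guess because it solves the equation numerically):
import sympy as sp
t = sp.symbols('t')
a = 1.46
b = 1.2042e-4 * ((1.2275e-5 + t) * sp.ln(1.2275e-5 + t) - t)
root = sp.nsolve(a - b, t, 1)  # numerical root-finding with initial guess 1
print(root)                    # approximately 1857.547, as above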

How to put an integral in a function in python/matplotlib

So pretty much, I am aiming to implement a function f(x).
My problem is that my function has an integral in it, and I only know how to construct definite integrals. So my question is: how does one create an indefinite integral inside a function (or is there some other method I am currently unaware of)?
My function is defined as :
(G is gravitational constant, although you can leave G out of your answer for simplicity, I'll add it in my code)
Here is the starting point, but I don't know how to do the integral portion
import numpy as np
def f(x):
    rho = 5*(1/(1+((x**2)/(3**2))))
    function_result = rho * 4 * np.pi * x**2
    return function_result
Please let me know if I need to elaborate on something.
EDIT-----------------------------------------------------
I made some major progress, but I still have one little error.
Pretty much, I did this:
from sympy import *
x = Symbol('x')
rho = p0()*(1/(1+((x**2)/(rc()**2))))* 4 * np.pi * x**2
fooply = integrate(rho,x)
def f(rx):
    function_result = fooply.subs({x: rx})
    return function_result
Which works fine when I plug in one number for f; however, when I plug in an array (as I need to later), I get the error:
raise SympifyError(a)
sympy.core.sympify.SympifyError: SympifyError: [3, 3, 3, 3, 3]
(Here, I did print(f([3,3,3,3,3]))). Usually, the function returns an array of values. So if I did f([3,2]) it should return [f(3),f(2)]. Yet, for some reason, it doesn't for my function....
Thanks in advance
how about:
import numpy as np
from sympy import *
x, p0, rc = symbols('x p0 rc', real=True, positive=True)
rho = p0*(1/(1+((x**2)/(rc))))* 4 * pi * x**2
fooply = integrate(rho,x)/x
rho, fooply
(4*pi*p0*x**2/(1 + x**2/rc),
4*pi*p0*rc*(-sqrt(rc)*atan(x/sqrt(rc)) + x)/x)
fooply = fooply.subs({p0: 2.0, rc: 3.0})
np_fooply = lambdify(x, fooply, 'numpy')
print(np_fooply(np.array([3,3,3,3,3])))
[ 29.81247362 29.81247362 29.81247362 29.81247362 29.81247362]
To plug in an array to a SymPy expression, you need to use lambdify to convert it to a NumPy function (f = lambdify(x, fooply)). Just using def and subs as you have done will not work.
Also, in general, when using symbolic computations, it's better to use sympy.pi instead of np.pi, as the former is symbolic and can simplify. It will automatically be converted to the numeric pi by lambdify.
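Since the question title also mentions matplotlib, here is a small follow-on sketch of the usual integrate -> lambdify -> plot pipeline (my addition; the variable names and the parameter values 2.0 and 3.0 are only illustrative, taken from the answer above):
import numpy as np
import matplotlib.pyplot as plt
from sympy import symbols, integrate, lambdify, pi
x, p0, rc = symbols('x p0 rc', real=True, positive=True)
rho = p0 * 4 * pi * x**2 / (1 + x**2/rc)
F_expr = integrate(rho, x)                        # symbolic antiderivative
F = lambdify(x, F_expr.subs({p0: 2.0, rc: 3.0}), 'numpy')
xs = np.linspace(0.1, 10.0, 200)
plt.plot(xs, F(xs))
plt.xlabel('x')
plt.ylabel('integral of rho')
plt.show()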

Solving a system of odes (with changing constant!) using scipy.integrate.odeint?

I currently have a system of odes with a time-dependent constant. E.g.
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x*y + c*y - z
    return [dx_dt, dy_dt, dz_dt]
The constants are "a", "b" and "c". I currently have a list of "a" values, one for every time-step, which I would like to insert at the corresponding time-step when using the scipy ODE solver. Is this possible?
Thanks!
Yes, this is possible. In the case where a is constant, I guess you called scipy.integrate.odeint(fun, u0, t, args) where fun is defined as in your question, u0 = [x0, y0, z0] is the initial condition, t is a sequence of time points for which to solve for the ODE and args = (a, b, c) are the extra arguments to pass to fun.
In the case where a depends on time, you simply have to reconsider a as a function, for example (given a constant a0):
def a(t):
    return a0 * t
Then you will have to modify fun which computes the derivative at each time step to take the previous change into account:
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a(t) * x + y * z  # A change on this line: a -> a(t)
    dy_dt = b * (y - z)
    dz_dt = - x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]
Eventually, note that u0, t and args remain unchanged and you can again call scipy.integrate.odeint(fun, u0, t, args).
A word about the correctness of this approach: the accuracy of the numerical integration may be affected, and I don't know precisely how (I have no theoretical guarantees), but here is a simple example which works:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import scipy.integrate
tmax = 10.0
def a(t):
    if t < tmax / 2.0:
        return ((tmax / 2.0) - t) / (tmax / 2.0)
    else:
        return 1.0
def func(x, t, a):
    return -(x - a(t))
x0 = 0.8
t = np.linspace(0.0, tmax, 1000)
args = (a,)
y = sp.integrate.odeint(func, x0, t, args)
fig = plt.figure()
ax = fig.add_subplot(111)
h1, = ax.plot(t, y)
h2, = ax.plot(t, [a(s) for s in t])
ax.legend([h1, h2], ["y", "a"])
ax.set_xlabel("t")
ax.grid()
plt.show()
I hope this helps.
No, that is not possible in the literal sense of
"I currently have a list of "a"s for every time-step which I would like to insert at every time-step"
as the solver has adaptive step size control, that is, it will use internal time steps that you have no control over, and each time step uses several evaluations of the function. Thus there is no connection between the solver time steps and the data time steps.
In the extended sense that the given data defines a piecewise constant step function however, there are several approaches to get to a solution.
You can integrate from jump point to jump point, using the ODE function with the constant parameter for this time segment. After that use numpy array operations like concatenate to assemble the full solution.
You can use interpolation functions like numpy.interp or scipy.interpolate.interp1d. The first gives a piecewise linear interpolation, which may not be desired here. The second returns a function object that can be configured to be a "zero-order hold", which is a piecewise constant step function.
You could implement your own logic to go from the time t to the correct values of those parameters. This mostly applies if there is some structure to the data, for instance, if they have the form f(int(t/h)).
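As a minimal sketch of the interpolation approach just described (my own example; t_values and a_values are hypothetical names for your per-time-step data, and the values of b and c are placeholders), scipy.interpolate.interp1d with kind='previous' gives the piecewise-constant "zero-order hold" behaviour:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
# hypothetical per-time-step data for the parameter a
t_values = np.linspace(0.0, 10.0, 11)
a_values = np.linspace(1.0, 2.0, 11)
# zero-order hold: a(t) returns the last tabulated value before t
a_of_t = interp1d(t_values, a_values, kind='previous',
                  bounds_error=False, fill_value=(a_values[0], a_values[-1]))
def fun(u, t, b, c):
    x, y, z = u
    a = float(a_of_t(t))  # look up the parameter for the current time
    dx_dt = a * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]
u0 = [1.0, 1.0, 1.0]
t = np.linspace(0.0, 10.0, 101)
sol = odeint(fun, u0, t, args=(10.0, 8.0/3.0))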
Note that the approximation order of the numerical integration is not only bounded by the order of the RK (solve_ivp) or multi-step (odeint) method, but also by the differentiability order of (the parts of) the differential equation. If the ODE is much less smooth than the order of the method, the implicit assumptions of the step size control mechanism are violated, which may result in a very small step size requiring a huge number of integration steps.
I also encountered a similar problem. In my case, the parameters a, b, and c are not direct functions of time but are determined by x, y, and z at that time. So I have to get x, y, and z at time t and calculate a, b, and c for the integration step from t to t+dt. It turns out that if I change the dt value, the whole integration result changes dramatically, even to something unreasonable.
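If the parameters really are functions of the current state rather than of an external table, a more robust pattern (sketched below with a hypothetical rule for a, b and c) is to compute them inside the derivative function itself, so the solver never sees a stale value and the result no longer depends on the output step dt:
import numpy as np
from scipy.integrate import odeint
def coeffs(x, y, z):
    # hypothetical rule: the parameters depend on the current state
    a = 1.0 + 0.1 * x
    b = 10.0 / (1.0 + z**2)
    c = 2.0 + np.tanh(y)
    return a, b, c
def fun(u, t):
    x, y, z = u
    a, b, c = coeffs(x, y, z)  # recomputed at every function evaluation
    dx_dt = a * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]
u0 = [1.0, 1.0, 1.0]
t = np.linspace(0.0, 2.0, 21)
sol = odeint(fun, u0, t)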

Solve an implicit ODE (differential algebraic equation DAE)

I'm trying to solve a second order ODE using odeint from scipy. The issue I'm having is the function is implicitly coupled to the second order term, as seen in the simplified snippet (please ignore the pretend physics of the example):
import numpy as np
from scipy.integrate import odeint
def integral(y,t,F_l,mass):
    dydt = np.zeros_like(y)
    x, v = y
    F_r = (((1-a)/3)**2 + (2*(1+a)/3)**2) * v  # 'a' implicit
    a = (F_l - F_r)/mass
    dydt = [v, a]
    return dydt
y0 = [0,5]
time = np.linspace(0.,10.,21)
F_lon = 100.
mass = 1000.
dydt = odeint(integral, y0, time, args=(F_lon,mass))
In this case I realise it is possible to solve algebraically for the implicit variable; however, in my actual scenario there is a lot of logic between F_r and the evaluation of a, and algebraic manipulation fails.
I believe the DAE could be solved using MATLAB's ode15i function, but I'm trying to avoid that scenario if at all possible.
My question is: is there a way to solve implicit ODE functions (DAEs) in Python (preferably with SciPy)? And is there a better way to pose the problem above to do so?
As a last resort, it may be acceptable to pass a from the previous time-step. How could I pass dydt[1] back into the function after each time-step?
Quite old, but worth updating so it may be useful for anyone who stumbles upon this question. There are quite a few packages currently available in Python that can solve implicit ODEs.
GEKKO (https://github.com/BYU-PRISM/GEKKO) is one such package; it specializes in dynamic optimization for mixed-integer, nonlinear optimization problems, but can also be used as a general-purpose DAE solver.
The above "pretend physics" problem can be solved in GEKKO as follows.
import numpy as np
import matplotlib.pyplot as plt
from gekko import GEKKO

m = GEKKO()
m.time = np.linspace(0, 100, 101)
F_l = m.Param(value=100)
mass = m.Param(value=1000)
m.options.IMODE = 4  # dynamic simulation mode
m.options.NODES = 3
F_r = m.Var(value=0)
x = m.Var(value=0)
v = m.Var(value=5, lb=0)
a = m.Var(value=0, lb=0)
m.Equation(x.dt() == v)
m.Equation(v.dt() == a)
m.Equation(F_r == (((1-a)/3)**2 + (2*(1+a)/3)**2) * v)
m.Equation(a == (F_l - F_r)/mass)
m.solve(disp=False)
plt.plot(m.time, x.value)
plt.show()
If algebraic manipulation fails, you can go for a numerical solution of your constraint, running for example fsolve at each timestep:
import sys
from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import fsolve
y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 10.
mass = 1000.
def F_r(a, v):
    return (((1 - a) / 3) ** 2 + (2 * (1 + a) / 3) ** 2) * v
def constraint(a, v):
    return (F_lon - F_r(a, v)) / mass - a
def integral(y, _):
    v = y[1]
    a, _, ier, mesg = fsolve(constraint, 0, args=[v, ], full_output=True)
    if ier != 1:
        print("I couldn't solve the algebraic constraint, error:\n\n", mesg)
        sys.stdout.flush()
    return [v, a[0]]  # fsolve returns an array; take the scalar
dydt = odeint(integral, y0, time)
Clearly this will slow down your time integration. Always check that fsolve finds a good solution, and flush the output so that you notice a failure as it happens and can stop the simulation.
As for how to "cache" the value of a variable from the previous timestep, you can exploit the fact that default arguments are evaluated only once, at function definition:
from numpy import linspace
from scipy.integrate import odeint
#you can choose a better guess using fsolve instead of 0
def integral(y, _, F_l, M, cache=[0]):
    v, preva = y[1], cache[0]
    # use the value of 'a' from the previous timestep
    F_r = (((1 - preva) / 3) ** 2 + (2 * (1 + preva) / 3) ** 2) * v
    # calculate the new value
    a = (F_l - F_r) / M
    cache[0] = a
    return [v, a]
y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 100.
mass = 1000.
dydt = odeint(integral, y0, time, args=(F_lon, mass))
Notice that in order for the trick to work the cache parameter must be mutable, and that's why I use a list. See this link if you are not familiar with how default arguments work.
Notice that the two codes DO NOT produce the same result, and you should be very careful using the value at the previous timestep, both for numerical stability and precision. The second is clearly much faster though.
