How to solve a separable differential equation using SymPy? - python

I cannot figure out how to solve this separable differential equation using sympy. Help would be greatly appreciated.
y′ = (y − 4)(y − 2),  y(0) = 5
Here was my attempt, thanks in advance!!!
import sympy as sp
x,y,t = sp.symbols('x,y,t')
y_ = sp.Function('y_')(x)
diff_eq = sp.Eq(sp.Derivative(y_,x), (y-4)*(y-2))
ics = {y_.subs(x,0):5}
sp.dsolve(diff_eq, y_, ics = ics)
The output is y(x) = x*y^2 - 6*x*y + 8*x + 5.

The primary error is the introduction of y_: it leaves y as a constant parameter of the ODE, so you get the wrong solution.
If you correct this, you get an error of "too many solutions for the integration constant". This is a bug caused by not simplifying the integration constant after it first occurs: multiplication and addition of constants should just be absorbed, and an additive constant in an exponent should become a multiplicative factor for the exponential. As it stands, exp(2*C_1) == 3 has two solutions if C_1 is considered as an angle (a bit of tortured logic from computing roots in the complex plane).
Newer versions can actually solve this fully if you pass the third hint from the classification list ('separable', '1st_exact', '1st_rational_riccati', ...), which takes a different route than the partial-fraction decomposition used by the first two:
from sympy import *
x = Symbol('x')
y = Function('y')(x)
dsolve(Eq(y.diff(x), (y-2)*(y-4)), y,
       ics={y.subs(x, 0): 5},
       hint='1st_rational_riccati')
returning
y(x) = 2*(6 - exp(2*x))/(3 - exp(2*x))
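As a quick sanity check (my addition, not part of the original answer), the returned expression can be verified against both the initial condition and the ODE:
from sympy import symbols, exp, simplify

x = symbols('x')
sol = 2*(6 - exp(2*x))/(3 - exp(2*x))
assert sol.subs(x, 0) == 5                               # y(0) = 5
assert simplify(sol.diff(x) - (sol - 4)*(sol - 2)) == 0  # y' = (y-4)(y-2)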

Solve Python linear mathematical optimization problem - vector - easy constraint

I do not know how to set up and solve the following linear optimization problem in Python. Can you help?
R and Beta are vectors of length n with known constants.
x is a vector of length n but is unknown - to be optimized.
constraints:
Beta * x = 200 (dot product: elementwise product, then sum)
x(i) > 0
Find optimal values of the vector x such that we maximize:
max(R * x)
It doesn't seem very complicated, but I did not succeed in setting up the problem with the scipy library. Any help is appreciated.
That looks like a linear programming problem; take a look at the scipy library (scipy.optimize.linprog) to solve it.
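A minimal sketch of how that could look with scipy.optimize.linprog; the R and Beta values below are made-up examples, since the question doesn't give them, linprog minimizes (so the objective is negated), and x(i) > 0 is relaxed to the usual x(i) >= 0 bound:
import numpy as np
from scipy.optimize import linprog

R = np.array([1.0, 3.0, 2.0])     # example data, not from the question
Beta = np.array([0.5, 1.0, 2.0])  # example data, not from the question

res = linprog(
    c=-R,                         # maximize R @ x  ==  minimize -R @ x
    A_eq=Beta.reshape(1, -1),     # dot-product constraint Beta @ x = 200
    b_eq=[200.0],
    bounds=[(0, None)] * len(R),  # x_i >= 0
)
print(res.x, -res.fun)            # optimal x and the maximized objective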

Finding all roots of a transcendental equation in Python (eg. Bound state solutions of the Finite Square Well potential)

So I'm trying to solve for the eigenvalues of bound state solutions to the finite square well potential, which involves solving the equation tan(z) = sqrt((z0/z)^2 - 1).
I was able to solve it graphically by plotting the two functions, figuring out where they intersect, and then using the intersections as initial guesses in scipy.optimize.root. However, this seems like a fairly cumbersome process. From what I understand, root-finding methods usually require an initial guess and find the root closest to that guess.
What I'm wondering is whether there is a way to find all the roots, or all local minima, in a given range in Python without necessarily providing an initial guess. From the web searches I've done, there seem to be some methods in Mathematica, but in general finding all roots within a good tolerance is impossible except for specific functions. Does my equation happen to fall under one of those specific situations? Below is the code I used with scipy.optimize.root (I plotted the two functions separately to locate the intersections):
import numpy as np
import scipy.optimize as opt

def func(z, z0):
    return np.tan(z) - np.sqrt((z0/z)**2 - 1)

# finding z0
a = 3
V0 = 3
h_bar = 1
m = 1
z0 = a*np.sqrt(2*m*V0)/h_bar

# calculating roots
res = opt.root(func, [1.4, 4.0, 6.2], args=(z0,))
res.x
# output:
# array([1.38165158, 4.11759958, 6.70483966])
There is no general way.
In this specific case, all roots are well localized thanks to the periodicity of the tangent function, so you can simply run a bracketing root finder, e.g. brentq, on each of those intervals.
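A minimal sketch of that idea, assuming the func and z0 from the question: each branch of tan(z) on (k*pi, k*pi + pi/2) contains at most one root of this equation, and the brackets are shrunk slightly to stay clear of the tangent's poles and of the square-root endpoint at z = z0.
import numpy as np
from scipy.optimize import brentq

def func(z, z0):
    return np.tan(z) - np.sqrt((z0/z)**2 - 1)

z0 = 3*np.sqrt(6)  # a*sqrt(2*m*V0)/h_bar with the question's values
eps = 1e-9
roots = []
k = 0
while k*np.pi < z0:
    lo = k*np.pi + eps
    hi = min(k*np.pi + np.pi/2, z0) - eps
    if lo < hi and func(lo, z0)*func(hi, z0) < 0:  # sign change brackets a root
        roots.append(brentq(func, lo, hi, args=(z0,)))
    k += 1
print(roots)  # ~[1.3816, 4.1176, 6.7048]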
This is more of a hack than an answer, but one may succeed in making the process of root finding a bit less tedious by providing a grid of initial guesses and taking the set of solutions found over all the attempts.
Using sympy (whose defaults in nsolve may provide a more robust solver) you could do:
from sympy.solvers import nsolve
from sympy import tan, sqrt, symbols
import numpy as np
a = 3
V0 = 3
h_bar = 1
m = 1
z0 = a*np.sqrt(2*m*V0)/h_bar
z = symbols('z')
expr = tan(z) - sqrt((z0/z)**2 - 1)
# try out a grid of initial conditions and get set of found solutions
# this may fail with some other choice of interval
initial_guesses = np.linspace(0.2, 7, 100)
# candidate solutions
set([nsolve(expr, z, guess) for guess in initial_guesses])

Sympy function derivatives and sets of equations

I'm working with nonlinear systems of equations. These systems are generally a nonlinear vector differential equation.
I now want to work with functions, differentiate them with respect to time and with respect to their time derivatives, and find equilibrium points by solving the nonlinear equations 0 = rhs(eqs).
Similar things are needed to calculate the Euler-Lagrange equations, where you need the derivative of L wrt. diff(x,t).
Now my question is, how do I implement this in Sympy?
My main two problems are that differentiating a Symbol f with respect to t, diff(f, t), gives 0. I can see that
x = Symbol('x', real=True)
diff(x.subs(x, x(t)), t)  # because diff(x, t) => 0
and
diff(x**2, x)
do kind of work.
However, with
x = Function('x')(t)
diff(x, t)
I get this to work, but then I cannot differentiate with respect to the function x itself:
diff(x**2, x)  # DOES NOT WORK
Since I need these things all the time, not only for scalars but also for vectors (using jacobian), I really want this to be a clean and functional workflow.
Which type should I use to initialize my mathematical functions in SymPy in order to avoid strange substitutions?
It only gets worse for matrices, where I cannot get
eqns = Matrix([f1 - 5, f2 + 1])
variabs = Matrix([f1, f2])
nonlinsolve(eqns, variabs)
to work as expected, since it only allows symbols as input. Is there an easy conversion here, like eqns.tolist()? That doesn't work either.
EDIT:
I just found this question, which was answered by recommending expressions and matrices. I want to be able to solve sets of nonlinear equations, build the Jacobian of a vector with respect to another vector, and differentiate with respect to functions as stated above. Can anyone point me in a direction to start a concise workflow for this purpose? I guess the most complex task is calculating the Lie derivative with respect to a vector or list of functions; the rest should be straightforward.
Edit 2:
def substi(expr, variables):
    # replace each plain Symbol w in `variables` by an applied function w(t)
    return expr.subs({w: Function(w.name)(t) for w in variables})
would automate the substitution, such that substi(vector_expr, varlist_vector).diff(t) is not all 0.
Yes, one has to give a function an argument before taking its derivative. But after that, differentiation with respect to x(t) works for me in SymPy 1.1.1, and I can also differentiate with respect to its derivative. Example of an Euler-Lagrange equation derivation:
from sympy import Symbol, Function, diff

t = Symbol("t")
x = Function("x")(t)
L = x**2 + diff(x, t)**2  # Lagrangian
EL = -diff(diff(L, diff(x, t)), t) + diff(L, x)
Now EL is 2*x(t) - 2*Derivative(x(t), t, t) as expected.
That said, there is a built-in method for Euler-Lagrange equations:
EL = euler_equations(L)
would yield the same result, except presented as a differential equation with right-hand side 0: [Eq(2*x(t) - 2*Derivative(x(t), t, t), 0)]
The following defines x to be a function of t
import sympy as s
t = s.Symbol('t')
x = s.Function('x')(t)
This should solve your problem of diff(x,t) being evaluated as 0. But I think you will still run into problems later on in your calculations.
I also work with calculus of variations and Euler-Lagrange equations. In these calculations, x' needs to be treated as independent of x. So it is generally better to use two entirely different variables for x and x', so as not to confuse SymPy with the relationship between those two variables. After we are done with the calculations in SymPy and go back to pen and paper, we can substitute x' back in for the second variable.
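A minimal sketch of this two-variable approach (my own illustration, using the toy Lagrangian x**2 + xdot**2 from the answer above):
from sympy import symbols, diff, Function

x, xdot, t = symbols('x xdot t')
L = x**2 + xdot**2        # treat x and x' as independent plain symbols
dL_dx = diff(L, x)        # partial dL/dx
dL_dxdot = diff(L, xdot)  # partial dL/dx'
# substitute the actual functions of time only when d/dt is finally needed
xf = Function('x')(t)
subs_map = {x: xf, xdot: xf.diff(t)}
EL = diff(dL_dxdot.subs(subs_map), t) - dL_dx.subs(subs_map)
# EL == 2*Derivative(x(t), (t, 2)) - 2*x(t)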

Solving improper integral without approximating

I'm having trouble solving this integral in python. The function being integrated is not defined on the boundaries of integration.
I've found a few more questions similar to this, but all were very specific replies to the issue in particular.
I don't want to approximate the integral too much, if possible not at all, as the reason I'm doing this integral in the first place is to avoid an approximation.
Is there any way to solve this integral?
import numpy as np
from scipy import integrate

m_Earth_air = (28.0134*0.78084)+(31.9988*0.209476)+(39.948*0.00934)+(44.00995*0.000314)+(20.183*0.00001818)+(4.0026*0.00000524)+(83.80*0.00000114)+(131.30*0.000000087)+(16.04303*0.000002)+(2.01594*0.0000005)
Tb0 = 288.15
Lb0 = -6.5
Hb0 = 0.0

def Tm_0(z):
    return Tb0 + Lb0*(z - Hb0)

k = 1.38*10**-19         # cm^2.kg/s^2.K, Boltzmann constant
mp = 1.67262177*10**-27  # kg
Rad = 637100000.0        # planet radius, cm
g0 = 980.665             # cm/s^2

def g(z):
    return g0*((Rad/(Rad + z))**2.0)

def scale_height0(z):
    return k*Tm_0(z*10**-5)/(m_Earth_air*mp*g(z))

def functionz(z, zvar):
    return np.exp(-zvar/scale_height0(z))*((Rad + zvar)/(Rad + z))/np.sqrt(((Rad + zvar)/(Rad + z))**2.0 - 1.0)

def chapman0(z):
    return (1.0/scale_height0(z))*integrate.quad(lambda zvar: functionz(z, zvar), z, np.inf)[0]

print(chapman0(1000000))
print(chapman0(5000000))
The first block of variables and definitions is fine. The issue is in functionz(z, zvar) and its integration.
Any help very much appreciated !
Unless you can solve the integral analytically, there is no way to evaluate it without an approximation over its bounds. This isn't a Python problem but a calculus problem in general, which is why math classes take such great pains to show you the numeric approximations.
If you don't want the result to differ too much from the true value, choose a small epsilon with a method that converges fast.
Edit- Clarity on last statement:
Epsilon - ɛ - refers to the step size through the bounds of integration, the delta x. Remember that the numeric approximation methods all slice the integral into slivers and add them back up; think of epsilon as the width of each sliver: the smaller the sliver, the better the approximation. You can specify these in numerical packages.
A method that converges fast approaches the true value of the integral quickly, so the error of approximation is small for each sliver. For example, the Riemann sum is a naive method which treats each sliver as a rectangle, while the trapezoid rule connects the beginning and the end of the sliver with a line. Of these two, the trapezoid rule typically converges faster, as it tries to account for the change within the shape. (Neither is typically used in practice, as there are better approximations for most functions.)
Both of these variables change the computational expense of the calculation. Typically epsilon is the most expensive to change, which is why it is important to choose a good method of approximation (some can differ by an order of magnitude in error for the same epsilon).
All of this will depend on how much error your calculation can tolerate.
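To make the convergence point concrete, here is a small toy illustration (my own example, not from the original answer) on the integral of x**2 over [0, 1], whose exact value is 1/3:
import numpy as np

xs = np.linspace(0, 1, 101)            # 100 slivers
ys = xs**2
riemann = np.sum(ys[:-1]*np.diff(xs))  # left Riemann sum (rectangles)
trapezoid = np.trapz(ys, xs)           # trapezoid rule
print(abs(riemann - 1.0/3.0))          # ~5e-3
print(abs(trapezoid - 1.0/3.0))        # ~1.7e-5: converges much faster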
It often helps to eliminate possible numerical instabilities by rescaling variables. In your case zvar starts from 1e6, which is probably causing problems due to some implementation details in quad(). If you scale it as y = zvar / z, so that the integration starts from 1 it seems to converge pretty well for z = 1e6:
def functiony(z, y):
    return np.exp(-y*z/scale_height0(z))*(Rad + y*z)/(Rad + z)/np.sqrt(((Rad + y*z)/(Rad + z))**2.0 - 1.0)

def chapman0y(z):
    return (1.0/scale_height0(z))*integrate.quad(lambda y: functiony(z, y), 1, np.inf)[0]
>>> print(chapman0y(1000000))
1.6217257661844094e-06
(I set m_Earth_air = 28.8e-3; this constant is missing in your code, and I assumed it is the molar mass of air in kg/mole.)
As for z = 5e6, scale_height0(z) is negative, which gives a huge positive value under the exponent, making the integral divergent on the infinity.
I had a similar issue and found that SciPy's quad needs you to specify additional parameters: limit=1000 (an upper bound on the number of subintervals), epsabs=1e-1000 and epsrel=1e1 work for everything I've tried. I.e. in this case:
def chapman0(z):
    return (1.0/scale_height0(z))*integrate.quad(lambda zvar: functionz(z, zvar), z, np.inf, limit=1000, epsabs=1e-1000, epsrel=1e1)[0]
# results:
0.48529410529321887
-1.276589093231806e+21
Seems to be a high absolute error tolerance but for integrals that don't rapidly converge it seems to fix the issue. Just posting for others with similar problems as this post is quite dated. There are algorithms in other packages that converge faster but none that I've found in SciPy. The results are based on the posted code (not the selected answer).

On ordinary differential equations (ODE) and optimization, in Python

I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write a quotient whose denominator is a product as a sum of simpler quotients times some constants (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y, giving -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (It is clear that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes.) Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is neither 0 nor 1), and then using y = 0.8 gives a value for t. Rearranging, t = 100 * (ln( y / (1-y) ) - C), so y will reach 0.8 within a moderate range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, and then you can plot the function as well.
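As a quick numeric check of the closed form (assuming, for illustration, the initial condition y(0) = 0.5, which the question does not specify):
import numpy as np

y0 = 0.5
C = np.log(y0/(1 - y0))              # from t = 0: C = ln(y0/(1-y0))
t = 100*(np.log(0.8/(1 - 0.8)) - C)  # solve 0.01*t + C = ln(y/(1-y)) for t
print(t)                             # ~138.6, well inside 0 < t < 3000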
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to integrate numerically. Initially we only know the value of the function at time zero, y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out the value of the function in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
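A bare-bones Euler loop for this problem might look like the following sketch (again assuming y(0) = 0.5 for illustration):
y, t, dt = 0.5, 0.0, 0.01
while y < 0.8 and t < 3000:
    y += 0.01*y*(1 - y)*dt  # y(t+dt) ~= y(t) + y'(t)*dt
    t += dt
print(t, y)                 # t ~ 138.6 with this step size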
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)
t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)  # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1 - y)

y0 = np.array([1])  # initial value (note: y = 1 is a fixed point of this ODE,
                    # so this y0 gives a constant solution; try e.g. 0.5)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (dimensions of array ts matches ys). (This may not work first time because I am constructing from memory).
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
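A rough sketch of that last step (assuming the ts and ys arrays from above, computed with a non-fixed-point initial value such as y0 = 0.5):
import numpy as np
from scipy.interpolate import interp1d

y_flat = ys.ravel()
idx = np.argmax(y_flat >= 0.8)  # first index at or past the target
t_of_y = interp1d(y_flat[idx-1:idx+1], ts[idx-1:idx+1])  # local inverse y -> t
print(t_of_y(0.8))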
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; odeint on the scipy website has examples for systems such as coupled springs, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. These exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation, which uses backtracking (hence it is not optimal, as it is a bolt-on addition to an integrator that does not have root finding of its own): https://github.com/scipy/scipy/pull/4904/files
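For what it's worth, newer SciPy versions expose exactly this capability through the events argument of scipy.integrate.solve_ivp; a minimal sketch for the toy problem, again assuming y(0) = 0.5:
from scipy.integrate import solve_ivp

def rhs(t, y):
    return 0.01*y*(1 - y)

def hit_target(t, y):
    return y[0] - 0.8       # solve_ivp locates roots of this event function

hit_target.terminal = True  # stop integrating at the first crossing

sol = solve_ivp(rhs, (0, 3000), [0.5], events=hit_target)
print(sol.t_events[0])      # ~[138.6]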
