Unexpected behavior with Scipy.integrate.odeint on discontinuous function - python

I am trying to solve the following ordinary differential equation:
f'(t) = -2 if 300 <= t <= 303 else 0
import numpy as np
import scipy.integrate as integr
import matplotlib.pyplot as plt

Y0 = 25

def f(Y, t):
    a = -2 if 300 <= t <= 303 else 0
    return a

T = np.linspace(0, 500, 5000)
sol = integr.odeint(f, Y0, T)
plt.plot(T, sol)
plt.show()
However, the result is only a flat line: https://i.stack.imgur.com/KAK4F.png
Whereas it works fine if the interval is bigger: 150 <= t <= 350 instead of 300 <= t <= 303.
Any idea why?
Thanks in advance

odeint (which is a wrapper around LSODA), like most ODE solvers, can't really deal with a discontinuity like this in one fell swoop. ODE solvers normally assume a smooth solution. The solution is flat because odeint is likely taking such large time steps that it steps right past the t = 300 to 303 region.
In these situations it is best to do a separate integration for each smooth part. So integrate from t = 0 to 300, then stop. Use the solution at t = 300 as the initial condition to integrate from 300 to 303, then stop. Then use the solution at t = 303 to integrate from 303 to 500.
There are some packages which can automate this... I know Assimulo has this feature. But I don't know how to use it.
Here is a clunky solution with odeint:
import numpy as np
import scipy.integrate as integr
import matplotlib.pyplot as plt

def f1(y, t):
    return 0.0

def f2(y, t):
    return -2.0

y0 = np.array([25])

t1 = np.linspace(0, 300, 1000)
sol1 = integr.odeint(f1, y0, t1)

t2 = np.linspace(300, 303, 1000)
sol2 = integr.odeint(f2, sol1[-1], t2)

t3 = np.linspace(303, 500, 1000)
sol3 = integr.odeint(f1, sol2[-1], t3)

sol = np.concatenate((sol1, sol2, sol3))
t = np.concatenate((t1, t2, t3))

plt.plot(t, sol)
plt.show()
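If you prefer to keep a single call, a lighter-weight workaround (my own sketch, not part of the answer above) is to cap odeint's internal step size with hmax so the solver cannot step over the short window; tcrit can additionally flag the break points. This papers over the discontinuity rather than treating it properly, but it is often good enough here:
import numpy as np
import scipy.integrate as integr
import matplotlib.pyplot as plt

def f(Y, t):
    # discontinuous right-hand side from the question
    return -2.0 if 300 <= t <= 303 else 0.0

Y0 = 25
T = np.linspace(0, 500, 5000)
# hmax caps the internal step so LSODA cannot jump over [300, 303];
# tcrit marks the break points so the solver takes extra care near them
sol = integr.odeint(f, Y0, T, hmax=1.0, tcrit=np.array([300.0, 303.0]))

plt.plot(T, sol)
plt.show()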

Related

Scipy solve_ivp not giving expected solutions

I need to solve a 2nd order ODE which I have decoupled into two first order ODEs. I've tried solving it using solve_ivp, but it doesn't seem to provide the solution I expect. I have provided the code below.
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import derivative
from scipy.integrate import solve_ivp
from matplotlib import rc

rc('text', usetex=True)

V0 = 2*10**-10
A = 0.130383
f = 0.129576
phi_i_USR2 = 6.1
phi_Ni_USR2 = 1.2

def V_USR2(phi):
    return V0*(np.tanh(phi/np.sqrt(6)) + A*np.sin(1/f*np.tanh(phi/np.sqrt(6))))**2

def V_phi_USR2(phi):
    return derivative(V_USR2, phi)

N = np.linspace(0, 66, 1000)

def USR2(N, X):
    phi, g = X
    return [g, (g**2/2 - 3)*(V_phi_USR2(phi)/V_USR2(phi) + g)]

X0 = [phi_i_USR2, phi_Ni_USR2]
sol = solve_ivp(USR2, (0, 66), X0, method='LSODA', t_eval=N)
phi_USR2 = sol.y[0]
phi_N_USR2 = sol.y[1]
N_USR2 = sol.t

plt.plot(phi_USR2, phi_N_USR2)
plt.xlabel("$\phi$")
plt.ylabel("$\phi'$")
plt.title("Phase plot for USR2")
plt.show()
solve_ivp gives me the following plot:
The problem is that there is supposed to be an oscillation near the origin, which is not well captured by solve_ivp. However, for the same equation and initial conditions, Mathematica gives me exactly what I want:
I want the same plot in Python as well. I tried various methods in solve_ivp such as RK45, LSODA, Radau and BDF, but all of them show the same problem (LSODA tries to capture an oscillation but fails, and the other methods don't even move past the point where the oscillation starts). It would be great if someone could shed some light on the problem. Thanks in advance.

Avoiding divergent solutions with odeint? shooting method

I am trying to solve an equation in Python. Basically what I want to do is to solve the equation:
(1/x^2)*d(Gam*dL/dx)/dx + (a^2*x^2/Gam - m^2)*L = 0
This is the Klein-Gordon equation for a massive scalar field in a Schwarzschild spacetime. Suppose that we know m and Gam = x^2 - 2*x. The initial/boundary conditions that I know are L(2+epsilon) = 1 and L(infty) = 0. Notice that the asymptotic behavior of the equation is
L(x --> infty) --> Exp[(m^2-a^2)*x]/x and Exp[-(m^2-a^2)*x]/x
Then, if a^2 > m^2 we will have oscillatory solutions, while if a^2 < m^2 we will have one divergent and one decaying solution.
What I am interested in is the decaying solution. However, when I transform the above equation into a system of first-order differential equations and use the shooting method to find the "a" that gives me the behavior I am after, I always end up with a divergent solution. I suppose this happens because odeint always finds the divergent asymptotic solution. Is there a way to avoid this, or to tell odeint that I am interested in the decaying solution? If not, do you know a way I could solve this problem? Maybe using another method for solving my system of differential equations? If so, which method?
Basically what I am doing is adding a new system of equations for "a",
(d^2a/dx^2 = 0, da/dx(2+epsilon) = 0, a(2+epsilon) = a_0)
in order to treat "a" as a constant. Then I consider different values of "a_0" and ask whether my boundary conditions are fulfilled.
Thanks for your time. Regards,
Luis P.
I am incorporating the value at infinity by using the asymptotic behavior, which means I have a relation between the field and its derivative. I will post the code in case it is helpful:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from math import *
from scipy.integrate import ode
These are the initial conditions for Schwarzschild. The field is invariant under rescaling, so I can use $L(2+\epsilon)=1$:
def init_sch(u_sch):
    om = u_sch[0]
    return np.array([1, 0, om, 0])  # conditions near the horizon: [L_c, dL/dx, a, da/dx]
This is our system of equations:
def F_sch(IC, r, rho_c, m, lam, l, j=0, mu=0):
    L = IC[0]
    ph = IC[1]
    om = IC[2]
    b = IC[3]
    Gam_sch = r**2. - 2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.)) - om**2.*r**4.*L/Gam_sch + (m**2. + lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.
    return [dR_dr, dph_dr, dom_dr, db_dr]
Then I try different values of "om" and ask whether my boundary conditions are fulfilled. p_sch holds the parameters of my model. In general what I want to do is a little more complicated, and I will need more parameters than in the purely massive case. However, I need to start with the easiest case, which is what I am asking about here:
p_sch = (1, 1, 0, 0)  # [rho_c, m, lam, l]; lam and l are for a more complicated case
ep = 0.2
ep_r = 0.01
r_end = 500
n_r = 500000
n_omega = 1000
omega = np.linspace(p_sch[1]-ep, p_sch[1], n_omega)
r = np.linspace(2+ep_r, r_end, n_r)
tol = 0.01
a = 0
for j in range(len(omega)):
    print('trying with $omega =$', omega[j])
    omeg = [omega[j]]
    ini = init_sch(omeg)
    Y = odeint(F_sch, ini, r, p_sch, mxstep=50000000)
    print(Y[-1, 0])
    # Here I ask if my asymptotic behavior is fulfilled or not.
    # This should basically be my value at infinity.
    if abs(Y[-1, 0]*((p_sch[1]**2. - Y[-1, 2]**2.)**(1/2.) + 1./(r[-1])) + Y[-1, 1]) < tol:
        print(j, 'times iterations in omega')
        print("R'(inf)) = ", Y[-1, 0])
        print("\omega", omega[j])
        omega_1 = [omega[j]]
        a = 10
        break
    if a > 1:
        break
Basically what I want to do here is to solve the system of equations for different initial conditions and find a value of "a" (called "om" in the code) that comes close to satisfying my boundary conditions. I need this because afterwards I can feed such an initial guess to a secant method and try to find a better value for "a". However, whenever I run this code I get divergent solutions, which is of course behavior I am not interested in. I am trying the same thing with scipy.integrate.solve_bvp, but when I run the following code:
from IPython import get_ipython
get_ipython().magic('reset -sf')
import numpy as np
import matplotlib.pyplot as plt
from math import *
from scipy.integrate import solve_bvp
def bc(ya, yb, p_sch):
    m = p_sch[1]
    om = p_sch[4]
    tol_s = p_sch[5]
    r_end = p_sch[6]
    return np.array([ya[0]-1, yb[0]-tol_s, ya[1], yb[1]+((m**2-yb[2]**2)**(1/2)+1/r_end)*yb[0], ya[2]-om, yb[2]-om, ya[3], yb[3]])

def fun(r, y, p_sch):
    rho_c = p_sch[0]
    m = p_sch[1]
    lam = p_sch[2]
    l = p_sch[3]
    L = y[0]
    ph = y[1]
    om = y[2]
    b = y[3]
    Gam_sch = r**2. - 2.*r
    dR_dr = ph
    dph_dr = (1./Gam_sch)*(2.*(1.-r)*ph + L*(l*(l+1.)) - om**2.*r**4.*L/Gam_sch + (m**2. + lam*L**2.)*r**2.*L)
    dom_dr = b
    db_dr = 0.*y[3]
    return np.vstack((dR_dr, dph_dr, dom_dr, db_dr))
eps_r=0.01
r_end = 500
n_r = 50000
r = np.linspace(2+eps_r,r_end,n_r)
y = np.zeros((4,r.size))
y[0]=1
tol_s = 0.0001
p_sch= (1,1,0,0,0.8,tol_s,r_end)
sol = solve_bvp(fun,bc, r, y, p_sch)
I am obtaining this error: ValueError: bc return is expected to have shape (11,), but actually has (8,).

Scipy odeint giving index out of bounds errors

I am trying to solve a differential equation in Python using Scipy's odeint function. The equation is of the form dy/dt = w(t) where w(t) = w1*(1 + A*sin(w2*t)) for some parameters w1, w2, and A. The code I've written works for some parameters, but for others I get index out of bounds errors.
Here's some example code that works
import numpy as np
import scipy.integrate as integrate
t = np.arange(1000)
w1 = 2*np.pi
w2 = 0.016*np.pi
A = 1.0
w = w1*(1+A*np.sin(w2*t))
def f(y, t0):
    return w[t0]

y = integrate.odeint(f, 0, t)
Here's some example code that doesn't work
import numpy as np
import scipy.integrate as integrate
t = np.arange(1000)
w1 = 0.3*np.pi
w2 = 0.005*np.pi
A = 0.15
w = w1*(1+A*np.sin(w2*t))
def f(y, t0):
    return w[t0]

y = integrate.odeint(f, 0, t)
The only thing that changes between these is that the three parameters w1, w2, and A are smaller in the second, but the second one always gives me the following error
line 13, in f
return w[t0]
IndexError: index 1001 is out of bounds for axis 0 with size 1000
This error persists even after restarting Python and running the second code first. I've tried other parameters; some seem to work, but others give me different index out of bounds errors. Some say 1001 is out of bounds, some say 1000, some say 1008, etc.
Changing the initial condition on y (the second input for odeint, which I have as 0 in the code above) also changes the number in the index error, so it might be that I'm misunderstanding what to put here. I wasn't told what the initial conditions should be, other than that y is used as a phase of a signal, so I presumed it to be initially 0.
What you want to do is
def w(t):
    return w1*(1+A*np.sin(w2*t))

def f(y, t0):
    return w(t0)
Array indices are typically integers, while time arguments and values of solutions of differential equations are typically real numbers. Thus there is some conceptual difficulty in invoking w[t0].
You might also try to integrate the function w directly; there is no inherent difficulty in this example.
As for coupled systems, you solve them as coupled systems.
def w(t):
    return w1*(1+A*np.sin(w2*t))

def f(y, t):
    wt = w(t)
    return np.array([wt, wt*np.sin(y[1]-y[0])])
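Putting the fix together, here is a minimal self-contained sketch (my own assembly, reusing the parameters from the failing example in the question) for the single-equation case:
import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt

w1 = 0.3*np.pi
w2 = 0.005*np.pi
A = 0.15

def w(t):
    # evaluate the forcing at the real-valued time the solver asks for,
    # instead of indexing into a precomputed array
    return w1*(1 + A*np.sin(w2*t))

def f(y, t):
    return w(t)

t = np.arange(1000)
y = integrate.odeint(f, 0.0, t)

plt.plot(t, y)
plt.show()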

Plotting 2D integral function in python

Here are my first steps in the NumPy world.
The target is plotting the 2-D function below as a 3-D mesh:
N = \frac{n}{2\sigma\sqrt{\pi}} e^{-\frac{n^{2}x^{2}}{4\sigma^{2}}}
That could be done as a piece of cake in MATLAB with the snippet below:
[x,n] = meshgrid(0:0.1:20, 1:1:100);
mu = 0;
sigma = sqrt(2)./n;
f = normcdf(x,mu,sigma);
mesh(x,n,f);
But the result is ugly enough to drive me to try Python's capabilities for generating scientific plots.
I searched around and found that the primary steps toward the above goal in Python might be achieved by the snippet below:
from matplotlib.patches import Polygon
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

sigma = 1

def integrand(x, n):
    return (n/(2*sigma*np.sqrt(np.pi)))*np.exp(-(n**2*x**2)/(4*sigma**2))

t = np.linspace(0, 20, 0.01)
n = np.linspace(1, 100, 1)
lower_bound = -100000000000000000000  # -inf
upper_bound = t
tt, nn = np.meshgrid(t, n)
real_integral = quad(integrand(tt, nn), lower_bound, upper_bound)
Axes3D.plot_trisurf(real_integral, tt, nn)
Edit: Following Greg's advice and some further investigation, the code above is the most up-to-date snippet.
Here is the generated exception:
RuntimeError: infinity comparisons don't work for you
It is seemingly referring to the quad call...
Would you please help me handle this integrating-and-plotting problem?...
Best
Just a few hints to get you in the right direction.
numpy.meshgrid can do the same as MATLAB's function:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html
When you have x and n you can do math just like in MATLAB:
sigma = numpy.sqrt(2)/n
(in Python, multiplication and division are element-wise by default; no dot operator needed)
scipy has a lot more advanced functions, see for example How to calculate cumulative normal distribution in Python for a 1D case.
For plotting you can use matplotlib's pcolormesh:
import matplotlib.pyplot as plt
plt.pcolormesh(x,n,real_integral)
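Putting those hints together, here is a rough sketch (my own, assuming the goal is the same cumulative normal surface as in the MATLAB snippet, with sigma = sqrt(2)/n):
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# same grid as in the MATLAB example
x, n = np.meshgrid(np.arange(0, 20.1, 0.1), np.arange(1, 101))
sigma = np.sqrt(2)/n                 # element-wise, no ./ needed
f = norm.cdf(x, loc=0, scale=sigma)  # equivalent of normcdf(x, mu, sigma)

plt.pcolormesh(x, n, f)
plt.colorbar()
plt.show()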
Hope this helps until someone can give you a more detailed answer.

Integrated for loop to calculate values over a grid/mesh

I am fairly new to Python, and I am trying to plot a contour plot of the water surface over a 2D mesh.
At the moment the code runs, but I am not getting the right solution. I have checked the formula carefully and I am fairly confident that the issue is with my loops.
I want the code to run for each point on my mesh, based on its x and y coordinates.
The mesh is 100 x 100, resulting in 10000 nodes. I have posted my code below; I believe the problem is with the nested for loops. Any advice on what I might be able to try would be great.
Apologies for the length of the code...
import numpy as np
import matplotlib.pyplot as plt
import math
import sys
from math import sqrt
import decimal

t = 0
n = 5
l = 100000
d = 100
g = 9.81
nx, ny = (100, 100)
x5 = np.linspace(-100000, 100000, nx)
y5 = np.linspace(-100000, 100000, ny)
xv, yv = np.meshgrid(x5, y5)
x = np.arange(-100000, 100000, 2000)
y = np.arange(-100000, 100000, 2000)
c = np.arange(len(x))
x2 = np.arange(len(x))
y2 = np.arange(len(x))
t59 = np.arange(1, 10001, 1)
h = np.arange(len(t59))
om2 = 1.458*(10**-4.0)
phi = 52
phirad = phi*(math.pi/180)
f = om2*math.sin(phirad)
A = (((d+n)**2.0)-(d**2.0))/(((d+n)**2.0)+(d**2.0))
w = (((8*g*d)/(l**2))+(f**2))**0.5
a = ((1-(A**2.0))**0.5)/(1-(A*math.cos(w*t)))
b = (((1-(A**2.0))/(1-(A*math.cos(w*t)))**2.0)-1)
l2 = l**2.0
for i in range(len(x)):
    for j in range(len(y)):
        h[i] = d*(a-1-((((x[i]**2.0)+(y[j]**2.0))/l2)*b))
h5 = np.reshape(h, (100, 100))
plt.figure(1)
plt.contourf(x5, y5, h5)
plt.colorbar()
plt.show()
OK, apologies, I didn't make myself very clear. I'm hoping to produce a parabolic basin with h values varying between roughly -10 and 10. Instead I am getting enormous values and completely the wrong shape. I thought the for loop needed to be more like:
for i in range(len(x)):
    for j in range(len(y)):
        h[i][j] = d*(a-1-((((x[i][j]**2.0)+(y[i][j]**2.0))/l2)*b))
Is that clearer? Let me know if not.
The first thing is that the complete loop is not necessary.
h = d * (a - 1 - (x[None,:]**2 + y[:,None]**2) / l2 * b)
Here the magic comes from the None in the indexing: x[None, :] means "x as a row vector, copied to as many rows as needed", and y[:, None] means "y as a column vector, copied to as many columns as needed".
This might be easiest to understand with an example:
import numpy as np
x = np.arange(5)
y = np.arange(0, 50, 10)
print(x, y, x[None, :] + y[:, None])
The one-liner above gives:
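[0 1 2 3 4] [ 0 10 20 30 40] [[ 0  1  2  3  4]
 [10 11 12 13 14]
 [20 21 22 23 24]
 [30 31 32 33 34]
 [40 41 42 43 44]]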
Some manual calculations show this should be rather ok.
d = 100
a = 1.05
b = 0.1025
For a corner point at (1e5, 1e5), we have 2e10 in the addition, so the values do not look badly off.
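For completeness, here is a short sketch (my own, not from the original answer) of how the vectorised line might slot into the question's setup, using the same constants evaluated at t = 0:
import numpy as np
import matplotlib.pyplot as plt

# constants from the question, at t = 0
t, n, l, d, g = 0.0, 5.0, 100000.0, 100.0, 9.81
f = 1.458e-4 * np.sin(np.deg2rad(52))
A = ((d + n)**2 - d**2) / ((d + n)**2 + d**2)
w = np.sqrt(8*g*d/l**2 + f**2)
a = np.sqrt(1 - A**2) / (1 - A*np.cos(w*t))
b = (1 - A**2) / (1 - A*np.cos(w*t))**2 - 1

x = np.linspace(-100000, 100000, 100)
y = np.linspace(-100000, 100000, 100)
# broadcasting replaces the nested loops: h has shape (len(y), len(x))
h = d * (a - 1 - (x[None, :]**2 + y[:, None]**2) / l**2 * b)

plt.contourf(x, y, h)
plt.colorbar()
plt.show()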
