Having problems with scipy.integrate.solve_ivp - python

I am trying to use scipy.integrate.solve_ivp to calculate the solutions to Newton's gravitational equation for my n-body simulation, but I am confused about how the function is passed into solve_ivp. I have the following code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D

G = 6.67408e-11
m_sun = 1988500e24
m_jupiter = 1898.13e24
m_earth = 5.97219e24
au = 149597870.700e3
v_factor = 1731460
year = 31557600.e0

init_s = np.array([-6.534087946884256E-03*au, 6.100454846284101E-03*au, 1.019968145073305E-04*au,
                   -6.938967653087248E-06*v_factor, -5.599052606952444E-06*v_factor, 2.173251724105919E-07*v_factor])
init_j = np.array([2.932487231769548E+00*au, -4.163444383137574E+00*au, -4.833604407653648E-02*au,
                   6.076788230491844E-03*v_factor, 4.702729516645153E-03*v_factor, -1.554436340872727E-04*v_factor])

variables_s = init_s
variables_j = init_j

N = 2
tStart = 0e0
t_End = 25*year
Nt = 2000
dt = t_End/Nt
temp_end = dt
t = tStart
domain = (t, temp_end)

planetsinit = np.vstack((init_s, init_j))
planetspos = planetsinit[:, 0:3]
mass = np.vstack((1988500e24, 1898.13e24))

def weird_division(n, d):
    return n / d if d else 0

variables_save = np.zeros((N, 6, Nt))
variables_save[:, :, 0] = planetsinit
pos_s = planetspos[0]
pos_j = planetspos[1]

while t < t_End:
    t_index = int(weird_division(t, dt))
    for index in range(len(planetspos)):
        for otherindex in range(len(planetspos)):
            if index != otherindex:
                x1_p1, x2_p1, x3_p1 = planetsinit[index, 0:3]
                x1_p2, x2_p2, x3_p2 = planetsinit[otherindex, 0:3]
                m = mass[otherindex]

                def f_grav(t, y):
                    x1_p1, x2_p1, x3_p1, v1_p1, v2_p1, v3_p1 = y
                    x1_diff = x1_p1 - x1_p2
                    x2_diff = x2_p1 - x2_p2
                    x3_diff = x3_p1 - x3_p2
                    dydt = [v1_p1,
                            v2_p1,
                            v3_p1,
                            -(x1_diff)*G*m/((x1_diff)**2+(x2_diff)**2+(x3_diff)**2)**(3/2),
                            -(x2_diff)*G*m/((x1_diff)**2+(x2_diff)**2+(x3_diff)**2)**(3/2),
                            -(x3_diff)*G*m/((x1_diff)**2+(x2_diff)**2+(x3_diff)**2)**(3/2)]
                    return dydt

                solution = solve_ivp(fun=f_grav, t_span=domain, y0=planetsinit[index])
                planetsinit[index] = solution['y'][0:6, -1]
                variables_save[index, :, t_index] = solution['y'][0:6, -1]
                planetspos[index] = planetsinit[index][0:3]
    t += dt
    temp_end += dt
    domain = (t, temp_end)

pos_s = variables_save[0, 0:3, :]
pos_j = variables_save[1, 0:3, :]

plt.plot(variables_save[0, 0:3, :][0], variables_save[0, 0:3, :][1])
plt.plot(variables_save[1, 0:3, :][0], variables_save[1, 0:3, :][1])
The code above works very nicely and produces a stable orbit. However, when I calculate the acceleration outside the function and feed it into f_grav, something goes wrong and the orbit is no longer stable. This perplexes me, because it seems like I have passed in exactly the same inputs, which leads me to think it may be the way the function f_grav is evaluated by the solve_ivp integrator. To calculate the acceleration outside, all I do is change the code in the loop to:
x1_p1, x2_p1, x3_p1 = planetsinit[index, 0:3]
x1_p2, x2_p2, x3_p2 = planetsinit[otherindex, 0:3]
m = mass[otherindex]

x1_diff = x1_p1 - x1_p2
x2_diff = x2_p1 - x2_p2
x3_diff = x3_p1 - x3_p2

ax = -(x1_diff)*G*m/((x1_diff)**2+(x2_diff)**2+(x3_diff)**2)**(3/2)
ay = -(x2_diff)*G*m/((x1_diff)**2+(x2_diff)**2+(x3_diff)**2)**(3/2)
az = -(x3_diff)*G*m/((x1_diff)**2+(x2_diff)**2+(x3_diff)**2)**(3/2)

def f_grav(t, y):
    x1_p1, x2_p1, x3_p1, v1_p1, v2_p1, v3_p1 = y
    dydt = [v1_p1,
            v2_p1,
            v3_p1,
            ax,
            ay,
            az]
    return dydt

solution = solve_ivp(fun=f_grav, t_span=domain, y0=planetsinit[index])
planetsinit[index] = solution['y'][0:6, -1]
variables_save[index, :, t_index] = solution['y'][0:6, -1]
planetspos[index] = planetsinit[index][0:3]
As I said, I don't know why different orbits are produced (shown below), and any hints as to why, or how to solve it, would be much appreciated. To clarify why I can't use the working code as it is: when more bodies are involved I aim to sum the acceleration contributions of all the other planets, which isn't possible when the acceleration is calculated inside the function itself.
Sorry for the large code chunks, but I felt it was appropriate so that the code can be run and the problem itself is clearer.
Both plots use the same time period and dt; however, the orbit on the left is stable and the one on the right is not.
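For reference, a minimal hedged sketch of the kind of full-state right-hand side the question is working toward, where the accelerations from all the other bodies are summed inside the one function passed to solve_ivp. The name n_body_rhs and the flattened state layout are illustrative assumptions, not taken from the code above.

import numpy as np
from scipy.integrate import solve_ivp

G = 6.67408e-11

def n_body_rhs(t, y, masses):
    # Assumed layout: y = [x1, y1, z1, vx1, vy1, vz1, x2, y2, z2, vx2, ...]
    n = len(masses)
    state = y.reshape(n, 6)
    pos = state[:, 0:3]
    vel = state[:, 3:6]
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                # sum the contribution of every other body j on body i
                r = pos[i] - pos[j]
                acc[i] += -G * masses[j] * r / np.linalg.norm(r)**3
    return np.hstack((vel, acc)).ravel()

# Illustrative call only (requires scipy >= 1.4 for the args keyword):
# sol = solve_ivp(n_body_rhs, (0.0, 1.0e6), y0.ravel(), args=(masses,), rtol=1e-9)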

Related

Solving/graphing derivative equations

I am a student and am trying to figure out how to solve differential equations with Python, but I am very confused. I am also getting a syntax error on the fourth-to-last line, and I am not sure why. This model is supposed to show the progression of a virus.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# function that returns dT/dt
# initial condition
T = 1
V = .001
I = 0

def model(t, T, I, V):
    β = 10**-5
    δ = 4
    p = 2*(10**6)
    c = 4
    dTdt = -β*t*V
    dIdt = (β*T*V) - (δ*I)
    dVdt = (p*I) - (c*V)
    return dTdt, dIdt, dVdt

# initial condition
T0 = 1
V0 = 10**-3
I0 = 0

# time points
t = np.linspace(0, 25)

# solve
V = odeint(model, t, V0, args=(T0, I0)

# plot results
plt.plot(t, V)
plt.xlabel('time')
plt.ylabel('V(t)')
plt.show()
The state vector is one single argument in the ODE function. So you have to replace its first line with
def model(Y, t):
    T, I, V = Y
and correct the integrator call to
Y = odeint(model, [T0, I0, V0], t)
T, I, V = Y.T
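Putting the pieces together, a hedged sketch of the corrected script could look like the following. The parameter values are copied from the question; the only changes are the state-vector signature, the odeint call, and writing dT/dt with the state T rather than the time t, which appears to be the intended model (this is an illustration of the fix above, not a verified answer).

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def model(Y, t):
    # unpack the state vector; all three equations use T, I, V from Y
    T, I, V = Y
    β = 10**-5
    δ = 4
    p = 2*(10**6)
    c = 4
    dTdt = -β*T*V
    dIdt = β*T*V - δ*I
    dVdt = p*I - c*V
    return [dTdt, dIdt, dVdt]

T0, I0, V0 = 1, 0, 10**-3      # initial conditions
t = np.linspace(0, 25, 200)    # time points

Y = odeint(model, [T0, I0, V0], t)
T, I, V = Y.T

plt.plot(t, V)
plt.xlabel('time')
plt.ylabel('V(t)')
plt.show()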

Is there a way to speed up odeint in python

Currently I am trying to think about how to speed up odeint for a simulation I am running. It is only a primitive second-order ODE with a friction term and a discontinuous force, and the model I am using to describe the dynamics is defined in a separate function. Trying to solve the ODE results in an error or extremely high computation times (we are talking about days).
Here is my code, mostly hardcoded:
import numpy as np
import pandas as pd
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def create_ref(tspan):
    if tspan > 2 and tspan < 8:
        output = np.sin(tspan)
    elif tspan >= 20:
        output = np.sin(tspan*10)
    else:
        output = 0.5
    return output

def model(state, t):
    def Fs(x):
        pT = 0
        pB = 150
        p0 = 200
        pA = 50
        kein = 0.25
        kaus = 0.25
        if x < 0:
            pA = pT
            Fsres = -kaus*x*pA - kein*x*(p0-pB)
        else:
            pB = pT
            Fsres = -kaus*x*pB - kein*x*(p0-pA)
        return Fsres

    x, dx = state
    refnow = np.interp(t, xref.index.values, xref.values.squeeze())
    refprev = np.interp(t-dt, xref.index.values, xref.values.squeeze())
    drefnow = (refnow - refprev)/dt
    x_meas = x
    dx_meas = dx
    errnow = refnow - x_meas
    errprev = refprev - (x_meas - dx_meas*dt)
    intrefnow = dt*(errnow - errprev)
    kp = 10
    kd = 0.1
    ki = 100
    sigma = kp*(refnow - x_meas) + kd*(drefnow - dx_meas) + ki*intrefnow
    tr0 = 5
    FricRed = (1.5 - 0.5*np.tanh(0.1*(t - tr0)))
    kpu = 300
    fr = 0.1
    m = 0.01
    d = 0.01
    k = 10
    u = float(kpu*np.sqrt(np.abs(sigma))*np.sign(sigma))
    ddx = 1/m*(Fs(x) + FricRed*fr*np.sign(dx) - d*dx - k*x + u)
    return [dx, ddx]

dt = 1e-3
tspan = np.arange(start=0, stop=50, step=dt)
steplim = tspan[-1]*0.1
reffunc = np.vectorize(create_ref)
xrefvals = reffunc(tspan)
xref = pd.DataFrame(data=xrefvals, index=tspan, columns=['xref'])
x0 = [-0.5, 0]

simresult = odeint(model, x0, tspan)

plt.figure(num=1)
plt.plot(tspan, simresult[:, 0], label='ispos')
plt.plot(tspan, xref['xref'].values, label='despos')
plt.legend()
plt.show()
I changed the code according to Pranav Hosangadi's comments. Thanks for the hints; I didn't know that, learned something new, and didn't expect dictionaries to have such a high impact on computation time. But now it's much faster.
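As a hedged illustration of that kind of change (an assumption about what was modified, since the updated code is not shown), the pandas lookups can be hoisted out of model so that np.interp only ever sees plain NumPy arrays:

import numpy as np
import pandas as pd

dt = 1e-3
tspan = np.arange(0, 50, dt)
# vectorized equivalent of create_ref from the question
xrefvals = np.where((tspan > 2) & (tspan < 8), np.sin(tspan),
                    np.where(tspan >= 20, np.sin(tspan*10), 0.5))
xref = pd.DataFrame(data=xrefvals, index=tspan, columns=['xref'])

# Hoist the DataFrame access out of the ODE function, done once before integrating.
xref_t = xref.index.values        # plain 1-D array of times
xref_x = xref['xref'].values      # plain 1-D array of reference positions

def reference(t):
    """Interpolated reference position and a backward-difference reference velocity."""
    refnow = np.interp(t, xref_t, xref_x)
    refprev = np.interp(t - dt, xref_t, xref_x)
    return refnow, (refnow - refprev) / dt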

Programming of 4th order Runge-Kutta in advection equation in python

%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from math import pi

# wave speed
c = 1

# spatial domain
xmin = 0
xmax = 1

# time domain
m = 500  # num of time steps
tmin = 0
T = tmin + np.arange(m+1)
tmax = 500

n = 50  # num of grid points

# x grid of n points
X, dx = np.linspace(xmin, xmax, n+1, retstep=True)
X = X[:-1]  # remove last point, as u(x=1,t)=u(x=0,t)

# for CFL of 0.1
CFL = 0.3
dt = CFL*dx/c

# initial conditions
def initial_u(x):
    return np.sin(2*pi*x)

# each value of the U array contains the solution for all x values at each timestep
U = np.zeros((m+1, n), dtype=float)
U[0] = u = initial_u(X)

def derivatives(t, u, c, dx):
    uvals = []  # u values for this time step
    for j in range(len(X)):
        if j == 0:        # left boundary
            uvals.append((-c/(2*dx))*(u[j+1]-u[n-1]))
        elif j == n-1:    # right boundary
            uvals.append((-c/(2*dx))*(u[0]-u[j-1]))
        else:
            uvals.append((-c/(2*dx))*(u[j+1]-u[j-1]))
    return np.asarray(uvals)

# solve for 500 time steps
for k in range(m):
    t = T[k]
    k1 = derivatives(t, u, c, dx)*dt
    k2 = derivatives(t+0.5*dt, u+0.5*k1, c, dx)*dt
    k3 = derivatives(t+0.5*dt, u+0.5*k2, c, dx)*dt
    k4 = derivatives(t+dt, u+k3, c, dx)*dt
    U[k+1] = u = u + (k1+2*k2+2*k3+k4)/6

# plot solution
plt.style.use('dark_background')
fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1)
line, = ax1.plot(X, U[0], color='cyan')
ax1.grid(True)
ax1.set_ylim([-2, 2])
ax1.set_xlim([0, 1])

def animate(i):
    line.set_ydata(U[i])
    return line,
I want to program in Python an advection equation, (∂u/∂t) + c (∂u/∂x) = 0. Time should be discretized with 4th-order Runge-Kutta; the spatial discretization is a 2nd-order finite difference. When I run my code, I get a straight line which transforms into a sine wave, but I gave a sine wave as the initial condition. Why does it start as a straight line? I also want the sine wave to move forward. Do you have any idea how to get the sine wave moving forward? I appreciate your help. Thanks in advance!
While superficially your computation steps are related to the RK4 method, they deviate from the RK4 method and the correct space discretization too much to mention it all.
The traditional way to apply ODE integration methods is to have a function derivatives(t, state, params) and then apply that to compute the Euler step or the RK4 step. In your case it would be
def derivatives(t, u, c, dx):
    du = np.zeros(len(u))
    p = -c/(2*dx)    # minus sign, since u_t = -c*u_x for u_t + c*u_x = 0
    du[0]    = p*(u[1]-u[-1])
    du[1:-1] = p*(u[2:]-u[:-2])
    du[-1]   = p*(u[0]-u[-2])
    return du
Then you can do
X, dx = np.linspace(xmin, xmax, n+1, retstep=True)
X = X[:-1]  # remove last point, as u(x=1,t)=u(x=0,t)

m = 500                        # number of time steps
T = tmin + np.arange(m+1)*dt   # time grid with step dt

U = np.zeros((m+1, n), dtype=float)
U[0] = u = initial_u(X)

for k in range(m):
    t = T[k]
    k1 = derivatives(t, u, c, dx)*dt
    k2 = derivatives(t+0.5*dt, u+0.5*k1, c, dx)*dt
    k3 = derivatives(t+0.5*dt, u+0.5*k2, c, dx)*dt
    k4 = derivatives(t+dt, u+k3, c, dx)*dt
    U[k+1] = u = u + (k1+2*k2+2*k3+k4)/6
This uses the computed dt as the primary variable in the time stepping and constructs the time grid as an arithmetic sequence starting at tmin with step dt. Other ways are possible, but then one has to make tmax and the number of time steps compatible.
The computation up to this point should now be successful and can be used in the animation. In my understanding, you do not produce a new plot in each frame; you only draw the graph once and after that just change the line data:
# animate the time data
line, = ax1.plot(X, U[0], color='cyan')
ax1.grid(True)
ax1.set_ylim([-2, 2])
ax1.set_xlim([0, 1])

def animate(i):
    line.set_ydata(U[i])
    return line,
etc.
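To make the snippet above end-to-end, here is a hedged sketch of how animate could be hooked up with matplotlib.animation.FuncAnimation; the frame count and interval are arbitrary illustrative choices, not taken from the answer.

import matplotlib.animation as animation

fig = plt.figure()
ax1 = fig.add_subplot(1, 1, 1)
line, = ax1.plot(X, U[0], color='cyan')
ax1.grid(True)
ax1.set_ylim([-2, 2])
ax1.set_xlim([0, 1])

def animate(i):
    # only update the line's y data; the axes and line object stay the same
    line.set_ydata(U[i])
    return line,

anim = animation.FuncAnimation(fig, animate, frames=m+1, interval=20, blit=True)
plt.show()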

Crank Nicolson Method on Wave Function Python

I am trying to propagate a Gaussian wave packet using the Crank-Nicolson method in imaginary time (multiplying the time step by the imaginary unit). The code that I have written in an attempt to achieve this is shown here:
import matplotlib.pyplot as plt  # this allows you to plot, and changes the name to plt
import numpy as np               # this allows you to do math, and changes the name to np
import math
import scipy.linalg as la

def V(x):
    # k = 1
    # v = k*x**4
    v = 0.25*(x-3)**2 + 0.15*(x-3)**4
    return v

def Psi(x):
    psi = np.exp(-2*(x-3)**2)
    return psi

# Function for computing integral using trapezoid method
def TrapInt(y, h):
    trap = [(float(y[ii]) + float(y[ii+1])) for ii in range(0, len(y)-1)]
    return float(h)/2*sum(trap)

N = 1000
L = 3
h = 0.01
x = np.arange(0, 6, h)
t = np.linspace(0, L, 300)
t = 1j*t
dt = t[1] - t[0]
dx = x[1] - x[0]
A = 1j*dt/(2*dx**2)
pot = V(x)

Q = np.zeros([len(x), len(x)], dtype=complex)
P = np.zeros([len(x), len(x)], dtype=complex)
wave = np.zeros([len(x), len(t)], dtype=complex)
wave[:, 0] = Psi(x)

B = (1 - 2*A - 1j*dt*pot)

for ii in range(0, len(x)-1):
    Q[ii][ii] = -(B[ii])
    P[ii][ii] = (B[ii])
    Q[ii][ii+1] = (2-A)
    P[ii][ii+1] = A
    if ii >= 1:
        Q[ii][ii-1] = -A
        P[ii][ii-1] = A

plt.plot(wave[:, 0])

for ii in range(0, len(t)-1):
    one = np.matmul(P, wave[:, ii])
    wave[:, ii+1] = np.matmul(la.inv(Q), one)
I can't seem to find any mathematical errors in my implementation of the Crank-Nicolson method; however, whenever I try to run this it gives an error saying that Q is singular (has no inverse). I'm not sure why this is occurring. Any help is appreciated. Thanks.
You never assign to Q[-1]. Zero rows have been known to produce singular matrices in some cases.
Also, don't repeatedly invert the matrix. Probably don't invert it at all, but rather store some decomposition of it to allow efficient calculation of Q⁻¹x.
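A hedged sketch of what that could look like with scipy.linalg.lu_factor / lu_solve, as a drop-in replacement for the final time loop in the question (it assumes the construction loop is first extended so the last row of Q is filled in, and it reuses Q, P, wave, and t from the question's code; it is not the answerer's code):

import scipy.linalg as la

# Factor Q once, outside the time loop, instead of calling la.inv(Q) repeatedly.
# lu_solve then applies Q^{-1} to each right-hand side cheaply and more stably.
lu, piv = la.lu_factor(Q)

for ii in range(0, len(t)-1):
    rhs = P @ wave[:, ii]
    wave[:, ii+1] = la.lu_solve((lu, piv), rhs)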

Separating gaussian components of a curve using python

I am trying to deblend the emission lines of a low-resolution spectrum in order to get the Gaussian components. This plot represents the kind of data I am using:
After searching a bit, the only option I found was the application of the gauest function from the kmpfit package (http://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#gauest). I have copied their example but I cannot make it work.
I wonder if anyone could please offer me any alternative to do this or how to correct my code:
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize

def CurveData():
    x = np.array([3963.67285156, 3964.49560547, 3965.31835938, 3966.14111328, 3966.96362305,
                  3967.78637695, 3968.60913086, 3969.43188477, 3970.25463867, 3971.07714844,
                  3971.89990234, 3972.72265625, 3973.54541016, 3974.36791992, 3975.19067383])
    y = np.array([1.75001533e-16, 2.15520995e-16, 2.85030769e-16, 4.10072843e-16, 7.17558032e-16,
                  1.27759917e-15, 1.57074192e-15, 1.40802933e-15, 1.45038722e-15, 1.55195653e-15,
                  1.09280316e-15, 4.96611341e-16, 2.68777266e-16, 1.87075114e-16, 1.64335999e-16])
    return x, y

def FindMaxima(xval, yval):
    xval = np.asarray(xval)
    yval = np.asarray(yval)
    sort_idx = np.argsort(xval)
    yval = yval[sort_idx]
    gradient = np.diff(yval)
    maxima = np.diff((gradient > 0).view(np.int8))
    ListIndeces = np.concatenate((([0],) if gradient[0] < 0 else ()) +
                                 (np.where(maxima == -1)[0] + 1,) +
                                 (([len(yval)-1],) if gradient[-1] > 0 else ()))
    X_Maxima, Y_Maxima = [], []
    for index in ListIndeces:
        X_Maxima.append(xval[index])
        Y_Maxima.append(yval[index])
    return X_Maxima, Y_Maxima

def GaussianMixture_Model(p, x, ZeroLevel):
    y = 0.0
    N_Comps = int(len(p) / 3)
    for i in range(N_Comps):
        A, mu, sigma = p[i*3:(i+1)*3]
        y += A * np.exp(-(x-mu)*(x-mu)/(2.0*sigma*sigma))
    Output = y + ZeroLevel
    return Output

def Residuals_GaussianMixture(p, x, y, ZeroLevel):
    return GaussianMixture_Model(p, x, ZeroLevel) - y

Wave, Flux = CurveData()
Wave_Maxima, Flux_Maxima = FindMaxima(Wave, Flux)
EmLines_Number = len(Wave_Maxima)
ContinuumLevel = 1.64191e-16

# Define initial values
p_0 = []
for i in range(EmLines_Number):
    p_0.append(Flux_Maxima[i])
    p_0.append(Wave_Maxima[i])
    p_0.append(2.0)

p1, conv = optimize.leastsq(Residuals_GaussianMixture, p_0[:], args=(Wave, Flux, ContinuumLevel))

Fig = plt.figure(figsize=(16, 10))
Axis1 = Fig.add_subplot(111)
Axis1.plot(Wave, Flux, label='Emission line')
Axis1.plot(Wave, GaussianMixture_Model(p1, Wave, ContinuumLevel), 'r', label='Fit with optimize.leastsq')
print(p1)
Axis1.plot(Wave, GaussianMixture_Model([p1[0], p1[1], p1[2]], Wave, ContinuumLevel), 'g:', label='Gaussian components')
Axis1.plot(Wave, GaussianMixture_Model([p1[3], p1[4], p1[5]], Wave, ContinuumLevel), 'g:')
Axis1.set_xlabel(r'Wavelength $(\AA)$')
Axis1.set_ylabel('Flux' + r'$(erg\,cm^{-2} s^{-1} \AA^{-1})$')
plt.legend()
plt.show()
A typical simplistic way to fit:
def model(p, x):
    A, x1, sig1, B, x2, sig2 = p
    return A*np.exp(-(x-x1)**2/sig1**2) + B*np.exp(-(x-x2)**2/sig2**2)

def res(p, x, y):
    return model(p, x) - y

from scipy import optimize
p0 = [1e-15, 3968, 2, 1e-15, 3972, 2]
p1, conv = optimize.leastsq(res, p0[:], args=(x, y))

plt.plot(x, y, '+')  # data
# fitted function
plt.plot(np.arange(3962, 3976, 0.1), model(p1, np.arange(3962, 3976, 0.1)), '-')
Where p0 is your initial guess. By the looks of things, you might want to use Lorentzian functions...
If you use full_output=True, you get all kinds of info about the fitting. Also check out curve_fit and the fmin* functions in scipy.optimize. There are plenty of wrappers around these, but often, as here, it's easier to use them directly.
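Since the answer mentions curve_fit, here is a hedged sketch of the same two-Gaussian fit written with scipy.optimize.curve_fit instead of leastsq; the initial guess is the same illustrative p0 as above, and the helper name two_gaussians is an assumption, not part of the original answer.

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, A, x1, sig1, B, x2, sig2):
    # same model as above, just with the parameters spelled out for curve_fit
    return A*np.exp(-(x-x1)**2/sig1**2) + B*np.exp(-(x-x2)**2/sig2**2)

x, y = CurveData()   # the Wave/Flux arrays defined in the question above
p0 = [1e-15, 3968, 2, 1e-15, 3972, 2]
popt, pcov = curve_fit(two_gaussians, x, y, p0=p0)
perr = np.sqrt(np.diag(pcov))   # 1-sigma parameter uncertainties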
