Solving the energy of the harmonic oscillator using solve_ivp - python

I was given the harmonic oscillator equation in the form y'' = -w^2 * y, with initial conditions y(0) = 1 and y'(0) = 0 and w = 2*pi, and I had to compare the numerical solution with the exact solution y(t) = cos(w*t).
The first part went smoothly: my solve_ivp result gives exactly the same curve as the exact solution.
But then I had to compare the evolution of energy for RK45, RK23, and DOP853.
The energy is E = 0.5 * k * y^2 + 0.5 * v^2, with k = w^2 and v = y' the velocity.
I was expecting a straight line, since my harmonic oscillator isn't damped, but I got a decreasing curve for each integration and I do not know why. If anyone has any idea, my code is below. Thanks in advance for the help!
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Exercise 1
# We first write the initial conditions
omega = 2*np.pi
omega_sq = omega**2
y_0 = [1., 0.]
t = np.linspace(0, 4, 100)  # Time interval between 0 and 4 as asked in the problem

def dY_dt(t, y):  # Definition of the system
    solution = [y[1], -omega_sq*y[0]]
    return solution

result = solve_ivp(dY_dt, (0, 100), y0=y_0, method='RK45', t_eval=t)
z = np.cos(omega*t)            # Exact solution, for comparison
plt.plot(t, z, 'r', lw=5)      # Plot of the exact solution
plt.plot(t, result.y[0], 'b')  # Plot of the solve_ivp solution

# Exercise 2
def Energie(Y):  # Energy, computed from the solver output Y
    k = omega_sq
    E = 0.5*Y.y[1]**2 + 0.5*k*Y.y[0]**2
    return E

E = Energie(result)

# Labels
plt.ylabel("Position")
plt.xlabel("Time in seconds")
plt.title('Comparison between solve_ivp and exact solution')
plt.legend(['Exact solution', 'solve_ivp solution'], loc='lower left')
plt.figure()
plt.plot(t, E)
plt.show()
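The comparison of the three methods described above could be set up along these lines; this is only a sketch, not the original assignment code, reusing dY_dt, y_0, omega_sq and t from above.

# Sketch: energy evolution for the three solvers, on the same plot
plt.figure()
for method in ['RK45', 'RK23', 'DOP853']:
    res = solve_ivp(dY_dt, (0, 100), y0=y_0, method=method, t_eval=t)
    E_m = 0.5*res.y[1]**2 + 0.5*omega_sq*res.y[0]**2   # same energy formula as above
    plt.plot(t, E_m, label=method)
plt.xlabel("Time in seconds")
plt.ylabel("Energy")
plt.legend()
plt.show()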

Related

scipy curve_fit returns initial parameter estimates

I am trying to use the scipy curve_fit function to fit a Gaussian function to my data to estimate a theoretical power spectral density. While doing so, curve_fit always returns the initial parameters (p0=[1,1,1]), which tells me that the fitting didn't work.
I don't know where the issue is. I am using Python 3.9 (Spyder 5.1.5) from the Anaconda distribution on Windows 11.
Here is a WeTransfer link to the data file:
https://wetransfer.com/downloads/6097ebe81ee0c29ee95a497128c1c2e420220704110130/86bf2d
Here is my code below. Can someone tell me what the issue is and how I can solve it?
In the plot, the blue curve is my experimental PSD and the orange one is the result of the fit.
import numpy as np
import math
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import scipy.constants as cst

File = np.loadtxt('test5.dat')
X = File[:, 1]
Y = File[:, 2]

f_sample = 50000
time = []
for i in range(1, len(X)+1):
    t = i*(1/f_sample)
    time = np.append(time, t)

N = X.shape[0]  # number of observations
N1 = int(N/2)
delta_t = time[2] - time[1]
T_mes = N * delta_t
freq = np.arange(1/T_mes, (N+1)/T_mes, 1/T_mes)
freq = freq[0:N1]
fNyq = f_sample/2  # Nyquist frequency
nb = 350
freq_block = []

# discrete Fourier transform
X_ft = delta_t*np.fft.fft(X, n=N)
X_ft = X_ft[0:N1]

plt.figure()
plt.plot(time, X)
plt.xlabel('t [s]')
plt.ylabel('x [micro m]')

# Experimental power spectrum on both raw and blocked data
PSD_X_exp = np.abs(X_ft)**2/T_mes
PSD_X_exp_b = []
STD_PSD_X_exp_b = []
for i in range(0, N1+2, nb):
    freq_b = np.array(freq[i:i+nb])  # i-nb:i
    psd_b = np.array(PSD_X_exp[i:i+nb])
    freq_block = np.append(freq_block, (1/nb)*np.sum(freq_b))
    PSD_X_exp_b = np.append(PSD_X_exp_b, (1/nb)*np.sum(psd_b))
    STD_PSD_X_exp_b = np.append(STD_PSD_X_exp_b, PSD_X_exp_b/np.sqrt(nb))

plt.figure()
plt.loglog(freq, PSD_X_exp)
plt.legend(['Raw Experimental PSD'])
plt.xlabel('f [Hz]')
plt.ylabel('PSD')

plt.figure()
plt.loglog(freq_block, PSD_X_exp_b)
plt.legend(['Experimental PSD after blocking'])
plt.xlabel('f [Hz]')
plt.ylabel('PSD')

kB = cst.k  # Boltzmann constant [m^2 kg / (s^2 K)]
T = 273.15 + 25  # Temperature [K]
r = (2.8 / 2) * 1e-6  # Particle radius [m]
v = 0.00002414 * 10 ** (247.8 / (-140 + T))  # Water viscosity [Pa*s]
gamma = np.pi * 6 * r * v  # [m*Pa*s]
Do = kB*T/gamma  # expected value for D
f3db_o = 50000  # expected value for f3db
fc_o = 300  # expected value for fc
n = np.arange(-10, 11)

def theo_spectrum_lorentzian_filter(x, D_, fc_, f3db_):
    PSD_theo = []
    for i in range(0, len(x)):
        psd_theo = np.sum((((D_*Do)/2*math.pi**2)/((fc_*fc_o)**2 + (x[i]+n*f_sample)**2))
                          * (1/(1 + ((x[i]+n*f_sample)/(f3db_*f3db_o))**2)))
        PSD_theo = np.append(PSD_theo, psd_theo)
    return PSD_theo

popt, pcov = curve_fit(theo_spectrum_lorentzian_filter, freq_block, PSD_X_exp_b,
                       p0=[1, 1, 1], sigma=STD_PSD_X_exp_b, absolute_sigma=True,
                       check_finite=True, bounds=(0.1, 10), method='trf', jac=None)
D_, fc_, f3db_ = popt
D1 = D_*Do
fc1 = fc_*fc_o
f3db1 = f3db_*f3db_o
print('Diffusion constant D = ', D1, ' Corner frequency fc = ', fc1, 'f3db(diode,eff) = ', f3db1)
I believe I've successfully fitted your data. Here's the approach I took.
First, I plotted your model (with popt=[1, 1, 1]) against your data and noticed the data was significantly lower than the model. Then I started fiddling with the parameters to push the model upwards, multiplying popt[0] by increasingly large values, and ended up with 1E13 as a ballpark value. Note that I have no idea whether this is physically plausible for your model. Then I jury-rigged your fitting function to multiply D_ by 1E13 and ran your code, and got a good-looking fit.
So I believe it's a problem of 1) inappropriate starting values and 2) inappropriate bounds. In your position, I would revise this model and check whether there are any problems with units and so on.
Here's what I used to try to fit your model:
plt.figure()
plt.loglog(freq_block[:170], PSD_X_exp_b[:170], label='Exp')
plt.loglog(freq_block[:170],
           theo_spectrum_lorentzian_filter(
               freq_block[:170],
               1E13*popt[0], popt[1], popt[2]),
           label='model')
plt.xlabel('f [Hz]')
plt.ylabel('PSD')
plt.legend()
I limited the data to point 170 because there were some weird backwards values that made me uncomfortable. I would recheck them if I were you.
Here's the model code I used. I didn't change the curve_fit call (except to limit x to [:170]).
def theo_spectrum_lorentzian_filter(x, D_, fc_, f3db_):
    PSD_theo = []
    D_ = 1E13*D_  # I only changed here
    for i in range(0, len(x)):
        psd_theo = np.sum((((D_*Do)/2*math.pi**2)/((fc_*fc_o)**2 + (x[i]+n*f_sample)**2))
                          * (1/(1 + ((x[i]+n*f_sample)/(f3db_*f3db_o))**2)))
        PSD_theo = np.append(PSD_theo, psd_theo)
    return PSD_theo
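Since the diagnosis above is about starting values and bounds, an alternative to rescaling D_ inside the model would be to widen p0 and the bounds in the curve_fit call itself so the optimizer can reach that scale. The numbers below are illustrative guesses, not fitted results, and they use the original, unmodified model function.

popt, pcov = curve_fit(theo_spectrum_lorentzian_filter,      # original, unscaled model
                       freq_block[:170], PSD_X_exp_b[:170],
                       p0=[1e13, 1, 1],                      # guessed order of magnitude for D_
                       sigma=STD_PSD_X_exp_b[:170], absolute_sigma=True,
                       bounds=([1e10, 0.1, 0.1], [1e16, 10, 10]),
                       method='trf')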

Implementation of a piecewise function in Python with a numerical ODE solution. TypeError: float() argument must be a string or a number, not 'OdeResult'

I have a reaction system whose mechanism is defined in the form of a nonlinear first-order ODE as:
dx(t)/dt = -g*a*f*(1 - rho)*(1 - exp(-k*t))*(x(t)^2 + b*x(t))/(c*x(t) + p)
where g, a, f, b, c, rho, k and p are all constants.
I'm trying to write Python code to obtain the time trajectory of the variable x(t) from time t1 to time tf, where the kinetic range of the profile is always preceded by a waiting period of 2 min (starting at t0 = 0) during which x keeps the constant value x0. My aim is to export the whole profile to Excel (as an .xlsx file) at regularly spaced intervals of 2.5 s, as shown in the figure linked below.
Overall Trajectory Profile - Plotted Curve
I want the system to be defined as a piecewise function, but when I run the code below in Python it gives me the error "float() argument must be a string or a number, not 'OdeResult'". I'm stuck here! Any suggestions on how to remedy this problem, or alternatives?
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

f = 1
rho = 0.1
r = 0.5
a = 0.08
x0 = 2
b = x0*(1/r - 1)
k = 1.0e-2
g = 0.138
b = 2
c = 1.11
p = 2.174

tau = (1/k)*np.log(f/(f - rho))  # Inhibition time (induction period)
t0 = 0
ti = 120
t1 = ti + tau
tf = 1000
delta_t = 2.5
t_end = tf + delta_t
dt = 1.0e-5
t_eval = np.arange(t0, t_end, dt)

def ode(t, x):
    return -g*a*f*(1 - rho)*(1 - np.exp(-k*t))*(x**2 + b*x)/(c*x + p)

sol = solve_ivp(ode, [t0, t_end], [x0], t_eval=t_eval, method='RK45')

plt.figure(figsize=(12, 4))
plt.subplot(121)
curve, = plt.plot(sol.t, sol.y[0])
plt.xlabel("t")
plt.ylabel("x(t)")
plt.show()

t = np.arange(t0, tf, delta_t)

def x(t):
    if t0 <= t <= ti:
        return x0   # Waiting period
    elif ti < t < t1:
        return x0   # Lag period associated with 'tau' (inhibition period)
    else:
        return sol  # Dynamic range (problem here!). TypeError: float() argument must be a string or a number, not 'OdeResult'

y = []
for i in range(len(t)):
    y.append(x(t[i]))

plt.plot(t, y, c='red', ls='', ms=5, marker='.')
ax = plt.gca()
ax.set_ylim([0, x0])
plt.show()
Replace t_eval=t_eval with dense_output=True. The returned sol object then contains an interpolation function sol.sol, so you could complete your x function as
return sol.sol(t)[0]
You do not need an else branch if the branch before is concluded with a return statement.
You might want to include the zero derivative on the constant segment inside the ODE function, so that the x(t) function is not needed. Then you would also get the desired values directly with the instant evaluation using t_eval, as done originally.
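A minimal sketch of that second suggestion, assuming the constants from the question are already defined: the waiting and lag periods are handled inside the ODE by returning a zero derivative before t1, so a single solve_ivp call with t_eval on the 2.5 s grid gives the whole profile.

def ode_piecewise(t, x):
    if t < t1:
        return 0.0*x   # zero derivative: x stays at x0 during the waiting/lag periods
    return -g*a*f*(1 - rho)*(1 - np.exp(-k*t))*(x**2 + b*x)/(c*x + p)

t_out = np.arange(t0, tf + delta_t, delta_t)   # 2.5 s grid, ready for export
sol2 = solve_ivp(ode_piecewise, [t0, tf + delta_t], [x0], t_eval=t_out, method='RK45')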

Lotka Volterra with Runge Kutta not desired precision

Population over time (each peak should be at about the same height)
I've programmed a code to simulate a mice and fox population using Runge-Kutta 4th order.
But the result is not as it should be: each peak should be at nearly the same height.
I don't think it is a problem of step size.
Do you have an idea?
import matplotlib.pyplot as plt
import numpy as np

# function definitions
def mice(f_0, m_0):
    km = 2.      # birth rate mice
    kmf = 0.02   # death rate mice
    return km*m_0 - kmf*m_0*f_0

def fox(m_0, f_0):
    kf = 1.06    # death rate foxes
    kfm = 0.01   # birth rate foxes
    return kfm*m_0*f_0 - kf*f_0

def Runge_kutta4(h, f, xn, yn):
    k1 = h*f(xn, yn)
    k2 = h*f(xn + h/2, yn + k1/2)
    k3 = h*f(xn + h/2, yn + k2/2)
    k4 = h*f(xn + h, yn + k3)
    return yn + k1/6 + k2/3 + k3/3 + k4/6

h = 0.01
f = 15.
m = 100.
f_list = [f]
m_list = [m]
for i in range(10000):
    fox_new = Runge_kutta4(h, fox, m, f)
    mice_new = Runge_kutta4(h, mice, f, m)
    f_list.append(fox_new)
    m_list.append(mice_new)
    f = fox_new
    m = mice_new

time = np.linspace(0, 100, 10001)

# Phase plot (LV)
fig = plt.figure(figsize=(10, 10))
fig.suptitle("Runge Kutta 4")
plt.grid()
plt.xlabel('Mice', fontsize=10)
plt.ylabel('Foxes', fontsize=10)
plt.plot(m_list, f_list, '-')
plt.axis('equal')
plt.show()
fig.savefig("Faceplott_Runge_Kutta4.jpg", dpi=fig.dpi)

fig1 = plt.figure(figsize=(12, 10))
fig1.suptitle("Runge Kutta 4")
plt.grid()
plt.xlabel('Time [d]', fontsize=10)
plt.ylabel('Population size', fontsize=10)
plt.plot(time, m_list, label='mice')
plt.plot(time, f_list, label='fox')
plt.legend()
plt.show()
fig1.savefig("Fox_Miceplot_Runge_Kutta4.jpg", dpi=fig.dpi)
In the Runge-Kutta implementation, xn is the time variable and yn the scalar state variable; f is the right-hand side of the scalar ODE y'(x) = f(x, y(x)). That is not how your ODE functions are set up: they are autonomous, contain no time variable, and instead take the two coupled state variables. As implemented, the result is a convoluted first-order method of no specific type.
You need to solve the coupled system as a coupled system, that is, the stages for both variables have to be calculated simultaneously with the same increments.
kf1 = h*fox(mn, fn)
km1 = h*mice(fn, mn)
kf2 = h*fox(mn+0.5*km1, fn+0.5*kf1)
km2 = h*mice(fn+0.5*kf1, mn+0.5*km1)
kf3 = h*fox(mn+0.5*km2, fn+0.5*kf2)
km3 = h*mice(fn+0.5*kf2, mn+0.5*km2)
kf4 = h*fox(mn+km3, fn+kf3)
km4 = h*mice(fn+kf3, mn+km3)
etc.
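Presumably the step would then be completed with the standard RK4 weighting applied to both variables at once; a sketch of what the "etc." could look like (fn_next and mn_next are illustrative names):

fn_next = fn + (kf1 + 2*kf2 + 2*kf3 + kf4)/6
mn_next = mn + (km1 + 2*km2 + 2*km3 + km4)/6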
See also Runge Kutta problems in JS for the same problem in JavaScript
The other way is to vectorize the system so that the Runge-Kutta procedure can remain the same, but in the integration loop the state vector has to be constructed and unpacked,
def VL(x, y):
    f, m = y
    return np.array([fox(m, f), mice(f, m)])

y = np.array([f, m])
time = np.arange(x0, xf + 0.1*h, h)
for x in time[1:]:
    y = Runge_kutta4(h, VL, x, y)
    f, m = y
    f_list.append(f)
    m_list.append(m)
everything else remaining the same.
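Put together, a minimal self-contained sketch of the vectorized variant might look like the following, reusing mice, fox and Runge_kutta4 from the question and assuming x0 = 0 and xf = 100 to match the original 10000 steps of h = 0.01.

import numpy as np
import matplotlib.pyplot as plt

# mice, fox and Runge_kutta4 are assumed to be defined as in the question
h, x0, xf = 0.01, 0.0, 100.0   # step size and assumed time span
f, m = 15., 100.               # initial populations
y = np.array([f, m])
f_list, m_list = [f], [m]

def VL(x, y):
    f, m = y
    return np.array([fox(m, f), mice(f, m)])

time = np.arange(x0, xf + 0.1*h, h)
for x in time[1:]:
    y = Runge_kutta4(h, VL, x, y)   # one vectorized RK4 step for both populations
    f_list.append(y[0])
    m_list.append(y[1])

plt.plot(time, m_list, label='mice')
plt.plot(time, f_list, label='fox')
plt.legend()
plt.show()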

How to integrate coupled differential equations?

I've got a system of equations that I've been trying to get Python to solve and plot, but the plot is not coming out right.
This is my code:
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

# function that returns dx/dt and dy/dt
def func(z, t):
    for r in range(-10, 10):
        beta = 2
        gamma = 0.8
        c = z[0]
        tau = z[1]
        dcdt = r*c + c**2 - c**3 - beta*c*tau**2
        dtaudt = -gamma*tau + 0.5*beta*c*tau
    return [dcdt, dtaudt]

# initial conditions
z0 = [2, 0]

# time points
t = np.linspace(0, 24, 100)

# solve ODE
z = odeint(func, z0, t)

# separating answers out
c = z[:, 0]
tau = z[:, 1]
print(z)

# plot results
plt.plot(t, c, 'r-')
plt.plot(t, tau, 'b--')
plt.legend(['c(t)', 'tau(t)'])
plt.show()
Let me explain. I am studying doubly diffusive convection. I didn't want any assumptions to be made on the value of r, but beta and gamma are positive, so I thought I would assign values to them but not to r.
This is the plot I get, and from my understanding of the problem the graph is not right: the tau plot should definitely not be stuck at 0, and the c plot should be doing more. I am relatively new to Python and am taking courses, but I really want to understand what I've done wrong, so help in simple language would be appreciated.
I see 2 problems in your function that you should check.
for r in range(-10,10):
Here you are doing a for loop that just reevaluates dcdt and dtaudt. As a result, the output value is the same as evaluating only r=9 (the last value in the loop).
dtaudt = -gamma*tau+0.5*beta*c*tau
Here you have dtaudt = tau*(beta*c/2. -gamma). Your choice tau[0]=0 implies that tau will remain 0.
Try this:
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

r = 1
beta = 2
gamma = 0.8

# function that returns dx/dt and dy/dt
def func(z, t):
    c = z[0]
    tau = z[1]
    dcdt = r*c + c**2 - c**3 - beta*c*tau**2
    dtaudt = -gamma*tau + 0.5*beta*c*tau
    print(dtaudt)
    return [dcdt, dtaudt]

# initial conditions
z0 = [2, 0.2]  # tau[0] != 0.0

# time points
t = np.linspace(0, 24, 100)

# solve ODE
z = odeint(func, z0, t)

# separating answers out
c = z[:, 0]
tau = z[:, 1]

# plot results
plt.plot(t, c, 'r-')
plt.plot(t, tau, 'b--')
plt.legend(['c(t)', 'tau(t)'])
plt.show()

What am I doing wrong in this Dopri5 implementation

I am totally new to Python and am trying to integrate the following ODE:
$\dot{x} = -2x - y^2$
$\dot{y} = -y - x^2$
This results in an array where everything is 0, though.
What am I doing wrong? It is mostly copied code, and it worked fine with another, uncoupled ODE.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import ode

def fun(t, z):
    """
    Right-hand side of the differential equations
      dx/dt = -2x - y^2
      dy/dt = -y - x^2
    """
    x, y = z
    f = [-2*x - y**2, -y - x**2]
    return f

# Create an `ode` instance to solve the system of differential
# equations defined by `fun`, and set the solver method to 'dopri5'.
solver = ode(fun)
solver.set_integrator('dopri5')

# Set the initial value z(0) = z0.
t0 = 0.0
z0 = [0, 0]
solver.set_initial_value(z0, t0)

# Create the array `t` of time values at which to compute
# the solution, and create an array to hold the solution.
# Put the initial value in the solution array.
t1 = 2.5
N = 75
t = np.linspace(t0, t1, N)
sol = np.empty((N, 2))
sol[0] = z0

# Repeatedly call the `integrate` method to advance the
# solution to time t[k], and save the solution in sol[k].
k = 1
while solver.successful() and solver.t < t1:
    solver.integrate(t[k])
    sol[k] = solver.y
    k += 1

# Plot the solution...
plt.plot(t, sol[:, 0], label='x')
plt.plot(t, sol[:, 1], label='y')
plt.xlabel('t')
plt.grid(True)
plt.legend()
plt.show()
Your initial state (z0) is [0,0]. The time derivative (fun) for this initial state is also [0,0]. Hence, for this initial condition, [0,0] is the correct solution for all times.
If you change your initial condition to some other value, you should observe a more interesting result.
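For instance (the values here are arbitrary, just to illustrate):

z0 = [1.0, 0.5]                   # any nonzero start gives a non-trivial trajectory
solver.set_initial_value(z0, t0)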
