Python - Using a Kronecker Delta with ODEINT

I'm trying to plot the output from an ODE using a Kronecker delta function which should only become 'active' at a specific time = t1.
This should give a sawtooth like response where the initial value decays down exponentially until t=t1 where it rises again instantly before decaying down once again.
However, when I plot this it looks like the solver is seeing the Kronecker delta function as zero for all time t. Is there any way to do this in Python?
from scipy import KroneckerDelta
import scipy.integrate as sp
import matplotlib.pyplot as plt
import numpy as np

def dy_dt(y, t):
    dy_dt = 500*KroneckerDelta(t, t1) - 2*y
    return dy_dt

t1 = 4
y0 = 500
t = np.arange(0, 10, 0.1)
y = sp.odeint(dy_dt, y0, t)
plt.plot(t, y)

In the case of a simple Kronecker delta in time, you can run the ODE in pieces like so:
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import numpy as np

def dy_dt(y, t):
    return -2*y

t_delta = 4
tend = 10
y0 = [500]

t1 = np.linspace(0, t_delta, 50)
y1 = odeint(dy_dt, y0, t1)

y0 = y1[-1] + 500  # execute Kronecker delta

t2 = np.linspace(t_delta, tend, 50)
y2 = odeint(dy_dt, y0, t2)

t = np.append(t1, t2)
y = np.append(y1, y2)

plt.plot(t, y)
Another option for complicated situations is to use the events functionality of solve_ivp.
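For illustration, here is a minimal sketch of that approach (not from the answer above, assuming the same decay-plus-jump problem): a terminal event stops the integration at t_delta, the jump is added to the state, and a second solve_ivp call continues from there.

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

t_delta, tend, y0 = 4, 10, [500.0]

def dy_dt(t, y):              # note: solve_ivp uses (t, y) argument order
    return -2 * y

def hit_t1(t, y):             # event function: crosses zero exactly at t = t_delta
    return t - t_delta
hit_t1.terminal = True        # stop the integration when the event fires

sol1 = solve_ivp(dy_dt, (0, tend), y0, events=hit_t1, dense_output=True)
t_event = sol1.t_events[0][0]
y_jump = sol1.sol(t_event) + 500          # apply the Kronecker-delta jump

sol2 = solve_ivp(dy_dt, (t_event, tend), y_jump, dense_output=True)

t = np.append(sol1.t, sol2.t)
y = np.append(sol1.y[0], sol2.y[0])
plt.plot(t, y)
plt.show()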

I think the problem could be internal rounding errors, because 0.1 cannot be represented exactly as a Python float. I would try
import math

def dy_dt(y, t):
    if math.isclose(t, t1):
        return 500 - 2*y
    else:
        return -2*y
Also, the documentation of odeint suggests using the args parameter instead of global variables to give your derivative function access to additional arguments, and replacing np.arange with np.linspace:
import scipy.integrate as sp
import matplotlib.pyplot as plt
import numpy as np
import math

def dy_dt(y, t, t1):
    if math.isclose(t, t1):
        return 500 - 2*y
    else:
        return -2*y

t1 = 4
y0 = 500
t = np.linspace(0, 10, num=101)
y = sp.odeint(dy_dt, y0, t, args=(t1,))
plt.plot(t, y)
I did not test the code so tell me if there is anything wrong with it.
EDIT:
When testing my code I took a look at the t values for which dy_dt is evaluated. I noticed that odeint does not only use the t values that were specified, but alters them slightly:
...
3.6636447422787928
3.743098503914526
3.822552265550259
3.902006027185992
3.991829287543431
4.08165254790087
4.171475808258308
...
Now using my method, we get
math.isclose(3.991829287543431, 4) # False
because the default tolerance is set to a relative error of at most 10^(-9), so the odeint function "misses" the bump of the derivative at 4. Luckily, we can fix that by specifying a higher error threshold:
def dy_dt(y, t, t1):
    if math.isclose(t, t1, abs_tol=0.01):
        return 500 - 2*y
    else:
        return -2*y
Now dy_dt is very high for all values between 3.99 and 4.01. It is possible to make this range smaller if the num argument of linspace is increased.
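For example, here is a small sketch of that remark, reusing dy_dt, y0 and t1 from the snippet above: a denser reporting grid lets you shrink the abs_tol window correspondingly.

# assumes dy_dt, y0 and t1 from the previous snippet
t = np.linspace(0, 10, num=1001)            # ten times denser grid
y = sp.odeint(dy_dt, y0, t, args=(t1,))     # abs_tol in dy_dt can now be reduced accordingly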
TL;DR
Your problem is not a Python problem but a general issue with numerically solving differential equations: you need to alter your derivative over an interval of sufficient length, otherwise the solver will likely miss the interesting spot. A Kronecker delta does not work with numerical approaches to solving ODEs.


How to put initial condition of ODE at a specific time point using odeint in Python?
I have y(0) = 5 as the initial condition, and the following code works:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# function that returns dy/dt
def model(y, t):
    k = 0.3
    dydt = -k * y
    return dydt

# initial condition
y0 = 5

# time points
t = np.linspace(0, 20)

# solve ODE
y = odeint(model, y0, t)

# plot results
plt.plot(t, y)
plt.xlabel('time')
plt.ylabel('y(t)')
plt.show()
I want to see the graph on both the negative and the positive part of the time line.
So I change t = np.linspace(0,20) to t = np.linspace(-5,20), but then the initial condition is taken as y(-5) = 5.
How can I solve this?
I do not think you can, according to the docs.
But you can solve for positive and negative t's separately and then stitch them together. Replace the relevant lines with
tp = np.linspace(0, 20)
tm = np.linspace(0, -5)

# solve ODE
yp = odeint(model, y0, tp)
ym = odeint(model, y0, tm)

# stitch together; note we flip the time direction with the [::-1] construct
t = np.concatenate([tm[::-1], tp])
y = np.concatenate([ym[::-1], yp])
This produces a single curve spanning the whole time range from -5 to 20.

Solving differential equations numerically

I tried solving a very simple equation f = t**2 numerically. I coded a for-loop, so as to use f for the first time step and then use the solution of every pass through the loop as the initial function for the next pass.
I am not sure if my approach to solving it numerically is correct, and for some reason my loop only works twice (once through the if-statement, then once through the else-statement) and then just gives zeros.
Any help is very much appreciated. Thanks!!!
## IMPORT PACKAGES
import numpy as np
import math
import sympy as sym
import matplotlib.pyplot as plt

## Loop to solve numerically
for i in range(1, 4, 1):
    if i == 1:
        f_old = t**2
        print(f_old)
    else:
        f_old = sym.diff(f_old, t).evalf(subs={t: i})
        f_new = f_old + dt * (-0.5 * f_old)
        f_old = f_new
        print(f_old)
The scipy.integrate package has a function called odeint that is used for solving differential equations.
y = odeint(model, y0, t)
model: Function name that returns derivative values at requested y and t values as dydt = model(y,t)
y0: Initial conditions of the differential states
t: Time points at which the solution should be reported. Additional internal points are often calculated to maintain accuracy of the solution but are not reported.
Example that plots the results as well:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# function that returns dy/dt
def model(y, t):
    k = 0.3
    dydt = -k * y
    return dydt

# initial condition
y0 = 5

# time points
t = np.linspace(0, 20)

# solve ODE
y = odeint(model, y0, t)

# plot results
plt.plot(t, y)
plt.xlabel('time')
plt.ylabel('y(t)')
plt.show()

fsolve gives weird answers

I want to use fsolve to numerically find roots of a nonlinear transcendental equation.
The following code does this job.
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt

kappa = 0.1
tau = 90

def equation(x, *parameters):
    kappa, tau = parameters
    return -x + kappa * np.sin(-tau*x)

x = np.linspace(-0.5, 0.5, 35)
roots = fsolve(equation, x, (kappa, tau))

x_2 = np.linspace(-1.5, 1.5, 1500)
plt.plot(x_2, x_2)
plt.plot(x_2, kappa*np.sin(-x_2*tau))
plt.scatter(x, roots)
plt.show()
I can double-check the solutions graphically by plotting the two graphs f1(x) = x and f2(x) = kappa * sin(-x * tau), which I also included in the code.
fsolve gives me some wrong answers, without throwing any errors or convergence problems.
The problem is that I would like to automate the procedure for varying kappa and tau without checking manually which answers are wrong and which are right. But with wrong answers in the output, I can't use this method. Is there any other method or an option I can use to be on the safe side?
Thanks for the help.
I couldn't run your code; there were errors. In any case, according to the documentation of scipy.optimize.fsolve, you're supposed to pass an initial guess as the second argument, not a range, as in fsolve(equation, x0, (kappa, tau)).
You could of course pass this in a loop, looping over every value in the array np.linspace(0.5, 0.5, 25). I do not understand what you are trying to achieve by varying kappa and tau, but if I take it that for those given parameters you are interested in looking for the roots, here's how I would do it.
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt

# Take it as it is
kappa = 0.1
tau = 90

def equation(x, parameters):
    kappa, tau = parameters
    return -x + kappa * np.sin(-tau*x)

# Initial guess of x = -0.1
SolutionStack = []
x0 = -kappa
y = fsolve(equation, x0, [kappa, tau])
SolutionStack.append(y[0])
y = fsolve(equation, SolutionStack[-1], [kappa, tau])
SolutionStack.append(y[0])
deltaY = SolutionStack[-1] - SolutionStack[0]

# Define tolerance
tol = 5e-4

while ((SolutionStack[-1] <= kappa) and (deltaY <= tol)):
    y = fsolve(equation, SolutionStack[-1], [kappa, tau])
    SolutionStack.append(y[0])
    deltaY = SolutionStack[-1] - SolutionStack[-2]
    # Obviously a little guesswork is involved here, as it pertains to 0.07
    if deltaY <= tol:
        SolutionStack[-1] = SolutionStack[-1] + 0.07

# Obtaining the roots
Roots = []
Roots.append(SolutionStack[0])
for i in range(len(SolutionStack)-1):
    if (SolutionStack[i+1] - SolutionStack[i]) <= tol:
        continue
    else:
        Roots.append(SolutionStack[i+1])
Probably not the smartest way to do it (assuming I even understood you correctly), but perhaps you have an idea now.
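A simpler alternative (a hedged sketch, not part of the answer above) is to loop over a grid of initial guesses, keep only the runs that fsolve reports as converged and whose residual is essentially zero, and then de-duplicate nearly identical roots:

import numpy as np
from scipy.optimize import fsolve

kappa, tau = 0.1, 90

def equation(x, kappa, tau):
    return -x + kappa * np.sin(-tau * x)

roots = []
for x0 in np.linspace(-0.5, 0.5, 200):
    sol, info, ier, msg = fsolve(equation, x0, args=(kappa, tau), full_output=True)
    # ier == 1 means fsolve reports convergence; also check the residual explicitly
    if ier == 1 and abs(equation(sol[0], kappa, tau)) < 1e-10:
        roots.append(sol[0])

# collapse duplicates that differ only by floating-point noise
roots = sorted(set(np.round(roots, 6)))
print(roots)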

Is fsolve good for any system of equations?

I don't have a lot of experience with Python but I decided to give it a try in solving the following system of equations:
x = A * exp (x+y)
y = 4 * exp (x+y)
I want to solve this system and plot x and y as a function of A.
I saw a similar question and gave fsolve a try:
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np

def f(p):
    x, y = p
    A = np.linspace(0, 4)
    eq1 = x - A * np.exp(x+y)
    eq2 = y - 4 * np.exp(x+y)
    return (eq1, eq2)

x, y = fsolve(f, (0, 0))
print(x, y)
plt.plot(x, A)
plt.plot(y, A)
I'm getting these errors:
setting an array element with a sequence.
Result from function call is not a proper array of floats.
Pass the value of A as an argument to the function and run fsolve for each value of A separately.
The following code works.
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np

def f(p, *args):
    x, y = p
    A = args[0]
    return (x - A * np.exp(x+y), y - 4 * np.exp(x+y))

A = np.linspace(0, 4, 5)
X = []
Y = []
for a in A:
    x, y = fsolve(f, (0.0, 0.0), args=(a,))
    X.append(x)
    Y.append(y)
    print(x, y)

plt.plot(A, X)
plt.plot(A, Y)
4.458297786441408e-17 -1.3860676807976662
-1.100088440495758 -0.5021704548996653
-1.0668987418054918 -0.7236105952221454
-1.0405000943788385 -0.9052366768954621
-1.0393471472966025 -1.0393471472966027
/usr/local/lib/python3.6/dist-packages/scipy/optimize/minpack.py:163: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
/usr/local/lib/python3.6/dist-packages/scipy/optimize/minpack.py:163: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last five Jacobian evaluations.
warnings.warn(msg, RuntimeWarning)

scipy.integrate.odeint returning wrong results

I was trying to integrate a square wave using python 3.5 and the scipy.integrate.odeint function but the results don't make any sense and vary wildly with the array of time points selected.
The square wave has a period of 10sec and the simulation runs for 100sec. Since the array of time points has size 500, there will be 50 time points on each period of the square wave, but that doesn't seem to be happening.
Using the optional parameter hmax=0.02 fixes it, but shouldn't it be inferred automatically?
Here's the code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

# dx/dt = f(t), where f(t) is a square wave
def f(x, t):
    return float(t % 10.0 < 5.0) * 0.3

T = 100
tt = np.linspace(0, T, 500)
xx = integrate.odeint(f, 0, tt, hmax=0.2)

plt.figure()
plt.subplot(2, 1, 1)
plt.plot(tt, xx)
plt.axis([0, T, 0, 16])
plt.subplot(2, 1, 2)
plt.plot(tt, [f(None, t) for t in tt])
plt.axis([0, T, 0, 1])
plt.show()
I'm hoping someone can shed some light on what is happening here.
Try changing T between 80 and 100 (simulation time).
I think your problem is that the odeint function expects a continuous ordinary differential equation, which a square wave is not. I'd start by redefining your square-wave function to:
def g(t):
    return float(t % 10.0 < 5.0) * 0.3
then define a function to calculate the integral step-by-step:
def get_integral(tt):
    intarray = np.zeros_like(tt)
    step_size = tt[1] - tt[0]
    for i, t in enumerate(tt):
        intarray[i] = intarray[i-1] + g(t)*step_size
    return intarray
Then:
xx = get_integral(tt)
should give you the result you're looking for.
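If you would rather stay with a standard ODE solver, another option (a sketch, not part of the original answer) is solve_ivp with a bounded max_step, which keeps the adaptive solver from stepping over an entire half-period of the square wave, much like the hmax workaround mentioned in the question:

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def f(t, x):                     # solve_ivp expects (t, y) order and a 1-D return value
    return [0.3 if t % 10.0 < 5.0 else 0.0]

T = 100
tt = np.linspace(0, T, 500)
sol = solve_ivp(f, (0, T), [0.0], t_eval=tt, max_step=1.0)  # step well below the 5 s half-period

plt.plot(sol.t, sol.y[0])
plt.show()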
