I have a system of ODEs where my state variables and independent variable span many orders of magnitude (initial values are around 0 at t=0 and are expected to become about 10¹⁰ by t=10¹⁷). I also want to ensure that my state variables remain positive.
According to this Stack Overflow post, one way to enforce positivity is to log-transform the ODEs, solving for the evolution of the logarithm of a variable instead of the variable itself. However, when I try this with my ODEs, I get an overflow error, probably because of the huge dynamic range of my state variables and time variable. Am I doing something wrong, or is the log-transform just not applicable in my case?
Here is a minimal working example that is successfully solved by scipy.integrate.solve_ivp:
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp
# initialize times at which we are given certain input quantities/parameters
# this is seconds corresponding to the age of the universe in billions of years
times = np.linspace(0.1,10,500) * 3.15e16
# assume we are given the amount of new mass flowing into the system in units of g/sec
# for this toy example we will assume a log-normal distribution and then interpolate it for our integrator function
mdot_grow_array = np.random.lognormal(mean=0,sigma=1,size=len(times))*1.989e33 / 3.15e7
interp_grow = interp1d(times,mdot_grow_array,kind='cubic')
# assume there is also a conversion efficiency for some fraction of mass to be converted to another form
# for this example we'll assume the fractions are drawn from a uniform random distribution and again interpolate
mdot_convert_array = np.random.uniform(0,0.1,len(times)) / 3.15e16 # fraction of M1 per second converted to M2
interp_convert = interp1d(times,mdot_convert_array,kind='cubic')
# set up our integrator function
def integrator(t,y):
    print('Working on t=',t/3.15e16) # to check status of integration in billions of years
    # unpack state variables
    M1, M2 = y
    # get the interpolated value of new mass flowing in at this time
    mdot_grow_now = interp_grow(t)
    mdot_convert_now = interp_convert(t)
    # assume some fraction of the mass gets converted to another form
    mdot_convert = mdot_convert_now * M1
    # return the derivatives
    M1dot = mdot_grow_now - mdot_convert
    M2dot = mdot_convert
    return M1dot, M2dot
# set up initial conditions and run solve_ivp for the whole time range
# should start with M1=M2=0 initially but then solve_ivp does not work at all, so just use [1,1] instead
initial_conditions = [1.0,1.0]
# note how the integrator gets stuck at very small timesteps early on
sol = solve_ivp(integrator,(times[0],times[-1]),initial_conditions,dense_output=True,method='RK23')
And here is the same example, now log-transformed following the Stack Overflow post referenced above: since dlogx/dt = (1/x) * dx/dt, we replace the LHS dx/dt with x * dlogx/dt and divide both sides by x to isolate dlogx/dt, and we apply np.exp() to the state variables (now logx instead of x) inside the integrator function:
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp
# initialize times at which we are given certain input quantities/parameters
# this is seconds corresponding to the age of the universe in billions of years
times = np.linspace(0.1,10,500) * 3.15e16
# assume we are given the amount of new mass flowing into the system in units of g/sec
# for this toy example we will assume a log-normal distribution and then interpolate it for our integrator function
mdot_grow_array = np.random.lognormal(mean=0,sigma=1,size=len(times))*1.989e33 / 3.15e7
interp_grow = interp1d(times,mdot_grow_array,kind='cubic')
# assume there is also a conversion efficiency for some fraction of mass to be converted to another form
# for this example we'll assume the fractions are drawn from a uniform random distribution and again interpolate
mdot_convert_array = np.random.uniform(0,0.1,len(times)) / 3.15e16 # fraction of M1 per second converted to M2
interp_convert = interp1d(times,mdot_convert_array,kind='cubic')
# set up our integrator function
def integrator(t,logy):
    print('Working on t=',t/3.15e16) # to check status of integration in billions of years
    # unpack state variables (undo the log transform)
    M1, M2 = np.exp(logy)
    # get the interpolated value of new mass flowing in at this time
    mdot_grow_now = interp_grow(t)
    mdot_convert_now = interp_convert(t)
    # assume some fraction of the mass gets converted to another form
    mdot_convert = mdot_convert_now * M1
    # return the derivatives of the log-variables: dlogM/dt = (dM/dt)/M
    M1dot = (mdot_grow_now - mdot_convert) / M1
    M2dot = mdot_convert / M2
    return M1dot, M2dot
# set up initial conditions and run solve_ivp for the whole time range
# should start with M1=M2=0 initially but then solve_ivp does not work at all, so just use [1,1] instead
# note: the state is log-transformed now, so [1,1] here corresponds to M1=M2=e rather than 1
initial_conditions = [1.0,1.0]
# note how the integrator gets stuck at very small timesteps early on
sol = solve_ivp(integrator,(times[0],times[-1]),initial_conditions,dense_output=True,method='RK23')
[…] is log-transform just not applicable in my case?
I don’t know where your transform went wrong, but it will certainly not achieve what you think it does. Log-transforming as a means to avoid negative values makes sense and works if and only if the following two conditions hold:
1. If the value of a dynamical variable approaches zero (from above), its derivative also approaches zero (from above) in your model.
2. Due to numerical noise, your derivative may turn negative though it actually isn't.
Conversely, it is not necessary or doesn’t work in the following cases:
If Condition 1 fails because your derivative never approaches zero in your model, but is strictly positive, you have no problem to begin with, as your derivative should not become negative in any reasonable implementation of your model. (You might make it happen by implementing some spectacular numerical annihilation, but that’s quite a difficult feat to achieve and not what I would consider a reasonable implementation.)
If Condition 1 fails because your derivative becomes truly negative in your model, logarithms won't save you, because the dynamics wants to push the variable below zero and the logarithms cannot represent this. You usually get an overflow error due to the logarithms becoming extremely negative, or the adaptive integration fails.
Even if Condition 1 applies, Condition 2 can usually be handled by avoiding numerical annihilations and similar when implementing your model.
Unless I am mistaken, your model falls into the first category. If M1 goes to zero, mdot_convert goes towards zero and thus M1dot = mdot_grow_now - mdot_convert is strictly positive, because mdot_grow_now is. M2dot is strictly positive anyway. Thus, you gain nothing from log-transforming. In fact, in the vast majority of cases, your dynamical variables will quickly increase.
With all that being said, some things you might want to look into are:
- Normalising your variables so they are of order 1, as sketched below.
- Stochastic differential equations.
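Here is a minimal sketch of the normalisation idea applied to the MWE above; the unit choices (billions of years and solar masses, matching the conversion factors already in the question) are mine, and the solver then works with numbers of order 1 to 1e9 instead of 1e16 to 1e33:

import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp

T_UNIT = 3.15e16   # seconds per time unit (~1 billion years)
M_UNIT = 1.989e33  # grams per mass unit (~1 solar mass)

# dimensionless time grid, order 1 instead of order 1e16
times = np.linspace(0.1, 10, 500)
# growth rate converted from g/s to mass units per time unit
mdot_grow_array = np.random.lognormal(mean=0, sigma=1, size=len(times)) * (T_UNIT/3.15e7)
interp_grow = interp1d(times, mdot_grow_array, kind='cubic')
# conversion fraction per time unit (was per second, times T_UNIT)
mdot_convert_array = np.random.uniform(0, 0.1, len(times))
interp_convert = interp1d(times, mdot_convert_array, kind='cubic')

def integrator(t, y):
    M1, M2 = y
    mdot_convert = interp_convert(t) * M1
    return [interp_grow(t) - mdot_convert, mdot_convert]

sol = solve_ivp(integrator, (times[0], times[-1]), [1.0, 1.0],
                dense_output=True, method='RK23')
# recover physical values: mass = sol.y * M_UNIT, time = sol.t * T_UNIT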
Related
I would like to compute the Buchstab function numerically. It is defined by the delay differential equation (u*w(u))' = w(u-1) for u > 2, with w(u) = 1/u for 1 <= u <= 2; equivalently, w'(u) = (w(u-1) - w(u))/u.
How can I compute this numerically efficiently?
To get a general feeling of how DDE integration works, I'll give some code, based on the low-order Heun method (to avoid uninteresting details while still being marginally useful).
In the numerical integration the previous values are treated as a function of time like any other time-dependent term. As there is not really a functional expression for it, the solution so far is used as a function table for interpolation. The interpolation error order should be as high as the error order of the ODE integrator, which is easy to arrange for low-order methods, but requires extra effort for higher-order methods. The solve_ivp stepper classes provide such a "dense output" interpolation per step that can be assembled into a function for the currently existing integration interval.
So after the theory, the practice. Select step size h=0.05 and convert the given history function into the start of the solution function table:
import numpy as np
import matplotlib.pyplot as plt

h = 0.05
u = 1
u_arr = []
w_arr = []
while u < 2+0.5*h:
    u_arr.append(u)
    w_arr.append(1/u)
    u += h
Then solve the equation; for the delayed value, use interpolation in the function table, here using numpy.interp. There are other functions with more options in scipy.interpolate.
Note that h needs to be smaller than the smallest delay, so that the delayed values come from a previous step, which is the case here.
u = u_arr[-1]
w = w_arr[-1]
while u < 4:
    # Heun step: Euler predictor, then trapezoidal corrector
    k1 = (-w + np.interp(u-1, u_arr, w_arr))/u
    us, ws = u+h, w+h*k1
    k2 = (-ws + np.interp(us-1, u_arr, w_arr))/us
    u, w = us, w+0.5*h*(k1+k2)
    u_arr.append(u)
    w_arr.append(w)  # store the corrected value, not the Euler predictor ws
Now the numerical approximation can be further processed, for instance plotted.
plt.plot(u_arr,w_arr); plt.grid(); plt.show()
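For higher order, the same method-of-steps idea can be built on the solve_ivp dense output mentioned above. A sketch (my own illustration; the helper name w_delayed and the tolerances are arbitrary choices):

import numpy as np
from scipy.integrate import solve_ivp

segments = [lambda u: 1/u]  # history: w(u) = 1/u on [1,2]

def w_delayed(u):
    # pick the interpolant for the unit interval containing u (u >= 1)
    idx = min(int(u - 1), len(segments) - 1)
    return segments[idx](u)

u0, w0 = 2.0, 0.5
for _ in range(2):  # integrate [2,3], then [3,4]
    sol = solve_ivp(lambda u, w: [(w_delayed(u - 1) - w[0]) / u],
                    (u0, u0 + 1), [w0], dense_output=True,
                    rtol=1e-8, atol=1e-10)
    # keep this interval's dense-output interpolant for later delayed lookups
    segments.append(lambda u, s=sol: s.sol(u)[0])
    u0, w0 = u0 + 1, sol.y[0, -1]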
I'm using scipy.integrate.ode and would like to know what happens internally when I get the message UserWarning: zvode: Excess work done on this call. (Perhaps wrong MF.)
This appears when I call ode.integrate(t1) with too large a t1, so I'm forced to use a for-loop and integrate my equation incrementally, which lowers the speed since the solver cannot use its adaptive step size effectively. I have already tried different methods and settings for the integrator. The maximal number of steps nsteps=100000 is already very large, but with this setting I still can't integrate up to 1000 in one call, which I would like to do.
The code I use is:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import ode
h_bar=0.658212 #reduced Planck's constant (meV*ps)
m0=0.00568563 #free electron mass (meV*ps**2/nm**2)
m_e=0.067*m0 #effective electron mass (meV*ps**2/nm**2)
m_h=0.45*m0 #effective hole mass (meV*ps**2/nm**2)
m_reduced=1/((1/m_e)+(1/m_h)) #reduced mass of electron and holes combined
kB=0.08617 #Boltzmann's constant (meV/K)
mu_e=-50 #initial chemical potential for electrons
mu_h=-100 #initial chemical potential for holes
k_array=np.arange(0,1.5,0.02) #a list of different k-values
n_k=len(k_array) #number of k-values
def derivative(t,y_list,Gamma,g,kappa,k_list,n_k):
    #initialize output vector
    y_out=np.zeros(3*n_k+1,dtype=complex)
    y_out[0:n_k]=-g*g*2*np.real(y_list[2*n_k:3*n_k])/h_bar
    y_out[n_k:2*n_k]=-g*g*2*np.real(y_list[2*n_k:3*n_k])/h_bar
    y_out[2*n_k:3*n_k]=((-1.j*(k_list**2/(2*m_reduced))-(Gamma+kappa))*y_list[2*n_k:3*n_k]-y_list[-1]*(1-y_list[n_k:2*n_k]-y_list[0:n_k])+y_list[0:n_k]*y_list[n_k:2*n_k])/h_bar
    y_out[-1]=(2*np.real(g*g*sum(y_list[2*n_k:3*n_k]))-2*kappa*y_list[-1])/h_bar
    return y_out
def dynamics(t_list,N_ini=1e-3, T=300, Gamma=1.36,kappa=0.02,g=0.095):
    #initial values
    t0=0 #initial time
    y_initial=np.zeros(3*n_k+1,dtype=complex)
    y_initial[0:n_k]=1/(1+np.exp(((h_bar*k_array)**2/(2*m_e)-mu_e)/(kB*T))) #Fermi-Dirac distributions
    y_initial[n_k:2*n_k]=1/(1+np.exp(((h_bar*k_array)**2/(2*m_h)-mu_h)/(kB*T)))
    t_list=t_list[1:] #remove t=0 from list (not feasible for integrator)
    r=ode(derivative).set_integrator('zvode',method='adams', atol=10**-6, rtol=10**-6,nsteps=100000) #define ode solver
    r.set_initial_value(y_initial,t0)
    r.set_f_params(Gamma,g,kappa,k_array,n_k)
    #create array for output (the +1 accounts for the initial values at t0=0)
    y_output=np.zeros((len(t_list)+1,len(y_initial)),dtype=complex)
    #insert initial data in output array
    y_output[0]=y_initial
    #perform integration for the time steps given by t_list
    for i in range(len(t_list)):
        print(r't = %s' % t_list[i])
        r.integrate(t_list[i])
        if not r.successful():
            print('Integration not successful!!')
            break
        y_output[i+1]=r.y
    return y_output
t_list=np.arange(0,100,5)
data=dynamics(t_list,N_ini=1e-3, T=300, Gamma=1.36,kappa=0.02,g=1.095)
The message means that the method reached the number of steps specified by the nsteps parameter. Since you asked about internals, I looked into the Fortran source, which offers this explanation:
-1 means an excessive amount of work (more than MXSTEP steps) was done on this call, before completing the requested task, but the integration was otherwise successful as far as T. (MXSTEP is an optional input and is normally 500.)
The conditional statement that brings up the error is this "GO TO 500".
According to LutzL, for your ODE the solver chooses step size 2e-4, which means 5000000 steps to integrate up to 1000. Your options are:
- try such a large value of nsteps (which translates to MXSTEP in the aforementioned Fortran routine), as sketched below;
- reduce the error tolerance;
- run a for-loop, as you already do.
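For example, the first option would look like this (a sketch only, reusing the derivative function and the parameter names from the question; 5000000 is a guess based on the step size estimate above):

from scipy.integrate import ode

r = ode(derivative).set_integrator('zvode', method='adams',
                                   atol=1e-6, rtol=1e-6,
                                   nsteps=5000000)  # ~1000 / 2e-4 steps
r.set_initial_value(y_initial, 0)
r.set_f_params(Gamma, g, kappa, k_array, n_k)
y_final = r.integrate(1000)  # one call, no for-loop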
I'm trying to solve a system of coupled, first-order ODEs in Python. I'm new to this, but the Zombie Apocalypse example from SciPy.org has been a great help so far.
An important difference in my case is that the input data used to "drive" my system of ODEs changes abruptly at various time points and I'm not sure how best to deal with this. The code below is the simplest example I can think of to illustrate my problem. I appreciate this example has a straightforward analytical solution, but my actual system of ODEs is more complicated, which is why I'm trying to understand the basics of numerical methods.
Simplified example
Consider a bucket with a hole in the bottom (this kind of "linear reservoir" is the basic building block of many hydrological models). The input flow rate to the bucket is R and the output from the hole is Q. Q is assumed to be proportional to the volume of water in the bucket, V. The constant of proportionality is usually written as 1/T, where T is the "residence time" of the store. This gives a simple ODE of the form dQ/dt = (R - Q)/T.
In reality, R is an observed time series of daily rainfall totals. Within each day, the rainfall rate is assumed to be constant, but between days the rate changes abruptly (i.e. R is a discontinuous function of time). I'm trying to understand the implications of this for solving my ODEs.
Strategy 1
The most obvious strategy (to me at least) is to apply SciPy's odeint function separately within each rainfall time step. This means I can treat R as a constant. Something like this:
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sn
from scipy.integrate import odeint
np.random.seed(seed=17)
def f(y, t, R_t):
    """ Function to integrate.
    """
    # Unpack parameters
    Q_t = y[0]
    # ODE to solve
    dQ_dt = (R_t - Q_t)/T
    return dQ_dt
# #############################################################################
# User input
T = 10 # Time constant (days)
Q0 = 0. # Initial condition for outflow rate (mm/day)
days = 300 # Number of days to simulate
# #############################################################################
# Create a fake daily time series for R
# Generate random values from a uniform dist
df = pd.DataFrame({'R':np.random.uniform(low=0, high=5, size=days+20)},
index=range(days+20))
# Smooth with a moving window to make more sensible
df['R'] = df['R'].rolling(window=20).mean()
# Chop off the NoData at the start due to moving window
df = df[20:].reset_index(drop=True)
# List to store results
Q_vals = []
# Vector of initial conditions
y0 = [Q0, ]
# Loop over each day in the R dataset
for step in range(days):
    # We want to find the value of Q at the end of this time step
    t = [0, 1]
    # Get R for this step
    R_t = float(df['R'].iloc[step])
    # Solve the ODEs
    soln = odeint(f, y0, t, args=(R_t,))
    # Extract flow at end of step from soln
    Q = float(soln[1])
    # Append result
    Q_vals.append(Q)
    # Update initial condition for next step
    y0 = [Q, ]
# Add results to df
df['Q'] = Q_vals
Strategy 2
The second approach involves simply feeding everything to odeint and letting it deal with the discontinuities. Using the same parameters and R values as above:
def f(y, t):
    """ Function to integrate.
    """
    # Unpack the current value of Q
    Q_t = y[0]
    # Get the value for R at this t
    idx = df.index.get_loc(t, method='ffill')
    R_t = float(df['R'].iloc[idx])
    # ODE to solve
    dQ_dt = (R_t - Q_t)/T
    return dQ_dt
# Vector of initial parameter values
y0 = [Q0, ]
# Time grid
t = np.arange(0, days, 1)
# solve the ODEs
soln = odeint(f, y0, t)
# Add result to df
df['Q'] = soln[:, 0]
Both of these approaches give identical answers. However, the second strategy, although more compact in terms of code, is much slower than the first. I guess this is something to do with the discontinuities in R causing problems for odeint?
My questions
Is strategy 1 the best approach here, or is there a better way?
Is strategy 2 a bad idea and why is it so slow?
Thank you!
1.) Yes
2.) Yes
Reason for both: Runge-Kutta solvers expect ODE functions that have an order of differentiability at least as high as the order of the solver. This is needed so that the Taylor expansion which gives the expected error term exists. This means that even the order-1 Euler method expects a differentiable ODE function. Thus no jumps are allowed; kinks can be tolerated by order-1 methods, but not by higher-order solvers.
This is especially true for implementations with automatic step size adaptations. Whenever a point is approached where the differentiation order is not satisfied, the solver sees a stiff system and drives the step-size toward 0, which leads to a slowdown of the solver.
You can combine strategies 1 and 2 if you use a solver with a fixed step size that is a fraction of 1 day. Then the sampling points at the day boundaries serve as (implicit) restart points with the new constant; a sketch follows.
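A sketch of that combined approach, reusing T, days, and df from the question (the rk4_step helper and the choice of 10 substeps per day are my own illustration):

def rk4_step(f, y, t, h, R_t):
    # one classical Runge-Kutta step with R held constant
    k1 = f(y, t, R_t)
    k2 = f(y + 0.5*h*k1, t + 0.5*h, R_t)
    k3 = f(y + 0.5*h*k2, t + 0.5*h, R_t)
    k4 = f(y + h*k3, t + h, R_t)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def f(Q, t, R_t):
    return (R_t - Q)/T

h = 0.1  # 10 substeps per day, so every day boundary is a step boundary
Q, Q_vals = 0.0, []
for day in range(days):
    R_t = float(df['R'].iloc[day])  # R is constant within the day
    for i in range(10):
        Q = rk4_step(f, Q, day + i*h, h, R_t)
    Q_vals.append(Q)
df['Q_fixed'] = Q_vals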
I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can decompose it into partial fractions, i.e. a sum of two simpler quotients with constant numerators:
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. It is clear that if y starts at 0 or 1, dy/dt = 0 and the value of y never changes, so assume 0 < y(0) < 1. Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1); then use y=0.8 to get a value for t. For example, with y(0) = 0.5 we get C = 0 and t = 100 * ln(4), which is about 138.6 and comfortably inside the given range 0 < t < 3000. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
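As a quick sanity check of the algebra (assuming y(0) = 0.5, so C = 0):

import math

y0, target = 0.5, 0.8
C = math.log(y0 / (1 - y0))  # constant of integration, 0 here
t = (math.log(target / (1 - target)) - C) / 0.01
print(t)  # 138.63..., i.e. 100*ln(4)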
Edit: Numerical integration
For a more complexed ODE which cannot be solved analytically, you will have to try numerically. Initially we only know the value of the function at zero time y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
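A bare-bones sketch of this procedure for the example equation (the step size dt = 0.01 is an arbitrary illustration):

y, t, dt = 0.5, 0.0, 0.01
while y < 0.8:
    y_prev, t_prev = y, t
    y += 0.01 * y * (1 - y) * dt  # Euler step with the current gradient
    t += dt
# interpolate between the last two points for the crossing time
t_cross = t_prev + (0.8 - y_prev) * (t - t_prev) / (y - y_prev)
print(t_cross)  # close to the exact 100*ln(4) = 138.63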
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5, 0)  # y(0) = 0.5 at t = 0

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint
ts = np.arange(0,3000,1) # time series - start, stop, step
def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5]) # initial value (note y = 1 is a fixed point of this ODE, so don't start there)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts match those of ys). (This may not work the first time because I am constructing it from memory.)
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
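For instance (a sketch, assuming the ts/ys arrays above and a monotonically increasing solution):

# invert the solution: find the time at which y first reaches 0.8
t_hit = np.interp(0.8, ys[:, 0], ts)
print(t_hit)  # ~138.6 for y0 = 0.5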
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint documentation on the scipy website has examples for systems such as coupled springs, and these could be extended.
What you are asking for is an ODE integrator with root finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
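For reference, newer SciPy versions ship this functionality directly: scipy.integrate.solve_ivp accepts an events argument that does root finding on the solution. A minimal sketch for the example problem:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return 0.01 * y * (1 - y)

def hit_target(t, y):
    return y[0] - 0.8       # solve_ivp locates the root of this along the solution

hit_target.terminal = True  # stop the integration when the event fires

sol = solve_ivp(rhs, (0, 3000), [0.5], events=hit_target)
print(sol.t_events[0])      # [138.62...]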
Model I-V.
Method:
Perform an integral, as a function of E, which outputs a current for each voltage value used. This is repeated for an array of v_values. The equation is
I(V) = (1/(e*r)) * Integral[ |E|/sqrt(E^2 - \Delta^2) * |E+eV|/sqrt((E+eV)^2 - \Delta^2) * (f(E+eV) - f(E)) ] dE,
where f(E) = 1/(1 + exp(E/kT)).
Although the limits in this equation range from -inf to inf, the limits must be restricted so that (E+eV)^2-\Delta^2>0 and E^2-\Delta^2>0, to avoid poles. (\Delta_1 = \Delta_2). Therefore there are currently two integrals, with limits from -inf to -gap-e*v and gap to inf.
However, I keep getting a math range error, although I believe I have excluded the troublesome E values by using the limits stated above. Pastie of errors: http://pastie.org/private/o3ugxtxai8zbktyxtxuvg
Apologies for the vagueness of this question. But, can anybody see obvious mistakes or code misuse?
My attempt:
from scipy import integrate
from numpy import *
import scipy as sp
import pylab as pl
import numpy as np
import math
e = 1.60217646*10**(-19)
r = 3000
gap = 400*10**(-6)*e
g = (gap)**2
t = 0.02
k = 1.3806503*10**(-23)
kt = k*t
v_values = np.arange(0,0.001,0.0001)
I=[]
for v in v_values:
    val, err = integrate.quad(lambda E:(1/(e*r))*(abs(E)/np.sqrt(abs(E**2-g)))*(abs(E+e*v)/(np.sqrt(abs((E+e*v)**2-g))))*((1/(1+math.exp((E+e*v)/kt)))-(1/(1+math.exp(E/k*t)))),-inf,(-gap-e*v)*0.9)
    I.append(val)
I = array(I)
I2=[]
for v in v_values:
    val2, err = integrate.quad(lambda E:(1/(e*r))*(abs(E)/np.sqrt(abs(E**2-g)))*(abs(E+e*v)/(np.sqrt(abs((E+e*v)**2-g))))*((1/(1+math.exp((E+e*v)/kt)))-(1/(1+math.exp(E/k*t)))),gap*0.9,inf)
    I2.append(val2)
I2 = array(I2)
I[np.isnan(I)] = 0
I2[np.isnan(I2)] = 0
pl.plot(v_values,I,'-b',v_values,I2,'-b')
pl.show()
This question is better suited for the Computational Science site. Still here are some points for you to think about.
First, the range of integration is the intersection of (-oo, -eV-gap) U (-eV+gap, +oo) and (-oo, -gap) U (gap, +oo). There are two possible cases:
- if eV < 2*gap, then the allowed energy values are in (-oo, -eV-gap) U (gap, +oo);
- if eV > 2*gap, then the allowed energy values are in (-oo, -eV-gap) U (-eV+gap, -gap) U (gap, +oo).
Second, you are working in a very low temperature region. With t equal to 0.02 K, the denominator in the Boltzmann factor is 1.7 µeV, while the energy gap is 400 µeV. In this case the value of the exponent is huge for positive energies, and it soon goes beyond the limits of the double-precision floating point numbers used by Python. As this is the minimum possible positive energy, things would not get any better at higher energies. With negative energies the value would always be very close to zero. Note that at this temperature the Fermi-Dirac distribution has a very sharp edge and resembles a reflected theta function. At E = gap you would have exp(E/kT) of approximately 6.24E+100. You would run out of range when E/kT > 709.78, i.e. E > 3.06*gap.
Yet it makes no sense to go to such energies, since at that temperature the difference between the two Fermi functions very quickly becomes zero outside the [-eV, 0] interval, which falls entirely inside the gap for the given temperature when V < (2*gap)/e (0.8 mV). That's why one would expect the current to be very close to zero when the bias voltage is less than 0.8 mV. When it is more than 0.8 mV, the main value of the integral would come from the integrand in (-eV+gap, -gap), although some non-zero value would come from the region near the singularity at E = gap and some from the region near the singularity at E = -eV-gap. You should not avoid the singularities in the DoS, otherwise you would not get the expected discontinuities (vertical lines) in the I(V) curve (such as the I-V curve of an SIS junction shown on Wikipedia).
Rather, you have to derive equivalent approximate expressions in the vicinity of each singularity and integrate them instead.
As you can see, there are many special cases for the value of the integrand and you have to take them all into account when computing numerically. If you don't want to do that, you should probably turn to some other mathematical package like Maple or Mathematica. These have much more sophisticated numerical integration routines and might be able to directly handle your formula.
Note that this is not an attempt to answer your question but rather a very long comment that would not fit in any comment field.
The reason for the math range error is that your exponential goes to infinity. Taking v = 0.0009 and E = 5.18e-23, the expression exp((E + e*v) / kt) (I corrected the typo pointed out by Hristo Liev in your Python expression) is exp(709.984..) which is beyond the range you can represent with double precision numbers (up to ca. 1E308).
Two additional notes:
- As noted by others, you should probably rescale your equation by using a unit system which delivers numbers in a smaller range. Maybe atomic units are a possible choice, as they would set e = 1, but I did not try to convert your equation. (Your timestep would then probably become quite large, as in atomic units the time unit is about 1/40 fs.)
- Usually, one uses exponential notation for floating point numbers: e = 1.60217E-19 instead of e = 1.60217*10**(-19).
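On the overflow itself: independent of units, the Fermi factors can be evaluated without ever forming a huge exponential by using scipy.special.expit, since 1/(1+exp(x)) = expit(-x). A sketch with the constants from the question:

from scipy.special import expit

e = 1.60217646e-19
k = 1.3806503e-23
t = 0.02
kt = k*t

def fermi(E):
    # equals 1/(1+exp(E/kt)) but saturates to 0 or 1 instead of overflowing
    return expit(-E/kt)

print(fermi(5.18e-23 + e*0.0009) - fermi(5.18e-23))  # no math range error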
The best way to approach this problem in the end was to use a Heaviside function to prevent the E variable from exceeding the \Delta variable.