Model I-V in Python

Model I-V.
Method:
Perform an integral over E that outputs the current for each voltage value used; this is repeated for an array of v_values. The equation can be found below.
Although the limits in this equation range from -inf to inf, they must be restricted so that (E+eV)^2 - Δ^2 > 0 and E^2 - Δ^2 > 0, to avoid the poles (here Δ_1 = Δ_2 = Δ, called gap in the code). Therefore there are currently two integrals, with limits from -inf to -gap - e*v and from gap to inf.
However, I keep returning a math range error although I believe I have excluded the troublesome E values by using the limits stated above. Pastie of errors: http://pastie.org/private/o3ugxtxai8zbktyxtxuvg
Apologies for the vagueness of this question. But, can anybody see obvious mistakes or code misuse?
My attempt:
from scipy import integrate
import numpy as np
import pylab as pl
import math

e = 1.60217646*10**(-19)    # electron charge (C)
r = 3000                    # junction resistance
gap = 400*10**(-6)*e        # superconducting gap (J)
g = (gap)**2
t = 0.02                    # temperature (K)
k = 1.3806503*10**(-23)     # Boltzmann constant (J/K)
kt = k*t
v_values = np.arange(0, 0.001, 0.0001)

I = []
for v in v_values:
    val, err = integrate.quad(lambda E: (1/(e*r))*(abs(E)/np.sqrt(abs(E**2-g)))*(abs(E+e*v)/(np.sqrt(abs((E+e*v)**2-g))))*((1/(1+math.exp((E+e*v)/kt)))-(1/(1+math.exp(E/k*t)))), -np.inf, (-gap-e*v)*0.9)
    I.append(val)
I = np.array(I)

I2 = []
for v in v_values:
    val2, err = integrate.quad(lambda E: (1/(e*r))*(abs(E)/np.sqrt(abs(E**2-g)))*(abs(E+e*v)/(np.sqrt(abs((E+e*v)**2-g))))*((1/(1+math.exp((E+e*v)/kt)))-(1/(1+math.exp(E/k*t)))), gap*0.9, np.inf)
    I2.append(val2)
I2 = np.array(I2)

I[np.isnan(I)] = 0
I2[np.isnan(I2)] = 0
pl.plot(v_values, I, '-b', v_values, I2, '-b')
pl.show()

This question is better suited for the Computational Science site. Still, here are some points for you to think about.
First, the range of integration is the intersection of (-oo, -eV-gap) U (-eV+gap, +oo) and (-oo, -gap) U (gap, +oo). There are two possible cases (a small helper sketching them follows the list):
if eV < 2*gap then the allowed energy values are in (-oo, -eV-gap) U (gap, +oo);
if eV > 2*gap then the allowed energy values are in (-oo, -eV-gap) U (-eV+gap, -gap) U (gap, +oo).
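A small helper expressing these two cases (a sketch; the function name is illustrative, and ev stands for the bias energy e*V in the same units as gap):
import numpy as np

def allowed_intervals(ev, gap):
    # E-ranges on which both (E+eV)^2 - gap^2 > 0 and E^2 - gap^2 > 0
    if ev < 2*gap:
        return [(-np.inf, -ev - gap), (gap, np.inf)]
    return [(-np.inf, -ev - gap), (-ev + gap, -gap), (gap, np.inf)]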
Second, you are working in a very low temperature region. With t equal to 0.02 K, the denominator kT in the Boltzmann factor is about 1.7 µeV, while the energy gap is 400 µeV. The exponent is therefore huge for positive energies and quickly exceeds the range of the double-precision floating-point numbers used by Python: at E = gap you would already have exp(E/kT) of approximately 6.24E+100, and since this is the minimum possible positive energy, things only get worse at higher energies. You run out of range when E/kT > 709.78, i.e. when E > 3.06*gap. For negative energies the value is always very close to zero. Note that at this temperature the Fermi-Dirac distribution has a very sharp edge and resembles a reflected theta function.
Yet it makes no sense to go to such energies, since at that temperature the difference between the two Fermi functions very quickly becomes zero outside the [-eV, 0] interval, which falls entirely inside the gap when V < (2*gap)/e (0.8 mV). That's why one would expect the current to be very close to zero when the bias voltage is less than 0.8 mV. When it is more than 0.8 mV, the main value of the integral comes from the integrand in (-eV+gap, -gap), although some non-zero contribution comes from the region near the singularity at E = gap and some from the region near the singularity at E = -eV-gap. You should not avoid the singularities in the DoS, otherwise you will not get the expected discontinuities (vertical lines) in the I(V) curve (the image, omitted here, was taken from Wikipedia).
Rather, you have to derive equivalent approximate expressions in the vicinity of each singularity and integrate them instead.
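One way to keep the singular points inside the integration range without stepping around them by hand is quad's points argument, which flags the locations of integrable singularities; it only works on finite intervals, so the infinite tails must be integrated separately. A toy sketch:
import numpy as np
from scipy import integrate

gap = 1.0

def f(E):
    # toy integrand with an integrable 1/sqrt singularity at E = gap
    return 1.0/np.sqrt(abs(E**2 - gap**2))

# points= tells quad where the local difficulty sits inside the finite interval
val, err = integrate.quad(f, 0.5*gap, 3*gap, points=[gap])
print(val)   # finite, despite f diverging at E = gap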
As you can see, there are many special cases for the value of the integrand and you have to take them all into account when computing numerically. If you don't want to do that, you should probably turn to some other mathematical package like Maple or Mathematica. These have much more sophisticated numerical integration routines and might be able to directly handle your formula.
Note that this is not an attempt to answer your question but rather a very long comment that would not fit in any comment field.

The reason for the math range error is that your exponential goes to infinity. Taking v = 0.0009 and E = 5.18e-23, the expression exp((E + e*v) / kt) (I corrected the typo pointed out by Hristo Liev in your Python expression) is exp(709.984...), which is beyond the range you can represent with double-precision numbers (up to ca. 1E308).
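The cutoff is easy to check, and scipy.special.expit gives an overflow-safe way to evaluate the Fermi factors, since it computes 1/(1+exp(-x)) without forming a huge intermediate (a sketch; kt as defined in the question's code):
import math
from scipy.special import expit

print(math.exp(709.78))   # about 1.8e308, just inside double precision
# math.exp(710)           # raises OverflowError: math range error

def fermi(E, kt):
    # 1/(1 + exp(E/kt)) rewritten as expit(-E/kt); saturates smoothly to 0 or 1
    return expit(-E/kt)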
Two additional notes:
As noted by others, you should probably rescale your equation by using a unit system that yields numbers in a smaller range. Atomic units are one possible choice, as they set e = 1, but I did not try to convert your equation into them. (Your timestep would then probably become quite large, as the atomic unit of time is about 1/40 fs.) A dimensionless sketch follows these notes.
Usually, one uses exponential notation for floating-point numbers: e = 1.60217E-19 instead of e = 1.60217*10**(-19).
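A dimensionless version of the integrand is sketched below, measuring energies in units of the gap so that every number the integrator sees is of order one (variable names are illustrative):
import numpy as np
from scipy.special import expit

e = 1.60217646E-19        # C
gap = 400E-6*e            # J
kt = 1.3806503E-23*0.02   # J
tau = kt/gap              # dimensionless temperature, about 4.3e-3

def integrand_dimless(x, u):
    # x = E/gap, u = e*V/gap; same structure as the original integrand
    dos = abs(x)/np.sqrt(abs(x**2 - 1)) * abs(x + u)/np.sqrt(abs((x + u)**2 - 1))
    occ = expit(-(x + u)/tau) - expit(-x/tau)   # overflow-safe Fermi factors
    return dos*occ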

In the end, the best way to approach this problem was to use a Heaviside function to prevent the E variable from exceeding Δ.

Related

Solving log-transformed ODE system without overflow error

I have a system of ODEs where my state variables and independent variable span many orders of magnitude (initial values are around 0 at t=0 and are expected to become about 10¹⁰ by t=10¹⁷). I also want to ensure that my state variables remain positive.
According to this Stack Overflow post, one way to enforce positivity is to log-transform the ODEs, solving for the evolution of the logarithm of a variable instead of the variable itself. However, when I try this with my ODEs, I get an overflow error, probably because of the huge dynamic range / orders of magnitude of my state variables and time variable. Am I doing something wrong or is log-transform just not applicable in my case?
Here is a minimal working example that is successfully solved by scipy.integrate.solve_ivp:
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp
# initialize times at which we are given certain input quantities/parameters
# this is seconds corresponding to the age of the universe in billions of years
times = np.linspace(0.1,10,500) * 3.15e16
# assume we are given the amount of new mass flowing into the system in units of g/sec
# for this toy example we will assume a log-normal distribution and then interpolate it for our integrator function
mdot_grow_array = np.random.lognormal(mean=0,sigma=1,size=len(times))*1.989e33 / 3.15e7
interp_grow = interp1d(times,mdot_grow_array,kind='cubic')
# assume there is also a conversion efficiency for some fraction of mass to be converted to another form
# for this example we'll assume the fractions are drawn from a uniform random distribution and again interpolate
mdot_convert_array = np.random.uniform(0,0.1,len(times)) / 3.15e16 # fraction of M1 per second converted to M2
interp_convert = interp1d(times,mdot_convert_array,kind='cubic')
# set up our integrator function
def integrator(t, y):
    print('Working on t=', t/3.15e16)   # to check status of integration in billions of years
    # unpack state variables
    M1, M2 = y
    # get the interpolated value of new mass flowing in at this time
    mdot_grow_now = interp_grow(t)
    mdot_convert_now = interp_convert(t)
    # assume some fraction of the mass gets converted to another form
    mdot_convert = mdot_convert_now * M1
    # return the derivatives
    M1dot = mdot_grow_now - mdot_convert
    M2dot = mdot_convert
    return M1dot, M2dot
# set up initial conditions and run solve_ivp for the whole time range
# should start with M1=M2=0 initially but then solve_ivp does not work at all, so just use [1,1] instead
initial_conditions = [1.0,1.0]
# note how the integrator gets stuck at very small timesteps early on
sol = solve_ivp(integrator,(times[0],times[-1]),initial_conditions,dense_output=True,method='RK23')
And here is the same example but now log-transformed following the Stack Overflow post referenced above (since dlogx/dt = 1/x * dx/dt, we simply replace the LHS with x*dlogx/dt and divide both sides by x to isolate dlogx/dt on the LHS; and we make sure to use np.exp() on the state variables – now logx instead of x – within the integrator function):
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp
# initialize times at which we are given certain input quantities/parameters
# this is seconds corresponding to the age of the universe in billions of years
times = np.linspace(0.1,10,500) * 3.15e16
# assume we are given the amount of new mass flowing into the system in units of g/sec
# for this toy example we will assume a log-normal distribution and then interpolate it for our integrator function
mdot_grow_array = np.random.lognormal(mean=0,sigma=1,size=len(times))*1.989e33 / 3.15e7
interp_grow = interp1d(times,mdot_grow_array,kind='cubic')
# assume there is also a conversion efficiency for some fraction of mass to be converted to another form
# for this example we'll assume the fractions are drawn from a uniform random distribution and again interpolate
mdot_convert_array = np.random.uniform(0,0.1,len(times)) / 3.15e16 # fraction of M1 per second converted to M2
interp_convert = interp1d(times,mdot_convert_array,kind='cubic')
# set up our integrator function
def integrator(t, logy):
    print('Working on t=', t/3.15e16)   # to check status of integration in billions of years
    # unpack state variables (logy holds the logs of M1 and M2)
    M1, M2 = np.exp(logy)
    # get the interpolated value of new mass flowing in at this time
    mdot_grow_now = interp_grow(t)
    mdot_convert_now = interp_convert(t)
    # assume some fraction of the mass gets converted to another form
    mdot_convert = mdot_convert_now * M1
    # return the derivatives of the logs
    M1dot = (mdot_grow_now - mdot_convert) / M1
    M2dot = (mdot_convert) / M2
    return M1dot, M2dot
# set up initial conditions and run solve_ivp for the whole time range
# should start with M1=M2=0 initially but then solve_ivp does not work at all, so just use [1,1] instead
initial_conditions = [1.0,1.0]
# note how the integrator gets stuck at very small timesteps early on
sol = solve_ivp(integrator,(times[0],times[-1]),initial_conditions,dense_output=True,method='RK23')
[…] is log-transform just not applicable in my case?
I don’t know where your transform went wrong, but it will certainly not achieve what you think it does. Log-transforming as a means to avoid negative values makes sense and works if and only if the following two conditions hold:
If the value of a dynamical variable approaches zero (from above), its derivative also approaches zero (from above) in your model.
Due to numerical noise, your derivative may turn negative though it actually isn’t.
Conversely, it is not necessary or doesn’t work in the following cases:
If Condition 1 fails because your derivative never approaches zero in your model, but is strictly positive, you have no problem to begin with, as your derivative should not become negative in any reasonable implementation of your model. (You might make it happen by implementing some spectacular numerical annihilation, but that’s quite a difficult feat to achieve and not what I would consider a reasonable implementation.)
If Condition 1 fails because your derivative becomes truly negative in your model, logarithms won’t save you, because the dynamics wants to push the derivative below zero and the logarithms cannot represent this. You usually get an overflow error due to the logarithms becoming extremely negative or the adaptive integration fails.
Even if Condition 1 applies, Condition 2 can usually be handled by avoiding numerical annihilations and similar when implementing your model.
Unless I am mistaken, your model falls into the first category. If M1 goes to zero, mdot_convert goes towards zero and thus M1dot = mdot_grow_now - mdot_convert is strictly positive, because mdot_grow_now is. M2dot is strictly positive anyway. Thus, you gain nothing from log-transforming. In fact, in the vast majority of cases, your dynamical variables will quickly increase.
With all that being said, some things you might want to look into are:
Normalising your variables to be of order of magnitude 1 (a sketch follows this list).
Stochastic differential equations.
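A minimal sketch of the first suggestion, reusing interp_grow and interp_convert from the question and rescaling mass to units of 10^33 g and time to units of 3.15e16 s (the unit choices are illustrative):
M_UNIT = 1.989e33   # g, one solar mass
T_UNIT = 3.15e16    # s, about one billion years

def integrator_scaled(ts, ys):
    # ts and ys are the dimensionless time and masses
    t = ts*T_UNIT
    M1 = ys[0]*M_UNIT
    mdot_grow_now = interp_grow(t)        # g/s, as in the original code
    mdot_convert = interp_convert(t)*M1   # g/s
    scale = T_UNIT/M_UNIT                 # converts g/s into the dimensionless rate
    return [(mdot_grow_now - mdot_convert)*scale, mdot_convert*scale]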

Find intersection of A(x) and B(y) in complex plane plus corr. x and y

suppose I have the following Problem:
I have a complex function A(x) and a complex function B(y). I know these functions cross in the complex plane. I would like to find out the corresponding x and y of this intersection point, numerically ( and/or graphically). What is the most clever way of doing that?
This is my starting point:
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi
x = np.linspace(1, 10, 10000)
y = np.linspace(1, 60, 10000)
def A_(x):
    return -1/( 8/(pi*x)*sqrt(1-(1/x)**2) - 1j*(8/(pi*x**2)) )
A = np.vectorize(A_)

def B_(y):
    return 3/(1j*y*(1+1j*y))
B = np.vectorize(B_)
real_A = np.real(A(x))
imag_A = np.imag(A(x))
real_B = np.real(B(y))
imag_B = np.imag(B(y))
plt.plot(real_A, imag_A, color='blue')
plt.plot(real_B, imag_B, color='red')
plt.show()
I don't have to plot it necessarily. I just need x_intersection and y_intersection (with some error that depends on x and y).
Thanks a lot in advance!
EDIT:
I should have used different variable names. To clarify what I need:
x and y are numpy arrays and I need the index of the intersection point for each array plus the corresponding x and y values (which, again, are not the intersection point itself, but values from the arrays x and y).
Here I find the minimum of the distance between the two curves. Also, I cleaned up your code a bit (e.g., vectorize wasn't doing anything useful).
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi
from scipy import optimize
def A(x):
    return -1/( 8/(pi*x)*sqrt(1-(1/x)**2) - 1j*(8/(pi*x**2)) )

def B(y):
    return 3/(1j*y*(1+1j*y))

# The next three lines find the intersection
def dist(x):
    return abs(A(x[0]) - B(x[1]))

sln = optimize.minimize(dist, [1, 1])
# plotting everything....
a0, b0 = A(sln.x[0]), B(sln.x[1])
x = np.linspace(1, 10, 10000)
y = np.linspace(1, 60, 10000)
a, b = A(x), B(y)
plt.plot(a.real, a.imag, color='blue')
plt.plot(b.real, b.imag, color='red')
plt.plot(a0.real, a0.imag, "ob")
plt.plot(b0.real, b0.imag, "xr")
plt.show()
The specific x and y values at the intersection point are sln.x[0] and sln.x[1], since A(sln.x[0])=B(sln.x[1]). If you need the index, as you also mention in your edit, I'd use, for example, numpy.searchsorted(x, sln.x[0]), to find where the values from the fit would insert into your x and y arrays.
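For example (using the arrays from the code above):
ix = np.searchsorted(x, sln.x[0])   # index at which the fitted x would insert into x
iy = np.searchsorted(y, sln.x[1])   # likewise for y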
I think what's a bit tricky with this problem is that the space for graphing where the intersection is (ie, the complex plane) does not show the input space, but one has to optimize over the input space. It's useful for visualizing the solution, then, to plot the distance between the curves over the input space. That can be done like this:
X, Y = np.meshgrid(x, y)   # evaluate the distance over the whole input space
data = dist((X, Y))
fig, ax = plt.subplots()
im = ax.imshow(data, cmap=plt.cm.afmhot, interpolation='none',
extent=[min(x), max(x), min(y), max(y)], origin="lower")
cbar = fig.colorbar(im)
plt.plot(sln.x[0], sln.x[1], "xw")
plt.title("abs(A(x)-B(y))")
From this it seems much more clear how optimize.minimum is working -- it just rolls down the slope to find the minimum distance, which is zero in this case. But still, there's no obvious single visualization that one can use to see the whole problem.
For other intersections one has to dig a bit more. That is, @emma asked about other roots in the comments, and there I mentioned that there's no generally reliable way to find all roots of arbitrary equations, but here's how I'd go about looking for other roots. I won't lay out the complete program here, but just list the changes and plots as I go along.
First, it's obvious for the domain shown in my first plot that there's only one intersection, and that there are no intersections in the region to the left. The only place there could be another intersection is to the right, but for that I'll need to allow the sqrt in the def of A to take a negative argument without throwing an exception. An easy way to do this is to add 0j to the argument of the sqrt, like this: sqrt(1+0j-(1/x)**2). Then the plot with the intersection becomes
I plotted this over a broader range (x=np.linspace(-10, 10, 10000) and y=np.linspace(-400, 400, 10000)) and the above is the zoom of the only place where anything interesting is going on. This shows the intersection found above, plus the point where it looks like the two curves might touch (where the red curve, B, comes to a point nearly meeting the blue curve A going upward), so that's the new interesting thing, and the thing I'll look for.
A bit of playing around with limits, etc, show that B is coming to a point asymptotically, and the equation of B is obvious that it will go to 0 + 0j for large +/- y, so that's about all there is to say for B.
It's difficult to understand A from the above plot, so I'll look at the real and imaginary parts independently:
So it's not a crazy looking function, and the jumping between Re=const and Im=const is just the nature of sqrt(1 - x**-2), which is purely imaginary for abs(x)<1 and purely real for abs(x)>1.
It's pretty clear now that the other time the curves are equal is at y= +/-inf and x=0. And, quick look at the equations show that A(0)=0+0j and B(+/- inf)=0+0j, so this is another intersection point (though since it occurs at B(+/- inf), it's sort-of ambiguous on whether or not it would be called an intersection).
So that's about it. One other point to mention is that if these didn't have such an easy analytic solution, like it wasn't clear what B was at inf, etc, one could also graph/minimize, etc, by looking at B(1/y), and then go from there, using the same tools as above to deal with the infinity. So using:
def dist2(x):
    return abs(A(x[0]) - B(1./x[1]))
Where the min on the right is the one initially found, and the zero, now at x=-0 and 1./y=0 is the other one (which, again, isn't interesting enough to apply an optimizer here, but it could be interesting in other equations).
Of course, it's also possible to estimate this by just finding the minimum of the data that goes into the above graph, like this:
X, Y = np.meshgrid(x, y)
data = dist((X, Y))
r = np.unravel_index(data.argmin(), data.shape)
print(x[r[1]], y[r[0]])
# 2.06306306306 1.8008008008 # min approach gave 2.05973231 1.80069353
But this is only approximate (to the resolution of data) and involved many more calculations (1M compared to a few hundred). I only post this because I think it might be what the OP originally had in mind.
Briefly, two analytic solutions are derived for the roots of the problem. The first solution removes the parametric representation of x and solves for the roots directly in the (u, v) plane, where for example A(x): u(x) + i v(x) gives v(u) = f(u). The second solution uses a polar representation, e.g. A(x) is given by r(x) exp(i theta(x)), and offers a better understanding of the behavior of the square root as x passes through unity towards zero. Possible solutions occurring at the singular points are explored. Finally, a bisection root finding algorithm is constructed as a Python iterator to invert certain solutions. Summarizing, the one real root can be found as a solution to either of the following equations:
and gives:
x0 = -2.059732
y0 = +1.800694
A(x0) = B(y0) = (-0.707131, -i 0.392670)
As in most problems, there are a number of ways to proceed. One can use a "black box" and hopefully find the root they are looking for. Sometimes an answer is all that is desired, and with a little understanding of the functions this may be an adequate way forward. Unfortunately, it is often true that such an approach will provide less insight about the problem than others.
For example, algorithms find it difficult to locate roots in the global space. Local roots may be found while other roots lying close by remain undiscovered. Consequently, the question arises: "Are all the roots accounted for?" A more complete understanding of the functions, e.g. asymptotic behaviors, branch cuts, singular points, can provide the global perspective needed to better answer this, as well as other important questions.
So another possible solution would be building one's own "black box." A simple bisection routine might be a starting point. Robust if the root lies in the initial interval and fairly efficient. This encourages us to look at the global behavior of the functions. As the code is structured and debugged the various functions are explored, new insights are gained, and the algorithm has become a tool towards a more complete solution to the problem. Perhaps, with some patience, a closed-form solution can be found. A Python iterator is constructed and listed below implementing a bisection root finding algorithm.
Begin by putting the functions A(x) and B(x) in a more standard form:
C(x) = u(x) + i v(x)
and here the complex number i is brought out of the denominator and into the numerator, casting the problem into the form of functions of a complex variable. The new representation simplifies the original functions considerably. The real and imaginary parts are now clearly separated. An interesting graph is to plot A(x) and B(x) in the 3-dimensional space (u, v, x) and then visualize the projection into the u-v plane.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D   # registers the 3d projection

# f and g are the parametric curves (e.g. A and B above); (a, b) is the parameter range
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
s = np.linspace(a, b, 1000)
z = s   # the parameter itself serves as the third axis of the (u, v, x) space
ax.plot(f(s).real, f(s).imag, z, color='blue')
ax.plot(g(s).real, g(s).imag, z, color='red')
ax.plot(f(s).real, f(s).imag, 0, color='black')   # projection into the u-v plane
ax.plot(g(s).real, g(s).imag, 0, color='black')
The question arises: "Can the parametric representation be replaced so that a relationship such as,
A(x): u(x) + i v(x) gives v(u) = f(u)
is obtained?" This will provide A(x) as a function v(u) = f(u) in the u-v plane. Then, if for
B(x): u(x) + i v(x) gives v(u) = g(u)
a similar relationship can be found, the solutions can be set equal to one another,
f(u) = g(u)
and the root(s) computed. In fact, it is convenient to look for a solution in the square of the above equation. The worst case is that an algorithm will have to be built to find the root, but at this point the behavior of our functions are better understood. For example, if f(u) and g(u) are polynomials of degree n then it is known that there are n roots. The best case is that a closed-form solution might be a reward for our determination.
Here is more detail to the solution. For A(x) the following is derived:
and v(u) = f(u) is just v(u) = constant. Similarly for B(x) a slightly more complex form is required:
Look at the function g(u) for B(x). It is imaginary if u > 0, but the root must be real since f(u) is real. This means that u must be less than 0, and there are both a positive and a negative real branch to the square root. The sign of f(u) then allows one to pick the negative branch as the solution for the root. So the fact that the solution must be real is determined by the sign of u, and the fact that the real root is negative specifies which branch of the square root to choose.
In the following plot both the real (u < 0) and complex (u > 0) solutions are shown.
The camera looks toward the origin in the back corner, where the red and blue curves meet. The z-axis is the magnitude of f(u) and g(u). The x and y axes are the real/complex values of u respectively. The blue curves are the real solution with (3 - |u|). The red curves are the complex solution with (3 + |u|). The two sets meet at u = 0. The black curve is f(u) equal to (-pi/8).
There is a divergence in g(u) at |u| = 3 and this is associated with x = 0. It is far removed from the solution and will not be considered further.
To obtain the roots to f = g it is easier to square f(u) and equate the two functions. When the function g(u) is squared the branches of the square root are lost, much like squaring the solutions for x**2 = 4. In the end the appropriate root will be chosen by the sign of f(u) and so this is not an issue.
So by looking at the dependence of A and B, with respect to the parametric variable x, a representation for these functions was obtained where v is a function of u and the roots found. A simpler representation can be obtained if the term involving c in the square root is ignored.
The answer gives all the roots to be found. A cubic equation has at most three roots and one is guaranteed to be real. The other two may be imaginary or real. In this case the real root has been found and the other two roots are complex. Interestingly, as c changes these two complex roots may move into the real plane.
In the above figure the x-axis is u and the y axis is the evaluated cubic equation with constant c. The blue curve has c as (pi/8) squared. The red curve uses a larger and negative value for c, and has been translated upwards for purposes of demonstration. For the blue curve there is an inflection point near (0, 0.5), while the red curve has a maximum at (-0.9, 2.5) and a minimum at (0.9, -0.3).
The intersection of the cubic with the black line represents the roots given by: u**3 + c u + 3c = 0. For the blue curve there is one intersection and one real root with two roots in the complex plane. For the red curve there are three intersections, and hence 3 roots. As we change the value of the constant c (blue to red) the one real root undergoes a "pitchfork" bifurcation, and the two roots in the complex plane move into the real space.
Although the root (u_t, v_t) has been located, obtaining the value for x requires that (u, v) be inverted. In the present example this is a trivial matter, but if not, a bisection routine can be used to avoid the algebraic difficulties.
The parametric representation also leads to a solution for the real root, and it rounds out the analysis with an independent verification of the first result. Second, it answers the question about what happens at the singularity in the square root. Third, it gives a greater understanding of the multiplicity of roots.
The steps are: (1) convert A(x) and B(x) into polar form, (2) equate the modulus and argument giving two equations in two unknowns, (3) make a substitution for z = x**2. Converting A(x) to polar form:
Absolute value bars are not indicated, and it should be understood that the moduli r(x) and s(x) are positive definite as their names imply. For B(x):
The two equations in two unknowns:
Finally, the cubic solution is sketched out here where the substitution z = x**2 has been made:
The solution for z = x**2 gives x directly, which allows one to substitute into both A(x) and B(x). This is an exact solution if all terms are maintained in the cubic solution, and there is no error in x0, y0, A(x0), or B(x0). A simpler representation can be found by considering terms proportional to 1/d as small.
Before leaving the polar representation consider the two singular points where: (1) abs(x) = 1, and (2) x = 0. A complicating factor is that the arguments behave something like 1/x instead of x. It is worthwhile to look at a plot of the arctan(a) and then ask yourself how that changes if its argument is replaced by 1/a. The following graphs will then look less foreign.
Consider the polar representation of B(x). As x approaches 0 the modulus and argument tend toward infinity, i.e. the point is infinitely far from the origin and lies along the y-axis. Approaching 0 from the negative direction the point lies along the negative y-axis with varphi = (-pi/2), while approaching from the other direction the point lies along the positive y-axis with varphi = (+pi/2).
A somewhat more complicated behavior is exhibited by A(x). A(x) is even in x since the modulus is positive definite and the argument involves only x**2. There is a symmetry across the y-axis that allows us to only consider the x > 0 plane.
At x = 1 the modulus is just (pi/8), and as x continues to approach 0 so does r(x). The behavior of the argument is more complex. As x approaches unity from large positive values the argument is diverging towards +inf and so theta is approaching (+pi/2). As x passes through 1 the argument becomes complex. At x equals 0 the argument has reached its minimum value of -i. For complex arguments the arctan is given by:
The following are plots of the arguments for A(x) and B(x). The x-axis is the value of x, and the y-axis is the value of the angle in units of pi. In the first plot theta is shown in blue curves, and as x approaches 1 the angle approaches (+pi/2). Theta is real because abs(x) >= 1, and notice it is symmetric across the y-axis. The black curve is varphi and as x approaches 0 it approaches plus or minus (pi/2). Notice it is an odd function in x.
In the second plot A(x) is shown where abs(x) < 1 and the argument becomes complex. Near x = 1 theta is equal to (+pi/2), the blue curve, minus a small imaginary part, the red curve. As x approaches zero theta is equal to (+pi/2) minus a large imaginary part. At x equals 0 the argument is equal to -i and theta = (+pi/2) minus an infinite imaginary part, i.e. ln(0) = -inf:
The values for x0 and y0 are determined by the set of equations that equate modulus and argument of A(x) and B(x), and there are no other roots. If x0 = 0 was a root, then it would fall out of these equations. The same holds for x0 = 1. In fact, if one uses approximations in the argument of A(x) about these points, and then substitutes into the equation for the modulus, the equality cannot be maintained there.
Here is another perspective: consider the set of equations where x is assumed large and call it x_inf. The equation for the argument then gives x_inf = y_inf, where 1 is neglected with respect to x_inf squared. Upon substitution into the second equation a cubic is obtained in x_inf. Will this give the correct answer? Yes, if x0 is actually large, and in this case you might get away with it since x0 is approximately 2. The difference between the sqrt(4) and the sqrt(5) is around 10%. But does this mean that x_inf = 100 is a solution? No it does not: x_inf is only a solution if it equals x0.
The initial reason for examining the problem in the first place was to find a context for building a root-finding bisection routine as a Python iterator. This can be used to find any of the roots discussed here, and looks something like this:
class Bisection:
    def __init__(self, a, b, func, max_iter):
        self.max_iter = max_iter
        self.count_iter = 0
        self.a = a
        self.b = b
        self.func = func
        fa = func(self.a)
        fb = func(self.b)
        if fa*fb >= 0.0:
            raise ValueError("func(a) and func(b) must differ in sign")

    def __iter__(self):
        self.x1 = self.a
        self.x2 = self.b
        self.xmid = self.x1 + (self.x2 - self.x1)/2.0
        return self

    def __next__(self):
        f1 = self.func(self.x1)
        fmid = self.func(self.xmid)
        if fmid == 0.0:
            return self.xmid
        # keep the half of the interval that still brackets the sign change
        if f1*fmid < 0:
            self.x2 = self.xmid
        else:
            self.x1 = self.xmid
        self.xmid = self.x1 + (self.x2 - self.x1)/2.0
        self.count_iter += 1
        if self.count_iter >= self.max_iter:
            raise StopIteration
        return self.xmid
The routine does only a minimal amount of exception catching and was used to find x for the given solution in the u-v plane. The arguments a and b give the lower and upper brackets for the root to be found. The argument func is the function whose root is sought; this might look like: u0 - B(x).real. The constant max_iter tells the iterator to stop after the given number of bisections has been attempted.
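For example, finding sqrt(2) as the root of x**2 - 2 on the bracket [1, 2] (an illustrative use, unrelated to the u-v problem):
roots = Bisection(1.0, 2.0, lambda x: x**2 - 2.0, max_iter=50)
x = None
for x in roots:   # each iteration yields the current midpoint
    pass          # the iterator stops itself after max_iter bisections
print(x)          # about 1.41421356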

Solving improper integral without approximating

I'm having trouble solving this integral in Python. The function being integrated is not defined on the boundaries of integration.
I've found a few more questions similar to this, but all were very specific replies to the issue in particular.
I don't want to approximate the integral too much, if possible not at all, as the reason I'm doing this integral in the first place is to avoid an approximation.
Is there any way to solve this integral?
import numpy as np
from scipy import integrate

m_Earth_air = (28.0134*0.78084)+(31.9988*0.209476)+(39.948*0.00934)+(44.00995*0.000314)+(20.183*0.00001818)+(4.0026*0.00000524)+(83.80*0.00000114)+(131.30*0.000000087)+(16.04303*0.000002)+(2.01594*0.0000005)
Tb0 = 288.15
Lb0 = -6.5
Hb0 = 0.0

def Tm_0(z):
    return Tb0 + Lb0*(z - Hb0)

k = 1.38*10**-19          # cm^2.kg/s^2.K, Boltzmann cst
mp = 1.67262177*10**-27   # kg
Rad = 637100000.0         # radius planet, cm
g0 = 980.665              # cm/s^2

def g(z):
    return g0*((Rad/(Rad+z))**2.0)

def scale_height0(z):
    return k*Tm_0(z*10**-5)/(m_Earth_air*mp*g(z))

def functionz(z, zvar):
    return np.exp(-zvar/scale_height0(z))*((Rad+zvar)/(Rad+z))/np.sqrt(((Rad+zvar)/(Rad+z))**2.0 - 1.0)

def chapman0(z):
    return (1.0/scale_height0(z))*(integrate.quad(lambda zvar: functionz(z, zvar), z, np.inf)[0])

print(chapman0(1000000))
print(chapman0(5000000))
The first block of variables and definitions is fine. The issue is in functionz(z, zvar) and its integration.
Any help very much appreciated !
Unless you can solve the integral analytically there is no way to solve it without an approximation over its bounds. This isn't a Python problem but a calculus problem in general, which is why math classes take such great pains to show you the numeric approximations.
If you don't want it to differ too much, choose a small epsilon with a method that converges fast.
Edit- Clarity on last statement:
Epsilon (ɛ) refers to the step size through the bounds of integration, the delta-x. Remember that the numeric approximation methods all slice the integral into slivers and add them back up; think of epsilon as the width of each sliver: the smaller the sliver, the better the approximation. You can specify these in numerical packages.
A method that converges fast approaches the true value of the integral quickly, so the error of approximation is small for each sliver. For example, the Riemann sum is a naive method which assumes each sliver is a rectangle, while the trapezoidal rule connects the beginning and the end of the sliver with a line to make a trapezoid. Of these two, the trapezoid typically converges faster, as it tries to account for the change within the shape. (Neither is typically used in practice, as there are better approximations for most functions.)
Both of these variables change the computational expense of the calculation. Typically epsilon is the most expensive to change, thus why it is important you choose a good method of approximation (some can differ by an order of magnitude in error for the same epsilon).
All of this will depend on how much error your calculation can tolerate.
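A quick illustration of the convergence difference between two methods at the same step size, integrating exp(x) over [0, 1] (exact value e - 1):
import numpy as np

x = np.linspace(0.0, 1.0, 101)   # 100 slivers
fx = np.exp(x)
dx = x[1] - x[0]
exact = np.e - 1.0

riemann = np.sum(fx[:-1])*dx   # left-endpoint rectangles
trap = np.trapz(fx, x)         # trapezoids

print(abs(riemann - exact))    # ~8.6e-3: first-order error
print(abs(trap - exact))       # ~1.4e-5: second-order error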
It often helps to eliminate possible numerical instabilities by rescaling variables. In your case zvar starts from 1e6, which is probably causing problems due to some implementation details in quad(). If you scale it as y = zvar / z, so that the integration starts from 1 it seems to converge pretty well for z = 1e6:
def functiony(z, y):
    return np.exp(-y*z/scale_height0(z))*(Rad+y*z)/(Rad+z) / np.sqrt(((Rad+y*z)/(Rad+z))**2.0 - 1.0)

def chapman0y(z):
    return (1.0/scale_height0(z))*(integrate.quad(lambda y: functiony(z, y), 1, np.inf)[0])
>>> print(chapman0y(1000000))
1.6217257661844094e-06
(I set m_Earth_air = 28.8e-3; this constant was missing from the code as originally posted, and I assumed it is the molar mass of air in kg/mol.)
As for z = 5e6, scale_height0(z) is negative, which gives a huge positive value under the exponent, making the integral divergent at infinity.
I had a similar issue and found that SciPy's quad needs some extra parameters: limit=1000 (an upper bound on the number of subintervals), epsabs=1e-1000, and epsrel=1e1 worked for everything I've tried. I.e. in this case:
def chapman0(z):
    return (1.0/scale_height0(z))*(integrate.quad(lambda zvar: functionz(z, zvar), z, np.inf, limit=1000, epsabs=1e-1000, epsrel=1e1)[0])
#results:
0.48529410529321887
-1.276589093231806e+21
That is a very loose relative error tolerance, but for integrals that don't converge rapidly it seems to fix the issue. Just posting for others with similar problems, as this post is quite dated. There are algorithms in other packages that converge faster, but none that I've found in SciPy. The results are based on the posted code (not the selected answer).

Calculate the exact integral in Python

I need to write a python code to calculate the exact value of the integral (-5, 5) of 1/(1+x^2).
I know the answer is 2arctan(5) which is roughly equivalent to 2.746801...
I have below the code I have written, however I am getting a slightly different answer and I was wondering if there is anything I can do to make this code more accurate? Thanks for any help!
## The function to be integrated
def func(x):
    return 1/(1 + x**2)

## Defining variables
a = -5.0
b = 5.0
dx = 1.0
Area = 0

## Number of trapezoids
n = int((b-a)/dx)

## Loop to calculate area and sum
for i in range(1, n+1):
    x0 = a + (i-1)*dx
    x1 = a + i*dx
    ## Area of each trapezoid
    Ai = dx*(func(x0) + func(x1))/2.0
    ## Cumulative sum of areas
    Area = Area + Ai

print("The exact value is: ", Area)
The answer I am getting is 2.756108...
I know it's a small difference, however, it is a difference and I would like to try for something more exact.
The reason you are getting an approximate value for the integral is because you are using an approximation technique (a first-order approximation to compute the value of the definite integral).
There are two ways to evaluate an integral: analytically or numerically (by approximation). Your method is of the second variety, and since it's an approximation it will generate a value that is within a certain margin of error of the real value.
The point of my answer is that there is no way for you to calculate the exact value of the integral using a numeric approach (definitely not in the case of this function). So you will have to settle for a certain margin of error that you're willing to accept and then choose a delta-x sufficiently small to get you within that range.
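To see this concretely, shrinking delta-x drives the trapezoidal estimate toward 2*arctan(5), and an adaptive routine such as scipy.integrate.quad reaches machine precision; a sketch building on the question's func:
import math
from scipy.integrate import quad

def func(x):
    return 1/(1 + x**2)

exact = 2*math.atan(5)
for dx in (1.0, 0.1, 0.01):
    n = int(10/dx)
    area = sum(dx*(func(-5 + i*dx) + func(-5 + (i + 1)*dx))/2 for i in range(n))
    print(dx, abs(area - exact))   # error shrinks roughly like dx**2

print(quad(func, -5, 5)[0])   # adaptive quadrature, accurate to roughly machine precision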

On ordinary differential equations (ODE) and optimization, in Python

I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write a product of two quotients as a sum of the two quotients * some constants:
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y), the second term can be integrated with a change of variables u = 1-y to -ln(1-y). Our integrated equation therefor looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve t for an exact value of y, we first need to work out the value of C. We do this using the initial conditions. It is clear that if y starts at 1, dy/dt = 0 and the value of y never changes. Thus plug in the values for y and t at the beginning
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Note that because of the logarithm and the factor 100 that multiplies the log term when t is isolated, y will reach 0.8 within a relatively short range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
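For concreteness, evaluating the result for an assumed initial value y(0) = 0.5 (any value strictly between 0 and 1 works the same way):
import math

y0 = 0.5                          # assumed initial value
C = math.log(y0/(1 - y0))         # from 0.01*0 + C = ln(y0/(1-y0)); here C = 0
t = 100*(math.log(0.8/0.2) - C)   # solve 0.01*t + C = ln(y/(1-y)) for y = 0.8
print(t)                          # about 138.6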
Edit: Numerical integration
For a more complexed ODE which cannot be solved analytically, you will have to try numerically. Initially we only know the value of the function at zero time y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
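A minimal Euler sketch for this equation, stepping until the trajectory passes the target (the step size and initial value are illustrative):
def deriv(y):
    return 0.01*y*(1 - y)

dt = 0.01         # assume the gradient is constant over each step of length dt
t, y = 0.0, 0.5   # assumed initial condition y(0) = 0.5
while y < 0.8 and t < 3000:
    y += deriv(y)*dt   # Euler step
    t += dt
print(t, y)   # t is approximately where y first passes 0.8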
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about problems like numerical stability or have an in-depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straight away. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)   # y(0) = 0.5; t0 defaults to 0
t = 0.1   # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
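Keeping the previous point, the crossing time can be sharpened by linear interpolation (a sketch; t_prev and y_prev are the values saved from the previous loop iteration):
# fraction of the last step at which y crossed 0.8, assuming linear growth over the step
frac = (0.8 - y_prev)/(y - y_prev)
t_cross = t_prev + frac*(t - t_prev)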
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)   # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1 - y)

y0 = np.array([0.5])   # initial value; note y = 1 is a fixed point of this equation, so start strictly between 0 and 1
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (dimensions of array ts matches ys). (This may not work first time because I am constructing from memory).
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
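For a monotonically increasing solution like this one, numpy's interp can read the crossing time straight off the arrays (a sketch based on the code above):
t_hit = np.interp(0.8, ys[:, 0], ts)   # interpolate t as a function of y at y = 0.8
print(t_hit)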
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; Odeint on the scipy website has examples for systems such as coupled springs that can be solved for, and these could be extended.
What you are asking for is a ODE integrator with root finding capabilities. They exist and the low-level code for such integrators is supplied with scipy, but they have not yet been wrapped in python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
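For reference, newer SciPy versions do expose this capability: scipy.integrate.solve_ivp accepts event functions whose zero crossings are located during the integration. A minimal sketch for the example problem:
from scipy.integrate import solve_ivp

def deriv(t, y):
    return 0.01*y*(1 - y)

def hit_target(t, y):
    return y[0] - 0.8   # zero exactly when y reaches 0.8

hit_target.terminal = True   # stop integrating at the first crossing

sol = solve_ivp(deriv, (0, 3000), [0.5], events=hit_target)
print(sol.t_events[0])   # time(s) at which y crossed 0.8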
