Solving improper integral without approximating - python

I'm having trouble solving this integral in Python. The function being integrated is not defined at the boundaries of integration.
I've found a few similar questions, but the answers were all very specific to those particular problems.
I don't want to approximate the integral too much (if possible, not at all), as the reason I'm doing this integral in the first place is to avoid an approximation.
Is there any way to solve this integral?
import numpy as np
from pylab import *
import scipy
from math import *
from scipy import integrate

m_Earth_air = (28.0134*0.78084)+(31.9988*0.209476)+(39.948*0.00934)+(44.00995*0.000314)+(20.183*0.00001818)+(4.0026*0.00000524)+(83.80*0.00000114)+(131.30*0.000000087)+(16.04303*0.000002)+(2.01594*0.0000005)

Tb0 = 288.15
Lb0 = -6.5
Hb0 = 0.0

def Tm_0(z):
    return Tb0 + Lb0*(z - Hb0)

k = 1.38*10**-19          # cm^2.kg/s^2.K  (Boltzmann constant)
mp = 1.67262177*10**-27   # kg
Rad = 637100000.0         # planet radius, cm
g0 = 980.665              # cm/s^2

def g(z):
    return g0*((Rad/(Rad + z))**2.0)

def scale_height0(z):
    return k*Tm_0(z*10**-5)/(m_Earth_air*mp*g(z))

def functionz(z, zvar):
    return np.exp(-zvar/scale_height0(z))*((Rad + zvar)/(Rad + z))/(np.sqrt(((Rad + zvar)/(Rad + z))**2.0 - 1.0))

def chapman0(z):
    return (1.0/scale_height0(z))*(integrate.quad(lambda zvar: functionz(z, zvar), z, np.inf)[0])

print(chapman0(1000000))
print(chapman0(5000000))
The first block of variables and definitions is fine. The issue is in functionz(z, zvar) and its integration.
Any help is very much appreciated!

Unless you can solve the integral analytically, there is no way to evaluate it without an approximation over its bounds. This isn't a Python problem but a calculus problem in general, which is why math classes take such great pains to show you the numeric approximations.
If you don't want the result to differ too much from the true value, choose a small epsilon and a method that converges fast.
Edit: to clarify the last statement:
Epsilon (ɛ) refers to the step size through the bounds of integration, the delta x. Remember that the numeric approximation methods all slice the integral into slivers and add them back up; think of epsilon as the width of each sliver: the smaller the sliver, the better the approximation. You can specify these in numerical packages.
A method that converges fast is one that approaches the true value of the integral quickly, with a small approximation error for each sliver. For example, the Riemann sum is a naive method that treats each sliver as a rectangle, while the trapezoid rule connects the beginning and the end of each sliver with a line to form a trapezoid. Of the two, the trapezoid rule typically converges faster, as it tries to account for the change within each sliver. (Neither is typically used in practice, as there are better choices for most functions.)
Both of these choices change the computational expense of the calculation. Typically epsilon is the most expensive to change, which is why it is important to choose a good method of approximation (methods can differ by an order of magnitude in error for the same epsilon).
All of this will depend on how much error your calculation can tolerate.
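As a small, purely illustrative example of this trade-off (not part of the original answer), here is a comparison of a left Riemann sum and the trapezoid rule on an integral whose value is known exactly; both shrinking epsilon and switching to the trapezoid rule reduce the error:
import numpy as np

# Known reference: the integral of x**2 from 0 to 1 is 1/3.
f = lambda x: x**2
exact = 1.0/3.0

for n in (10, 100, 1000):                 # number of slivers; epsilon = 1/n
    x = np.linspace(0.0, 1.0, n + 1)
    riemann = np.sum(f(x[:-1])) / n       # left Riemann sum
    trapezoid = np.trapz(f(x), x)         # trapezoid rule
    print(n, abs(riemann - exact), abs(trapezoid - exact))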

It often helps to eliminate possible numerical instabilities by rescaling variables. In your case zvar starts from 1e6, which is probably causing problems due to some implementation details in quad(). If you scale it as y = zvar / z, so that the integration starts from 1 it seems to converge pretty well for z = 1e6:
def functiony(z, y):
    return np.exp(-y*z/scale_height0(z))*(Rad + y*z)/(Rad + z) / np.sqrt(((Rad + y*z)/(Rad + z))**2.0 - 1.0)

def chapman0y(z):
    return (1.0/scale_height0(z))*(integrate.quad(lambda y: functiony(z, y), 1, np.inf)[0])
>>> print(chapman0y(1000000))
1.6217257661844094e-06
(I set m_Earth_air = 28.8e-3; this constant was missing from the code as originally posted, so I assumed it is the molar mass of air in kg/mole.)
As for z = 5e6, scale_height0(z) is negative, which gives a huge positive value in the exponent, making the integral divergent at infinity.

I had a similar issue and found that passing extra parameters to SciPy's quad helps: epsabs=1e-1000, limit=1000 (subdivision limit), and epsrel=1e1 worked for everything I've tried. I.e. in this case:
def chapman0(z):
    return (1.0/scale_height0(z))*(integrate.quad(lambda zvar: functionz(z, zvar), z, np.inf, limit=1000, epsabs=1e-1000, epsrel=1e1)[0])
#results:
0.48529410529321887
-1.276589093231806e+21
These are loose tolerances, but for integrals that don't converge rapidly they seem to fix the issue. I'm just posting this for others with similar problems, as this post is quite dated. There are algorithms in other packages that converge faster, but none that I've found in SciPy. The results above are based on the posted code (not the accepted answer).

Related

numpy roots() returns false roots

I'm trying to use numpy to find the roots of some polynomials, but I am getting some erroneous results:
>>> poly = np.polynomial.Polynomial([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
>>> roots = poly.roots()
>>> roots
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly(roots)
array([-3.74803539e+23, -7.99360578e-15, -1.89182003e-13])
What is up with the false root -1.64593692e+09 which results in -3.74803539e+23? This is clearly not a root.
Is this the result of floating-point errors, or something else?
And more importantly;
Is there a way to get around it?
Perhaps there's something I can tweak, or a different function I can use? Any help is much appreciated.
I found this and this previous question which seemed to be related, but after reading them and the answers/comments I don't think that they are the same problem.
First of all, computing the roots of a polynomial is a classically ill-conditioned problem, meaning (roughly) that no matter what algorithm you use to solve it, small changes in the coefficients of many polynomials can lead to huge changes in their roots. That means we should be a little careful not to place an extraordinary amount of faith in root-finding results in general, and perhaps we shouldn't be too surprised when a root finder gives weird results. There's a pretty good example on Wikipedia, Wilkinson's polynomial, that shows how things can go wrong.
In this instance, the coefficients of the polynomial of interest are of such different magnitudes that it's not surprising the results seem poor. But consider this: if our original polynomial is p() and it has a root x, then p(x) = 0, but also c*p(x) = 0 for any constant c. In other words, we can scale the coefficients without changing the roots, so what would happen if we normalized the polynomial by dividing by the coefficient of largest magnitude, 7e25?
Original polynomial: p(x) = 4.4 + 2.3e+14*x - 7.0e25*x**2 - 4.3e16*x**3
Scaled polynomial: p(x) = 6.3e-26 + 3.2e-12*x - x**2 - 6.1e-10*x**3
So for this polynomial, the largest coefficient ~7e25 is so huge that the smallest coefficient ~4.4 is essentially negligible. That should give us a hint that what counts as zero in a root finding iteration isn't what we would normally consider "small."
The short answer is that the root calculated by NumPy isn't perfect, but it is an estimate of an actual root. Here's some code to convince us.
>>> import numpy as np
>>> coefs = np.array([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
>>> coefs_normed = coefs / np.abs(coefs).max()
>>> coefs_normed
array([ 6.25524549e-26, 3.24916108e-12, -1.00000000e+00, -6.07556697e-10])
>>> poly = np.polynomial.Polynomial(coefs)
>>> roots = poly.roots()
>>> roots
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly(roots)
array([-3.74803539e+23, 8.43769499e-14, -1.89182003e-13])
>>> poly_normed = np.polynomial.Polynomial(coefs_normed)
>>> roots_normed = poly_normed.roots()
>>> roots_normed
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly_normed(roots_normed)
array([-5.34791419e-03, 1.20534089e-39, -2.11221641e-39])
Now, -5e-03 is not very close to machine epsilon, but that should convince us that maybe the calculated root isn't quite as bad as it seemed at first.
A final point: the np.polynomial.Polynomial class has domain and window arguments that determine how it does its computations. Since polynomials get absolutely huge as the domain tends to +infinity or -infinity, it's unrealistic to expect accurate calculations for a value around 10^9.
The root appears real:
x = np.linspace(-2e9, 1000, 10000)
plt.plot(x, poly(x))
The problem is that the scale of the data is very large. -3e23 is tiny compared to, say, 6e43. The discrepancy is caused by roundoff error. Third-order polynomials have an analytical solution, but it's not going to be numerically stable when your domain is on the order of 1e9.
You can try to use the domain and window parameters to introduce some numerical stability. For example, a common choice of domain is something that envelops your entire dataset. You would have to adjust the coefficients to compensate, since those values are usually used when fitting data.
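As an illustration of that kind of rescaling (my own sketch, not from either answer): substituting x = s*u multiplies the k-th coefficient by s**k, and roots found in u map back to x by multiplying by s. The scale factor s = 1e9 here is an assumption based on the magnitude of the suspect root; whether this actually improves conditioning depends on the coefficients.
import numpy as np

coefs = np.array([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])

s = 1e9                                       # assumed scale of the large root
k = np.arange(len(coefs))
coefs_scaled = coefs * s**k                   # coefficients of q(u) = p(s*u)
coefs_scaled /= np.abs(coefs_scaled).max()    # normalize as in the answer above

roots_u = np.polynomial.Polynomial(coefs_scaled).roots()
roots_x = s * roots_u                         # map the roots back to x
print(roots_x)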

Fitting data with function with several domains whose boundaries are variable

As the title mentions, I am having trouble fitting data points to a function with 3 domains whose boundaries are a parameter of my function. Here is the function I am dealing with:
global sigma_m
sigma_m = 2*10**(-12)
global sigma_f
sigma_f = 10**3

def Conductivity(phi, phi_c, t, s):
    sigma = [0]*len(phi)
    for i in range(0, len(phi)):
        if phi[i] < phi_c:
            sigma[i] = sigma_m*(phi_c - phi[i])**(-s)
        elif phi[i] == phi_c:
            sigma[i] = sigma_f*(sigma_m/sigma_f)**(t/(t + s))
        else:
            sigma[i] = sigma_f*(phi[i] - phi_c)**t
    return sigma
And my data points are:
phi_data=[0,0.005,0.007,0.008,0.017,0.05,0.085,0.10]
sigma_data=[2.00E-12,2.50E-12,3.00E-12,9.00E-04,1.00E-01,1.00E+00,2.00E+00,3.00E+00]
My constraints are that phi_c, s, and t must be strictly greater than zero (in practice, phi_c is rarely higher than 0.1 but higher than 0.001, s is usually between 0.5 and 1.5, and t is usually anywhere between 1.5 and 6).
My goal is to fit my data points and have my fit give me values of phi_c, s, and t. s and t can be estimated to help the code (in the specific set of data points that I showed, t should be around 2, and s should be around 0.5). phi_c is completely unknown, except for the range of values that I mentioned just above.
I have used both curve_fit from scipy and Model from lmfit but both provide ridiculously small phi_c values (like 10**(-16) or similarly small values that make me believe the programme wants phi_c to be negative).
Here is my code for when I used curve_fit:
popt, pcov = curve_fit(Conductivity, phi_data, sigma_data, p0=[0.01,2,0.5], bounds=(0,[0.5,10,3]))
Here is my code for when I used Model from lmfit:
t_estimate=0.5
s_estimate=2
phi_c_estimate=0.005
condmodel = Model(Conductivity)
params = condmodel.make_params(phi_c=phi_c_estimate,t=t_estimate,s=s_estimate)
result = condmodel.fit(sigma_data, params, phi=phi_data)
params['phi_c'].min = 0
params['phi_c'].max = 0.1
Both options give an okay fit when plotted, but the estimated value of phi_c is nowhere near plausible.
If you have any idea what I could do to have a better fit, please let me know!
PS: I have read a promising post about using the symfit package to fit the data on the different regions separately; unfortunately, symfit does not work for me. It keeps uninstalling my version of scipy and reinstalling an older version, and then it tells me it needs a newer version of scipy to function.
EDIT: I managed to make the symfit package work. Here is my entire code:
from symfit import parameters, variables, Fit, Piecewise, exp, Eq
import numpy as np
import matplotlib.pyplot as plt
global sigma_m
sigma_m=2*10**(-12)
global sigma_f
sigma_f=10**3
phi, sigma = variables ('phi, sigma')
t, s, phi_c = parameters('t, s, phi_c')
phi_c.min = 0.001
phi_c.max = 0.1
sigma1 = sigma_m*(phi_c-phi)**(-s)
sigma2 = sigma_f*(phi-phi_c)**t
model = {sigma: Piecewise ((sigma1, phi <= phi_c), (sigma2, phi > phi_c))}
constraints = [Eq(sigma1.subs({phi: phi_c}), sigma2.subs({phi: phi_c}))]
phi_data=np.array([0,0.005,0.007,0.008,0.017,0.05,0.085,0.10])
sigma_data=np.array([2.00E-12,2.50E-12,3.00E-12,9.00E-04,1.00E-01,1.00E+00,2.00E+00,3.00E+00])
fit = Fit(model, phi=phi_data, sigma=sigma_data, constraints=constraints)
fit_result = fit.execute()
print(fit_result)
Unfortunately I get the following error:
File "D:\Programs\Anaconda\lib\site-packages\sympy\printing\pycode.py", line 236, in _print_ComplexInfinity
return self._print_NaN(expr)
File "D:\Programs\Anaconda\lib\site-packages\sympy\printing\pycode.py", line 74, in _print_known_const
known = self.known_constants[expr.__class__.__name__]
KeyError: 'ComplexInfinity'
My knowledge of coding is very limited, I have no idea what this means and what I should do to not have this error anymore. Please let me know if you have an idea.
I'm not certain that I have a single answer for you, but this will be too long to fit into a comment.
First, a model that switches functional form is especially challenging. What's more, your form has
elif phi[i]==phi_c:
For floating point numbers that are variables, this is going to basically never be true. You might not mean "exactly equal" but "pretty close", which might be
elif abs(phi[i] - phi_c) < 1.0e-5:
or something...
But also, converting that from a for loop to using numpy.where() is probably worth looking into.
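For instance, a vectorized version might look something like the sketch below (my own sketch, not from the answer; the small floor inside np.clip is an arbitrary guard, needed because np.where evaluates both branches):
import numpy as np

sigma_m = 2e-12
sigma_f = 1e3

def conductivity_vec(phi, phi_c, t, s):
    phi = np.asarray(phi, dtype=float)
    # np.where evaluates both branches, so keep the bases strictly positive;
    # the phi == phi_c case from the original model is folded into the upper branch here
    below = sigma_m * np.clip(phi_c - phi, 1e-12, None)**(-s)
    above = sigma_f * np.clip(phi - phi_c, 1e-12, None)**t
    return np.where(phi < phi_c, below, above)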
Second, it is not at all clear that your different forms actually evaluate to the same values at the boundaries to ensure a continuous function. You might want to check that.
Third, models with powers and exponentials are especially challenging to fit, as a small change in a power can have a huge impact on the resulting value. It's also very easy to get a negative value raised to a non-integer power, which is, of course, complex.
Fourth, those sigma_m and sigma_f constants look like they could easily cause trouble. You should definitely evaluate your model with your starting parameter values and see if you can sort of reproduce your data with your model and reasonable starting values. I suspect that you'll need to change your starting values.
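A quick way to do that sanity check (my own sketch; the starting values phi_c=0.0075, t=2.0, s=0.5 are just ballpark guesses based on the ranges given in the question, and Conductivity is the function defined above):
import numpy as np
import matplotlib.pyplot as plt

phi_data = np.array([0, 0.005, 0.007, 0.008, 0.017, 0.05, 0.085, 0.10])
sigma_data = np.array([2.00e-12, 2.50e-12, 3.00e-12, 9.00e-04, 1.00e+00 * 0.1, 1.00e+00, 2.00e+00, 3.00e+00])

# Evaluate the model at the proposed starting values and compare to the data
sigma_start = Conductivity(phi_data, 0.0075, 2.0, 0.5)
plt.semilogy(phi_data, sigma_data, 'o', label='data')
plt.semilogy(phi_data, sigma_start, '-', label='model at starting values')
plt.legend()
plt.show()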

What is the difference between scipy.optimize's 'root' and 'fixed_point' methods

There are two methods in scipy.optimize which are root and fixed_point.
I am very surprised to find that root offers many methods, whereas fixed_point has just one. Mathematically the two problems are equivalent: the fixed points of g(x) are exactly the roots of f(x) via
f(x) = g(x) - x (equivalently, g(x) = f(x) + x).
How do I determine which function to use?
Also, none of the two methods allow me to specify the regions where the functions are defined. Is there a way to limit the range of x?
Summary: if you don't know what to use, use root. The method fixed_point merits consideration if your problem is naturally a fixed-point problem g(x) = x where it's reasonable to expect that iterating g will help in solving the problem (i.e., g has some non-expanding behavior). Otherwise, use root or something else.
Although every root-finding problem is mathematically equivalent to a fixed-point problem, it's not always beneficial to restate it as such from the numerical methods point of view. Sometimes it is, as in Newton's method. But the trivial restatement, replacing f(x) = 0 as g(x) = x with g(x) = f(x) + x is not likely to help.
The method fixed_point iterates the provided function, optionally with adjustments that make convergence faster / more likely. This is going to be problematic if the iterated values move away from the fixed point (a repelling fixed point), which can happen despite the adjustments. An example: solving exp(x) = 1 directly and as a fixed point problem for exp(x) - 1 + x, with the same starting point:
import numpy as np
from scipy.optimize import fixed_point, root
root(lambda x: np.exp(x) - 1, 3) # converges to 0 in 14 steps
fixed_point(lambda x: np.exp(x) - 1 + x, 3) # RuntimeError: Failed to converge after 500 iterations, value is 2.9999533400931266
To directly answer the question: the difference is in the methods being used. The fixed-point solver is quite simple; it iterates the given function, boosted by some acceleration of convergence. When that doesn't work (and often it doesn't), too bad. The root-finding methods are more sophisticated and more robust, so they should be preferred.
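To make the attracting-versus-repelling distinction concrete, here is a toy illustration of plain fixed-point iteration, without any acceleration (my own sketch, not from the answer):
import numpy as np

def iterate(g, x0, n):
    """Plain fixed-point iteration: x_{k+1} = g(x_k)."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# cos(x) has an attracting fixed point near 0.739: the iteration converges.
print(iterate(np.cos, 0.1, 50))

# exp(x) - 1 + x has a repelling fixed point at 0 (|g'(0)| = 2): starting
# even slightly away from 0, the iterates run away from the fixed point.
print(iterate(lambda x: np.exp(x) - 1 + x, 0.1, 5))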

Find global minimum using scipy.optimize.minimize

Given a 2D point p, I'm trying to calculate the smallest distance between that point and a functional curve, i.e., find the point on the curve which gives me the smallest distance to p, and then calculate that distance. The example function that I'm using is
f(x) = 2*sin(x)
My distance function for the distance between some point p and a provided function is
def dist(p, x, func):
    x = np.append(x, func(x))
    return sum((i - j)**2 for i, j in zip(x, p))
It takes as input, the point p, a position x on the function, and the function handle func. Note this is a squared Euclidean distance (since minimizing in Euclidean space is the same as minimizing in squared Euclidean space).
The crucial part of this is that I want to be able to provide bounds for my function so really I'm finding the closest distance to a function segment. For this example my bounds are
bounds = [0, 2*np.pi]
I'm using the scipy.optimize.minimize function to minimize my distance function, using the bounds. A result of the above process is shown in the graph below.
This is a contour plot showing distance from the sin function. Notice how there appears to be a discontinuity in the contours. For convenience, I've plotted a few points around that discontinuity and the "closest" points on the curve that they map to.
What's actually happening here is that the scipy function is finding a local minimum (given some initial guess), but not a global one, and that is causing the discontinuity. I know that guaranteeing the global minimum of an arbitrary function is impossible in general, but I'm looking for a more reliable way to find the global minimum.
Possible methods for finding a global minimum would be
Choose a smart initial guess, but this amounts to knowing approximately where the global minimum is to begin with, which is using the solution of the problem to solve it.
Use multiple initial guesses and choose the answer which reaches the best minimum. This, however, seems like a poor choice, especially when my functions get more complicated (and higher dimensional).
Find the minimum, then perturb the solution and find the minimum again, hoping that I may have knocked it into a better minimum. I'm hoping that maybe there is some way to do this simply without invoking some complicated MCMC algorithm or something like that. Speed counts for this process.
Any suggestions about the best way to go about this, or possibly directions to useful functions that may tackle this problem would be great!
As suggested in a comment, you could try a global optimization algorithm such as scipy.optimize.differential_evolution. However, in this case, where you have a well-defined and analytically tractable objective function, you can employ a semi-analytical approach, taking advantage of the first-order necessary conditions for a minimum.
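For completeness, a minimal sketch of what that global-optimizer route could look like for this 1-D example (the query point p is an arbitrary choice of mine):
import numpy as np
from scipy.optimize import differential_evolution

def dist_sq(x, p):
    x = x[0]  # differential_evolution passes a 1-D parameter array
    return (x - p[0])**2 + (2.0*np.sin(x) - p[1])**2

p = np.array([3.0, -1.0])
res = differential_evolution(dist_sq, bounds=[(0.0, 2*np.pi)], args=(p,))
print(res.x, res.fun)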
In the following, the first function is the distance metric and the second function is (the numerator of) its derivative w.r.t. x, which should be zero if a minimum occurs at some 0 < x < 2*np.pi.
import numpy as np

def d(x, p):
    return np.sum((p - np.array([x, 2*np.sin(x)]))**2)

def diff_d(x, p):
    return -2*p[0] + 2*x - 4*p[1]*np.cos(x) + 4*np.sin(2*x)
Now, given a point p, the only potential minimizers of d(x,p) are the roots of diff_d(x,p) (if any), as well as the boundary points x=0 and x=2*np.pi. It turns out that diff_d may have more than one root. Since the derivative is a continuous function, the pychebfun library can find all of its roots very efficiently, avoiding cumbersome approaches based on the scipy root-finding algorithms.
The following function provides the minimum of d(x, p) for a given point p:
import pychebfun

def min_dist(p):
    f_cheb = pychebfun.Chebfun.from_function(lambda x: diff_d(x, p), domain=(0, 2*np.pi))
    potential_minimizers = np.r_[0, f_cheb.roots(), 2*np.pi]
    return np.min([d(x, p) for x in potential_minimizers])
Here is the result (the original answer included a plot at this point, which is not reproduced here).

scipy integrate over array with variable bounds

I am trying to integrate a function over a list of points and pass the whole array to an integration function in order to vectorize the thing. For starters, calling scipy.integrate.quad is way too slow, since I have something like 10,000,000 points to integrate. scipy.integrate.romberg does the trick much faster, almost instantaneously, while quad is slow since you must loop over it or vectorize it.
My function is quite complicated, but for demonstration purposes, let's say I want to integrate x^2 + y^2 over x from a to b, where y is an array of values at which I want to evaluate the integral. For example
import numpy as np
from scipy.integrate import quad, romberg

def integrand(x, y):
    return x**2 + y**2

y = np.arange(20, 30)   # y is an array, not a scalar

quad(integrand, 0, 10, args=(y,))     # this fails since y is not a scalar
romberg(integrand, 0, 10, args=(y,))  # y works here, giving the integral over
                                      # the entire range of y values at once
But this only work for fixed bounds. Is there a way to do something like
z = np.arange(20,30)
romberg(integrand, 0, z) # Fails since the function doesn't seem to
# support variable bounds
The only way I see is to re-implement the algorithm itself in numpy and use that instead, so I can have variable bounds. Is there any function that supports something like this? There is also romb, where you must supply the values of the integrand directly and a dx interval, but that will be too imprecise for my complicated function (the Marcum Q function; I couldn't find any implementation, so that could be another way to do it).
The best approach when trying to evaluate a special function is to write a function that uses the properties of the function to quickly and accurately evaluate it in all parameter regimes. It is quite unlikely that a single approach will give accurate (or even stable) results for all ranges of parameters. Direct evaluation of an integral, as in this case, will almost certainly break down in many cases.
That being said, the general problem of evaluating an integral over many ranges can be solved by turning the integral into a differential equation and solving that. Roughly, the steps would be as follows (a short sketch follows the steps):
Given an integral I(t), which I will assume is an integral of a function f(x) from 0 to t [this can be generalized to an arbitrary lower limit], write it as the differential equation dI/dt = f(t).
Solve this differential equation using scipy.integrate.odeint() for the initial condition I(0) = 0 over a range of t values starting at 0. This range should contain all limits of interest. How finely it is sampled depends on the function and how accurately it needs to be evaluated.
The result will be the value of the integral from 0 to t for the set of t we input. We can turn this into a "continuous" function using interpolation. For example, using a spline we can define i = scipy.interpolate.InterpolatedUnivariateSpline(t,I).
Given a set of upper and lower limits in arrays b and a, respectively, then we can evaluate them all at once as res=i(b)-i(a).
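Putting those steps together, here is a minimal sketch, using f(x) = x**2 as a stand-in for the real integrand (so the exact answer, b**3/3 - a**3/3, is easy to check):
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import InterpolatedUnivariateSpline

# Stand-in integrand; replace with the real (e.g. Marcum Q) integrand.
def f(x):
    return x**2

# Steps 1-2: solve dI/dt = f(t) with I(0) = 0 on a grid covering all limits.
t = np.linspace(0.0, 30.0, 2001)
I = odeint(lambda y, t: f(t), 0.0, t).ravel()

# Step 3: turn the sampled antiderivative into a "continuous" function.
i = InterpolatedUnivariateSpline(t, I)

# Step 4: evaluate many integrals with variable bounds at once.
a = np.zeros(10)
b = np.arange(20.0, 30.0)
res = i(b) - i(a)   # should be close to b**3/3 for this f
print(res)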
Whether this approach will work in your case will require you to carefully study it over your range of parameters. Also note that the Marcum Q function involves a semi-infinite integral. In principle this is not a problem; just transform the integral into one over a finite range. For example, consider the transformation x -> 1/x. There is no guarantee this approach will be numerically stable for your problem.
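As a concrete example of that substitution (my own sketch, not part of the answer): with x = 1/u we have dx = -du/u**2, so a tail integral from a to infinity becomes a finite-range integral from 0 to 1/a:
import numpy as np
from scipy.integrate import quad

# The integral of 1/x**2 from 1 to infinity equals 1.
f = lambda x: 1.0/x**2
a = 1.0

direct, _ = quad(f, a, np.inf)
transformed, _ = quad(lambda u: f(1.0/u)/u**2, 0.0, 1.0/a)
print(direct, transformed)   # both should be close to 1.0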
