Integration of a Bessel function in Python: subdivision issue

I am trying to integrate the following function over a surface:
intensity = (2*J1(z)/z)^2 with z = A*sqrt((x-mu1)^2 + (y-mu2)^2), where A(L) is a constant with respect to x and y, and J1 is the first-order Bessel function. To do so I use the dblquad function as below:
resultinf = dblquad(lambda r,phi:intensity(mu1,mu2,L,r,phi),0,inf,lambda phi:0,lambda phi:2*pi)
The only important parameters here are r and phi in polar coordinates (the others depend on parameters that are unimportant here), with x = r*cos(phi) and y = r*sin(phi).
But when I try to integrate the function I get this message:
C:\pyzo2013b\Lib\pyzo-packages\scipy\integrate\quadpack.py:289:
UserWarning: The maximum number of subdivisions (50) has been
achieved. If increasing the limit yields no improvement it is
advised to analyze the integrand in order to determine the
difficulties. If the position of a local difficulty can be
determined (singularity, discontinuity) one will probably gain from
splitting up the interval and calling the integrator on the
subranges. Perhaps a special-purpose integrator should be used.
warnings.warn(msg)
And then a completely inaccurate result, followed by:
C:\pyzo2013b\Lib\pyzo-packages\scipy\integrate\quadpack.py:289:
UserWarning: The integral is probably divergent, or slowly convergent.
warnings.warn(msg)
I do understand the meaning of these messages, but I have two questions:
Is there any means for me to avoid this subdivision error other than just dividing my integration intervals into smaller segments? (I'd like to check my other results by comparing them to the norm over an infinite domain, and I won't be able to do so if I can't integrate over an infinite domain properly.) Maybe with a special-purpose integrator? But I don't know what that is or how to use one.
Why do I get a warning about a divergent integral or a singularity, given that 2*J1(z)/z converges to 1 as z goes to zero (just like a cardinal sine)?
Does anybody have an answer?
Here are the relevant lines of code (all the other parameters are defined elsewhere):
from numpy import cos, sin, sqrt, pi, inf
from scipy import special
from scipy.integrate import dblquad

def intensity(mu1, mu2, L, r, phi):  # intensity distribution of a diffracted beam
    x = r*cos(phi)
    y = r*sin(phi)
    X = x - mu1
    Y = y - mu2
    R = sqrt(X**2 + Y**2)
    scaled_R = R*Dt*pi/(lambd*L)  # Dt and lambd are defined elsewhere
    return 4*special.jv(1, scaled_R)**2/scaled_R**2

resultinf = dblquad(lambda r, phi: intensity(mu1, mu2, L, r, phi),
                    0, inf, lambda phi: 0, lambda phi: 2*pi)
print(resultinf)
(I have modified it on the advice of gboffi for the sake of a better understanding of the function.)
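A minimal sketch, with placeholder constants, of one way to raise the subdivision limit: scipy.integrate.nquad passes QUADPACK options such as limit down to the integrator, which dblquad does not expose. Whether this converges cleanly still depends on the actual parameters.
import numpy as np
from scipy import special
from scipy.integrate import nquad

A, MU1, MU2 = 1.0, 0.0, 0.0   # placeholders for Dt*pi/(lambd*L), mu1, mu2

def intensity_polar(r, phi):
    x, y = r*np.cos(phi), r*np.sin(phi)
    z = A*np.hypot(x - MU1, y - MU2)
    if z == 0:
        return 1.0                     # limit of (2*J1(z)/z)**2 as z -> 0
    return (2*special.jv(1, z)/z)**2

# r is the inner variable on (0, inf); phi is the outer variable on (0, 2*pi)
result, error = nquad(intensity_polar, [(0, np.inf), (0, 2*np.pi)],
                      opts=[{'limit': 200}, {'limit': 200}])
print(result, error)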

Related

Curve fit with a list of points

One of my Python scripts gives me results that depend on processing duration, which I display as points on a graph.
Now I would like to plot the curve of the function that best approximates how the results evolve.
After some research, the best tool I found is curve_fit from scipy.optimize.
There is just one problem: curve_fit requires a function as its first parameter (if I understand the documentation's example correctly), but my points on the graph are not the results of a function, so I don't know what to put there.
Can someone help me fix this problem or suggest another way to do it?
Thanks.
When you say "now I would like to plot the curve of the function that best approximates how the results evolve", you must have some sort of curve in mind that is the ideal form for the data. So, what is that function? In curve-fitting, that function is called "the model function" -- the function that models your data.
Think of it this way: you have 50 or so measurement points. You might believe that they are each perfectly accurate and free-of-error. But since you asked about curve-fitting, this is probably not the case. That is, you probably believe there is some noise or errors in the data and that the data can be represented by an idealized function with many fewer than 50 or so parameters (I'd guess 4 or so).
That idealized function that explains your data (and would allow predicting "optimum" values at "duration" points that you did not measure) is the "model function". If you have that, curve-fitting can help: you write that function (which probably depends on a few parameters) to model the data in Python and find the best values for the parameters so that the model matches your data. If you don't have that, what do you mean by "curve-fitting"?
You could draw a spline through the data or otherwise smooth the data, but that gives you little power to predict new values beyond "interpolate/extrapolate the data without worrying about the effect of noise".
It looks like an "exponential approach" type of curve like you get for charging a capacitor - see here.
So, I'd start with this formula:
y = a * ( 1 - n * np.exp(-b*x))
If I plot that with Matplotlib:
#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
# Make 100 samples along x-axis, from 0..10
x = np.linspace(0,10,100)
# Make an exponential approach type of curve
a = 17000
n = 1
b = 3
y = a * ( 1 - n * np.exp(-b*x))
# Plot it
plt.title(f'Plot for a={a}, n={n}, b={b}')
plt.plot(x,y)
plt.show()
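From there, fitting that same model to the measured points with curve_fit might look roughly like this (the data below are synthetic, generated from the model plus noise, standing in for the real duration/result pairs):
#!/usr/bin/env python3
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, n, b):
    # Exponential-approach model: y = a * (1 - n*exp(-b*x))
    return a * (1 - n * np.exp(-b * x))

# Synthetic stand-in data: the curve for a=17000, n=1, b=3 plus noise
rng = np.random.default_rng(0)
xdata = np.linspace(0.2, 8, 30)
ydata = model(xdata, 17000, 1, 3) + rng.normal(0, 100, xdata.size)

# Fit the model; p0 is a rough initial guess for (a, n, b)
popt, pcov = curve_fit(model, xdata, ydata, p0=[15000, 1, 2])
print("best-fit a, n, b:", popt)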

Limits involving the cumulative distribution function of a normal variable

I'm working through some exercises on improper integrals and I've stumbled across an issue I can't resolve. I'm attempting to use the limit() function on the following problem:
lim_{x -> 0} (N(x) - 1/2 - x/sqrt(2*pi)) / x^3
Here N(x) is the cumulative distribution function of the standard normal variable.
The limit() function so far hasn't caused any problems, including problems that require L'Hôpital's rule to be applied. However, I'm struggling to compute the correct answer for this particular problem and can't work out why. The following code yields an incorrect answer:
from sympy import *
x, y = symbols('x y')
init_printing(use_unicode=False)  # pretty-print results using plain ASCII characters
cum_distribution = (1/sqrt(2*pi)*(integrate(exp(-y**2/2), (y, -oo, x))))
func = (cum_distribution -(1/2)-(x/sqrt(2*pi)))/(x**3)
limit(func, x, 0)
If I apply L'Hôpital's rule, I get the correct answer:
l_hopital = diff((cum_distribution -(1/2)-(x/sqrt(2*pi))), x)/diff(x**3, x)
limit(l_hopital, x, 0)
I looked through the limit() function's source code, and my understanding is that L'Hôpital's rule isn't applied. If that is the case, can this problem be solved using the limit() function without applying the rule manually?
At present, a limit involving the function erf (known as the error function, related to normal CDF) can only be evaluated when the argument of erf tends to positive infinity. Limits at other places are either not evaluated, or evaluated incorrectly. (Related PR). This includes the limit
limit(-(sqrt(2)*x - sqrt(pi)*erf(sqrt(2)*x/2))/(2*sqrt(pi)*x**3), x, 0)
which returns unevaluated (though I would not call this incorrect). As a workaround, you can compute the Taylor series of this function with one term (the constant term), which gives the correct value of the limit:
series(func, x, 0, 1).removeO()
returns -sqrt(2)/(12*sqrt(pi)).
As in calculus practice, L'Hôpital's rule is inferior to power-series techniques when it comes to algorithmic computation, and SymPy relies primarily on the latter. The algorithm it uses is described in "On Computing Limits in a Symbolic Manipulation System" by Dominik Gruntz.
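For completeness, here is the workaround as a self-contained snippet, writing N(x) directly in terms of erf and using Rational(1, 2) so the coefficients stay exact:
from sympy import symbols, sqrt, pi, erf, series, Rational

x = symbols('x')
# N(x) - 1/2 - x/sqrt(2*pi), with N written in terms of erf
func = (Rational(1, 2)*erf(sqrt(2)*x/2) - x/sqrt(2*pi))/x**3
print(series(func, x, 0, 1).removeO())   # -sqrt(2)/(12*sqrt(pi))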

How can I minimize a function in Python, without using gradients, and using constraints and ranges?

EDIT: looks like this was already answered before here
It didn't show up in my searches because I didn't know the right nomenclature. I'll leave the question here for now in case someone arrives here because of the constraints.
I'm trying to optimize a function which is flat at almost all points (a "step function", but in a higher dimension).
The objective is to optimize a set of weights that must sum to one and that are the parameters of a function I need to minimize.
The problem is that, as the function is flat at most points, gradient techniques fail because they immediately converge on the starting "guess".
My hypothesis is that this could be solved with (a) annealing or (b) genetic algorithms. SciPy points me to basinhopping. However, I cannot find any way to impose the constraint (the weights must sum to 1) or the ranges (the weights must be between 0 and 1) using scipy.
Actual question: How can I solve a minimization problem without gradients, and also use constraints and ranges for the input variables?
The following is a toy example (evidently this one could be solved using the gradient):
# import minimize
from scipy.optimize import minimize

# define a toy function to minimize
def my_small_func(g):
    x = g[0]
    y = g[1]
    return x**2 - 2*y + 1

# define the starting guess
start_guess = [.5, .5]

# define the acceptable ranges (for [g1, g2] respectively)
my_ranges = ((0, 1), (0, 1))

# define the constraint (they must always sum to 1)
def constraint(g):
    return g[0] + g[1] - 1

cons = {'type': 'eq', 'fun': constraint}

# minimize
minimize(my_small_func, x0=start_guess, method='SLSQP',
         bounds=my_ranges, constraints=cons)
I usually use R so maybe this is a bad answer, but anyway here goes.
You can solve optimization problems like these using a global optimizer. An example of this is differential evolution. The linked method does not use gradients. As for constraints, I usually build them into the objective manually. That looks something like this:
# some dummy function to minimize
def objective_function(a, b):
    if a + b != 1:  # if the constraint is not met
        # return a very high value, indicating a very bad fit
        return 1e90
    else:
        # otherwise do the actual computation of interest
        return fit_value  # placeholder for the real fit value
Then you simply feed this function to the differential evolution package and that should do the trick. Methods like differential evolution are designed in particular to solve very high-dimensional problems. However, the constraint you mentioned can be a problem, as it will likely produce very many invalid parameter configurations. This is not necessarily a problem for the algorithm, but it means you will need to do a lot of tweaking and should expect long run times. Depending on your problem, you could try optimizing the weights/parameters in blocks: optimize the parameters given a set of weights, then optimize the weights given the previous set of parameters, and repeat many times.
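For concreteness, here is a hedged sketch of the same idea using SciPy's own differential_evolution, with a quadratic penalty instead of a hard cutoff so the search is still pulled toward weights that sum to one (the toy objective is the one from the question):
# Sketch: gradient-free, bounded minimization with a soft sum-to-one penalty
from scipy.optimize import differential_evolution

def penalized_objective(g):
    x, y = g
    penalty = 1e6 * (x + y - 1)**2   # pulls the search toward x + y == 1
    return x**2 - 2*y + 1 + penalty  # toy objective from the question

bounds = [(0, 1), (0, 1)]
result = differential_evolution(penalized_objective, bounds, seed=0)
print(result.x, result.fun)          # should land near x=0, y=1
Newer SciPy releases also accept a constraints argument for differential_evolution, which avoids the hand-rolled penalty.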
Hope this helps :)

Fitting a distribution to data: how to penalize "bad" parameter estimates?

I'm using scipy's least-squares optimization to fit an exponentially modified Gaussian distribution to a set of reaction-time measurements. In general it works well, but sometimes the optimization goes off the rails and chooses a crazy value for a parameter -- the resulting plot clearly doesn't fit the data very well. In general, the problems seem to arise from floating-point precision errors -- we head off into 0, inf, or nan-land.
I'm thinking of doing two things:
Using the parameters to simultaneously fit a CDF and PDF to the data; I have formulas for both. (I'm using a kernel density estimate to approximate the PDF from the data.)
Somehow taking into account the distance from the initial parameter estimates (generated by the method of moments approach on the wikipedia page). Those estimates are far from perfect, but are pretty good and seem to steer clear of "exploding floating point" problems.
Combining the PDF and CDF fits sounds pretty straightforward; the scales of the errors will even be generally the same. But bringing the initial parameter estimates into the fit: I'm not quite sure it's even a good idea -- but if it is:
What would I do about the difference in scale? Should I normalize the parameter "error" to a percent error?
Is there a reasonable way to decide on a relative weight between the data estimation error and parameter "error"?
Are these even the right questions to be asking? Are there generally-regarded "correct" answers, or is "try some stuff until you find something that seems to work" a good approach?
One example dataset
As requested, here's a dataset for which this process isn't working very well. I know there are only a few samples and that the data don't fit the distribution well; I'm still hoping against hope that I can get a "reasonable-looking" result from optimization.
array([ 450., 560., 692., 730., 758., 723., 486., 596., 716.,
695., 757., 522., 535., 419., 478., 666., 637., 569.,
859., 883., 551., 652., 378., 801., 718., 479., 544.])
MLE Update
I had a bunch of problems getting my MLE estimate to converge to a "reasonable" value, until I discovered this: If X contains at least one nan, np.sum(X) == nan when X is a numpy array but not when X is a pandas Series. So the sum of the log-likelihood was doing stupid things when the parameters started to go out of bounds.
Added a np.asarray() call and everything is great!
This should have been a comment, but I ran out of space.
I think a maximum likelihood fit is probably the most appropriate approach here. The ML method is already implemented for many distributions in scipy.stats. For example, you can find the MLE of the normal distribution by calling scipy.stats.norm.fit, and find the MLE of the exponential distribution in a similar way. Combining these two resulting MLE parameters should give you a pretty good starting parameter for the Ex-Gaussian ML fit. In fact, I would imagine most of your data is quite nicely normally distributed. If that is the case, the ML parameter estimates for the normal distribution alone should give you a pretty good starting parameter.
Since the Ex-Gaussian only has 3 parameters, I don't think an ML fit will be hard at all. If you could provide a dataset for which your current method doesn't work well, it would be easier to show a real example.
Alright, here you go:
>>> import scipy.special as sse
>>> import scipy.stats as sss
>>> import scipy.optimize as so
>>> from numpy import *
>>> def eg_pdf(p, x):  # defines the PDF
...     m = p[0]
...     s = p[1]
...     l = p[2]
...     return 0.5*l*exp(0.5*l*(2*m+l*s*s-2*x))*sse.erfc((m+l*s*s-x)/(sqrt(2)*s))
>>> xo = array([ 450., 560., 692., 730., 758., 723., 486., 596., 716.,
...              695., 757., 522., 535., 419., 478., 666., 637., 569.,
...              859., 883., 551., 652., 378., 801., 718., 479., 544.])
>>> sss.norm.fit(xo) #get the starting parameter vector form the normal MLE
(624.22222222222217, 132.23977474531389)
>>> def llh(p, f, x):  # defines the negative log-likelihood function
...     return -sum(log(f(p, x)))
>>> so.fmin(llh, array([624.22222222222217, 132.23977474531389, 1e-6]), (eg_pdf, xo)) #yeah, the data is not good
Warning: Maximum number of function evaluations has been exceeded.
array([ 6.14003407e+02, 1.31843250e+02, 9.79425845e-02])
>>> przt=so.fmin(llh, array([624.22222222222217, 132.23977474531389, 1e-6]), (eg_pdf, xo), maxfun=1000) #so, we increase the number of function call uplimit
Optimization terminated successfully.
Current function value: 170.195924
Iterations: 376
Function evaluations: 681
>>> llh(array([624.22222222222217, 132.23977474531389, 1e-6]), eg_pdf, xo)
400.02921290185645
>>> llh(przt, eg_pdf, xo) #quite an improvement over the initial guess
170.19592431051217
>>> przt
array([ 6.14007039e+02, 1.31844654e+02, 9.78934519e-02])
The optimizer used here (fmin, or the Nelder-Mead simplex algorithm) does not use any gradient information and usually converges much more slowly than optimizers that do. It appears that the derivative of the negative log-likelihood function of the exponential Gaussian can be written in closed form fairly easily. If so, optimizers that utilize the gradient/derivative would be a better and more efficient choice (such as fmin_bfgs).
The other thing to consider is parameter constraints. By definition, sigma and lambda have to be positive for the exponential Gaussian. You can use a constrained optimizer (such as fmin_l_bfgs_b). Alternatively, you can optimize:
>>> def eg_pdf2(p, x):  # defines the PDF
...     m = p[0]
...     s = exp(p[1])
...     l = exp(p[2])
...     return 0.5*l*exp(0.5*l*(2*m+l*s*s-2*x))*sse.erfc((m+l*s*s-x)/(sqrt(2)*s))
Due to the functional invariance property of the MLE, the MLE of this function should be the same as that of the original eg_pdf. There are other transformations you can use, besides exp(), to map (-inf, +inf) to (0, +inf).
And you can also consider http://en.wikipedia.org/wiki/Lagrange_multiplier.
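One more side note, assuming a SciPy version recent enough to include it: scipy.stats.exponnorm is the exponentially modified Gaussian, and its fit method performs the ML estimation in a single call. In SciPy's parameterization the shape K corresponds to 1/(sigma*lambda), with loc and scale playing the roles of mu and sigma:
import numpy as np
from scipy import stats

xo = np.array([450., 560., 692., 730., 758., 723., 486., 596., 716.,
               695., 757., 522., 535., 419., 478., 666., 637., 569.,
               859., 883., 551., 652., 378., 801., 718., 479., 544.])

K, loc, scale = stats.exponnorm.fit(xo)   # MLE of the Ex-Gaussian
print(K, loc, scale)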

Is it possible to invert an arbitrary lambda in Python?

I have been playing around with Python and math lately, and I ran into something I haven't yet been able to figure out. Namely, is it possible, given an arbitrary lambda, to return the inverse of that lambda for mathematical operations? That is, invertLambda such that invertLambda(lambda x: (x+2))(2) = 0. The fact that lambdas are restricted to expressions gives me hope, but so far I have not been able to make it work. I understand that any result would have problems with functions that lose information, but I am willing to restrict users and myself to lossless functions if I have to.
Of course not: if the lambda is not an injective function, you cannot invert it. Example: you cannot invert a lambda mapping x to x*x, since the sign of the original x is lost.
Leaving injectivity aside, there are functions which are computationally very hard to invert. Consider, for example, restoring the original value from its md5 hash. (For a lambda calculating an md5 hash, the inverse function would have to break md5 in the cryptographic sense!)
Edit:
indeed, we can theoretically make lambdas invertible if we restrict the expressions that can be used in them. For example, if the lambda is a linear function of one argument, we can easily invert it. If it's a polynomial of degree > 4, we have a problem finding an algebraically exact solution.
Of course, we could refrain from an exact solution and just invert the function numerically. This is possible: any method of numerically solving the equation lambda(x) = value will do (the simplest being binary search).
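For instance, inverting a monotonically increasing lambda on a known interval by binary search might look like this (the interval and tolerance below are arbitrary choices):
def invert_by_bisection(f, lo, hi, tol=1e-9):
    # assumes f is monotonically increasing on [lo, hi]
    def inverse(value):
        a, b = lo, hi
        while b - a > tol:
            mid = (a + b) / 2
            if f(mid) < value:
                a = mid
            else:
                b = mid
        return (a + b) / 2
    return inverse

print(invert_by_bisection(lambda x: x + 2, -100.0, 100.0)(2))  # ~0.0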
I am a bit late, but I just published a python package that does this precisely. You may want to borrow some ideas from it:
https://pypi.python.org/pypi/pynverse
It essentially follows this strategy:
Figure out if the function is increasing or decreasing. For this, two reference points ref1 and ref2 are needed:
In the case of a finite interval, the reference points are 1/4 and 3/4 of the way through the interval.
In an infinite interval, any two values really work.
If f(ref1) < f(ref2), the function is increasing; otherwise it is decreasing.
Figure out the image of the function in the interval.
If values are provided, those are used.
In a closed interval, just calculate f(a) and f(b), where a and b are the ends of the interval.
In an open interval, try to calculate f(a) and f(b); if this works, those are used, otherwise the image is assumed to be (-Inf, Inf).
Build a bounded function with the following conditions:
bounded_f(x):
    return -Inf if x is below the interval and f is increasing
    return +Inf if x is below the interval and f is decreasing
    return +Inf if x is above the interval and f is increasing
    return -Inf if x is above the interval and f is decreasing
    return f(x) otherwise
If the required number y0 for the inverse is outside the image, raise an exception.
Find roots of bounded_f(x) - y0 by minimizing (bounded_f(x) - y0)**2 using the Brent method, making sure that the minimization starts at a point inside the original interval by setting ref1, ref2 as brackets. As soon as it goes outside the allowed interval, bounded_f returns infinity, forcing the algorithm back to searching inside the interval.
Check that the solutions are accurate and they meet f(x0)=y0 to some desired precision, raising a warning otherwise.
Of course, as Vlad pointed out, the function has to be invertible for the inverse to exist, and also continuous in the domain for this to work.
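A much-simplified sketch of the same idea, using root-finding with scipy.optimize.brentq on a user-supplied bracket instead of the bounded squared-error minimization pynverse performs:
from scipy.optimize import brentq

def invert_lambda(f, lo, hi):
    # f must be continuous and monotonic on [lo, hi]
    def inverse(y0):
        return brentq(lambda x: f(x) - y0, lo, hi)
    return inverse

inv = invert_lambda(lambda x: x + 2, -100.0, 100.0)
print(inv(2))   # ~0.0, i.e. invertLambda(lambda x: (x+2))(2) == 0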
