Understanding numpy.random.lognormal - python

I'm translating Matlab code (written by someone else) to Python.
In one section of the Matlab code, a variable X_new is set to a value drawn from a log-normal distribution as follows:
% log normal distribution
X_new = exp(normrnd(log(X_old), sigma));
That is, a random value is drawn from a normal distribution centered at log(X_old), and X_new is set to e raised to this value.
The direct translation of this code to Python is as follows:
import numpy as np
X_new = np.exp(np.random.normal(np.log(X_old), sigma))
But numpy includes a log-normal distribution which can be sampled directly.
My question is, is the line of code that follows equivalent to the lines of code above?
X_new = np.random.lognormal(np.log(X_old), sigma)

I think I'm going to have to answer my own question here.
From the documentation for np.random.lognormal, we have
A variable x has a log-normal distribution if log(x) is normally distributed.
Let's think about X_new from the Matlab code as a particular instance of a random variable x. The question is, is log(x) normally distributed here? Well, log(X_new) is just normrnd(log(X_old), sigma). So the answer is yes.
Now let's move to the call to np.random.lognormal in the second version of the Python code. X_new is again a particular instance of a random variable we can call x. Is log(x) normally distributed here? Yes, it must be, else numpy would not call this function lognormal. The mean of the underlying normal distribution is log(X_old) which is the same as the mean of the normal distribution in the Matlab code.
Hence, all implementations of the log-normal distribution in the question are equivalent (ignoring any very low-level implementation differences between the languages).
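As a quick empirical check (not part of the original post; X_old and sigma here are arbitrary example values), both constructions produce samples whose logarithms have the same mean and spread:
import numpy as np

np.random.seed(0)
X_old, sigma, n = 3.7, 0.25, 100000            # arbitrary example values

a = np.exp(np.random.normal(np.log(X_old), sigma, size=n))   # exp of a normal draw
b = np.random.lognormal(np.log(X_old), sigma, size=n)        # direct lognormal draw

print(np.log(a).mean(), np.log(a).std())   # both ~ log(3.7) = 1.31 and ~ 0.25
print(np.log(b).mean(), np.log(b).std())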

Related

Limits involving the cumulative distribution function of a normal variable

I'm working through some exercises on improper integrals and I've stumbled across an issue I can't resolve. I'm attempting to use the limit() function on the following problem:
limit as x -> 0 of (N(x) - 1/2 - x/sqrt(2*pi)) / x**3
Here N(x) is the cumulative distribution function of the standard normal variable.
The limit() function so far hasn't caused any problems, including problems which require L'Hôpital's rule to be applied. However, I'm struggling to compute the correct answer for this particular problem and can't work out why. The following code yields an incorrect answer:
from sympy import *
x, y = symbols('x y')
init_printing(use_unicode=False) # do not use unicode characters when printing
cum_distribution = (1/sqrt(2*pi)*(integrate(exp(-y**2/2), (y, -oo, x))))
func = (cum_distribution -(1/2)-(x/sqrt(2*pi)))/(x**3)
limit(func, x, 0)
If I apply L'Hôpital's rule first, I get the correct answer:
l_hopital = diff((cum_distribution -(1/2)-(x/sqrt(2*pi))), x)/diff(x**3, x)
limit(l_hopital, x, 0)
I looked through the limit() function source code, and my understanding is that L'Hôpital's rule isn't applied. In that case, can this problem be solved using the limit() function without applying the rule manually?
At present, a limit involving the function erf (the error function, which is closely related to the normal CDF) can only be evaluated when the argument of erf tends to positive infinity. Limits at other points are either returned unevaluated or evaluated incorrectly. (Related PR). This includes the limit
limit(-(sqrt(2)*x - sqrt(pi)*erf(sqrt(2)*x/2))/(2*sqrt(pi)*x**3), x, 0)
which returns unevaluated (though I would not call this incorrect). As a workaround, you can compute the Taylor series of this function with one term (the constant term), which gives the correct value of the limit:
series(func, x, 0, 1).removeO()
returns -sqrt(2)/(12*sqrt(pi)).
As in calculus practice, L'Hopital's rule is inferior to power series techniques when it comes to algorithmic computations, and SymPy relies primarily on the latter. The algorithm it uses is devised and explained in On Computing Limits in a Symbolic Manipulation System by Dominik Gruntz.
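For completeness, here is a self-contained version of the workaround (a sketch; Rational(1, 2) replaces the float 1/2 so the result stays exact):
from sympy import symbols, sqrt, exp, pi, oo, integrate, series, Rational

x, y = symbols('x y')
cum_distribution = 1/sqrt(2*pi) * integrate(exp(-y**2/2), (y, -oo, x))
func = (cum_distribution - Rational(1, 2) - x/sqrt(2*pi)) / x**3

print(series(func, x, 0, 1).removeO())   # -sqrt(2)/(12*sqrt(pi)), i.e. -1/(6*sqrt(2*pi))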

RVS Argument of Scipy Kstest for non-standard (Landau) distribution

I am working with Landau distributions (https://en.wikipedia.org/wiki/Landau_distribution) and am trying to use scipy.stats.kstest to get a goodness of fit statistic.
Since the Landau distribution is a non-standard distribution for scipy, I am using an import from ROOT (the CERN C++ package) as a custom, callable CDF as per the stats.kstest documentation. I also have a 1-D array of my measured/empirical data.
As it stands, I have the following definition of my kstest function:
def get_ks_test(fit_info):
    coeffs, erfc_bool = fit_info[0], fit_info[4]
    (mu, sigma, A_l, A_e) = coeffs
    def Lcdf(x):
        return [A_l*Math.landau_cdf(i, sigma, mu) for i in x]
    def integrand(x):
        return A_e*erfc((x - mu)/sigma)
    def total_cdf(x):
        return Lcdf(x) + quad(integrand, 0, x)
    scatter_data = pdf_fit_return[3]
    return kstest(scatter_data, total_cdf)
Here Lcdf and integrand are combined to form my callable CDF function. The problem arises with the first argument of kstest, the "rvs" array scatter_data. As I understand it, the "rvs" argument is supposed to be an array of observations to match with the distribution in question. But in the source code, "rvs" is sent as an argument to "cdf". This makes no sense to me; shouldn't the CDF be a function of x, not of the data it's supposed to match?
When I pass my scatter_data as the first argument of kstest, it tries to plug those values into total_cdf when that function is supposed to take an argument in x. I don't know if I am failing to understand the usage of rvs/cdf here or what, but the way the kstest function is designed seems backwards to me.
I would like to be able to give kstest my data and my intended distribution (a callable CDF), but all the examples I've seen online involve built-in or standard distributions (normal, beta, etc.).
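For what it's worth, here is a minimal illustration (using a built-in normal rather than the Landau case) of why the data array ends up being passed to the callable cdf: kstest evaluates the model CDF at the observed points in order to compare it against the empirical CDF, so the callable must accept an array of observations:
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.RandomState(0)
data = rng.normal(loc=5.0, scale=2.0, size=200)

def model_cdf(x):
    # x is the array of observations; return the model CDF evaluated at each one
    return norm.cdf(x, loc=5.0, scale=2.0)

print(kstest(data, model_cdf))   # small statistic, large p-value: the data match the model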

Integration of a bessel function in python: subdivision issue

I am trying to integrate on a surface the following equation:
intensity = (2*J1(z)/z)^2 with z = A*sqrt((x-mu1)^2+(y-mu2)^2), where A (which depends on L) is constant with respect to x and y, and J1 is the first-order Bessel function. To do so I use the dblquad function as below:
resultinf = dblquad(lambda r,phi:intensity(mu1,mu2,L,r,phi),0,inf,lambda phi:0,lambda phi:2*pi)
The only important parameters here are r and phi in polar coordinates (the others depend on further parameters that are unimportant here), with x = r*cos(phi) and y = r*sin(phi).
But when I try to integrate the function I get this message:
C:\pyzo2013b\Lib\pyzo-packages\scipy\integrate\quadpack.py:289:
UserWarning: The maximum number of subdivisions (50) has been
achieved. If increasing the limit yields no improvement it is
advised to analyze the integrand in order to determine the
difficulties. If the position of a local difficulty can be
determined (singularity, discontinuity) one will probably gain from
splitting up the interval and calling the integrator on the
subranges. Perhaps a special-purpose integrator should be used.
warnings.warn(msg)
And then a completely inaccurate result, followed by:
C:\pyzo2013b\Lib\pyzo-packages\scipy\integrate\quadpack.py:289:
UserWarning: The integral is probably divergent, or slowly convergent.
warnings.warn(msg)
I do understand the meaning of these messages, but I have two questions:
Is there any way for me to avoid this subdivision error other than just dividing my integration intervals into smaller segments? (I'd like to check my other results by comparing them to the norm over an infinite domain, and I won't be able to do so if I can't integrate over an infinite domain properly.) Maybe with a special-purpose integrator? But I don't know what that is or how to use one (see the sketch after the code below for one way to pass a larger subdivision limit).
Why do I get a warning about a divergent integral or a singularity, given that 2*J1(z)/z converges to 1 as z approaches zero (just like a sinc function)?
Does anybody have an answer?
Here are the complete useful lines of code (all the other parameters are defined elsewhere):
from numpy import cos, sin, sqrt, pi, inf
from scipy import special
from scipy.integrate import dblquad

def intensity(mu1,mu2,L,r,phi):  # distribution area for a diffracted beam
    x = r*cos(phi)
    y = r*sin(phi)
    X = x - mu1
    Y = y - mu2
    R = sqrt(X**2 + Y**2)
    scaled_R = R*Dt*pi/(lambd*L)
    return 4*(special.jv(1, scaled_R)**2/scaled_R**2)

resultinf = dblquad(lambda r,phi: intensity(mu1,mu2,L,r,phi), 0, inf, lambda phi: 0, lambda phi: 2*pi)
print(resultinf)
(I have modified it on the advice of gboffi for the sake of a better understanding of the function.)
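Not from the original post, but as a sketch of the "special-purpose integrator" route: scipy.integrate.nquad accepts per-variable quad options, including a higher subdivision limit, and the removable singularity at z = 0 can be handled explicitly. The constant A and the beam centre below are placeholders rather than the original optics parameters, and raising the limit does not guarantee the oscillatory tail converges quickly:
import numpy as np
from scipy import special
from scipy.integrate import nquad

A = 1.0               # placeholder for Dt*pi/(lambd*L); substitute the real optics constants
mu1, mu2 = 0.0, 0.0   # placeholder beam centre

def intensity_polar(r, phi):
    x, y = r*np.cos(phi), r*np.sin(phi)
    z = A*np.sqrt((x - mu1)**2 + (y - mu2)**2)
    if z < 1e-12:
        return 1.0    # removable singularity: (2*J1(z)/z)**2 -> 1 as z -> 0
    return (2.0*special.jv(1, z)/z)**2

# innermost variable (r) first, then phi; each dict raises that variable's subdivision limit
result, error = nquad(intensity_polar, [(0, np.inf), (0, 2*np.pi)],
                      opts=[{'limit': 200}, {'limit': 200}])
print(result, error)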

Fitting a distribution to data: how to penalize "bad" parameter estimates?

I'm using scipy's least-squares optimization to fit an exponentially modified Gaussian distribution to a set of reaction time measurements. In general, it works well, but sometimes the optimization goes off the rails and chooses a crazy value for a parameter -- the resulting plot clearly doesn't fit the data very well. It looks like the problems arise from floating-point precision errors -- we head off into 0, inf, or nan-land.
I'm thinking of doing two things:
Using the parameters to simultaneously fit a CDF and PDF to the data; I have formulas for both. (I'm using a kernel density estimate to approximate the PDF from the data.)
Somehow taking into account the distance from the initial parameter estimates (generated by the method of moments approach on the wikipedia page). Those estimates are far from perfect, but are pretty good and seem to steer clear of "exploding floating point" problems.
Combining the PDF and CDF fits sounds pretty straightforward; the scales of the error will even be generally the same. But getting the initial parameter fits in there: I'm not quite sure if it's even a good idea -- but if it is:
What would I do about the difference in scale? Should I normalize the parameter "error" to a percent error?
Is there a reasonable way to decide on a relative weight between the data estimation error and parameter "error"?
Are these even the right questions to be asking? Are there generally-regarded "correct" answers, or is "try some stuff until you find something that seems to work" a good approach? (One way to set up such a combined residual is sketched after the example dataset below.)
One example dataset
As requested, here's a dataset for which this process isn't working very well. I know there are only a few samples and that the data don't fit the distribution well; I'm still hoping against hope that I can get a "reasonable-looking" result from optimization.
array([ 450., 560., 692., 730., 758., 723., 486., 596., 716.,
695., 757., 522., 535., 419., 478., 666., 637., 569.,
859., 883., 551., 652., 378., 801., 718., 479., 544.])
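Picking up the combined PDF/CDF fit with a parameter penalty described in the questions above, here is a rough sketch. The weight w_penalty and the normalisation by the starting guess p0 are assumptions to be tuned, and scipy.stats.exponnorm's (K, loc, scale) parameterization is used instead of (mu, sigma, lambda):
import numpy as np
from scipy.stats import exponnorm, gaussian_kde
from scipy.optimize import least_squares

data = np.array([450., 560., 692., 730., 758., 723., 486., 596., 716.,
                 695., 757., 522., 535., 419., 478., 666., 637., 569.,
                 859., 883., 551., 652., 378., 801., 718., 479., 544.])

p0 = np.array([1.0, 600.0, 130.0])            # rough starting guess: (K, loc, scale)
grid = np.linspace(data.min(), data.max(), 100)
kde = gaussian_kde(data)(grid)                # smoothed empirical PDF
xs = np.sort(data)
ecdf = np.arange(1, len(xs) + 1) / len(xs)    # empirical CDF at the sorted data
w_penalty = 0.1                               # relative weight of the parameter penalty

def residuals(p):
    pdf_err = exponnorm.pdf(grid, p[0], loc=p[1], scale=p[2]) - kde
    cdf_err = exponnorm.cdf(xs, p[0], loc=p[1], scale=p[2]) - ecdf
    penalty = w_penalty * (p - p0) / p0       # percent-style deviation from the start
    return np.concatenate([pdf_err, cdf_err, penalty])

fit = least_squares(residuals, p0, bounds=([1e-6, -np.inf, 1e-6], np.inf))
print(fit.x)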
MLE Update
I had a bunch of problems getting my MLE estimate to converge to a "reasonable" value, until I discovered this: if X contains at least one nan, np.sum(X) is nan when X is a numpy array, but not when X is a pandas Series (which skips NaN by default). So the sum of the log-likelihood was doing stupid things when the parameters started to go out of bounds.
Added a np.asarray() call and everything is great!
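The behaviour in question, in a couple of lines (pandas skips NaN by default; NumPy does not):
import numpy as np
import pandas as pd

x = np.array([1.0, 2.0, np.nan])
print(np.sum(x))             # nan
print(pd.Series(x).sum())    # 3.0 -- NaN is skipped by default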
This should have been a comment, but I ran out of space.
I think a Maximum Likelihood fit is probably the most appropriate approach here. The ML method is already implemented for many distributions in scipy.stats. For example, you can find the MLE of the normal distribution by calling scipy.stats.norm.fit, and find the MLE of the exponential distribution in a similar way. Combining these two resulting MLE parameters should give you a pretty good starting parameter for the Ex-Gaussian ML fit. In fact I would imagine most of your data is quite nicely normally distributed. If that is the case, the ML parameter estimates for the normal distribution alone should give you a pretty good starting parameter.
Since the Ex-Gaussian only has 3 parameters, I don't think an ML fit will be hard at all. If you could provide a dataset for which your current method doesn't work well, it would be easier to show a real example.
Alright, here you go:
>>> import scipy.special as sse
>>> import scipy.stats as sss
>>> import scipy.optimize as so
>>> from numpy import *
>>> def eg_pdf(p, x): #defines the PDF
        m = p[0]
        s = p[1]
        l = p[2]
        return 0.5*l*exp(0.5*l*(2*m+l*s*s-2*x))*sse.erfc((m+l*s*s-x)/(sqrt(2)*s))
>>> xo=array([ 450., 560., 692., 730., 758., 723., 486., 596., 716.,
695., 757., 522., 535., 419., 478., 666., 637., 569.,
859., 883., 551., 652., 378., 801., 718., 479., 544.])
>>> sss.norm.fit(xo) #get the starting parameter vector from the normal MLE
(624.22222222222217, 132.23977474531389)
>>> def llh(p, f, x): #defines the negative log-likelihood function
        return -sum(log(f(p, x)))
>>> so.fmin(llh, array([624.22222222222217, 132.23977474531389, 1e-6]), (eg_pdf, xo)) #yeah, the data is not good
Warning: Maximum number of function evaluations has been exceeded.
array([ 6.14003407e+02, 1.31843250e+02, 9.79425845e-02])
>>> przt=so.fmin(llh, array([624.22222222222217, 132.23977474531389, 1e-6]), (eg_pdf, xo), maxfun=1000) #so, we increase the upper limit on the number of function calls
Optimization terminated successfully.
Current function value: 170.195924
Iterations: 376
Function evaluations: 681
>>> llh(array([624.22222222222217, 132.23977474531389, 1e-6]), eg_pdf, xo)
400.02921290185645
>>> llh(przt, eg_pdf, xo) #quite an improvement over the initial guess
170.19592431051217
>>> przt
array([ 6.14007039e+02, 1.31844654e+02, 9.78934519e-02])
The optimizer used here (fmin, or the Nelder-Mead simplex algorithm) does not use any gradient information and usually works much more slowly than optimizers that do. It appears that the derivative of the negative log-likelihood function of the exponentially modified Gaussian can be written in closed form fairly easily. If so, optimizers that use the gradient/derivative will be a better and more efficient choice (such as fmin_bfgs).
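For example, the Nelder-Mead result above can be polished with fmin_bfgs, which estimates the gradient numerically when none is supplied (a sketch, reusing eg_pdf, llh, xo and przt from above):
>>> przt_bfgs = so.fmin_bfgs(llh, przt, args=(eg_pdf, xo))
>>> llh(przt_bfgs, eg_pdf, xo)  # compare with the fmin value of 170.196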
The other thing to consider is parameter constraints. By definition, sigma and lambda have to be positive for the exponentially modified Gaussian. You can use a constrained optimizer (such as fmin_l_bfgs_b). Alternatively, you can optimize:
>>> def eg_pdf2(p, x): #defines the PDF
        m = p[0]
        s = exp(p[1])
        l = exp(p[2])
        return 0.5*l*exp(0.5*l*(2*m+l*s*s-2*x))*sse.erfc((m+l*s*s-x)/(sqrt(2)*s))
Due to the functional invariance property of MLE, the MLE of this function should be the same as that of the original eg_pdf. There are other transformations you can use, besides exp(), to map (-inf, +inf) to (0, +inf).
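It is used the same way as before, except the starting vector holds log(sigma) and log(lambda) and the fitted values are mapped back with exp() (a sketch, reusing the objects defined above):
>>> p2 = so.fmin(llh, array([przt[0], log(przt[1]), log(przt[2])]), (eg_pdf2, xo))
>>> mu_hat, sigma_hat, lambda_hat = p2[0], exp(p2[1]), exp(p2[2])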
And you can also consider http://en.wikipedia.org/wiki/Lagrange_multiplier.

Best way to write a Python function that integrates a gaussian?

In attempting to use scipy's quad method to integrate a Gaussian (let's say there's a Gaussian method named gauss), I was having problems passing needed parameters to gauss and leaving quad to do the integration over the correct variable. Does anyone have a good example of how to use quad with a multidimensional function?
But this led me to a broader question about the best way to integrate a Gaussian in general. I didn't find a Gaussian integral in scipy (to my surprise). My plan was to write a simple Gaussian function and pass it to quad (or maybe now a fixed-width integrator). What would you do?
Edit: Fixed-width meaning something like trapz that uses a fixed dx to calculate areas under a curve.
What I've come to so far is a method make_gauss that returns a lambda function that can then go into quad. This way I can make a normal function with the average and variance I need before integrating.
def make_gauss(N, sigma, mu):
    return (lambda x: N/(sigma * (2*numpy.pi)**.5) *
                      numpy.e ** (-(x-mu)**2/(2 * sigma**2)))

quad(make_gauss(N=10, sigma=2, mu=0), -inf, inf)
When I tried passing a general gaussian function (that needs to be called with x, N, mu, and sigma) and filling in some of the values using quad like
quad(gen_gauss, -inf, inf, (10,2,0))
the parameters 10, 2, and 0 did NOT necessarily match N=10, sigma=2, mu=0, which prompted the more extended definition.
The erf(z) in scipy.special would require me to define exactly what t is initially, but it's nice to know it is there.
Okay, you appear to be pretty confused about several things. Let's start at the beginning: you mentioned a "multidimensional function", but then go on to discuss the usual one-variable Gaussian curve. This is not a multidimensional function: when you integrate it, you only integrate one variable (x). The distinction is important to make, because there is a monster called a "multivariate Gaussian distribution" which is a true multidimensional function and, if integrated, requires integrating over two or more variables (which uses the expensive Monte Carlo technique I mentioned before). But you seem to just be talking about the regular one-variable Gaussian, which is much easier to work with, integrate, and all that.
The one-variable Gaussian distribution has two parameters, sigma and mu, and is a function of a single variable we'll denote x. You also appear to be carrying around a normalization parameter n (which is useful in a couple of applications). Normalization parameters are usually not included in calculations, since you can just tack them back on at the end (remember, integration is a linear operator: int(n*f(x), x) = n*int(f(x), x) ). But we can carry it around if you like; the notation I like for a normal distribution is then
N(x | mu, sigma, n) := (n/(sigma*sqrt(2*pi))) * exp((-(x-mu)^2)/(2*sigma^2))
(read that as "the normal distribution of x given sigma, mu, and n is given by...") So far, so good; this matches the function you've got. Notice that the only true variable here is x: the other three parameters are fixed for any particular Gaussian.
Now for a mathematical fact: it is provably true that all Gaussian curves have the same shape; they're just shifted and stretched versions of one another. So we can work with N(x|0,1,1), called the "standard normal distribution", and just translate our results back to the general Gaussian curve. So if you have the integral of N(x|0,1,1), you can trivially calculate the integral of any Gaussian. This integral appears so frequently that it has a special name: the error function erf. Because of some old conventions, it's not exactly erf; there are a couple of additive and multiplicative factors also being carried around.
If Phi(z) = integral(N(x|0,1,1), -inf, z); that is, Phi(z) is the integral of the standard normal distribution from minus infinity up to z, then it's true by the definition of the error function that
Phi(z) = 0.5 + 0.5 * erf(z / sqrt(2)).
Likewise, if Phi(z | mu, sigma, n) = integral( N(x | mu, sigma, n), -inf, z); that is, Phi(z | mu, sigma, n) is the integral of the normal distribution given parameters mu, sigma, and n from minus infinity up to z, then it's true by the definition of the error function that
Phi(z | mu, sigma, n) = (n/2) * (1 + erf((z - mu) / (sigma * sqrt(2)))).
Take a look at the Wikipedia article on the normal CDF if you want more detail or a proof of this fact.
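A quick numerical check of the relationship above (with n = 1, comparing against scipy's built-in normal CDF):
import math
from scipy.special import erf
from scipy.stats import norm

mu, sigma, z = 2.0, 3.0, 4.5
phi_erf = 0.5 * (1 + erf((z - mu) / (sigma * math.sqrt(2))))
print(phi_erf, norm.cdf(z, loc=mu, scale=sigma))   # the two values agree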
Okay, that should be enough background explanation. Back to your (edited) post. You say "The erf(z) in scipy.special would require me to define exactly what t is initially". I have no idea what you mean by this; where does t (time?) enter into this at all? Hopefully the explanation above has demystified the error function a bit and it's clearer now as to why the error function is the right function for the job.
Your Python code is OK, but I would prefer a closure over a lambda:
def make_gauss(N, sigma, mu):
    k = N / (sigma * math.sqrt(2*math.pi))
    s = -1.0 / (2 * sigma * sigma)
    def f(x):
        return k * math.exp(s * (x - mu)*(x - mu))
    return f
Using a closure enables precomputation of constants k and s, so the returned function will need to do less work each time it's called (which can be important if you're integrating it, which means it'll be called many times). Also, I have avoided any use of the exponentiation operator **, which is slower than just writing the squaring out, and hoisted the divide out of the inner loop and replaced it with a multiply. I haven't looked at all at their implementation in Python, but from my last time tuning an inner loop for pure speed using raw x87 assembly, I seem to remember that adds, subtracts, or multiplies take about 4 CPU cycles each, divides about 36, and exponentiation about 200. That was a couple years ago, so take those numbers with a grain of salt; still, it illustrates their relative complexity. As well, calculating exp(x) the brute-force way is a very bad idea; there are tricks you can take when writing a good implementation of exp(x) that make it significantly faster and more accurate than a general a**b style exponentiation.
I've never used the numpy version of the constants pi and e; I've always stuck with the plain old math module's versions. I don't know why you might prefer either one.
I'm not sure what you're going for with the quad() call. quad(gen_gauss, -inf, inf, (10,2,0)) ought to integrate a renormalized Gaussian from minus infinity to plus infinity, and should always spit out 10 (your normalization factor), since the Gaussian integrates to 1 over the real line. Any answer far from 10 (I wouldn't expect exactly 10 since quad() is only an approximation, after all) means something is screwed up somewhere... hard to say what's screwed up without knowing the actual return value and possibly the inner workings of quad().
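For reference, the args tuple is matched positionally to the parameters that follow x, so a gen_gauss written like this (a hypothetical definition -- the original one is not shown in the question) does integrate to its normalization factor N:
import numpy as np
from scipy.integrate import quad

def gen_gauss(x, N, sigma, mu):
    # hypothetical signature: x first, then the parameters in the same order as the args tuple
    return N/(sigma*np.sqrt(2*np.pi)) * np.exp(-(x - mu)**2/(2*sigma**2))

print(quad(gen_gauss, -np.inf, np.inf, args=(10, 2, 0)))   # roughly (10.0, tiny error estimate)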
Hopefully that has demystified some of the confusion, and explained why the error function is the right answer to your problem, as well as how to do it all yourself if you're curious. If any of my explanation wasn't clear, I suggest taking a quick look at Wikipedia first; if you still have questions, don't hesitate to ask.
scipy ships with the "error function", aka Gaussian integral:
import scipy.special
help(scipy.special.erf)
The gaussian distribution is also called a normal distribution. The cdf function in the scipy norm module does what you want.
from scipy.stats import norm
print(norm.cdf(0.0))
# 0.5
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm
Why not just always do your integration from -infinity to +infinity, so that you always know the answer? (joking!)
My guess is that the only reason that there's not already a canned Gaussian function in SciPy is that it's a trivial function to write. Your suggestion about writing your own function and passing it to quad to integrate sounds excellent. It uses the accepted SciPy tool for doing this, it's minimal code effort for you, and it's very readable for other people even if they've never seen SciPy.
What exactly do you mean by a fixed-width integrator? Do you mean using a different algorithm than whatever QUADPACK is using?
Edit: For completeness, here's something like what I'd try for a Gaussian with the mean of 0 and standard deviation of 1 from 0 to +infinity:
from scipy.integrate import quad
from math import pi, exp
from numpy import inf

mean = 0
sd = 1
quad(lambda x: 1 / (sd * (2 * pi) ** 0.5) * exp(x ** 2 / (-2 * sd ** 2)), 0, inf)
That's a little ugly because the Gaussian function is a little long, but still pretty trivial to write.
I assume you're handling multivariate Gaussians; if so, SciPy already has the function you're looking for: it's called MVNDIST ("MultiVariate Normal DISTribution"). The SciPy documentation is, as ever, terrible, so I can't even find where the function is buried, but it's in there somewhere. The documentation is easily the worst part of SciPy, and has frustrated me to no end in the past.
Single-variable Gaussians just use the good old error function, of which many implementations are available.
As for attacking the problem in general, yes, as James Thompson mentions, you just want to write your own gaussian distribution function and feed it to quad(). If you can avoid the generalized integration, though, it's a good idea to do so -- specialized integration techniques for a particular function (like MVNDIST uses) are going to be much faster than a standard Monte Carlo multidimensional integration, which can be extremely slow for high accuracy.
