I am interested in a particular density, and I need to sample it "regularly", in a way that represents its shape (not randomly).
Formally, f is my density function and F is the corresponding cumulative distribution function (F' = f), whose inverse function rF = F^-1 exists. I am interested in mapping a regular sample from [0, 1] into my variable's domain through F^-1. Something like:
import numpy as np
uniform_sample = np.linspace(0., 1., 256 + 2)[1:-1] # source sample
shaped_sample = rF(uniform_sample) # this is what I want to get
Is there a dedicated way to do this with numpy, or should I do this by hand? Here is the 'by hand' way for exponential law:
l = 5. # exponential parameter
# f = lambda x: l * np.exp(-l * x) # density function, not used
# F = lambda x: 1 - np.exp(-l * x) # cumulative density function, not used either
rF = lambda y: np.log(1. / (1. - y)) / l # inverse function `F^-1`
# What I need is:
shaped_sample = rF(uniform_sample)
I know that, in theory, rF is used internally to draw random samples when np.random.exponential is called, for example (a uniform random sample from [0, 1] is transformed by rF to get the actual result). So my guess is that numpy.random knows the rF function for each distribution it offers.
How do I access it? Does numpy provide functions like:
np.random.<any_numpy_distribution>.rF
or
np.random.get_reverse_F(<any_custom_density_function>)
.. or should I derive / approximate them myself?
scipy has probability distribution objects for all (I think) of the probability distributions in numpy.random.
http://docs.scipy.org/doc/scipy/reference/stats.html
They all have a ppf() method that does what you want.
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.ppf.html
In your example:
import scipy.stats as st
l = 5. # exponential parameter
dist = st.expon(0., 1./l) # distribution object provided by scipy (note: scipy parameterizes expon by scale = 1/lambda)
f = dist.pdf # probability density function
F = dist.cdf # cumulative density function
rF = dist.ppf # percent point function: the inverse `F^-1`
shaped_sample = rF(uniform_sample)
# and much more!
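As a quick sanity check (reusing l and the uniform_sample from the question), the scipy ppf and the hand-written inverse should agree; this is only a minimal sketch of that comparison:
import numpy as np
import scipy.stats as st

l = 5.
uniform_sample = np.linspace(0., 1., 256 + 2)[1:-1]

rF_manual = lambda y: np.log(1. / (1. - y)) / l   # inverse CDF from the question
rF_scipy = st.expon(scale=1./l).ppf               # scipy's percent point function

print(np.allclose(rF_manual(uniform_sample), rF_scipy(uniform_sample)))  # True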
As far as I'm aware there isn't a way to do this directly in numpy. For functions where the cumulative distribution is analytic but its inverse isn't, I generally use a spline to do the inversion numerically.
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 1.0, 1000)
F = cumulative_distn(x)       # this we know and it is analytic
rF = UnivariateSpline(F, x)   # this will then be the inverse
Note that if you can do the inversion of F to rF by hand then you should. This method is only for the case where the inverse cannot be found in a closed form.
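As a concrete, made-up example: take a two-component Gaussian mixture, whose CDF is analytic (via scipy.stats.norm.cdf) but has no closed-form inverse. The same spline trick, with s=0 for pure interpolation, then looks like this (the CDF values must be strictly increasing for the spline to be a valid inverse, which holds here because the density is positive everywhere):
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

# hypothetical density with no closed-form inverse CDF: a two-component Gaussian mixture
x = np.linspace(-4.0, 6.0, 1000)
F = 0.3 * norm.cdf(x, -1.0, 0.5) + 0.7 * norm.cdf(x, 2.0, 1.0)   # analytic CDF
rF = UnivariateSpline(F, x, s=0)                                  # numerical inverse F^-1

uniform_sample = np.linspace(0., 1., 256 + 2)[1:-1]
shaped_sample = rF(uniform_sample)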
I have a function represented as an ndarray, i.e. y = f(x), where y and x are two ndarrays.
I am searching for a method that finds the roots of f(x).
Reading the scipy documentation, I was only able to find methods that work on user-defined functions, like scipy.optimize.root_scalar. I thought about using scipy.interpolate.interp1d to get an interpolated version of my function to use in scipy.optimize.root_scalar, but I'm not sure it would work and it seems pretty complicated.
Is there some other function that I can use instead?
You have to interpolate a function defined by numpy arrays, as all the solvers require a function that can return a value for any input x, not just those in your array. But this is not complicated; here is an example:
import numpy as np
from scipy import optimize
from scipy import interpolate
# our xs and ys
xs = np.array([0,2,5])
ys = np.array([-3,-1,2])
# interpolated function
f = interpolate.interp1d(xs, ys)
sol = optimize.root_scalar(f, bracket = [xs[0],xs[-1]])
print(f'root is {sol.root}')
# check
f0 = f(sol.root)
print(f'value of function at the root: f({sol.root})={f0}')
output:
root is 3.0
value of function at the root: f(3.0)=0.0
You may also want to interpolate with higher-degree polynomials for higher accuracy of your root finding, e.g. How to perform cubic spline interpolation in python?
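For instance, scipy.interpolate.CubicSpline builds a piecewise cubic polynomial whose roots can be extracted directly via its roots() method; a small sketch with made-up data:
import numpy as np
from scipy.interpolate import CubicSpline

# a function known only through arrays (here sampled from sin for illustration)
xs = np.linspace(0, 10, 50)
ys = np.sin(xs)

spline = CubicSpline(xs, ys)
print(spline.roots(extrapolate=False))   # close to 0, pi, 2*pi, 3*pi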
In scipy there is no support for fitting discrete distributions to data. I know there are a lot of questions about this.
For example, if I have an array like below:
x = [2,3,4,5,6,7,0,1,1,0,1,8,10,9,1,1,1,0,0]
I couldn't do the following for this array:
from scipy.stats import nbinom
param = nbinom.fit(x)
But I would like to ask: as of today, is there any way to fit these three discrete distributions and then choose the best fit for the discrete dataset?
You can use the Method of Moments to fit any particular distribution.
Basic idea: get the empirical first, second, etc. moments, then derive the distribution parameters from these moments.
In all these cases we need at most two moments. Let's get them:
import pandas as pd
# for other distributions, you'll need to implement PMF
from scipy.stats import nbinom, poisson, geom
x = pd.Series(x)
mean = x.mean()
var = x.var()
likelihoods = {} # we'll use it later
Note: I used pandas instead of numpy. That is because numpy's var() and std() don't apply Bessel's correction by default, while pandas' do. If you have 100+ samples there shouldn't be much difference, but on smaller samples it can matter.
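A quick illustration of that difference, using the data array from the question:
import numpy as np
import pandas as pd

data = [2, 3, 4, 5, 6, 7, 0, 1, 1, 0, 1, 8, 10, 9, 1, 1, 1, 0, 0]
print(np.var(data))            # population variance (divides by n)
print(np.var(data, ddof=1))    # sample variance (divides by n-1), Bessel's correction
print(pd.Series(data).var())   # pandas defaults to ddof=1, same as the line above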
Now, let's get parameters for these distributions. Negative binomial has two parameters: p, r. Let's estimate them and calculate likelihood of the dataset:
# From the wikipedia page, we have:
# mean = pr / (1-p)
# var = pr / (1-p)**2
# without wiki, you could use MGF to get moments; too long to explain here
# Solving for p and r, we get:
p = 1 - mean / var # TODO: check for zero variance and limit p by [0, 1]
r = (1-p) * mean / p
UPD: Wikipedia and scipy use different definitions of p, one treating it as the probability of success and the other as the probability of failure. To be consistent with the scipy notation, use:
p = mean / var
r = p * mean / (1-p)
END OF UPD
UPD2:
I'd suggest using the log-likelihood code from @thilak's answer instead. It avoids loss of precision, which is especially important on large samples.
END OF UPD2
Calculate likelihood:
likelihoods['nbinom'] = x.map(lambda val: nbinom.pmf(val, r, p)).prod()
Same for Poisson, there is only one parameter:
# from Wikipedia,
# mean = variance = lambda. Nothing to solve here
lambda_ = mean
likelihoods['poisson'] = x.map(lambda val: poisson.pmf(val, lambda_)).prod()
Same for Geometric distribution:
# mean = 1 / p # this form fits the scipy definition
p = 1 / mean
likelihoods['geometric'] = x.map(lambda val: geom.pmf(val, p)).prod()
Finally, let's get the best fit:
best_fit = max(likelihoods, key=lambda x: likelihoods[x])
print("Best fit:", best_fit)
print("Likelihood:", likelihoods[best_fit])
Let me know if you have any questions
Great answer by Marat.
In addition to Marat's post, I would most certainly recommend taking the log of the probability mass function. Some information on why the log-likelihood is preferred over the likelihood: https://math.stackexchange.com/questions/892832/why-we-consider-log-likelihood-instead-of-likelihood-in-gaussian-distribution
I would rewrite the code for the negative binomial as:
log_likelihoods = {}
log_likelihoods['nbinom'] = x.map(lambda val: nbinom.logpmf(val, r, p)).sum()
Note that I have used-
logpmf instead of pmf
sum instead of product
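The other two candidates get the same treatment (a sketch, assuming lambda_ and the geometric p computed in Marat's answer are still in scope):
# Poisson and geometric log-likelihoods, mirroring the nbinom line above
log_likelihoods['poisson'] = x.map(lambda val: poisson.logpmf(val, lambda_)).sum()
log_likelihoods['geometric'] = x.map(lambda val: geom.logpmf(val, p)).sum()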
And to find out the best distribution-
best_fit = max(log_likelihoods, key=lambda x: log_likelihoods[x])
print("Best fit:", best_fit)
print("log_Likelihood:", log_likelihoods[best_fit])
I am trying to find the minimum of a function g(alpha) and, more importantly, the value of alpha at (or close to) that minimum.
The code I use is the following: it creates the function f, the vectors D, avec and grad, and uses them to build the function g(alpha), whose minimum I want to find together with the corresponding alpha value.
The problem is that after applying solve from the sympy library I don't get a numerical value for alpha. Instead I get the following error:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
The code:
import numpy as np
from scipy.optimize import fmin
from sympy import Symbol, solve
from scipy import interpolate
Emax = 10
bins = 200
x = np.linspace(1, Emax, num = Emax, dtype=np.int) #create grid of indexes
y = np.linspace(1, bins, num = bins, dtype=np.int)
z = np.random.rand(bins, Emax) # random matrix
f = interpolate.interp2d(x,y,z, kind='cubic') # make the matrix continuous
D= np.zeros(bins)
D = 2*f(1.5, y) # create vector
avec = np.array([4.0, 16.0])
grad = np.array([1e-5,1e-5])
g = lambda alpha: np.sum(np.square(np.subtract(D, (avec[0] - alpha*grad[0])*f((avec[1] - alpha*grad[1]), y))))
oo = fmin(g, (0.0))
alfa = Symbol("alfa")
slv = solve((np.sum(np.square(np.subtract(D, (avec[0] - alfa*grad[0])*f((avec[1] - alfa*grad[1]), y)))) - oo), alfa)
I know that this solution may not be the best for this problem. I'm new to Python, and if you have any better suggestions on how to find alpha here, please tell me.
I think you are confusing what sympy does. sympy is a library for symbolic mathematics, for solving and printing analytical equations. You do not need that package at all for this task.
You actually do find the minimum of g here; you store the result in oo.
So basically, delete the last two lines (starting with alfa = ... and slv = ...) and just print(oo). oo holds the value of alpha that minimises the function g.
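For example (a small sketch; g is the function defined in the question, and fmin returns an array of optimal parameters, while full_output=True also gives the minimum value of g):
from scipy.optimize import fmin

alpha_opt, g_min, *_ = fmin(g, 0.0, full_output=True)
print("alpha at the minimum:", alpha_opt[0])
print("g(alpha) at the minimum:", g_min)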
The random module (http://docs.python.org/2/library/random.html) has several fixed functions to sample from. For example, random.gauss will draw random samples from a normal distribution with a given mean and sigma.
I'm looking for a way to extract N random samples between a given interval using my own distribution, as fast as possible, in Python. This is what I mean:
def my_dist(x):
    # Some distribution; assume c1, c2, c3 and c4 are known.
    f = c1*exp(-((x-c2)**c3)/c4)
    return f
# Draw N random samples from my distribution between given limits a,b.
N = 1000
N_rand_samples = ran_func_sample(my_dist, a, b, N)
where ran_func_sample is what I'm after and a, b are the limits from which to draw the samples. Is there anything of that sort in python?
You need to use the inverse transform sampling method to get random values distributed according to the law you want. With this method you just apply the inverted CDF to random numbers having a standard uniform distribution in the interval [0, 1].
After you find the inverted function, you get 1000 numbers distributed according to the needed distribution in this obvious way:
[inverted_function(random.random()) for x in range(1000)]
More on Inverse Transform Sampling:
http://en.wikipedia.org/wiki/Inverse_transform_sampling
Also, there is a good question on StackOverflow related to the topic:
Pythonic way to select list elements with different probability
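If the inverted CDF cannot be written down analytically, it can be tabulated and inverted numerically. A sketch of what ran_func_sample could look like under that approach (grid_size is an arbitrary resolution; my_dist, a, b and N are from the question):
import numpy as np

def ran_func_sample(dist, a, b, N, grid_size=10000):
    # tabulate the (unnormalized) density on [a, b]
    x = np.linspace(a, b, grid_size)
    cdf = np.cumsum(dist(x))
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalize the tabulated CDF to [0, 1]
    # push standard-uniform samples through the interpolated inverse CDF
    u = np.random.random(N)
    return np.interp(u, cdf, x)

N_rand_samples = ran_func_sample(my_dist, a, b, N)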
This code implements sampling of n-dimensional discrete probability distributions. By setting a flag on the object, it can also be used as a piecewise-constant probability distribution, which can then approximate arbitrary pdfs. Well, arbitrary pdfs with compact support; if you want to efficiently sample extremely long tails, a non-uniform description of the pdf would be required. But this is still efficient even for things like Airy point-spread functions (which I created it for, initially). The internal sorting of values is absolutely critical there for accuracy; the many small values in the tails should contribute substantially, but without sorting they get drowned out by floating-point precision.
import numpy as np

class Distribution(object):
    """
    Draws samples from a one-dimensional probability distribution,
    by means of inversion of a discretized cumulative density function.

    The pdf can be sorted first to prevent numerical error in the cumulative sum;
    this is set as default. For big density functions with high contrast
    it is absolutely necessary, and for small density functions
    the overhead is minimal.

    A call to this distribution object returns indices into the density array.
    """
    def __init__(self, pdf, sort=True, interpolation=True, transform=lambda x: x):
        self.shape = pdf.shape
        self.pdf = pdf.ravel()
        self.sort = sort
        self.interpolation = interpolation
        self.transform = transform

        # a pdf cannot be negative
        assert np.all(pdf >= 0)

        # sort the pdf by magnitude
        if self.sort:
            self.sortindex = np.argsort(self.pdf, axis=None)
            self.pdf = self.pdf[self.sortindex]
        # construct the cumulative distribution function
        self.cdf = np.cumsum(self.pdf)

    @property
    def ndim(self):
        return len(self.shape)

    @property
    def sum(self):
        """cached sum of all pdf values; the pdf need not sum to one, and is implicitly normalized"""
        return self.cdf[-1]

    def __call__(self, N):
        """draw N samples"""
        # pick numbers which are uniformly random over the cumulative distribution function
        choice = np.random.uniform(high=self.sum, size=N)
        # find the indices corresponding to this point on the CDF
        index = np.searchsorted(self.cdf, choice)
        # if necessary, map the indices back to their original ordering
        if self.sort:
            index = self.sortindex[index]
        # map back to multi-dimensional indexing
        index = np.unravel_index(index, self.shape)
        index = np.vstack(index)
        # is this a discrete or piecewise continuous distribution?
        if self.interpolation:
            index = index + np.random.uniform(size=index.shape)
        return self.transform(index)


if __name__ == '__main__':
    shape = 3, 3
    pdf = np.ones(shape)
    pdf[1] = 0
    dist = Distribution(pdf, transform=lambda i: i - 1.5)
    print(dist(10))

    import matplotlib.pyplot as pp
    pp.scatter(*dist(1000))
    pp.show()
And as a more realistic example:
x = np.linspace(-100, 100, 512)
p = np.exp(-x**2)
pdf = p[:,None]*p[None,:] #2d gaussian
dist = Distribution(pdf, transform=lambda i:i-256)
print(dist(1000000).mean(axis=1)) # should be in the 1/sqrt(1e6) range
import matplotlib.pyplot as pp
pp.scatter(*dist(1000))
pp.show()
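To connect this back to the original question, a 1-D density tabulated on a grid can be sampled the same way; a sketch, assuming my_dist from the question and placeholder limits a, b and sample size N:
# placeholder limits and sample size; my_dist is the density from the question
a, b, N = 0.0, 10.0, 1000
x = np.linspace(a, b, 512)
pdf = my_dist(x)                  # tabulated 1-d density on [a, b]
# the transform maps fractional bin indices back onto the x axis
dist = Distribution(pdf, transform=lambda i: a + i*(b - a)/len(x))
samples = dist(N).ravel()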
Here is a rather nice way of performing inverse transform sampling with a decorator.
import numpy as np
from scipy.interpolate import interp1d
def inverse_sample_decorator(dist):
    def wrapper(pnts, x_min=-100, x_max=100, n=1e5, **kwargs):
        x = np.linspace(x_min, x_max, int(n))
        cumulative = np.cumsum(dist(x, **kwargs))
        cumulative -= cumulative.min()
        f = interp1d(cumulative/cumulative.max(), x)
        return f(np.random.random(pnts))
    return wrapper
Using this decorator on a Gaussian distribution, for example:
@inverse_sample_decorator
def gauss(x, amp=1.0, mean=0.0, std=0.2):
    return amp*np.exp(-(x-mean)**2/std**2/2.0)
You can then generate sample points from the distribution by calling the decorated function. The keyword arguments x_min and x_max are the limits of the original distribution and can be passed as arguments to gauss along with the other keyword arguments that parameterise the distribution.
samples = gauss(5000, mean=20, std=0.8, x_min=19, x_max=21)
Alternatively, this can be done as a function that takes the distribution as an argument (as in your original question):
def inverse_sample_function(dist, pnts, x_min=-100, x_max=100, n=1e5,
                            **kwargs):
    x = np.linspace(x_min, x_max, int(n))
    cumulative = np.cumsum(dist(x, **kwargs))
    cumulative -= cumulative.min()
    f = interp1d(cumulative/cumulative.max(), x)
    return f(np.random.random(pnts))
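Usage of the function form would then look like this (a sketch; note that dist must here be the plain, undecorated pdf, so gauss_pdf below is a hypothetical undecorated copy of gauss):
def gauss_pdf(x, amp=1.0, mean=0.0, std=0.2):
    return amp*np.exp(-(x-mean)**2/std**2/2.0)

samples = inverse_sample_function(gauss_pdf, 5000, x_min=19, x_max=21,
                                  mean=20, std=0.8)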
I was in a similar situation, but I wanted to sample from a multivariate distribution, so I implemented a rudimentary version of Metropolis-Hastings (which is an MCMC method).
import numpy as np

def metropolis_hastings(target_density, size=500000):
    burnin_size = 10000
    size += burnin_size
    x0 = np.array([[0, 0]])
    xt = x0
    samples = []
    for i in range(size):
        xt_candidate = np.array([np.random.multivariate_normal(xt[0], np.eye(2))])
        accept_prob = target_density(xt_candidate) / target_density(xt)
        if np.random.uniform(0, 1) < accept_prob:
            xt = xt_candidate
        samples.append(xt)
    samples = np.array(samples[burnin_size:])
    samples = np.reshape(samples, [samples.shape[0], 2])
    return samples
This function requires a function target_density which takes in a data point and computes its probability.
For details, check out this detailed answer of mine.
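A usage sketch with a hypothetical 2-D target (an unnormalized bivariate Gaussian; Metropolis-Hastings only needs the density up to a constant):
import numpy as np

def target_density(x):
    # unnormalized standard bivariate normal; x has shape (1, 2)
    return np.exp(-0.5 * np.sum(x**2))

samples = metropolis_hastings(target_density, size=10000)
print(samples.mean(axis=0))   # should be close to [0, 0]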
import numpy as np
import scipy.interpolate as interpolate
def inverse_transform_sampling(data, n_bins, n_samples):
    hist, bin_edges = np.histogram(data, bins=n_bins, density=True)
    cum_values = np.zeros(bin_edges.shape)
    cum_values[1:] = np.cumsum(hist*np.diff(bin_edges))
    inv_cdf = interpolate.interp1d(cum_values, bin_edges)
    r = np.random.rand(n_samples)
    return inv_cdf(r)
So if we give this function a data sample with a specific distribution, inverse_transform_sampling will return a dataset with (approximately) the same distribution. The advantage here is that we can choose our own sample size by specifying it in the n_samples argument.
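For example, a sketch using normally distributed toy data:
import numpy as np

data = np.random.normal(loc=5.0, scale=2.0, size=10000)
resampled = inverse_transform_sampling(data, n_bins=50, n_samples=2000)
print(data.mean(), data.std())
print(resampled.mean(), resampled.std())   # should be close to the values above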
I have used numpy's polyfit and obtained a very good fit (using a 7th order polynomial) for two arrays, x and y. My relationship is thus;
y(x) = p[0]* x^7 + p[1]*x^6 + p[2]*x^5 + p[3]*x^4 + p[4]*x^3 + p[5]*x^2 + p[6]*x^1 + p[7]
where p is the polynomial array output by polyfit.
Is there a way to reverse this method easily, so I have a solution in the form of,
x(y) = p[0]*y^n + p[1]*y^(n-1) + .... + p[n]*y^0
No, there is no easy way in general. Closed-form solutions are not available for general polynomials of degree five or higher, so certainly not for a seventh-order polynomial.
Doing the fit in the reverse direction is possible, but only on monotonically varying regions of the original polynomial. If the original polynomial has minima or maxima on the domain you are interested in, then even though y is a function of x, x cannot be a function of y because there is no 1-to-1 relation between them.
If you are (i) OK with redoing the fitting procedure, and (ii) OK with working piecewise on single monotonic regions of your fit at a time, then you could do something like this:
import numpy as np

# generate a random coefficient vector a
degree = 1
a = 2 * np.random.random(degree+1) - 1

# an assumed true polynomial y(x)
def y_of_x(x, coeff_vector):
    """
    Evaluate a polynomial with coeff_vector and degree len(coeff_vector)-1 using Horner's method.
    Coefficients are ordered by increasing degree, from the constant term at coeff_vector[0],
    to the linear term at coeff_vector[1], to the n-th degree term at coeff_vector[n].
    """
    coeff_rev = coeff_vector[::-1]
    b = 0
    for a in coeff_rev:
        b = b * x + a
    return b

# generate some data
my_x = np.arange(-1, 1, 0.01)
my_y = y_of_x(my_x, a)

# verify that polyfit in the "traditional" direction gives the correct result
# [::-1] b/c polyfit returns coeffs in backwards order rel. to y_of_x()
p_test = np.polyfit(my_x, my_y, deg=degree)[::-1]
print(p_test, a)

# fit the data using polyfit but with y as the independent var, x as the dependent var
p = np.polyfit(my_y, my_x, deg=degree)[::-1]

# define x as a function of y
def x_of_y(yy, a):
    return y_of_x(yy, a)

# compare results
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(my_x, my_y, '-b', x_of_y(my_y, p), my_y, '-r')
Note: this code does not check for monotonicity but simply assumes it.
By playing around with the value of degree, you should see that the code only works well for all random values of a when degree=1. It occasionally does OK for other degrees, but not when there are lots of minima/maxima. It never works perfectly for degree > 1, because approximating parabolas with square-root functions doesn't always work, etc.
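As a side note, if you only ever need x for a handful of y values (rather than a full x(y) polynomial), you can skip refitting and solve the original seventh-order fit numerically for each target value. A sketch (p is the coefficient array from polyfit, highest degree first; x_min and x_max bound the region of interest):
import numpy as np

def x_of_y_numeric(y0, p, x_min, x_max):
    """Solve p[0]*x^7 + ... + p[7] = y0 numerically for x."""
    q = np.array(p, dtype=float)
    q[-1] -= y0                              # shift the constant term by the target y value
    roots = np.roots(q)                      # all (complex) roots of the shifted polynomial
    real = roots[np.isclose(roots.imag, 0.0)].real   # keep approximately real roots
    return real[(real >= x_min) & (real <= x_max)]   # and only those inside the fit range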