I am trying to approximate a function using the Discrete Fourier Transform, being given 2M+1 values of the function.
I've seen a few different expressions for the coefficients and the approximation, but the ones I was originally trying were (12) and (13) as in http://www.chebfun.org/docs/guide/guide11.html
(I apologize for the link, but apparently Stack Overflow does not support Latex.)
I have one function that computes the approximation given the coefficients, and another that calculates the coefficients (and also returns the first function with those coefficients baked in). I've tested with some values, but the results weren't close at all. I compared both against numpy.fft.fft: the coefficients didn't match, and passing the FFT output to the first function didn't give a good approximation either, so the coefficients aren't the only problem.
Here is my code:
import math
import cmath
import numpy as np

def model(cks, x):
    # evaluate the trigonometric approximation sum_{m=-M}^{M} c_m * exp(1j*m*s) at x
    n = len(cks)
    assert n % 2 == 1
    M = (n - 1) // 2
    def soma(s):
        soma = 0
        for i in range(n):
            m = -M + i
            soma += cks[i] * cmath.exp(1j * m * s)
        return soma
    soma = np.vectorize(soma)
    return soma(x)

def fourier(y):
    # compute the coefficients c_k, k = -M..M, from the 2M+1 samples y
    n = len(y)
    assert n % 2 == 1
    M = (n - 1) // 2
    def soma(k):
        soma = 0
        for i in range(n):
            t = 2 * math.pi * i / n
            soma += y[i] * cmath.exp(-1j * k * t)
        return (1 / n) * soma
    cks = np.zeros(n, dtype='complex')
    for i in range(n):
        j = -M + i
        cks[i] = soma(j)
    return cks, (lambda x: model(cks, x))
I'm not sure I understand your code, but it looks to me like you have a forward and an inverse DFT there. One of those doesn't use pi, but it should: fourier samples the function at the angles t = 2*pi*i/n, so model has to be evaluated at those same angles (or the 2*pi/n scaling has to be folded into its exponent) for the pair to be consistent.
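As a quick sanity check (a minimal sketch using your fourier and model as posted): if you evaluate the interpolant at the same angles that fourier uses, the round trip should reproduce the samples to machine precision.

import numpy as np

# minimal round-trip sketch: sample a test function at n = 2M+1 equispaced
# angles, compute the coefficients, and evaluate the interpolant at those
# same angles; if the scaling is consistent, the two agree to machine precision
n = 11
t = 2 * np.pi * np.arange(n) / n
y = np.cos(3 * t) + 0.5 * np.sin(t)

cks, approx = fourier(y)
print(np.max(np.abs(approx(t) - y)))  # should be ~1e-15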
If you're interested in obtaining interpolating samples, you can apply the DFT, pad it with zeros, then compute the inverse DFT (I'm using MATLAB code, because that is what I know, but I think it's fairly easy to read):
f = randn(1,21); % an input signal to be interpolated
F = fft(f); % forward DFT
F = fftshift(F); % shift zero frequency to middle of array
F = [zeros(1,60),F,zeros(1,60)]; % pad with equal number of zeros on both sides
F = ifftshift(F); % shift zero frequency back to first array element
fi = ifft(F) * length(F)/length(f); % inverse DFT, normalize
% `fi` is the interpolated `f`
% plotting
x = linspace(1,length(fi)+1,length(f)+1);
x = x(1:end-1);
plot(x,f,'x');
xi = 1:length(fi);
hold on
plot(xi,fi);
If you feel like you need to implement the DFT and inverse DFT from scratch, know that you can implement the latter using the former.
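For example, here is a minimal sketch of that trick (with numpy.fft.fft standing in for your own forward DFT): the inverse transform is the conjugate of the forward transform of the conjugate, divided by N.

import numpy as np

def idft_via_dft(X):
    """Inverse DFT built from a forward DFT: ifft(X) = conj(fft(conj(X))) / N."""
    X = np.asarray(X, dtype=complex)
    return np.conj(np.fft.fft(np.conj(X))) / len(X)

# quick check against numpy's own inverse transform
X = np.fft.fft(np.random.randn(8))
print(np.allclose(idft_via_dft(X), np.fft.ifft(X)))  # True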
If you want to create a continuous function as a summation of shifted sine functions, follow the equation for the Fourier series, with A_n and Φ_n given by the amplitude and phase of the elements of the DFT.
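A minimal sketch of that idea (using cosines with a phase shift, which is equivalent to shifted sines, and assuming a real signal with an odd number of samples so every nonzero frequency has a conjugate pair):

import numpy as np

def continuous_from_dft(f):
    """Continuous interpolant built as a sum of shifted cosines whose amplitudes
    A_k and phases phi_k come from the DFT of the real samples f (len(f) odd)."""
    f = np.asarray(f, dtype=float)
    N = len(f)
    assert N % 2 == 1
    M = (N - 1) // 2
    F = np.fft.fft(f)
    A, phi = np.abs(F), np.angle(F)

    def g(t):  # t in sample units; g(n) == f[n] for integer n
        t = np.asarray(t, dtype=float)
        out = F[0].real / N * np.ones_like(t)
        for k in range(1, M + 1):
            out += 2 / N * A[k] * np.cos(2 * np.pi * k * t / N + phi[k])
        return out
    return g

f = np.random.randn(21)
g = continuous_from_dft(f)
print(np.allclose(g(np.arange(21)), f))  # True: the interpolant passes through the samples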
I'm trying to implement the formula for the curvature of a parametric curve (x(t), y(t)) in Python for X and Y points. I have tried the following approach:
def f(c):
    """This function computes the curvature of the leaf."""
    tt = c
    n = (tt[0]*tt[3] - tt[1]*tt[2])
    d = (tt[0]**2 + tt[1]**2)
    k = n/d
    R = 1/k  # Radius of Curvature
    return R
Something is incorrect, as it is not giving me the correct result. I think I'm making a mistake while computing the derivatives in the first two lines. How can I fix that?
Here are some of the points which are in a data frame:
pts = pd.DataFrame({'x': x, 'y': y})
x y
0.089631 97.710199
0.089831 97.904541
0.090030 98.099313
0.090229 98.294513
0.090428 98.490142
0.090627 98.686200
0.090827 98.882687
0.091026 99.079602
0.091225 99.276947
0.091424 99.474720
0.091623 99.672922
0.091822 99.871553
0.092022 100.070613
0.092221 100.270102
0.092420 100.470020
0.092619 100.670366
0.092818 100.871142
0.093017 101.072346
0.093217 101.273979
0.093416 101.476041
0.093615 101.678532
0.093814 101.881451
0.094013 102.084800
0.094213 102.288577
pts_x = np.gradient(x_c, t) # first derivatives
pts_y = np.gradient(y_c, t)
pts_xx = np.gradient(pts_x, t) # second derivatives
pts_yy = np.gradient(pts_y, t)
After getting the derivatives I am putting the derivatives x_prim, x_prim_prim, y_prim, y_prim_prim in another dataframe using the following code:
d = pd.DataFrame({'x_prim': pts_x, 'y_prim': pts_y, 'x_prim_prim': pts_xx, 'y_prim_prim':pts_yy})
After having everything in the data frame, I call the function for each row to get the curvature at that point, using the following code:
# Getting the curvature at each point
curv = []
for i in range(len(d)):
    temp = d.iloc[i]
    c_temp = f(temp)
    curv.append(c_temp)
You do not specify exactly what the structure of the parameter pts is. But it seems that it is a two-dimensional array where each row has two values x and y and the rows are the points in your curve. That itself is problematic, since the documentation is not quite clear on what exactly is returned in such a case.
But you clearly are not getting the derivatives of x or y. If you supply only one array to np.gradient then numpy assumes that the points are evenly spaced with a distance of one. But that is probably not the case. The meaning of x' in your formula is the derivative of x with respect to t, the parameter variable for the curve (which is separate from the parameters to the computer functions). But you never supply the values of t to numpy. The values of t must be the second parameter passed to the gradient function.
So to get your derivatives, split the x, y, and t values into separate one-dimensional arrays; let's call them x, y, and t. Then get your first and second derivatives with
pts_x = np.gradient(x, t) # first derivatives
pts_y = np.gradient(y, t)
pts_xx = np.gradient(pts_x, t) # second derivatives
pts_yy = np.gradient(pts_y, t)
Then continue from there. You no longer need the t values to calculate the curvatures, which is the point of the formula you are using. Note that gradient is not really designed to calculate the second derivatives, and it absolutely should not be used to calculate third or higher-order derivatives. More complex formulas are needed for those. Numpy's gradient uses "second order accurate central differences" which are pretty good for the first derivative, poor for the second derivative, and worthless for higher-order derivatives.
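Putting it together, a minimal sketch (my own wrapper, assuming x, y and t are one-dimensional arrays of equal length) of the whole curvature calculation, vectorized so no row-by-row loop is needed:

import numpy as np

def curvature(x, y, t):
    """Curvature k = |x'y'' - y'x''| / (x'^2 + y'^2)^1.5 of the parametric
    curve (x(t), y(t)) sampled at the parameter values t."""
    dx = np.gradient(x, t)    # first derivatives w.r.t. t
    dy = np.gradient(y, t)
    ddx = np.gradient(dx, t)  # second derivatives (less accurate, see above)
    ddy = np.gradient(dy, t)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5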
I think your problem is that x and y are arrays of double values.
The array x is the independent variable; I'd expect it to be sorted into ascending order. If I evaluate y[i], I expect to get the value of the curve at x[i].
When you call that numpy function you get an array of derivative values that are the same shape as the (x, y) arrays. If there are n pairs from (x, y), then
y'[i] gives the value of the first derivative of y w.r.t. x at x[i];
y''[i] gives the value of the second derivative of y w.r.t. x at x[i].
The curvature k will also be an array with n points:
k[i] = abs(x'[i]*y''[i] -y'[i]*x''[i])/(x'[i]**2 + y'[i]**2)**1.5
Think of x and y as both being functions of a parameter t. x' = dx/dt, etc. This means curvature k is also a function of that parameter t.
I like to have a well-understood closed-form solution available when I program a numerical solution.
y(x) = sin(x) for 0 <= x <= pi
y'(x) = cos(x)
y''(x) = -sin(x)
k = sin(x)/(1+(cos(x))**2)**1.5
Now you have a nice formula for curvature as a function of x.
If you want to parameterize it, use
x(t) = pi*t for 0 <= t <= 1
x'(t) = pi
x''(t) = 0
See if you can plot those and make your Python solution match it.
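A small sketch of that check (my own test harness): sample y = sin(x) on [0, pi], run the curvature through np.gradient, and compare with the closed form above.

import numpy as np

x = np.linspace(0, np.pi, 1001)
y = np.sin(x)

# numerical curvature, using x itself as the parameter t
dx = np.gradient(x, x)    # = 1 everywhere
dy = np.gradient(y, x)
ddx = np.gradient(dx, x)  # = 0 everywhere
ddy = np.gradient(dy, x)
k_num = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5

# closed-form curvature for y = sin(x)
k_exact = np.sin(x) / (1 + np.cos(x)**2)**1.5

print(np.max(np.abs(k_num - k_exact)))  # small; limited by the second-derivative accuracy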
I am trying to create a PCA script that goes through a set of images and decomposes them into PCs sorted by power/weight. As far as I understand, you want to do M = U*S*V.T, which I have done below. I cut the arrays off at N, which is the filter that removes all lower PCs. But it does not seem to be working at all and gives back gibberish that is clearly not the PCs. Am I misunderstanding the math?
import numpy as np

def PCA(cube_array, PCA_N=False, cutoff=1, verbatim=0):
    cube_shape_z, cube_shape_x, cube_shape_y = cube_array.shape
    # flatten each image into one row of M
    M = cube_array.reshape(cube_array.shape[0], -1)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    V = Vt.T
    S = np.diag(s)
    N = cutoff
    # keep only the first N singular triplets and reconstruct
    Mhat = np.dot(U[:, :N], np.dot(S[:N, :N], V[:, :N].T)).reshape(cube_array.shape)
    return Mhat
The principal components are just the corresponding first N columns of V.
What you computed is the best rank N approximation of the matrix M in the 2-norm and Frobenius norm.
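As a sketch (reusing the question's flattened matrix M and its SVD; the mean-centering step is my addition, since PCA is normally done on centered data):

import numpy as np

def principal_components(cube_array, N):
    """Return the first N principal components (as images) and the
    weight of each input image on those components."""
    z, nx, ny = cube_array.shape
    M = cube_array.reshape(z, -1)
    M = M - M.mean(axis=0)                  # center the data (usual for PCA)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    components = Vt[:N].reshape(N, nx, ny)  # first N columns of V, reshaped to images
    weights = U[:, :N] * s[:N]              # projections of each image onto them
    return components, weights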
The random module (http://docs.python.org/2/library/random.html) has several functions for sampling from fixed distributions. For example, random.gauss will sample a random point from a normal distribution with given mean and sigma values.
I'm looking for a way to extract a number N of random samples between a given interval using my own distribution as fast as possible in python. This is what I mean:
def my_dist(x):
    # Some distribution; assume c1, c2, c3 and c4 are known.
    f = c1*exp(-((x-c2)**c3)/c4)
    return f
# Draw N random samples from my distribution between given limits a,b.
N = 1000
N_rand_samples = ran_func_sample(my_dist, a, b, N)
where ran_func_sample is what I'm after and a, b are the limits from which to draw the samples. Is there anything of that sort in python?
You need to use the inverse transform sampling method to get random values distributed according to the law you want. With this method you just apply the inverted cumulative distribution function to random numbers having a standard uniform distribution on the interval [0, 1].
After you find the inverted function, you get 1000 numbers distributed according to the needed distribution in this obvious way:
[inverted_function(random.random()) for x in range(1000)]
More on Inverse Transform Sampling:
http://en.wikipedia.org/wiki/Inverse_transform_sampling
Also, there is a good question on StackOverflow related to the topic:
Pythonic way to select list elements with different probability
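As a concrete sketch, take the exponential distribution, whose CDF F(x) = 1 - exp(-lam*x) inverts in closed form:

import math
import random

def sample_exponential(lam, n):
    """Inverse transform sampling for Exp(lam):
    F(x) = 1 - exp(-lam*x), so F^-1(u) = -ln(1 - u)/lam for u uniform on [0, 1)."""
    return [-math.log(1.0 - random.random()) / lam for _ in range(n)]

samples = sample_exponential(2.0, 1000)
print(sum(samples) / len(samples))  # should be close to 1/lam = 0.5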
This code implements sampling of n-dimensional discrete probability distributions. By setting a flag on the object, it can also be used as a piecewise-constant probability distribution, which can then approximate arbitrary pdfs. Well, arbitrary pdfs with compact support; if you want to efficiently sample extremely long tails, a non-uniform description of the pdf would be required. But it is still efficient even for things like Airy point spread functions (which I created it for, initially). The internal sorting of values is absolutely critical there to get accuracy: the many small values in the tails should contribute substantially, but without sorting they get drowned out by floating-point accuracy.
import numpy as np

class Distribution(object):
    """
    Draws samples from a one-dimensional probability distribution
    by inversion of a discretized cumulative distribution function.

    The pdf can be sorted first to prevent numerical error in the cumulative sum;
    this is set as the default. For big density functions with high contrast
    it is absolutely necessary, and for small density functions the overhead is minimal.

    A call to this distribution object returns indices into the density array.
    """
    def __init__(self, pdf, sort=True, interpolation=True, transform=lambda x: x):
        self.shape = pdf.shape
        self.pdf = pdf.ravel()
        self.sort = sort
        self.interpolation = interpolation
        self.transform = transform
        # a pdf cannot be negative
        assert np.all(pdf >= 0)
        # sort the pdf by magnitude
        if self.sort:
            self.sortindex = np.argsort(self.pdf, axis=None)
            self.pdf = self.pdf[self.sortindex]
        # construct the cumulative distribution function
        self.cdf = np.cumsum(self.pdf)
    @property
    def ndim(self):
        return len(self.shape)
    @property
    def sum(self):
        """Cached sum of all pdf values; the pdf need not sum to one and is implicitly normalized."""
        return self.cdf[-1]
    def __call__(self, N):
        """Draw N samples."""
        # pick numbers which are uniformly random over the cumulative distribution function
        choice = np.random.uniform(high=self.sum, size=N)
        # find the indices corresponding to this point on the CDF
        index = np.searchsorted(self.cdf, choice)
        # if necessary, map the indices back to their original ordering
        if self.sort:
            index = self.sortindex[index]
        # map back to multi-dimensional indexing
        index = np.unravel_index(index, self.shape)
        index = np.vstack(index)
        # is this a discrete or piecewise-constant distribution?
        if self.interpolation:
            index = index + np.random.uniform(size=index.shape)
        return self.transform(index)

if __name__ == '__main__':
    shape = 3, 3
    pdf = np.ones(shape)
    pdf[1] = 0
    dist = Distribution(pdf, transform=lambda i: i - 1.5)
    print(dist(10))
    import matplotlib.pyplot as pp
    pp.scatter(*dist(1000))
    pp.show()
And as a more real-world relevant example:
x = np.linspace(-100, 100, 512)
p = np.exp(-x**2)
pdf = p[:, None] * p[None, :]  # 2d gaussian
dist = Distribution(pdf, transform=lambda i: i - 256)
print(dist(1000000).mean(axis=1))  # should be in the 1/sqrt(1e6) range
import matplotlib.pyplot as pp
pp.scatter(*dist(1000))
pp.show()
Here is a rather nice way of performing inverse transform sampling with a decorator.
import numpy as np
from scipy.interpolate import interp1d

def inverse_sample_decorator(dist):
    def wrapper(pnts, x_min=-100, x_max=100, n=1e5, **kwargs):
        x = np.linspace(x_min, x_max, int(n))
        cumulative = np.cumsum(dist(x, **kwargs))
        cumulative -= cumulative.min()
        f = interp1d(cumulative/cumulative.max(), x)
        return f(np.random.random(pnts))
    return wrapper
Using this decorator on a Gaussian distribution, for example:
@inverse_sample_decorator
def gauss(x, amp=1.0, mean=0.0, std=0.2):
    return amp*np.exp(-(x-mean)**2/std**2/2.0)
You can then generate sample points from the distribution by calling the function. The keyword arguments x_min and x_max are the limits of the original distribution and can be passed as arguments to gauss along with the other keyword arguments that parameterise the distribution.
samples = gauss(5000, mean=20, std=0.8, x_min=19, x_max=21)
Alternatively, this can be done as a function that takes the distribution as an argument (as in your original question),
def inverse_sample_function(dist, pnts, x_min=-100, x_max=100, n=1e5,
                            **kwargs):
    x = np.linspace(x_min, x_max, int(n))
    cumulative = np.cumsum(dist(x, **kwargs))
    cumulative -= cumulative.min()
    f = interp1d(cumulative/cumulative.max(), x)
    return f(np.random.random(pnts))
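Usage then mirrors the decorated version; assuming gauss is defined as above but left undecorated, something like:

samples = inverse_sample_function(gauss, 5000, mean=20, std=0.8, x_min=19, x_max=21)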
I was in a similar situation, but I wanted to sample from a multivariate distribution, so I implemented a rudimentary version of Metropolis-Hastings (which is an MCMC method).
import numpy as np

def metropolis_hastings(target_density, size=500000):
    burnin_size = 10000
    size += burnin_size
    x0 = np.array([[0, 0]])
    xt = x0
    samples = []
    for i in range(size):
        # propose a candidate from a Gaussian centered on the current state
        xt_candidate = np.array([np.random.multivariate_normal(xt[0], np.eye(2))])
        accept_prob = (target_density(xt_candidate))/(target_density(xt))
        if np.random.uniform(0, 1) < accept_prob:
            xt = xt_candidate
        samples.append(xt)
    samples = np.array(samples[burnin_size:])
    samples = np.reshape(samples, [samples.shape[0], 2])
    return samples
This function requires a function target_density which takes in a data-point and computes its probability.
For details, check out this detailed answer of mine.
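As a usage sketch (the 2-D Gaussian target below is my own example, not from the original answer):

import numpy as np

def target_density(x):
    """Unnormalised 2-D standard Gaussian density; x has shape (1, 2)."""
    return np.exp(-0.5 * np.sum(x**2))

samples = metropolis_hastings(target_density, size=5000)
print(samples.mean(axis=0))  # should be close to [0, 0]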
import numpy as np
import scipy.interpolate as interpolate

def inverse_transform_sampling(data, n_bins, n_samples):
    hist, bin_edges = np.histogram(data, bins=n_bins, density=True)
    cum_values = np.zeros(bin_edges.shape)
    cum_values[1:] = np.cumsum(hist*np.diff(bin_edges))
    inv_cdf = interpolate.interp1d(cum_values, bin_edges)
    r = np.random.rand(n_samples)
    return inv_cdf(r)
So if we give it a data sample that has a specific distribution, the inverse_transform_sampling function will return a dataset with approximately the same distribution. The advantage here is that we can choose our own sample size by specifying it in the n_samples variable.
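For example (my own quick check, not from the original answer): feed it normally distributed data, and the resampled set should come out roughly normal as well.

import numpy as np

data = np.random.normal(loc=0.0, scale=1.0, size=10000)
samples = inverse_transform_sampling(data, n_bins=50, n_samples=5000)
print(samples.mean(), samples.std())  # both should be close to the originals (0 and 1)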
Imagine a simulation experiment in which the output is n total numbers, where k of them are sampled from an exponential random variable with rate a and n-k are sampled from an exponential random variable with rate b. The constraints are that 0 < a ≤ b and 0 ≤ k ≤ n, but a, b, and k are all unknown. Further, because of details of the simulation experiment, when a << b, k ≈ 0, and when a = b, k ≈ n/2.
My goal is to estimate either a or b (I don't care about k, and I don't need to estimate both a and b: just one of the two is fine). From speculation, it seems as though estimating just b might be the easiest path (when a << b, there is pretty much nothing to use to estimate a but plenty to estimate b, and when a = b, there is still plenty to estimate b). I want to do it in Python ideally, but I am open to any free software.
My first approach was to use scipy.optimize to optimize a likelihood function where, for each number in my dataset, I compute P(X=x) for an exponential with rate a, compute the same for an exponential with rate b, and simply choose the larger of the two:
from sys import stdin
from math import exp, log
from scipy.optimize import fmin

DATA = None

def pdf(x, l):  # compute P(X=x) for an exponential rv X with rate l
    return l*exp(-1*l*x)

def logML(X, la, lb):  # log-ML of data X given two exponentials with rates la and lb, la < lb
    ml = 0.0
    for x in X:
        ml += log(max(pdf(x, la), pdf(x, lb)))
    return ml

def f(x):  # objective function to minimize
    assert DATA is not None, "DATA cannot be None"
    la, lb = x
    if la > lb:  # force la <= lb
        return float('inf')
    elif la <= 0 or lb <= 0:
        return float('inf')  # force la and lb > 0
    return -1*logML(DATA, la, lb)

if __name__ == "__main__":
    DATA = [float(x) for x in stdin.read().split()]  # read input data
    Xbar = sum(DATA)/len(DATA)  # compute mean
    x0 = [1/Xbar, 1/Xbar]  # start with la = lb = 1/mean
    result = fmin(f, x0, disp=True)
    print("ML Rates: la = %f and lb = %f" % tuple(result))
This unfortunately didn't work very well. For some selections of the parameters, it's within an order of magnitude, but for others, it's absurdly off. Given my problem (with its constraints) and my goal of estimating the larger parameter of the two exponentials (without caring about the smaller parameter nor the number of points that came from either), any ideas?
I posted the question in more general statistical terms on the stats Stack Exchange, and it got an answer:
https://stats.stackexchange.com/questions/291642/how-to-estimate-parameters-of-mixture-of-2-exponential-random-variables-ideally
Also, I tried the following, which worked decently well:
First, for every integer percentile (1st percentile, 2nd percentile, ..., 99th percentile), I compute an estimate of b using the closed-form quantile equation for an exponential distribution: the p-th quantile equals −ln(1 − p) / λ, so λ = −ln(1 − p) / (p-th quantile), where the p-th quantile is the (p*100)-th percentile. The result is a list where the i-th element is the b estimate obtained from the (i+1)-th percentile.
Then I perform peak-calling on this list using a Python implementation of the MATLAB peak-calling function, take the list of resulting peaks, and return the minimum. It seems to work fairly well.
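A minimal sketch of that recipe (scipy.signal.find_peaks stands in for the MATLAB-style peak caller, and the fallback when no peak is found is my own choice):

import numpy as np
from scipy.signal import find_peaks

def estimate_b(data):
    """Estimate the larger rate b from percentile-based exponential quantile estimates."""
    data = np.asarray(data, dtype=float)
    estimates = []
    for i in range(1, 100):                     # 1st through 99th percentile
        q = np.percentile(data, i)              # the i-th percentile of the data
        p = i / 100.0
        estimates.append(-np.log(1.0 - p) / q)  # lambda = -ln(1 - p) / quantile
    estimates = np.array(estimates)
    peaks, _ = find_peaks(estimates)            # peak-calling on the estimate curve
    return estimates[peaks].min() if len(peaks) else estimates.max()  # fallback is my own choice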
I will implement the EM solution in the Stack Exchange post as well and see which works better.
EDIT: I implemented the EM solution, and it seems to work decently well in my simulations (n = 1000, various a and b).
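For reference, a minimal sketch of the standard EM updates for a two-component exponential mixture (parameter names and initialization are my own; this is the textbook recipe rather than the exact code from the linked answer):

import numpy as np

def em_two_exponentials(x, n_iter=200):
    """EM for the mixture w*Exp(a) + (1-w)*Exp(b) with a <= b; returns (a, b, w)."""
    x = np.asarray(x, dtype=float)
    a, b, w = 0.5 / x.mean(), 2.0 / x.mean(), 0.5        # crude initialization
    for _ in range(n_iter):
        # E-step: responsibility of the rate-a component for each point
        pa = w * a * np.exp(-a * x)
        pb = (1.0 - w) * b * np.exp(-b * x)
        ra = pa / (pa + pb)
        # M-step: weighted MLEs for the mixture weight and the two rates
        w = ra.mean()
        a = ra.sum() / (ra * x).sum()
        b = (1.0 - ra).sum() / ((1.0 - ra) * x).sum()
        if a > b:                                        # keep the convention a <= b
            a, b, w = b, a, 1.0 - w
    return a, b, w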
I have used numpy's polyfit and obtained a very good fit (using a 7th order polynomial) for two arrays, x and y. My relationship is thus;
y(x) = p[0]* x^7 + p[1]*x^6 + p[2]*x^5 + p[3]*x^4 + p[4]*x^3 + p[5]*x^2 + p[6]*x^1 + p[7]
where p is the polynomial array output by polyfit.
Is there a way to reverse this method easily, so that I have a solution of the form
x(y) = p[0]*y^n + p[1]*y^(n-1) + ... + p[n]*y^0
No, there is no easy way in general. Closed-form solutions for the roots of arbitrary polynomials do not exist at seventh order (or for any degree above four).
Doing the fit in the reverse direction is possible, but only on monotonically varying regions of the original polynomial. If the original polynomial has minima or maxima on the domain you are interested in, then even though y is a function of x, x cannot be a function of y because there is no 1-to-1 relation between them.
If you are (i) OK with redoing the fitting procedure, and (ii) OK with working piecewise on single monotonic regions of your fit at a time, then you could do something like this:
import numpy as np

# generate a random coefficient vector a
degree = 1
a = 2 * np.random.random(degree+1) - 1

# an assumed true polynomial y(x)
def y_of_x(x, coeff_vector):
    """
    Evaluate a polynomial with coeff_vector and degree len(coeff_vector)-1 using Horner's method.
    Coefficients are ordered by increasing degree, from the constant term at coeff_vector[0],
    to the linear term at coeff_vector[1], to the n-th degree term at coeff_vector[n].
    """
    coeff_rev = coeff_vector[::-1]
    b = 0
    for a in coeff_rev:
        b = b * x + a
    return b

# generate some data
my_x = np.arange(-1, 1, 0.01)
my_y = y_of_x(my_x, a)

# verify that polyfit in the "traditional" direction gives the correct result
# [::-1] b/c polyfit returns coeffs in backwards order rel. to y_of_x()
p_test = np.polyfit(my_x, my_y, deg=degree)[::-1]
print(p_test, a)

# fit the data using polyfit but with y as the independent var, x as the dependent var
p = np.polyfit(my_y, my_x, deg=degree)[::-1]

# define x as a function of y
def x_of_y(yy, a):
    return y_of_x(yy, a)

# compare results
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(my_x, my_y, '-b', x_of_y(my_y, p), my_y, '-r')
Note: this code does not check for monotonicity but simply assumes it.
By playing around with the value of degree, you should see that the code only works well for all random values of a when degree=1. It occasionally does OK for other degrees, but not when there are lots of minima/maxima. It never does perfectly for degree > 1 because approximating parabolas with square-root functions doesn't always work, etc.