nth order Fourier series curve fitting in Python

I've been looking for a way to write a snippet in Python that computes an n-th order Fourier series curve fit. Computing a fixed order, say order 3, is quite simple; doing it for a variable order n is what I haven't managed yet. Perhaps somebody has done it, but my searching hasn't turned it up. I wonder if anybody could help. Thanks.

Well, the formula is
n-th cos coeff = (2/T)*integral(-T/2, T/2, f(t)*cos(n*t*2*pi/T) dt)
n-th sin coeff = (2/T)*integral(-T/2, T/2, f(t)*sin(n*t*2*pi/T) dt)
Check scipy and scipy.integrate for details on integration.
Here it should be
cos_coeff(f, T, n) = (2/T)*quad(lambda t: f(t)*math.cos(n*t*2*math.pi/T), -T/2, T/2)[0]
(Not tested, though. Note that quad returns a (value, error) pair, hence the [0].)
I am not familiar with the Discrete Fourier Transform, but you can perhaps compute these coefficients from it, too. Check
http://docs.scipy.org/doc/scipy/reference/tutorial/fftpack.html
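Here is a small, runnable sketch of the coefficient formulas above using scipy.integrate.quad; the helper names cos_coeff and sin_coeff are just illustrative:
import math
from scipy.integrate import quad

def cos_coeff(f, T, n):
    # a_n = (2/T) * integral_{-T/2}^{T/2} f(t) * cos(2*pi*n*t/T) dt
    val, _err = quad(lambda t: f(t) * math.cos(2 * math.pi * n * t / T), -T / 2, T / 2)
    return 2.0 / T * val

def sin_coeff(f, T, n):
    # b_n = (2/T) * integral_{-T/2}^{T/2} f(t) * sin(2*pi*n*t/T) dt
    val, _err = quad(lambda t: f(t) * math.sin(2 * math.pi * n * t / T), -T / 2, T / 2)
    return 2.0 / T * val

# sanity check: f(t) = cos(2*pi*t) with period T = 1 should give a_1 ~ 1
print(cos_coeff(lambda t: math.cos(2 * math.pi * t), 1.0, 1))
Since n is just an argument, you can loop n = 1, ..., N to get any order of coefficients.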

Related

Python: How to discretize continuous probability distributions for Kullback-Leibler Divergence

I want to find out how many samples are needed, at minimum, to more or less correctly fit a probability distribution (in my case the Generalized Extreme Value distribution from scipy.stats).
In order to evaluate the fitted function, I want to compute the KL divergence between the original distribution and the fitted one.
Unfortunately, all implementations I found (e.g. scipy.stats.entropy) only take discrete arrays as input. So obviously I thought of approximating the pdf by a discrete array, but I just can't seem to figure it out.
Does anyone have experience with something similar? I would be thankful for hints relating directly to my question, but also for better alternatives to determine a distance between two functions in Python, if there are any.
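For what it's worth, here is a hedged sketch of the discretization idea (the grid bounds and the genextreme shape parameter are made-up assumptions for illustration): evaluate both pdfs on a common grid and pass the arrays to scipy.stats.entropy, which normalizes them and returns the KL divergence.
import numpy as np
from scipy import stats

# "true" distribution and a fit from limited samples (shape c is made up)
true_dist = stats.genextreme(c=-0.1)
samples = true_dist.rvs(size=500, random_state=0)
c, loc, scale = stats.genextreme.fit(samples)
fitted = stats.genextreme(c, loc=loc, scale=scale)

# discretize both pdfs on a common uniform grid; entropy() normalizes
# the arrays internally, so this approximates KL(true || fitted)
grid = np.linspace(true_dist.ppf(0.001), true_dist.ppf(0.999), 1000)
kl = stats.entropy(true_dist.pdf(grid), fitted.pdf(grid))
print(kl)
Repeating this for increasing sample sizes should show the divergence shrinking toward zero, which is one way to pick a minimum sample count.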

Computing intersection of a function with a specific interval using scipy

I'm stuck trying to find functions in scipy (or sympy) for the following task:
Suppose we are given the following function:
f(A,B,C) = k1-A*sin(B*k2-C)
for each of the axes A, B, C of the space we have a specific interval, like [a_lb, a_ub], [b_lb, b_ub], [c_lb, c_ub], and there is a fourth interval [d_lb, d_ub] for the function value.
Which functions of scipy can be used to check whether the space encompassed by these boundaries is intersected by the given function? I thought of, e.g., computing the Hessian matrix.
Thank you for any hints.
Best regards
If I understand correctly, what you are looking for is an answer to whether f(A,B,C) bounded in the domain [a_l,a_u]x[b_l,b_u]x[c_l,c_u] has a value within [d_l,d_u]. You can try using scipy.optimize.minimize for this.
If you run scipy.optimize.minimize on f with the bounds [a_l,a_u]x[b_l,b_u]x[c_l,c_u], you should get the minimal value of f in the domain. Similarly, minimizing -f will give you the maximal value of f in the domain. f intersects the given boundary if and only if the interval [fmin, fmax] intersects the interval [d_l,d_u].
Note that scipy.optimize.minimize is a non-linear optimization and therefore requires an initial guess. The middle point of the domain box is a natural choice, but since the non-linear optimization may encounter a local minimum (or not converge), you may want to try several other initial guesses as well. scipy.optimize.minimize has many (optional) parameters so I recommend you read its documentation and play with them to fine-tune your usage to your needs.
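A minimal sketch of this recipe; the constants k1 and k2, the bounds, and the target interval [d_l, d_u] below are made-up placeholders:
import numpy as np
from scipy.optimize import minimize

k1, k2 = 1.0, 2.0                                   # placeholder constants
f = lambda v: k1 - v[0] * np.sin(v[1] * k2 - v[2])  # v = (A, B, C)

bounds = [(0.0, 1.0), (0.0, np.pi), (0.0, 1.0)]     # [a_l,a_u] x [b_l,b_u] x [c_l,c_u]
x0 = [(lo + hi) / 2.0 for lo, hi in bounds]         # middle of the box as initial guess

fmin = minimize(f, x0, bounds=bounds).fun                  # minimal value of f
fmax = -minimize(lambda v: -f(v), x0, bounds=bounds).fun   # maximal value of f

d_l, d_u = 0.5, 1.5                                  # placeholder value interval
print(fmin <= d_u and fmax >= d_l)                   # True iff [fmin, fmax] meets [d_l, d_u]
With bounds given, minimize defaults to L-BFGS-B; restarting from several x0 values guards against local minima, as noted above.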

Using fourier analysis for time series prediction

For data that is known to have seasonal or daily patterns I'd like to use Fourier analysis to make predictions. After running an FFT on the time series data, I obtain coefficients. How can I use these coefficients for prediction?
I believe the FFT assumes all the data it receives constitutes one period. If I simply regenerate the data using the IFFT, am I also regenerating the continuation of my function, i.e., can I use those values for future values?
Simply put: if I run fft for t = 0, 1, 2, ..., 10, then use ifft on the coefficients, can I use the regenerated time series for t = 11, 12, ..., 20?
I'm aware that this question may no longer be relevant to you, but for others looking for answers: I wrote a very simple example of Fourier extrapolation in Python, https://gist.github.com/tartakynov/83f3cd8f44208a1856ce
Before you run the script, make sure that you have all the dependencies installed (numpy, matplotlib). Feel free to experiment with it.
P.S. Locally Stationary Wavelets may work better than Fourier extrapolation. LSW is commonly used in predicting time series. The main disadvantage of Fourier extrapolation is that it just repeats your series with period N, where N is the length of your time series.
It sounds like you want a combination of extrapolation and denoising.
You say you want to repeat the observed data over multiple periods. Well, then just repeat the observed data. No need for Fourier analysis.
But you also want to find "patterns". I assume that means finding the dominant frequency components in the observed data. Then yes, take the Fourier transform, preserve the largest coefficients, and eliminate the rest.
import numpy as np
X = np.fft.fft(x)
Y = np.zeros(len(X), dtype=complex)
idx = np.argsort(np.abs(X))[-k:]   # k = how many dominant frequencies to keep
Y[idx] = X[idx]
As for periodic repetition: let z = [x, x], i.e., two periods of the signal x. Then Z[2k] = 2*X[k] for all k in {0, 1, ..., N-1}, and zero otherwise.
Z = np.zeros(2 * len(X), dtype=complex)
Z[::2] = 2 * X
When you run an FFT on time series data, you transform it into the frequency domain. The coefficients multiply the terms in the series (sines and cosines or complex exponentials), each with a different frequency.
Extrapolation is always a dangerous thing, but you're welcome to try it. You're using past information to predict the future when you do this: "Predict tomorrow's weather by looking at today." Just be aware of the risks.
I'd recommend reading "The Black Swan".
You can use the library that @tartakynov posted and, to avoid repeating exactly the same time series in the forecast (overfitting), you can add a new parameter to the function called n_param and fix a lower bound h for the amplitudes of the frequencies:
def fourierExtrapolation(x, n_predict, n_param):
Usually you will find that, in a signal, some frequencies have significantly higher amplitudes than others, so if you select these frequencies you will be able to isolate the periodic nature of the signal.
You can add these two lines, where the threshold h is determined by the chosen n_param (sorting the absolute values, since the spectrum is complex):
h = np.sort(np.absolute(x_freqdom))[-n_param]
x_freqdom = [x_freqdom[i] if np.absolute(x_freqdom[i]) >= h else 0 for i in range(len(x_freqdom))]
Just adding this, you will be able to produce a nice, smooth forecast.
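Putting it together, here is a self-contained sketch of the whole idea. It follows the shape of the gist above but is not the original code; the linear detrending and the n_param amplitude threshold are the assumptions described here, and x is assumed to be a 1-D numpy array:
import numpy as np

def fourier_extrapolation(x, n_predict, n_param=10):
    n = len(x)
    t = np.arange(n)
    p = np.polyfit(t, x, 1)                    # remove linear trend first
    x_detrended = x - p[0] * t - p[1]
    x_freqdom = np.fft.fft(x_detrended)
    freqs = np.fft.fftfreq(n)
    h = np.sort(np.absolute(x_freqdom))[-n_param]   # amplitude threshold
    x_freqdom[np.absolute(x_freqdom) < h] = 0       # keep only dominant frequencies
    t_ext = np.arange(n + n_predict)
    restored = np.zeros(len(t_ext))
    for k in np.nonzero(x_freqdom)[0]:              # rebuild as a sum of cosines
        amp = np.absolute(x_freqdom[k]) / n
        phase = np.angle(x_freqdom[k])
        restored += amp * np.cos(2 * np.pi * freqs[k] * t_ext + phase)
    return restored + p[0] * t_ext + p[1]           # add the trend back
The first n entries of the result reproduce the (smoothed) input; the last n_predict entries are the extrapolation.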
Another useful article about FFT forecasting:
forecast FFT in R

Python: Plotting a power law function with exponential cutoff

I have a graph between 2 functions f and g.
I know it follows a power law function with exponential cutoff.
f(x) = x**(-alpha)*e**(-lambda*x)
How do I find the value of exponent alpha?
If you have sufficiently close x points (for example one every 0.1), you can try the following:
ln(f(x)) = -alpha ln(x) - lambda x
ln(f(x))' = - alpha / x - lambda
So depending on where you have your points:
If you have a lot of points near 0, you can try:
h(x) = x ln(f(x))' = -alpha - lambda x
So the limit of the function h when x goes to 0 is -alpha
If you have large values of x, the function x -> ln(f(x))' tends toward -lambda as x goes to infinity, so you can estimate lambda from it and then use pwdyson's expression.
If you don't have close x points, the numerical derivative will be very noisy, so I would try to estimate lambda as the limit of -ln(f(x))/x for large x's...
If you don't have large values of x, but a large number of x's, you can try a minimization of
sum_i (ln(y_i) + alpha*ln(x_i) + lambda*x_i)^2
over both alpha and lambda (I would guess it to be more precise than fitting the initial expression directly)...
It is a simple least-squares regression (numpy.linalg.lstsq will do the job).
So you have plenty of methods; the one to choose really depends on your inputs.
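For that last approach, a short sketch of the log-space least squares with numpy.linalg.lstsq; the sample data here is synthetic, with known alpha = 1.3 and lambda = 0.7:
import numpy as np

x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = x ** (-1.3) * np.exp(-0.7 * x)            # synthetic data: alpha=1.3, lambda=0.7

# ln(y) = -alpha*ln(x) - lambda*x  is linear in (alpha, lambda)
A = np.column_stack([-np.log(x), -x])
(alpha, lam), *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
print(alpha, lam)                             # recovers ~1.3 and ~0.7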
The usual and general way of doing what you want is to perform a non-linear regression (even though, as noted in another response, it is possible to linearize the problem). Python can do this quite easily with the help of the SciPy package, which is used by many scientists.
The routine you are looking for is its least-squares optimization routine (scipy.optimize.leastsq). Once you wrap your head around the way this general optimization procedure works (see the example), you will probably find many other opportunities to use it. Basically, you calculate the list of differences between your measurements and their ideal values f(x), and you ask SciPy to find the parameters that make these differences as small as possible, so that your data fits the model as well as possible. This then gives you the parameters you are looking for.
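A hedged sketch of that routine on this model (synthetic data again; leastsq needs an initial guess, here simply (1, 1)):
import numpy as np
from scipy.optimize import leastsq

def residuals(params, x, y):
    alpha, lam = params
    return y - x ** (-alpha) * np.exp(-lam * x)   # measurement minus model

x = np.linspace(0.5, 5.0, 50)
y = x ** (-1.3) * np.exp(-0.7 * x)                # synthetic data to fit

params_opt, ier = leastsq(residuals, [1.0, 1.0], args=(x, y))
alpha, lam = params_opt
print(alpha, lam)                                  # ~1.3 and ~0.7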
It sounds like you might be trying to fit a power-law to a distribution with an exponential cutoff at the low end due to incompleteness - but I may be reading too far into your problem.
If that is the problem you're dealing with, this website (and accompanying publication) addresses the issue: http://tuvalu.santafe.edu/~aaronc/powerlaws/. I wrote the python implementation of the power-law fitter on that page; it is linked from there.
If you know that the points follow this law exactly, then invert the equation and put in an x and its corresponding f(x) value:
import math
alpha = -(lam * x + math.log(f(x))) / math.log(x)   # lam holds lambda ("lambda" is a reserved word in Python)
But if the points do not exactly fit the equation, you will need to do some sort of regression to determine alpha.
EDIT: OK, so they don't fit exactly. This is getting beyond a Python question, but there may be something in numpy that can handle it. Here is a numpy linear regression recipe, but your equation can't be rearranged into a simple linear form directly, so you'll have to look into non-linear regression (or linearize it by taking logs, as in the answer above).

Finding the length of a cubic B-spline

Using scipy's interpolate.splprep function I get a parametric spline on a parameter u, but the domain of u is not the line integral of the spline; it is a piecewise-linear connection of the input coordinates. I've tried integrate.splint, but that just gives the individual integrals over u. Obviously, I can numerically integrate a bunch of Cartesian differential distances, but I was wondering if there was a closed-form method for getting the length of a spline or spline segment (using scipy or numpy) that I was overlooking.
Edit: I am looking for a closed-form solution or a very fast way to converge to a machine-precision answer. I have all but given up on the numerical root-finding methods and am now primarily after a closed-form answer. If anyone has any experience integrating elliptic integrals or can point me to a good resource (other than Wolfram), that would be great.
I'm going to try Maxima to get the indefinite integral of what I believe is the function for one segment of the spline. I cross-posted this on MathOverflow.
Because both x and y are cubic parametric functions, there isn't a closed solution in terms of simple functions. Numerical integration is the way to go. Either integrate the arc-length expression or simply add line-segment lengths; it depends on the accuracy you are after and how much effort you want to exert.
An accurate and fast "adding lengths of line segments" method:
Using recursive subdivision (a form of de Casteljau's algorithm) to generate points can give you a highly accurate representation with a minimal number of points.
Only subdivide a subdivision if it fails to meet a criterion. Usually the criterion is based on the lengths joining the control points (the hull or cage).
For a cubic, you usually compare the closeness of P0P1 + P1P2 + P2P3 to P0P3, where P0, P1, P2 and P3 are the control points that define your Bezier.
You can find some Delphi code here:
link text
It should be relatively easy to convert to Python.
It will generate the points. The code already calculates the lengths of the segments in order to test the criterion. You can simply accumulate those length values along the way.
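In lieu of the Delphi code, here is a sketch of the same subdivision scheme in Python. The flatness test follows the description above, and the tolerance is an illustrative choice; this is not a port of the linked code:
import numpy as np

def bezier_length(p0, p1, p2, p3, tol=1e-9):
    # compare the control-polygon length to the chord: if they agree,
    # the curve is flat enough to treat as a straight line
    chord = np.linalg.norm(p3 - p0)
    hull = (np.linalg.norm(p1 - p0) + np.linalg.norm(p2 - p1)
            + np.linalg.norm(p3 - p2))
    if hull - chord < tol:
        return (hull + chord) / 2.0
    # otherwise split at t = 0.5 with de Casteljau and recurse
    p01, p12, p23 = (p0 + p1) / 2, (p1 + p2) / 2, (p2 + p3) / 2
    p012, p123 = (p01 + p12) / 2, (p12 + p23) / 2
    mid = (p012 + p123) / 2
    return (bezier_length(p0, p01, p012, mid, tol)
            + bezier_length(mid, p123, p23, p3, tol))
Call it with numpy arrays for the four control points of each cubic segment and sum the results over all segments of the spline.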
You can integrate the function sqrt(x'(u)**2 + y'(u)**2) over u, where you compute the derivatives x' and y' of your coordinates with scipy.interpolate.splev. The integration can be done with one of the routines from scipy.integrate (quad is precise and adaptive [Gauss-Kronrod, via QUADPACK]; romberg is generally faster). This should be more precise, and probably faster, than adding up lots of small distances (which is equivalent to integrating with the rectangle rule).
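A sketch of that approach; the half-circle test data is made up so the result can be checked against pi:
import numpy as np
from scipy import interpolate, integrate

theta = np.linspace(0, np.pi, 10)
pts = [np.cos(theta), np.sin(theta)]          # half circle, length should be ~pi

tck, u = interpolate.splprep(pts, s=0)        # cubic B-spline through the points

def speed(t):
    dx, dy = interpolate.splev(t, tck, der=1)
    return np.hypot(dx, dy)                   # |r'(t)| = sqrt(x'(t)**2 + y'(t)**2)

length, err = integrate.quad(speed, 0, 1)
print(length)                                  # ~3.1416
Integrating over a sub-interval of [0, 1] gives the length of a single spline segment.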
