(numpy) Wrong amplitude(?) of FFT'd array? - python

I'm using numpy and matplotlib to analyze data output from my simulations. There is one (apparent) inconsistency whose roots I can't find. It's the following:
I have a signal with a given energy a^2 ~ 1. When I use rfft to take the FFT and compute the energy in Fourier space, it comes out significantly larger. To avoid giving the details of my data etc., here is an example with a simple sine wave:
from pylab import *
xx = np.linspace(0., 2*pi, 128)
a = np.zeros(128)
for i in range(0, 128):
    a[i] = sin(xx[i])
aft = rfft(a)
print mean(abs(aft)**2), mean(a**2)
In principle both the numbers should be the same (at least in the numerical sense) but this is what I get out of this code:
62.523081632 0.49609375
I tried to go through the numpy.fft documentation but could not find anything. A search here gave the following, but I was not able to understand the explanations there:
Big FFT amplitude difference between the existing (synthesized) signal and the filtered signal
What am I missing or misunderstanding? Any help or pointer in this regard would be greatly appreciated.
Thanks!

Henry is right on the non-normalization part, but there is a little more to it, because you are using rfft, not fft. The following is consistent with his answer:
>>> x = np.linspace(0, 2 * np.pi, 128)
>>> y = 1 - np.sin(x)
>>> fft = np.fft.fft(y)
>>> np.mean((fft * fft.conj()).real)
191.49999999999991
>>> np.mean(y**2)
1.4960937500000004
>>> fft = fft / np.sqrt(len(fft))
>>> np.mean((fft * fft.conj()).real)
1.4960937499999991
But if you now try the same with rfft, things don't quite work out:
>>> rfft = np.fft.rfft(y)
>>> np.mean((rfft * rfft.conj()).real)
314.58462009358772
>>> len(rfft)
65
>>> np.mean((rfft * rfft.conj()).real) / len(rfft)
4.8397633860551954
The following does work properly, though:
>>> rfft /= np.sqrt(len(y))
>>> (rfft[0] * rfft[0].conj() +
...  2 * np.sum(rfft[1:] * rfft[1:].conj())).real / len(y)
1.4960937873636722
When you use rfft, what you get is not properly the DFT of your data but only its positive half, since the negative half is symmetric to it (the complex conjugate). To compute the mean, you need to count every value other than the DC component twice, which is what the last line of code does. (Strictly speaking, for an even-length signal the Nyquist bin rfft[-1] is also unpaired; counting it twice is why the result is 1.4960937874 rather than exactly 1.4960937500.)
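If it helps, the same bookkeeping can be wrapped in a small helper. This is just a sketch (the name mean_power_rfft is mine), assuming a real-valued input of even length, so that the DC and Nyquist bins are each unpaired:

import numpy as np

def mean_power_rfft(x):
    # Parseval-consistent mean power from the rfft of a real, even-length
    # signal: paired bins count twice, DC and Nyquist count once.
    X = np.fft.rfft(x) / len(x)
    w = np.full(len(X), 2.0)
    w[0] = 1.0    # DC appears once in the full spectrum
    w[-1] = 1.0   # Nyquist appears once (even-length input)
    return np.sum(w * np.abs(X) ** 2)

x = np.linspace(0, 2 * np.pi, 128)
y = 1 - np.sin(x)
print(mean_power_rfft(y), np.mean(y ** 2))  # the two agree up to rounding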

In most FFT libraries, the various DFT flavours are not orthogonal. The numpy.fft library applies the necessary normalizations only during the inverse transform.
Consider the Wikipedia description of the DFT: the inverse DFT has the 1/N term that the forward DFT does not (where N is the length of the transform). To make an orthogonal version of the DFT, you need to scale the result of the un-normalised DFT by 1/sqrt(N). The transform is then orthogonal (that is, if we call the orthogonal DFT F, the inverse DFT is the conjugate, or Hermitian, transpose of F).
In your case, you can get the correct answer by simply scaling aft by 1.0/sqrt(len(a)) (note that N is found from the length of the transform; the real FFT just throws about half the values away, so it's the length of a that is important).
I suspect that the reason for leaving the normalization until the end is that in most situations it doesn't matter, so you save the computational cost of normalizing twice. Indeed, the very fast FFTW library doesn't normalize in either direction and leaves it entirely up to the user to deal with.
Edit: Just to be clear, the explanation above is not quite correct. The correct answer will not be arrived at with that simple scaling, as in your case the DC component will be added in twice, although 1.0/sqrt(len(a)) is still the correct scaling to produce the unitary transform.
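For what it's worth, recent numpy versions (1.10 and later) let you request the unitary scaling directly through the norm keyword. A quick sketch with the full fft, where the means then match:

import numpy as np

a = np.sin(np.linspace(0., 2 * np.pi, 128))
aft = np.fft.fft(a, norm="ortho")  # applies the 1/sqrt(N) factor for you
print(np.mean(np.abs(aft) ** 2))   # equals np.mean(a**2) up to rounding
print(np.mean(a ** 2))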


numpy roots() returns false roots

I'm trying to use numpy to find the roots of some polynomials, but I am getting some erroneous results:
>>> poly = np.polynomial.Polynomial([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
>>> roots = poly.roots()
>>> roots
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly(roots)
array([-3.74803539e+23, -7.99360578e-15, -1.89182003e-13])
What is up with the false root -1.64593692e+09, which evaluates to -3.74803539e+23? This is clearly not a root.
Is this the result of floating-point errors, or something else?
And more importantly:
Is there a way to get around it? Perhaps something I can tweak, or a different function I can use? Any help is much appreciated.
I found this and this previous question which seemed to be related, but after reading them and the answers/comments I don't think that they are the same problem.
First of all, computing the roots of a polynomial is a classically ill-conditioned problem, meaning (roughly) that no matter what algorithm you use to solve it, small changes in the coefficients of many polynomials can lead to huge changes in their roots. That means we should be careful not to place an extraordinary amount of faith in root-finding results in general, and perhaps we shouldn't be too surprised when a root finder gives weird results. There's a pretty good example on Wikipedia, Wilkinson's polynomial, that shows how things can go wrong.
In this instance, the coefficients of the polynomial of interest are of such different magnitudes that it's not surprising the results seem poor. But consider this: if our original polynomial p() has a root x, then p(x) = 0, but also c*p(x) = 0 for any constant c. In other words, we can scale the coefficients without changing the roots. So what happens if we normalize the polynomial by dividing by the coefficient of largest magnitude, 7e25?
Original polynomial: p(x) = 4.4 + 2.3e+14*x - 7.0e25*x**2 - 4.3e16*x**3
Scaled polynomial: p(x) = 6.3e-26 + 3.2e-12*x - x**2 - 6.1e-10*x**3
So for this polynomial, the largest coefficient ~7e25 is so huge that the smallest coefficient ~4.4 is essentially negligible. That should give us a hint that what counts as zero in a root finding iteration isn't what we would normally consider "small."
The short answer is that the root calculated by NumPy isn't perfect, but it is an estimate of an actual root. Here's some code to convince us.
>>> import numpy as np
>>> coefs = np.array([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
>>> coefs_normed = coefs / np.abs(coefs).max()
>>> coefs_normed
array([ 6.25524549e-26, 3.24916108e-12, -1.00000000e+00, -6.07556697e-10])
>>> poly = np.polynomial.Polynomial(coefs)
>>> roots = poly.roots()
>>> roots
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly(roots)
array([-3.74803539e+23, 8.43769499e-14, -1.89182003e-13])
>>> poly_normed = np.polynomial.Polynomial(coefs_normed)
>>> roots_normed = poly_normed.roots()
>>> roots_normed
array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12])
>>> poly_normed(roots_normed)
array([-5.34791419e-03, 1.20534089e-39, -2.11221641e-39])
Now, -5e-03 is not very close to machine epsilon, but that should convince us that maybe the calculated root isn't quite as bad as it seemed at first.
A final point: the np.polynomial.Polynomial class has domain and window arguments that determine how it does its computations. Since polynomials get absolutely huge as the domain tends to +infinity or -infinity, it's unrealistic to expect accurate calculations for a value around 10^9.
The root appears real:
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2e9, 1000, 10000)
plt.plot(x, poly(x))  # poly as defined in the question
plt.show()
The problem is that the scale of the data is very large: -3e23 is tiny compared to, say, 6e43. The discrepancy is caused by round-off error. Cubic polynomials have an analytical solution, but it's not going to be numerically stable when your domain is on the order of 1e9.
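To see the cancellation concretely, here is a rough check, using the coefficients from the question, of the size of each term of p at the suspect root:

import numpy as np

coefs = np.array([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
x = -1.64593692e+09
terms = coefs * x ** np.arange(len(coefs))
print(terms)        # the quadratic and cubic terms are ~1e44 and nearly cancel
print(terms.sum())  # the residual ~1e23 is tiny relative to those terms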
You can try to use the domain and window parameters to introduce some numerical stability. A common choice of domain, for example, is one that envelops your entire dataset. You would have to adjust the coefficients to compensate, since those values are usually used when fitting data.
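One possible workaround, sketched below, is to rescale the variable rather than the coefficients: if q(t) = p(s*t), the roots of p are s times the roots of q. The scale s = 1e9 here is a hypothetical choice, read off from the magnitude of the suspect root:

import numpy as np

coefs = np.array([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16])
s = 1e9                                      # assumed scale of the large root
scaled = coefs * s ** np.arange(len(coefs))  # coefficient of t**k is c_k * s**k
q = np.polynomial.Polynomial(scaled)
print(s * q.roots())                         # roots of p, via x = s*t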

python manual fft botched

I'm trying to reproduce the fft functions in Python. I've seen a similar question, Manual fft not giving me same results as fft, but I'm having trouble seeing whether I'm making the same error or a different one.
import numpy as np
import numpy.random as npr

N = 9  ### 10 - 1
MC = 10
### Generate some data
data = complex(1, 0)*npr.uniform(size=(N, MC)) + complex(0, 1)*npr.uniform(size=(N, MC))
naive_fft = complex(1, 0)*np.zeros((N, MC))
for K in range(N):
    for m in range(N):
        phase = (2*np.pi*K*m)/float(N+1)
        naive_fft[K, :] = naive_fft[K, :] + data[m, :]*np.exp(complex(0, 1)*phase)
fft = np.fft.fft(data, axis=0)
ifft = np.fft.ifft(data, axis=0)
print('fft')
print(naive_fft - fft)
print('ifft')
print(naive_fft - ifft*(N+1.0))
Comparing my results to the numpy fft, I can reproduce neither fft nor ifft (only naive_fft[0,:] seems to match fft[0,:]).
There are several things to mention. First of all, in Python we use 1j to represent the imaginary unit, not complex(0, 1). If you want to compare your result to numpy's, you have to check how numpy implements the fft; see the Numpy FFT docs for details. You'll find that numpy follows the most common fft definition, which uses a negative exponent. Furthermore, the float(N+1) in your phase is simply wrong; it must read N.
All in all you have:
# ...
naive_fft = np.zeros((N,MC), dtype='complex')
for K in range(N):
    for m in range(N):
        phase = (-2*np.pi*K*m) / float(N)
        naive_fft[K] += data[m] * np.exp(phase*1j)
xfft = np.fft.fft(data, axis=0)
# ...
Test it with
>>> np.isclose(xfft, naive_fft).all()
True
The inverse transformation works analogously but with a positive exponent.
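For completeness, here is a sketch of the matching naive inverse, reusing data, N and MC from above:

naive_ifft = np.zeros((N, MC), dtype='complex')
for K in range(N):
    for m in range(N):
        phase = (2*np.pi*K*m) / float(N)            # positive exponent
        naive_ifft[K] += data[m] * np.exp(phase*1j)
naive_ifft /= N                                     # numpy's ifft includes the 1/N
print(np.isclose(naive_ifft, np.fft.ifft(data, axis=0)).all())  # True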

Translating an FFT function from Python 2.x to Python 3.x, and computing the IFFT from it

I have a Fast Fourier Transform function written for Python 2.x. I want to make it work in Python 3.x, but I have some problems with xrange and list identifiers (as my compiler said). I also have no idea how to compute the inverse FFT from my FFT without using any non-standard libraries. The code is below. Thanks in advance...
from cmath import exp,pi

def FFT(X):
    n = len(X)
    w = exp(-2*pi*1j/n)
    if n > 1:
        X = FFT(X[::2]) + FFT(X[1::2])
        for k in xrange(n/2):
            xk = X[k]
            X[k] = xk + w**k*X[k+n/2]
            X[k+n/2] = xk - w**k*X[k+n/2]
    return X
UPD: Following the advice here, I completely reworked my FFT and built an IFFT from it.
P.S. How do I close the post?
There are a couple of ways to convert your FFT into an IFFT. The easiest is to remove the minus sign inside the argument of the exp() call that defines w. Another is to take the complex conjugate of the FFT of the complex conjugate of the input.
If you don't scale your forward FFT, then common practice is to scale your IFFT computation by 1/N (the length), so that IFFT(FFT()) results in the same total sum magnitude. If you do scale your FFT by 1/N, then don't scale your IFFT computation. Or scale both by 1/sqrt(N).
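For reference, here is one possible Python 3 sketch of the routine (xrange becomes range, and n/2 must be the integer division n//2), with an IFFT built via the conjugation approach; it assumes the input length is a power of two:

from cmath import exp, pi

def FFT(X):
    # recursive radix-2 Cooley-Tukey; len(X) must be a power of two
    n = len(X)
    if n > 1:
        X = FFT(X[::2]) + FFT(X[1::2])
        w = exp(-2*pi*1j/n)
        for k in range(n//2):              # xrange -> range, n/2 -> n//2
            xk = X[k]
            X[k] = xk + w**k * X[k + n//2]
            X[k + n//2] = xk - w**k * X[k + n//2]
    return X

def IFFT(X):
    # inverse via conjugation: ifft(X) = conj(FFT(conj(X))) / n
    n = len(X)
    return [x.conjugate() / n for x in FFT([x.conjugate() for x in X])]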

FFT vs least squares fitting of fourier components?

So I've got a signal, and I've tried fitting a curve to it using two methods that I thought should have been numerically equivalent, but apparently are not.
Method 1: Explicit fitting of sinusoids by least squares:
def curve(x, a0, a1, b1, a2, b2):
    return (a0 + a1*np.cos(x/720*2*math.pi) + b1*np.sin(x/720*2*math.pi)
            + a2*np.cos(x/720*2*math.pi*2) + b2*np.sin(x/720*2*math.pi*2))

def fit_curve(xdata, ydata):
    guess = [10, 0, 0, 0, 0]
    params, params_covariance = optimize.curve_fit(curve, xdata, ydata, guess)
    return params, params_covariance
Method 2: Use of inbuilt FFT algorithm to do the same thing:
f = np.fft.rfft(y,3)
curve = np.fft.irfft(f, width)
I have two problems. The first is minor: the FFT is 'out of scale', so I apply a scaling factor mean(y)/mean(curve) to fix it, which is a bit of a hack. I'm not sure why this is the case.
The main problem is that I believe these two methods should produce almost identical results, but they don't: the explicit fitting produces a tighter fit than the FFT every time. My question is, should it?
One can find discrete Fourier transform coefficients using linear algebra, though I imagine it's mainly useful for understanding the DFT better. The code below demonstrates this. Finding the coefficients and phases of a sine series will take a little more work, but shouldn't be too hard; the Wikipedia article cited in the code comments might help.
Note that one doesn't need scipy.optimize.curve_fit, or even linear least-squares. In fact, although I've used numpy.linalg.solve below, that is unnecessary since basis is a unitary matrix times a scale factor.
from __future__ import division, print_function
import numpy
# points in time series
n= 101
# final time (initial time is 0)
tfin= 10
# *end of changeable parameters*
# stepsize
dt= tfin/(n-1)
# sample count
s= numpy.arange(n)
# signal; somewhat arbitrary
y= numpy.sinc(dt*s)
# DFT
fy= numpy.fft.fft(y)
# frequency spectrum in rad/sample
wps= numpy.linspace(0,2*numpy.pi,n+1)[:-1]
# basis for DFT
# see, e.g., http://en.wikipedia.org/wiki/Discrete_Fourier_transform#equation_Eq.2
# and section "Properties -> Orthogonality"; the columns of 'basis' are the u_k vectors
# described there
basis= 1.0/n*numpy.exp(1.0j * wps * s[:,numpy.newaxis])
# reconstruct signal from DFT coeffs and basis
recon_y= numpy.dot(basis,fy)
# expect yerr to be "small"
yerr= numpy.max(numpy.abs(y-recon_y))
print('yerr:',yerr)
# find coefficients by fitting to basis
lin_fy= numpy.linalg.solve(basis,y)
# fyerr should also be "small"
fyerr= numpy.max(numpy.abs(fy-lin_fy))
print('fyerr',fyerr)
On my system this gives
yerr: 2.20721480995e-14
fyerr 1.76885950227e-13
Tested on Ubuntu 14.04 with Python 2.7 and 3.4.
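As a follow-up on the remark that numpy.linalg.solve is unnecessary: since basis.conj().T dotted with basis equals the identity divided by n, the inverse of basis is just n times its Hermitian transpose. Reusing basis, y, fy and n from the script above:

# basis is unitary up to a factor: basis^H . basis = I/n, so basis^-1 = n * basis^H
lin_fy2 = n * numpy.dot(basis.conj().T, y)
print('fyerr2', numpy.max(numpy.abs(fy - lin_fy2)))  # should also be small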
Take a look at the docstring for np.fft.rfft. In particular, this: "If n is smaller than the length of the input, the input is cropped." When you do this:
f = np.fft.rfft(y,3)
you are computing the FFT of the first three data points in y, not the first three Fourier coefficients of y.
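What was probably intended is to keep only the first three Fourier coefficients of the full transform, rather than transforming the first three samples. A sketch, assuming y and width are as in the question (with width equal to the signal length):

import numpy as np

f = np.fft.rfft(y)              # transform the whole signal
f[3:] = 0                       # keep DC plus the first two harmonics only
curve = np.fft.irfft(f, width)  # back to the time domain

Since irfft applies the 1/N normalization, the mean(y)/mean(curve) hack should no longer be needed. Note this matches the two-harmonic least-squares model only when the record length is one full period of the fitted sinusoids (720 samples, going by the x/720 in Method 1).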

Python scipy.fftpack.rfft frequency bin mapping

I'm trying to get the correct FFT bin index for a given frequency. The audio is sampled at 44.1 kHz and the FFT size is 1024. The signal is real (captured from PyAudio, decoded through numpy.fromstring, windowed by scipy.signal.hann). I then take the FFT with scipy.fftpack.rfft and compute the magnitude in decibels: magnitude = 20 * scipy.log10(abs(rfft(audio_sample)))
Based on this, and this, I originally had my mapping from the FFT bin index, k, to any frequency, F, as:
F = k*Fs/N for k = 0 ... N/2-1, where Fs is the sampling rate and N is the FFT size, in this case 1024. And the reverse:
k = F*N/Fs for F = 0 Hz ... Fs/2 - Fs/N
However, I then realized that rfft's result is not symmetric the way fft's is, and is returned in an array of size N. I now have some questions regarding the mapping and the function. The documentation unfortunately did not provide much information, as I'm a novice in this area.
My questions:
1. To me, the result of rfft on an audio sample can be used directly, from the first bin to the last, since no symmetry occurs in the output. Is that correct?
2. Given the lack of symmetry above, the frequency resolution appears to have increased. Is this interpretation correct?
3. Because of using rfft, is my mapping function from bin index k to frequency F now F = k*Fs/(2N) for k = 0 ... N-1?
4. Conversely, does the reverse mapping from frequency F to bin index k now become k = 2*F*N/Fs for F = 0 Hz ... Fs/2-(Fs/2/N)? What about the correctness of this?
My general confusion arises from how rfft is related to fft, and how the mapping can be done correctly while using rfft. I believe my mapping is offset by a small amount, and that is crucial in my application. Please point out the mistake or advise on the matter if possible, thank you very much.
First to clear up a few things for you:
A quick look at the fftpack documentation reveals that rfft only gives you an output vector covering bins 0..512 (in your case). The reason is exactly the symmetry present in the discrete Fourier transform of a real-valued input:
y[k] = y*[N-k] (see the Wikipedia page on DFTs). Therefore, the rfft function only calculates and stores the N/2+1 non-redundant values, since you can obtain the other half by taking complex conjugates (should you really want them, for plotting, say). The fft function makes no assumption about the input values (they can have both real and imaginary parts), so no symmetry can be assumed in the output, and it gives you a full output vector of N values. Admittedly, most applications use a real input, so people tend to assume the symmetry is always there. Note that the Fast Fourier Transform (FFT) is an (efficient) algorithm for calculating the Discrete Fourier Transform (DFT); the rfft function also uses the FFT to do its calculation.
In light of the above, your indices for accessing the output vector are out of bounds, i.e. > 512. Why/how you can even do this depends on your code. You should clearly distinguish between the 'logical N' (the one you use to map bin frequencies, define the DFT, etc.) and the 'computational N' (the actual number of values in your output vector); then all your problems should disappear.
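To make the packing difference concrete, here is a quick comparison of the two conventions (worth double-checking against your scipy version's documentation):

import numpy as np
from scipy import fftpack

x = np.random.rand(8)
print(fftpack.rfft(x).shape)  # (8,): N real values, real/imag parts interleaved
print(np.fft.rfft(x).shape)   # (5,): N/2 + 1 complex bins, 0 .. Nyquist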
To concretely answer your questions:
1. No. There is symmetry, and you need to use it to calculate the last bins (but they give you no extra information).
2. No. The only way to increase the resolution of a DFT is to increase your sample length.
3. No, but almost: F = k*Fs/N for k = 0 ... N/2.
4. For an output vector with N bins you get frequencies from 0 to (N-1)/N*Fs. Using rfft you will have an output vector with N/2+1 bins. You do the maths, but I get 0 ... Fs/2.
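As a concrete sketch of the mapping in point 3, using numpy's N/2+1 complex-bin convention and the values from your question (Fs = 44100, N = 1024):

import numpy as np

Fs = 44100.0                          # sample rate
N = 1024                              # FFT size

freqs = np.fft.rfftfreq(N, d=1.0/Fs)  # bin centre frequencies for k = 0 .. N/2
k = int(round(1000.0 * N / Fs))       # nearest bin to 1 kHz, via k = F*N/Fs
print(k, freqs[k])                    # 23, ~990.5 Hz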
Hope things are clearer now.
