I'm using the Lomb-Scargle package in astropy. I tried it with artificial data, a sine function with an amplitude of 1:
import numpy as np
import matplotlib.pyplot as plt
from astropy.stats import LombScargle

t_sin2 = np.arange(1000) * 1.0
a_sin2 = np.sin(t_sin2)
frequency = np.arange(0.001, 0.5, 0.001)
PSD_LS = LombScargle(t_sin2, a_sin2).power(frequency, normalization='psd')
plt.plot(frequency, PSD_LS)
The plot I got is shown below (my PSD plot).
The peak value of the PSD is around 230. I don't know how to convert it to an amplitude.
This is the usage of Lomb-Scargle in astropy: Lomb-Scargle docs. But I'm confused by the PSD normalization. The usage gives an explanation of the PSD normalization, where χref is the best-fit sum-of-residuals of the least-squares fit around a constant reference model, which is the term I do not understand.
Thank you!
After digging into the original implementation of the code, it seems the PSD normalization takes the number of points you have into account as well.
elif normalization == 'psd':
p *= 0.5 * (dy ** -2.0).sum()
Here p is the calculated Lomb-Scargle power and dy holds the uncertainties for each data point. If you pass no uncertainties, dy becomes an array of ones with the same length as your data, so (dy ** -2.0).sum() is just the number of points N and the power is scaled by N/2. So to get an amplitude estimate, take the PSD peak, divide by the number of points, take the square root and multiply by two. That should be an estimate of the peak amplitude.
sqrt(PSD_peak / amount_of_points) * 2 ≈ amplitude_of_the_fit
I am not sure why the astropy implementation multiplies by the sum of the inverse-squared uncertainties, because that results in this kind of behaviour when no uncertainties are given. The square root is needed because the units of the power are y.unit^2, just as in the original documentation.
For your example: sqrt(230 / 1000) * 2 = 0.959, which is close enough to the amplitude of 1. From my own trials on real data, I get a very close estimate of the amplitude fit for most objects, so that should hopefully be a correct formula.
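To make the recipe concrete, here is a minimal sketch using the artificial sine data from the question (the variable names are chosen for illustration):

import numpy as np
from astropy.stats import LombScargle

t = np.arange(1000) * 1.0
y = np.sin(t)                                    # true amplitude is 1
frequency = np.arange(0.001, 0.5, 0.001)
psd = LombScargle(t, y).power(frequency, normalization='psd')

N = len(t)
amp_est = 2 * np.sqrt(psd.max() / N)             # sqrt(PSD_peak / N) * 2
print(amp_est)                                   # roughly 0.96-1.0 for this grid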
The power spectral density (PSD) is essentially the squared amplitude, so if you calculate the square root of the PSD you can convert it to an amplitude:
import numpy as np
Amp = np.sqrt(PSD_LS)
Related
I am trying to determine the total energy recorded by a detector in the time domain by means of its spectrum.
The first step after performing the Fast Fourier Transformation with Numpy's FFT library was to confirm Parseval's theorem.
According to the theorem, the total energy in time domain and in frequency domain must be the same. I have two problems that I am not able to solve.
I can confirm the theorem when I don't use the proper units for the x-axis during the np.trapz() integration. As soon as I use the actual sample points/frequencies, the result is off. I do not understand why this is the case and am wondering if I can apply a normalization to solve this error.
I cannot confirm the theorem when I apply a DC offset to the signal (uncomment the f = np.sin(np.pi*t)+1 line).
Below is my code with an example sine function.
# Python code
import matplotlib.pyplot as plt
import numpy as np
# Create a Sine function
dt = 0.001 # Time steps
t = np.arange(0,10,dt) # Time array
f = np.sin(np.pi*t) # Sine function
# f = np.sin(np.pi*t)+1 # Sine function with DC offset
N = len(t) # Number of samples
# Energy of function in time domain
energy_t = np.trapz(abs(f)**2)
# Energy of function in frequency domain
FFT = np.sqrt(2) * np.fft.rfft(f) # only positive frequencies; correct magnitude due to discarding of negative frequencies
FFT[0] /= np.sqrt(2) # DC magnitude does not have to be corrected
FFT[-1] /= np.sqrt(2) # Nyquist frequency does not have to be corrected
frq = np.fft.rfftfreq(N,d=dt) # FFT frequenices
# Energy of function in frequency domain
energy_f = np.trapz(abs(FFT)**2) / N
print('Parsevals theorem fulfilled: ' + str(energy_t - energy_f))
# Parsevals theorem with proper sample points
energy_t = np.trapz(abs(f)**2, x=t)
energy_f = np.trapz(abs(FFT)**2, x=frq) / N
print('Parsevals theorem NOT fulfilled: ' + str(energy_t - energy_f))
The FFT computes the Discrete Fourier Transform (DFT), which is not the same as the (continuous-domain) Fourier Transform.
For the DFT, Parseval’s theorem states that the sum of the square magnitude of the discrete signal equals the sum of the square magnitude of the DFT of the signal. There is no integration involved, and therefore you should not use trapz. Just use sum.
Note that a discrete signal is a set of samples x[n] at n=0..N-1. Fourier analysis in the discrete domain, and all related operations, only consider n, not t. The sampling frequency and the actual times those samples were recorded is irrelevant in these analyses. Likewise, the DFT produces a set of samples X[k] at k=0..N-1, not at any specific f or ω related to any sampling frequency.
Now it is possible to relate n to t because we know the sampling frequency, and it is possible to relate k to f because we know the sampling frequency. But these conversions should not make us think that X[k] is a sampling of the continuous-domain Fourier transform of the original continuous-domain signal. And they should especially not make us think that we can interpolate X[k].
Reconstructing the samples x[n] is accomplished by adding N sinusoids with parameters given by X[k]. There should not be anything "in between" those DFT components; interpolating them would mean we add sinusoids that do not exist in the samples x[n].
trapz uses linear interpolation to obtain an estimate of the integral, and therefore is inappropriate to use in discrete Fourier analysis.
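To illustrate, a minimal sketch of the discrete form of Parseval's theorem on the question's signal (DC offset included), using sum instead of trapz and the full two-sided FFT:

import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
f = np.sin(np.pi * t) + 1                        # sine with DC offset
N = len(f)

energy_t = np.sum(np.abs(f) ** 2)                # sum of squared samples
FFT = np.fft.fft(f)                              # full two-sided DFT
energy_f = np.sum(np.abs(FFT) ** 2) / N          # discrete Parseval: sum|x|^2 = sum|X|^2 / N
print(energy_t - energy_f)                       # ~0 up to floating-point error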
I'm calculating the RFFT of a signal of length 3000 (sampled at 100 Hz) with only real valued entries:
from scipy.fft import rfft
coeffs = rfft(values)
coeffs = np.abs(coeffs)
With rfft I'm only getting half of the coefficients, i.e. the symmetric ones are discarded (due to the real-valued input).
Is it correct to scale the values by coeffs = (2 / len(values)) * coeffs to get the amplitudes?
Edit: Below I have appended a plot of the amplitudes vs. frequency (bins) for the accelerometer and gyroscope (the shaded area is the standard deviation). For the accelerometer the energy in the first FFT bin is much higher than in the other bins (> 2 in the first bin and below about 0.4 in the other bins). For the gyroscope it is different and the energy is much more evenly distributed.
Does that mean that for the accelerometer the FFT looks good but for the gyroscope it is worse? Further, is it reasonable to cut the FFT at 100 Hz (i.e. take only bins < 100 Hz), or to take the first few bins until 95% of the energy is kept?
The approximate relationship I provided in this post holds whether you throw out half the coefficients or not.
So, if the conditions indicated in that post apply to your situation, then you could get an estimate of the amplitude of a dominant sinusoidal component with
approx_sinusoidal_amplitude = (2 / len(values)) * np.abs(coeffs[k])
for some index k corresponding to the frequency of the sinusoidal component (which according to the limitations indicated in my other post has to be at or near a multiple of 100/3000 ~ 0.033Hz in your case). For a dominant sinusoidal component, this index would typically correspond to a local peak in the frequency spectrum. Note however that if your signal is a mixture of various frequency components, the individual components may affect the frequency spectrum in such a way that the peak does not appear clearly.
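As an illustration of that scaling, here is a minimal sketch on a synthetic sine; the 1.5 amplitude and 5 Hz tone are assumptions chosen to mirror the question's 100 Hz / 3000-sample setup:

import numpy as np
from scipy.fft import rfft, rfftfreq

fs, n = 100.0, 3000
t = np.arange(n) / fs
values = 1.5 * np.sin(2 * np.pi * 5.0 * t)       # 5 Hz tone, amplitude 1.5

coeffs = np.abs(rfft(values))
freqs = rfftfreq(n, d=1 / fs)
k = np.argmax(coeffs)                            # index of the dominant peak
print(freqs[k], (2 / len(values)) * coeffs[k])   # ~5.0 Hz, ~1.5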
Why is the amplitude I compute far, far away from original after fast Fourier transform (FFT)?
I have a signal with 1024 points and a sampling interval of 1/120000 s. I apply the fast Fourier transform in Python with scipy.fftpack. I normalize the calculated magnitude by the number of bins and multiply by 2 as I plot only positive values.
As my initial signal amplitude is around 64 dB, I get very low amplitude values, less than 1.
Please see my code.
import numpy as np
import matplotlib.pyplot as plt
from scipy.fftpack import fft, fftfreq

Signal = well.ReadWellData(SignalNDB)
y, x = Signal.GetData(numpy=np)
N = y.size                                # Number of sample points (1024)
T = 1/120000                              # Sampling interval (sec)
x = np.linspace(0.0, N*T, N)
yf = abs(fft(y))                          # Perform FFT, returning the magnitude
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)  # Calculate frequency bins
freqs = fftfreq(N, T)
ax1=plt.subplot(211)
ax1.plot(x,y)
plt.grid()
ax2=plt.subplot(212)
yf2 = 2/N * np.abs(yf[0:N//2])            # Normalize magnitude by number of bins and multiply by 2
ax2.semilogy(xf, yf2) # freq vs ampl - positive only freq
plt.grid()
ax1.set_title(["check"])
#ax2.set_xlim([0,4000])
plt.show()
Please see my plot:
EDIT:
Finally, my signal amplitude after the FFT is exactly what I expected. Here is what I did:
First I did the FFT for the signal in mV. Then I converted the results to dB using the formula 20*log10(mV) + 60, where 60 dB corresponds to the 1 mV reference provided by the tool manufacturer. The dB values are therefore presented on a linear scale in the bottom plot rather than on a log scale.
Please see the resulting plot below (Results).
Looks good to me. The FFT, or the Fourier transform in general, gives you the representation of your time-domain signal in the frequency domain.
By taking a look at your signal, you have two main components: something oscillating at around 500 Hz (period of 0.002 s) and an offset (which corresponds to freq = 0 Hz). Looking at the result of the FFT, we can see mainly two peaks: one at 0 Hz and the other could be at 500 Hz (difficult to be sure without zooming in on the signal).
The only relation between the intensities is given by Parseval's theorem; having a signal oscillating around 64 dB doesn't mean its FFT should have values close to 64 dB. I suggest you take a look here.
The power spectral density St of a signal u may be computed as the product of the FFT of the signal, u_fft, with its complex conjugate u_fft_c. In Python, this would be written as:
import numpy as np
u = ...  # some NumPy array containing the signal
u_fft = np.fft.rfft(u-np.nanmean(u))
St = np.multiply(u_fft, np.conj(u_fft))
However, the FFT definition in Numpy requires the multiplication of the result with a factor of 1/N, where N=u.size in order to have an energetically consistent transformation between u and its FFT. This leads to the corrected definition of the PSD using numpy's fft:
St = np.multiply(u_fft, np.conj(u_fft))
St = np.divide(St, u.size)
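As a quick sanity check of that 1/N factor, here is a minimal sketch on a made-up sine signal (the full two-sided fft is used to keep the bookkeeping simple, unlike the rfft in the snippet above):

import numpy as np

u = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000, endpoint=False))
u_fft = np.fft.fft(u)
St = np.abs(u_fft) ** 2 / u.size                 # |U|^2 / N
print(np.sum(np.abs(u) ** 2), np.sum(St))        # both ~500 for this signal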
On the other hand, Scipy's function signal.welch computes the PSD directly from input u:
from scipy.signal import welch

freqs_st, St_welch = welch(u - np.nanmean(u),
                           return_onesided=True, nperseg=seg_size, axis=0)
The resulting PSD, St_welch, is obtained by performing several FFTs in segments of the array u with size seg_size. Thus, my question is:
Should St_welch be multiplied by a factor of 1/seg_size to give an energetically consistent PSD? Should it be multiplied by 1/N? Should it not be multiplied at all?
P.S.: Comparing the two by performing both operations on a signal is not straightforward, since the Welch method also smooths the signal and changes how it appears in the frequency domain.
Information on the necessity of the prefactor when using numpy.fft: Journal article on the matter.
The definition of the parameter scaling of scipy.signal.welch suggests that the appropriate scaling is performed by the function:
scaling : { ‘density’, ‘spectrum’ }, optional
Selects between computing the power spectral density (‘density’) where Pxx has units of V^2/Hz and computing the power spectrum (‘spectrum’) where Pxx has units of V^2, if x is measured in V and fs is measured in Hz. Defaults to ‘density’
The correct sampling frequency is to be provided as the argument fs to retrieve the correct frequencies and an accurate power spectral density.
To recover a power spectrum similar to that computed using np.multiply(u_fft, np.conj(u_fft)), the nperseg and window arguments must be set to the length of the FFT frame and 'boxcar' (equivalent to no window at all), respectively. The fact that scipy.signal.welch applies the correct scaling can be checked by testing a sine wave:
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt

def abs2(x):
    return x.real**2 + x.imag**2

if __name__ == '__main__':
    framelength = 1.0
    N = 1000
    x = np.linspace(0, framelength, N, endpoint=False)
    y = np.sin(44*2*np.pi*x)
    #y = y - np.mean(y)
    ffty = np.fft.fft(y)
    # power spectrum, after real-to-complex transform (factor 2)
    scale = 2.0/(len(y)*len(y))
    power = scale*abs2(ffty)
    freq = np.fft.fftfreq(len(y), framelength/len(y))
    # power spectrum via scipy welch; 'boxcar' means no window, nperseg=len(y) so the fft is computed on the whole signal
    freq2, power2 = scipy.signal.welch(y, fs=len(y)/framelength, window='boxcar',
                                       nperseg=len(y), scaling='spectrum',
                                       axis=-1, average='mean')
    for i in range(len(freq2)):
        print(i, freq2[i], power2[i], freq[i], power[i])
    print(np.sum(power2))
    plt.figure()
    plt.plot(freq[0:len(y)//2+1], power[0:len(y)//2+1], label='np.fft.fft()')
    plt.plot(freq2, power2, label='scipy.signal.welch()')
    plt.legend()
    plt.xlim(0, np.max(freq[0:len(y)//2+1]))
    plt.show()
For a real to complex transform, the correct scaling of np.multiply(u_fft, np.conj(u_fft)) is 2./(u.size*u.size). Indeed, the scaling of u_fft is 1./u.size. Furthermore, real to complex transforms only report half of the frequencies, because the magnitude of the bin N-k would be the complex conjugate of that of the bin k. The energy of that bin is therefore equal to that of bin k and it is to be summed to that of bin k. Hence the factor 2. For the tested sine wave signal of amplitude 1, the energy is reported as 0.5: it is indeed, the average of a squared sine wave of amplitude 1.
Windowing is useful if the length of the frame is not a multiple of the period of the signal or if the signal is not periodic. Using smaller FFT frames is useful if the signal is made of damped waves: the signal can be considered periodic over some characteristic time, so choosing an FFT frame smaller than that characteristic time but larger than the period of the waves seems judicious.
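For instance, a minimal sketch (parameters are purely illustrative) of using welch with a Hann window and one-second segments on a noisy sine:

import numpy as np
import scipy.signal

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
y = np.sin(2 * np.pi * 44 * t) + 0.5 * np.random.randn(t.size)   # 44 Hz tone plus noise

# Hann window and 1-second segments; 'spectrum' scaling reports power per segment
freq, power = scipy.signal.welch(y, fs=fs, window='hann',
                                 nperseg=int(fs), scaling='spectrum')
print(freq[np.argmax(power)])                    # the peak should sit near 44 Hz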
I am doing some work comparing the interpolated FFT of the concentrations of some gases over a period, which is unevenly sampled, with the Lomb-Scargle periodogram of the same data. I am using scipy's fft function to calculate the Fourier transform and then squaring the modulus of this to give what I believe to be the power spectral density, in units of parts per billion (ppb) squared.
I can get the Lomb-Scargle plot to match almost the exact pattern of the FFT, but never the same scale of magnitude; the FFT power spectral density is always higher, even though I thought the Lomb-Scargle power was a power spectral density. The Lomb code I am using, http://www.astropython.org/snippet/2010/9/Fast-Lomb-Scargle-algorithm, normalises the dataset by subtracting the average and dividing by twice the variance of the data, so I have normalised the FFT data in the same manner, but the magnitudes still do not match.
Therefore I did some more research and found that the normalised Lomb-Scargle power can be unitless, and therefore I cannot make the plots match. This leads me to two questions:
What units (if any) is the power spectral density of a normalised Lomb-Scargle periodogram in?
How would I proceed to match my fft plot with my lomb-scargle plot, in terms of magnitude and pattern?
Thank you.
The squared modulus of the Fourier transform of a series is defined as the energy spectral density (ESD). You need to divide the ESD by the length of the series to convert to an estimate of power spectral density (PSD).
Units
The units of a PSD are [units]**2/[frequency] where [units] represents the units of your original series.
Normalization
To check for proper normalization, one can numerically integrate the PSD of a white noise series (with known variance). If the integrated spectrum equals the variance of the series, the normalization is correct. Being off by a factor of 2 (too low) is not necessarily wrong, though: it may indicate the PSD is normalized as double-sided; in that case, just multiply by 2 and you have a properly normalized, single-sided PSD.
Using numpy, the randn function generates pseudo-random numbers that are Gaussian distributed. For example
10 * np.random.randn(1, 100)
produces a 1-by-100 array with mean=0 and variance=100. If the sampling frequency is, say, 1 Hz, the single-sided PSD will theoretically be flat at 200 units**2/Hz over [0, 0.5] Hz; the integrated spectrum would thus be 100, equaling the variance of the series.
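A minimal sketch of that check, here using scipy.signal.welch with 'density' scaling on white noise of variance 100 (the exact numbers will fluctuate with the random draw):

import numpy as np
from scipy.signal import welch

fs = 1.0
x = 10 * np.random.randn(100000)                 # mean 0, variance ~100
freq, psd = welch(x, fs=fs, scaling='density')   # single-sided PSD, ~200 units**2/Hz
print(np.var(x), np.trapz(psd, freq))            # both should be close to 100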
Update
I modified the example included in the python code you linked to demonstrate the normalization for a normally distributed series of length 20, with variance 1, and sampling frequency 10:
import numpy
import lomb
numpy.random.seed(999)
nd = 20
fs = 10
x = numpy.arange(nd)
y = numpy.random.randn(nd)
fx, fy, nout, jmax, prob = lomb.fasper(x, y, 1., fs)
fNy = fx[-1]
fy = fy/fs
Si = numpy.mean(fy)*fNy
print(fNy, Si, Si*2)
This gives, for me:
5.26315789474 0.482185882163 0.964371764327
which shows you a few things:
The "Nyquist" frequency asked for is actually the sampling frequency.
The result needs to be divided by the sampling frequency.
The output is normalized for a double-sided PSD, so multiplying by 2 makes the integrated spectrum nearly 1.
In the time since this question was asked and answered, the AstroPy project has gained a Lomb-Scargle method, and this question is addressed in the documentation: http://docs.astropy.org/en/stable/stats/lombscargle.html#psd-normalization-unnormalized
In brief, you can compute a Fourier periodogram and compare it to the astropy Lomb-Scargle periodogram as follows
import numpy as np
from astropy.stats import LombScargle
def fourier_periodogram(t, y):
    N = len(t)
    frequency = np.fft.fftfreq(N, t[1] - t[0])
    y_fft = np.fft.fft(y)
    positive = (frequency > 0)
    return frequency[positive], (1. / N) * abs(y_fft[positive]) ** 2
t = np.arange(100)
y = np.random.randn(100)
frequency, PSD_fourier = fourier_periodogram(t, y)
PSD_LS = LombScargle(t, y).power(frequency, normalization='psd')
np.allclose(PSD_fourier, PSD_LS)
# True
Since AstroPy is a common tool used in astronomy, I thought this might be more useful than an answer based on the code snippet mentioned above.